diff --git a/README.md b/README.md
index 7da79263d0e701a53a0d662d245c4fc3db39b287..4a754589e83689adcdd6934e94a934da6496081e 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,65 @@
----
-license: cc-by-4.0
----
+# LearningPaper24 Dataset
+
+This dataset contains video recordings and metadata from ICLR and NeurIPS 2024 conference talks. It includes both poster and oral presentations, along with their associated metadata such as titles, abstracts, and keywords.
+
+## Dataset Structure
+
+```
+learningpaper24/
+├── README.md
+├── metadata/
+│   └── catalog.jsonl
+└── video/
+    ├── {openreview_id}_{slideslive_id}.mp4
+    └── ...
+```
+
+## Data Format
+
+### Catalog (metadata/catalog.jsonl)
+The catalog is a JSON Lines (JSONL) file with one record per talk. Each record contains the following fields:
+- `video_file`: Filename of the video recording, in the format `{openreview_id}_{slideslive_id}.mp4`
+- `openreview_id`: Unique identifier from OpenReview
+- `slideslive_id`: Video identifier from SlidesLive
+- `venue`: Conference venue (e.g., "iclr2024")
+- `title`: Paper title
+- `status`: Presentation type (e.g., "Poster", "Oral")
+- `keywords`: Research keywords
+- `tldr`: Short summary
+- `abstract`: Full paper abstract
+- `primary_area`: Main research area
+- `site`: Link to the talk's page on the conference site
+
+### Videos
+Videos are stored in the `video` directory, with filenames following the format `{openreview_id}_{slideslive_id}.mp4`.
+
+## Purpose
+
+This dataset can be used for:
+- Video understanding and summarization
+- Natural language processing over paper titles, abstracts, and keywords
+- Video-text alignment studies
+
+## License
+
+This dataset is released under the [Creative Commons Attribution-NonCommercial 4.0 International License](https://creativecommons.org/licenses/by-nc/4.0/).
\ No newline at end of file
diff --git a/metadata/catalog.jsonl b/metadata/catalog.jsonl
new file mode 100644
index 0000000000000000000000000000000000000000..e172d99656d7d9dd1d514ec900e774131d6f190c
--- /dev/null
+++ b/metadata/catalog.jsonl
@@ -0,0 +1,2287 @@
+{"video_file": "0JsRZEGZ7L_39017996.mp4", "openreview_id": "0JsRZEGZ7L", "slideslive_id": 39017996, "venue": "iclr2024", "title": "From Latent Graph to Latent Topology Inference: Differentiable Cell Complex Module", "status": "Poster", "keywords": "Topological Deep Learning;Geometric Deep Learning;Latent Topology Inference;Latent Graph Inference;Cell Complexes", "tldr": "We study Latent Topology Inference (LTI) for learning higher-order cell complexes (with sparse and not regular topology) describing multi-way interactions between data points.", "abstract": "Latent Graph Inference (LGI) relaxed the reliance of Graph Neural Networks (GNNs) on a given graph topology by dynamically learning it. However, most of LGI methods assume to have a (noisy, incomplete, improvable, ...) input graph to rewire and can solely learn regular graph topologies. In the wake of the success of Topological Deep Learning (TDL), we study Latent Topology Inference (LTI) for learning higher-order cell complexes (with sparse and not regular topology) describing multi-way interactions between data points. To this aim, we introduce the Differentiable Cell Complex Module (DCM), a novel learnable function that computes cell probabilities in the complex to improve the downstream task. We show how to integrate DCM with cell complex message-passing networks layers and train it in an end-to-end fashion, thanks to a two-step inference procedure that avoids an exhaustive search across all possible cells in the input, thus maintaining scalability.
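To make the catalog format described in the README concrete, here is a minimal loading sketch in Python. It is only an illustration, not part of the dataset: it assumes the directory layout shown above, that the dataset root sits at a local path `learningpaper24/`, and that each catalog record occupies a single line of valid JSON.

```python
import json
from pathlib import Path

# Hypothetical local path to the dataset root; adjust as needed.
DATASET_ROOT = Path("learningpaper24")

# Read the catalog: one JSON object per line (JSON Lines).
records = []
with open(DATASET_ROOT / "metadata" / "catalog.jsonl", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            records.append(json.loads(line))

# Each record's `video_file` follows {openreview_id}_{slideslive_id}.mp4
# and lives in the `video` directory.
for talk in records[:5]:
    video_path = DATASET_ROOT / "video" / talk["video_file"]
    status = "found" if video_path.exists() else "missing"
    print(f'{talk["venue"]} | {talk["status"]:<6} | {talk["title"]} ({status})')

# Example filter: oral presentations from ICLR 2024.
orals = [t for t in records if t["status"] == "Oral" and t["venue"] == "iclr2024"]
print(f"{len(orals)} oral talks out of {len(records)} catalog records")
```

The sketch stays on the standard library so it runs without extra dependencies; a dataframe library could be swapped in for larger-scale filtering over the roughly 2,287 catalog records.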
Our model is tested on several homophilic and heterophilic graph datasets and it is shown to outperform other state-of-the-art techniques, offering significant improvements especially in cases where an input graph is not provided.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/19619"} +{"video_file": "0akLDTFR9x_39018886.mp4", "openreview_id": "0akLDTFR9x", "slideslive_id": 39018886, "venue": "iclr2024", "title": "Contrastive Difference Predictive Coding", "status": "Poster", "keywords": "contrastive learning;reinforcement learning;goal-reaching;goal-conditioned RL;temporal difference", "tldr": "a temporal difference version of contrastive predictive coding", "abstract": "Predicting and reasoning about the future lie at the heart of many time-series questions. For example, goal-conditioned reinforcement learning can be viewed as learning representations to predict which states are likely to be visited in the future. While prior methods have used contrastive predictive coding to model time series data, learning representations that encode long-term dependencies usually requires large amounts of data. In this paper, we introduce a temporal difference version of contrastive predictive coding that stitches together pieces of different time series data to decrease the amount of data required to learn predictions of future events. We apply this representation learning method to derive an off-policy algorithm for goal-conditioned RL. Experiments demonstrate that, compared with prior RL methods, ours achieves\n2\n\u00d7\nmedian improvement in success rates and can better cope with stochastic environments. In tabular settings, we show that our method is about\n20\n\u00d7\nmore sample efficient than the successor representation and\n1500\n\u00d7\nmore sample efficient than the standard (Monte Carlo) version of contrastive predictive coding.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19613"} +{"video_file": "0jsfesDZDq_39018608.mp4", "openreview_id": "0jsfesDZDq", "slideslive_id": 39018608, "venue": "iclr2024", "title": "Sparse Spiking Neural Network: Exploiting Heterogeneity in Timescales for Pruning Recurrent SNN", "status": "Poster", "keywords": "spiking neural network;SNN;network pruning;stability;neuromorphic;leaky integrate and fire;STDP;sparsification;task-agnostic pruning;timescale optimization", "tldr": "A task-agnostic pruning method that exploits the diversity in timescales for heterogeneous RSNNs and gives small, stable pruned networks", "abstract": "Recurrent Spiking Neural Networks (RSNNs) have emerged as a computationally efficient and brain-inspired machine learning model. The design of sparse RSNNs with fewer neurons and synapses helps reduce the computational complexity of RSNNs. Traditionally, sparse SNNs are obtained by first training a dense and complex SNN for a target task and, next, eliminating neurons with low activity (activity-based pruning) while maintaining task performance. In contrast, this paper presents a task-agnostic methodology for designing sparse RSNNs by pruning an untrained (arbitrarily initialized) large model. We introduce a novel Lyapunov Noise Pruning (LNP) algorithm that uses graph sparsification methods and utilizes Lyapunov exponents to design a stable sparse RSNN from an untrained RSNN. We show that the LNP can leverage diversity in neuronal timescales to design a sparse Heterogeneous RSNN (HRSNN). 
Further, we show that the same sparse HRSNN model can be trained for different tasks, such as image classification and time-series prediction. The experimental results show that, in spite of being task-agnostic, LNP increases computational efficiency (fewer neurons and synapses) and prediction performance of RSNNs compared to traditional activity-based pruning of trained dense models.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19606"} +{"video_file": "0t1O8ziRZp_39018974.mp4", "openreview_id": "0t1O8ziRZp", "slideslive_id": 39018974, "venue": "iclr2024", "title": "Retrieval-Guided Reinforcement Learning for Boolean Circuit Minimization", "status": "Poster", "keywords": "Electronics Design Automation (EDA);Logic Synthesis;Reinforcement Learning;Hardware design;Circuits", "tldr": "We propose Retrieval Guided RL for logic synthesis to generalize for diverse hardware. Pre-trained agents, combined with MCTS fails on novel designs. We adjusts agent recommendations using nearest neighbor similarity scores for improved synthesis.", "abstract": "Logic synthesis, a pivotal stage in chip design, entails optimizing chip specifications encoded in hardware description languages like Verilog into highly efficient implementations using Boolean logic gates. The process involves a sequential application of logic minimization heuristics (``synthesis recipe\"), with their arrangement significantly impacting crucial metrics such as area and delay. Addressing the challenge posed by the broad spectrum of hardware design complexities \u2014 from variations of past designs (e.g., adders and multipliers) to entirely novel configurations (e.g., innovative processor instructions) \u2014 requires a nuanced 'synthesis recipe' guided by human expertise and intuition. This study conducts a thorough examination of learning and search techniques for logic synthesis, unearthing a surprising revelation: pre-trained agents, when confronted with entirely novel designs, may veer off course, detrimentally affecting the search trajectory. We present ABC-RL, a meticulously tuned\n\u03b1\nparameter that adeptly adjusts recommendations from pre-trained agents during the search process. Computed based on similarity scores through nearest neighbor retrieval from the training dataset, ABC-RL yields superior synthesis recipes tailored for a wide array of hardware designs. Our findings showcase substantial enhancements in the Quality of Result (QoR) of synthesized circuits, boasting improvements of up to 24.8% compared to state-of-the-art techniques. 
Furthermore, ABC-RL achieves an impressive up to 9x reduction in runtime (iso-QoR) when compared to current state-of-the-art methodologies.", "primary_area": "infrastructure, software libraries, hardware, etc.", "site": "https://iclr.cc/virtual/2024/poster/19604"} +{"video_file": "0uI5415ry7_39018607.mp4", "openreview_id": "0uI5415ry7", "slideslive_id": 39018607, "venue": "iclr2024", "title": "Linear attention is (maybe) all you need (to understand Transformer optimization)", "status": "Poster", "keywords": "Transformer;optimization;adam;clipping;heavy-tailed noise;directional smoothness", "tldr": "Shallow linearized transformer exhibits same training difficulties as a real transformer, hence opens up the exciting prospect of having an analyzable proxy to reality.", "abstract": "Transformer training is notoriously difficult, requiring a careful design of optimizers and use of various heuristics. We make progress towards understanding the subtleties of training Transformers by carefully studying a simple yet canonical linearized shallow Transformer model. Specifically, we train linear Transformers to solve regression tasks, inspired by J. von Oswald et al. (ICML 2023), and K. Ahn et al. (NeurIPS 2023). Most importantly, we observe that our proposed linearized models can reproduce several prominent aspects of Transformer training dynamics. Consequently, the results obtained in this paper suggest that a simple linearized Transformer model could actually be a valuable, realistic abstraction for understanding Transformer optimization.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/19602"} +{"video_file": "17pVDnpwwl_39018606.mp4", "openreview_id": "17pVDnpwwl", "slideslive_id": 39018606, "venue": "iclr2024", "title": "Tensor Programs VI: Feature Learning in Infinite Depth Neural Networks", "status": "Poster", "keywords": "Tensor Programs;mup;deep learning;optimization;optimal hyperparameter transfer", "tldr": "We introduce Depth-\n\u03bc\nP, a principled approach for depth scaling, allowing for the training of arbitrarily deep networks while maximizing feature learning and feature diversity.", "abstract": "Empirical studies have consistently demonstrated that increasing the size of neural networks often yields superior performance in practical applications. However, there is a lack of consensus regarding the appropriate scaling strategy, particularly when it comes to increasing the depth of neural networks. In practice, excessively large depths can lead to model performance degradation. In this paper, we introduce Depth-\n\u03bc\nP, a principled approach for depth scaling, allowing for the training of arbitrarily deep architectures while maximizing feature learning and diversity among nearby layers. Our method involves dividing the contribution of each residual block and the parameter update by the square root of the depth. Through the use of Tensor Programs, we rigorously establish the existence of a limit for infinitely deep neural networks under the proposed scaling scheme. This scaling strategy ensures more stable training for deep neural networks and guarantees the transferability of hyperparameters from shallow to deep models. 
To substantiate the efficacy of our scaling method, we conduct empirical validation on neural networks with depths up to\n2\n10\n.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/19599"} +{"video_file": "1CK45cqkEh_39017547.mp4", "openreview_id": "1CK45cqkEh", "slideslive_id": 39017547, "venue": "iclr2024", "title": "Unsupervised Order Learning", "status": "Poster", "keywords": "order learning;unsupervised clustering", "tldr": "A deep clustering algorithm for ordered data", "abstract": "A novel clustering algorithm for orderable data, called unsupervised order learning (UOL), is proposed in this paper. First, we develop the ordered\nk\n-means to group objects into ordered clusters by reducing the deviation of an object from consecutive clusters. Then, we train a network to construct an embedding space, in which objects are sorted compactly along a chain of line segments, determined by the cluster centroids. We alternate the clustering and the network training until convergence. Moreover, we perform unsupervised rank estimation via a simple nearest neighbor search in the embedding space. Extensive experiments on various orderable datasets demonstrate that UOL provides reliable ordered clustering results and decent rank estimation performances with no supervision. The source codes are available at https://github.com/seon92/UOL.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19596"} +{"video_file": "1NHgmKqOzZ_39018605.mp4", "openreview_id": "1NHgmKqOzZ", "slideslive_id": 39018605, "venue": "iclr2024", "title": "Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality", "status": "Poster", "keywords": "dataset distillation;dataset condensation", "tldr": "We propose a multi-stage dataset distillation framework to improve the quality of synthetic samples.", "abstract": "Dataset distillation aims to minimize the time and memory needed for training deep networks on large datasets, by creating a small set of synthetic images that has a similar generalization performance to that of the full dataset. However, current dataset distillation techniques fall short, showing a notable performance gap compared to training on the original data. In this work, we are the first to argue that the use of only one synthetic subset for distillation may not yield optimal generalization performance. This is because the training dynamics of deep networks drastically changes during training. Therefore, multiple synthetic subsets are required to capture the dynamics of training in different stages. To address this issue, we propose Progressive Dataset Distillation (PDD). PDD synthesizes multiple small sets of synthetic images, each conditioned on the previous sets, and trains the model on the cumulative union of these subsets without requiring additional training time. Our extensive experiments show that PDD can effectively improve the performance of existing dataset distillation methods by up to 4.3%. In addition, our method for the first time enables generating considerably larger synthetic datasets. 
Our codes are available at https://github.com/VITA-Group/ProgressiveDD.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/19592"} +{"video_file": "1bAUywYJTU_39018982.mp4", "openreview_id": "1bAUywYJTU", "slideslive_id": 39018982, "venue": "iclr2024", "title": "DreamTime: An Improved Optimization Strategy for Diffusion-Guided 3D Generation", "status": "Poster", "keywords": "Score Distillation;3D Content Creation;Diffusion Model", "tldr": "We analyze the drawbacks of random timestep sampling in score distillation and propose a non-increasing timestep sampling strategy.", "abstract": "Text-to-image diffusion models pre-trained on billions of image-text pairs have recently enabled 3D content creation by optimizing a randomly initialized differentiable 3D representation with score distillation. However, the optimization process suffers slow convergence and the resultant 3D models often exhibit two limitations: (a) quality concerns such as missing attributes and distorted shape and texture; (b) extremely low diversity comparing to text-guided image synthesis. In this paper, we show that the conflict between the 3D optimization process and uniform timestep sampling in score distillation is the main reason for these limitations. To resolve this conflict, we propose to prioritize timestep sampling with monotonically non-increasing functions, which aligns the 3D optimization process with the sampling process of diffusion model. Extensive experiments show that our simple redesign significantly improves 3D content creation with faster convergence, better quality and diversity.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19581"} +{"video_file": "1hsVvgW0rU_39018600.mp4", "openreview_id": "1hsVvgW0rU", "slideslive_id": 39018600, "venue": "iclr2024", "title": "Sample-Efficient Learning of POMDPs with Multiple Observations In Hindsight", "status": "Poster", "keywords": "reinforcement learning theory;POMDPs;partially observable reinforcement learning", "tldr": "We propose a new enhanced feedback model for learning POMDPs, and identify new broad classes of POMDPs that are sample-efficiently learnable under this feedback model.", "abstract": "This paper studies the sample-efficiency of learning in Partially Observable Markov Decision Processes (POMDPs), a challenging problem in reinforcement learning that is known to be exponentially hard in the worst-case. Motivated by real-world settings such as loading in game playing, we propose an enhanced feedback model called ``multiple observations in hindsight'', where after each episode of interaction with the POMDP, the learner may collect multiple additional observations emitted from the encountered latent states, but may not observe the latent states themselves. We show that sample-efficient learning under this feedback model is possible for two new subclasses of POMDPs: \\emph{multi-observation revealing POMDPs} and \\emph{distinguishable POMDPs}. Both subclasses generalize and substantially relax \\emph{revealing POMDPs}---a widely studied subclass for which sample-efficient learning is possible under standard trajectory feedback. 
Notably, distinguishable POMDPs only require the emission distributions from different latent states to be \\emph{different} instead of \\emph{linearly independent} as required in revealing POMDPs.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19578"} +{"video_file": "1jbh2e0b2K_39018516.mp4", "openreview_id": "1jbh2e0b2K", "slideslive_id": 39018516, "venue": "iclr2024", "title": "Towards Few-Shot Adaptation of Foundation Models via Multitask Finetuning", "status": "Poster", "keywords": "Foundation model;Multitask finetuning;Few-Shot learning", "tldr": "We delve into theoretical framework of multitask finetuning for foundation models on new tasks with limited labels. Our introduced task selection algorithm notably boosts model performance, backed by extensive empirical evidence.", "abstract": "Foundation models have emerged as a powerful tool for many AI problems. Despite the tremendous success of foundation models, effective adaptation to new tasks, particularly those with limited labels, remains an open question and lacks theoretical understanding. An emerging solution with recent success in vision and NLP involves finetuning a foundation model on a selection of relevant tasks, before its adaptation to a target task with limited labeled samples. In this paper, we study the theoretical justification of this multitask finetuning approach. Our theoretical analysis reveals that with a diverse set of related tasks, this multitask finetuning leads to reduced error in the target task, in comparison to directly adapting the same pretrained model. We quantify the relationship between finetuning tasks and target tasks by diversity and consistency metrics, and further propose a practical task selection algorithm. We substantiate our theoretical claims with extensive empirical evidence. Further, we present results affirming our task selection algorithm adeptly chooses related finetuning tasks, providing advantages to the model performance on target tasks. We believe our study shed new light on the effective adaptation of foundation models to new tasks that lack abundant labels. Our code is available at https://github.com/OliverXUZY/Foudation-Model_Multitask.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/19576"} +{"video_file": "1mNFsbvo2P_39018752.mp4", "openreview_id": "1mNFsbvo2P", "slideslive_id": 39018752, "venue": "iclr2024", "title": "Domain constraints improve risk prediction when outcome data is missing", "status": "Poster", "keywords": "Bayesian model;health;selective labels;distribution shift;domain constraint;biomedicine", "tldr": "We propose the use of domain constraints to improve disease risk prediction in the presence of missing outcome data for the historically untested population", "abstract": "Machine learning models are often trained to predict the outcome resulting from a human decision. For example, if a doctor decides to test a patient for disease, will the patient test positive? A challenge is that historical decision-making determines whether the outcome is observed: we only observe test outcomes for patients doctors historically tested. Untested patients, for whom outcomes are unobserved, may differ from tested patients along observed and unobserved dimensions. We propose a Bayesian model class which captures this setting. The purpose of the model is to accurately estimate risk for both tested and untested patients. 
Estimating this model is challenging due to the wide range of possibilities for untested patients. To address this, we propose two domain constraints which are plausible in health settings: a prevalence constraint, where the overall disease prevalence is known, and an expertise constraint, where the human decision-maker deviates from purely risk-based decision-making only along a constrained feature set. We show theoretically and on synthetic data that domain constraints improve parameter inference. We apply our model to a case study of cancer risk prediction, showing that the model's inferred risk predicts cancer diagnoses, its inferred testing policy captures known public health policies, and it can identify suboptimalities in test allocation. Though our case study is in healthcare, our analysis reveals a general class of domain constraints which can improve model estimation in many settings.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/19574"} +{"video_file": "1op5YGZu8X_39018512.mp4", "openreview_id": "1op5YGZu8X", "slideslive_id": 39018512, "venue": "iclr2024", "title": "Theoretical Analysis of Robust Overfitting for Wide DNNs: An NTK Approach", "status": "Poster", "keywords": "NTK;neural tangent kernels;adversarial training;robust overfitting", "tldr": "We present a theoretical explanation of robust overfitting for DNNs and design the first adversarial training algorithm for infinite-width DNNs.", "abstract": "Adversarial training (AT) is a canonical method for enhancing the robustness of deep neural networks (DNNs). However, recent studies empirically demonstrated that it suffers from robust overfitting, i.e., a long time AT can be detrimental to the robustness of DNNs. This paper presents a theoretical explanation of robust overfitting for DNNs. Specifically, we non-trivially extend the neural tangent kernel (NTK) theory to AT and prove that an adversarially trained wide DNN can be well approximated by a linearized DNN. Moreover, for squared loss, closed-form AT dynamics for the linearized DNN can be derived, which reveals a new AT degeneration phenomenon: a long-term AT will result in a wide DNN degenerates to that obtained without AT and thus cause robust overfitting. Based on our theoretical results, we further design a method namely Adv-NTK, the first AT algorithm for infinite-width DNNs. Experiments on real-world datasets show that Adv-NTK can help infinite-width DNNs enhance comparable robustness to that of their finite-width counterparts, which in turn justifies our theoretical findings. The code is available at https://github.com/fshp971/adv-ntk.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/19570"} +{"video_file": "1vmSEVL19f_39018510.mp4", "openreview_id": "1vmSEVL19f", "slideslive_id": 39018510, "venue": "iclr2024", "title": "Directly Fine-Tuning Diffusion Models on Differentiable Rewards", "status": "Poster", "keywords": "diffusion models;preference-based learning", "tldr": "We present methods that efficiently fine-tune diffusion models on reward functions by backpropagating through the reward.", "abstract": "We present Direct Reward Fine-Tuning (DRaFT), a simple and effective method for fine-tuning diffusion models to maximize differentiable reward functions, such as scores from human preference models. 
We first show that it is possible to backpropagate the reward function gradient through the full sampling procedure, and that doing so achieves strong performance on a variety of rewards, outperforming reinforcement learning-based approaches. We then propose more efficient variants of DRaFT: DRaFT-K, which truncates backpropagation to only the last K steps of sampling, and DRaFT-LV, which obtains lower-variance gradient estimates for the case when K=1. We show that our methods work well for a variety of reward functions and can be used to substantially improve the aesthetic quality of images generated by Stable Diffusion 1.4. Finally, we draw connections between our approach and prior work, providing a unifying perspective on the design space of gradient-based fine-tuning algorithms.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19564"} +{"video_file": "22OTbutug9_39018903.mp4", "openreview_id": "22OTbutug9", "slideslive_id": 39018903, "venue": "iclr2024", "title": "RA-DIT: Retrieval-Augmented Dual Instruction Tuning", "status": "Poster", "keywords": "retrieval-augmented language model;large language model;knowledge intensive NLP", "tldr": "We propose a fine-tuning approach that effectively retrofits any LLM with retrieval capabilities.", "abstract": "Retrieval-augmented language models (RALMs) improve performance by accessing long-tail and up-to-date knowledge from external data stores, but are challenging to build. Existing approaches require either expensive retrieval-specific modifications to LM pre-training or use post-hoc integration of the data store that leads to suboptimal performance. We introduce Retrieval-Augmented Dual Instruction Tuning (RA-DIT), a lightweight fine-tuning methodology that provides a third option by retrofitting any LLM with retrieval capabilities. Our approach operates in two distinct fine-tuning steps: (1) one updates a pre-trained LM to better use retrieved information, while (2) the other updates the retriever to return more relevant results, as preferred by the LM. By fine-tuning over tasks that require both knowledge utilization and contextual awareness, we demonstrate that each stage yields significant performance improvements, and using both leads to additional gains. Our best model, RA-DIT 65B, achieves state-of-the-art performance across a range of knowledge-intensive zero- and few-shot learning benchmarks, significantly outperforming existing in-context RALM approaches by up to +8.9% in 0-shot setting and +1.4% in 5-shot setting on average.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19562"} +{"video_file": "2DbVeuoa6a_39018688.mp4", "openreview_id": "2DbVeuoa6a", "slideslive_id": 39018688, "venue": "iclr2024", "title": "Neural Spectral Methods: Self-supervised learning in the spectral domain", "status": "Poster", "keywords": "Machine learning for PDEs;spectral methods;neural network differentiation;spectral loss;PDEs;neural operators", "tldr": "We present Neural Spectral Methods to solve parametric PDEs in the spectral domain.", "abstract": "We present Neural Spectral Methods, a technique to solve parametric Partial Differential Equations (PDEs), grounded in classical spectral methods. Our method uses orthogonal bases to learn PDE solutions as mappings between spectral coefficients, instantiating a spectral-based neural operator. 
In contrast to current machine learning approaches which enforce PDE constraints by minimizing the numerical quadrature of the residuals in the spatiotemporal domain, we leverage Parseval's identity and introduce a new training strategy through a spectral loss. Our spectral loss enables more efficient differentiation through the neural network, and substantially reduces training complexity. At inference time, the computational cost of our method remains constant, regardless of the spatiotemporal resolution of the domain. Our experimental results demonstrate that our method significantly outperforms previous machine learning approaches in terms of speed and accuracy by one to two orders of magnitude on multiple different problems, including reaction-diffusion, and forced and unforced Navier-Stokes equations. When compared to numerical solvers of the same accuracy, our method demonstrates a\n10\n\u00d7\nincrease in performance speed. Our source code is publicly available at https://github.com/ASK-Berkeley/Neural-Spectral-Methods.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/19557"} +{"video_file": "2Rwq6c3tvr_39017046.mp4", "openreview_id": "2Rwq6c3tvr", "slideslive_id": 39017046, "venue": "iclr2024", "title": "Time Travel in LLMs: Tracing Data Contamination in Large Language Models", "status": "Spotlight", "keywords": "Data Contamination;Large Language Models (LLMs);Guided Instruction;Memorization", "tldr": "We propose an effective method for detecting data contamination\u2014presence of test data from downstream tasks\u2014in the pre-training data of large language models (LLMs), achieving between 92% and 100% accuracy when validated against expert evaluations.", "abstract": "Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ \"guided instruction:\" a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a \"general instruction\" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting if an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. 
Further, our findings indicate that GPT-4 is contaminated with AG News, WNLI, and XSum datasets.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19550"} +{"video_file": "2UnCj3jeao_39019056.mp4", "openreview_id": "2UnCj3jeao", "slideslive_id": 39019056, "venue": "iclr2024", "title": "Unbalancedness in Neural Monge Maps Improves Unpaired Domain Translation", "status": "Poster", "keywords": "optimal transport;domain translation;image translation;flow matching", "tldr": "We propose a theoretically grounded method to incorporate unbalancedness into any Monge map estimator and show how unbalancedness yields enhanced results across three distinct tasks employing three estimators.", "abstract": "In optimal transport (OT), a Monge map is known as a mapping that transports a source distribution to a target distribution in the most cost-efficient way. Recently, multiple neural estimators for Monge maps have been developed and applied in diverse unpaired domain translation tasks, e.g. in single-cell biology and computer vision. However, the classic OT framework enforces mass conservation, which makes it prone to outliers and limits its applicability in real-world scenarios. The latter can be particularly harmful in OT domain translation tasks, where the relative position of a sample within a distribution is explicitly taken into account. While unbalanced OT tackles this challenge in the discrete setting, its integration into neural Monge map estimators has received limited attention. We propose a theoretically grounded method to incorporate unbalancedness into any Monge map estimator. We improve existing estimators to model cell trajectories over time and to predict cellular responses to perturbations. Moreover, our approach seamlessly integrates with the OT flow matching (OT-FM) framework. While we show that OT-FM performs competitively in image translation, we further improve performance by incorporating unbalancedness (UOT-FM), which better preserves relevant features. We hence establish UOT-FM as a principled method for unpaired image translation.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19548"} +{"video_file": "2XkTz7gdpc_39018499.mp4", "openreview_id": "2XkTz7gdpc", "slideslive_id": 39018499, "venue": "iclr2024", "title": "Efficient and Scalable Graph Generation through Iterative Local Expansion", "status": "Poster", "keywords": "Graph Generation;Denoising Diffusion;Spectral Graph Theory", "tldr": "We introduce a novel method to build a graph by progressively expanding a single node to a target graph, adding nodes and edges in a localized manner through denoising diffusion, building first the global structure and then refining.", "abstract": "In the realm of generative models for graphs, extensive research has been conducted. However, most existing methods struggle with large graphs due to the complexity of representing the entire joint distribution across all node pairs and capturing both global and local graph structures simultaneously. To overcome these issues, we introduce a method that generates a graph by progressively expanding a single node to a target graph. In each step, nodes and edges are added in a localized manner through denoising diffusion, building first the global structure, and then refining the local details. 
The local generation avoids modeling the entire joint distribution over all node pairs, achieving substantial computational savings with subquadratic runtime relative to node count while maintaining high expressivity through multiscale generation. Our experiments show that our model achieves state-of-the-art performance on well-established benchmark datasets while successfully scaling to graphs with at least 5000 nodes. Our method is also the first to successfully extrapolate to graphs outside of the training distribution, showcasing a much better generalization capability over existing methods.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19545"} +{"video_file": "2dhxxIKhqz_39018800.mp4", "openreview_id": "2dhxxIKhqz", "slideslive_id": 39018800, "venue": "iclr2024", "title": "Function-space Parameterization of Neural Networks for Sequential Learning", "status": "Poster", "keywords": "Neural networks;Bayesian deep learning;deep learning;Gaussian processes;Laplace approximation;sequential learning", "tldr": "Converting neural networks to sparse function-space representation for sequential learning", "abstract": "Sequential learning paradigms pose challenges for gradient-based deep learning due to difficulties incorporating new data and retaining prior knowledge. While Gaussian processes elegantly tackle these problems, they struggle with scalability and handling rich inputs, such as images. To address these issues, we introduce a technique that converts neural networks from weight space to function space, through a dual parameterization. Our parameterization offers: (i) a way to scale function-space methods to large data sets via sparsification, (ii) retention of prior knowledge when access to past data is limited, and (iii) a mechanism to incorporate new data without retraining. Our experiments demonstrate that we can retain knowledge in continual learning and incorporate new data efficiently. We further show its strengths in uncertainty quantification and guiding exploration in model-based RL. Further information and code is available on the project website.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/19542"} +{"video_file": "2iGiSHmeAN_39018498.mp4", "openreview_id": "2iGiSHmeAN", "slideslive_id": 39018498, "venue": "iclr2024", "title": "BroGNet: Momentum-Conserving Graph Neural Stochastic Differential Equation for Learning Brownian Dynamics", "status": "Poster", "keywords": "Brownian dynamics;stochastic differential equation;graph neural network;scientific machine learning", "tldr": "Here, we present a momentum-conserving graph neural network for learning brownian dynamics represented by a stochastic differential equation", "abstract": "Neural networks (NNs) that exploit strong inductive biases based on physical laws and symmetries have shown remarkable success in learning the dynamics of physical systems directly from their trajectory. However, these works focus only on the systems that follow deterministic dynamics, such as Newtonian or Hamiltonian. Here, we propose a framework, namely Brownian graph neural networks (BroGNet), combining stochastic differential equations (SDEs) and GNNs to learn Brownian dynamics directly from the trajectory. We modify the architecture of BroGNet to enforce linear momentum conservation of the system, which, in turn, provides superior performance on learning dynamics as revealed empirically. 
We demonstrate this approach on several systems, namely, linear spring, linear spring with binary particle types, and non-linear spring systems, all following Brownian dynamics at finite temperatures. We show that BroGNet significantly outperforms proposed baselines across all the benchmarked Brownian systems. In addition, we demonstrate zero-shot generalizability of BroGNet to simulate unseen system sizes that are two orders of magnitude larger and to different temperatures than those used during training. Finally, we show that BroGNet conserves the momentum of the system resulting in superior performance and data efficiency. Altogether, our study contributes to advancing the understanding of the intricate dynamics of Brownian motion and demonstrates the effectiveness of graph neural networks in modeling such complex systems.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/19540"} +{"video_file": "2oWRumm67L_39018494.mp4", "openreview_id": "2oWRumm67L", "slideslive_id": 39018494, "venue": "iclr2024", "title": "Light-MILPopt: Solving Large-scale Mixed Integer Linear Programs with Lightweight Optimizer and Small-scale Training Dataset", "status": "Poster", "keywords": "Large-scale MILP;Learning for Optimization;Lightweight Optimization Framework", "tldr": "This paper proposes Light-MILPopt that only uses a lightweight optimizer and small-scale training dataset to solve large-scale MILPs.", "abstract": "Machine Learning (ML)-based optimization approaches emerge as a promising technique for solving large-scale Mixed Integer Linear Programs (MILPs). However, existing ML-based frameworks suffer from high model computation complexity, weak problem reduction, and reliance on large-scale optimizers and large training datasets, resulting in performance bottlenecks for large-scale MILPs. This paper proposes Light-MILPopt, a lightweight large-scale optimization framework that only uses a lightweight optimizer and small training dataset to solve large-scale MILPs. Specifically, Light-MILPopt can be divided into four stages: Problem Formulation for problem division to reduce model computational costs, Model-based Initial Solution Prediction for predicting and constructing the initial solution using a small-scale training dataset, Problem Reduction for both variable and constraint reduction, and Data-driven Optimization for current solution improvement employing a lightweight optimizer. Experimental evaluations on four large-scale benchmark MILPs and a real-world case study demonstrate that Light-MILPopt, leveraging a lightweight optimizer and small training dataset, outperforms the state-of-the-art ML-based optimization framework and advanced large-scale solvers (e.g. Gurobi, SCIP). 
The results and further analyses substantiate the ML-based framework's feasibility and effectiveness in solving large-scale MILPs.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/19536"} +{"video_file": "30N3bNAiw3_39017176.mp4", "openreview_id": "30N3bNAiw3", "slideslive_id": 39017176, "venue": "iclr2024", "title": "Separating common from salient patterns with Contrastive Representation Learning", "status": "Poster", "keywords": "Contrastive Learning;Mutual Information;Contrastive Analysis;Disentanglement", "tldr": "We use Contrastive Learning when performing Contrastive Analysis (i.e: separating salient factors of variation - that only exist in the target dataset in contrast with common factors of variation between target and background datasets).", "abstract": "Contrastive Analysis is a sub-field of Representation Learning that aims at separating 1) salient factors of variation - that only exist in the target dataset (i.e., diseased subjects) in contrast with 2) common factors of variation between target and background (i.e., healthy subjects) datasets. Despite their relevance, current models based on Variational Auto-Encoders have shown poor performance in learning semantically-expressive representations. On the other hand, Contrastive Representation Learning has shown tremendous performance leaps in various applications (classification, clustering, etc.). In this work, we propose to leverage the ability of Contrastive Learning to learn semantically expressive representations when performing Contrastive Analysis. Namely, we reformulate Contrastive Analysis under the lens of the InfoMax Principle and identify two Mutual Information terms to maximize and one to minimize. We decompose the two first terms into an Alignment and a Uniformity term, as commonly done in Contrastive Learning. Then, we motivate a novel Mutual Information minimization strategy to prevent information leakage between common and salient distributions. We validate our method on datasets designed to assess the pattern separation capability in Contrastive Analysis, including MNIST superimposed on CIFAR10, CelebA accessories, dSprites item superimposed on a digit grid, and three medical datasets.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19533"} +{"video_file": "31IOmrnoP4_39018490.mp4", "openreview_id": "31IOmrnoP4", "slideslive_id": 39018490, "venue": "iclr2024", "title": "Repelling Random Walks", "status": "Poster", "keywords": "Graphs;random walkers;quasi-Monte Carlo;kernel;PageRank;graphlets;scalable;mixing", "tldr": "A novel mechanism to correlate the trajectories of random walkers on graphs, improving the concentration properties of estimators whilst leaving them unbiased", "abstract": "We present a novel quasi-Monte Carlo mechanism to improve graph-based sampling, coined repelling random walks. By inducing correlations between the trajectories of an interacting ensemble such that their marginal transition probabilities are unmodified, we are able to explore the graph more efficiently, improving the concentration of statistical estimators whilst leaving them unbiased. The mechanism has a trivial drop-in implementation. We showcase the effectiveness of repelling random walks in a range of settings including estimation of graph kernels, the PageRank vector and graphlet concentrations. We provide detailed experimental evaluation and robust theoretical guarantees. 
To our knowledge, repelling random walks constitute the first rigorously studied quasi-Monte Carlo scheme correlating the directions of walkers on a graph, inviting new research in this exciting nascent domain.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/19531"} +{"video_file": "327tbF3S65_39019212.mp4", "openreview_id": "327tbF3S65", "slideslive_id": 39019212, "venue": "iclr2024", "title": "DDMI: Domain-agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations", "status": "Poster", "keywords": "Implicit neural representation;generative model;domain agnostic;diffusion model", "tldr": "We propose a latent diffusion model that generates hierarchically decomposed positional embeddings of Implicit neural representations, enabling high-quality generation on various data domains.", "abstract": "Recent studies have introduced a new class of generative models for synthesizing implicit neural representations (INRs) that capture arbitrary continuous signals in various domains. These models opened the door for domain-agnostic generative models, but they often fail to achieve high-quality generation. We observed that the existing methods generate the weights of neural networks to parameterize INRs and evaluate the network with fixed positional embeddings (PEs). Arguably, this architecture limits the expressive power of generative models and results in low-quality INR generation. To address this limitation, we propose Domain-agnostic Latent Diffusion Model for INRs (DDMI) that generates adaptive positional embeddings instead of neural networks' weights. Specifically, we develop a Discrete-to-continuous space Variational AutoEncoder (D2C-VAE) that seamlessly connects discrete data and continuous signal functions in the shared latent space. Additionally, we introduce a novel conditioning mechanism for evaluating INRs with the hierarchically decomposed PEs to further enhance expressive power. Extensive experiments across four modalities, \\eg, 2D images, 3D shapes, Neural Radiance Fields, and videos, with seven benchmark datasets, demonstrate the versatility of DDMI and its superior performance compared to the existing INR generative models. Code is available at \\href{https://github.com/mlvlab/DDMI}{https://github.com/mlvlab/DDMI}.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19530"} +{"video_file": "36L7W3ri4U_39018488.mp4", "openreview_id": "36L7W3ri4U", "slideslive_id": 39018488, "venue": "iclr2024", "title": "Beating Price of Anarchy and Gradient Descent without Regret in Potential Games", "status": "Poster", "keywords": "q-replicator dynamics;potential games;average price of anarchy;learning", "tldr": "Despite being almost optimal on average in a class of \n2\n\u00d7\n2\n potential games with unbounded Price of Anarchy, gradient descent is not always the optimal choice even in this restricted setting. These findings extend experimentally in larger games.", "abstract": "Arguably one of the thorniest problems in game theory is that of equilibrium selection. Specifically, in the presence of multiple equilibria do self-interested learning dynamics typically select the socially optimal ones? We study a rich class of continuous-time no-regret dynamics in potential games (PGs). Our class of dynamics, Q-Replicator Dynamics (QRD), include gradient descent (GD), log-barrier and replicator dynamics (RD) as special cases. 
We start by establishing pointwise convergence of all QRD to Nash equilibria in almost all PGs. In the case of GD, we show a tight average case performance within a factor of two of optimal, for a class of symmetric\n2\n\u00d7\n2\npotential games with unbounded Price of Anarchy (PoA). Despite this positive result, we show that GD is not always the optimal choice even in this restricted setting. Specifically, GD outperforms RD, if and only if risk- and payoff-dominance equilibria coincide. Finally, we experimentally show how these insights extend to all QRD dynamics and that unbounded gaps between average case performance and PoA analysis are common even in larger settings.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/19527"} +{"video_file": "3K3s9qxSn7_39018484.mp4", "openreview_id": "3K3s9qxSn7", "slideslive_id": 39018484, "venue": "iclr2024", "title": "On Representation Complexity of Model-based and Model-free Reinforcement Learning", "status": "Poster", "keywords": "model-based and model-free RL;representation complexity;circuit complexity;approximation error", "tldr": "We study representation complexity of model-based and model-free RL through circuit complexity to provide unique insights into sample efficiency of model-based RL.", "abstract": "We study the representation complexity of model-based and model-free reinforcement learning (RL) in the context of circuit complexity. We prove theoretically that there exists a broad class of MDPs such that their underlying transition and reward functions can be represented by constant depth circuits with polynomial size, while the optimal\nQ\n-function suffers an exponential circuit complexity in constant-depth circuits. By drawing attention to the approximation errors and building connections to complexity theory, our theory provides unique insights into why model-based algorithms usually enjoy better sample complexity than model-free algorithms from a novel representation complexity perspective: in some cases, the ground-truth rule (model) of the environment is simple to represent, while other quantities, such as\nQ\n-function, appear complex. We empirically corroborate our theory by comparing the approximation error of the transition kernel, reward function, and optimal\nQ\n-function in various Mujoco environments, which demonstrates that the approximation errors of the transition kernel and reward function are consistently lower than those of the optimal\nQ\n-function. To the best of our knowledge, this work is the first to study the circuit complexity of RL, which also provides a rigorous framework for future research.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/19520"} +{"video_file": "3QkzYBSWqL_39018480.mp4", "openreview_id": "3QkzYBSWqL", "slideslive_id": 39018480, "venue": "iclr2024", "title": "Universal Backdoor Attacks", "status": "Poster", "keywords": "Backdoor;Data poisoning;Integrity;Image Classification", "tldr": "Using data poisoning to create backdoors that target every class in deep image classifiers.", "abstract": "Web-scraped datasets are vulnerable to data poisoning, which can be used for backdooring deep image classifiers during training. Since training on large datasets is expensive, a model is trained once and reused many times. Unlike adversarial examples, backdoor attacks often target specific classes rather than any class learned by the model. 
One might expect that targeting many classes through a na\u00efve composition of attacks vastly increases the number of poison samples. We show this is not necessarily true and more efficient, universal data poisoning attacks exist that allow controlling misclassifications from any source class into any target class with a slight increase in poison samples. Our idea is to generate triggers with salient characteristics that the model can learn. The triggers we craft exploit a phenomenon we call inter-class poison transferability, where learning a trigger from one class makes the model more vulnerable to learning triggers for other classes. We demonstrate the effectiveness and robustness of our universal backdoor attacks by controlling models with up to 6,000 classes while poisoning only 0.15% of the training dataset.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/19514"} +{"video_file": "3ROGsTX3IR_39019190.mp4", "openreview_id": "3ROGsTX3IR", "slideslive_id": 39019190, "venue": "iclr2024", "title": "Grokking as a First Order Phase Transition in Two Layer Networks", "status": "Poster", "keywords": "Grokking;deep neural networks;Gaussian Process;phase transitions", "tldr": "Analytical predictions of feature learning and Grokking properties which demonstrate a mapping between Grokking and the theory of phase transitions", "abstract": "A key property of deep neural networks (DNNs) is their ability to learn new features during training. This intriguing aspect of deep learning stands out most clearly in recently reported Grokking phenomena. While mainly reflected as a sudden increase in test accuracy, Grokking is also believed to be a beyond lazy-learning/Gaussian Process (GP) phenomenon involving feature learning. Here we apply a recent development in the theory of feature learning, the adaptive kernel approach, to two teacher-student models with cubic-polynomial and modular addition teachers. We provide analytical predictions on feature learning and Grokking properties of these models and demonstrate a mapping between Grokking and the theory of phase transitions. We show that after Grokking, the state of the DNN is analogous to the mixed phase following a first-order phase transition. In this mixed phase, the DNN generates useful internal representations of the teacher that are sharply distinct from those before the transition.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/19513"} +{"video_file": "3TO3TtnOFl_39018849.mp4", "openreview_id": "3TO3TtnOFl", "slideslive_id": 39018849, "venue": "iclr2024", "title": "BTR: Binary Token Representations for Efficient Retrieval Augmented Language Models", "status": "Spotlight", "keywords": "language models;question answering;binary representations;retrieval-augmented language models", "tldr": "We create cacheable binary token presentations for the passages in the reader of retrieval-augmented language models to improve inference efficiency", "abstract": "Retrieval augmentation addresses many critical problems in large language models such as hallucination, staleness, and privacy leaks. However, running retrieval-augmented language models (LMs) is slow and difficult to scale due to processing large amounts of retrieved text. We introduce binary token representations (BTR), which use 1-bit vectors to precompute every token in passages, significantly reducing computation during inference. 
Despite the potential loss of accuracy, our new calibration techniques and training objectives restore performance. Combined with offline and runtime compression, this only requires 127GB of disk space for encoding 3 billion tokens in Wikipedia. Our experiments show that on five knowledge-intensive NLP tasks, BTR accelerates state-of-the-art inference by up to 4x and reduces storage by over 100x while maintaining over 95% task performance. Our code is publicly available at https://github.com/csarron/BTR.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19511"}
{"video_file": "3UWuFoksGb_39019084.mp4", "openreview_id": "3UWuFoksGb", "slideslive_id": 39019084, "venue": "iclr2024", "title": "Learning Planning Abstractions from Language", "status": "Poster", "keywords": "Planning and Learning;Learning Abstractions;Compositional Generalization;Robotic Manipulation", "tldr": "A framework that utilizes language-annotated demonstrations to automatically discover a symbolic and abstract action space and induce a latent state abstraction for planning.", "abstract": "This paper presents a framework for learning state and action abstractions in sequential decision-making domains. Our framework, planning abstraction from language (PARL), utilizes language-annotated demonstrations to automatically discover a symbolic and abstract action space and induce a latent state abstraction based on it. PARL consists of three stages: 1) recovering object-level and action concepts, 2) learning state abstractions, abstract action feasibility, and transition models, and 3) applying low-level policies for abstract actions. During inference, given the task description, PARL first makes abstract action plans using the latent transition and feasibility functions, then refines the high-level plan using low-level policies. PARL generalizes across scenarios involving novel object instances and environments, unseen concept compositions, and tasks that require longer planning horizons than settings it is trained on.", "primary_area": "applications to robotics, autonomy, planning", "site": "https://iclr.cc/virtual/2024/poster/19510"}
{"video_file": "3Vw7DQqq7U_39017300.mp4", "openreview_id": "3Vw7DQqq7U", "slideslive_id": 39017300, "venue": "iclr2024", "title": "LEMON: Lossless model expansion", "status": "Poster", "keywords": "model growth;efficient deep learning;continual learning", "tldr": "We propose LEMON, a method that initializes large model with pretrained small model to save computational resources.", "abstract": "Scaling of deep neural networks, especially Transformers, is pivotal for their surging performance and has further led to the emergence of sophisticated reasoning capabilities in foundation models. Such scaling generally requires training large models from scratch with random initialization, failing to leverage the knowledge acquired by their smaller counterparts, which are already resource-intensive to obtain. To tackle this inefficiency, we present LosslEss MOdel ExpansioN (LEMON), a recipe to initialize scaled models using the weights of their smaller but pre-trained counterparts. This is followed by model training with an optimized learning rate scheduler tailored explicitly for the scaled models, substantially reducing the training time compared to training from scratch. 
Notably, LEMON is versatile, ensuring compatibility with various network structures, including models like Vision Transformers and BERT. Our empirical results demonstrate that LEMON reduces computational costs by 56.7% for Vision Transformers and 33.2% for BERT when compared to training from scratch.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19508"} +{"video_file": "3ZqKxMHcAg_39017135.mp4", "openreview_id": "3ZqKxMHcAg", "slideslive_id": 39017135, "venue": "iclr2024", "title": "Evaluating Language Model Agency Through Negotiations", "status": "Poster", "keywords": "language model evaluation;dynamic evaluation;alignment;cooperative AI;agency;evolving benchmarks;multi-agent interactions", "tldr": "A benchmark approach to jointly evaluate performance and alignment of language models through dynamic, multi-step, and cross-model negotiation games.", "abstract": "We introduce an approach to evaluate language model (LM) agency using negotiation games. This approach better reflects real-world use cases and addresses some of the shortcomings of alternative LM benchmarks. Negotiation games enable us to study multi-turn, and cross-model interactions, modulate complexity, and side-step accidental evaluation data leakage. We use our approach to test six widely used and publicly accessible LMs, evaluating performance and alignment in both self-play and cross-play settings. Noteworthy findings include: (i) only closed-source models tested here were able to complete these tasks; (ii) cooperative bargaining games proved to be most challenging to the models; and (iii) even the most powerful models sometimes \"lose\" to weaker opponents.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/19505"} +{"video_file": "3f5PALef5B_39017130.mp4", "openreview_id": "3f5PALef5B", "slideslive_id": 39017130, "venue": "iclr2024", "title": "LEGO-Prover: Neural Theorem Proving with Growing Libraries", "status": "Oral", "keywords": "Theorem proving;Large language model;Autoformalization", "tldr": "Add:", "abstract": "Despite the success of large language models (LLMs), the task of theorem proving still remains one of the hardest reasoning tasks that is far from being fully solved. Prior methods using language models have demonstrated promising results, but they still struggle to prove even middle school level theorems. One common limitation of these methods is that they assume a fixed theorem library during the whole theorem proving process. However, as we all know, creating new useful theorems or even new theories is not only helpful but crucial and necessary for advancing mathematics and proving harder and deeper results. In this work, we present LEGO-Prover, which employs a growing skill library containing verified lemmas as skills to augment the capability of LLMs used in theorem proving. By constructing the proof modularly, LEGO-Prover enables LLMs to utilize existing skills retrieved from the library and to create new skills during the proving process. These skills are further evolved (by prompting an LLM) to enrich the library on another scale. Modular and reusable skills are constantly added to the library to enable tackling increasingly intricate mathematical problems. Moreover, the learned library further bridges the gap between human proofs and formal proofs by making it easier to impute missing steps. 
LEGO-Prover advances the state-of-the-art pass rate on miniF2F-valid (48.0% to 57.0%) and miniF2F-test (45.5% to 50.0%). During the proving process, LEGO-Prover also generates over 20,000 skills (theorems/lemmas) and adds them to the growing library. Our ablation study indicates that these newly added skills are indeed helpful for proving theorems, resulting in a 4.9% improvement in success rate", "primary_area": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)", "site": "https://iclr.cc/virtual/2024/poster/19499"} +{"video_file": "3mnWvUZIXt_39018649.mp4", "openreview_id": "3mnWvUZIXt", "slideslive_id": 39018649, "venue": "iclr2024", "title": "Towards Principled Representation Learning from Videos for Reinforcement Learning", "status": "Spotlight", "keywords": "Reinforcement Learning;Representation Learning", "tldr": "Theoretical analysis and experiments concerning the value reinforcement learning can gain from pretrained representations of unlabeled video data.", "abstract": "We study pre-training representations for decision-making using video data, which is abundantly available for tasks such as game agents and software testing. Even though significant empirical advances have been made on this problem, a theoretical understanding remains absent. We initiate the theoretical investigation into principled approaches for representation learning and focus on learning the latent state representations of the underlying MDP using video data. We study two types of settings: one where there is iid noise in the observation, and a more challenging setting where there is also the presence of exogenous noise, which is non-iid noise that is temporally correlated, such as the motion of people or cars in the background. We study three commonly used approaches: autoencoding, temporal contrastive learning, and forward modeling. We prove upper bounds for temporal contrastive learning and forward modeling in the presence of only iid noise. We show that these approaches can learn the latent state and use it to do efficient downstream RL with polynomial sample complexity. When exogenous noise is also present, we establish a lower bound result showing that the sample complexity of learning from video data can be exponentially worse than learning from action-labeled trajectory data. This partially explains why reinforcement learning with video pre-training is hard. We evaluate these representational learning methods in two visual domains, yielding results that are consistent with our theoretical findings.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19497"} +{"video_file": "3pf2hEdu8B_39019110.mp4", "openreview_id": "3pf2hEdu8B", "slideslive_id": 39019110, "venue": "iclr2024", "title": "Rethinking the Uniformity Metric in Self-Supervised Learning", "status": "Poster", "keywords": "Effective uniformity metrics;dimensional collapse;Wasserstein distance;self-supervised learning", "tldr": "We propose a new Wasserstein uniformity metric that could capture feature redundancy and dimensional collapse.", "abstract": "Uniformity plays an important role in evaluating learned representations, providing insights into self-supervised learning. In our quest for effective uniformity metrics, we pinpoint four principled properties that such metrics should possess. Namely, an effective uniformity metric should remain invariant to instance permutations and sample replications while accurately capturing feature redundancy and dimensional collapse. 
Surprisingly, we find that the uniformity metric proposed by \\citet{Wang2020UnderstandingCR} fails to satisfy the majority of these properties. Specifically, their metric is sensitive to sample replications, and cannot account for feature redundancy and dimensional collapse correctly. To overcome these limitations, we introduce a new uniformity metric based on the Wasserstein distance, which satisfies all the aforementioned properties. Integrating this new metric in existing self-supervised learning methods effectively mitigates dimensional collapse and consistently improves their performance on downstream tasks involving CIFAR-10 and CIFAR-100 datasets. Code is available at \\url{https://github.com/statsle/WassersteinSSL}.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19494"}
{"video_file": "3qo1pJHabg_39018470.mp4", "openreview_id": "3qo1pJHabg", "slideslive_id": 39018470, "venue": "iclr2024", "title": "LRR: Language-Driven Resamplable Continuous Representation against Adversarial Tracking Attacks", "status": "Poster", "keywords": "Tracking defence;spatial-temporal implicit representation;language-image model", "tldr": "We propose to use the language to guide the reconstruction of the adversarial frames through resamplable spatial-temporal implicit representations.", "abstract": "Visual object tracking plays a critical role in visual-based autonomous systems, as it aims to estimate the position and size of the object of interest within a live video. Despite significant progress made in this field, state-of-the-art (SOTA) trackers often fail when faced with adversarial perturbations in the incoming frames. This can lead to significant robustness and security issues when these trackers are deployed in the real world. To achieve high accuracy on both clean and adversarial data, we propose building a spatial-temporal continuous representation using the semantic text guidance of the object of interest. This novel continuous representation enables us to reconstruct incoming frames to maintain semantic and appearance consistency with the object of interest and its clean counterparts. As a result, our proposed method successfully defends against different SOTA adversarial tracking attacks while maintaining high accuracy on clean data. In particular, our method significantly increases tracking accuracy under adversarial attacks with around 90% relative improvement on UAV123, which is even higher than the accuracy on clean data.", "primary_area": "applications to robotics, autonomy, planning", "site": "https://iclr.cc/virtual/2024/poster/19493"}
{"video_file": "3tM1l5tSbv_39018634.mp4", "openreview_id": "3tM1l5tSbv", "slideslive_id": 39018634, "venue": "iclr2024", "title": "Generative Learning for Solving Non-Convex Problem with Multi-Valued Input-Solution Mapping", "status": "Poster", "keywords": "Non-convex optimization;Multi-valued solution mapping;Generative model;Ordinary differential equation;Supervised learning", "tldr": "We propose a generative learning framework to learn the multi-valued input-solution mapping for non-convex optimization problems.", "abstract": "By employing neural networks (NN) to learn input-solution mappings and passing a new input through the learned mapping to obtain a solution instantly, recent studies have shown remarkable speed improvements over iterative algorithms for solving optimization problems. 
Meanwhile, they also highlight methodological challenges to be addressed. In particular, general non-convex problems often present multiple optimal solutions for identical inputs, signifying a complex, multi-valued input-solution mapping. Conventional learning techniques, primarily tailored to learn single-valued mappings, struggle to train NNs to accurately decipher multi-valued ones, leading to inferior solutions. We address this fundamental issue by developing a generative learning approach using a rectified flow (RectFlow) model built upon ordinary differential equations. In contrast to learning input-solution mapping, we learn the mapping from input to solution distribution, exploiting the universal approximation capability of the RectFlow model. Upon receiving a new input, we employ the trained RectFlow model to sample high-quality solutions from the input-dependent distribution it has learned. Our approach outperforms conceivable GAN and Diffusion models in terms of training stability and run-time complexity. We provide a detailed characterization of the optimality loss and runtime complexity associated with our generative approach. Simulation results for solving non-convex problems show that our method achieves significantly better solution optimality than recent NN schemes, with comparable feasibility and speedup performance.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/19491"}
{"video_file": "3z60EWfh1p_39018464.mp4", "openreview_id": "3z60EWfh1p", "slideslive_id": 39018464, "venue": "iclr2024", "title": "Geometrically Aligned Transfer Encoder for Inductive Transfer in Regression Tasks", "status": "Poster", "keywords": "Transfer Learning;Inductive Transfer;Geometrical Deeplearning;Regression", "tldr": "A novel method of inductive transfer learning on regression tasks based on differential geometry", "abstract": "Transfer learning is a crucial technique for handling a small amount of data that is potentially related to other abundant data. However, most of the existing methods are focused on classification tasks using images and language datasets. Therefore, in order to expand the transfer learning scheme to regression tasks, we propose a novel transfer technique based on differential geometry, namely the Geometrically Aligned Transfer Encoder (GATE). In this method, we interpret the latent vectors from the model to exist on a Riemannian curved manifold. We find a proper diffeomorphism between pairs of tasks to ensure that every arbitrary point maps to a locally flat coordinate in the overlapping region, allowing the transfer of knowledge from the source to the target data. This also serves as an effective regularizer for the model to behave in extrapolation regions. 
In this article, we demonstrate that GATE outperforms conventional methods and exhibits stable behavior in both the latent space and extrapolation regions for various molecular graph datasets.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/19485"}
{"video_file": "3zKtaqxLhW_39018463.mp4", "openreview_id": "3zKtaqxLhW", "slideslive_id": 39018463, "venue": "iclr2024", "title": "On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes", "status": "Poster", "keywords": "Language models;Distillation;RLHF", "tldr": "Better distillation for autoregressive student models using on-policy student-generated data, which can be easily combined with RLHF.", "abstract": "Knowledge distillation (KD) is widely used for compressing a teacher model to reduce its inference cost and memory footprint, by training a smaller student model. However, current KD methods for auto-regressive sequence models suffer from distribution mismatch between output sequences seen during training and those generated by the student during inference. To address this issue, we introduce Generalized Knowledge Distillation (GKD). Instead of solely relying on a fixed set of output sequences, GKD trains the student on its self-generated output sequences by leveraging feedback from the teacher on such sequences. Unlike supervised KD approaches, GKD also offers the flexibility to employ alternative loss functions between the student and teacher, which can be useful when the student lacks the expressivity to mimic the teacher's distribution. Furthermore, GKD facilitates the seamless integration of distillation with RL fine-tuning (RLHF). We demonstrate the efficacy of GKD for distilling auto-regressive T5 language models on summarization, translation, and arithmetic reasoning tasks.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19484"}
{"video_file": "3zQo5oUvia_39018462.mp4", "openreview_id": "3zQo5oUvia", "slideslive_id": 39018462, "venue": "iclr2024", "title": "REBAR: Retrieval-Based Reconstruction for Time-series Contrastive Learning", "status": "Poster", "keywords": "time-series;contrastive learning;masked reconstruction;self-supervised learning;imputation;unsupervised learning", "tldr": "We introduce a novel method of identifying positives and negatives for time-series contrastive learning with Retrieval-Based Reconstruction (REBAR)", "abstract": "The success of self-supervised contrastive learning hinges on identifying positive data pairs, such that when they are pushed together in embedding space, the space encodes useful information for subsequent downstream tasks. Constructing positive pairs is non-trivial as the pairing must be similar enough to reflect a shared semantic meaning, but different enough to capture within-class variation. Classical approaches in vision use augmentations to exploit well-established invariances to construct positive pairs, but invariances in the time-series domain are much less obvious. In our work, we propose a novel method of using a learned measure for identifying positive pairs. Our Retrieval-Based Reconstruction (REBAR) measure measures the similarity between two sequences as the reconstruction error that results from reconstructing one sequence with retrieved information from the other. Then, if the two sequences have high REBAR similarity, we label them as a positive pair. 
Through validation experiments, we show that the REBAR error is a predictor of mutual class membership. Once integrated into a contrastive learning framework, our REBAR method learns an embedding that achieves state-of-the-art performance on downstream tasks across various modalities.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19483"} +{"video_file": "488A64eOf6_39019195.mp4", "openreview_id": "488A64eOf6", "slideslive_id": 39019195, "venue": "iclr2024", "title": "Language Model Decoding as Direct Metrics Optimization", "status": "Poster", "keywords": "language model;decoding algorithm;energy-based model", "tldr": "We introduce a novel decoding framework that treats language model decoding as an optimization problem, aiming to strictly match the expected performance of generations with human texts measured by metrics of desired aspects simultaneously.", "abstract": "Despite the remarkable advances in language modeling, current mainstream decoding methods still struggle to generate texts that align with human texts across different aspects. In particular, sampling-based methods produce less-repetitive texts which are often disjunctive in discourse, while search-based methods maintain topic coherence at the cost of increased repetition. Overall, these methods fall short in achieving holistic alignment across a broad range of aspects. In this work, we frame decoding from a language model as an optimization problem with the goal of strictly matching the expected performance with human texts measured by multiple metrics of desired aspects simultaneously. The resulting decoding distribution enjoys an analytical solution that scales the input language model distribution via a sequence-level energy function defined by these metrics. And most importantly, we prove that this induced distribution is guaranteed to improve the perplexity on human texts, which suggests a better approximation to the underlying distribution of human texts. To facilitate tractable sampling from this globally normalized distribution, we adopt the Sampling-Importance-Resampling technique. Experiments on various domains and model scales demonstrate the superiority of our method in metrics alignment with human texts and human evaluation over strong baselines.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19478"} +{"video_file": "49z97Y9lMq_39018712.mp4", "openreview_id": "49z97Y9lMq", "slideslive_id": 39018712, "venue": "iclr2024", "title": "LCOT: Linear Circular Optimal Transport", "status": "Poster", "keywords": "Optimal Transport;Circular Measure;Probability Metrics", "tldr": "The paper proposes a new metric, called LCOT, for probability measures supported on the unit circle that is computationally efficient, has an explicit linear embedding, and is rooted in the Circular OT metric.", "abstract": "The optimal transport problem for measures supported on non-Euclidean spaces has recently gained ample interest in diverse applications involving representation learning. In this paper, we focus on circular probability measures, i.e., probability measures supported on the unit circle, and introduce a new computationally efficient metric for these measures, denoted as Linear Circular Optimal Transport (LCOT). 
The proposed metric comes with an explicit linear embedding that allows one to apply Machine Learning (ML) algorithms to the embedded measures and seamlessly modify the underlying metric for the ML algorithm to LCOT. We show that the proposed metric is rooted in the Circular Optimal Transport (COT) and can be considered the linearization of the COT metric with respect to a fixed reference measure. We provide a theoretical analysis of the proposed metric and derive the computational complexities for pairwise comparison of circular probability measures. Lastly, through a set of numerical experiments, we demonstrate the benefits of LCOT in learning representations from circular measures.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/19477"} +{"video_file": "4IT2pgc9v6_39017683.mp4", "openreview_id": "4IT2pgc9v6", "slideslive_id": 39017683, "venue": "iclr2024", "title": "One For All: Towards Training One Graph Model For All Classification Tasks", "status": "Spotlight", "keywords": "Graph Neural Network;Large Language Model;In-context Learning", "tldr": "This paper proposes a unified graph learning framework capable of cross-domain classification spanning node, edge, and graph tasks.", "abstract": "Designing a single model to address multiple tasks has been a long-standing objective in artificial intelligence. Recently, large language models have demonstrated exceptional capability in solving different tasks within the language domain. However, a unified model for various graph tasks remains underexplored, primarily due to the challenges unique to the graph learning domain. First, graph data from different areas carry distinct attributes and follow different distributions. Such discrepancy makes it hard to represent graphs in a single representation space. Second, tasks on graphs diversify into node, link, and graph tasks, requiring distinct embedding strategies. Finally, an appropriate graph prompting paradigm for in-context learning is unclear. We propose One for All (OFA), the first general framework that can use a single graph model to address the above challenges. Specifically, OFA proposes text-attributed graphs to unify different graph data by describing nodes and edges with natural language and uses language models to encode the diverse and possibly cross-domain text attributes to feature vectors in the same embedding space. Furthermore, OFA introduces the concept of nodes-of-interest to standardize different tasks with a single task representation. For in-context learning on graphs, OFA introduces a novel graph prompting paradigm that appends prompting substructures to the input graph, which enables it to address varied tasks without fine-tuning. We train the OFA model using graph data from multiple domains (including citation networks, molecular graphs, knowledge graphs, etc.) simultaneously and evaluate its ability in supervised, few-shot, and zero-shot learning scenarios. 
OFA performs well across different tasks, making it the first general-purpose across-domains classification model on graphs.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/19474"} +{"video_file": "4KZpDGD4Nh_39018456.mp4", "openreview_id": "4KZpDGD4Nh", "slideslive_id": 39018456, "venue": "iclr2024", "title": "Neurosymbolic Grounding for Compositional World Models", "status": "Poster", "keywords": "neurosymbolic learning;machine learning;world modeling;compositional generalization", "tldr": "We study a new form of compositional generalization and develop a hybrid neurosymbolic world model for this form of compositional generalization..", "abstract": "We introduce Cosmos, a framework for object-centric world modeling that is designed for compositional generalization (CompGen), i.e., high performance on unseen input scenes obtained through the composition of known visual \"atoms.\" The central insight behind Cosmos is the use of a novel form of neurosymbolic grounding. Specifically, the framework introduces two new tools: (i) neurosymbolic scene encodings, which represent each entity in a scene using a real vector computed using a neural encoder, as well as a vector of composable symbols describing attributes of the entity, and (ii) a neurosymbolic attention mechanism that binds these entities to learned rules of interaction. Cosmos is end-to-end differentiable; also, unlike traditional neurosymbolic methods that require representations to be manually mapped to symbols, it computes an entity's symbolic attributes using vision-language foundation models. Through an evaluation that considers two different forms of CompGen on an established blocks-pushing domain, we show that the framework establishes a new state-of-the-art for CompGen in world modeling. Artifacts are available at: https://trishullab.github.io/cosmos-web/", "primary_area": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)", "site": "https://iclr.cc/virtual/2024/poster/19472"} +{"video_file": "4KqkizXgXU_39018912.mp4", "openreview_id": "4KqkizXgXU", "slideslive_id": 39018912, "venue": "iclr2024", "title": "Curiosity-driven Red-teaming for Large Language Models", "status": "Poster", "keywords": "Curiosity-driven exploration;Reinforcement learning;Language model", "tldr": "We use curiosity-driven exploration to improve the diversity of the test cases generated for red teaming large language models.", "abstract": "Large language models (LLMs) hold great potential for many natural language applications but risk generating incorrect or toxic content. To probe when an LLM generates unwanted content, the current paradigm is to recruit a\nred team\nof human testers to design input prompts (i.e., test cases) that elicit undesirable responses from LLMs. However, relying solely on human testers is expensive and time-consuming. Recent works automate red teaming by training a separate red team LLM with reinforcement learning (RL) to generate test cases that maximize the chance of eliciting undesirable responses from the target LLM. However, current RL methods are only able to generate a small number of effective test cases resulting in a low coverage of the span of prompts that elicit undesirable responses from the target LLM. To overcome this limitation, we draw a connection between the problem of increasing the coverage of generated test cases and the well-studied approach of curiosity-driven exploration that optimizes for novelty. 
Our method of curiosity-driven red teaming (CRT) achieves greater coverage of test cases while maintaining or increasing their effectiveness compared to existing methods. Our method, CRT, successfully provokes toxic responses from the LLaMA2 model that has been heavily fine-tuned using human preferences to avoid toxic outputs. Code is available at https://github.com/Improbable-AI/curiosity_redteam.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19471"}
{"video_file": "4N97bz1sP6_39018455.mp4", "openreview_id": "4N97bz1sP6", "slideslive_id": 39018455, "venue": "iclr2024", "title": "Weakly-supervised Audio Separation via Bi-modal Semantic Similarity", "status": "Poster", "keywords": "Audio-language learning;conditional audio separation;unsupervised learning;weakly supervised learning;semi-supervised learning", "tldr": "We propose a weakly supervised learning framework for conditional audio separation that significantly outperforms the baselines in unsupervised and semi-supervised settings.", "abstract": "Conditional sound separation in multi-source audio mixtures without having access to single source sound data during training is a long standing challenge. Existing mix-and-separate based methods suffer from significant performance drop with multi-source training mixtures due to the lack of supervision signal for single source separation cases during training. However, in the case of language-conditional audio separation, we do have access to corresponding text descriptions for each audio mixture in our training data, which can be seen as (rough) representations of the audio samples in the language modality. That raises the curious question of how to generate supervision signal for single-source audio extraction by leveraging the fact that single-source sounding language entities can be easily extracted from the text description. To this end, in this paper, we propose a generic bi-modal separation framework which can enhance the existing unsupervised frameworks to separate single-source signals in a target modality (i.e., audio) using the easily separable corresponding signals in the conditioning modality (i.e., language), without having access to single-source samples in the target modality during training. We empirically show that this is well within reach if we have access to a pretrained joint embedding model between the two modalities (i.e., CLAP). Furthermore, we propose to incorporate our framework into two fundamental scenarios to enhance separation performance. First, we show that our proposed methodology significantly improves the performance of purely unsupervised baselines by reducing the distribution shift between training and test samples. In particular, we show that our framework can achieve 71% boost in terms of Signal-to-Distortion Ratio (SDR) over the baseline, reaching 97.5% of the supervised learning performance. Second, we show that we can further improve the performance of the supervised learning itself by 17% if we augment it by our proposed weakly-supervised framework. Our framework achieves this by making large corpora of unsupervised data available to the supervised learning model as well as utilizing a natural, robust regularization mechanism through weak supervision from the language modality, and hence enabling a powerful semi-supervised framework for audio separation. 
Code is released at https://github.com/microsoft/BiModalAudioSeparation.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19468"} +{"video_file": "4VIgNuQ1pY_39018449.mp4", "openreview_id": "4VIgNuQ1pY", "slideslive_id": 39018449, "venue": "iclr2024", "title": "Stable Neural Stochastic Differential Equations in Analyzing Irregular Time Series Data", "status": "Spotlight", "keywords": "Neural Ordinary Differential Equations;Neural Stochastic Differential Equations;Irregular time series data", "tldr": "Stable Neural Stochastic Differential Equations", "abstract": "Irregular sampling intervals and missing values in real-world time series data present challenges for conventional methods that assume consistent intervals and complete data. Neural Ordinary Differential Equations (Neural ODEs) offer an alternative approach, utilizing neural networks combined with ODE solvers to learn continuous latent representations through parameterized vector fields. Neural Stochastic Differential Equations (Neural SDEs) extend Neural ODEs by incorporating a diffusion term, although this addition is not trivial, particularly when addressing irregular intervals and missing values. Consequently, careful design of drift and diffusion functions is crucial for maintaining stability and enhancing performance, while incautious choices can result in adverse properties such as the absence of strong solutions, stochastic destabilization, or unstable Euler discretizations, significantly affecting Neural SDEs' performance. In this study, we propose three stable classes of Neural SDEs: Langevin-type SDE, Linear Noise SDE, and Geometric SDE. Then, we rigorously demonstrate their robustness in maintaining excellent performance under distribution shift, while effectively preventing overfitting. To assess the effectiveness of our approach, we conduct extensive experiments on four benchmark datasets for interpolation, forecasting, and classification tasks, and analyze the robustness of our methods with 30 public datasets under different missing rates. Our results demonstrate the efficacy of the proposed method in handling real-world irregular time series data.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/19462"} +{"video_file": "4Zz5UELkIt_39018445.mp4", "openreview_id": "4Zz5UELkIt", "slideslive_id": 39018445, "venue": "iclr2024", "title": "Adaptive Instrument Design for Indirect Experiments", "status": "Poster", "keywords": "instrument variable;experiment design;indirect experiments;adaptive design", "tldr": "We take the initial steps towards enhancing sample efficiency for \\textit{indirect} experiments by adaptively designing a data collection policy over instrumental variables.", "abstract": "Indirect experiments provide a valuable framework for estimating treatment effects in situations where conducting randomized control trials (RCTs) is impractical or unethical. Unlike RCTs, indirect experiments estimate treatment effects by leveraging (conditional) instrumental variables, enabling estimation through encouragement and recommendation rather than strict treatment assignment. 
However, the sample efficiency of such estimators depends not only on the inherent variability in outcomes but also on the varying compliance levels of users with the instrumental variables and the choice of estimator being used, especially when dealing with numerous instrumental variables. While adaptive experiment design has a rich literature for \\textit{direct} experiments, in this paper we take the initial steps towards enhancing sample efficiency for \\textit{indirect} experiments by adaptively designing a data collection policy over instrumental variables. Our main contribution is a practical computational procedure that utilizes influence functions to search for an optimal data collection policy, minimizing the mean-squared error of the desired (non-linear) estimator. Through experiments conducted in various domains inspired by real-world applications, we showcase how our method can significantly improve the sample efficiency of indirect experiments.", "primary_area": "causal reasoning", "site": "https://iclr.cc/virtual/2024/poster/19457"} +{"video_file": "4eJDMjYZZG_39017202.mp4", "openreview_id": "4eJDMjYZZG", "slideslive_id": 39017202, "venue": "iclr2024", "title": "Language Model Detectors Are Easily Optimized Against", "status": "Poster", "keywords": "detector;language model;learning from preferences", "tldr": "We show that existing open source and commercial LLM detectors can be used as reward functions to produce much more difficult-to-detect language models.", "abstract": "The fluency and general applicability of large language models (LLMs) has motivated significant interest in detecting whether a piece of text was written by a language model. While both academic and commercial detectors have been deployed in some settings, particularly education, other research has highlighted the fragility of these systems. In this paper, we demonstrate a data-efficient attack that fine-tunes language models to confuse existing detectors, leveraging recent developments in reinforcement learning of language models. We use the `human-ness' score (often just a log probability) of various open-source and commercial detectors as a reward function for reinforcement learning, subject to a KL-divergence constraint that the resulting model does not differ significantly from the original. For a 7B parameter Llama-2 model, fine-tuning for under a day reduces the AUROC of the OpenAI RoBERTa-Large detector from 0.84 to 0.63, while perplexity on OpenWebText increases from 8.7 to only 9.0; with a larger perplexity budget, we can drive AUROC to 0.30 (worse than random). Similar to traditional adversarial attacks, we find that this increase in 'detector evasion' generalizes to other detectors not used during training. In light of our empirical results, we advise against continued reliance on LLM-generated text detectors. 
Models, datasets, and selected experiment code will be released at https://github.com/charlottttee/llm-detector-evasion.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/19453"}
{"video_file": "4h1apFjO99_39018442.mp4", "openreview_id": "4h1apFjO99", "slideslive_id": 39018442, "venue": "iclr2024", "title": "Diffusion-TS: Interpretable Diffusion for General Time Series Generation", "status": "Poster", "keywords": "Diffusion models;Synthetic Time series;Imputation;Forecasting", "tldr": "We propose an interpretable diffusion model for generating time series (un)conditionally.", "abstract": "Denoising diffusion probabilistic models (DDPMs) are becoming the leading paradigm for generative models. They have recently shown breakthroughs in audio synthesis, time series imputation and forecasting. In this paper, we propose Diffusion-TS, a novel diffusion-based framework that generates multivariate time series samples of high quality by using an encoder-decoder transformer with disentangled temporal representations, in which the decomposition technique guides Diffusion-TS to capture the semantic meaning of time series while transformers mine detailed sequential information from the noisy model input. Different from existing diffusion-based approaches, we train the model to directly reconstruct the sample instead of the noise in each diffusion step, combining a Fourier-based loss term. Diffusion-TS is expected to generate time series satisfying both interpretability and realness. In addition, it is shown that the proposed Diffusion-TS can be easily extended to conditional generation tasks, such as forecasting and imputation, without any model changes. This also motivates us to further explore the performance of Diffusion-TS under irregular settings. Finally, through qualitative and quantitative experiments, results show that Diffusion-TS achieves the state-of-the-art results on various realistic analyses of time series.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19451"}
{"video_file": "4iPw1klFWa_39018441.mp4", "openreview_id": "4iPw1klFWa", "slideslive_id": 39018441, "venue": "iclr2024", "title": "Scalable Neural Network Kernels", "status": "Poster", "keywords": "scalable kernel methods;random features;deep neural networks", "tldr": "We provide novel kernel methods that can be used to linearize MLP blocks leading to parameter efficient training and reduction in storage complexity.", "abstract": "We introduce the concept of scalable neural network kernels (SNNKs), the replacements of regular feedforward layers (FFLs), capable of approximating the latter, but with favorable computational properties. SNNKs effectively disentangle the inputs from the parameters of the neural network in the FFL, only to connect them in the final computation via the dot-product kernel. They are also strictly more expressive, as allowing to model complicated relationships beyond the functions of the dot-products of parameter-input vectors. We also introduce the neural network bundling process that applies SNNKs to compactify deep neural network architectures, resulting in additional compression gains. In its extreme version, it leads to the fully bundled network whose optimal parameters can be expressed via explicit formulae for several loss functions (e.g. mean squared error), opening a possibility to bypass backpropagation. 
As a by-product of our analysis, we introduce the mechanism of the universal random features (or URFs), applied to instantiate several SNNK variants, and interesting on its own in the context of scalable kernel methods. We provide rigorous theoretical analysis of all these concepts as well as an extensive empirical evaluation, ranging from point-wise kernel estimation to Transformers' fine-tuning with novel adapter layers inspired by SNNKs. Our mechanism provides up to 5x reduction in the number of trainable parameters, while maintaining competitive accuracy.", "primary_area": "metric learning, kernel learning, and sparse coding", "site": "https://iclr.cc/virtual/2024/poster/19450"} +{"video_file": "4kLVvIh8cp_39018440.mp4", "openreview_id": "4kLVvIh8cp", "slideslive_id": 39018440, "venue": "iclr2024", "title": "Pessimistic Nonlinear Least-Squares Value Iteration for Offline Reinforcement Learning", "status": "Poster", "keywords": "Offline reinforcement learning;instance-dependent;least-squares value iteration", "tldr": "In this paper, we present Pessimistic Nonlinear Least-Square Value Iteration (PNLSVI), an oracle-efficient algorithm for offline RL with non-linear function approximation.", "abstract": "Offline reinforcement learning (RL), where the agent aims to learn the optimal policy based on the data collected by a behavior policy, has attracted increasing attention in recent years. While offline RL with linear function approximation has been extensively studied with optimal results achieved under certain assumptions, many works shift their interest to offline RL with non-linear function approximation. However, limited works on offline RL with non-linear function approximation have instance-dependent regret guarantees. In this paper, we propose an oracle-efficient algorithm, dubbed Pessimistic Nonlinear Least-Square Value Iteration (PNLSVI), for offline RL with non-linear function approximation. Our algorithmic design comprises three innovative components: (1) a variance-based weighted regression scheme that can be applied to a wide range of function classes, (2) a subroutine for variance estimation, and (3) a planning phase that utilizes a pessimistic value iteration approach. Our algorithm enjoys a regret bound that has a tight dependency on the function class complexity and achieves minimax optimal instance-dependent regret when specialized to linear function approximation. Our work extends the previous instance-dependent results within simpler function classes, such as linear and differentiable function to a more general framework. To the best of our knowledge, this is the first statistically optimal algorithm for nonlinear offline RL.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19449"} +{"video_file": "4r2ybzJnmN_39018439.mp4", "openreview_id": "4r2ybzJnmN", "slideslive_id": 39018439, "venue": "iclr2024", "title": "Learning Delays in Spiking Neural Networks using Dilated Convolutions with Learnable Spacings", "status": "Poster", "keywords": "Spiking Neural Networks;Delays;Neuromorphic Computing;Speech Recognition", "tldr": "A new method to learn delays with backprop in deep spiking neural networks", "abstract": "Spiking Neural Networks (SNNs) are a promising research direction for building power-efficient information processing systems, especially for temporal tasks such as speech recognition. In SNNs, delays refer to the time needed for one spike to travel from one neuron to another. 
These delays matter because they influence the spike arrival times, and it is well-known that spiking neurons respond more strongly to coincident input spikes. More formally, it has been shown theoretically that plastic delays greatly increase the expressivity in SNNs. Yet, efficient algorithms to learn these delays have been lacking. Here, we propose a new discrete-time algorithm that addresses this issue in deep feedforward SNNs using backpropagation, in an offline manner. To simulate delays between consecutive layers, we use 1D convolutions across time. The kernels contain only a few non-zero weights \u2013 one per synapse \u2013 whose positions correspond to the delays. These positions are learned together with the weights using the recently proposed Dilated Convolution with Learnable Spacings (DCLS). We evaluated our method on three datasets: the Spiking Heidelberg Dataset (SHD), the Spiking Speech Commands (SSC) and its non spiking version Google Speech Commands v0.02 (GSC) benchmarks, which require detecting temporal patterns. We used feedforward SNNs with two or three hidden fully connected layers, and vanilla leaky integrate-and-fire neurons. We showed that fixed random delays help and that learning them helps even more. Furthermore, our method outperformed the state-of-the-art in the three datasets without using recurrent connections and with substantially fewer parameters. Our work demonstrates the potential of delay learning in developing accurate and precise models for temporal data processing. Our code is based on PyTorch / SpikingJelly and available at: https://github.com/Thvnvtos/SNN-delays", "primary_area": "applications to neuroscience & cognitive science", "site": "https://iclr.cc/virtual/2024/poster/19447"} +{"video_file": "567BjxgaTp_39019208.mp4", "openreview_id": "567BjxgaTp", "slideslive_id": 39019208, "venue": "iclr2024", "title": "How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions", "status": "Poster", "keywords": "language models;lying;deception;alignment;safety;truthfulness;honesty", "tldr": "How to elicit, and detect, lying behaviour in black-box LLMs.", "abstract": "Large language models (LLMs) can \u201clie\u201d, which we define as outputting false statements when incentivised to, despite \u201cknowing\u201d the truth in a demonstrable sense. LLMs might \u201clie\u201d, for example, when instructed to output misinformation. Here, we develop a simple lie detector that requires neither access to the LLM\u2019s activations (black-box) nor ground-truth knowledge of the fact in question. The detector works by asking a predefined set of unrelated follow-up questions after a suspected lie, and feeding the LLM\u2019s yes/no answers into a logistic regression classifier. Despite its simplicity, this lie detector is highly accurate and surprisingly general. When trained on examples from a single setting\u2014prompting GPT-3.5 to lie about factual questions\u2014the detector generalises out-of-distribution to (1) other LLM architectures, (2) LLMs fine-tuned to lie, (3) sycophantic lies, and (4) lies emerging in real-life scenarios such as sales. 
These results indicate that LLMs have distinctive lie-related behavioural patterns, consistent across architectures and contexts, which could enable general-purpose lie detection.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/19439"} +{"video_file": "5BCFlnfE1g_39018432.mp4", "openreview_id": "5BCFlnfE1g", "slideslive_id": 39018432, "venue": "iclr2024", "title": "Demystifying CLIP Data", "status": "Spotlight", "keywords": "multi-modal pretraining;CLIP;image;text", "tldr": "CLIP data curation and scaling", "abstract": "Contrastive Language-Image Pre-training (CLIP) is an approach that has advanced research and applications in computer vision, fueling modern recognition systems and generative models. We believe that the main ingredient to the success of CLIP is its \\textit{data} and \\textit{not} the \\textit{model} architecture or pre-training {objective}. However, CLIP only provides very limited information about its data and how it has been collected, leading to works that aim to reproduce CLIP's data by filtering with its model parameters. In this work, we intend to reveal CLIP's data curation approach and in our pursuit of making it open to the community introduce Metadata-Curated Language-Image Pre-training (MetaCLIP). MetaCLIP takes a raw data pool and metadata (derived from CLIP's concepts) and yields a balanced subset over the metadata distribution. Our experimental study rigorously isolates the model and training settings, concentrating solely on data. MetaCLIP applied to CommonCrawl with 400M image-text data pairs outperforms CLIP's data on multiple standard benchmarks. In zero-shot ImageNet classification, MetaCLIP achieves 70.8% accuracy, surpassing CLIP's 68.3% on \\mbox{ViT-B} models. Scaling to 1B data, while maintaining the same training budget, attains \\textbf{72.4%}. Our observations hold across various model sizes, exemplified by ViT-H achieving \\textbf{80.5%}, without any bells-and-whistles. Curation code and training data distribution over metadata will be made available.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19438"} +{"video_file": "5Dwqu5urzs_39017446.mp4", "openreview_id": "5Dwqu5urzs", "slideslive_id": 39017446, "venue": "iclr2024", "title": "Physics-Regulated Deep Reinforcement Learning: Invariant Embeddings", "status": "Spotlight", "keywords": "Physics-informed deep reinforcement learning;Safety-critical autonomous systems", "tldr": "Physics-regulated DRL", "abstract": "This paper proposes the Phy-DRL: a physics-regulated deep reinforcement learning (DRL) framework for safety-critical autonomous systems. The Phy-DRL has three distinguished invariant-embedding designs: i) residual action policy (i.e., integrating data-driven-DRL action policy and physics-model-based action policy), ii) automatically constructed safety-embedded reward, and iii) physics-model-guided neural network (NN) editing, including link editing and activation editing. Theoretically, the Phy-DRL exhibits 1) a mathematically provable safety guarantee and 2) strict compliance of critic and actor networks with physics knowledge about the action-value function and action policy. Finally, we evaluate the Phy-DRL on a cart-pole system and a quadruped robot. 
The experiments validate our theoretical results and demonstrate that Phy-DRL features guaranteed safety compared to purely data-driven DRL and solely model-based design while offering remarkably fewer learning parameters and fast training towards safety guarantee.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19435"} +{"video_file": "5EniAcsO7f_39018428.mp4", "openreview_id": "5EniAcsO7f", "slideslive_id": 39018428, "venue": "iclr2024", "title": "Weatherproofing Retrieval for Localization with Generative AI and Geometric Consistency", "status": "Poster", "keywords": "visual localization;image retrieval;synthetic data;domain shift;geometric consistency;long-term visual localization;ret4loc;image alteration", "tldr": "We make retrieval for localization models robust to weather, seasonal and time-of-day changes by augmenting the training set with synthetic variations generated using Generative AI and leverage geometric consistency for sampling and filtering", "abstract": "State-of-the-art visual localization approaches generally rely on a first image retrieval step whose role is crucial. Yet, retrieval often struggles when facing varying conditions, due to e.g. weather or time of day, with dramatic consequences on the visual localization accuracy. In this paper, we improve this retrieval step and tailor it to the final localization task. Among the several changes we advocate for, we propose to synthesize variants of the training set images, obtained from generative text-to-image models, in order to automatically expand the training set towards a number of nameable variations that particularly hurt visual localization. After expanding the training set, we propose a training approach that leverages the specificities and the underlying geometry of this mix of real and synthetic images. We experimentally show that those changes translate into large improvements for the most challenging visual localization datasets.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19433"} +{"video_file": "5RielfrDkP_39018756.mp4", "openreview_id": "5RielfrDkP", "slideslive_id": 39018756, "venue": "iclr2024", "title": "Learning Adaptive Multiresolution Transforms via Meta-Framelet-based Graph Convolutional Network", "status": "Poster", "keywords": "Graph neural networks;graph multiresolution analysis", "tldr": "We propose the MM-FGCN, a novel framework designed to learn adaptive graph multiresolution transforms, resulting in the attainment of state-of-the-art performance in various graph representation learning tasks.", "abstract": "Graph Neural Networks are popular tools in graph representation learning that capture the graph structural properties. However, most GNNs employ single-resolution graph feature extraction, thereby failing to capture micro-level local patterns (high resolution) and macro-level graph cluster and community patterns (low resolution) simultaneously. Many multiresolution methods have been developed to capture graph patterns at multiple scales, but most of them depend on predefined and handcrafted multiresolution transforms that remain fixed throughout the training process once formulated. Due to variations in graph instances and distributions, fixed handcrafted transforms can not effectively tailor multiresolution representations to each graph instance. 
To acquire multiresolution representation suited to different graph instances and distributions, we introduce the Multiresolution Meta-Framelet-based Graph Convolutional Network (MM-FGCN), facilitating comprehensive and adaptive multiresolution analysis across diverse graphs. Extensive experiments demonstrate that our MM-FGCN achieves SOTA performance on various graph learning tasks.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/19425"} +{"video_file": "5dlfiJIXoh_39018422.mp4", "openreview_id": "5dlfiJIXoh", "slideslive_id": 39018422, "venue": "iclr2024", "title": "Structured Video-Language Modeling with Temporal Grouping and Spatial Grounding", "status": "Poster", "keywords": "multi-modal learning;video and language", "tldr": "We propose S-ViLM to strengthen model's understanding into fine-grained structures such as region-object correspondences and temporal scene changes.", "abstract": "Existing video-language pre-training methods primarily focus on instance-level alignment between video clips and captions via global contrastive learning but neglect rich fine-grained local information in both videos and text, which is of importance to downstream tasks requiring temporal localization and semantic reasoning. A powerful model is expected to be capable of capturing region-object correspondences and recognizing scene changes in a video clip, reflecting spatial and temporal granularity, respectively. To strengthen model's understanding into such fine-grained details, we propose a simple yet effective video-language modeling framework, S-ViLM, by exploiting the intrinsic structures of these two modalities. It includes two novel designs, inter-clip spatial grounding and intra-clip temporal grouping, to promote learning region-object alignment and temporal-aware features, simultaneously. Comprehensive evaluations demonstrate that S-ViLM performs favorably against existing approaches in learning more expressive representations. Specifically, S-ViLM surpasses the state-of-the-art methods substantially on four representative downstream tasks, covering text-video retrieval, video question answering, video action recognition, and temporal action localization.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19422"} +{"video_file": "5h0qf7IBZZ_39018420.mp4", "openreview_id": "5h0qf7IBZZ", "slideslive_id": 39018420, "venue": "iclr2024", "title": "MiniLLM: Knowledge Distillation of Large Language Models", "status": "Poster", "keywords": "Large Lanauge Models;Knowledge Distillation", "tldr": "We propose a knowledge distillation method for Large Language Models with minimizing the reverse KL divergence as the objective.", "abstract": "Knowledge Distillation (KD) is a promising technique for reducing the high computational demand of large language models (LLMs). However, previous KD methods are primarily applied to white-box classification models or training small models to imitate black-box model APIs like ChatGPT. How to effectively distill the knowledge of white-box LLMs into small models is still under-explored, which becomes more important with the prosperity of open-source LLMs. In this work, we propose a KD approach that distills LLMs into smaller language models. 
We first replace the forward Kullback-Leibler divergence (KLD) objective in the standard KD approaches with reverse KLD, which is more suitable for KD on generative language models, to prevent the student model from overestimating the low-probability regions of the teacher distribution. Then, we derive an effective optimization approach to learn this objective. The student models are named MiniLLM. Extensive experiments in the instruction-following setting show that MiniLLM generates more precise responses with higher overall quality, lower exposure bias, better calibration, and higher long-text generation performance than the baselines. Our method is scalable for different model families with 120M to 13B parameters. Our code, data, and model checkpoints can be found in https://github.com/microsoft/LMOps/tree/main/minillm.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19420"} +{"video_file": "5iENGLEJKG_39018419.mp4", "openreview_id": "5iENGLEJKG", "slideslive_id": 39018419, "venue": "iclr2024", "title": "INViTE: INterpret and Control Vision-Language Models with Text Explanations", "status": "Poster", "keywords": "Interpretation; Transformer", "tldr": "Interpreting vision transformer's latent tokens in natural language without any data collection or training.", "abstract": "Large-scale pre-trained vision foundation models, such as CLIP, have become de facto backbones for various vision tasks. However, due to their black-box nature, understanding the underlying rules behind these models\u2019 predictions and controlling model behaviors have remained open challenges. We present INViTE: a framework for INterpreting Vision Transformer\u2019s latent tokens with Text Explanations. Given a latent token, INViTE retains its semantic information to the final layer using transformer\u2019s local operations and retrieves the closest text for explanation. INViTE enables understanding of model visual reasoning procedure without needing additional model training or data collection. Based on the obtained interpretations, INViTE allows for model editing that controls model reasoning behaviors and improves model robustness against biases and spurious correlations. Our code is available at https://github.com/tonychenxyz/vit-interpret.", "primary_area": "visualization or interpretation of learned representations", "site": "https://iclr.cc/virtual/2024/poster/19418"} +{"video_file": "5jWsW08zUh_39017066.mp4", "openreview_id": "5jWsW08zUh", "slideslive_id": 39017066, "venue": "iclr2024", "title": "Some Fundamental Aspects about Lipschitz Continuity of Neural Networks", "status": "Poster", "keywords": "Lipschitz continuity;Double Descent;Label Noise;Generalization", "tldr": "An empirical investigation into the behaviour of the Lipschitz constant for Deep Learning models in various learning settings.", "abstract": "Lipschitz continuity is a crucial functional property of any predictive model, that naturally governs its robustness, generalisation, as well as adversarial vulnerability. Contrary to other works that focus on obtaining tighter bounds and developing different practical strategies to enforce certain Lipschitz properties, we aim to thoroughly examine and characterise the Lipschitz behaviour of Neural Networks. Thus, we carry out an empirical investigation in a range of different settings (namely, architectures, datasets, label noise, and more) by exhausting the limits of the simplest and the most general lower and upper bounds. 
As a highlight of this investigation, we showcase a remarkable fidelity of the lower Lipschitz bound, identify a striking Double Descent trend in both upper and lower bounds to the Lipschitz and explain the intriguing effects of label noise on function smoothness and generalisation.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/19417"} +{"video_file": "5liV2xUdJL_39018417.mp4", "openreview_id": "5liV2xUdJL", "slideslive_id": 39018417, "venue": "iclr2024", "title": "Time-Efficient Reinforcement Learning with Stochastic Stateful Policies", "status": "Poster", "keywords": "reinforcement learning;recurrent neural networks;stateful policies;backpropagation through time;imitation learning", "tldr": "Our novel gradient estimator makes training of stateful policies simpler and faster.", "abstract": "Stateful policies play an important role in reinforcement learning, such as handling partially observable environments, enhancing robustness, or imposing an inductive bias directly into the policy structure. The conventional method for training stateful policies is Backpropagation Through Time (BPTT), which comes with significant drawbacks, such as slow training due to sequential gradient propagation and the occurrence of vanishing or exploding gradients. The gradient is often truncated to address these issues, resulting in a biased policy update. We present a novel approach for training stateful policies by decomposing the latter into a stochastic internal state kernel and a stateless policy, jointly optimized by following the stateful policy gradient. We introduce different versions of the stateful policy gradient theorem, enabling us to easily instantiate stateful variants of popular reinforcement learning and imitation learning algorithms. Furthermore, we provide a theoretical analysis of our new gradient estimator and compare it with BPTT. We evaluate our approach on complex continuous control tasks, e.g. humanoid locomotion, and demonstrate that our gradient estimator scales effectively with task complexity while offering a faster and simpler alternative to BPTT.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19415"} +{"video_file": "5nM2AHzqUj_39018416.mp4", "openreview_id": "5nM2AHzqUj", "slideslive_id": 39018416, "venue": "iclr2024", "title": "Linear Log-Normal Attention with Unbiased Concentration", "status": "Poster", "keywords": "Neural Networks;Transformers;Self-Attention;Linear Attention;Scalable Transformers;Efficient Attention;Attention with Linear Complexity;Linearized Attention;Self-Attention Analysis", "tldr": "The quadratic complexity of the attention limits the scalability of the transformer models. We propose Linear Log-Normal Attention that offers linear complexity while maintaining key features of the original attention mechanism.", "abstract": "Transformer models have achieved remarkable results in a wide range of applications. However, their scalability is hampered by the quadratic time and memory complexity of the self-attention mechanism concerning the sequence length. This limitation poses a substantial obstacle when dealing with long documents or high-resolution images. In this work, we study the self-attention mechanism by analyzing the distribution of the attention matrix and its concentration ability. 
Furthermore, we propose instruments to measure these quantities and introduce a novel self-attention mechanism, Linear Log-Normal Attention, designed to emulate the distribution and concentration behavior of the original self-attention. Our experimental results on popular natural language benchmarks reveal that our proposed Linear Log-Normal Attention outperforms other linearized attention alternatives, offering a promising avenue for enhancing the scalability of transformer models.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/19414"} +{"video_file": "5o9G4XF1LI_39018415.mp4", "openreview_id": "5o9G4XF1LI", "slideslive_id": 39018415, "venue": "iclr2024", "title": "Goodhart's Law in Reinforcement Learning", "status": "Poster", "keywords": "reinforcement learning;goodhart's law;misspecification;reward learning", "tldr": "We study Goodhart's law in RL empirically, provide a theoretical explanation for why it occurs, and use these theoretical insights to derive two methods for avoiding Goodharting.", "abstract": "Implementing a reward function that perfectly captures a complex task in the real world is impractical. As a result, it is often appropriate to think of the reward function as a proxy for the true objective rather than as its definition. We study this phenomenon through the lens of Goodhart\u2019s law, which predicts that increasing optimisation of an imperfect proxy beyond some critical point decreases performance on the true objective. First, we propose a way to quantify the magnitude of this effect and show empirically that optimising an imperfect proxy reward often leads to the behaviour predicted by Goodhart\u2019s law for a wide range of environments and reward functions. We then provide a geometric explanation for why Goodhart's law occurs in Markov decision processes. We use these theoretical insights to propose an optimal early stopping method that provably avoids the aforementioned pitfall and derive theoretical regret bounds for this method. Moreover, we derive a training method that maximises worst-case reward, for the setting where there is uncertainty about the true reward function. Finally, we evaluate our early stopping method experimentally. Our results support a foundation for a theoretically-principled study of reinforcement learning under reward misspecification.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19413"} +{"video_file": "5sjxMwWmk8_39018413.mp4", "openreview_id": "5sjxMwWmk8", "slideslive_id": 39018413, "venue": "iclr2024", "title": "Robust Angular Synchronization via Directed Graph Neural Networks", "status": "Poster", "keywords": "group synchronization;angular synchronization;neural networks;directed graphs;deep learning;cycle consistency", "tldr": "We propose a neural network framework with novel loss functions to tackle the angular synchronization problem and its extension to k-synchronization.", "abstract": "The angular synchronization problem aims to accurately estimate (up to a constant additive phase) a set of unknown angles \u03b8_1, \u2026, \u03b8_n \u2208 [0, 2\u03c0) from m noisy measurements of their offsets \u03b8_i \u2212 \u03b8_j mod 2\u03c0. Applications include, for example, sensor network localization, phase retrieval, and distributed clock synchronization. 
An extension of the problem to the heterogeneous setting (dubbed k-synchronization) is to estimate k groups of angles simultaneously, given noisy observations (with unknown group assignment) from each group. Existing methods for angular synchronization usually perform poorly in high-noise regimes, which are common in applications. In this paper, we leverage neural networks for the angular synchronization problem, and its heterogeneous extension, by proposing GNNSync, a theoretically-grounded end-to-end trainable framework using directed graph neural networks. In addition, new loss functions are devised to encode synchronization objectives. Experimental results on extensive data sets demonstrate that GNNSync attains competitive, and often superior, performance against a comprehensive set of baselines for the angular synchronization problem and its extension, validating the robustness of GNNSync even at high noise levels.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/19411"} +{"video_file": "64kSvC4iPg_39018408.mp4", "openreview_id": "64kSvC4iPg", "slideslive_id": 39018408, "venue": "iclr2024", "title": "Compressed Context Memory for Online Language Model Interaction", "status": "Poster", "keywords": "context compression;efficient inference;natural language processing;transformer", "tldr": "We propose a compressed context KV memory system for memory-efficient online inference of language models", "abstract": "This paper presents a context key/value compression method for Transformer language models in online scenarios, where the context continually expands. As the context lengthens, the attention process demands increasing memory and computations, which in turn reduces the throughput of the language model. To address this challenge, we propose a compressed context memory system that continually compresses the accumulating attention key/value pairs into a compact memory space, facilitating language model inference in a limited memory space of computing environments. Our compression process involves integrating a lightweight conditional LoRA into the language model's forward pass during inference, without the need for fine-tuning the model's entire set of weights. We achieve efficient training by modeling the recursive compression process as a single parallelized forward computation. Through evaluations on conversation, personalization, and multi-task learning, we demonstrate that our approach achieves the performance level of a full context model with 5\u00d7 smaller context memory size. We further demonstrate the applicability of our approach in a streaming setting with an unlimited context length, outperforming the sliding window approach. Codes are available at https://github.com/snu-mllab/context-memory.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/19404"} +{"video_file": "6CZ50WgfCG_39018405.mp4", "openreview_id": "6CZ50WgfCG", "slideslive_id": 39018405, "venue": "iclr2024", "title": "DrS: Learning Reusable Dense Rewards for Multi-Stage Tasks", "status": "Poster", "keywords": "Reward Learning;Multi-stage Task", "tldr": "We propose DrS, a novel reward learning approach that learns reusable dense rewards for multi-stage tasks.", "abstract": "The success of many RL techniques heavily relies on human-engineered dense rewards, which typically demands substantial domain expertise and extensive trial and error. 
In our work, we propose DrS (Dense reward learning from Stages), a novel approach for learning reusable dense rewards for multi-stage tasks in a data-driven manner. By leveraging the stage structures of the task, DrS learns a high-quality dense reward from sparse rewards and demonstrations if given. The learned rewards can be reused in unseen tasks, thus reducing the human effort for reward engineering. Extensive experiments on three physical robot manipulation task families with 1000+ task variants demonstrate that our learned rewards can be reused in unseen tasks, resulting in improved performance and sample efficiency of RL algorithms. The learned rewards even achieve comparable performance to human-engineered rewards on some tasks. See our project page for more details.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19399"} +{"video_file": "6IjN7oxjXt_39018404.mp4", "openreview_id": "6IjN7oxjXt", "slideslive_id": 39018404, "venue": "iclr2024", "title": "Conserve-Update-Revise to Cure Generalization and Robustness Trade-off in Adversarial Training", "status": "Poster", "keywords": "Adversarial training;Adversarial Robustness;Generalization;Robustness;Robust overfitting;Selective training", "tldr": "A selective adversarial training method that enhances the trade-off between standard and robust generalization while also mitigating robust overfitting.", "abstract": "Adversarial training improves the robustness of neural networks against adversarial attacks, albeit at the expense of the trade-off between standard and robust generalization. To unveil the underlying factors driving this phenomenon, we examine the layer-wise learning capabilities of neural networks during the transition from a standard to an adversarial setting. Our empirical findings demonstrate that selectively updating specific layers while preserving others can substantially enhance the network's learning capacity. We, therefore, propose CURE, a novel training framework that leverages a gradient prominence criterion to perform selective conservation, updating, and revision of weights. Importantly, CURE is designed to be dataset- and architecture-agnostic, ensuring its applicability across various scenarios. It effectively tackles both memorization and overfitting issues, thus enhancing the trade-off between robustness and generalization and additionally, this training approach also aids in mitigating \"robust overfitting\". Furthermore, our study provides valuable insights into the mechanisms of selective adversarial training and offers a promising avenue for future research.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19397"} +{"video_file": "6bcAD6g688_39018399.mp4", "openreview_id": "6bcAD6g688", "slideslive_id": 39018399, "venue": "iclr2024", "title": "Unmasking and Improving Data Credibility: A Study with Datasets for Training Harmless Language Models", "status": "Poster", "keywords": "Label errors;dataset cleaning;AI safety;toxicity;harmless;language models", "tldr": "We provide an opensource tool to find and fix an average of 6.16% label errors in 11 text datasets for training harmless language models.", "abstract": "Language models have shown promise in various tasks but can be affected by undesired data during training, fine-tuning, or alignment. 
For example, if some unsafe conversations are wrongly annotated as safe ones, the model fine-tuned on these samples may be harmful. Therefore, the correctness of annotations, i.e., the credibility of the dataset, is important. This study focuses on the credibility of real-world datasets, including the popular benchmarks Jigsaw Civil Comments, Anthropic Harmless & Red Team, PKU BeaverTails & SafeRLHF, that can be used for training a harmless language model. Given the cost and difficulty of cleaning these datasets by humans, we introduce a systematic framework for evaluating the credibility of datasets, identifying label errors, and evaluating the influence of noisy labels in the curated language data, specifically focusing on unsafe comments and conversation classification. With the framework, we find and fix an average of 6.16% label errors in 11 datasets constructed from the above benchmarks. The data credibility and downstream learning performance can be remarkably improved by directly fixing label errors, indicating the significance of cleaning existing real-world datasets. Code is available at https://github.com/Docta-ai/docta.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/19388"} +{"video_file": "6hvtSLkKeZ_39018397.mp4", "openreview_id": "6hvtSLkKeZ", "slideslive_id": 39018397, "venue": "iclr2024", "title": "Learning to solve Class-Constrained Bin Packing Problems via Encoder-Decoder Model", "status": "Poster", "keywords": "Combinatorial Optimization;Class-Contrained Bin Packing Problems;Graph Convolution Network;Cluster Decode", "tldr": "We introduce a vector BPP variant called Class-Constrained Bin Packing Problem and propose a learning-based Encoder-Decoder Model to solve various kinds of CCBPP with a very small gap from the optimal.", "abstract": "Neural methods have shown significant merit in solving combinatorial optimization (CO) problems, including the Bin Packing Problem (BPP). However, most existing ML-based approaches focus on geometric BPP like 3DBPP, neglecting complex vector BPP. In this study, we introduce a vector BPP variant called Class-Constrained Bin Packing Problem (CCBPP), dealing with items of both classes and sizes, and the objective is to pack the items in the least amount of bins respecting the bin capacity and the number of different classes that it can hold. To enhance the efficiency and practicality of solving CCBPP, we propose a learning-based Encoder-Decoder Model. The Encoder employs a Graph Convolution Network (GCN) to generate a heat-map, representing probabilities of different items packing together. The Decoder decodes and fine-tunes the solution through Cluster Decode and Active Search methods, thereby producing high-quality solutions for CCBPP instances. Extensive experiments demonstrate that our proposed method consistently yields high-quality solutions for various kinds of CCBPP with a very small gap from the optimal. 
Moreover, our Encoder-Decoder Model also shows promising performance on one practical application of CCBPP, the Manufacturing Order Consolidation Problem (OCP).", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/19386"} +{"video_file": "6okaSfANzh_39018789.mp4", "openreview_id": "6okaSfANzh", "slideslive_id": 39018789, "venue": "iclr2024", "title": "Large Language Model Cascades with Mixture of Thought Representations for Cost-Efficient Reasoning", "status": "Poster", "keywords": "Large Language Models;Natural Language Processing;Reasoning", "tldr": "The paper investigates approaches of building LLM cascades for saving the cost of few-shot LLMs in reasoning tasks.", "abstract": "Large language models (LLMs) such as GPT-4 have exhibited remarkable performance in a variety of tasks, but this strong performance often comes with the high expense of using paid API services. In this paper, we are motivated to study building an LLM \"cascade\" to save the cost of using LLMs, particularly for performing (e.g., mathematical, causal) reasoning tasks. Our cascade pipeline follows the intuition that simpler questions can be addressed by a weaker but more affordable LLM, whereas only the most challenging questions necessitate the stronger and more expensive LLM. To realize this decision-making, we consider the \"answer consistency\" of the weaker LLM as a signal of the question difficulty and propose several methods for answer sampling and consistency checking, including one leveraging a mixture of two thought representations (i.e., Chain-of-Thought and Program-of-Thought). Through experiments on six reasoning benchmark datasets, with GPT-3.5-turbo and GPT-4 being the weaker and stronger LLMs, respectively, our cascade pipeline demonstrates comparable performance but reduces about 60% of the cost compared with fully using the stronger LLM.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19383"} +{"video_file": "6tqgL8VluV_39017123.mp4", "openreview_id": "6tqgL8VluV", "slideslive_id": 39017123, "venue": "iclr2024", "title": "Towards Establishing Guaranteed Error for Learned Database Operations", "status": "Poster", "keywords": "Learned Indexing;Learned Cardinality Estimation;Machine learning for Data Management", "tldr": "We present the first known bounds on the model size required when using machine learning to perform indexing, cardinality and range-sum estimation", "abstract": "Machine learning models have demonstrated substantial performance enhancements over non-learned alternatives in various fundamental data management operations, including indexing (locating items in an array), cardinality estimation (estimating the number of matching records in a database), and range-sum estimation (estimating aggregate attribute values for query-matched records). However, real-world systems frequently favor less efficient non-learned methods due to their ability to offer (worst-case) error guarantees \u2014 an aspect where learned approaches often fall short. The primary objective of these guarantees is to ensure system reliability, ensuring that the chosen approach consistently delivers the desired level of accuracy across all databases. 
In this paper, we embark on the first theoretical study of such guarantees for learned methods, presenting the necessary conditions for such guarantees to hold when using machine learning to perform indexing, cardinality estimation and range-sum estimation. Specifically, we present the first known lower bounds on the model size required to achieve the desired accuracy for these three key database operations. Our results bound the required model size for given average and worst-case errors in performing database operations, serving as the first theoretical guidelines governing how model size must change based on data size to be able to guarantee an accuracy level. More broadly, our established guarantees pave the way for the broader adoption and integration of learned models into real-world systems.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/19380"} +{"video_file": "6yv8UHVJn4_39018393.mp4", "openreview_id": "6yv8UHVJn4", "slideslive_id": 39018393, "venue": "iclr2024", "title": "Towards Optimal Regret in Adversarial Linear MDPs with Bandit Feedback", "status": "Spotlight", "keywords": "adversarial MDPs;policy optimization;bandit feedback", "tldr": "We study online reinforcement learning in adversarial linear MDPs, proposing the first rate-optimal inefficient algorithm and an efficient algorithm that significantly improves prior results.", "abstract": "We study online reinforcement learning in linear Markov decision processes with adversarial losses and bandit feedback. We introduce two algorithms that achieve improved regret performance compared to existing approaches. The first algorithm, although computationally inefficient, achieves a regret of \u00d5(\u221aK) without relying on simulators, where K is the number of episodes. This is the first rate-optimal result in the considered setting. The second algorithm is computationally efficient and achieves a regret of \u00d5(K^{3/4}). These results significantly improve over the prior state-of-the-art: a computationally inefficient algorithm by Kong et al. (2023) with \u00d5(K^{4/5} + 1/\u03bb_min) regret, and a computationally efficient algorithm by Sherman et al. (2023b) with \u00d5(K^{6/7}) regret.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19377"} +{"video_file": "776lhoaulC_39018391.mp4", "openreview_id": "776lhoaulC", "slideslive_id": 39018391, "venue": "iclr2024", "title": "Exploring the Common Appearance-Boundary Adaptation for Nighttime Optical Flow", "status": "Spotlight", "keywords": "nighttime optical flow;event camera;domain adaptation;common space", "tldr": "We propose a novel common appearance-boundary adaptation framework to learn an intermediate common space with discriminative feature representations for nighttime optical flow.", "abstract": "We investigate a challenging task of nighttime optical flow, which suffers from weakened texture and amplified noise. These degradations weaken discriminative visual features, thus causing invalid motion feature matching. Typically, existing methods employ domain adaptation to transfer knowledge from auxiliary domain to nighttime domain in either input visual space or output motion space. However, this direct adaptation is ineffective, since there exists a large domain gap due to the intrinsic heterogeneous nature of the feature representations between auxiliary and nighttime domains. 
To overcome this issue, we explore a common-latent space as the intermediate bridge to reinforce the feature alignment between auxiliary and nighttime domains. In this work, we exploit two auxiliary daytime and event domains, and propose a novel common appearance-boundary adaptation framework for nighttime optical flow. In appearance adaptation, we employ the intrinsic image decomposition to embed the auxiliary daytime image and the nighttime image into a reflectance-aligned common space. We discover that motion distributions of the two reflectance maps are very similar, benefiting us to consistently transfer motion appearance knowledge from daytime to nighttime domain. In boundary adaptation, we theoretically derive the motion correlation formula between nighttime image and accumulated events within a spatiotemporal gradient-aligned common space. We figure out that the correlation of the two spatiotemporal gradient maps shares significant discrepancy, benefitting us to contrastively transfer boundary knowledge from event to nighttime domain. Moreover, appearance adaptation and boundary adaptation are complementary to each other, since they could jointly transfer global motion and local boundary knowledge to the nighttime domain. Extensive experiments have been performed to verify the superiority of the proposed method.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/19374"} +{"video_file": "78iGZdqxYY_39017153.mp4", "openreview_id": "78iGZdqxYY", "slideslive_id": 39017153, "venue": "iclr2024", "title": "Mirage: Model-agnostic Graph Distillation for Graph Classification", "status": "Poster", "keywords": "graph distillation;graph classification;frequent pattern mining", "tldr": "An unsupervised and model/hyper-parameter agnostic graph distillation algorithm for graph classification.", "abstract": "GNNs, like other deep learning models, are data and computation hungry. There is a pressing need to scale training of GNNs on large datasets to enable their usage on low-resource environments. Graph distillation is an effort in that direction with the aim to construct a smaller synthetic training set from the original training data without significantly compromising model performance. While initial efforts are promising, this work is motivated by two key observations: (1) Existing graph distillation algorithms themselves rely on training with the full dataset, which undermines the very premise of graph distillation. (2) The distillation process is specific to the target GNN architecture and hyper-parameters and thus not robust to changes in the modeling pipeline. We circumvent these limitations by designing a distillation algorithm called MIRAGE for graph classification. MIRAGE is built on the insight that a message-passing GNN decomposes the input graph into a multiset of computation trees. Furthermore, the frequency distribution of computation trees is often skewed in nature, enabling us to condense this data into a concise distilled summary. By compressing the computation data itself, as opposed to emulating gradient flows on the original training set\u2014a prevalent approach to date\u2014MIRAGE transforms into an unsupervised and architecture-agnostic distillation algorithm. 
Extensive benchmarking on real-world datasets underscores MIRAGE\u2019s superiority, showcasing enhanced generalization accuracy, data compression, and distillation efficiency when compared to state-of-the-art baselines.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/19373"} +{"video_file": "79FVDdfoSR_39017040.mp4", "openreview_id": "79FVDdfoSR", "slideslive_id": 39017040, "venue": "iclr2024", "title": "A Characterization Theorem for Equivariant Networks with Point-wise Activations", "status": "Poster", "keywords": "Geometric Deep Learning;Equivariant Neural Networks;Characterization Theorem;Point-wise Activations", "tldr": "A characterization theorem describing admissible combinations of representations and activation functions for equivariant layers with point-wise activations", "abstract": "Equivariant neural networks have shown improved performance, expressiveness and sample complexity on symmetrical domains. But for some specific symmetries, representations, and choice of coordinates, the most common point-wise activations, such as ReLU, are not equivariant, hence they cannot be employed in the design of equivariant neural networks. The theorem we present in this paper describes all possible combinations of representations, choice of coordinates and point-wise activations to obtain an equivariant layer, generalizing and strengthening existing characterizations. Notable cases of practical relevance are discussed as corollaries. Indeed, we prove that rotation-equivariant networks can only be invariant, as it happens for any network which is equivariant with respect to connected compact groups. Then, we discuss implications of our findings when applied to important instances of equivariant networks. First, we completely characterize permutation equivariant networks such as Invariant Graph Networks with point-wise nonlinearities and their geometric counterparts, highlighting a plethora of models whose expressive power and performance are still unknown. Second, we show that feature spaces of disentangled steerable convolutional neural networks are trivial representations.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/19372"} +{"video_file": "7FeIRqCedv_39018389.mp4", "openreview_id": "7FeIRqCedv", "slideslive_id": 39018389, "venue": "iclr2024", "title": "SLiMe: Segment Like Me", "status": "Poster", "keywords": "one-shot segmentation;computer vision;text-to-image models;stable diffusion;cross attention", "tldr": "a one-shot image segmentation method capable of segmenting at various levels of granularity", "abstract": "Significant strides have been made using large vision-language models, like Stable Diffusion (SD), for a variety of downstream tasks, including image generation, image editing, and 3D shape generation. Inspired by these advancements, we explore leveraging these vision-language models for segmenting images at any desired granularity using as few as one annotated sample. We propose SLiMe, which frames this problem as an optimization task. Specifically, given a single image and its segmentation mask, we first extract our novel \u201cweighted accumulated self-attention map\u201d along with cross-attention map from the SD prior. Then, using these extracted maps, the text embeddings of SD are optimized to highlight the segmented region in these attention maps, which in turn can be used to derive new segmentation results. 
Moreover, leveraging additional training data when available, i.e. few-shot, improves the performance of SLiMe. We performed comprehensive experiments examining various design factors and showed that SLiMe outperforms other existing one-shot and few-shot segmentation methods.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19368"} +{"video_file": "7JfKCZQPxJ_39019104.mp4", "openreview_id": "7JfKCZQPxJ", "slideslive_id": 39019104, "venue": "iclr2024", "title": "STREAM: Spatio-TempoRal Evaluation and Analysis Metric for Video Generative Models", "status": "Poster", "keywords": "Generative Models;Video Generative Models;Evaluation;Fidelity;Diversity;Assessment", "tldr": "Current video models lack robust metrics, focusing too much on spatial aspects. We introduce STREAM, a unique metric evaluating spatial and temporal aspects independently, offering improved insights for developing advanced video generative models.", "abstract": "Image generative models have made significant progress in generating realistic and diverse images, supported by comprehensive guidance from various evaluation metrics. However, current video generative models struggle to generate even short video clips, with limited tools that provide insights for improvements. Current video evaluation metrics are simple adaptations of image metrics by switching the embeddings with video embedding networks, which may underestimate the unique characteristics of video. Our analysis reveals that the widely used Frechet Video Distance (FVD) has a stronger emphasis on the spatial aspect than the temporal naturalness of video and is inherently constrained by the input size of the embedding networks used, limiting it to 16 frames. Additionally, it demonstrates considerable instability and diverges from human evaluations. To address the limitations, we propose STREAM, a new video evaluation metric uniquely designed to independently evaluate spatial and temporal aspects. This feature allows comprehensive analysis and evaluation of video generative models from various perspectives, unconstrained by video length. We provide analytical and experimental evidence demonstrating that STREAM provides an effective evaluation tool for both visual and temporal quality of videos, offering insights into area of improvement for video generative models. To the best of our knowledge, STREAM is the first evaluation metric that can separately assess the temporal and spatial aspects of videos. Our code is available at https://github.com/pro2nit/STREAM.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19367"} +{"video_file": "7Jwpw4qKkb_39017149.mp4", "openreview_id": "7Jwpw4qKkb", "slideslive_id": 39017149, "venue": "iclr2024", "title": "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models", "status": "Poster", "keywords": "Large Language Models;Jailbreak Attack;Adversarial Attack", "tldr": "In this paper, we propose a novel attack against LLMs that can automatically generate stealthy jailbreak prompts with semantic meaningfulness preserved.", "abstract": "The aligned Large Language Models (LLMs) are powerful language understanding and decision-making tools that are created through extensive alignment with human feedback. However, these large models remain susceptible to jailbreak attacks, where adversaries manipulate prompts to elicit malicious outputs that should not be given by aligned LLMs. 
Investigating jailbreak prompts can lead us to delve into the limitations of LLMs and further guide us to secure them. Unfortunately, existing jailbreak techniques suffer from either (1) scalability issues, where attacks heavily rely on manual crafting of prompts, or (2) stealthiness problems, as attacks depend on token-based algorithms to generate prompts that are often semantically meaningless, making them susceptible to detection through basic perplexity testing. In light of these challenges, we intend to answer this question: Can we develop an approach that can automatically generate stealthy jailbreak prompts? In this paper, we introduce AutoDAN, a novel jailbreak attack against aligned LLMs. AutoDAN can automatically generate stealthy jailbreak prompts by the carefully designed hierarchical genetic algorithm. Extensive evaluations demonstrate that AutoDAN not only automates the process while preserving semantic meaningfulness, but also demonstrates superior attack strength in cross-model transferability, and cross-sample universality compared with the baseline. Moreover, we also compare AutoDAN with perplexity-based defense methods and show that AutoDAN can bypass them effectively. Code is available at https://github.com/SheltonLiu-N/AutoDAN.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/19366"} +{"video_file": "7NzgkEdGyr_39018970.mp4", "openreview_id": "7NzgkEdGyr", "slideslive_id": 39018970, "venue": "iclr2024", "title": "Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization", "status": "Poster", "keywords": "Parameter-efficient finetuning;orthogonal;Butterfly matrix", "tldr": "A simple yet effective finetuning method for the adaptation of foundation models", "abstract": "Large foundation models are becoming ubiquitous, but training them from scratch is prohibitively expensive. Thus, efficiently adapting these powerful models to downstream tasks is increasingly important. In this paper, we study a principled finetuning paradigm -- Orthogonal Finetuning (OFT) -- for downstream task adaptation. Despite demonstrating good generalizability, OFT still uses a fairly large number of trainable parameters due to the high dimensionality of orthogonal matrices. To address this, we start by examining OFT from an information transmission perspective, and then identify a few key desiderata that enable better parameter-efficiency. Inspired by how the Cooley-Tukey fast Fourier transform algorithm enables efficient information transmission, we propose an efficient orthogonal parameterization using butterfly structures. We apply this parameterization to OFT, creating a novel parameter-efficient finetuning method, called Orthogonal Butterfly (BOFT). By subsuming OFT as a special case, BOFT introduces a generalized orthogonal finetuning framework. Finally, we conduct an extensive empirical study of adapting large vision transformers, large language models, and text-to-image diffusion models to various downstream tasks in computer vision and natural language. 
The results validate the effectiveness of BOFT as a generic finetuning method.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/19363"} +{"video_file": "7TOs9gjAg1_39018385.mp4", "openreview_id": "7TOs9gjAg1", "slideslive_id": 39018385, "venue": "iclr2024", "title": "Removing Biases from Molecular Representations via Information Maximization", "status": "Poster", "keywords": "Molecular Representation;Batch Effect;Contrastive Learning;Information Maximization;Drug Discovery", "tldr": "We propose InfoCORE to deal with batch effects in drug screens and obtain refined molecular representations. It is established on a variational lower bound of the conditional mutual information between latent representations given a batch identifier.", "abstract": "High-throughput drug screening -- using cell imaging or gene expression measurements as readouts of drug effect -- is a critical tool in biotechnology to assess and understand the relationship between the chemical structure and biological activity of a drug. Since large-scale screens have to be divided into multiple experiments, a key difficulty is dealing with batch effects, which can introduce systematic errors and non-biological associations in the data. We propose InfoCORE, an Information maximization approach for COnfounder REmoval, to effectively deal with batch effects and obtain refined molecular representations. InfoCORE establishes a variational lower bound on the conditional mutual information of the latent representations given a batch identifier. It adaptively reweights samples to equalize their implied batch distribution. Extensive experiments on drug screening data reveal InfoCORE's superior performance in a multitude of tasks including molecular property prediction and molecule-phenotype retrieval. Additionally, we show results for how InfoCORE offers a versatile framework and resolves general distribution shifts and issues of data fairness by minimizing correlation with spurious features or removing sensitive attributes.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/19360"} +{"video_file": "7VPTUWkiDQ_39018383.mp4", "openreview_id": "7VPTUWkiDQ", "slideslive_id": 39018383, "venue": "iclr2024", "title": "Provable Compositional Generalization for Object-Centric Learning", "status": "Oral", "keywords": "compositional generalization;identifiability;object-centric learning;generalization;OOD generalization;unsupervised learning;slot attention;disentanglement;autoencoders;representation learning", "tldr": "We show theoretical conditions under which compositional generalization is guaranteed for object-centric representation learning.", "abstract": "Learning representations that generalize to novel compositions of known concepts is crucial for bridging the gap between human and machine perception. One prominent effort is learning object-centric representations, which are widely conjectured to enable compositional generalization. Yet, it remains unclear when this conjecture will be true, as a principled theoretical or empirical understanding of compositional generalization is lacking. In this work, we investigate when compositional generalization is guaranteed for object-centric representations through the lens of identifiability theory. 
We show that autoencoders that satisfy structural assumptions on the decoder and enforce encoder-decoder consistency will learn object-centric representations that provably generalize compositionally. We validate our theoretical result and highlight the practical relevance of our assumptions through experiments on synthetic image data.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19357"} +{"video_file": "7W3GLNImfS_39018382.mp4", "openreview_id": "7W3GLNImfS", "slideslive_id": 39018382, "venue": "iclr2024", "title": "Human Feedback is not Gold Standard", "status": "Poster", "keywords": "human evaluation;large language models;evaluation;natural language generation", "tldr": "We critically analyse the use of human feedback for evaluating and training Large Language Models, finding that human preference scores under-represent some crucial error types, and are biased by the assertiveness of the output.", "abstract": "Human feedback has become the de facto standard for evaluating the performance of Large Language Models, and is increasingly being used as a training objective. However, it is not clear which properties of a generated output this single `preference' score captures. We hypothesise that preference scores are subjective and open to undesirable biases. We critically analyse the use of human feedback for both training and evaluation, to verify whether it fully captures a range of crucial error criteria. We find that while preference scores have fairly good coverage, they under-represent important aspects like factuality. We further hypothesise that both preference scores and error annotation may be affected by confounders, and leverage instruction-tuned models to generate outputs that vary along two possible confounding dimensions: assertiveness and complexity. We find that the assertiveness of an output skews the perceived rate of factuality errors, indicating that human annotations are not a fully reliable evaluation metric or training objective. Finally, we offer preliminary evidence that using human feedback as a training objective disproportionately increases the assertiveness of model outputs. We encourage future work to carefully consider whether preference scores are well aligned with the desired objective.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19356"} +{"video_file": "7gLfQT52Nn_39018925.mp4", "openreview_id": "7gLfQT52Nn", "slideslive_id": 39018925, "venue": "iclr2024", "title": "Proper Laplacian Representation Learning", "status": "Poster", "keywords": "Reinforcement learning;Graph Laplacian;Representation learning;Augmented Lagrangian optimization;Hyperparameter robustness", "tldr": "We propose a theoretically-sound method to learn the Laplacian representation with deep neural networks, addressing limitations of all the previous methods in the literature.", "abstract": "The ability to learn good representations of states is essential for solving large reinforcement learning problems, where exploration, generalization, and transfer are particularly challenging. The Laplacian representation is a promising approach to address these problems by inducing informative state encoding and intrinsic rewards for temporally-extended action discovery and reward shaping. 
To obtain the Laplacian representation one needs to compute the eigensystem of the graph Laplacian, which is often approximated through optimization objectives compatible with deep learning approaches. These approximations, however, depend on hyperparameters that are impossible to tune efficiently, converge to arbitrary rotations of the desired eigenvectors, and are unable to accurately recover the corresponding eigenvalues. In this paper we introduce a theoretically sound objective and corresponding optimization algorithm for approximating the Laplacian representation. Our approach naturally recovers both the true eigenvectors and eigenvalues while eliminating the hyperparameter dependence of previous approximations. We provide theoretical guarantees for our method and we show that those results translate empirically into robust learning across multiple environments.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19350"} +{"video_file": "7gUrYE50Rb_39018816.mp4", "openreview_id": "7gUrYE50Rb", "slideslive_id": 39018816, "venue": "iclr2024", "title": "EQA-MX: Embodied Question Answering using Multimodal Expression", "status": "Spotlight", "keywords": "multimodal representation learning;visual-language models;embodied question answering", "tldr": "We present EQA-MX, a dataset and benchmark tasks for embodied multimodal QA, and VQ-Fusion model, enhancing visual-language alignment that outperforming existing models by 13%.", "abstract": "Humans predominantly use verbal utterances and nonverbal gestures (e.g., eye gaze and pointing gestures) in their natural interactions. For instance, pointing gestures and verbal information is often required to comprehend questions such as \"what object is that?\" Thus, this question-answering (QA) task involves complex reasoning of multimodal expressions (verbal utterances and nonverbal gestures). However, prior works have explored QA tasks in non-embodied settings, where questions solely contain verbal utterances from a single verbal and visual perspective. In this paper, we have introduced 8 novel embodied question answering (EQA) tasks to develop learning models to comprehend embodied questions with multimodal expressions. We have developed a novel large-scale dataset, EQA-MX, with over 8 million diverse embodied QA data samples involving multimodal expressions from multiple visual and verbal perspectives. To learn salient multimodal representations from discrete verbal embeddings and continuous wrapping of multiview visual representations, we propose a vector-quantization (VQ) based multimodal representation learning model, VQ-Fusion, for the EQA tasks. Our extensive experimental results suggest that VQ-Fusion can improve the performance of existing state-of-the-art visual-language models up to 13% across EQA tasks.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/19349"} +{"video_file": "7hxoYxKDTV_39018888.mp4", "openreview_id": "7hxoYxKDTV", "slideslive_id": 39018888, "venue": "iclr2024", "title": "Continuous-Multiple Image Outpainting in One-Step via Positional Query and A Diffusion-based Approach", "status": "Poster", "keywords": "Diffusion models;image outpainting", "tldr": "We propose a new method to outpaint images with arbitrary multiples in one step.", "abstract": "Image outpainting aims to generate the content of an input sub-image beyond its original boundaries. It is an important task in content generation yet remains an open problem for generative models. 
This paper pushes the technical frontier of image outpainting in two directions that have not been resolved in literature: 1) outpainting with arbitrary and continuous multiples (without restriction), and 2) outpainting in a single step (even for large expansion multiples). Moreover, we develop a method that does not depend on a pre-trained backbone network, which is in contrast commonly required by the previous SOTA outpainting methods. The arbitrary multiple outpainting is achieved by utilizing randomly cropped views from the same image during training to capture arbitrary relative positional information. Specifically, by feeding one view and positional embeddings as queries, we can reconstruct another view. At inference, we generate images with arbitrary expansion multiples by inputting an anchor image and its corresponding positional embeddings. The one-step outpainting ability here is particularly noteworthy in contrast to previous methods that need to be performed for N times to obtain a final multiple which is N times of its basic and fixed multiple. We evaluate the proposed approach (called PQDiff as we adopt a diffusion-based generator as our embodiment, under our proposed \\textbf{P}ositional \\textbf{Q}uery scheme) on public benchmarks, demonstrating its superior performance over state-of-the-art approaches. Specifically, PQDiff achieves state-of-the-art FID scores on the Scenery (\\textbf{21.512}), Building Facades (\\textbf{25.310}), and WikiArts (\\textbf{36.212}) datasets. Furthermore, under the 2.25x, 5x and 11.7x outpainting settings, PQDiff only takes \\textbf{40.6%}, \\textbf{20.3%} and \\textbf{10.2%} of the time of the benchmark state-of-the-art (SOTA) method.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19348"} +{"video_file": "7oLshfEIC2_39019019.mp4", "openreview_id": "7oLshfEIC2", "slideslive_id": 39019019, "venue": "iclr2024", "title": "TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting", "status": "Poster", "keywords": "Time Series Forecasting;Mixing Networks", "tldr": "TimeMixer, as a fully MLP-based architecture, taking full advantage of disentangled multiscale time series, is proposed to achieve consistent SOTA performances in both long and short-term forecasting tasks with favorable run-time efficiency.", "abstract": "Time series forecasting is widely used in extensive applications, such as traffic planning and weather forecasting. However, real-world time series usually present intricate temporal variations, making forecasting extremely challenging. Going beyond the mainstream paradigms of plain decomposition and multiperiodicity analysis, we analyze temporal variations in a novel view of multiscale-mixing, where time series present distinct patterns in different sampling scales. Specifically, the microscopic and the macroscopic information are reflected in fine and coarse scales, respectively, and thereby complex variations are inherently disentangled. Based on this observation, we propose TimeMixer as a fully MLP-based architecture with Past-Decomposable-Mixing (PDM) and Future-Multipredictor-Mixing (FMM) blocks to take full advantage of disentangled multiscale series in both past extraction and future prediction phases. Concretely, PDM applies the decomposition to multiscale series and further mixes the decomposed seasonal and trend components in fine-to-coarse and coarse-to-fine directions separately, which successively aggregates the microscopic seasonal and macroscopic trend information. 
FMM further ensembles multiple predictors to utilize complementary forecasting capabilities in multiscale observations. Consequently, our proposed TimeMixer is able to achieve consistent state-of-the-art performances in both long-term and short-term forecasting tasks with favorable run-time efficiency.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19347"} +{"video_file": "7zY781bMDO_39018375.mp4", "openreview_id": "7zY781bMDO", "slideslive_id": 39018375, "venue": "iclr2024", "title": "Free from Bellman Completeness: Trajectory Stitching via Model-based Return-conditioned Supervised Learning", "status": "Poster", "keywords": "Offline Reinforcement Learning;Return-Conditioned Supervised Learning;Bellman Completeness;Trajectory Stitching", "tldr": "We analyze the advantage of return-conditioned supervised learning in near-deterministic environments and improve return-conditioned supervised learning to enable trajectory stitching.", "abstract": "Off-policy dynamic programming (DP) techniques such as Q-learning have proven to be important in sequential decision-making problems. In the presence of function approximation, however, these techniques often diverge due to the absence of Bellman completeness in the function classes considered, a crucial condition for the success of DP-based methods. In this paper, we show how off-policy learning techniques based on return-conditioned supervised learning (RCSL) are able to circumvent these challenges of Bellman completeness, converging under significantly more relaxed assumptions inherited from supervised learning. We prove there exists a natural environment in which if one uses two-layer multilayer perceptron as the function approximator, the layer width needs to grow linearly with the state space size to satisfy Bellman completeness while a constant layer width is enough for RCSL. These findings take a step towards explaining the superior empirical performance of RCSL methods compared to DP-based methods in environments with near-optimal datasets. Furthermore, in order to learn from sub-optimal datasets, we propose a simple framework called MBRCSL, granting RCSL methods the ability of dynamic programming to stitch together segments from distinct trajectories. MBRCSL leverages learned dynamics models and forward sampling to accomplish trajectory stitching while avoiding the need for Bellman completeness that plagues all dynamic programming algorithms. We propose both theoretical analysis and experimental evaluation to back these claims, outperforming state-of-the-art model-free and model-based offline RL algorithms across several simulated robotics problems.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19343"} +{"video_file": "89A5c6enfc_39018373.mp4", "openreview_id": "89A5c6enfc", "slideslive_id": 39018373, "venue": "iclr2024", "title": "Local Graph Clustering with Noisy Labels", "status": "Poster", "keywords": "local graph clustering;graph diffusion;attributed graphs;noisy labels", "tldr": "We provide a simple yet highly effective way to perform local clustering in attributed graphs while utilizing node labels, without the need to access the entire graph.", "abstract": "The growing interest in machine learning problems over graphs with additional node information such as texts, images, or labels has popularized methods that require the costly operation of processing the entire graph. 
Yet, little effort has been made to the development of fast local methods (i.e. without accessing the entire graph) that extract useful information from such data. To that end, we propose a study of local graph clustering using noisy node labels as a proxy for additional node information. In this setting, nodes receive initial binary labels based on cluster affiliation: 1 if they belong to the target cluster and 0 otherwise. Subsequently, a fraction of these labels is flipped. We investigate the benefits of incorporating noisy labels for local graph clustering. By constructing a weighted graph with such labels, we study the performance of graph diffusion-based local clustering method on both the original and the weighted graphs. From a theoretical perspective, we consider recovering an unknown target cluster with a single seed node in a random graph with independent noisy node labels. We provide sufficient conditions on the label noise under which, with high probability, using diffusion in the weighted graph yields a more accurate recovery of the target cluster. This approach proves more effective than using the given labels alone or using diffusion in the label-free original graph. Empirically, we show that reliable node labels can be obtained with just a few samples from an attributed graph. Moreover, utilizing these labels via diffusion in the weighted graph leads to significantly better local clustering performance across several real-world datasets, improving F1 scores by up to 13%.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/19337"} +{"video_file": "8BAkNCqpGW_39019007.mp4", "openreview_id": "8BAkNCqpGW", "slideslive_id": 39019007, "venue": "iclr2024", "title": "A Policy Gradient Method for Confounded POMDPs", "status": "Poster", "keywords": "Offline Reinforcement Learning;Confounded POMDP;Policy Gradient;Statistical Guarantee;Function Approximation", "tldr": "We propose a policy gradient method for Confounded POMDPs with theoretical guarantees.", "abstract": "In this paper, we propose a policy gradient method for confounded partially observable Markov decision processes (POMDPs) with continuous state and observation spaces in the offline setting. We first establish a novel identification result to non-parametrically estimate any history-dependent policy gradient under POMDPs using the offline data. The identification enables us to solve a sequence of conditional moment restrictions and adopt the min-max learning procedure with general function approximation for estimating the policy gradient. We then provide a finite-sample non-asymptotic bound for estimating the gradient uniformly over a pre-specified policy class in terms of the sample size, length of horizon, concentratability coefficient and the measure of ill-posedness in solving the conditional moment restrictions. Lastly, by deploying the proposed gradient estimation in the gradient ascent algorithm, we show the global convergence of the proposed algorithm in finding the history-dependent optimal policy under some technical conditions. 
To the best of our knowledge, this is the first work studying the policy gradient method for POMDPs under the offline setting.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/19336"} +{"video_file": "8HCARN2hhw_39019184.mp4", "openreview_id": "8HCARN2hhw", "slideslive_id": 39019184, "venue": "iclr2024", "title": "Learning with a Mole: Transferable latent spatial representations for navigation without reconstruction", "status": "Poster", "keywords": "Navigation;Embodied AI;Perception", "tldr": "Instead of learning to reconstruct, we cast the robotic perception task as a navigation task by a blind auxiliary agent generating a learning signal for the main agent", "abstract": "Agents navigating in 3D environments require some form of memory, which should hold a compact and actionable representation of the history of observations useful for decision taking and planning. In most end-to-end learning approaches the representation is latent and usually does not have a clearly defined interpretation, whereas classical robotics addresses this with scene reconstruction resulting in some form of map, usually estimated with geometry and sensor models and/or learning. In this work we propose to learn an actionable representation of the scene independently of the targeted downstream task and without explicitly optimizing reconstruction. The learned representation is optimized by a blind auxiliary agent trained to navigate with it on multiple short sub episodes branching out from a waypoint and, most importantly, without any direct visual observation. We argue and show that the blindness property is important and forces the (trained) latent representation to be the only means for planning. With probing experiments we show that the learned representation optimizes navigability and not reconstruction. On downstream tasks we show that it is robust to changes in distribution, in particular the sim2real gap, which we evaluate with a real physical robot in a real office building, significantly improving performance.", "primary_area": "applications to robotics, autonomy, planning", "site": "https://iclr.cc/virtual/2024/poster/19332"} +{"video_file": "8VPWfqtQMX_39018365.mp4", "openreview_id": "8VPWfqtQMX", "slideslive_id": 39018365, "venue": "iclr2024", "title": "Context is Environment", "status": "Poster", "keywords": "Domain Generalization; In-Context Learning", "tldr": "Researchers in domain generalization should consider environment as context, and harness the adaptive power of in-context learning. Researchers in LLMs should consider context as environment, to better structure data towards generalization.", "abstract": "Two lines of work are taking the central stage in AI research. On the one hand, the community is making increasing efforts to build models that discard spurious correlations and generalize better in novel test environments. Unfortunately, the hard lesson so far is that no proposal convincingly outperforms a simple empirical risk minimization baseline. On the other hand, large language models (LLMs) have erupted as algorithms able to learn in-context, generalizing on-the-fly to eclectic contextual circumstances that users enforce by means of prompting. In this paper, we argue that context is environment, and posit that in-context learning holds the key to better domain generalization. 
Via extensive theory and experiments, we show that paying attention to context\n\u2013\n\u2013\nunlabeled examples as they arrive\n\u2013\n\u2013\nallows our proposed In-Context Risk Minimization (ICRM) algorithm to zoom-in on the test environment risk minimizer, leading to significant out-of-distribution performance improvements. Furthermore, training with context helps the model learn a better featurizer. From all of this, two messages are worth taking home. Researchers in domain generalization should consider environment as context, and harness the adaptive power of in-context learning. Researchers in LLMs should consider context as environment, to better structure data towards generalization. Code is available at https://github.com/facebookresearch/ICRM.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/19324"} +{"video_file": "8nxy1bQWTG_39018627.mp4", "openreview_id": "8nxy1bQWTG", "slideslive_id": 39018627, "venue": "iclr2024", "title": "DiffEnc: Variational Diffusion with a Learned Encoder", "status": "Poster", "keywords": "DDPM;diffusion;image generation;encoder", "tldr": "Adding a learned time dependent encoder to a diffusion model can improve the likelihood bound on image generation", "abstract": "Diffusion models may be viewed as hierarchical variational autoencoders (VAEs) with two improvements: parameter sharing for the conditionals in the generative process and efficient computation of the loss as independent terms over the hierarchy. We consider two changes to the diffusion model that retain these advantages while adding flexibility to the model. Firstly, we introduce a data and depth-dependent mean function in the diffusion process, which leads to a modified diffusion loss. Our proposed framework, DiffEnc, achieves a statistically significant improvement in likelihood on CIFAR-10. Secondly, we let the ratio of the noise variance of the reverse encoder process and the generative process be a free weight parameter rather than being fixed to one. This leads to theoretical insights: For a finite depth hierarchy, the evidence lower bound (ELBO) can be used as an objective for a weighted diffusion loss approach and for optimizing the noise schedule specifically for inference. For the infinite-depth hierarchy, on the other hand, the weight parameter has to be one to have a well-defined ELBO.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19315"} +{"video_file": "8sKcAWOf2D_39018358.mp4", "openreview_id": "8sKcAWOf2D", "slideslive_id": 39018358, "venue": "iclr2024", "title": "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking", "status": "Poster", "keywords": "Mechanistic Interpretability;Fine-Tuning;Entity Tracking;Mechanisms", "tldr": "We study how fine-tuning affects the internal mechanisms implemented in language models.", "abstract": "Fine-tuning on generalized tasks such as instruction following, code generation, and mathematics has been shown to enhance language models' performance on a range of tasks. Nevertheless, explanations of how such fine-tuning influences the internal computations in these models remain elusive. We study how fine-tuning affects the internal mechanisms implemented in language models. As a case study, we explore the property of entity tracking, a crucial facet of language comprehension, where models fine-tuned on mathematics have substantial performance gains. 
We identify a mechanism that enables entity tracking and show that (i) both the original model and its fine-tuned version implement entity tracking with the same circuit. In fact, the entity tracking circuit of the fine-tuned version performs better than the full original model. (ii) The circuits of all the models implement roughly the same functionality, that is entity tracking is performed by tracking the position of the correct entity in both the original model and its fine-tuned version. (iii) Performance boost in the fine-tuned model is primarily attributed to its improved ability to handle positional information. To uncover these findings, we employ two methods: DCM, which automatically detects model components responsible for specific semantics, and CMAP, a new approach for patching activations across models to reveal improved mechanisms. Our findings suggest that fine-tuning enhances, rather than fundamentally alters, the mechanistic operation of the model.", "primary_area": "visualization or interpretation of learned representations", "site": "https://iclr.cc/virtual/2024/poster/19313"} +{"video_file": "92btneN9Wm_39018355.mp4", "openreview_id": "92btneN9Wm", "slideslive_id": 39018355, "venue": "iclr2024", "title": "SPDER: Semiperiodic Damping-Enabled Object Representation", "status": "Poster", "keywords": "Implicit neural representations;spectral bias;computer vision;neural network architectures;activations;image representation;edge detection", "tldr": "A new activation function that we claim overcomes spectral bias in neural networks and be used to represent images, audio, video, etc.", "abstract": "We present a neural network architecture designed to naturally learn a positional embedding and overcome the spectral bias towards lower frequencies faced by conventional implicit neural representation networks. Our proposed architecture, SPDER, is a simple MLP that uses an activation function composed of a sinusoidal multiplied by a sublinear function, called the damping function. The sinusoidal enables the network to automatically learn the positional embedding of an input coordinate while the damping passes on the actual coordinate value by preventing it from being projected down to within a finite range of values. Our results indicate that SPDERs speed up training by 10 times and converge to losses 1,500 to 50,000 times lower than that of the state-of-the-art for image representation. SPDER is also state-of-the-art in audio representation. The superior representation capability allows SPDER to also excel on multiple downstream tasks such as image super-resolution and video frame interpolation. We provide intuition as to why SPDER significantly improves fitting compared to that of other INR methods while requiring no hyperparameter tuning or preprocessing. 
See code at https://github.com/katop1234/SPDER.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19307"}
+{"video_file": "9DXXMXnIGm_39018352.mp4", "openreview_id": "9DXXMXnIGm", "slideslive_id": 39018352, "venue": "iclr2024", "title": "Elucidating the design space of classifier-guided diffusion generation", "status": "Poster", "keywords": "conditional diffusion sampling;classifier guidance", "tldr": "Through a comprehensive investigation into the design space of classifier guidance in diffusion generation, we achieved significant improvements over existing guidance schemes by leveraging off-the-shelf classifiers in a training-free fashion.", "abstract": "Guidance in conditional diffusion generation is of great importance for sample quality and controllability. However, existing guidance schemes are to be desired. On one hand, mainstream methods such as classifier guidance and classifier-free guidance both require extra training with labeled data, which is time-consuming and unable to adapt to new conditions. On the other hand, training-free methods such as universal guidance, though more flexible, have yet to demonstrate comparable performance. In this work, through a comprehensive investigation into the design space, we show that it is possible to achieve significant performance improvements over existing guidance schemes by leveraging off-the-shelf classifiers in a training-free fashion, enjoying the best of both worlds. Employing calibration as a general guideline, we propose several pre-conditioning techniques to better exploit pretrained off-the-shelf classifiers for guiding diffusion generation. Extensive experiments on ImageNet validate our proposed method, showing that state-of-the-art (SOTA) diffusion models (DDPM, EDM, DiT) can be further improved (up to 20%) using off-the-shelf classifiers with barely any extra computational cost. With the proliferation of publicly available pretrained classifiers, our proposed approach has great potential and can be readily scaled up to text-to-image generation tasks.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19302"}
+{"video_file": "9RIbNmx984_39017173.mp4", "openreview_id": "9RIbNmx984", "slideslive_id": 39017173, "venue": "iclr2024", "title": "On Double Descent in Reinforcement Learning with LSTD and Random Features", "status": "Poster", "keywords": "Regularized Least-Square Temporal Difference;double descent;over-parameterization;random features", "tldr": "We show a performance drop of TD algorithms when the ratio of network parameters to visited states is around one, leading to a double descent phenomenon, and that this can be mitigated with increased l2-regularization or by visiting all states.", "abstract": "Temporal Difference (TD) algorithms are widely used in Deep Reinforcement Learning (RL). Their performance is heavily influenced by the size of the neural network. While in supervised learning, the regime of over-parameterization and its benefits are well understood, the situation in RL is much less clear. In this paper, we present a theoretical analysis of the influence of network size and l2-regularization on performance. We identify the ratio between the number of parameters and the number of visited states as a crucial factor and define over-parameterization as the regime when it is larger than one.
Furthermore, we observe a double descent phenomenon, i.e., a sudden drop in performance around the parameter/state ratio of one. Leveraging random features and the lazy training regime, we study the regularized Least-Square Temporal Difference (LSTD) algorithm in an asymptotic regime, as both the number of parameters and states go to infinity, maintaining a constant ratio. We derive deterministic limits of both the empirical and the true Mean-Squared Bellman Error (MSBE) that feature correction terms responsible for the double descent. Correction terms vanish when the l2-regularization is increased or the number of unvisited states goes to zero. Numerical experiments with synthetic and small real-world environments closely match the theoretical predictions.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19297"}
+{"video_file": "9bmTbVaA2A_39018724.mp4", "openreview_id": "9bmTbVaA2A", "slideslive_id": 39018724, "venue": "iclr2024", "title": "Bootstrapping Variational Information Pursuit with Large Language and Vision Models for Interpretable Image Classification", "status": "Poster", "keywords": "Interpretable ML;Explainable AI;Information Pursuit;Large Language Models;Large Multimodal Models;Vision Language Models", "tldr": "Extending the Variational Information Pursuit framework by annotating data with Large Language and Multimodal Models", "abstract": "Variational Information Pursuit (V-IP) is an interpretable-by-design framework that makes predictions by sequentially selecting a short chain of user-defined, interpretable queries about the data that are most informative for the task. The prediction is based solely on the obtained query answers, which also serve as a faithful explanation for the prediction. Applying the framework to any task requires (i) specification of a query set, and (ii) densely annotated data with query answers to train classifiers to answer queries at test time. This limits V-IP's application to small-scale tasks where manual data annotation is feasible. In this work, we focus on image classification tasks and propose to relieve this bottleneck by leveraging pretrained language and vision models. Specifically, following recent work, we propose to use GPT, a Large Language Model, to propose semantic concepts as queries for a given classification task. To answer these queries, we propose a light-weight Concept Question-Answering network (Concept-QA) which learns to answer binary queries about semantic concepts in images. We design pseudo-labels to train our Concept-QA model using GPT and CLIP (a Vision-Language Model). Empirically, we find our Concept-QA model to be competitive with state-of-the-art VQA models in terms of answering accuracy but with an order of magnitude fewer parameters. This allows for seamless integration of Concept-QA into the V-IP framework as a fast-answering mechanism. We name this method Concept-QA+V-IP. Finally, we show on several datasets that Concept-QA+V-IP produces shorter, interpretable query chains which are more accurate than V-IP trained with CLIP-based answering systems.
Code available at https://github.com/adityac94/conceptqa_vip.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/19290"} +{"video_file": "9j1RD9LlWH_39018973.mp4", "openreview_id": "9j1RD9LlWH", "slideslive_id": 39018973, "venue": "iclr2024", "title": "Bayesian Optimization through Gaussian Cox Process Models for Spatio-temporal Data", "status": "Poster", "keywords": "Bayesian optimization;Gaussian Cox process", "tldr": "Bayesian optimization method on Gaussian Cox process stochastic models for spatial temporal data analysis.", "abstract": "Bayesian optimization (BO) has established itself as a leading strategy for efficiently optimizing expensive-to-evaluate functions. Existing BO methods mostly rely on Gaussian process (GP) surrogate models and are not applicable to (doubly-stochastic) Gaussian Cox processes, where the observation process is modulated by a latent intensity function modeled as a GP. In this paper, we propose a novel maximum a posteriori inference of Gaussian Cox processes. It leverages the Laplace approximation and change of kernel technique to transform the problem into a new reproducing kernel Hilbert space, where it becomes more tractable computationally. It enables us to obtain both a functional posterior of the latent intensity function and the covariance of the posterior, thus extending existing works that often focus on specific link functions or estimating the posterior mean. Using the result, we propose a BO framework based on the Gaussian Cox process model and further develop a Nystr\u00f6m approximation for efficient computation. Extensive evaluations on various synthetic and real-world datasets demonstrate significant improvement over state-of-the-art inference solutions for Gaussian Cox processes, as well as effective BO with a wide range of acquisition functions designed through the underlying Gaussian Cox process model.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/19287"} +{"video_file": "9kG7TwgLYu_39018711.mp4", "openreview_id": "9kG7TwgLYu", "slideslive_id": 39018711, "venue": "iclr2024", "title": "Time Fairness in Online Knapsack Problems", "status": "Poster", "keywords": "fairness;online knapsack;learning-augmented algorithm;Pareto-optimality;robustness;consistency", "tldr": "We present time fairness in the online knapsack problem, show optimal fairness-competitiveness trade-off, and give a learning-augmented algorithm which improves both fairness and performance.", "abstract": "The online knapsack problem is a classic problem in the field of online algorithms. Its canonical version asks how to pack items of different values and weights arriving online into a capacity-limited knapsack so as to maximize the total value of the admitted items. Although optimal competitive algorithms are known for this problem, they may be fundamentally unfair, i.e., individual items may be treated inequitably in different ways. We formalize a practically-relevant notion of time fairness which effectively models a trade off between static and dynamic pricing in a motivating application such as cloud resource allocation, and show that existing algorithms perform poorly under this metric. We propose a parameterized deterministic algorithm where the parameter precisely captures the Pareto-optimal trade-off between fairness (static pricing) and competitiveness (dynamic pricing). 
We show that randomization is theoretically powerful enough to be simultaneously competitive and fair; however, it does not work well in experiments. To further improve the trade-off between fairness and competitiveness, we develop a nearly-optimal learning-augmented algorithm which is fair, consistent, and robust (competitive), showing substantial performance improvements in numerical experiments.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/19285"} +{"video_file": "9m02ib92Wz_39018344.mp4", "openreview_id": "9m02ib92Wz", "slideslive_id": 39018344, "venue": "iclr2024", "title": "DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models", "status": "Poster", "keywords": "Influence function;Data valuation", "tldr": "We propose DataInf, an efficient influence calculation method that can be easily applied to LLMs and diffusion models.", "abstract": "Quantifying the impact of training data points is crucial for understanding the outputs of machine learning models and for improving the transparency of the AI pipeline. The influence function is a principled and popular data attribution method, but its computational cost often makes it challenging to use. This issue becomes more pronounced in the setting of large language models and text-to-image models. In this work, we propose DataInf, an efficient influence approximation method that is practical for large-scale generative AI models. Leveraging an easy-to-compute closed-form expression, DataInf outperforms existing influence computation algorithms in terms of computational and memory efficiency. Our theoretical analysis shows that DataInf is particularly well-suited for parameter-efficient fine-tuning techniques such as LoRA. Through systematic empirical evaluations, we show that DataInf accurately approximates influence scores and is orders of magnitude faster than existing methods. In applications to RoBERTa-large, Llama-2-13B-chat, and stable-diffusion-v1.5 models, DataInf effectively identifies the most influential fine-tuning examples better than other approximate influence scores. Moreover, it can help to identify which data points are mislabeled.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/19284"} +{"video_file": "9nsNyN0vox_39018343.mp4", "openreview_id": "9nsNyN0vox", "slideslive_id": 39018343, "venue": "iclr2024", "title": "Mastering Symbolic Operations: Augmenting Language Models with Compiled Neural Networks", "status": "Poster", "keywords": "Language Models;Compiled Neural Networks;Neural Comprehension;Symbolic Operations;Length Generalization", "tldr": "We have enabled language models to more fundamental comprehension of the rule, to achieve completely absolute accuracy in symbolic operations without additional tools.", "abstract": "Language models' (LMs) proficiency in handling deterministic symbolic reasoning and rule-based tasks remains limited due to their dependency implicit learning on textual data. To endow LMs with genuine rule comprehension abilities, we propose \"Neural Comprehension\" - a framework that synergistically integrates compiled neural networks (CoNNs) into the standard transformer architecture. CoNNs are neural modules designed to explicitly encode rules through artificially generated attention weights. By incorporating CoNN modules, the Neural Comprehension framework enables LMs to accurately and robustly execute rule-intensive symbolic tasks. 
Extensive experiments demonstrate the superiority of our approach over existing techniques in terms of length generalization, efficiency, and interpretability for symbolic operations. Furthermore, it can be applied to LMs across different model scales, outperforming tool-calling methods in arithmetic reasoning tasks while maintaining superior inference efficiency. Our work highlights the potential of seamlessly unifying explicit rule learning via CoNNs and implicit pattern learning in LMs, paving the way for true symbolic comprehension capabilities. The code is released at: \\url{https://github.com/wengsyx/Neural-Comprehension}.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19283"} +{"video_file": "9rPyHyjfwP_39018341.mp4", "openreview_id": "9rPyHyjfwP", "slideslive_id": 39018341, "venue": "iclr2024", "title": "Domain-Agnostic Molecular Generation with Chemical Feedback", "status": "Poster", "keywords": "molecule generation;pre-trained language models;SELFIES;natural products;self-feedback", "tldr": "A novel pre-trained molecular language model specifically tailored for molecular generation using SELFIES.", "abstract": "The generation of molecules with desired properties has become increasingly popular, revolutionizing the way scientists design molecular structures and providing valuable support for chemical and drug design. However, despite the potential of language models in molecule generation, they face challenges such as generating syntactically or chemically flawed molecules, having narrow domain focus, and struggling to create diverse and feasible molecules due to limited annotated data or external molecular databases. To tackle these challenges, we introduce MolGen, a pre-trained molecular language model tailored specifically for molecule generation. Through the reconstruction of over 100 million molecular SELFIES, MolGen internalizes structural and grammatical insights. This is further enhanced by domain-agnostic molecular prefix tuning, fostering robust knowledge transfer across diverse domains. Importantly, our chemical feedback paradigm steers the model away from \"molecular hallucinations\", ensuring alignment between the model's estimated probabilities and real-world chemical preferences. Extensive experiments on well-known benchmarks underscore MolGen's optimization capabilities in properties such as penalized logP, QED, and molecular docking. Additional analyses confirm its proficiency in accurately capturing molecule distributions, discerning intricate structural patterns, and efficiently exploring the chemical space (https://github.com/zjunlp/MolGen).", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/19281"} +{"video_file": "9w3iw8wDuE_39018340.mp4", "openreview_id": "9w3iw8wDuE", "slideslive_id": 39018340, "venue": "iclr2024", "title": "Entropy is not Enough for Test-Time Adaptation: From the Perspective of Disentangled Factors", "status": "Spotlight", "keywords": "Test-time adaptation;Roustness", "tldr": "We first address the limitations of relying solely on entropy as a confidence metric for TTA. Based on the observation, we introduce a new TTA method called DeYO, which leverages our proposed confidence metric, PLPD.", "abstract": "Test-time adaptation (TTA) fine-tunes pre-trained deep neural networks for unseen test data. 
The primary challenge of TTA is limited access to the entire test dataset during online updates, causing error accumulation. To mitigate it, TTA methods have utilized the model output's entropy as a confidence metric that aims to determine which samples have a lower likelihood of causing error. Through experimental studies, however, we observed the unreliability of entropy as a confidence metric for TTA under biased scenarios and theoretically revealed that it stems from the neglect of the influence of latent disentangled factors of data on predictions. Building upon these findings, we introduce a novel TTA method named Destroy Your Object (DeYO), which leverages a newly proposed confidence metric named Pseudo-Label Probability Difference (PLPD). PLPD quantifies the influence of the shape of an object on prediction by measuring the difference between predictions before and after applying an object-destructive transformation. DeYO consists of sample selection and sample weighting, which employ entropy and PLPD concurrently. For robust adaptation, DeYO prioritizes samples that dominantly incorporate shape information when making predictions. Our extensive experiments demonstrate the consistent superiority of DeYO over baseline methods across various scenarios, including biased and wild. Project page is publicly available at https://whitesnowdrop.github.io/DeYO/.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/19280"} +{"video_file": "A2mRcRyGdl_39019078.mp4", "openreview_id": "A2mRcRyGdl", "slideslive_id": 39019078, "venue": "iclr2024", "title": "Semantic Flow: Learning Semantic Fields of Dynamic Scenes from Monocular Videos", "status": "Poster", "keywords": "3D vision;NeRF;semantic understanding", "tldr": "We propose Semantic Flow for learning semantic fields of dynamic scenes from monocular videos.", "abstract": "In this work, we pioneer Semantic Flow, a neural semantic representation of dynamic scenes from monocular videos. In contrast to previous NeRF methods that reconstruct dynamic scenes from the colors and volume densities of individual points, Semantic Flow learns semantics from continuous flows that contain rich 3D motion information. As there is 2D-to-3D ambiguity problem in the viewing direction when extracting 3D flow features from 2D video frames, we consider the volume densities as opacity priors that describe the contributions of flow features to the semantics on the frames. More specifically, we first learn a flow network to predict flows in the dynamic scene, and propose a flow feature aggregation module to extract flow features from video frames. Then, we propose a flow attention module to extract motion information from flow features, which is followed by a semantic network to output semantic logits of flows. We integrate the logits with volume densities in the viewing direction to supervise the flow features with semantic labels on video frames. 
Experimental results show that our model is able to learn from multiple dynamic scenes and supports a series of new tasks such as instance-level scene editing, semantic completions, dynamic scene tracking and semantic adaption on novel scenes.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19275"} +{"video_file": "A7t7z6g6tM_39018336.mp4", "openreview_id": "A7t7z6g6tM", "slideslive_id": 39018336, "venue": "iclr2024", "title": "Hyper Evidential Deep Learning to Quantify Composite Classification Uncertainty", "status": "Poster", "keywords": "Evidential Neural Network;hyperdomain;vagueness", "tldr": "We propose a novel framework called Hyper-Evidential Neural Network (HENN) that explicitly models predictive uncertainty caused by composite set labels in training data using a belief theory called Subjective Logic (SL).", "abstract": "Deep neural networks (DNNs) have been shown to perform well on exclusive, multi-class classification tasks. However, when different classes have similar visual features, it becomes challenging for human annotators to differentiate them. When an image is ambiguous, such as a blurry one where an annotator can't distinguish between a husky and a wolf, it may be labeled with both classes: {husky, wolf}. This scenario necessitates the use of composite set labels. In this paper, we propose a novel framework called Hyper-Evidential Neural Network (HENN) that explicitly models predictive uncertainty caused by composite set labels in training data in the context of the belief theory called Subjective Logic (SL). By placing a Grouped Dirichlet distribution on the class probabilities, we treat predictions of a neural network as parameters of hyper-subjective opinions and learn the network that collects both single and composite evidence leading to these hyper-opinions by a deterministic DNN from data. We introduce a new uncertainty type called vagueness originally designed for hyper-opinions in SL to quantify composite classification uncertainty for DNNs. Our experiments prove that HENN outperforms its state-of-the-art counterparts based on four image datasets. The code and datasets are available at: https://shorturl.at/dhoqx.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/19273"} +{"video_file": "AJBkfwXh3u_39018331.mp4", "openreview_id": "AJBkfwXh3u", "slideslive_id": 39018331, "venue": "iclr2024", "title": "Causality-Inspired Spatial-Temporal Explanations for Dynamic Graph Neural Networks", "status": "Poster", "keywords": "Dynamic Graph;Graph Explanation;Graph Neural Network;Causal Inference", "tldr": "To the best of our knowledge, we are the first to explain dynamic graph neural networks.", "abstract": "Dynamic Graph Neural Networks (DyGNNs) have gained significant popularity in the research of dynamic graphs, but are limited by the low transparency, such that human-understandable insights can hardly be drawn from their predictions. Although a number of existing research have been devoted to investigating the interpretability of graph neural networks (GNNs), achieving the interpretability of DyGNNs is pivotally challenging due to the complex spatial-temporal correlations in dynamic graphs. 
To this end, we propose an innovative causality-inspired generative model based on structural causal model (SCM), which explores the underlying philosophies of DyGNN predictions by identifying the trivial, static, and dynamic causal relationships. To reach this goal, two critical tasks need to be accomplished including (1) disentangling the complex causal relationships, and (2) fitting the spatial-temporal explanations of DyGNNs in the SCM architecture. To tackle these challenges, the proposed method incorporates a contrastive learning module to disentangle trivial and causal relationships, and a dynamic correlating module to disentangle dynamic and static causal relationships, respectively. A dynamic VGAE-based framework is further developed, which generates causal-and-dynamic masks for spatial interpretability, and recognizes dynamic relationships along the time horizon through causal invention for temporal interpretability. Comprehensive experiments have been conducted on both synthetic and real-world datasets, where our approach yields substantial improvements, thereby demonstrating significant superiority.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/19267"} +{"video_file": "ALVwQjZRS8_39018330.mp4", "openreview_id": "ALVwQjZRS8", "slideslive_id": 39018330, "venue": "iclr2024", "title": "Coeditor: Leveraging Repo-level Diffs for Code Auto-editing", "status": "Spotlight", "keywords": "language model for code;editing;refactoring", "tldr": "Coeditor, a code editing model trained on code commits, surpasses previous code completion methods and unveils a new multi-round auto-editing application.", "abstract": "Developers often dedicate significant time to maintaining and refactoring existing code. However, most prior work on generative models for code focuses solely on creating new code, overlooking the distinctive needs of editing existing code. In this work, we explore a multi-round code auto-editing setting, aiming to predict edits to a code region based on recent changes within the same codebase. Our model, Coeditor, is a fine-tuned language model specifically designed for code editing tasks. We represent code changes using a line diff format and employ static analysis to form large customized model contexts, ensuring the availability of appropriate information for prediction. We collect a code editing dataset from the commit histories of 1650 open-source Python projects for training and evaluation. In a simplified single-round, single-edit task, Coeditor significantly outperforms GPT-3.5 and SOTA open-source code completion models (bringing exact-match accuracy from 34.7 up to 60.4), demonstrating the benefits of incorporating editing history for code completion. In a multi-round, multi-edit setting, we observe substantial gains by iteratively conditioning on additional user edits. 
We have open-sourced our code, data, and model weights to encourage future research and have released a VSCode extension powered by our model for interactive IDE usage.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19265"} +{"video_file": "ARPrtuzAnQ_39019095.mp4", "openreview_id": "ARPrtuzAnQ", "slideslive_id": 39019095, "venue": "iclr2024", "title": "On the hardness of learning under symmetries", "status": "Spotlight", "keywords": "Equivariance;statistical query;lower bound;computational hardness;invariance;symmetry;neural networks", "tldr": "We give statistical query lower bounds for learning symmetry-preserving neural networks and other invariant functions.", "abstract": "We study the problem of learning equivariant neural networks via gradient descent. The incorporation of known symmetries (\"equivariance\") into neural nets has empirically improved the performance of learning pipelines, in domains ranging from biology to computer vision. However, a rich yet separate line of learning theoretic research has demonstrated that actually learning shallow, fully-connected (i.e. non-symmetric) networks has exponential complexity in the correlational statistical query (CSQ) model, a framework encompassing gradient descent. In this work, we ask: are known problem symmetries sufficient to alleviate the fundamental hardness of learning neural nets with gradient descent? We answer this question in the negative. In particular, we give lower bounds for shallow graph neural networks, convolutional networks, invariant polynomials, and frame-averaged networks for permutation subgroups, which all scale either superpolynomially or exponentially in the relevant input dimension. Therefore, in spite of the significant inductive bias imparted via symmetry, actually learning the complete classes of functions represented by equivariant neural networks via gradient descent remains hard.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/19262"} +{"video_file": "AU2gS9ut61_39018327.mp4", "openreview_id": "AU2gS9ut61", "slideslive_id": 39018327, "venue": "iclr2024", "title": "A differentiable brain simulator bridging brain simulation and brain-inspired computing", "status": "Poster", "keywords": "brain simulator;brain simulation;computational neuroscience;brain-inspired computing", "tldr": "We developed BrainPy, a differentiable brain simulator, to help bridge the gap between brain simulation and brain-inspired computing.", "abstract": "Brain simulation builds dynamical models to mimic the structure and functions of the brain, while brain-inspired computing (BIC) develops intelligent systems by learning from the structure and functions of the brain. The two fields are intertwined and should share a common programming framework to facilitate each other's development. However, none of the existing software in the fields can achieve this goal, because traditional brain simulators lack differentiability for training, while existing deep learning (DL) frameworks fail to capture the biophysical realism and complexity of brain dynamics. In this paper, we introduce BrainPy, a differentiable brain simulator developed using JAX and XLA, with the aim of bridging the gap between brain simulation and BIC. BrainPy expands upon the functionalities of JAX, a powerful AI framework, by introducing complete capabilities for flexible, efficient, and scalable brain simulation. 
It offers a range of sparse and event-driven operators for efficient and scalable brain simulation, an abstraction for managing the intricacies of synaptic computations, a modular and flexible interface for constructing multi-scale brain models, and an object-oriented just-in-time compilation approach to handle the memory-intensive nature of brain dynamics. We showcase the efficiency and scalability of BrainPy on benchmark tasks, and highlight its differentiable simulation for biologically plausible spiking models.", "primary_area": "applications to neuroscience & cognitive science", "site": "https://iclr.cc/virtual/2024/poster/19260"}
+{"video_file": "AY6aM13gGF_39019013.mp4", "openreview_id": "AY6aM13gGF", "slideslive_id": 39019013, "venue": "iclr2024", "title": "Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning", "status": "Poster", "keywords": "Offline Reinforcement Learning;Decision Transformer;Motion Control", "tldr": "We leverage the power of pre-trained Language Models for low-level motion control in offline reinforcement learning.", "abstract": "Offline reinforcement learning (RL) aims to find a near-optimal policy using pre-collected datasets. Given recent advances in Large Language Models (LLMs) and their few-shot learning prowess, this paper introduces Language Models for Motion Control (LaMo), a general framework based on Decision Transformers to effectively use pre-trained Language Models (LMs) for offline RL. Our framework highlights four crucial components: (1) Initializing Decision Transformers with sequentially pre-trained LMs, (2) employing the LoRA fine-tuning method, in contrast to full-weight fine-tuning, to combine the pre-trained knowledge from LMs and in-domain knowledge effectively, (3) using the non-linear MLP transformation instead of linear projections, to generate embeddings, and (4) integrating an auxiliary language prediction loss during fine-tuning to stabilize the LMs and retain their original abilities on languages. Empirical results indicate LaMo achieves state-of-the-art performance in sparse-reward tasks and closes the gap between value-based offline RL methods and decision transformers in dense-reward tasks. In particular, our method demonstrates superior performance in scenarios with limited data samples.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19257"}
+{"video_file": "AZW3qlCGTe_39018323.mp4", "openreview_id": "AZW3qlCGTe", "slideslive_id": 39018323, "venue": "iclr2024", "title": "Enhancing Instance-Level Image Classification with Set-Level Labels", "status": "Poster", "keywords": "set-level labels;fast excess risk rate;representation learning;few-shot learning", "tldr": "We present a novel approach to enhance instance-level image classification by leveraging set-level labels.", "abstract": "Instance-level image classification tasks have traditionally relied on single-instance labels to train models, e.g., few-shot learning and transfer learning. However, set-level coarse-grained labels that capture relationships among instances can provide richer information in real-world scenarios. In this paper, we present a novel approach to enhance instance-level image classification by leveraging set-level labels. We provide a theoretical analysis of the proposed method, including recognition conditions for fast excess risk rate, shedding light on the theoretical foundations of our approach.
We conducted experiments on two distinct categories of datasets: natural image datasets and histopathology image datasets. Our experimental results demonstrate the effectiveness of our approach, showcasing improved classification performance compared to traditional single-instance label-based methods. Notably, our algorithm achieves 13% improvement in classification accuracy compared to the strongest baseline on the histopathology image classification benchmarks. Importantly, our experimental findings align with the theoretical analysis, reinforcing the robustness and reliability of our proposed method. This work bridges the gap between instance-level and set-level image classification, offering a promising avenue for advancing the capabilities of image classification models with set-level coarse-grained labels.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19254"} +{"video_file": "AcRfzLS6se_39018684.mp4", "openreview_id": "AcRfzLS6se", "slideslive_id": 39018684, "venue": "iclr2024", "title": "Out-of-Distribution Detection by Leveraging Between-Layer Transformation Smoothness", "status": "Poster", "keywords": "out-of-distribution detection;deep neural networks;transformers;representation analysis;uncertainty quantification;text classification", "tldr": "BLOOD is a novel OOD detection method for deep neural networks, applicable to pre-trained models, relying on between-layer transformation smoothness, and outperforming similar methods.", "abstract": "Effective out-of-distribution (OOD) detection is crucial for reliable machine learning models, yet most current methods are limited in practical use due to requirements like access to training data or intervention in training. We present a novel method for detecting OOD data in Transformers based on transformation smoothness between intermediate layers of a network (BLOOD), which is applicable to pre-trained models without access to training data. BLOOD utilizes the tendency of between-layer representation transformations of in-distribution (ID) data to be smoother than the corresponding transformations of OOD data, a property that we also demonstrate empirically. We evaluate BLOOD on several text classification tasks with Transformer networks and demonstrate that it outperforms methods with comparable resource requirements. Our analysis also suggests that when learning simpler tasks, OOD data transformations maintain their original sharpness, whereas sharpness increases with more complex tasks.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/19250"} +{"video_file": "AcSChDWL6V_39018653.mp4", "openreview_id": "AcSChDWL6V", "slideslive_id": 39018653, "venue": "iclr2024", "title": "Distinguished In Uniform: Self-Attention Vs. Virtual Nodes", "status": "Poster", "keywords": "Graph Neural Networks;Message Passing;Graph Transformers;Virtual Nodes;Expressivity;Uniform Expressivity", "tldr": "Graph Transformers and MPGNNs with virtual nodes do not subsume each other in terms of uniform function approximation while neither is \"universal\" in this setting.", "abstract": "Graph Transformers (GTs) such as SAN and GPS are graph processing models that combine Message-Passing GNNs (MPGNNs) with global Self-Attention. They were shown to be universal function approximators, with two reservations: 1. The initial node features must be augmented with certain positional encodings. 2. 
The approximation is non-uniform: Graphs of different sizes may require a different approximating network.\nWe first clarify that this form of universality is not unique to GTs: Using the same positional encodings, also pure MPGNNs and even 2-layer MLPs are non-uniform universal approximators. We then consider uniform expressivity: The target function is to be approximated by a single network for graphs of all sizes. There, we compare GTs to the more efficient MPGNN + Virtual Node architecture. The essential difference between the two model definitions is in their global computation method: Self-Attention Vs Virtual Node. We prove that none of the models is a uniform-universal approximator, before proving our main result: Neither model\u2019s uniform expressivity subsumes the other\u2019s. We demonstrate the theory with experiments on synthetic data. We further augment our study with real-world datasets, observing mixed results which indicate no clear ranking in practice as well.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/19249"}
+{"video_file": "AgM3MzT99c_39018315.mp4", "openreview_id": "AgM3MzT99c", "slideslive_id": 39018315, "venue": "iclr2024", "title": "OMNI: Open-endedness via Models of human Notions of Interestingness", "status": "Poster", "keywords": "Open-endedness;Auto-Curriculum Learning;Reinforcement Learning", "tldr": "Open-endedness via Models of human Notions of Interestingness (OMNI) leverages foundation models to improve open-ended learning by focusing on tasks that are both learnable and interesting, advancing self-improving AI and auto-curricula.", "abstract": "Open-ended algorithms aim to learn new, interesting behaviors forever. That requires a vast environment search space, but there are thus infinitely many possible tasks. Even after filtering for tasks the current agent can learn (i.e., learning progress), countless learnable yet uninteresting tasks remain (e.g., minor variations of previously learned tasks). An Achilles Heel of open-endedness research is the inability to quantify (and thus prioritize) tasks that are not just learnable, but also interesting (e.g., worthwhile and novel). We propose solving this problem by Open-endedness via Models of human Notions of Interestingness (OMNI). The insight is that we can utilize foundation models (FMs) as a model of interestingness (MoI), because they already internalize human concepts of interestingness from training on vast amounts of human-generated data, where humans naturally write about what they find interesting or boring. We show that FM-based MoIs improve open-ended learning by focusing on tasks that are both learnable and interesting, outperforming baselines based on uniform task sampling or learning progress alone.
This approach has the potential to dramatically advance the ability to intelligently select which tasks to focus on next (i.e., auto-curricula), and could be seen as AI selecting its own next task to learn, facilitating self-improving AI and AI-Generating Algorithms.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19242"} +{"video_file": "Ax2yRhCQr1_39018310.mp4", "openreview_id": "Ax2yRhCQr1", "slideslive_id": 39018310, "venue": "iclr2024", "title": "Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression", "status": "Spotlight", "keywords": "Learning Theory;Representation Learning;Self-supervised Learning;Data Augmentation;RKHS Approximation;RKHS Regression", "tldr": "We establish an RKHS approximation/regression framework for analyzing self-supervised pretraining based on data augmentation, and derive nonparametric learning guarantees that disentangles the effects of the model and the augmentation.", "abstract": "Data augmentation is critical to the empirical success of modern self-supervised representation learning, such as contrastive learning and masked language modeling. However, a theoretical understanding of the exact role of the augmentation remains limited. Recent work has built the connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator, suggesting that learning a linear probe atop such representation can be connected to RKHS regression. Building on this insight, this work delves into a statistical analysis of augmentation-based pretraining. Starting from the isometry property, a geometric characterization of the target function given by the augmentation, we disentangle the effects of the model and the augmentation, and prove two generalization bounds that are free of model complexity. Our first bound works for an arbitrary encoder, and it is the sum of an estimation error bound incurred by fitting a linear probe, and an approximation error bound by RKHS approximation. Our second bound specifically addresses the case where the encoder extracts the top-d eigenspace of a finite-sample-based approximation of the underlying RKHS. A key ingredient in our analysis is the augmentation complexity, which we use to quantitatively compare different augmentations and analyze their impact on downstream performance.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19234"} +{"video_file": "AyzkDpuqcl_39018309.mp4", "openreview_id": "AyzkDpuqcl", "slideslive_id": 39018309, "venue": "iclr2024", "title": "Learning Energy-Based Models by Cooperative Diffusion Recovery Likelihood", "status": "Spotlight", "keywords": "Energy-based model;recovery-likelihood;cooperative learning", "tldr": "We propose cooperative diffusion recovery likelihood (CDRL) model. Our model substantially improves the generation performance of EBM-based generative models.", "abstract": "Training energy-based models (EBMs) on high-dimensional data can be both challenging and time-consuming, and there exists a noticeable gap in sample quality between EBMs and other generative frameworks like GANs and diffusion models. 
To close this gap, inspired by the recent efforts of learning EBMs by maximizing diffusion recovery likelihood (DRL), we propose cooperative diffusion recovery likelihood (CDRL), an effective approach to tractably learn and sample from a series of EBMs defined on increasingly noisy versions of a dataset, paired with an initializer model for each EBM. At each noise level, the two models are jointly estimated within a cooperative training framework: Samples from the initializer serve as starting points that are refined by a few MCMC sampling steps from the EBM. The EBM is then optimized by maximizing recovery likelihood, while the initializer model is optimized by learning from the difference between the refined samples and the initial samples. In addition, we made several practical designs for EBM training to further improve the sample quality. Combining these advances, we significantly boost the generation performance compared to existing EBM methods on CIFAR-10 and ImageNet 32x32. And we have shown that CDRL has great potential to largely reduce the sampling time. We also demonstrate the effectiveness of our models for several downstream tasks, including classifier-free guided generation, compositional generation, image inpainting and out-of-distribution detection.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19232"}
+{"video_file": "BEH4mGo7zP_39019202.mp4", "openreview_id": "BEH4mGo7zP", "slideslive_id": 39019202, "venue": "iclr2024", "title": "Pre-training Sequence, Structure, and Surface Features for Comprehensive Protein Representation Learning", "status": "Poster", "keywords": "Protein representation learning;self-supervised learning;implicit neural representation", "tldr": "We describe a new pre-training approach for protein representation learning using generalizable implicit neural networks on protein molecular surfaces, showing SOTA results for various tasks.", "abstract": "Proteins can be represented in various ways, including their sequences, 3D structures, and surfaces. While recent studies have successfully employed sequence- or structure-based representations to address multiple tasks in protein science, there has been significant oversight in incorporating protein surface information, a critical factor for protein function. In this paper, we present a pre-training strategy that incorporates information from protein sequences, 3D structures, and surfaces to improve protein representation learning. Specifically, we utilize Implicit Neural Representations (INRs) for learning surface characteristics, and name it ProteinINR. We confirm that ProteinINR successfully reconstructs protein surfaces, and integrate this surface learning into the existing pre-training strategy of sequences and structures. Our results demonstrate that our approach can enhance performance in various downstream tasks, thereby underscoring the importance of including surface attributes in protein representation learning.
These findings underline the importance of understanding protein surfaces for generating effective protein representations.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/19226"} +{"video_file": "BEyEziZ4R6_39018304.mp4", "openreview_id": "BEyEziZ4R6", "slideslive_id": 39018304, "venue": "iclr2024", "title": "DP-SGD Without Clipping: The Lipschitz Neural Network Way", "status": "Poster", "keywords": "lipschitz neural networks;dp-sgd;privacy;robustness", "tldr": "Lipschitz neural networks can be trained with DP guarantees without gradient clipping", "abstract": "State-of-the-art approaches for training Differentially Private (DP) Deep Neural Networks (DNN) face difficulties to estimate tight bounds on the sensitivity of the network's layers, and instead rely on a process of per-sample gradient clipping. This clipping process not only biases the direction of gradients but also proves costly both in memory consumption and in computation. To provide sensitivity bounds and bypass the drawbacks of the clipping process, we propose to rely on Lipschitz constrained networks. Our theoretical analysis reveals an unexplored link between the Lipschitz constant with respect to their input and the one with respect to their parameters. By bounding the Lipschitz constant of each layer with respect to its parameters, we prove that we can train these networks with privacy guarantees. Our analysis not only allows the computation of the aforementioned sensitivities at scale, but also provides guidance on how to maximize the gradient-to-noise ratio for fixed privacy guarantees. To facilitate the application of Lipschitz networks and foster robust and certifiable learning under privacy guarantees, we provide a Python package that implements building blocks allowing the construction and private training of such networks.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/19225"} +{"video_file": "BLGQ3oqldb_39017145.mp4", "openreview_id": "BLGQ3oqldb", "slideslive_id": 39017145, "venue": "iclr2024", "title": "LogicMP: A Neuro-symbolic Approach for Encoding First-order Logic Constraints", "status": "Poster", "keywords": "Variational Inference", "tldr": "We present a novel and modular neural layer LogicMP, capable of encoding first-order logic constraints using fully parallel computation.", "abstract": "Integrating first-order logic constraints (FOLCs) with neural networks is a crucial but challenging problem since it involves modeling intricate correlations to satisfy the constraints. This paper proposes a novel neural layer, LogicMP, which performs mean-field variational inference over a Markov Logic Network (MLN). It can be plugged into any off-the-shelf neural network to encode FOLCs while retaining modularity and efficiency. By exploiting the structure and symmetries in MLNs, we theoretically demonstrate that our well-designed, efficient mean-field iterations greatly mitigate the difficulty of MLN inference, reducing the inference from sequential calculation to a series of parallel tensor operations. 
Empirical results in three kinds of tasks over images, graphs, and text show that LogicMP outperforms advanced competitors in both performance and efficiency.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/19220"} +{"video_file": "BPb5AhT2Vf_39018638.mp4", "openreview_id": "BPb5AhT2Vf", "slideslive_id": 39018638, "venue": "iclr2024", "title": "FreeReg: Image-to-Point Cloud Registration Leveraging Pretrained Diffusion Models and Monocular Depth Estimators", "status": "Poster", "keywords": "Image-to-point cloud registration;cross-modality feature extraction;diffusion models", "tldr": "FreeReg extracts cross-modality features from pretrained diffusion models and monocular depth estimators for accurate image to point cloud registration.", "abstract": "Matching cross-modality features between images and point clouds is a fundamental problem for image-to-point cloud registration. However, due to the modality difference between images and points, it is difficult to learn robust and discriminative cross-modality features by existing metric learning methods for feature matching. Instead of applying metric learning on cross-modality data, we propose to unify the modality between images and point clouds by pretrained large-scale models first, and then establish robust correspondence within the same modality. We show that the intermediate features, called diffusion features, extracted by depth-to-image diffusion models are semantically consistent between images and point clouds, which enables the building of coarse but robust cross-modality correspondences. We further extract geometric features on depth maps produced by the monocular depth estimator. By matching such geometric features, we significantly improve the accuracy of the coarse correspondences produced by diffusion features. Extensive experiments demonstrate that without any task-specific training, direct utilization of both features produces accurate image-to-point cloud registration. On three public indoor and outdoor benchmarks, the proposed method averagely achieves a 20.6 percent improvement in Inlier Ratio, a\n3.0\n\u00d7\nhigher Inlier Number, and a 48.6 percent improvement in Registration Recall than existing state-of-the-arts. The code and additional results are available at \\url{https://whu-usi3dv.github.io/FreeReg/}.", "primary_area": "visualization or interpretation of learned representations", "site": "https://iclr.cc/virtual/2024/poster/19217"} +{"video_file": "BRdEBlwUW6_39018300.mp4", "openreview_id": "BRdEBlwUW6", "slideslive_id": 39018300, "venue": "iclr2024", "title": "DAFA: Distance-Aware Fair Adversarial Training", "status": "Poster", "keywords": "adversarial robustness;robust fairness;adversarial examples;adversarial training", "tldr": "We propose a method to improve robust fairness taking into account the similarities between classes.", "abstract": "The disparity in accuracy between classes in standard training is amplified during adversarial training, a phenomenon termed the robust fairness problem. Existing methodologies aimed to enhance robust fairness by sacrificing the model's performance on easier classes in order to improve its performance on harder ones. However, we observe that under adversarial attacks, the majority of the model's predictions for samples from the worst class are biased towards classes similar to the worst class, rather than towards the easy classes. 
Through theoretical and empirical analysis, we demonstrate that robust fairness deteriorates as the distance between classes decreases. Motivated by these insights, we introduce the Distance-Aware Fair Adversarial Training (DAFA) methodology, which addresses robust fairness by taking into account the similarities between classes. Specifically, our method assigns distinct adversarial margins and loss weights to each class and adjusts them to encourage a trade-off in robustness among similar classes. Experimental results across various datasets demonstrate that our method not only maintains average robust accuracy but also significantly improves the worst robust accuracy, indicating a marked improvement in robust fairness compared to existing methods.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19216"}
+{"video_file": "BV1PHbTJzd_39017079.mp4", "openreview_id": "BV1PHbTJzd", "slideslive_id": 39017079, "venue": "iclr2024", "title": "Accelerating Distributed Stochastic Optimization via Self-Repellent Random Walks", "status": "Oral", "keywords": "Distributed Learning;Self-Repellent Random Walk;Token Algorithm;Central Limit Theorem;Asymptotic Analysis", "tldr": "In distributed learning, we present SA-SRRW algorithm to prioritize lesser-visited nodes while discouraging frequently visited nodes in distributed learning, and show its performance improvement.", "abstract": "We study a family of distributed stochastic optimization algorithms where gradients are sampled by a token traversing a network of agents in random-walk fashion. Typically, these random-walks are chosen to be Markov chains that asymptotically sample from a desired target distribution, and play a critical role in the convergence of the optimization iterates. In this paper, we take a novel approach by replacing the standard linear Markovian token by one which follows a non-linear Markov chain - namely the Self-Repellent Random Walk (SRRW). Defined for any given 'base' Markov chain, the SRRW, parameterized by a positive scalar \u03b1, is less likely to transition to states that were highly visited in the past, thus the name. In the context of MCMC sampling on a graph, a recent breakthrough in Doshi et al. (2023) shows that the SRRW achieves O(1/\u03b1) decrease in the asymptotic variance for sampling. We propose the use of a `generalized' version of the SRRW to drive token algorithms for distributed stochastic optimization in the form of stochastic approximation, termed SA-SRRW. We prove that the optimization iterate errors of the resulting SA-SRRW converge to zero almost surely and prove a central limit theorem, deriving the explicit form of the resulting asymptotic covariance matrix corresponding to iterate errors. This asymptotic covariance is always smaller than that of an algorithm driven by the base Markov chain and decreases at rate O(1/\u03b1^2) - the performance benefit of using SRRW thereby amplified in the stochastic optimization context. 
Empirical results support our theoretical findings.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/19214"} +{"video_file": "Bb21JPnhhr_39017187.mp4", "openreview_id": "Bb21JPnhhr", "slideslive_id": 39017187, "venue": "iclr2024", "title": "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?", "status": "Poster", "keywords": "long-term action anticipation;multimodal learning", "tldr": "LLMs provide temporal dynamics priors of human behaviors for long-term action anticipation", "abstract": "Can we better anticipate an actor\u2019s future actions (e.g. mix eggs) by knowing what commonly happens after the current action (e.g. crack eggs)? What if the actor also shares the goal (e.g. make fried rice) with us? The long-term action anticipation (LTA) task aims to predict an actor\u2019s future behavior from video observations in the form of verb and noun sequences, and it is crucial for human-machine interaction. We propose to formulate the LTA task from two perspectives: a bottom-up approach that predicts the next actions autoregressively by modeling temporal dynamics; and a top-down approach that infers the goal of the actor and plans the needed procedure to accomplish the goal. We hypothesize that large language models (LLMs), which have been pretrained on procedure text data (e.g. recipes, how-tos), have the potential to help LTA from both perspectives. It can help provide the prior knowledge on the possible next actions, and infer the goal given the observed part of a procedure, respectively. We propose AntGPT, which represents video observations as sequences of human actions, and uses the action representation for an LLM to infer the goals and model temporal dynamics. AntGPT achieves state- of-the-art performance on Ego4D LTA v1 and v2, EPIC-Kitchens-55, as well as EGTEA GAZE+, thanks to LLMs\u2019 goal inference and temporal dynamics modeling capabilities. We further demonstrate that these capabilities can be effectively distilled into a compact neural network 1.3% of the original LLM model size. Code and model will be released upon acceptance.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19210"} +{"video_file": "Bb4VGOWELI_39018626.mp4", "openreview_id": "Bb4VGOWELI", "slideslive_id": 39018626, "venue": "iclr2024", "title": "Large Language Models as Optimizers", "status": "Poster", "keywords": "large language model;optimizer;prompting", "tldr": "We propose a simple and effective approach to use large language models as optimizers, and demonstrated its capability on math and prompt optimization problems.", "abstract": "Optimization is ubiquitous. While derivative-based algorithms have been powerful tools for various problems, the absence of gradient imposes challenges on many real-world applications. In this work, we propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step. 
We first showcase OPRO on linear regression and traveling salesman problems, then move on to our main application in prompt optimization, where the goal is to find instructions that maximize the task accuracy. With a variety of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. Code at https://github.com/google-deepmind/opro.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/19209"}
+{"video_file": "BifeBRhikU_39018799.mp4", "openreview_id": "BifeBRhikU", "slideslive_id": 39018799, "venue": "iclr2024", "title": "PB-LLM: Partially Binarized Large Language Models", "status": "Poster", "keywords": "Large Language Model;Network Compression", "tldr": "First work using network binarization for large language model compression.", "abstract": "This paper explores network binarization, a radical form of quantization, compressing model weights to a single bit, specifically for Large Language Models (LLMs) compression. Due to previous binarization methods collapsing LLMs, we propose a novel approach, Partially-Binarized LLM (PB-LLM), which can achieve extreme low-bit quantization while maintaining the linguistic reasoning capacity of quantized LLMs. Specifically, our exploration first uncovers the ineffectiveness of na\u00efve applications of existing binarization algorithms and highlights the imperative role of salient weights in achieving low-bit quantization. Thus, PB-LLM filters a small ratio of salient weights during binarization, allocating them to higher-bit storage, i.e., partially-binarization. PB-LLM is extended to recover the capacities of quantized LLMs, by analyzing from the perspective of post-training quantization (PTQ) and quantization-aware training (QAT). Under PTQ, combining the concepts from GPTQ, we reconstruct the binarized weight matrix guided by the Hessian matrix and successfully recover the reasoning capacity of PB-LLM in low-bit. Under QAT, we freeze the salient weights during training, explore the derivation of optimal scaling factors crucial for minimizing the quantization error, and propose a scaling mechanism based on this derived scaling strategy for residual binarized weights. Those explorations and the developed methodologies significantly contribute to rejuvenating the performance of low-bit quantized LLMs and present substantial advancements in the field of network binarization for LLMs. Code is available at https://github.com/hahnyuan/PB-LLM.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19207"}
+{"video_file": "BllUWdpIOA_39018294.mp4", "openreview_id": "BllUWdpIOA", "slideslive_id": 39018294, "venue": "iclr2024", "title": "Continual Momentum Filtering on Parameter Space for Online Test-time Adaptation", "status": "Poster", "keywords": "Online Test-time Adaptation;Catastrophic Forgetting;Kalman Filter", "tldr": "We propose a continual momentum filtering framework, which is a novel approach to bolster the online test-time adaptation methodology. This was achieved by deducing a refined source model through target model denoising by leveraging Kalman filtering.", "abstract": "Deep neural networks (DNNs) have revolutionized tasks such as image classification and speech recognition but often falter when training and test data diverge in distribution. 
External factors, from weather effects on images to varied speech environments, can cause this discrepancy, compromising DNN performance. Online test-time adaptation (OTTA) methods present a promising solution, recalibrating models in real-time during the test stage without requiring historical data. However, the OTTA paradigm is imperfect, often falling prey to issues such as catastrophic forgetting due to its reliance on noisy, self-trained predictions. Although some contemporary strategies mitigate this by tying adaptations to the static source model, this restricts model flexibility. This paper introduces a continual momentum filtering (CMF) framework, leveraging the Kalman filter (KF) to strike a balance between model adaptability and information retention. The CMF intertwines optimization via stochastic gradient descent with a KF-based inference process. This methodology not only aids in averting catastrophic forgetting but also provides high adaptability to shifting data distributions. We validate our framework on various OTTA scenarios and real-world situations regarding covariate and label shifts, and the CMF consistently shows superior performance compared to state-of-the-art methods.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/19204"}
+{"video_file": "Bo6GpQ3B9a_39018292.mp4", "openreview_id": "Bo6GpQ3B9a", "slideslive_id": 39018292, "venue": "iclr2024", "title": "Out-Of-Domain Unlabeled Data Improves Generalization", "status": "Spotlight", "keywords": "Out-of-domain data;Semi-supervised learning;learning theory;generalization bound;adversarial robustness", "tldr": "We propose a framework to incorporate unlabeled out-of-domain samples in order to enhance the generalization error. We derive explicit bounds for Gaussian mixture models, and test our method on real datasets.", "abstract": "We propose a novel framework for incorporating unlabeled data into semi-supervised classification problems, where scenarios involving the minimization of either i) adversarially robust or ii) non-robust loss functions have been considered. Notably, we allow the unlabeled samples to deviate slightly (in total variation sense) from the in-domain distribution. The core idea behind our framework is to combine Distributionally Robust Optimization (DRO) with self-supervised training. As a result, we also leverage efficient polynomial-time algorithms for the training stage. From a theoretical standpoint, we apply our framework on the classification problem of a mixture of two Gaussians in R^d, where in addition to the m independent and labeled samples from the true distribution, a set of n (usually with n \u226b m) out of domain and unlabeled samples are given as well. Using only the labeled data, it is known that the generalization error can be bounded by \u221d (d/m)^{1/2}. However, using our method on both isotropic and non-isotropic Gaussian mixture models, one can derive a new set of analytically explicit and non-asymptotic bounds which show substantial improvement on the generalization error compared to ERM. Our results underscore two significant insights: 1) out-of-domain samples, even when unlabeled, can be harnessed to narrow the generalization gap, provided that the true data distribution adheres to a form of the \"cluster assumption\", and 2) the semi-supervised learning paradigm can be regarded as a special case of our framework when there are no distributional shifts. 
We validate our claims through experiments conducted on a variety of synthetic and real-world datasets.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/19202"} +{"video_file": "Bpcgcr8E8Z_39017110.mp4", "openreview_id": "Bpcgcr8E8Z", "slideslive_id": 39017110, "venue": "iclr2024", "title": "Fast-DetectGPT: Efficient Zero-Shot Detection of Machine-Generated Text via Conditional Probability Curvature", "status": "Poster", "keywords": "Fake Detection;Machine-Generated Text Detection;Zero-Shot Detection", "tldr": "Fast-DetectGPT accelerates DetectGPT by two-orders of magnitude and enhancing the detection accuracy by a relative 75%.", "abstract": "Large language models (LLMs) have shown the ability to produce fluent and cogent content, presenting both productivity opportunities and societal risks. To build trustworthy AI systems, it is imperative to distinguish between machine-generated and human-authored content. The leading zero-shot detector, DetectGPT, showcases commendable performance but is marred by its intensive computational costs. In this paper, we introduce the concept of conditional probability curvature to elucidate discrepancies in word choices between LLMs and humans within a given context. Utilizing this curvature as a foundational metric, we present Fast-DetectGPT, an optimized zero-shot detector, which substitutes DetectGPT's perturbation step with a more efficient sampling step. Our evaluations on various datasets, source models, and test conditions indicate that Fast-DetectGPT not only surpasses DetectGPT by a relative around 75% in both the white-box and black-box settings but also accelerates the detection process by a factor of 340, as detailed in Table 1.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/19201"} +{"video_file": "Bpkhu2ExxU_39018916.mp4", "openreview_id": "Bpkhu2ExxU", "slideslive_id": 39018916, "venue": "iclr2024", "title": "Stochastic Modified Equations and Dynamics of Dropout Algorithm", "status": "Poster", "keywords": "Stochastic modified equations;dropout;noise structure;flatness", "tldr": "We present a comprehensive theoretical framework and empirical evidence for studying Stochastic Modified Equations (SME) in dropout, shedding light on the implicit regularization through revealing the noise structure introduced by dropout.", "abstract": "Dropout is a widely utilized regularization technique in the training of neural networks, nevertheless, its underlying mechanism and impact on achieving good generalization abilities remain to be further understood. In this work, we start by undertaking a rigorous theoretical derivation of the stochastic modified equations, with the primary aim of providing an effective approximation for the discrete iterative process of dropout. Meanwhile, we experimentally verify SDE's ability to approximate dropout under a wider range of settings. Subsequently, we empirically delve into the intricate mechanisms by which dropout facilitates the identification of flatter minima. This exploration is conducted through intuitive approximations, exploiting the structural analogies inherent in the Hessian of loss landscape and the covariance of dropout. 
Our empirical findings substantiate the ubiquitous presence of the Hessian-variance alignment relation throughout the training process of dropout.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/19200"} +{"video_file": "BqHaLnans2_39019214.mp4", "openreview_id": "BqHaLnans2", "slideslive_id": 39019214, "venue": "iclr2024", "title": "LLM-CXR: Instruction-Finetuned LLM for CXR Image Understanding and Generation", "status": "Poster", "keywords": "large language model;multimodal;medical imaging;chest X-ray;bidirectional;instruction-tuning;vision-question answering", "tldr": "We present a state-of-the-art multimodal LLM for chest X-ray understanding and generation, developed using a method that builds upon the transformer+VQGAN architecture and adapts it for instruction-finetuning of an LLM pretrained only on text.", "abstract": "Following the impressive development of LLMs, vision-language alignment in LLMs is actively being researched to enable multimodal reasoning and visual input/output. This direction of research is particularly relevant to medical imaging because accurate medical image analysis and generation consist of a combination of reasoning based on visual features and prior knowledge. Many recent works have focused on training adapter networks that serve as an information bridge between image processing (encoding or generating) networks and LLMs; but presumably, in order to achieve maximum reasoning potential of LLMs on visual information as well, visual and language features should be allowed to interact more freely. This is especially important in the medical domain because understanding and generating medical images such as chest X-rays (CXR) require not only accurate visual and language-based reasoning but also a more intimate mapping between the two modalities. Thus, taking inspiration from previous work on the transformer and VQ-GAN combination for bidirectional image and text generation, we build upon this approach and develop a method for instruction-tuning an LLM pre-trained only on text to gain vision-language capabilities for medical images. Specifically, we leverage a pretrained LLM\u2019s existing question-answering and instruction-following abilities to teach it to understand visual inputs by instructing it to answer questions about image inputs and, symmetrically, output both text and image responses appropriate to a given query by tuning the LLM with diverse tasks that encompass image-based text-generation and text-based image-generation. We show that our LLM-CXR trained in this approach shows better image-text alignment in both CXR understanding and generation tasks while being smaller in size compared to previously developed models that perform a narrower range of tasks.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/19198"} +{"video_file": "BtT6o5tfHu_39019264.mp4", "openreview_id": "BtT6o5tfHu", "slideslive_id": 39019264, "venue": "iclr2024", "title": "Solving Diffusion ODEs with Optimal Boundary Conditions for Better Image Super-Resolution", "status": "Poster", "keywords": "diffusion models;diffusion ODE;image super-resolution", "tldr": "We propose a method of steadily sampling higher-quality SR images from an existing diffusion-based image SR model.", "abstract": "Diffusion models, as a kind of powerful generative model, have given impressive results on image super-resolution (SR) tasks. 
However, due to the randomness introduced in the reverse process of diffusion models, the performances of diffusion-based SR models are fluctuating at every time of sampling, especially for samplers with few resampled steps. This inherent randomness of diffusion models results in ineffectiveness and instability, making it challenging for users to guarantee the quality of SR results. However, our work takes this randomness as an opportunity: fully analyzing and leveraging it leads to the construction of an effective plug-and-play sampling method that owns the potential to benefit a series of diffusion-based SR methods. More in detail, we propose to steadily sample high-quality SR images from pre-trained diffusion-based SR models by solving diffusion ordinary differential equations (diffusion ODEs) with optimal boundary conditions (BCs) and analyze the characteristics between the choices of BCs and their corresponding SR results. Our analysis shows the route to obtain an approximately optimal BC via an efficient exploration in the whole space. The quality of SR results sampled by the proposed method with fewer steps outperforms the quality of results sampled by current methods with randomness from the same pre-trained diffusion-based SR model, which means that our sampling method ''boosts'' current diffusion-based SR models without any additional training.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19196"} +{"video_file": "C36v8541Ns_39018287.mp4", "openreview_id": "C36v8541Ns", "slideslive_id": 39018287, "venue": "iclr2024", "title": "The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing", "status": "Poster", "keywords": "Lipschitz;randomized smoothing;margin;variance;deep learning", "tldr": "A study on the interplays between Lipschitz constant and randomized smoothing procedure.", "abstract": "Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks. The certified radius in this context is a crucial indicator of the robustness of models. However how to design an efficient classifier with an associated certified radius? Randomized smoothing provides a promising framework by relying on noise injection into the inputs to obtain a smoothed and robust classifier. In this paper, we first show that the variance introduced by the Monte-Carlo sampling in the randomized smoothing procedure estimate closely interacts with two other important properties of the classifier, \\textit{i.e.} its Lipschitz constant and margin. More precisely, our work emphasizes the dual impact of the Lipschitz constant of the base classifier, on both the smoothed classifier and the empirical variance. To increase the certified robust radius, we introduce a different way to convert logits to probability vectors for the base classifier to leverage the variance-margin trade-off. We leverage the use of Bernstein's concentration inequality along with enhanced Lipschitz bounds for randomized smoothing. Experimental results show a significant improvement in certified accuracy compared to current state-of-the-art methods. 
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/19189"} +{"video_file": "C4BikKsgmK_39018158.mp4", "openreview_id": "C4BikKsgmK", "slideslive_id": 39018158, "venue": "iclr2024", "title": "Str2Str: A Score-based Framework for Zero-shot Protein Conformation Sampling", "status": "Poster", "keywords": "proteins;conformational sampling;diffusion models;score-based models;generative modeling;equivariant network", "tldr": "We built a score-based protein conformation sampling method via equivariant structure-to-structure translation with no reliance of simulation data", "abstract": "The dynamic nature of proteins is crucial for determining their biological functions and properties, for which Monte Carlo (MC) and molecular dynamics (MD) simulations stand as predominant tools to study such phenomena. By utilizing empirically derived force fields, MC or MD simulations explore the conformational space through numerically evolving the system via Markov chain or Newtonian mechanics. However, the high-energy barrier of the force fields can hamper the exploration of both methods by the rare event, resulting in inadequately sampled ensemble without exhaustive running. Existing learning-based approaches perform direct sampling yet heavily rely on target-specific simulation data for training, which suffers from high data acquisition cost and poor generalizability. Inspired by simulated annealing, we propose Str2Str, a novel structure-to-structure translation framework capable of zero-shot conformation sampling with roto-translation equivariant property. Our method leverages an amortized denoising score matching objective trained on general crystal structures and has no reliance on simulation data during both training and inference. Experimental results across several benchmarking protein systems demonstrate that Str2Str outperforms previous state-of-the-art generative structure prediction models and can be orders of magnitude faster compared with long MD simulations.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/19188"} +{"video_file": "C4CxQmp9wc_39018991.mp4", "openreview_id": "C4CxQmp9wc", "slideslive_id": 39018991, "venue": "iclr2024", "title": "Jumanji: a Diverse Suite of Scalable Reinforcement Learning Environments in JAX", "status": "Poster", "keywords": "reinforcement learning;jax;combinatorial;research", "tldr": "We introduce Jumanji, an open-source, diverse suite of RL environments, designed to be fast, flexible, and scalable.", "abstract": "Open-source reinforcement learning (RL) environments have played a crucial role in driving progress in the development of AI algorithms. In modern RL research, there is a need for simulated environments that are performant, scalable, and modular to enable their utilization in a wider range of potential real-world applications. Therefore, we present Jumanji, a suite of diverse RL environments specifically designed to be fast, flexible, and scalable. Jumanji provides a suite of environments focusing on combinatorial problems frequently encountered in industry, as well as challenging general decision-making tasks. 
By leveraging the efficiency of JAX and hardware accelerators like GPUs and TPUs, Jumanji enables rapid iteration of research ideas and large-scale experimentation, ultimately empowering more capable agents. Unlike existing RL environment suites, Jumanji is highly customizable, allowing users to tailor the initial state distribution and problem complexity to their needs. Furthermore, we provide actor-critic baselines for each environment, accompanied by preliminary findings on scaling and generalization scenarios. Jumanji aims to set a new standard for speed, adaptability, and scalability of RL environments.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/19187"} +{"video_file": "CAqdG2dy5s_39018286.mp4", "openreview_id": "CAqdG2dy5s", "slideslive_id": 39018286, "venue": "iclr2024", "title": "Graph-based Virtual Sensing from Sparse and Partial Multivariate Observations", "status": "Poster", "keywords": "Spatio-temporal data;time series;virtual sensing;imputation;graph neural networks;deep learning", "tldr": "We present a novel framework for sparse multivariate virtual sensing that leverages dependencies between the target variable and available covariates.", "abstract": "Virtual sensing techniques allow for inferring signals at new unmonitored locations by exploiting spatio-temporal measurements coming from physical sensors at different locations. However, as the sensor coverage becomes sparse due to costs or other constraints, physical proximity cannot be used to support interpolation. In this paper, we overcome this challenge by leveraging dependencies between the target variable and a set of correlated variables (covariates) that can frequently be associated with each location of interest. From this viewpoint, covariates provide partial observability, and the problem consists of inferring values for unobserved channels by exploiting observations at other locations to learn how such variables can correlate. We introduce a novel graph-based methodology to exploit such relationships and design a graph deep learning architecture, named GgNet, implementing the framework. The proposed approach relies on propagating information over a nested graph structure that is used to learn dependencies between variables as well as locations. GgNet is extensively evaluated under different virtual sensing scenarios, demonstrating higher reconstruction accuracy compared to the state-of-the-art.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19185"} +{"video_file": "CK5Hfb5hBG_39018917.mp4", "openreview_id": "CK5Hfb5hBG", "slideslive_id": 39018917, "venue": "iclr2024", "title": "Channel Vision Transformers: An Image Is Worth 1 x 16 x 16 Words", "status": "Poster", "keywords": "vision transformer;representation learning;hyper spectral imaging", "tldr": "ChannelViT facilitates robust representation learning across different input channels.", "abstract": "Vision Transformer (ViT) has emerged as a powerful architecture in the realm of modern computer vision. However, its application in certain imaging fields, such as microscopy and satellite imaging, presents unique challenges. In these domains, images often contain multiple channels, each carrying semantically distinct and independent information. Furthermore, the model must demonstrate robustness to sparsity in input channels, as they may not be densely available during training or testing. 
In this paper, we propose a modification to the ViT architecture that enhances reasoning across the input channels and introduce Hierarchical Channel Sampling (HCS) as an additional regularization technique to ensure robustness when only partial channels are presented during test time. Our proposed model, ChannelViT, constructs patch tokens independently from each input channel and utilizes a learnable channel embedding that is added to the patch tokens, similar to positional embeddings. We evaluate the performance of ChannelViT on ImageNet, JUMP-CP (microscopy cell imaging), and So2Sat (satellite imaging). Our results show that ChannelViT outperforms ViT on classification tasks and generalizes well, even when a subset of input channels is used during testing. Across our experiments, HCS proves to be a powerful regularizer, independent of the architecture employed, suggesting itself as a straightforward technique for robust ViT training. Lastly, we find that ChannelViT generalizes effectively even when there is limited access to all channels during training, highlighting its potential for multi-channel imaging under real-world conditions with sparse sensors. Our code is available at https://github.com/insitro/ChannelViT.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19178"} +{"video_file": "CYmF38ysDa_39018889.mp4", "openreview_id": "CYmF38ysDa", "slideslive_id": 39018889, "venue": "iclr2024", "title": "FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets", "status": "Spotlight", "keywords": "large language models;language model evaluation;natural language processing", "tldr": "We introduce fine-grained language model evaluation based on alignment skill sets to measure the performance of various LLMs.", "abstract": "Evaluation of Large Language Models (LLMs) is challenging because instruction-following necessitates alignment with human values and the required set of skills varies depending on the instruction. However, previous studies have mainly focused on coarse-grained evaluation (i.e. overall preference-based evaluation), which limits interpretability since it does not consider the nature of user instructions that require instance-wise skill composition. In this paper, we introduce FLASK (Fine-grained Language Model Evaluation based on Alignment Skill Sets), a fine-grained evaluation protocol for both human-based and model-based evaluation which decomposes coarse-level scoring to a skill set-level scoring for each instruction. We experimentally observe that the fine-graininess of evaluation is crucial for attaining a holistic view of model performance and increasing the reliability of the evaluation. 
Using FLASK, we compare multiple open-source and proprietary LLMs and observe a high correlation between model-based and human-based evaluations.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/19170"} +{"video_file": "CdjnzWsQax_39018273.mp4", "openreview_id": "CdjnzWsQax", "slideslive_id": 39018273, "venue": "iclr2024", "title": "Generative Learning for Financial Time Series with Irregular and Scale-Invariant Patterns", "status": "Spotlight", "keywords": "generative model;time series pattern recognition;diffusion model;financial time series", "tldr": "We develop a novel generative framework, FTS-Diffusion, specifically for financial time series generation, exploring the underlying irregular and scale-invariant patterns.", "abstract": "Limited data availability poses a major obstacle in training deep learning models for financial applications. Synthesizing financial time series to augment real-world data is challenging due to the irregular and scale-invariant patterns uniquely associated with financial time series - temporal dynamics that repeat with varying duration and magnitude. Such dynamics cannot be captured by existing approaches, which often assume regularity and uniformity in the underlying data. We develop a novel generative framework called FTS-Diffusion to model irregular and scale-invariant patterns that consists of three modules. First, we develop a scale-invariant pattern recognition algorithm to extract recurring patterns that vary in duration and magnitude. Second, we construct a diffusion-based generative network to synthesize segments of patterns. Third, we model the temporal transition of patterns in order to aggregate the generated segments. Extensive experiments show that FTS-Diffusion generates synthetic financial time series highly resembling observed data, outperforming state-of-the-art alternatives. Two downstream experiments demonstrate that augmenting real-world data with synthetic data generated by FTS-Diffusion reduces the error of stock market prediction by up to 17.9%. To the best of our knowledge, this is the first work on generating intricate time series with irregular and scale-invariant patterns, addressing data limitation issues in finance.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19166"} +{"video_file": "ChHx5ORqF0_39017042.mp4", "openreview_id": "ChHx5ORqF0", "slideslive_id": 39017042, "venue": "iclr2024", "title": "Transferring Labels to Solve Annotation Mismatches Across Object Detection Datasets", "status": "Poster", "keywords": "object detection;data-centric AI;label translation;dataset improvements", "tldr": "We introduce the label translation problem for object detection, where we modify the annotations of a labeled source dataset to match the label protocols in a target dataset.", "abstract": "In object detection, varying annotation protocols across datasets can result in annotation mismatches, leading to inconsistent class labels and bounding regions. Addressing these mismatches typically involves manually identifying common trends and fixing the corresponding bounding boxes and class labels. To alleviate this laborious process, we introduce the label transfer problem in object detection. Here, the goal is to transfer bounding boxes from one or more source datasets to match the annotation style of a target dataset. 
We propose a data-centric approach, Label-Guided Pseudo-Labeling (LGPL), that improves downstream detectors in a manner agnostic to the detector learning algorithms and model architectures. Validating across four object detection scenarios, defined over seven different datasets and three different architectures, we show that transferring labels for a target task via LGPL consistently improves the downstream detection in every setting, on average by\n1.88\nmAP and 2.65 AP\n75\n. Most importantly, we find that when training with multiple labeled datasets, carefully addressing annotation mismatches with LGPL alone can improve downstream object detection better than off-the-shelf supervised domain adaptation techniques that align instance features.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/19161"} +{"video_file": "CtOA9aN8fr_39018271.mp4", "openreview_id": "CtOA9aN8fr", "slideslive_id": 39018271, "venue": "iclr2024", "title": "Effective pruning of web-scale datasets based on complexity of concept clusters", "status": "Poster", "keywords": "pruning;large-scale;data curation;concept-based;LAION;DataComp", "tldr": "We propose a pruning method where we aim to obtain optimal dataset coverage by assessing sample complexity; we report SotA results on the DataComp Medium benchmark and outperform regular OpenCLIP training on LAION with significantly less data.", "abstract": "Utilizing massive web-scale datasets has led to unprecedented performance gains in machine learning models, but also imposes outlandish compute requirements for their training. In order to improve training and data efficiency, we here push the limits of pruning large-scale multimodal datasets for training CLIP-style models. Today\u2019s most effective pruning method on ImageNet clusters data samples into separate concepts according to their embedding and prunes away the most proto- typical samples. We scale this approach to LAION and improve it by noting that the pruning rate should be concept-specific and adapted to the complexity of the concept. Using a simple and intuitive complexity measure, we are able to reduce the training cost to a quarter of regular training. More specifically, we are able to outperform the LAION-trained OpenCLIP-ViT-B/32 model on ImageNet zero-shot accuracy by 1.1p.p. while only using 27.7% of the data and training compute. 
On the DataComp Medium benchmark, we achieve a new state-of-the-art ImageNet zero-shot accuracy and a competitive average zero-shot accuracy on 38 evaluation tasks.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19159"} +{"video_file": "CvYBvgEUK9_39018270.mp4", "openreview_id": "CvYBvgEUK9", "slideslive_id": 39018270, "venue": "iclr2024", "title": "On Penalty Methods for Nonconvex Bilevel Optimization and First-Order Stochastic Approximation", "status": "Spotlight", "keywords": "Bilevel-Optimization;Penalty Methods;Landscape Analysis;Non-Asymptotic Analysis;First-Order Methods", "tldr": "We establish a strong connection between the penalty function and the hyper-objective by explicitly characterizing the conditions under which the values and derivatives of the two must be close", "abstract": "In this work, we study first-order algorithms for solving Bilevel Optimization (BO) where the objective functions are smooth but possibly nonconvex in both levels and the variables are restricted to closed convex sets. As a first step, we study the landscape of BO through the lens of penalty methods, in which the upper- and lower-level objectives are combined in a weighted sum with penalty parameter $\\sigma > 0$. In particular, we establish a strong connection between the penalty function and the hyper-objective by explicitly characterizing the conditions under which the values and derivatives of the two must be $O(\\sigma)$-close. A by-product of our analysis is the explicit formula for the gradient of hyper-objective when the lower-level problem has multiple solutions under minimal conditions, which could be of independent interest. Next, viewing the penalty formulation as $O(\\sigma)$-approximation of the original BO, we propose first-order algorithms that find an $\\epsilon$-stationary solution by optimizing the penalty formulation with $\\sigma = O(\\epsilon)$. When the perturbed lower-level problem uniformly satisfies the {\\it small-error} proximal error-bound (EB) condition, we propose a first-order algorithm that converges to an $\\epsilon$-stationary point of the penalty function using in total $O(\\epsilon^{-7})$ accesses to first-order stochastic gradient oracles. Under an additional assumption on stochastic oracles, we show that the algorithm can be implemented in a fully {\\it single-loop} manner, {\\it i.e.,} with $O(1)$ samples per iteration, and achieves the improved oracle-complexity of $O(\\epsilon^{-5})$.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/19157"} +{"video_file": "Cy5v64DqEF_39018269.mp4", "openreview_id": "Cy5v64DqEF", "slideslive_id": 39018269, "venue": "iclr2024", "title": "Idempotence and Perceptual Image Compression", "status": "Spotlight", "keywords": "perceptual image compression;neural image compression", "tldr": "Indempotence constraint inversion of unconditional generative model achieve perceptual image compression.", "abstract": "Idempotence is the stability of image codec to re-compression. At the first glance, it is unrelated to perceptual image compression. However, we find that theoretically: 1) Conditional generative model-based perceptual codec satisfies idempotence; 2) Unconditional generative model with idempotence constraint is equivalent to conditional generative codec. 
Based on this newfound equivalence, we propose a new paradigm of perceptual image codec by inverting unconditional generative model with idempotence constraints. Our codec is theoretically equivalent to conditional generative codec, and it does not require training new models. Instead, it only requires a pre-trained mean-square-error codec and unconditional generative model. Empirically, we show that our proposed approach outperforms state-of-the-art methods such as HiFiC and ILLM, in terms of Fr\u00e9chet Inception Distance (FID). The source code is provided in https://github.com/tongdaxu/Idempotence-and-Perceptual-Image-Compression.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19156"} +{"video_file": "D7KJmfEDQP_39018899.mp4", "openreview_id": "D7KJmfEDQP", "slideslive_id": 39018899, "venue": "iclr2024", "title": "Model Merging by Uncertainty-Based Gradient Matching", "status": "Poster", "keywords": "Model Merging;Gradient Matching;Language Modeling;Model Editing;Transfer Learning", "tldr": "We connect model merging to gradient matching, show that uncertainty-based reduction of gradient mismatch can improve the performance of the merged model, and connections to several existing methods.", "abstract": "Models trained on different datasets can be merged by a weighted-averaging of their parameters, but why does it work and when can it fail? Here, we connect the inaccuracy of weighted-averaging to mismatches in the gradients and propose a new uncertainty-based scheme to improve the performance by reducing the mismatch. The connection also reveals implicit assumptions in other schemes such as averaging, task arithmetic, and Fisher-weighted averaging. Our new method gives consistent improvements for large language models and vision transformers, both in terms of performance and robustness to hyperparameters.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/19152"} +{"video_file": "DGez4B2a6Y_39018262.mp4", "openreview_id": "DGez4B2a6Y", "slideslive_id": 39018262, "venue": "iclr2024", "title": "A Plug-and-Play Image Registration Network", "status": "Poster", "keywords": "deformable image registration;plug-and-play priors;deep equilibrium models;iterative algorithms", "tldr": "We propose the first plug-and-play methods for deformable image registration: (a) PIRATE that uses deep denoiser trained on registration fields as prior, and (b) PIRATE+ that improves PIRATE by using deep equilibrium models to fine-tune the prior.", "abstract": "Deformable image registration (DIR) is an active research topic in biomedical imaging. There is a growing interest in developing DIR methods based on deep learning (DL). A traditional DL approach to DIR is based on training a convolutional neural network (CNN) to estimate the registration field between two input images. While conceptually simple, this approach comes with a limitation that it exclusively relies on a pre-trained CNN without explicitly enforcing fidelity between the registered image and the reference. We present plug-and-play image registration network (PIRATE) as a new DIR method that addresses this issue by integrating an explicit data-fidelity penalty and a CNN prior. PIRATE pre-trains a CNN denoiser on the registration field and \"plugs\" it into an iterative method as a regularizer. We additionally present PIRATE+ that fine-tunes the CNN prior in PIRATE using deep equilibrium models (DEQ). 
PIRATE+ interprets the fixed-point iteration of PIRATE as a network with effectively infinite layers and then trains the resulting network end-to-end, enabling it to learn more task-specific information and boosting its performance. Our numerical results on OASIS and CANDI datasets show that our methods achieve state-of-the-art performance on DIR.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/19145"} +{"video_file": "DJZDgMOLXQ_39018260.mp4", "openreview_id": "DJZDgMOLXQ", "slideslive_id": 39018260, "venue": "iclr2024", "title": "Prediction Error-based Classification for Class-Incremental Learning", "status": "Poster", "keywords": "continual learning;class-incremental learning", "tldr": "We introduce Prediction Error-based Classification, an approach for class-incremental learning that differs from traditional discriminative and generative paradigms, and demonstrate its strong performance.", "abstract": "Class-incremental learning (CIL) is a particularly challenging variant of continual learning, where the goal is to learn to discriminate between all classes presented in an incremental fashion. Existing approaches often suffer from excessive forgetting and imbalance of the scores assigned to classes that have not been seen together during training. In this study, we introduce a novel approach, Prediction Error-based Classification (PEC), which differs from traditional discriminative and generative classification paradigms. PEC computes a class score by measuring the prediction error of a model trained to replicate the outputs of a frozen random neural network on data from that class. The method can be interpreted as approximating a classification rule based on Gaussian Process posterior variance. PEC offers several practical advantages, including sample efficiency, ease of tuning, and effectiveness even when data are presented one class at a time. Our empirical results show that PEC performs strongly in single-pass-through-data CIL, outperforming other rehearsal-free baselines in all cases and rehearsal-based methods with moderate replay buffer size in most cases across multiple benchmarks.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/19144"} +{"video_file": "DLJznSp6X3_39018259.mp4", "openreview_id": "DLJznSp6X3", "slideslive_id": 39018259, "venue": "iclr2024", "title": "ReLoRA: High-Rank Training Through Low-Rank Updates", "status": "Poster", "keywords": "language models;pre-training;training efficiency;parameter-efficient fine-tuning;lora", "tldr": "ReLoRA is a parameter-efficient method that can be used during model pre-training stage. We demonstrate it's efficacy by training LMs with up to 1B parameters and show significant speed and memory improvements", "abstract": "Despite the dominance and effectiveness of scaling, resulting in large networks with hundreds of billions of parameters, the necessity to train overparameterized models remains poorly understood, while training costs grow exponentially. In this paper, we explore parameter-efficient training techniques as an approach to training large neural networks. We introduce a novel method called ReLoRA, which utilizes low-rank updates to train high-rank networks. We apply ReLoRA to training transformer language models with up to 1.3B parameters and demonstrate comparable performance to regular neural network training. 
ReLoRA saves up to 5.5Gb of RAM per GPU and improves training speed by 9-40% depending on the model size and hardware setup. Our findings show the potential of parameter- efficient techniques for large-scale pre-training. Our code is available on GitHub.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19143"} +{"video_file": "DZUzOKE6og_39018655.mp4", "openreview_id": "DZUzOKE6og", "slideslive_id": 39018655, "venue": "iclr2024", "title": "HypeBoy: Generative Self-Supervised Representation Learning on Hypergraphs", "status": "Poster", "keywords": "Hypergraph;Self-supervised learning;Hypergraph neural network", "tldr": "We propose a hypergraph generative self-supervised learning strategy.", "abstract": "Hypergraphs are marked by complex topology, expressing higher-order interactions among multiple nodes with hyperedges, and better capturing the topology is essential for effective representation learning. Recent advances in generative self-supervised learning (SSL) suggest that hypergraph neural networks (HNNs) learned from generative self-supervision have the potential to effectively encode the complex hypergraph topology. Designing a generative SSL strategy for hypergraphs, however, is not straightforward. Questions remain with regard to its generative SSL task, connection to downstream tasks, and empirical properties of learned representations. In light of the promises and challenges, we propose a novel generative SSL strategy for hypergraphs. We first formulate a generative SSL task on hypergraphs, hyperedge filling, and highlight its theoretical connection to node classification. Based on the generative SSL task, we propose a hypergraph SSL method, HYPEBOY. HYPEBOY learns effective general-purpose hypergraph representations, outperforming 15 baseline methods across 11 benchmark datasets. To our knowledge, this is the first study on generative SSL on hypergraphs, and we demonstrate its theoretical and empirical strengths for hypergraph representation learning.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19139"} +{"video_file": "DfPtC8uSot_39018980.mp4", "openreview_id": "DfPtC8uSot", "slideslive_id": 39018980, "venue": "iclr2024", "title": "Bounding the Expected Robustness of Graph Neural Networks Subject to Node Feature Attacks", "status": "Poster", "keywords": "Graph Neural Networks;Adversarial Robustness", "tldr": "We define and upper-bound the expected adversarial robustness of Graph Neural Networks, which allows us to propose the more robust Graph Convolutional Orthonormal Robust Networks (GCORN).", "abstract": "Graph Neural Networks (GNNs) have demonstrated state-of-the-art performance in various graph representation learning tasks. Recently, studies revealed their vulnerability to adversarial attacks. In this work, we theoretically define the concept of expected robustness in the context of attributed graphs and relate it to the classical definition of adversarial robustness in the graph representation learning literature. Our definition allows us to derive an upper bound of the expected robustness of Graph Convolutional Networks (GCNs) and Graph Isomorphism Networks subject to node feature attacks. 
Building on these findings, we connect the expected robustness of GNNs to the orthonormality of their weight matrices and consequently propose an attack-independent, more robust variant of the GCN, called the Graph Convolutional Orthonormal Robust Networks (GCORNs). We further introduce a probabilistic method to estimate the expected robustness, which allows us to evaluate the effectiveness of GCORN on several real-world datasets. Experimental results showed that GCORN outperforms available defense methods. Our code is publicly available at: https://github.com/Sennadir/GCORN.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/19134"} +{"video_file": "Diq6urt3lS_39018252.mp4", "openreview_id": "Diq6urt3lS", "slideslive_id": 39018252, "venue": "iclr2024", "title": "Cleanba: A Reproducible and Efficient Distributed Reinforcement Learning Platform", "status": "Poster", "keywords": "Distributed;Deep Reinforcement Learning;Distributed Deep Reinforcement Learning;Reproducibility", "tldr": "IMPALA has reproducibility issues; we propose a more reproducible architecture called Cleanba; our Atari-57 experiments show Cleanba PPO and IMPALA variants outperform torchbeast and moolib's IMPALA.", "abstract": "Distributed Deep Reinforcement Learning (DRL) aims to leverage more computational resources to train autonomous agents with less training time. Despite recent progress in the field, reproducibility issues have not been sufficiently explored. This paper first shows that the typical actor-learner framework can have reproducibility issues even if hyperparameters are controlled. We then introduce Cleanba, a new open-source platform for distributed DRL that proposes a highly reproducible architecture. Cleanba implements highly optimized distributed variants of PPO and IMPALA. Our Atari experiments show that these variants can obtain equivalent or higher scores than strong IMPALA baselines in moolib and torchbeast and PPO baseline in CleanRL. However, Cleanba variants present 1) shorter training time and 2) more reproducible learning curves in different hardware settings.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19132"} +{"video_file": "DmD1wboID9_39018248.mp4", "openreview_id": "DmD1wboID9", "slideslive_id": 39018248, "venue": "iclr2024", "title": "BayesPrompt: Prompting Large-Scale Pre-Trained Language Models on Few-shot Inference via Debiased Domain Abstraction", "status": "Poster", "keywords": "Prompt;Pre-Trained;Few-shot;Debiased;Domain Abstraction", "tldr": "Abstracting training domains in a debiased manner to generate discriminative prompts, which provide unambiguous guidance for PLMs.", "abstract": "As a novel and effective fine-tuning paradigm based on large-scale pre-trained language models (PLMs), prompt-tuning aims to reduce the gap between downstream tasks and pre-training objectives. While prompt-tuning has yielded continuous advancements in various tasks, such an approach still remains a persistent defect: prompt-tuning methods fail to generalize to specific few-shot patterns. From the perspective of distribution analyses, we disclose that the intrinsic issues behind the phenomenon are the over-multitudinous conceptual knowledge contained in PLMs and the abridged knowledge for target downstream domains, which jointly result in that PLMs mis-locate the knowledge distributions corresponding to the target domains in the universal knowledge embedding space.
To this end, we intuitively explore to approximate the unabridged target domains of downstream tasks in a debiased manner, and then abstract such domains to generate discriminative prompts, thereby providing the de-ambiguous guidance for PLMs. Guided by such an intuition, we propose a simple yet effective approach, namely BayesPrompt, to learn prompts that contain the domain discriminative information against the interference from domain-irrelevant knowledge. BayesPrompt primitively leverages known distributions to approximate the debiased factual distributions of target domains and further uniformly samples certain representative features from the approximated distributions to generate the ultimate prompts for PLMs. We provide theoretical insights with the connection to domain adaptation. Empirically, our method achieves state-of-the-art performance on benchmarks.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/19127"} +{"video_file": "Dnc3paMqDE_39017322.mp4", "openreview_id": "Dnc3paMqDE", "slideslive_id": 39017322, "venue": "iclr2024", "title": "DeepSPF: Spherical SO(3)-Equivariant Patches for Scan-to-CAD Estimation", "status": "Poster", "keywords": "3D Point Cloud Representation;3D Point Cloud Registration;Scan-to-CAD;Spherical Gaussians;Equivariant", "tldr": "We present Learnable Spherical Patch Fields (DeepSPF), a versatile, SO(3)-equivariant, and easily integrable backbone suitable for instance-based point networks", "abstract": "Recently, SO(3)-equivariant methods have been explored for 3D reconstruction via Scan-to-CAD. Despite significant advancements attributed to the unique characteristics of 3D data, existing SO(3)-equivariant approaches often fall short in seamlessly integrating local and global contextual information in a widely generalizable manner. Our contributions in this paper are threefold. First, we introduce Spherical Patch Fields, a representation technique designed for patch-wise, SO(3)-equivariant 3D point clouds, anchored theoretically on the principles of Spherical Gaussians. Second, we present the Patch Gaussian Layer, designed for the adaptive extraction of local and global contextual information from resizable point cloud patches. Culminating our contributions, we present Learnable Spherical Patch Fields (DeepSPF) \u2013 a versatile and easily integrable backbone suitable for instance-based point networks. 
Through rigorous evaluations, we demonstrate significant enhancements in Scan-to-CAD performance for point cloud registration, retrieval, and completion: a significant reduction in the rotation error of existing registration methods, an improvement of up to 17% in the Top-1 error for retrieval tasks, and a notable reduction of up to 30% in the Chamfer Distance for completion models, all attributable to the incorporation of DeepSPF.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19126"} +{"video_file": "DqziS8DG4M_39018244.mp4", "openreview_id": "DqziS8DG4M", "slideslive_id": 39018244, "venue": "iclr2024", "title": "Point2SSM: Learning Morphological Variations of Anatomies from Point Clouds", "status": "Spotlight", "keywords": "Unsupervised learning;global correspondence;point cloud;statistical shape modeling", "tldr": "An unsupervised approach to learn correspondence-based statistical shape models of anatomy directly from point clouds.", "abstract": "We present Point2SSM, a novel unsupervised learning approach for constructing correspondence-based statistical shape models (SSMs) directly from raw point clouds. SSM is crucial in clinical research, enabling population-level analysis of morphological variation in bones and organs. Traditional methods of SSM construction have limitations, including the requirement of noise-free surface meshes or binary volumes, reliance on assumptions or templates, and prolonged inference times due to simultaneous optimization of the entire cohort. Point2SSM overcomes these barriers by providing a data-driven solution that infers SSMs directly from raw point clouds, reducing inference burdens and increasing applicability as point clouds are more easily acquired. While deep learning on 3D point clouds has seen success in unsupervised representation learning and shape correspondence, its application to anatomical SSM construction is largely unexplored. We conduct a benchmark of state-of-the-art point cloud deep networks on the SSM task, revealing their limited robustness to clinical challenges such as noisy, sparse, or incomplete input and limited training data. Point2SSM addresses these issues through an attention-based module, providing effective correspondence mappings from learned point features. Our results demonstrate that the proposed method significantly outperforms existing networks in terms of accurate surface sampling and correspondence, better capturing population-level statistics. The source code is provided at https://github.com/jadie1/Point2SSM.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19123"} +{"video_file": "DrhZneqz4n_39018245.mp4", "openreview_id": "DrhZneqz4n", "slideslive_id": 39018245, "venue": "iclr2024", "title": "Single Motion Diffusion", "status": "Spotlight", "keywords": "Deep Learning;Motion synthesis;Animation;Single Instance Learning;Generative models", "tldr": "We present a model designed to learn the internal motifs of a single motion sequence with arbitrary topology and synthesize diverse motions that are faithful to the learned motifs.", "abstract": "Synthesizing realistic animations of humans, animals, and even imaginary creatures, has long been a goal for artists and computer graphics professionals.
Compared to the imaging domain, which is rich with large available datasets, the number of data instances for the motion domain is limited, particularly for the animation of animals and exotic creatures (e.g., dragons), which have unique skeletons and motion patterns. In this work, we introduce SinMDM, a Single Motion Diffusion Model. It is designed to learn the internal motifs of a single motion sequence with arbitrary topology and synthesize a variety of motions of arbitrary length that remain faithful to the learned motifs. We harness the power of diffusion models and present a denoising network explicitly designed for the task of learning from a single input motion. SinMDM is crafted as a lightweight architecture, which avoids overfitting by using a shallow network with local attention layers that narrow the receptive field and encourage motion diversity. Our work applies to multiple contexts, including spatial and temporal in-betweening, motion expansion, style transfer, and crowd animation. Our results show that SinMDM outperforms existing methods both qualitatively and quantitatively. Moreover, while prior network-based approaches require additional training for different applications, SinMDM supports these applications during inference. Our project page, which includes links to the code and trained models, is accessible at https://sinmdm.github.io/SinMDM-page.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19122"} +{"video_file": "E1NxN5QMOE_39018241.mp4", "openreview_id": "E1NxN5QMOE", "slideslive_id": 39018241, "venue": "iclr2024", "title": "Enhancing Group Fairness in Online Settings Using Oblique Decision Forests", "status": "Spotlight", "keywords": "Fairness;Online Learning;Oblique Decision Trees", "tldr": "We present a fairness algorithm, Aranyani, to achieve group fairness when data instances arrive one at a time in an online setting.", "abstract": "Fairness, especially group fairness, is an important consideration in the context of machine learning systems. The most commonly adopted group fairness-enhancing techniques are in-processing methods that rely on a mixture of a fairness objective (e.g., demographic parity) and a task-specific objective (e.g., cross-entropy) during the training process. However, when data arrives in an online fashion \u2013 one instance at a time \u2013 optimizing such fairness objectives poses several challenges. In particular, group fairness objectives are defined using expectations of predictions across different demographic groups. In the online setting, where the algorithm has access to a single instance at a time, estimating the group fairness objective requires additional storage and significantly more computation (e.g., forward/backward passes) than the task-specific objective at every time step. In this paper, we propose Aranyani, an ensemble of oblique decision trees, to make fair decisions in online settings. The hierarchical tree structure of Aranyani enables parameter isolation and allows us to efficiently compute the fairness gradients using aggregate statistics of previous decisions, eliminating the need for additional storage and forward/backward passes. We also present an efficient framework to train Aranyani and theoretically analyze several of its properties.
We conduct empirical evaluations on 5 publicly available benchmarks (including vision and language datasets) to show that Aranyani achieves a better accuracy-fairness trade-off compared to baseline approaches.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/19119"} +{"video_file": "E34AlVLN0v_39018628.mp4", "openreview_id": "E34AlVLN0v", "slideslive_id": 39018628, "venue": "iclr2024", "title": "Parallelizing non-linear sequential models over the sequence length", "status": "Poster", "keywords": "parallel algorithm;recurrent neural networks;neural ordinary differential equations;sequential models", "tldr": "We present a parallel algorithm to evaluate and train sequential models (e.g., RNN & NeuralODE) despite their inherent sequential nature", "abstract": "Sequential models, such as Recurrent Neural Networks and Neural Ordinary Differential Equations, have long suffered from slow training due to their inherent sequential nature. For many years this bottleneck has persisted, as many thought sequential models could not be parallelized. We challenge this long-held belief with our parallel algorithm that accelerates GPU evaluation of sequential models by up to 3 orders of magnitude without compromising output accuracy. The algorithm does not need any special structure in the sequential models' architecture, making it applicable to a wide range of architectures. Using our method, training sequential models can be more than 10 times faster than the common sequential method without any meaningful difference in the training results. Leveraging this accelerated training, we discovered the efficacy of the Gated Recurrent Unit in a long time series classification problem with 17k time samples. By overcoming the training bottleneck, our work serves as the first step to unlock the potential of non-linear sequential models for long sequence problems.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/19118"} +{"video_file": "EDPxCjXzSb_39018658.mp4", "openreview_id": "EDPxCjXzSb", "slideslive_id": 39018658, "venue": "iclr2024", "title": "Vision-by-Language for Training-Free Compositional Image Retrieval", "status": "Poster", "keywords": "Vision-Language Models;Large Language Models", "tldr": "A simple method using off-the-shelf foundation models for Composed Image Retrieval without any training", "abstract": "Given an image and a target modification (e.g. an image of the Eiffel tower and the text \u201cwithout people and at night-time\u201d), Compositional Image Retrieval (CIR) aims to retrieve the relevant target image in a database. While supervised approaches rely on annotating triplets that is costly (i.e. query image, textual modification, and target image), recent research sidesteps this need by using large-scale vision-language models (VLMs), performing Zero-Shot CIR (ZS-CIR). However, state-of-the-art approaches in ZS-CIR still require training task-specific, customized models over large amounts of image-text pairs. In this work, we propose to tackle CIR in a training-free manner via our Compositional Image Retrieval through Vision-by-Language (CIReVL), a simple, yet human-understandable and scalable pipeline that effectively recombines large-scale VLMs with large language models (LLMs).
By captioning the reference image using a pre-trained generative VLM and asking an LLM to recompose the caption based on the textual target modification for subsequent retrieval via e.g. CLIP, we achieve modular language reasoning. In four ZS-CIR benchmarks, we find competitive, in-part state-of-the-art performance - improving over supervised methods. Moreover, the modularity of CIReVL offers simple scalability without re-training, allowing us to both investigate scaling laws and bottlenecks for ZS-CIR while easily scaling up to in parts more than double of previously reported results. Finally, we show that CIReVL makes CIR human-understandable by composing image and text in a modular fashion in the language domain, thereby making it intervenable, allowing to post-hoc re-align failure cases. Code will be released upon acceptance.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19114"} +{"video_file": "EH2O3h7sBI_39018236.mp4", "openreview_id": "EH2O3h7sBI", "slideslive_id": 39018236, "venue": "iclr2024", "title": "Prompt Gradient Projection for Continual Learning", "status": "Spotlight", "keywords": "Continual Learning;Prompt Tuning;Gradient Projection;Anti-forgetting", "tldr": "Gradient projection against forgetting in prompt-tuning based continual learning method", "abstract": "Prompt-tuning has demonstrated impressive performance in continual learning by querying relevant prompts for each input instance, which can avoid the introduction of task identifier. Its forgetting is therefore reduced as this instance-wise query mechanism enables us to select and update only relevant prompts. In this paper, we further integrate prompt-tuning with gradient projection approach. Our observation is: prompt-tuning releases the necessity of task identifier for gradient projection method; and gradient projection provides theoretical guarantees against forgetting for prompt-tuning. This inspires a new prompt gradient projection approach (PGP) for continual learning. In PGP, we deduce that reaching the orthogonal condition for prompt gradient can effectively prevent forgetting via the self-attention mechanism in vision-transformer. The condition equations are then realized by conducting Singular Value Decomposition (SVD) on an element-wise sum space between input space and prompt space. We validate our method on diverse datasets and experiments demonstrate the efficiency of reducing forgetting both in class incremental, online class incremental, and task incremental settings. The code is available at https://github.com/JingyangQiao/prompt-gradient-projection.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/19110"} +{"video_file": "EHg5GDnyq1_39019258.mp4", "openreview_id": "EHg5GDnyq1", "slideslive_id": 39019258, "venue": "iclr2024", "title": "AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors", "status": "Poster", "keywords": "large language model;agent;multi-agent", "tldr": "We propose AgentVerse, a simple and effective multi-agent collaborative framework, and demonstrate its effectiveness via a bunch of quantitative and qualitative experiments.", "abstract": "Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks.
However, in real-world scenarios, cooperation among individuals is often required to enhance the efficiency and effectiveness of task accomplishment. Hence, inspired by human group dynamics, we propose a multi-agent framework AgentVerse that can effectively orchestrate a collaborative group of expert agents as a greater-than-the-sum-of-its-parts system. Our experiments demonstrate that AgentVerse can proficiently deploy multi-agent groups that outperform a single agent. Extensive experiments on text understanding, reasoning, coding, tool utilization, and embodied AI confirm the effectiveness of AgentVerse. Moreover, our analysis of agent interactions within AgentVerse reveals the emergence of specific collaborative behaviors, contributing to heightened group efficiency. We will release our codebase, AgentVerse, to further facilitate multi-agent research.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19109"} +{"video_file": "EHrvRNs2Y0_39018235.mp4", "openreview_id": "EHrvRNs2Y0", "slideslive_id": 39018235, "venue": "iclr2024", "title": "ResFields: Residual Neural Fields for Spatiotemporal Signals", "status": "Spotlight", "keywords": "neural fields;NeRF;reconstruction", "tldr": "A novel time-dependent layer for MLPs to improve capturing and reconstruction of spatiotemporal signals.", "abstract": "Neural fields, a category of neural networks trained to represent high-frequency signals, have gained significant attention in recent years due to their impressive performance in modeling complex 3D data, such as signed distance (SDFs) or radiance fields (NeRFs), via a single multi-layer perceptron (MLP). However, despite the power and simplicity of representing signals with an MLP, these methods still face challenges when modeling large and complex temporal signals due to the limited capacity of MLPs. In this paper, we propose an effective approach to address this limitation by incorporating temporal residual layers into neural fields, dubbed ResFields. It is a novel class of networks specifically designed to effectively represent complex temporal signals. We conduct a comprehensive analysis of the properties of ResFields and propose a matrix factorization technique to reduce the number of trainable parameters and enhance generalization capabilities. Importantly, our formulation seamlessly integrates with existing MLP-based neural fields and consistently improves results across various challenging tasks: 2D video approximation, dynamic shape modeling via temporal SDFs, and dynamic NeRF reconstruction. 
Lastly, we demonstrate the practical utility of ResFields by showcasing its effectiveness in capturing dynamic 3D scenes from sparse RGBD cameras of a lightweight capture system.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19108"} +{"video_file": "EJPIzl7mgc_39017887.mp4", "openreview_id": "EJPIzl7mgc", "slideslive_id": 39017887, "venue": "iclr2024", "title": "Adversarial Supervision Makes Layout-to-Image Diffusion Models Thrive", "status": "Poster", "keywords": "diffusion models;layout-to-image;domain generalization", "tldr": "We propose adversarial supervision and multistep unrolling strategy for improved layout-to-image diffusion models, and further demonstrate its utility on the domain generalization of semantic segmentation.", "abstract": "Despite the recent advances in large-scale diffusion models, little progress has been made on the layout-to-image (L2I) synthesis task. Current L2I models either suffer from poor editability via text or weak alignment between the generated image and the input layout. This limits their usability in practice. To mitigate this, we propose to integrate adversarial supervision into the conventional training pipeline of L2I diffusion models (ALDM). Specifically, we employ a segmentation-based discriminator which provides explicit feedback to the diffusion generator on the pixel-level alignment between the denoised image and the input layout. To encourage consistent adherence to the input layout over the sampling steps, we further introduce the multistep unrolling strategy. Instead of looking at a single timestep, we unroll a few steps recursively to imitate the inference process, and ask the discriminator to assess the alignment of denoised images with the layout over a certain time window. Our experiments show that ALDM enables layout faithfulness of the generated images, while allowing broad editability via text prompts. Moreover, we showcase its usefulness for practical applications: by synthesizing target distribution samples via text control, we improve domain generalization of semantic segmentation models by a large margin (~12 mIoU points).", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19106"} +{"video_file": "EXitynZhYn_39018233.mp4", "openreview_id": "EXitynZhYn", "slideslive_id": 39018233, "venue": "iclr2024", "title": "Open-ended VQA benchmarking of Vision-Language models by exploiting Classification datasets and their semantic hierarchy", "status": "Spotlight", "keywords": "Open-ended VQA;benchmark;Vision-Language;VL;Vision-Text;VLM;Vision-Language models;Image classification;Visual question answering;Text-generating VLM", "tldr": "We evaluate Vision-Language models by asking them open-ended questions about existing datasets like ImageNet.", "abstract": "The evaluation of text-generative vision-language models is a challenging yet crucial endeavor. By addressing the limitations of existing Visual Question Answering (VQA) benchmarks and proposing innovative evaluation methodologies, our research seeks to advance our understanding of these models\u2019 capabilities. We propose a novel VQA benchmark based on well-known visual classification datasets which allows a granular evaluation of text-generative vision-language models and their comparison with discriminative vision-language models.
To improve the assessment of coarse answers on fine-grained classification tasks, we suggest using the semantic hierarchy of the label space to ask automatically generated follow-up questions about the ground-truth category. Finally, we compare traditional NLP and LLM-based metrics for the problem of evaluating model predictions given ground-truth answers. We perform a human evaluation study upon which we base our decision on the final metric. We apply our benchmark to a suite of vision-language models and show a detailed comparison of their abilities on object, action, and attribute classification. Our contributions aim to lay the foundation for more precise and meaningful assessments, facilitating targeted progress in the exciting field of vision-language modeling.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/19102"} +{"video_file": "EanCFCwAjM_39017208.mp4", "openreview_id": "EanCFCwAjM", "slideslive_id": 39017208, "venue": "iclr2024", "title": "Cameras as Rays: Pose Estimation via Ray Diffusion", "status": "Oral", "keywords": "3D Computer Vision;Pose Estimation;Diffusion", "tldr": "Over-parameterize camera as a bundle of rays, which is a representation that can be predicted using a denoising diffusion model.", "abstract": "Estimating camera poses is a fundamental task for 3D reconstruction and remains challenging given sparsely sampled views (<10). In contrast to existing approaches that pursue top-down prediction of global parametrizations of camera extrinsics, we propose a distributed representation of camera pose that treats a camera as a bundle of rays. This representation allows for a tight coupling with spatial image features improving pose precision. We observe that this representation is naturally suited for set-level transformers and develop a regression-based approach that maps image patches to corresponding rays. To capture the inherent uncertainties in sparse-view pose inference, we adapt this approach to learn a denoising diffusion model which allows us to sample plausible modes while improving performance. Our proposed methods, both regression- and diffusion-based, demonstrate state-of-the-art performance on camera pose estimation on CO3D while generalizing to unseen object categories and in-the-wild captures.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19101"} +{"video_file": "EhrzQwsV4K_39018229.mp4", "openreview_id": "EhrzQwsV4K", "slideslive_id": 39018229, "venue": "iclr2024", "title": "L2MAC: Large Language Model Automatic Computer for Extensive Code Generation", "status": "Poster", "keywords": "Code Generation;Memory-augmented LLMs;Large Language Models (LLMs);LLM coder agent;LLM Agent;Stored-program computer;von neumann architecture", "tldr": "Introducing L2MAC, pioneering the first practical LLM-based stored-program automatic computer (von Neumann architecture) framework in an LLM-based multi-agent system, for solving complex tasks through generating extensive and consistent outputs.", "abstract": "Transformer-based large language models (LLMs) are constrained by the fixed context window of the underlying transformer architecture, hindering their ability to produce long and coherent outputs. 
Memory-augmented LLMs are a promising solution, but current approaches cannot handle long output generation tasks since they (1) only focus on reading memory and reduce its evolution to the concatenation of new memories or (2) use very specialized memories that cannot adapt to other domains. This paper presents L2MAC, the first practical LLM-based general-purpose stored-program automatic computer (von Neumann architecture) framework, an LLM-based multi-agent system, for long and consistent output generation. Its memory has two components: the instruction registry, which is populated with a prompt program to solve the user-given task, and a file store, which will contain the final and intermediate outputs. Each instruction in turn is executed by a separate LLM agent, whose context is managed by a control unit capable of precise memory reading and writing to ensure effective interaction with the entire file store. These components enable L2MAC to generate extensive outputs, bypassing the constraints of the finite context window while producing outputs that fulfill a complex user-specified task. We empirically demonstrate that L2MAC achieves state-of-the-art performance in generating large codebases for system design tasks, significantly outperforming other coding methods in implementing the detailed user-specified task; we show that L2MAC works for general-purpose extensive text-based tasks, such as writing an entire book; and we provide valuable insights into L2MAC's performance improvement over existing methods.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/19096"} +{"video_file": "EmQSOi1X2f_39019089.mp4", "openreview_id": "EmQSOi1X2f", "slideslive_id": 39019089, "venue": "iclr2024", "title": "Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation", "status": "Poster", "keywords": "language model;hallucination;trustworthy artificial intelligence;reasoning", "tldr": "We present a comprehensive analysis showing that state-of-the-art LLMs frequently produce self-contradictory hallucinations. We then design prompting methods that effectively detect and mitigate self-contradictions.", "abstract": "Large language models (large LMs) are susceptible to producing text that contains hallucinated content. An important instance of this problem is self-contradiction, where the LM generates two contradictory sentences within the same context. In this work, we present a comprehensive investigation into self-contradiction for various instruction-tuned LMs, covering evaluation, detection, and mitigation. Our primary evaluation task is open-domain text generation, but we also demonstrate the applicability of our approach to shorter question answering. Our analysis reveals the prevalence of self-contradictions, e.g., in 17.7% of all sentences produced by ChatGPT. We then propose a novel prompting-based framework designed to effectively detect and mitigate self-contradictions. Our detector achieves high accuracy, e.g., around 80% F1 score when prompting ChatGPT. The mitigation algorithm iteratively refines the generated text to remove contradictory information while preserving text fluency and informativeness. Importantly, our entire framework is applicable to black-box LMs and does not require retrieval of external knowledge. Rather, our method complements retrieval-based methods, as a large portion of self-contradictions (e.g., 35.2% for ChatGPT) cannot be verified using online text. 
Our approach is practically effective and has been released as a push-button tool to benefit the public at https://chatprotect.ai/.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/19094"} +{"video_file": "EnXJfQqy0K_39019003.mp4", "openreview_id": "EnXJfQqy0K", "slideslive_id": 39019003, "venue": "iclr2024", "title": "Building Cooperative Embodied Agents Modularly with Large Language Models", "status": "Poster", "keywords": "Large Language Models;Embodied Intelligence;Multi-Agent Cooperation;Human-AI Interaction;Communication", "tldr": "We present CoELA, a modular framework integrating LLMs to address the challenging multi-agent embodied cooperation problem with decentralized control, costly communication, and long-horizon multi-objective tasks.", "abstract": "In this work, we address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments. While previous research either presupposes a cost-free communication channel or relies on a centralized controller with shared observations, we harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework that integrates with perception, memory, and execution. Thus building a Cooperative Embodied Language Agent CoELA, who can plan, communicate, and cooperate with others to accomplish long-horizon tasks efficiently. Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication. Though current Open LMs like LLAMA-2 still underperform, we fine-tune a CoELA with data collected with our agents and show how they can achieve promising performance. We also conducted a user study for human-agent interaction and discovered that CoELA communicating in natural language can earn more trust and cooperate more effectively with humans. Our research underscores the potential of LLMs for future research in multi-agent cooperation. Videos can be found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/.", "primary_area": "applications to robotics, autonomy, planning", "site": "https://iclr.cc/virtual/2024/poster/19093"} +{"video_file": "EpVe8jAjdx_39018907.mp4", "openreview_id": "EpVe8jAjdx", "slideslive_id": 39018907, "venue": "iclr2024", "title": "Privileged Sensing Scaffolds Reinforcement Learning", "status": "Spotlight", "keywords": "reinforcement learning;model-based reinforcement learning;world models;robotics;privileged information;asymmetric learning;multimodality;perception;sensing", "tldr": "We study how privileged, training-time only observation streams can aid skill learning, and instantiate a MBRL algorithm that incorporates privileged sensing into all auxiliary, training-time components of RL to better train the policy.", "abstract": "We need to look at our shoelaces as we first learn to tie them but having mastered this skill, can do it from touch alone. We call this phenomenon \u201csensory scaffolding\u201d: observation streams that are not needed by a master might yet aid a novice learner. We consider such sensory scaffolding setups for training artificial agents. 
For example, a robot arm may need to be deployed with just a low-cost, robust, general-purpose camera; yet its performance may improve by having privileged training-time-only access to informative albeit expensive and unwieldy motion capture rigs or fragile tactile sensors. For these settings, we propose \u201cScaffolder\u201d, a reinforcement learning approach which effectively exploits privileged sensing in critics, world models, reward estimators, and other such auxiliary components that are only used at training time, to improve the target policy. For evaluating sensory scaffolding agents, we design a new \u201cS3\u201d suite of ten diverse simulated robotic tasks that explore a wide range of practical sensor setups. Agents must use privileged camera sensing to train blind hurdlers, privileged active visual perception to help robot arms overcome visual occlusions, privileged touch sensors to train robot hands, and more. Scaffolder easily outperforms relevant prior baselines and frequently performs comparably even to policies that have test-time access to the privileged sensors. Website: https://penn-pal-lab.github.io/scaffolder/", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19090"} +{"video_file": "EpYnZpDpsQ_39018862.mp4", "openreview_id": "EpYnZpDpsQ", "slideslive_id": 39018862, "venue": "iclr2024", "title": "Self-supervised Representation Learning from Random Data Projectors", "status": "Poster", "keywords": "Representation learning;Self-supervised learning;random data projections;domain-agnostic representation learning", "tldr": "We propose a new domain-agnostic self-supervised learning framework using random data projections", "abstract": "Self-supervised representation learning (SSRL) has advanced considerably by exploiting the transformation invariance assumption under artificially designed data augmentations. While augmentation-based SSRL algorithms push the boundaries of performance in computer vision and natural language processing, they are often not directly applicable to other data modalities, and can conflict with application-specific data augmentation constraints. This paper presents an SSRL approach that can be applied to any data modality and network architecture because it does not rely on augmentations or masking. Specifically, we show that high-quality data representations can be learned by reconstructing random data projections. We evaluate the proposed approach on a wide range of representation learning tasks that span diverse modalities and real-world applications. We show that it outperforms multiple state-of-the-art SSRL baselines. Due to its wide applicability and strong empirical results, we argue that learning from randomness is a fruitful research direction worthy of attention and further study.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19089"} +{"video_file": "EriR6Ec69a_39018226.mp4", "openreview_id": "EriR6Ec69a", "slideslive_id": 39018226, "venue": "iclr2024", "title": "Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control", "status": "Spotlight", "keywords": "Low-rank;sparsity;closed-loop;recurrent neural networks", "tldr": "Low-rank and sparse recurrent matrices of RNNs can help generalization to closed-loop settings and distribution-shifts", "abstract": "Developing autonomous agents that can interact with changing environments is an open challenge in machine learning. 
Robustness is particularly important in these settings as agents are often fit offline on expert demonstrations but deployed online where they must generalize to the closed feedback loop within the environment. In this work, we explore the application of recurrent neural networks to tasks of this nature and understand how a parameterization of their recurrent connectivity influences robustness in closed-loop settings. Specifically, we represent the recurrent connectivity as a function of rank and sparsity and show both theoretically and empirically that modulating these two variables has desirable effects on network dynamics. The proposed low-rank, sparse connectivity induces an interpretable prior on the network that proves to be most amenable for a class of models known as closed-form continuous-time neural networks (CfCs). We find that CfCs with fewer parameters can outperform their full-rank, fully-connected counterparts in the online setting under distribution shift. This yields memory-efficient and robust agents while opening a new perspective on how we can modulate network dynamics through connectivity.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19088"} +{"video_file": "F76bwRSLeK_39018221.mp4", "openreview_id": "F76bwRSLeK", "slideslive_id": 39018221, "venue": "iclr2024", "title": "Sparse Autoencoders Find Highly Interpretable Features in Language Models", "status": "Poster", "keywords": "language model;interpretability;representation learning;sparsity;dictionary learning;unsupervised learning", "tldr": "We use a scalable and unsupervised method called Sparse Autoencoders to find interpretable, monosemantic features in the residual streams of real LLMs (Pythia-70M/410M).", "abstract": "One of the roadblocks to a better understanding of neural networks' internals is \\textit{polysemanticity}, where neurons appear to activate in multiple, semantically distinct contexts. Polysemanticity prevents us from identifying concise, human-understandable explanations for what neural networks are doing internally. One hypothesised cause of polysemanticity is \\textit{superposition}, where neural networks represent more features than they have neurons by assigning features to an overcomplete set of directions in activation space, rather than to individual neurons. Here, we attempt to identify those directions, using sparse autoencoders to reconstruct the internal activations of a language model. These autoencoders learn sets of sparsely activating features that are more interpretable and monosemantic than directions identified by alternative approaches, where interpretability is measured by automated methods. Moreover, we show that with our learned set of features, we can pinpoint the features that are causally responsible for counterfactual behaviour on the indirect object identification task \\citep{wang2022interpretability} to a finer degree than previous decompositions. This work indicates that it is possible to resolve superposition in language models using a scalable, unsupervised method. 
Our method may serve as a foundation for future mechanistic interpretability work, which we hope will enable greater model transparency and steerability.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19081"} +{"video_file": "FDb2JQZsFH_39018218.mp4", "openreview_id": "FDb2JQZsFH", "slideslive_id": 39018218, "venue": "iclr2024", "title": "Attention-based Iterative Decomposition for Tensor Product Representation", "status": "Poster", "keywords": "tensor product representation;systematic generalization;compositional generalization;binding problem;structured representation learning;competitive attention", "tldr": "Slot-based competitive mechanism that effectively binds sequential features to the structured representations (roles and fillers) of TPR", "abstract": "In recent research, Tensor Product Representation (TPR) is applied for the systematic generalization task of deep neural networks by learning the compositional structure of data. However, such prior works show limited performance in discovering and representing the symbolic structure from unseen test data because their decomposition to the structural representations was incomplete. In this work, we propose an Attention-based Iterative Decomposition (AID) module designed to enhance the decomposition operations for the structured representations encoded from the sequential input data with TPR. Our AID can be easily adapted to any TPR-based model and provides enhanced systematic decomposition through a competitive attention mechanism between input features and structured representations. In our experiments, AID shows effectiveness by significantly improving the performance of TPR-based prior works on the series of systematic generalization tasks. Moreover, in the quantitative and qualitative evaluations, AID produces more compositional and well-bound structural representations than other works.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19077"} +{"video_file": "FHqAzWl2wE_39018217.mp4", "openreview_id": "FHqAzWl2wE", "slideslive_id": 39018217, "venue": "iclr2024", "title": "Multimarginal Generative Modeling with Stochastic Interpolants", "status": "Poster", "keywords": "multi-marginal;unsupervised learning;generative modeling;measure transport;optimal transport", "tldr": "We introduce a method to generalize flow-based and diffusion based generative models to map between K distributions instead of two, revealing multiway-correspondences between densities.", "abstract": "Given a set of K probability densities, we consider the multimarginal generative modeling problem of learning a joint distribution that recovers these densities as marginals. The structure of this joint distribution should identify multi-way correspondences among the prescribed marginals. We formalize an approach to this task within a generalization of the stochastic interpolant framework, leading to efficient learning algorithms built upon dynamical transport of measure. Our generative models are defined by velocity and score fields that can be characterized as the minimizers of simple quadratic objectives, and they are defined on a simplex that generalizes the time variable in the usual dynamical transport framework. The resulting transport on the simplex is influenced by all marginals, and we show that multi-way correspondences can be extracted.
The identification of such correspondences has applications to style transfer, algorithmic fairness, and data decorruption. In addition, the multimarginal perspective enables an efficient algorithm for optimizing the dynamical transport cost in the ordinary two-marginal setting. We demonstrate these capacities with several numerical examples.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19075"} +{"video_file": "FIplmUWdm3_39019234.mp4", "openreview_id": "FIplmUWdm3", "slideslive_id": 39019234, "venue": "iclr2024", "title": "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models", "status": "Poster", "keywords": "Network Quantization;Large Language Models", "tldr": "We propose QLLM, an accurate and efficient low-bitwidth PTQ method designed for LLMs.", "abstract": "Large Language Models (LLMs) have demonstrated unparalleled efficacy in natural language processing. However, their high computational demands and memory overheads hinder their broad deployment. To address this, two quantization strategies emerge, including Quantization-Aware Training (QAT) and Post-Training Quantization (PTQ). For LLMs, the billions of parameters make the QAT impractical due to the prohibitive training cost and thus PTQ becomes more prevalent. In existing studies, activation outliers in particular channels are identified as the biggest challenge to PTQ accuracy. They propose to transform the magnitudes from activations to weights, which however offers limited alleviation or suffers from unstable gradients, resulting in a severe performance drop at low-bitwidth. In this paper, we propose QLLM, an accurate and efficient low-bitwidth PTQ method designed for LLMs. QLLM introduces an adaptive channel reassembly technique that reallocates the magnitude of outliers to other channels, thereby mitigating their impact on the quantization range. This is achieved by channel disassembly and channel assembly, which first breaks down the outlier channels into several sub-channels to ensure a more balanced distribution of activation magnitudes. Then similar channels are merged to maintain the original channel number for efficiency. Additionally, an adaptive strategy is designed to autonomously determine the optimal number of sub-channels for channel disassembly. To further compensate for the performance loss caused by quantization, we propose an efficient tuning method that only learns a small number of low-rank weights while freezing the pre-trained quantized model. After training, these low-rank parameters can be fused into the frozen weights without affecting inference. Extensive experiments on LLaMA-1 and LLaMA-2 show that QLLM is able to obtain accurate quantized models efficiently. For example, QLLM quantizes the 4-bit LLaMA-2-70B within 10 hours on a single A100-80G GPU, outperforming the previous state-of-the-art method by 7.89% on the average accuracy across five zero-shot tasks. 
Code is available at ZIP Lab and ModelTC.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19073"} +{"video_file": "FMMF1a9ifL_39017185.mp4", "openreview_id": "FMMF1a9ifL", "slideslive_id": 39017185, "venue": "iclr2024", "title": "Gradual Optimization Learning for Conformational Energy Minimization", "status": "Poster", "keywords": "energy minimization;conformational optimization;geometry optimization", "tldr": "We propose a data-efficient framework for conformational energy minimization with neural networks", "abstract": "Molecular conformation optimization is crucial to computer-aided drug discovery and materials design. Traditional energy minimization techniques rely on iterative optimization methods that use molecular forces calculated by a physical simulator (oracle) as anti-gradients. However, this is a computationally expensive approach that requires many interactions with a physical simulator. One way to accelerate this procedure is to replace the physical simulator with a neural network. Despite recent progress in neural networks for molecular conformation energy prediction, such models are prone to errors due to distribution shift, leading to inaccurate energy minimization. We find that the quality of energy minimization with neural networks can be improved by providing optimization trajectories as additional training data. Still, obtaining complete optimization trajectories demands a lot of additional computations. To reduce the required additional data, we present the Gradual Optimization Learning Framework (GOLF) for energy minimization with neural networks. The framework consists of an efficient data-collecting scheme and an external optimizer. The external optimizer utilizes gradients from the energy prediction model to generate optimization trajectories, and the data-collecting scheme selects additional training data to be processed by the physical simulator. Our results demonstrate that the neural network trained with GOLF performs \\textit{on par} with the oracle on a benchmark of diverse drug-like molecules using significantly less additional data.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/19068"} +{"video_file": "FRCHDhbxZF_39018730.mp4", "openreview_id": "FRCHDhbxZF", "slideslive_id": 39018730, "venue": "iclr2024", "title": "ZeroFlow: Scalable Scene Flow via Distillation", "status": "Poster", "keywords": "Scene Flow;Distillation;Scaling", "tldr": "We propose a scalable, human annotation-free distillation pipeline that captures state-of-the-art by leveraging raw data.", "abstract": "Scene flow estimation is the task of describing the 3D motion field between temporally successive point clouds. State-of-the-art methods use strong priors and test-time optimization techniques, but require on the order of tens of seconds to process full-size point clouds, making them unusable as computer vision primitives for real-time applications such as open world object detection. Feedforward methods are considerably faster, running on the order of tens to hundreds of milliseconds for full-size point clouds, but require expensive human supervision. To address both limitations, we propose Scene Flow via Distillation, a simple, scalable distillation framework that uses a label-free optimization method to produce pseudo-labels to supervise a feedforward model. 
Our instantiation of this framework, ZeroFlow, achieves state-of-the-art performance on the Argoverse 2 Self-Supervised Scene Flow Challenge while using zero human labels by simply training on large-scale, diverse unlabeled data. At test-time, ZeroFlow is over 1000\u00d7 faster than label-free state-of-the-art optimization-based methods on full-size point clouds (34 FPS vs 0.028 FPS) and over 1000\u00d7 cheaper to train on unlabeled data compared to the cost of human annotation ($394 vs ~$750,000). To facilitate further research, we will release our code, trained model weights, and high quality pseudo-labels for the Argoverse 2 and Waymo Open datasets.", "primary_area": "applications to robotics, autonomy, planning", "site": "https://iclr.cc/virtual/2024/poster/19064"} +{"video_file": "FVhmnvqnsI_39018790.mp4", "openreview_id": "FVhmnvqnsI", "slideslive_id": 39018790, "venue": "iclr2024", "title": "Multisize Dataset Condensation", "status": "Oral", "keywords": "Dataset Condensation;Dataset Distillation;Image Classification", "tldr": "Compress N condensation processes into one single condensation process to generate condensed datasets with various sizes.", "abstract": "While dataset condensation effectively enhances training efficiency, its application in on-device scenarios brings unique challenges. 1) Due to the fluctuating computational resources of these devices, there's a demand for a flexible dataset size that diverges from a predefined size. 2) The limited computational power on devices often prevents additional condensation operations. These two challenges connect to the "subset degradation problem" in traditional dataset condensation: a subset from a larger condensed dataset is often unrepresentative compared to directly condensing the whole dataset to that smaller size. In this paper, we propose Multisize Dataset Condensation (MDC) by compressing N condensation processes into a single condensation process to obtain datasets with multiple sizes. Specifically, we introduce an "adaptive subset loss" on top of the basic condensation loss to mitigate the "subset degradation problem". Our MDC method offers several benefits: 1) No additional condensation process is required; 2) reduced storage requirement by reusing condensed images. Experiments validate our findings on networks including ConvNet, ResNet and DenseNet, and datasets including SVHN, CIFAR-10, CIFAR-100 and ImageNet. For example, we achieved 5.22%-6.40% average accuracy gains on condensing CIFAR-10 to ten images per class. Code is available at: https://github.com/he-y/Multisize-Dataset-Condensation.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/19062"} +{"video_file": "FddFxi08J3_39017112.mp4", "openreview_id": "FddFxi08J3", "slideslive_id": 39017112, "venue": "iclr2024", "title": "On the Power of the Weisfeiler-Leman Test for Graph Motif Parameters", "status": "Poster", "keywords": "WL test;graph neural networks;graph motif parameters;subgraph counting", "tldr": "We provide a characterization of the expressive power of the WL test in terms of graph motif parameters", "abstract": "Seminal research in the field of graph neural networks (GNNs) has revealed a direct correspondence between the expressive capabilities of GNNs and the k-dimensional Weisfeiler-Leman (kWL) test, a widely-recognized method for verifying graph isomorphism.
This connection has reignited interest in comprehending the specific graph properties effectively distinguishable by the kWL test. A central focus of research in this field revolves around determining the least dimensionality k, for which kWL can discern graphs with different number of occurrences of a pattern graph p. We refer to such a least k as the WL-dimension of this pattern counting problem. This inquiry traditionally delves into two distinct counting problems related to patterns: subgraph counting and induced subgraph counting. Intriguingly, despite their initial appearance as separate challenges with seemingly divergent approaches, both of these problems are interconnected components of a more comprehensive problem: "graph motif parameters". In this paper, we provide a precise characterization of the WL-dimension of labeled graph motif parameters. As specific instances of this result, we obtain characterizations of the WL-dimension of the subgraph counting and induced subgraph counting problem for every labeled pattern p. Particularly noteworthy is our resolution of a problem left open in previous work concerning induced copies. We additionally demonstrate that in cases where the kWL test distinguishes between graphs with varying occurrences of a pattern p, the exact number of occurrences of p can be computed uniformly using only local information of the last layer of a corresponding GNN. We finally delve into the challenge of recognizing the WL-dimension of various graph parameters. We give a polynomial time algorithm for determining the WL-dimension of the subgraph counting problem for given pattern p, answering an open question from previous work. We additionally show how to utilize deep results from the field of graph motif parameters, together with our characterization, to determine the WL-dimension of induced subgraph counting and counting k-graphlets.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/19057"} +{"video_file": "Feiz5HtCD0_39018207.mp4", "openreview_id": "Feiz5HtCD0", "slideslive_id": 39018207, "venue": "iclr2024", "title": "Does Writing with Language Models Reduce Content Diversity?", "status": "Poster", "keywords": "collaborative writing;text generation;language models;evaluation;human-AI collaboration;diversity", "tldr": "We show via a controlled experiment that users collaborating with InstructGPT write with less content diversity than those collaborating with GPT3 and solo writers without model help.", "abstract": "Large language models (LLMs) have led to a surge in collaborative writing with model assistance. As different users incorporate suggestions from the same model, there is a risk of decreased diversity in the produced content, potentially limiting diverse perspectives in public discourse. In this work, we measure the impact of co-writing on diversity via a controlled experiment, where users write argumentative essays in three setups---using a base LLM (GPT3), a feedback-tuned LLM (InstructGPT), and writing without model help. We develop a set of diversity metrics and find that writing with InstructGPT (but not the GPT3) results in a statistically significant reduction in diversity. Specifically, it increases the similarity between the writings of different authors and reduces the overall lexical and content diversity. We additionally find that this effect is mainly attributable to InstructGPT contributing less diverse text to co-written essays.
In contrast, the user-contributed text remains unaffected by model collaboration. This suggests that the recent improvement in generation quality from adapting models to human feedback might come at the cost of more homogeneous and less diverse content.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/19056"} +{"video_file": "FvK2noilxT_39018201.mp4", "openreview_id": "FvK2noilxT", "slideslive_id": 39018201, "venue": "iclr2024", "title": "GeneOH Diffusion: Towards Generalizable Hand-Object Interaction Denoising via Denoising Diffusion", "status": "Poster", "keywords": "motion refinement;hand-object interaction;inverse problem;generative prior", "tldr": "GeneOH Diffusion cleans erroneous out-of-domain HOI tracks with new objects, motions, and novel noise distributions into natural sequences by only training on limited data.", "abstract": "In this work, we tackle the challenging problem of denoising hand-object interactions (HOI). Given an erroneous interaction sequence, the objective is to refine the incorrect hand trajectory to remove interaction artifacts for a perceptually realistic sequence. This challenge involves intricate interaction noise, including unnatural hand poses and incorrect hand-object relations, alongside the necessity for robust generalization to new interactions and diverse noise patterns. We tackle those challenges through a novel approach, GeneOH Diffusion, incorporating two key designs: an innovative contact-centric HOI representation named GeneOH and a new domain-generalizable denoising scheme. The contact-centric representation GeneOH informatively parameterizes the HOI process, facilitating enhanced generalization across various HOI scenarios. The new denoising scheme consists of a canonical denoising model trained to project noisy data samples from a whitened noise space to a clean data manifold and a ``denoising via diffusion'' strategy which can handle input trajectories with various noise patterns by first diffusing them to align with the whitened noise space and cleaning via the canonical denoiser. Extensive experiments on four benchmarks with significant domain variations demonstrate the superior effectiveness of our method. GeneOH Diffusion also shows promise for various downstream applications. We include a website for introducing the work.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19045"} +{"video_file": "Fx2SbBgcte_39019174.mp4", "openreview_id": "Fx2SbBgcte", "slideslive_id": 39019174, "venue": "iclr2024", "title": "AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning", "status": "Spotlight", "keywords": "Deep Learning;Diffusion Model;Video Generation", "tldr": "In this paper, we present AnimateDiff, a practical framework for animating personalized text-to-image diffusion models without requiring model-specific tuning.", "abstract": "With the advance of text-to-image (T2I) diffusion models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. However, adding motion dynamics to existing high-quality personalized T2Is and enabling them to generate animations remains an open challenge. 
In this paper, we present AnimateDiff, a practical framework for animating personalized T2I models without requiring model-specific tuning. At the core of our framework is a plug-and-play motion module that can be trained once and seamlessly integrated into any personalized T2Is originating from the same base T2I. Through our proposed training strategy, the motion module effectively learns transferable motion priors from real-world videos. Once trained, the motion module can be inserted into a personalized T2I model to form a personalized animation generator. We further propose MotionLoRA, a lightweight fine-tuning technique for AnimateDiff that enables a pre-trained motion module to adapt to new motion patterns, such as different shot types, at a low training and data collection cost. We evaluate AnimateDiff and MotionLoRA on several public representative personalized T2I models collected from the community. The results demonstrate that our approaches help these models generate temporally smooth animation clips while preserving the visual quality and motion diversity. Codes and pre-trained weights are available at https://github.com/guoyww/AnimateDiff.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19044"} +{"video_file": "G1Hlubz1fR_39019039.mp4", "openreview_id": "G1Hlubz1fR", "slideslive_id": 39019039, "venue": "iclr2024", "title": "Customizable Combination of Parameter-Efficient Modules for Multi-Task Learning", "status": "Poster", "keywords": "Modular skill learning;Multi-task learning;Parameter-Efficient;Fine-Tuning", "tldr": "A novel paradigm of Parameter Efficient Fine-Tuning (PEFT) for multi-task learning, harnessing specialized and shared domain skills.", "abstract": "Modular and composable transfer learning is an emerging direction in the field of Parameter Efficient Fine-Tuning, as it enables neural networks to better organize various aspects of knowledge, leading to improved cross-task generalization. In this paper, we introduce a novel approach Customized Polytropon (\nC-Poly\n) that combines task-common skills and task-specific skills, while the skill parameters being highly parameterized using low-rank techniques. Each task is associated with a customizable number of exclusive specialized skills and also benefits from skills shared with peer tasks. A skill assignment matrix is jointly learned. To evaluate our approach, we conducted extensive experiments on the Super-NaturalInstructions and the SuperGLUE benchmarks. Our findings demonstrate that\nC-Poly\noutperforms fully-shared, task-specific, and skill-indistinguishable baselines, significantly enhancing the sample efficiency in multi-task learning scenarios.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19042"} +{"video_file": "G2cG3mQqop_39018200.mp4", "openreview_id": "G2cG3mQqop", "slideslive_id": 39018200, "venue": "iclr2024", "title": "Image Clustering Conditioned on Text Criteria", "status": "Poster", "keywords": "image clustering;vision-language models;large language models;foundation models", "tldr": "We propose a novel image clustering method that perform clustering based on a user-specified criterion.", "abstract": "Classical clustering methods do not provide users with direct control of the clustering results, and the clustering results may not be consistent with the relevant criterion that a user has in mind. 
In this work, we present a new methodology for performing image clustering based on user-specified criteria in the form of text by leveraging modern Vision-Language Models and Large Language Models. We call our method Image Clustering Conditioned on Text Criteria (IC\n|\nTC), and it represents a different paradigm of image clustering. IC\n|\nTC requires a minimal and practical degree of human intervention and grants the user significant control over the clustering results in return. Our experiments show that IC\n|\nTC can effectively cluster images with various criteria, such as human action, physical location, or the person's mood, significantly outperforming baselines.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19041"} +{"video_file": "GEcwtMk1uA_39017057.mp4", "openreview_id": "GEcwtMk1uA", "slideslive_id": 39017057, "venue": "iclr2024", "title": "Identifying the Risks of LM Agents with an LM-Emulated Sandbox", "status": "Spotlight", "keywords": "Language Model Agent;Tool Use;Evaluation;Safety;Language Model", "tldr": "An LM-based emulation framework for identifying the risks of LM agents at scale", "abstract": "Recent advances in Language Model (LM) agents and tool use, exemplified by applications like ChatGPT Plugins, enable a rich set of capabilities but also amplify potential risks\u2014such as leaking private data or causing financial losses. Identifying these risks is labor-intensive, necessitating implementing the tools, setting up the environment for each test scenario manually, and finding risky cases. As tools and agents become more complex, the high cost of testing these agents will make it increasingly difficult to find high-stakes, long-tail risks. To address these challenges, we introduce ToolEmu: a framework that uses an LM to emulate tool execution and enables scalable testing of LM agents against a diverse range of tools and scenarios. Alongside the emulator, we develop an LM-based automatic safety evaluator that examines agent failures and quantifies associated risks. We test both the tool emulator and evaluator through human evaluation and find that 68.8% of failures identified with ToolEmu would be valid real-world agent failures. Using our curated initial benchmark consisting of 36 high-stakes toolkits and 144 test cases, we provide a quantitative risk analysis of current LM agents and identify numerous failures with potentially severe outcomes. Notably, even the safest LM agent exhibits such failures 23.9% of the time according to our evaluator, underscoring the need to develop safer LM agents for real-world deployment.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/19037"} +{"video_file": "GN921JHCRw_39019272.mp4", "openreview_id": "GN921JHCRw", "slideslive_id": 39019272, "venue": "iclr2024", "title": "RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval", "status": "Poster", "keywords": "Retrieval Augmented Language Models;Information Retrieval;summarization;QA", "tldr": "RAPTOR improves LLM QA performance by constructing a hierarchical summarization tree for information retrieval, outperforming existing retrieval methods across various metrics and datasets.", "abstract": "Retrieval-augmented language models can better adapt to changes in world state and incorporate long-tail knowledge. 
However, most existing methods retrieve only short contiguous chunks from a retrieval corpus, limiting holistic understanding of the overall document context. We introduce the novel approach of recursively embedding, clustering, and summarizing chunks of text, constructing a tree with differing levels of summarization from the bottom up. At inference time, our RAPTOR model retrieves from this tree, integrating information across lengthy documents at different levels of abstraction. Controlled experiments show that retrieval with recursive summaries offers significant improvements over traditional retrieval-augmented LMs on several tasks. On question-answering tasks that involve complex, multi-step reasoning, we show state-of-the-art results; for example, by coupling RAPTOR retrieval with the use of GPT-4, we can improve the best performance on the QuALITY benchmark by 20% in absolute accuracy.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19034"} +{"video_file": "GPKTIktA0k_39019282.mp4", "openreview_id": "GPKTIktA0k", "slideslive_id": 39019282, "venue": "iclr2024", "title": "The Reversal Curse: LLMs trained on \u201cA is B\u201d fail to learn \u201cB is A\u201d", "status": "Poster", "keywords": "LLMs;Large Language Models;Question Answering;Generalization;Knowledge Representation;Logical Inference;Relations", "tldr": "We demonstrate experimentally that LLMs trained on facts in one direction (\"A is B\") do not generalize to the reverse direction (\"B is A\").", "abstract": "We expose a surprising failure of generalization in auto-regressive large language models (LLMs). If a model is trained on a sentence of the form ''A is B'', it will not automatically generalize to the reverse direction ''B is A''. This is the Reversal Curse. For instance, if a model is trained on ''Valentina Tereshkova was the first woman to travel to space'', it will not automatically be able to answer the question, ''Who was the first woman to travel to space?''. Moreover, the likelihood of the correct answer (''Valentina Tershkova'') will not be higher than for a random name. Thus, models do not generalize a prevalent pattern in their training set: if ''A is B'' occurs, ''B is A'' is more likely to occur. It is worth noting, however, that if ''A is B'' appears in-context, models can deduce the reverse relationship.\nWe provide evidence for the Reversal Curse by finetuning GPT-3 and Llama-1 on fictitious statements such as ''Uriah Hawthorne is the composer of Abyssal Melodies'' and showing that they fail to correctly answer ''Who composed Abyssal Melodies?''. The Reversal Curse is robust across model sizes and model families and is not alleviated by data augmentation.\nWe also evaluate ChatGPT (GPT-3.5 and GPT-4) on questions about real-world celebrities, such as ''Who is Tom Cruise's mother? [A: Mary Lee Pfeiffer]'' and the reverse ''Who is Mary Lee Pfeiffer's son?''. 
GPT-4 correctly answers questions like the former 79% of the time, compared to 33% for the latter.\nCode available at: https://github.com/lukasberglund/reversal_curse.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/19033"} +{"video_file": "GXtmuiVrOM_39018191.mp4", "openreview_id": "GXtmuiVrOM", "slideslive_id": 39018191, "venue": "iclr2024", "title": "Domain Randomization via Entropy Maximization", "status": "Poster", "keywords": "Reinforcement Learning;Sim-to-Real Transfer;Domain Randomization", "tldr": "A novel approach for sim-to-real transfer with domain randomization that directly maximizes the entropy of the dynamics distribution during training, while retaining convergence and generalization capabilities.", "abstract": "Varying dynamics parameters in simulation is a popular Domain Randomization (DR) approach for overcoming the reality gap in Reinforcement Learning (RL). Nevertheless, DR heavily hinges on the choice of the sampling distribution of the dynamics parameters, since high variability is crucial to regularize the agent's behavior but notoriously leads to overly conservative policies when randomizing excessively. In this paper, we propose a novel approach to address sim-to-real transfer, which automatically shapes dynamics distributions during training in simulation without requiring real-world data. We introduce DOmain RAndomization via Entropy MaximizatiON (DORAEMON), a constrained optimization problem that directly maximizes the entropy of the training distribution while retaining generalization capabilities. In achieving this, DORAEMON gradually increases the diversity of sampled dynamics parameters as long as the probability of success of the current policy is sufficiently high. We empirically validate the consistent benefits of DORAEMON in obtaining highly adaptive and generalizable policies, i.e. solving the task at hand across the widest range of dynamics parameters, as opposed to representative baselines from the DR literature. Notably, we also demonstrate the Sim2Real applicability of DORAEMON through its successful zero-shot transfer in a robotic manipulation setup under unknown real-world parameters.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19025"} +{"video_file": "GZ6AcZwA8r_39018931.mp4", "openreview_id": "GZ6AcZwA8r", "slideslive_id": 39018931, "venue": "iclr2024", "title": "MMD Graph Kernel: Effective Metric Learning for Graphs via Maximum Mean Discrepancy", "status": "Spotlight", "keywords": "graph kernel;graph metric learning;maximum mean discrepancy", "tldr": "This paper presents MMD-based graph kernels for improved graph metric learning with proven effectiveness in clustering and classification tasks.", "abstract": "This paper focuses on graph metric learning. First, we present a class of maximum mean discrepancy (MMD) based graph kernels, called MMD-GK. These kernels are computed by applying MMD to the node representations of two graphs with message-passing propagation. Secondly, we provide a class of deep MMD-GKs that are able to learn graph kernels and implicit graph features adaptively in an unsupervised manner. Thirdly, we propose a class of supervised deep MMD-GKs that are able to utilize label information of graphs and hence yield more discriminative metrics. Besides the algorithms, we provide theoretical analysis for the proposed methods. 
The proposed methods are evaluated in comparison to many baselines such as graph kernels and graph neural networks in the tasks of graph clustering and graph classification. The numerical results demonstrate the effectiveness and superiority of our methods.", "primary_area": "metric learning, kernel learning, and sparse coding", "site": "https://iclr.cc/virtual/2024/poster/19024"} +{"video_file": "GaLCLvJaoF_39019152.mp4", "openreview_id": "GaLCLvJaoF", "slideslive_id": 39019152, "venue": "iclr2024", "title": "Robust Model Based Reinforcement Learning Using $\\mathcal{L}_1$ Adaptive Control", "status": "Poster", "keywords": "Robust control;Reinforcement learning", "tldr": "We propose a novel framework as an add-on scheme to enhance the robustness of model-based RL algorithms against uncertainties.", "abstract": "We introduce\nL\n1\n-MBRL, a control-theoretic augmentation scheme for Model-Based Reinforcement Learning (MBRL) algorithms. Unlike model-free approaches, MBRL algorithms learn a model of the transition function using data and use it to design a control input. Our approach generates a series of approximate control-affine models of the learned transition function according to the proposed switching law. Using the approximate model, control input produced by the underlying MBRL is perturbed by the\nL\n1\nadaptive control, which is designed to enhance the robustness of the system against uncertainties. Importantly, this approach is agnostic to the choice of MBRL algorithm, enabling the use of the scheme with various MBRL algorithms. MBRL algorithms with\nL\n1\naugmentation exhibit enhanced performance and sample efficiency across multiple MuJoCo environments, outperforming the original MBRL algorithms, both with and without system noise.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19023"} +{"video_file": "Gg7cXo3S8l_39018189.mp4", "openreview_id": "Gg7cXo3S8l", "slideslive_id": 39018189, "venue": "iclr2024", "title": "Dictionary Contrastive Learning for Efficient Local Supervision without Auxiliary Networks", "status": "Spotlight", "keywords": "Contrastive learning;Forward learning;Local learning;Image classification;Efficient learning", "tldr": "We propose a simple and efficient local contrastive learning objective that directly compares local features with label embeddings.", "abstract": "While backpropagation (BP) has achieved widespread success in deep learning, it faces two prominent challenges: computational inefficiency and biological implausibility. In response to these challenges, local supervision, encompassing Local Learning (LL) and Forward Learning (FL), has emerged as a promising research direction. LL employs module-wise BP to achieve competitive results yet relies on module-wise auxiliary networks, which increase memory and parameter demands. Conversely, FL updates layer weights without BP and auxiliary networks but falls short of BP\u2019s performance. This paper proposes a simple yet effective objective within a contrastive learning framework for local supervision without auxiliary networks. Given the insight that the existing contrastive learning framework for local supervision is susceptible to task-irrelevant information without auxiliary networks, we present DICTIONARY CONTRASTIVE LEARNING (DCL) that optimizes the similarity between local features and label embeddings. Our method using static label embeddings yields substantial performance improvements in the FL scenario, outperforming state-of-the-art FL approaches. 
Moreover, our method using adaptive label embeddings closely approaches the performance achieved by LL while achieving superior memory and parameter efficiency.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19021"} +{"video_file": "GkJOCga62u_39018639.mp4", "openreview_id": "GkJOCga62u", "slideslive_id": 39018639, "venue": "iclr2024", "title": "Orbit-Equivariant Graph Neural Networks", "status": "Poster", "keywords": "graph neural networks;equivariance;expressivity;graph orbits", "tldr": "We define orbit-equivariance, a relaxation of equivariance, to enable solving a new class of problems and propose some orbit-equivariant GNNs", "abstract": "Equivariance is an important structural property that is captured by architectures such as graph neural networks (GNNs). However, equivariant graph functions cannot produce different outputs for similar nodes, which may be undesirable when the function is trying to optimize some global graph property. In this paper, we define orbit-equivariance, a relaxation of equivariance which allows for such functions whilst retaining important structural inductive biases. We situate the property in the hierarchy of graph functions, define a taxonomy of orbit-equivariant functions, and provide four different ways to achieve non-equivariant GNNs. For each, we analyze their expressivity with respect to orbit-equivariance and evaluate them on two novel datasets, one of which stems from a real-world use-case of designing optimal bioisosteres.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/19019"} +{"video_file": "GkJiNn2QDF_39018737.mp4", "openreview_id": "GkJiNn2QDF", "slideslive_id": 39018737, "venue": "iclr2024", "title": "FeatUp: A Model-Agnostic Framework for Features at Any Resolution", "status": "Poster", "keywords": "deep learning;deep features;computer vision;feature upsampling", "tldr": "We introduce a method for obtaining high-resolution features from any vision model, trainable end-to-end with the model itself and producing high-quality results on vision tasks.", "abstract": "Deep features are a cornerstone of computer vision research, capturing image semantics and enabling the community to solve downstream tasks even in the zero- or few-shot regime. However, these features often lack the spatial resolution to directly perform dense prediction tasks like segmentation and depth prediction because models aggressively pool information over large areas. In this work, we introduce FeatUp, a task- and model-agnostic framework to restore lost spatial information in deep features. We introduce two variants of FeatUp: one that guides features with high-resolution signal in a single forward pass, and one that fits an implicit model to a single image to reconstruct features at any resolution. Both approaches use a multi-view consistency loss with deep analogies to NeRFs. Our features retain their original semantics and can be swapped into existing applications to yield resolution and performance gains even without re-training. 
We show that FeatUp significantly outperforms other feature upsampling and image super-resolution approaches in class activation map generation, transfer learning for segmentation and depth prediction, and end-to-end training for semantic segmentation.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/19018"} +{"video_file": "GlpawHh80l_39018186.mp4", "openreview_id": "GlpawHh80l", "slideslive_id": 39018186, "venue": "iclr2024", "title": "Improved algorithm and bounds for successive projection", "status": "Poster", "keywords": "Simplex;vertex hunting;successive projection;pseudo-points;pruning;hyper-spectral unmixing;archetypal analysis;network analysis.", "tldr": "A new approach to estimating the vertices of a simplex", "abstract": "Consider a\nK\n-vertex simplex in a\nd\n-dimensional space. We measure\nn\npoints on the simplex, but due to the measurement noise, some of the observed points fall outside the simplex. The interest is vertex hunting (i.e., estimating the vertices of the simplex). The successive projection algorithm (SPA) is one of the most popular approaches to vertex hunting, but it is vulnerable to noise and outliers, and may perform unsatisfactorily. We propose pseudo-point SPA (pp-SPA) as a new approach to vertex hunting. The approach contains two novel ideas (a projection step and a denoise step) and generates roughly\nn\npseudo-points, which can be fed in to SPA for vertex hunting. For theory, we first derive an improved non-asymptotic bound for the orthodox SPA, and then use the result to derive the bounds for pp-SPA. Compared with the orthodox SPA, pp-SPA has a faster rate and more satisfactory numerical performance in a broad setting. The analysis is quite delicate: the non-asymptotic bound is hard to derive, and we need precise results on the extreme values of (possibly) high-dimensional random vectors.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/19016"} +{"video_file": "GruDNzQ4ux_39018802.mp4", "openreview_id": "GruDNzQ4ux", "slideslive_id": 39018802, "venue": "iclr2024", "title": "DreamSmooth: Improving Model-based Reinforcement Learning via Reward Smoothing", "status": "Poster", "keywords": "Model-based Reinforcement Learning; Reward Shaping; Reward Smoothing", "tldr": "We show that reward prediction is challenging in many sparse-reward tasks, and propose a simple yet effective method, reward smoothing, which effectively facilitates reward prediction and thus, improves model-based reinforcement learning.", "abstract": "Model-based reinforcement learning (MBRL) has gained much attention for its ability to learn complex behaviors in a sample-efficient way: planning actions by generating imaginary trajectories with predicted rewards. Despite its success, we found that surprisingly, reward prediction is often a bottleneck of MBRL, especially for sparse rewards that are challenging (or even ambiguous) to predict. Motivated by the intuition that humans can learn from rough reward estimates, we propose a simple yet effective reward smoothing approach, DreamSmooth, which learns to predict a temporally-smoothed reward, instead of the exact reward at the given timestep. 
We empirically show that DreamSmooth achieves state-of-the-art performance on long-horizon sparse-reward tasks both in sample efficiency and final performance without losing performance on common benchmarks, such as Deepmind Control Suite and Atari benchmarks.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/19014"} +{"video_file": "GzNaCp6Vcg_39018182.mp4", "openreview_id": "GzNaCp6Vcg", "slideslive_id": 39018182, "venue": "iclr2024", "title": "PINNACLE: PINN Adaptive ColLocation and Experimental points selection", "status": "Spotlight", "keywords": "Physics-informed Neural Networks;PINNs;adaptive training points selection", "tldr": "A novel PINN training algorithm, motivated by analysis of the Neural Tangent Kernel, that jointly selects all training point types in the composite loss function to gain large performance boosts for forward, inverse, and transfer learning problems.", "abstract": "Physics-Informed Neural Networks (PINNs), which incorporate PDEs as soft constraints, train with a composite loss function that contains multiple training point types: different types of collocation points chosen during training to enforce each PDE and initial/boundary conditions, and experimental points which are usually costly to obtain via experiments or simulations. Training PINNs using this loss function is challenging as it typically requires selecting large numbers of points of different types, each with different training dynamics. Unlike past works that focused on the selection of either collocation or experimental points, this work introduces PINN Adaptive ColLocation and Experimental points selection (PINNACLE), the first algorithm that jointly optimizes the selection of all training point types, while automatically adjusting the proportion of collocation point types as training progresses. PINNACLE uses information on the interactions among training point types, which had not been considered before, based on an analysis of PINN training dynamics via the Neural Tangent Kernel (NTK). We theoretically show that the criterion used by PINNACLE is related to the PINN generalization error, and empirically demonstrate that PINNACLE is able to outperform existing point selection methods for forward, inverse, and transfer learning problems.", "primary_area": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)", "site": "https://iclr.cc/virtual/2024/poster/19012"} +{"video_file": "GzNhzX9kVa_39018181.mp4", "openreview_id": "GzNhzX9kVa", "slideslive_id": 39018181, "venue": "iclr2024", "title": "A Benchmark Study on Calibration", "status": "Poster", "keywords": "Calibration", "tldr": "This research explore the calibration property by analyzing 117,702 unique models and answering questions on calibration's generalizability, robustness and etc.", "abstract": "Deep neural networks are increasingly utilized in various machine learning tasks. However, as these models grow in complexity, they often face calibration issues, despite enhanced prediction accuracy. Many studies have endeavored to improve calibration performance through the use of specific loss functions, data preprocessing and training frameworks. Yet, investigations into calibration properties have been somewhat overlooked. Our study leverages the Neural Architecture Search (NAS) search space, offering an exhaustive model architecture space for thorough calibration properties exploration. We specifically create a model calibration dataset. 
This dataset evaluates 90 bin-based and 12 additional calibration measurements across 117,702 unique neural networks within the widely employed NATS-Bench search space. Our analysis aims to answer several longstanding questions in the field, using our proposed dataset: (i) Can model calibration be generalized across different datasets? (ii) Can robustness be used as a calibration measurement? (iii) How reliable are calibration metrics? (iv) Does a post-hoc calibration method affect all models uniformly? (v) How does calibration interact with accuracy? (vi) What is the impact of bin size on calibration measurement? (vii) Which architectural designs are beneficial for calibration? Additionally, our study bridges an existing gap by exploring calibration within NAS. By providing this dataset, we enable further research into NAS calibration. As far as we are aware, our research represents the first large-scale investigation into calibration properties and the premier study of calibration issues within NAS.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/19011"} +{"video_file": "H3UayAQWoE_39017152.mp4", "openreview_id": "H3UayAQWoE", "slideslive_id": 39017152, "venue": "iclr2024", "title": "On the Humanity of Conversational AI: Evaluating the Psychological Portrayal of LLMs", "status": "Oral", "keywords": "LLM;Benchmark;Evaluation;Psychometrics", "tldr": "We propose PsychoBench, a framework for evaluating the psychological portrayal of LLMs. We provide insights on the humanity of LLM leveraging our tool.", "abstract": "Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, ChatGPT, GPT-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. 
We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/19008"} +{"video_file": "HC0msxE3sf_39018923.mp4", "openreview_id": "HC0msxE3sf", "slideslive_id": 39018923, "venue": "iclr2024", "title": "Lewis's Signaling Game as beta-VAE For Natural Word Lengths and Segments", "status": "Poster", "keywords": "Emergent Communication;Emergent Language;Probabilistic Generative Model;Variational Autoencoder;beta-VAE;Zipf\u2019s law of abbreviation;Harris\u2019s articulation scheme", "tldr": "We reinterpret Lewis's signaling game, a frequently used setting in emergent communication, as beta-VAE and reformulate its objective function as ELBO.", "abstract": "As a sub-discipline of evolutionary and computational linguistics, emergent communication (EC) studies communication protocols, called emergent languages, arising in simulations where agents communicate. A key goal of EC is to give rise to languages that share statistical properties with natural languages. In this paper, we reinterpret Lewis's signaling game, a frequently used setting in EC, as beta-VAE and reformulate its objective function as ELBO. Consequently, we clarify the existence of prior distributions of emergent languages and show that the choice of the priors can influence their statistical properties. Specifically, we address the properties of word lengths and segmentation, known as Zipf's law of abbreviation (ZLA) and Harris's articulation scheme (HAS), respectively. It has been reported that the emergent languages do not follow them when using the conventional objective. We experimentally demonstrate that by selecting an appropriate prior distribution, more natural segments emerge, while suggesting that the conventional one prevents the languages from following ZLA and HAS.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/19004"} +{"video_file": "HHbRxoDTxE_39018178.mp4", "openreview_id": "HHbRxoDTxE", "slideslive_id": 39018178, "venue": "iclr2024", "title": "Looped Transformers are Better at Learning Learning Algorithms", "status": "Poster", "keywords": "in-context learning;transformers;looped transformers", "tldr": "We train a looped transformer from scratch to perform in-context learning of simple function classes. Empirical results indicate the looped transformer can match or outperform the standard transformer.", "abstract": "Transformers have demonstrated effectiveness in in-context solving data-fitting problems from various (latent) models, as reported by Garg et al. (2022). However, the absence of an inherent iterative structure in the transformer architecture presents a challenge in emulating the iterative algorithms, which are commonly employed in traditional machine learning methods. To address this, we propose the utilization of looped transformer architecture and its associated training methodology, with the aim of incorporating iterative characteristics into the transformer architectures. 
Experimental results suggest that the looped transformer achieves performance comparable to the standard transformer in solving various data-fitting problems, while utilizing less than 10% of the parameter count.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18999"} +{"video_file": "HRkyLbBRHI_39018174.mp4", "openreview_id": "HRkyLbBRHI", "slideslive_id": 39018174, "venue": "iclr2024", "title": "Compositional Conservatism: A Transductive Approach in Offline Reinforcement Learning", "status": "Poster", "keywords": "offline reinforcement learning;compositional generalization;conservatism;transduction", "tldr": "We encourage conservatism in the compositional input space of the policy and Q-function, independently to the prevalent behavioral conservatism.", "abstract": "Offline reinforcement learning (RL) is a compelling framework for learning optimal policies from past experiences without additional interaction with the environment. Nevertheless, offline RL inevitably faces the problem of distributional shifts, where the states and actions encountered during policy execution may not be in the training dataset distribution. A common solution involves incorporating conservatism into the policy or the value function to safeguard against uncertainties and unknowns. In this work, we focus on achieving the same objectives of conservatism but from a different perspective. We propose COmpositional COnservatism with Anchor-seeking (COCOA) for offline RL, an approach that pursues conservatism in a compositional manner on top of the transductive reparameterization (Netanyahu et al., 2023), which decomposes the input variable (the state in our case) into an anchor and its difference from the original input. Our COCOA seeks both in-distribution anchors and differences by utilizing the learned reverse dynamics model, encouraging conservatism in the compositional input space for the policy or value function. Such compositional conservatism is independent of and agnostic to the prevalent behavioral conservatism in offline RL. We apply COCOA to four state-of-the-art offline RL algorithms and evaluate them on the D4RL benchmark, where COCOA generally improves the performance of each algorithm. The code is available at https://github.com/runamu/compositional-conservatism.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18995"} +{"video_file": "HT2dAhh4uV_39018172.mp4", "openreview_id": "HT2dAhh4uV", "slideslive_id": 39018172, "venue": "iclr2024", "title": "Learning to Compose: Improving Object Centric Learning by Injecting Compositionality", "status": "Poster", "keywords": "Object-Centric learning;Compositionality", "tldr": "We propose a novel objective that explicitly encourages compositionality of the representations.", "abstract": "Learning compositional representation is a key aspect of object-centric learning as it enables flexible systematic generalization and supports complex visual reasoning. However, most of the existing approaches rely on auto-encoding objective, while the compositionality is implicitly imposed by the architectural or algorithmic bias in the encoder. This misalignment between auto-encoding objective and learning compositionality often results in failure of capturing meaningful object representations. In this study, we propose a novel objective that explicitly encourages compositionality of the representations. 
Built upon the existing object-centric learning framework (e.g., slot attention), our method incorporates additional constraints that an arbitrary mixture of object representations from two images should be valid by maximizing the likelihood of the composite data. We demonstrate that incorporating our objective to the existing framework consistently improves the objective-centric learning and enhances the robustness to the architectural choices.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18993"} +{"video_file": "HXWTXXtHNl_39018170.mp4", "openreview_id": "HXWTXXtHNl", "slideslive_id": 39018170, "venue": "iclr2024", "title": "Label-Noise Robust Diffusion Models", "status": "Poster", "keywords": "diffusion model;noisy label;robustness", "tldr": "We propose an approach to training diffusion models with noisy labels.", "abstract": "Conditional diffusion models have shown remarkable performance in various generative tasks, but training them requires large-scale datasets that often contain noise in conditional inputs, a.k.a. noisy labels. This noise leads to condition mismatch and quality degradation of generated data. This paper proposes Transition-aware weighted Denoising Score Matching (TDSM) for training conditional diffusion models with noisy labels, which is the first study in the line of diffusion models. The TDSM objective contains a weighted sum of score networks, incorporating instance-wise and time-dependent label transition probabilities. We introduce a transition-aware weight estimator, which leverages a time-dependent noisy-label classifier distinctively customized to the diffusion process. Through experiments across various datasets and noisy label settings, TDSM improves the quality of generated samples aligned with given conditions. Furthermore, our method improves generation performance even on prevalent benchmark datasets, which implies the potential noisy labels and their risk of generative model learning. Finally, we show the improved performance of TDSM on top of conventional noisy label corrections, which empirically proving its contribution as a part of label-noise robust generative models. Our code is available at: https://github.com/byeonghu-na/tdsm.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18991"} +{"video_file": "HXc5aXeoc8_39018169.mp4", "openreview_id": "HXc5aXeoc8", "slideslive_id": 39018169, "venue": "iclr2024", "title": "Diffusion Sampling with Momentum for Mitigating Divergence Artifacts", "status": "Poster", "keywords": "diffusion models;heavy ball momentum;divergence artifacts;numerical method;ode solver;image generation", "tldr": "This paper addresses the issue of unexpected divergence artifacts in diffusion sampling and proposes two novel techniques to address them", "abstract": "Despite the remarkable success of diffusion models in image generation, slow sampling remains a persistent issue. To accelerate the sampling process, prior studies have reformulated diffusion sampling as an ODE/SDE and introduced higher-order numerical methods. However, these methods often produce divergence artifacts, especially with a low number of sampling steps, which limits the achievable acceleration. In this paper, we investigate the potential causes of these artifacts and suggest that the small stability regions of these methods could be the principal cause. To address this issue, we propose two novel techniques. 
The first technique involves the incorporation of Heavy Ball (HB) momentum, a well-known technique for improving optimization, into existing diffusion numerical methods to expand their stability regions. We also prove that the resulting methods have first-order convergence. The second technique, called Generalized Heavy Ball (GHVB), constructs a new high-order method that offers a variable trade-off between accuracy and artifact suppression. Experimental results show that our techniques are highly effective in reducing artifacts and improving image quality, surpassing state-of-the-art diffusion solvers on both pixel-based and latent-based diffusion models for low-step sampling. Our research provides novel insights into the design of numerical methods for future diffusion work.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18990"} +{"video_file": "HZ3S17EI0o_39018167.mp4", "openreview_id": "HZ3S17EI0o", "slideslive_id": 39018167, "venue": "iclr2024", "title": "Set Learning for Accurate and Calibrated Models", "status": "Poster", "keywords": "set learning;calibration;overconfidence;class imbalance;long-tailed classification;low data;classification calibration;safety", "tldr": "We introduce odd-k-out learning (OKO), a novel, theoretically grounded training framework for classification based on learning from sets of data to yield accurate and well-calibrated models, even in low data regimes.", "abstract": "Model overconfidence and poor calibration are common in machine learning and difficult to account for when applying standard empirical risk minimization. In this work, we propose a novel method to alleviate these problems that we call odd-$k$-out learning (OKO), which minimizes the cross-entropy error for sets rather than for single examples. This naturally allows the model to capture correlations across data examples and achieves both better accuracy and calibration, especially in limited training data and class-imbalanced regimes. Perhaps surprisingly, OKO often yields better calibration even when training with hard labels and dropping any additional calibration parameter tuning, such as temperature scaling. We demonstrate this in extensive experimental analyses and provide a mathematical theory to interpret our findings. We emphasize that OKO is a general framework that can be easily adapted to many settings and a trained model can be applied to single examples at inference time, without significant run-time overhead or architecture changes.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18987"} +{"video_file": "HZndRcfyNI_39018166.mp4", "openreview_id": "HZndRcfyNI", "slideslive_id": 39018166, "venue": "iclr2024", "title": "Principled Architecture-aware Scaling of Hyperparameters", "status": "Poster", "keywords": "Hyperparameter Transfer;Neural Network Architecture;Neural Network Initialization;Learning Rate", "tldr": "We propose a principled and architecture-aware scaling rule for learning rate and initializations across a wide range of deep networks.", "abstract": "Training a high-quality deep neural network requires choosing suitable hyperparameters, which is a non-trivial and expensive process. Current works try to automatically optimize or design principles of hyperparameters, such that they can generalize to diverse unseen scenarios. 
However, most designs of principles or optimization methods are agnostic to the choice of network structures, and thus largely ignore the impact of neural architectures on hyperparameters. In this work, we precisely characterize the dependence of initializations and maximal learning rates on the network architecture, which includes the network depth, width, convolutional kernel size, and connectivity patterns. By pursuing every parameter to be maximally updated with the same mean squared change in pre-activations, we can generalize our initialization and learning rates across MLPs (multi-layer perception) and CNNs (convolutional neural network) with sophisticated graph topologies. We verify our principles with comprehensive experiments. More importantly, our strategy further sheds light on advancing current benchmarks for architecture design. A fair comparison of AutoML algorithms requires accurate network rankings. However, we demonstrate that network rankings can be easily changed by better training networks in benchmarks with our architecture-aware learning rates and initialization.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18986"} +{"video_file": "HiYMiZYwkw_39018159.mp4", "openreview_id": "HiYMiZYwkw", "slideslive_id": 39018159, "venue": "iclr2024", "title": "Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning", "status": "Poster", "keywords": "self-supervised learning;domain-agnostic learning;masked modeling;protein biology;chemistry;particle physics", "tldr": "Masking inputs based on a model's attention map can create a strong, domain-agnostic masking strategy for masked modeling.", "abstract": "Self-supervised learning excels in learning representations from large amounts of unlabeled data, demonstrating success across multiple data modalities. Yet, extending self-supervised learning to new modalities is non-trivial because the specifics of existing methods are tailored to each domain, such as domain-specific augmentations which reflect the invariances in the target task. While masked modeling is promising as a domain-agnostic framework for self-supervised learning because it does not rely on input augmentations, its mask sampling procedure remains domain-specific. We present Self-guided Masked Autoencoders (SMA), a fully domain-agnostic masked modeling method. SMA trains an attention based model using a masked modeling objective, by learning masks to sample without any domain-specific assumptions. We evaluate SMA on three self-supervised learning benchmarks in protein biology, chemical property prediction, and particle physics. 
We find SMA is capable of learning representations without domain-specific knowledge and achieves state-of-the-art performance on these three benchmarks.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18978"} +{"video_file": "I1quoTXZzc_39019232.mp4", "openreview_id": "I1quoTXZzc", "slideslive_id": 39019232, "venue": "iclr2024", "title": "Energy-Based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations", "status": "Poster", "keywords": "Interpretability;Concepts;Energy-Based Model;Probabilistic Methods", "tldr": "We introduce Energy-Based Concept Bottleneck Models as a unified framework for concept-based prediction, concept correction, and fine-grained interpretations based on conditional probabilities.", "abstract": "Existing methods, such as concept bottleneck models (CBMs), have been successful in providing concept-based interpretations for black-box deep learning models. They typically work by predicting concepts given the input and then predicting the final class label given the predicted concepts. However, (1) they often fail to capture the high-order, nonlinear interaction between concepts, e.g., correcting a predicted concept (e.g., \u201cyellow breast\u201d) does not help correct highly correlated concepts (e.g., \u201cyellow belly\u201d), leading to suboptimal final accuracy; (2) they cannot naturally quantify the complex conditional dependencies between different concepts and class labels (e.g., for an image with the class label \u201cKentucky Warbler\u201d and a concept \u201cblack bill\u201d, what is the probability that the model correctly predicts another concept \u201cblack crown\u201d), therefore failing to provide deeper insight into how a black-box model works. In response to these limitations, we propose Energy-based Concept Bottleneck Models (ECBMs). Our ECBMs use a set of neural networks to define the joint energy of candidate (input, concept, class) tuples. With such a unified interface, prediction, concept correction, and conditional dependency quantification are then represented as conditional probabilities, which are generated by composing different energy functions. Our ECBMs address both limitations of existing CBMs, providing higher accuracy and richer concept interpretations. Empirical results show that our approach outperforms the state-of-the-art on real-world datasets.", "primary_area": "visualization or interpretation of learned representations", "site": "https://iclr.cc/virtual/2024/poster/18975"} +{"video_file": "I2mIxuXA72_39018157.mp4", "openreview_id": "I2mIxuXA72", "slideslive_id": 39018157, "venue": "iclr2024", "title": "Understanding Domain Generalization: A Noise Robustness Perspective", "status": "Poster", "keywords": "out-of-distribution generalization;distribution shifts;spurious correlation;noise robustness", "tldr": "Label noise exacerbates the effect of spurious correlations for ERM. Invariance learning algorithms with label-noise robustness may improve the situation under certain circumstances.", "abstract": "Despite the rapid development of machine learning algorithms for domain generalization (DG), there is no clear empirical evidence that the existing DG algorithms outperform the classic empirical risk minimization (ERM) across standard benchmarks. 
To better understand this phenomenon, we investigate whether there are benefits of DG algorithms over ERM through the lens of label noise. Specifically, our finite-sample analysis reveals that label noise exacerbates the effect of spurious correlations for ERM, undermining generalization. Conversely, we illustrate that DG algorithms exhibit implicit label-noise robustness during finite-sample training even when spurious correlation is present. Such desirable property helps mitigate spurious correlations and improve generalization in synthetic experiments. However, additional comprehensive experiments on real-world benchmark datasets indicate that label-noise robustness does not necessarily translate to better performance compared to ERM. We conjecture that the failure mode of ERM arising from spurious correlations may be less pronounced in practice. Our code is available at https://github.com/qiaoruiyt/NoiseRobustDG", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18974"} +{"video_file": "IL71c1z7et_39018153.mp4", "openreview_id": "IL71c1z7et", "slideslive_id": 39018153, "venue": "iclr2024", "title": "Robot Fleet Learning via Policy Merging", "status": "Poster", "keywords": "Fleet Learning;Weight Merging;Multi-task Policy Learning", "tldr": "We investigated a \"bottom-up\" approach to learn robotic policies from a fleet of robots, by leveraging weight merging, and the method demonstrates strong results on settings such as linear control, Meta-World, and a novel robotic tool-use benchmark.", "abstract": "Fleets of robots ingest massive amounts of heterogeneous streaming data silos generated by interacting with their environments, far more than what can be stored or transmitted with ease. At the same time, teams of robots should co-acquire diverse skills through their heterogeneous experiences in varied settings. How can we enable such fleet-level learning without having to transmit or centralize fleet-scale data? In this paper, we investigate policy merging (PoMe) from such distributed heterogeneous datasets as a potential solution. To efficiently merge policies in the fleet setting, we propose FLEET-MERGE, an instantiation of distributed learning that accounts for the permutation invariance that arises when parameterizing the control policies with recurrent neural networks. We show that FLEET-MERGE consolidates the behavior of policies trained on 50 tasks in the Meta-World environment, with good performance on nearly all training tasks at test time. 
Moreover, we introduce a novel robotic tool-use benchmark, FLEET-TOOLS, for fleet policy learning in compositional and contact-rich robot manipulation tasks, to validate the efficacy of FLEET-MERGE on the benchmark.", "primary_area": "applications to robotics, autonomy, planning", "site": "https://iclr.cc/virtual/2024/poster/18969"} +{"video_file": "ILYjDvUM6U_39018152.mp4", "openreview_id": "ILYjDvUM6U", "slideslive_id": 39018152, "venue": "iclr2024", "title": "Uncertainty-aware Constraint Inference in Inverse Constrained Reinforcement Learning", "status": "Poster", "keywords": "Inverse Constrained Reinforcement Learning;Constrained Reinforcement Learning;Inverse Reinforcement Learning;Uncertainty Modeling", "tldr": "We proposed Uncertainty-aware Inverse Constrained Reinforcement Learning (UAICRL), a novel ICRL framework that models both the aleatoric and epistemic uncertainties towards uncertainty-aware constraint inference.", "abstract": "Aiming for safe control, Inverse Constrained Reinforcement Learning (ICRL) considers inferring the constraints respected by expert agents from their demonstrations and learning imitation policies that adhere to these constraints. While previous ICRL works often neglected underlying uncertainties during training, we contend that modeling these uncertainties is crucial for facilitating robust constraint inference. This insight leads to the development of an Uncertainty-aware Inverse Constrained Reinforcement Learning (UAICRL) algorithm. Specifically, 1) aleatoric uncertainty arises from the inherent stochasticity of environment dynamics, leading to constraint-violating behaviors in imitation policies. To address this, UAICRL constructs risk-sensitive constraints by incorporating distributional Bellman updates into the cumulative costs model. 2) Epistemic uncertainty, resulting from the model's limited knowledge of Out-of-Distribution (OoD) samples, affects the accuracy of step-wise cost predictions. To tackle this issue, UAICRL develops an information-theoretic quantification of the epistemic uncertainty and mitigates its impact through flow-based generative data augmentation. Empirical results demonstrate that UAICRL consistently outperforms other baselines in continuous and discrete environments with stochastic dynamics. The code is available at https://github.com/Jasonxu1225/UAICRL.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18968"} +{"video_file": "IRcv4yFX6z_39018833.mp4", "openreview_id": "IRcv4yFX6z", "slideslive_id": 39018833, "venue": "iclr2024", "title": "Learning Hierarchical Image Segmentation For Recognition and By Recognition", "status": "Spotlight", "keywords": "segmentation in the loop for recognition;hierarchical segmentation;part-to-whole recognition;vision transformer", "tldr": "We propose a learning framework that integrates segmentation in the loop for recognition, enabling concurrent hierarchical segmentation and recognition using a single model.", "abstract": "Large vision and language models learned directly through image-text associations often lack detailed visual substantiation, whereas image segmentation tasks are treated separately from recognition, supervisedly learned without interconnections.\nOur key observation is that, while an image can be recognized in multiple ways, each has a consistent part-and-whole visual organization. 
Segmentation thus should be treated not as an end task to be mastered through supervised learning, but as an internal process that evolves with and supports the ultimate goal of recognition.\nWe propose to integrate a hierarchical segmenter into the recognition process, {\\it train} and {\\it adapt} the entire model solely on image-level recognition objectives. We learn hierarchical segmentation {\\it for free} alongside recognition, automatically uncovering part-to-whole relationships that not only underpin but also enhance recognition.\nEnhancing the Vision Transformer (ViT) with adaptive segment tokens and graph pooling, our model surpasses ViT in unsupervised part-whole discovery, semantic segmentation, image classification, and efficiency. Notably, our model (trained on {\\it unlabeled} 1M ImageNet images) outperforms SAM (trained on 11M images and 1 billion masks) by absolute 8% in mIoU on PartImageNet object segmentation.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18964"}
{"video_file": "IYxDy2jDFL_39018981.mp4", "openreview_id": "IYxDy2jDFL", "slideslive_id": 39018981, "venue": "iclr2024", "title": "Improved Active Learning via Dependent Leverage Score Sampling", "status": "Oral", "keywords": "leverage score sampling;active learning;polynomial regression;differential equations;pivotal sampling", "tldr": "Better active learning (in theory and practice) in the presence of adversarial noise via non-independent leverage score sampling.", "abstract": "We show how to obtain improved active learning methods in the agnostic (adversarial noise) setting by combining marginal leverage score sampling with non-independent sampling strategies that promote spatial coverage. In particular, we propose an easily implemented method based on the \\emph{pivotal sampling algorithm}, which we test on problems motivated by learning-based methods for parametric PDEs and uncertainty quantification. In comparison to independent sampling, our method reduces the number of samples needed to reach a given target accuracy by up to 50%.\nWe support our findings with two theoretical results. First, we show that any non-independent leverage score sampling method that obeys a weak \\emph{one-sided \u2113_\u221e independence condition} (which includes pivotal sampling) can actively learn d-dimensional linear functions with O(d log d) samples, matching independent sampling. This result extends recent work on matrix Chernoff bounds under \u2113_\u221e independence, and may be of interest for analyzing other sampling strategies beyond pivotal sampling. 
Second, we show that, for the important case of polynomial regression, our pivotal method obtains an improved bound of O(d) samples.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18962"}
{"video_file": "IcR1OOFzxm_39018146.mp4", "openreview_id": "IcR1OOFzxm", "slideslive_id": 39018146, "venue": "iclr2024", "title": "Towards Generative Abstract Reasoning: Completing Raven\u2019s Progressive Matrix via Rule Abstraction and Selection", "status": "Poster", "keywords": "Deep Latent Variable Models;Generative Models;Raven\u2019s Progressive Matrix;Abstract Visual Reasoning", "tldr": "This paper proposes a novel deep latent variable model to solve generative RPM problems through rule abstraction and selection.", "abstract": "Endowing machines with abstract reasoning ability has been a long-term research topic in artificial intelligence. Raven's Progressive Matrix (RPM) is widely used to probe abstract visual reasoning in machine intelligence, where models will analyze the underlying rules and select one image from candidates to complete the image matrix. Participators of RPM tests can show powerful reasoning ability by inferring and combining attribute-changing rules and imagining the missing images at arbitrary positions of a matrix. However, existing solvers can hardly manifest such an ability in realistic RPM tests. In this paper, we propose a deep latent variable model for answer generation problems through Rule AbstractIon and SElection (RAISE). RAISE can encode image attributes into latent concepts and abstract atomic rules that act on the latent concepts. When generating answers, RAISE selects one atomic rule out of the global knowledge set for each latent concept to constitute the underlying rule of an RPM. In the experiments of bottom-right and arbitrary-position answer generation, RAISE outperforms the compared solvers in most configurations of realistic RPM datasets. In the odd-one-out task and two held-out configurations, RAISE can leverage acquired latent concepts and atomic rules to find the rule-breaking image in a matrix and handle problems with unseen combinations of rules and attributes.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18960"}
{"video_file": "IcVNBR7qZi_39018963.mp4", "openreview_id": "IcVNBR7qZi", "slideslive_id": 39018963, "venue": "iclr2024", "title": "Vanishing Gradients in Reinforcement Finetuning of Language Models", "status": "Poster", "keywords": "Vanishing Gradients;Reinforcement Finetuning;Supervised Finetuning;Language Models", "tldr": "We uncover a fundamental vanishing gradients problem in reinforcement finetuning of language models, demonstrate its prevalence and detrimental effects, and explore possible solutions.", "abstract": "Pretrained language models are commonly aligned with human preferences and downstream tasks via reinforcement finetuning (RFT), which refers to maximizing a (possibly learned) reward function using policy gradient algorithms. This work identifies a fundamental optimization obstacle in RFT: we prove that the expected gradient for an input vanishes when its reward standard deviation under the model is small, even if the expected reward is far from optimal. 
Through experiments on an RFT benchmark and controlled environments, as well as a theoretical analysis, we then demonstrate that vanishing gradients due to small reward standard deviation are prevalent and detrimental, leading to extremely slow reward maximization. Lastly, we explore ways to overcome vanishing gradients in RFT. We find the common practice of an initial supervised finetuning (SFT) phase to be the most promising candidate, which sheds light on its importance in an RFT pipeline. Moreover, we show that a relatively small number of SFT optimization steps on as few as 1% of the input samples can suffice, indicating that the initial SFT phase need not be expensive in terms of compute and data labeling efforts. Overall, our results emphasize that being mindful for inputs whose expected gradient vanishes, as measured by the reward standard deviation, is crucial for successful execution of RFT.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18959"} +{"video_file": "IjMUGuUmBI_39018820.mp4", "openreview_id": "IjMUGuUmBI", "slideslive_id": 39018820, "venue": "iclr2024", "title": "GraphChef: Decision-Tree Recipes to Explain Graph Neural Networks", "status": "Poster", "keywords": "Graph Neural Networks;GNN;Explainability;Decision Trees", "tldr": "GraphChef integrate Decision Trees into Graph Neural Networks to allow explaining the full decision process.", "abstract": "We propose a new self-explainable Graph Neural Network (GNN) model: GraphChef. GraphChef integrates decision trees into the GNN message passing framework. Given a dataset, GraphChef returns a set of rules (a recipe) that explains each class in the dataset unlike existing GNNs and explanation methods that reason on individual graphs. Thanks to the decision trees, GraphChef recipes are human understandable. We also present a new pruning method to produce small and easy to digest trees. Experiments demonstrate that GraphChef reaches comparable accuracy to not self-explainable GNNs and produced decision trees are indeed small. We further validate the correctness of the discovered recipes on datasets where explanation ground truth is available: Reddit-Binary, MUTAG, BA-2Motifs, BA-Shapes, Tree-Cycle, and Tree-Grid.", "primary_area": "visualization or interpretation of learned representations", "site": "https://iclr.cc/virtual/2024/poster/18957"} +{"video_file": "IoKRezZMxF_39018144.mp4", "openreview_id": "IoKRezZMxF", "slideslive_id": 39018144, "venue": "iclr2024", "title": "Consistent Video-to-Video Transfer Using Synthetic Dataset", "status": "Poster", "keywords": "Computer Vision;Video Editing;Diffusion Model", "tldr": "We've developed a synthetic dataset to train a text-based video editing model, eliminating the need for per-video fine-tuning, and introduced a method for seamless long video editing.", "abstract": "We introduce a novel and efficient approach for text-based video-to-video editing that eliminates the need for resource-intensive per-video-per-model finetuning. At the core of our approach is a synthetic paired video dataset tailored for video-to-video transfer tasks. Inspired by Instruct Pix2Pix's image transfer via editing instruction, we adapt this paradigm to the video domain. Extending the Prompt-to-Prompt to videos, we efficiently generate paired samples, each with an input video and its edited counterpart. Alongside this, we introduce the Long Video Sampling Correction during sampling, ensuring consistent long videos across batches. 
Our method surpasses current methods like Tune-A-Video, heralding substantial progress in text-based video-to-video editing and suggesting exciting avenues for further exploration and deployment.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18955"}
{"video_file": "Ixi4j6LtdX_39018142.mp4", "openreview_id": "Ixi4j6LtdX", "slideslive_id": 39018142, "venue": "iclr2024", "title": "A Good Learner can Teach Better: Teacher-Student Collaborative Knowledge Distillation", "status": "Poster", "keywords": "Knowledge Distillation;Meta-Knowledge Distillation;Policy-driven Knowledge Distillation;Large Language Models", "tldr": "The paper introduces collaborative joint loss and curriculum learning for meta-teacher knowledge distillation", "abstract": "Knowledge distillation (KD) is a technique used to transfer knowledge from a larger ''teacher'' model into a smaller ''student'' model. Recent advancements in meta-learning-based knowledge distillation (MetaKD) emphasize that the fine-tuning of teacher models should be aware of the student's need to achieve better knowledge distillation. However, existing MetaKD methods often lack incentives for the teacher model to improve itself. In this study, we introduce MPDistil, a meta-policy distillation technique, that utilizes novel optimization strategies to foster both collaboration and competition during the fine-tuning of the teacher model in the meta-learning step. Additionally, we propose a curriculum learning framework for the student model in a competitive setup, in which the student model aims to outperform the teacher model by self-training on various tasks. Exhaustive experiments on SuperGLUE and GLUE benchmarks demonstrate the efficacy of MPDistil compared to 20 conventional KD and advanced MetaKD baselines, showing significant performance enhancements in the student model -- e.g., a distilled 6-layer BERT model outperforms a 12-layer BERT model on five out of six SuperGLUE tasks. Furthermore, MPDistil, while applied to a large language teacher model (DeBERTa-v2-xxlarge), significantly narrows the performance gap of its smaller student counterpart (DeBERTa-12) by just 4.6% on SuperGLUE. We further demonstrate how higher rewards and customized training curricula strengthen the student model and enhance generalizability.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/18953"}
{"video_file": "Iyve2ycvGZ_39018141.mp4", "openreview_id": "Iyve2ycvGZ", "slideslive_id": 39018141, "venue": "iclr2024", "title": "Bellman Optimal Stepsize Straightening of Flow-Matching Models", "status": "Poster", "keywords": "flow matching;generative model;efficient sampling;distillation;responsible ML", "tldr": "This paper introduces Bellman Optimal Step-size Straightening (BOSS) technique for distilling flow-matching generative models while adhering to a computational budget constraint.", "abstract": "Flow matching is a powerful framework for generating high-quality samples in various applications, especially image synthesis. However, the intensive computational demands of these models, especially during the finetuning process and sampling processes, pose significant challenges for low-resource scenarios. This paper introduces Bellman Optimal Stepsize Straightening (BOSS) technique for distilling flow-matching generative models: it aims specifically for a few-step efficient image sampling while adhering to a computational budget constraint. 
First, this technique involves a dynamic programming algorithm that optimizes the stepsizes of the pretrained network. Then, it refines the velocity network to match the optimal step sizes, aiming to straighten the generation paths. Extensive experimental evaluations across image generation tasks demonstrate the efficacy of BOSS in terms of both resource utilization and image quality. Our results reveal that BOSS achieves substantial gains in efficiency while maintaining competitive sample quality, effectively bridging the gap between low-resource constraints and the demanding requirements of flow-matching generative models. Our paper also fortifies the responsible development of artificial intelligence, offering a more sustainable generative model that reduces computational costs and environmental footprints. Our code can be found at https://github.com/nguyenngocbaocmt02/BOSS.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18951"} +{"video_file": "J1djqLAa6N_39018139.mp4", "openreview_id": "J1djqLAa6N", "slideslive_id": 39018139, "venue": "iclr2024", "title": "Efficient Score Matching with Deep Equilibrium Layers", "status": "Poster", "keywords": "score matching;deep equilibrium model;density estimation", "tldr": "We improve the memory efficiency of score matching by leveraging deep equilibrium models", "abstract": "Score matching methods -- estimate probability densities without computing the normalization constant -- are particularly useful in deep learning. However, computational and memory costs of score matching methods can be prohibitive for high-dimensional data or complex models, particularly due to the derivatives or Hessians of the log density function appearing in the objective function. Some existing approaches modify the objective function to reduce the quadratic computational complexity for Hessian computation. However, the memory bottleneck of score matching methods remains for deep learning. This study improves the memory efficiency of score matching by leveraging deep equilibrium models. We provide a theoretical analysis of deep equilibrium models for scoring matching and applying implicit differentiation to higher-order derivatives. Empirical evaluations demonstrate that our approach enables the development of deep and expressive models with improved performance and comparable computational and memory costs over shallow architectures.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18949"} +{"video_file": "JN7TcCm9LF_39017127.mp4", "openreview_id": "JN7TcCm9LF", "slideslive_id": 39017127, "venue": "iclr2024", "title": "Koopman-based generalization bound: New aspect for full-rank weights", "status": "Poster", "keywords": "generalization bound;full-rank weight matrix;Koopman operator", "tldr": "We propose a new bound for generalization of neural networks, which sheds light on a new perspective regarding full-rank weight matrices and provides a connection between operator-theoretic analysis and generalization of neural networks.", "abstract": "We propose a new bound for generalization of neural networks using Koopman operators. Whereas most of existing works focus on low-rank weight matrices, we focus on full-rank weight matrices. Our bound is tighter than existing norm-based bounds when the condition numbers of weight matrices are small. 
Especially, it is completely independent of the width of the network if the weight matrices are orthogonal. Our bound does not contradict to the existing bounds but is a complement to the existing bounds. As supported by several existing empirical results, low-rankness is not the only reason for generalization. Furthermore, our bound can be combined with the existing bounds to obtain a tighter bound. Our result sheds new light on understanding generalization of neural networks with full-rank weight matrices, and it provides a connection between operator-theoretic analysis and generalization of neural networks.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/18944"}
{"video_file": "JO7k0SJ5V6_39017041.mp4", "openreview_id": "JO7k0SJ5V6", "slideslive_id": 39017041, "venue": "iclr2024", "title": "Scaling Laws of RoPE-based Extrapolation", "status": "Poster", "keywords": "Position Embeddin;Length Extrapolation;Large Language Model;Natural Language Processing", "tldr": "In this work, we propose a unified framework from the prospective of period, to explain the mechanism of RoPE-based extrapolation by whether increasing or decreasing the rotary base.", "abstract": "The extrapolation capability of Large Language Models (LLMs) based on Rotary Position Embedding \\citep{su2021roformer} is currently a topic of considerable interest. The mainstream approach to addressing extrapolation with LLMs involves modifying RoPE by replacing 10000, the rotary base of \u03b8_n = 10000^{-2n/d} in the original RoPE, with a larger value and providing longer fine-tuning text. In this work, we first observe that fine-tuning a RoPE-based LLM with either a smaller or larger base in pre-training context length could significantly enhance its extrapolation performance. After that, we propose \\textbf{\\textit{Scaling Laws of RoPE-based Extrapolation}}, a unified framework from the periodic perspective, to describe the relationship between the extrapolation performance and base value as well as tuning context length. In this process, we also explain the origin of the RoPE-based extrapolation issue by \\textbf{\\textit{critical dimension for extrapolation}}. Besides these observations and analyses, we achieve extrapolation up to 1 million context length within only 16K training length on LLaMA2 7B and 13B \\citep{touvron2023llama2}.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18943"}
{"video_file": "JW3jTjaaAB_39018937.mp4", "openreview_id": "JW3jTjaaAB", "slideslive_id": 39018937, "venue": "iclr2024", "title": "AirPhyNet: Harnessing Physics-Guided Neural Networks for Air Quality Prediction", "status": "Poster", "keywords": "air quality prediction;physics-informed;spatiotemporal-learning;interpretability", "tldr": "AirPhyNet is a physics-guided deep learning framework for air quality prediction. It shows superior performance in lead times upto 72-hours especially in sparse data scenarios while generating forecasts with a real physical meaning.", "abstract": "Air quality prediction and modelling plays a pivotal role in public health and environment management, for individuals and authorities to make informed decisions. 
Although traditional data-driven models have shown promise in this domain, their long-term prediction accuracy can be limited, especially in scenarios with sparse or incomplete data and they often rely on black-box deep learning structures that lack solid physical foundation leading to reduced transparency and interpretability in predictions. To address these limitations, this paper presents a novel approach named Physics guided Neural Network for Air Quality Prediction (AirPhyNet). Specifically, we leverage two well-established physics principles of air particle movement (diffusion and advection) by representing them as differential equation networks. Then, we utilize a graph structure to integrate physics knowledge into a neural network architecture and exploit latent representations to capture spatio-temporal relationships within the air quality data. Experiments on two real-world benchmark datasets demonstrate that AirPhyNet outperforms state-of-the-art models for different testing scenarios including different lead time (24h, 48h, 72h), sparse data and sudden change prediction, achieving reduction in prediction errors up to 10%. Moreover, a case study further validates that our model captures underlying physical processes of particle movement and generates accurate predictions with real physical meaning. The code is available at: https://github.com/kethmih/AirPhyNet", "primary_area": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18940"} +{"video_file": "JYu5Flqm9D_39017106.mp4", "openreview_id": "JYu5Flqm9D", "slideslive_id": 39017106, "venue": "iclr2024", "title": "Towards Codable Watermarking for Injecting Multi-Bits Information to LLMs", "status": "Poster", "keywords": "text watermarking;large language model;codable;systematic study", "tldr": "This paper conducts a systematic study on the topic of Codable Text Watermarking for LLMs (CTWL) and propose a novel method Balance-Marking for CTWL.", "abstract": "As large language models (LLMs) generate texts with increasing fluency and realism, there is a growing need to identify the source of texts to prevent the abuse of LLMs. Text watermarking techniques have proven reliable in distinguishing whether a text is generated by LLMs by injecting hidden patterns. However, we argue that existing LLM watermarking methods are encoding-inefficient and cannot flexibly meet the diverse information encoding needs (such as encoding model version, generation time, user id, etc.). In this work, we conduct the first systematic study on the topic of Codable Text Watermarking for LLMs (CTWL) that allows text watermarks to carry multi-bit customizable information. First of all, we study the taxonomy of LLM watermarking technologies and give a mathematical formulation for CTWL. Additionally, we provide a comprehensive evaluation system for CTWL: (1) watermarking success rate, (2) robustness against various corruptions, (3) coding rate of payload information, (4) encoding and decoding efficiency, (5) impacts on the quality of the generated text. To meet the requirements of these non-Pareto-improving metrics, we follow the most prominent vocabulary partition-based watermarking direction, and devise an advanced CTWL method named Balance-Marking. The core idea of our method is to use a proxy language model to split the vocabulary into probability-balanced parts, thereby effectively maintaining the quality of the watermarked text. 
Our code is available at https://github.com/lancopku/codable-watermarking-for-llm.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/18937"} +{"video_file": "Je5SHCKpPa_39018136.mp4", "openreview_id": "Je5SHCKpPa", "slideslive_id": 39018136, "venue": "iclr2024", "title": "Multimodal Patient Representation Learning with Missing Modalities and Labels", "status": "Poster", "keywords": "multi-modal learning;missing modalities;missing labels;clinical predictive modeling;patient representation learning", "tldr": "We propose MUSE to effectively learn representations for patients with missing modalities and labels.", "abstract": "Multimodal patient representation learning aims to integrate information from multiple modalities and generate comprehensive patient representations for subsequent clinical predictive tasks. However, many existing approaches either presuppose the availability of all modalities and labels for each patient or only deal with missing modalities. In reality, patient data often comes with both missing modalities and labels for various reasons (i.e., the missing modality and label issue). Moreover, multimodal models might over-rely on certain modalities, causing sub-optimal performance when these modalities are absent (i.e., the modality collapse issue). To address these issues, we introduce MUSE: a mutual-consistent graph contrastive learning method. MUSE uses a flexible bipartite graph to represent the patient-modality relationship, which can adapt to various missing modality patterns. To tackle the modality collapse issue, MUSE learns to focus on modality-general and label-decisive features via a mutual-consistent contrastive learning loss. Notably, the unsupervised component of the contrastive objective only requires self-supervision signals, thereby broadening the training scope to incorporate patients with missing labels. We evaluate MUSE on three publicly available datasets: MIMIC-IV, eICU, and ADNI. Results show that MUSE outperforms all baselines, and MUSE+ further elevates the absolute improvement to ~4% by extending the training scope to patients with absent labels.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18934"} +{"video_file": "JgqftqZQZ7_39018633.mp4", "openreview_id": "JgqftqZQZ7", "slideslive_id": 39018633, "venue": "iclr2024", "title": "FLATTEN: optical FLow-guided ATTENtion for consistent text-to-video editing", "status": "Poster", "keywords": "diffusion model;video editing;text-to-video", "tldr": "This paper presents a training-free framework for high consistent text-to-video editing by integrating optical flow into attention modules.", "abstract": "Text-to-video editing aims to edit the visual appearance of a source video conditional on textual prompts. A major challenge in this task is to ensure that all frames in the edited video are visually consistent. Most recent works apply advanced text-to-image diffusion models to this task by inflating 2D spatial attention in the U-Net into spatio-temporal attention. Although temporal context can be added through spatio-temporal attention, it may introduce some irrelevant information for each patch and therefore cause inconsistency in the edited video. In this paper, for the first time, we introduce optical flow into the attention module in diffusion model's U-Net to address the inconsistency issue for text-to-video editing. 
Our method, FLATTEN, enforces the patches on the same flow path across different frames to attend to each other in the attention module, thus improving the visual consistency in the edited videos. Additionally, our method is training-free and can be seamlessly integrated into any diffusion based text-to-video editing methods and improve their visual consistency. Experiment results on existing text-to-video editing benchmarks show that our proposed method achieves the new state-of-the-art performance. In particular, our method excels in maintaining the visual consistency in the edited videos.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18929"} +{"video_file": "JnYaF3vv3G_39018132.mp4", "openreview_id": "JnYaF3vv3G", "slideslive_id": 39018132, "venue": "iclr2024", "title": "LabelDP-Pro: Learning with Label Differential Privacy via Projections", "status": "Poster", "keywords": "Differential Privacy;Label Differential Privacy;Projections", "tldr": "We propose a family of label DP training algorithms that use projections to denoise the private gradients and achieve better utility in the high-privacy regime.", "abstract": "Label differentially private (label DP) algorithms seek to preserve the privacy of the labels in a training dataset in settings where the features are known to the adversary. In this work, we study a new family of label DP training algorithms. Unlike most prior label DP algorithms that have been based on label randomization, our algorithm naturally leverages the power of the central model of DP. It interleaves gradient projection operations with private stochastic gradient descent steps in order to improve the utility of the trained model while guaranteeing the privacy of the labels. We show that such projection-based algorithms can be made practical and that they improve on the state-of-the art for label DP training in the high-privacy regime. We complement our empirical evaluation with theoretical results shedding light on the efficacy of our method through the lens of bias-variance trade-offs.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/18926"} +{"video_file": "JrmPG9ufKg_39018661.mp4", "openreview_id": "JrmPG9ufKg", "slideslive_id": 39018661, "venue": "iclr2024", "title": "A Mutual Information Perspective on Federated Contrastive Learning", "status": "Spotlight", "keywords": "federated learning;contrastive learning;self-supervised;semi-supervised;mutual information", "tldr": "We extend SimCLR to the unsupervised / semi-supervised federated learning setting through a mutual information lens and study how it interacts with different sources of non-i.i.d. data..", "abstract": "We investigate contrastive learning in the federated setting through the lens of Sim- CLR and multi-view mutual information maximization. In doing so, we uncover a connection between contrastive representation learning and user verification; by adding a user verification loss to each client\u2019s local SimCLR loss we recover a lower bound to the global multi-view mutual information. To accommodate for the case of when some labelled data are available at the clients, we extend our SimCLR variant to the federated semi-supervised setting. 
We see that a supervised SimCLR objective can be obtained with two changes: a) the contrastive loss is computed between datapoints that share the same label and b) we require an additional auxiliary head that predicts the correct labels from either of the two views. Along with the proposed SimCLR extensions, we also study how different sources of non-i.i.d.-ness can impact the performance of federated unsupervised learning through global mutual information maximization; we find that a global objective is beneficial for some sources of non-i.i.d.-ness but can be detrimental for others. We empirically evaluate our proposed extensions in various tasks to validate our claims and furthermore demonstrate that our proposed modifications generalize to other pretraining methods.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18925"} +{"video_file": "JsnR0YO4Fq_39018131.mp4", "openreview_id": "JsnR0YO4Fq", "slideslive_id": 39018131, "venue": "iclr2024", "title": "Exploring Weight Balancing on Long-Tailed Recognition Problem", "status": "Poster", "keywords": "long-tailed recognition;imbalanced learning;weight decay;regularization;neural collapse;simplex ETF;machine learning;learning theory", "tldr": "We theoretically and empirically analyze weight balancing in long-tailed recognition, which leads to the further improvement of performance.", "abstract": "Recognition problems in long-tailed data, in which the sample size per class is heavily skewed, have gained importance because the distribution of the sample size per class in a dataset is generally exponential unless the sample size is intentionally adjusted. Various methods have been devised to address these problems. Recently, weight balancing, which combines well-known classical regularization techniques with two-stage training, has been proposed. Despite its simplicity, it is known for its high performance compared with existing methods devised in various ways. However, there is a lack of understanding as to why this method is effective for long-tailed data. In this study, we analyze weight balancing by focusing on neural collapse and the cone effect at each training stage and found that it can be decomposed into an increase in Fisher's discriminant ratio of the feature extractor caused by weight decay and cross entropy loss and implicit logit adjustment caused by weight decay and class-balanced loss. Our analysis enables the training method to be further simplified by reducing the number of training stages to one while increasing accuracy. Code is available at https://github.com/HN410/Exploring-Weight-Balancing-on-Long-Tailed-Recognition-Problem.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18923"} +{"video_file": "JzG7kSpjJk_39017061.mp4", "openreview_id": "JzG7kSpjJk", "slideslive_id": 39017061, "venue": "iclr2024", "title": "Rethinking Channel Dimensions to Isolate Outliers for Low-bit Weight Quantization of Large Language Models", "status": "Poster", "keywords": "large language models;quantization;model compression", "tldr": "Augmenting low-bit weight quantization methods with adaptive per-channel quantization", "abstract": "Large Language Models (LLMs) have recently demonstrated a remarkable success across various tasks. 
However, efficiently serving LLMs has been a challenge due to its large memory bottleneck, specifically in small batch inference settings (e.g. mobile devices). Weight-only quantization can be a promising approach, but sub-4 bit quantization remains a challenge due to large-magnitude activation outliers. To mitigate the undesirable outlier effect, we first propose per-IC quantization, a simple yet effective method that creates quantization groups within each input channel (IC) rather than the conventional per-output channel (OC). Our method is motivated by the observation that activation outliers affect the input dimension of the weight matrix, so similarly grouping the weights in the IC direction can isolate outliers to be within a group. We also find that activation outliers do not dictate quantization difficulty, and inherent weight sensitivities also exist. With per-IC quantization as a new outlier-friendly scheme, we then propose Adaptive Dimensions (AdaDim), a versatile quantization framework that can adapt to various weight sensitivity patterns. We demonstrate the effectiveness of AdaDim by augmenting prior methods such as Round-To-Nearest and GPTQ, showing significant improvements across various language modeling benchmarks for both base (up to +4.7 on MMLU) and instruction-tuned (up to +10 on HumanEval) LLMs.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18921"}
{"video_file": "K9V7ugVuUz_39018125.mp4", "openreview_id": "K9V7ugVuUz", "slideslive_id": 39018125, "venue": "iclr2024", "title": "Robust Similarity Learning with Difference Alignment Regularization", "status": "Poster", "keywords": "contrastive learning;metric learning;regularization", "tldr": "A new regularization approach to deal with inconsistent differences in similarity learning", "abstract": "Similarity-based representation learning has shown impressive capabilities in both supervised (e.g., metric learning) and unsupervised (e.g., contrastive learning) scenarios. Existing approaches effectively constrained the representation difference (i.e., the disagreement between the embeddings of two instances) to fit the corresponding (pseudo) similarity supervision. However, most of them can hardly restrict the variation of representation difference, sometimes leading to overfitting results where the clusters are disordered by drastically changed differences. In this paper, we thus propose a novel difference alignment regularization (DAR) to encourage all representation differences between inter-class instances to be as close as possible, so that the learning algorithm can produce consistent differences to distinguish data points from each other. To this end, we construct a new cross-total-variation (CTV) norm to measure the divergence among representation differences, and we convert it into an equivalent stochastic form for easy optimization. Then, we integrate the proposed regularizer into the empirical loss for difference-aligned similarity learning (DASL), shrinking the hypothesis space and alleviating overfitting. Theoretically, we prove that our regularizer tightens the error bound of the traditional similarity learning. 
Experiments on multi-domain data demonstrate the superiority of DASL over existing approaches in both supervised metric learning and unsupervised contrastive learning tasks.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18916"} +{"video_file": "KI9NqjLVDT_39017085.mp4", "openreview_id": "KI9NqjLVDT", "slideslive_id": 39017085, "venue": "iclr2024", "title": "ReMasker: Imputing Tabular Data with Masked Autoencoding", "status": "Poster", "keywords": "Masked modeling; Tabular data imputation", "tldr": "A simple yet effective imputation method", "abstract": "We present ReMasker, a new method of imputing missing values in tabular data by extending the masked autoencoding framework. Compared with prior work, ReMasker is extremely simple -- besides the missing values (i.e., naturally masked), we randomly \"re-mask\" another set of values, optimize the autoencoder by reconstructing this re-masked set, and apply the trained model to predict the missing values; and yet highly effective -- with extensive evaluation on benchmark datasets, we show that ReMasker performs on par with or outperforms state-of-the-art methods in terms of both imputation fidelity and utility under various missingness settings, while its performance advantage often increases with the ratio of missing data. We further explore theoretical justification for its effectiveness, showing that ReMasker tends to learn missingness-invariant representations of tabular data. Our findings indicate that masked modeling represents a promising direction for further research on tabular data imputation. The code is publicly available.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18911"} +{"video_file": "KOZu91CzbK_39018842.mp4", "openreview_id": "KOZu91CzbK", "slideslive_id": 39018842, "venue": "iclr2024", "title": "Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization", "status": "Spotlight", "keywords": "Language Agent;AI Agent;Reinforcement Learning", "tldr": "This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient.", "abstract": "Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. 
Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment.", "primary_area": "applications to robotics, autonomy, planning", "site": "https://iclr.cc/virtual/2024/poster/18908"} +{"video_file": "KQe9tHd0k8_39018828.mp4", "openreview_id": "KQe9tHd0k8", "slideslive_id": 39018828, "venue": "iclr2024", "title": "Learning from Label Proportions: Bootstrapping Supervised Learners via Belief Propagation", "status": "Poster", "keywords": "Learning from Label Proportions;Belief Propagation;Pseudo-Labeling;Embedding Learning", "tldr": "Pseudo-labeling through BP leveraging unsupervised covariate information for LLP", "abstract": "Learning from Label Proportions (LLP) is a learning problem where only aggregate level labels are available for groups of instances, called bags, during training, and the aim is to get the best performance at the instance-level on the test data. This setting arises in domains like advertising and medicine due to privacy considerations. We propose a novel algorithmic framework for this problem that iteratively performs two main steps. For the first step (Pseudo Labeling) in every iteration, we define a Gibbs distribution over binary instance labels that incorporates a) covariate information through the constraint that instances with similar covariates should have similar labels and b) the bag level aggregated label. We then use Belief Propagation (BP) to marginalize the Gibbs distribution to obtain pseudo labels. In the second step (Embedding Refinement), we use the pseudo labels to provide supervision for a learner that yields a better embedding. Further, we iterate on the two steps again by using the second step's embeddings as new covariates for the next iteration. In the final iteration, a classifier is trained using the pseudo labels. Our algorithm displays strong gains against several SOTA baselines (upto 15%) for the LLP Binary Classification problem on various dataset types - tabular and Image. We achieve these improvements with minimal computational overhead above standard supervised learning due to Belief Propagation, for large bag sizes, even for a million samples.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18905"} +{"video_file": "KZSEgJGPxu_39017198.mp4", "openreview_id": "KZSEgJGPxu", "slideslive_id": 39017198, "venue": "iclr2024", "title": "SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training", "status": "Spotlight", "keywords": "Symbolic Mathematics;Pre-training;Transformers;Symbolic Regression;Deep Learning", "tldr": "We introduce a multi-modal pre-training framework to enable mutual understanding between mathematical symbolic expressions and their numeric counterparts.", "abstract": "In an era where symbolic mathematical equations are indispensable for modeling complex natural phenomena, scientific inquiry often involves collecting observations and translating them into mathematical expressions. Recently, deep learning has emerged as a powerful tool for extracting insights from data. However, existing models typically specialize in either numeric or symbolic domains, and are usually trained in a supervised manner tailored to specific tasks. 
This approach neglects the substantial benefits that could arise from a task-agnostic multi-modal understanding between symbolic equations and their numeric counterparts. To bridge the gap, we introduce SNIP, a Symbolic-Numeric Integrated Pre-training model, which employs contrastive learning between symbolic and numeric domains, enhancing their mutual similarities in the embeddings. By performing latent space analysis, we observe that SNIP provides cross-domain insights into the representations, revealing that symbolic supervision enhances the embeddings of numeric data and vice versa. We evaluate SNIP across diverse tasks, including symbolic-to-numeric mathematical property prediction and numeric-to-symbolic equation discovery, commonly known as symbolic regression. Results show that SNIP effectively transfers to various tasks, consistently outperforming fully supervised baselines and competing strongly with established task-specific methods, especially in the low data regime scenarios where available data is limited.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18896"} +{"video_file": "KrtGfTGaGe_39018738.mp4", "openreview_id": "KrtGfTGaGe", "slideslive_id": 39018738, "venue": "iclr2024", "title": "The Wasserstein Believer: Learning Belief Updates for Partially Observable Environments through Reliable Latent Space Models", "status": "Poster", "keywords": "pomdp;guarantees;representation learning;reinforcement learning", "tldr": "Wasserstein Belief Updater is an RNN free RL algorithm for POMDPs that learns a representation of the history via an approximation of the belief update in a reliable latent space model, providing theoretical guarantees for learning the optimal value.", "abstract": "Partially Observable Markov Decision Processes (POMDPs) are used to model environments where the state cannot be perceived, necessitating reasoning based on past observations and actions. However, remembering the full history is generally intractable due to the exponential growth in the history space. Maintaining a probability distribution that models the belief over the current state can be used as a sufficient statistic of the history, but its computation requires access to the model of the environment and is often intractable. While SOTA algorithms use Recurrent Neural Networks to compress the observation-action history aiming to learn a sufficient statistic, they lack guarantees of success and can lead to sub-optimal policies. To overcome this, we propose the Wasserstein Belief Updater, an RL algorithm that learns a latent model of the POMDP and an approximation of the belief update under the assumption that the state is observable during training. Our approach comes with theoretical guarantees on the quality of our approximation ensuring that our latent beliefs allow for learning the optimal value function.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18882"} +{"video_file": "Kuj5gVp5GQ_39018100.mp4", "openreview_id": "Kuj5gVp5GQ", "slideslive_id": 39018100, "venue": "iclr2024", "title": "Accelerating Sinkhorn algorithm with sparse Newton iterations", "status": "Poster", "keywords": "Optimal transport;Convex optimization;Quasi-Newton methods;Non-asymptotic analysis;Extremal combinatorics", "tldr": "A quasi-Newton method with a sparse approximation of a Hessian matrix greatly accelerates entropic optimal transport in both iteration count and in runtime. 
Numerical analysis is provided which corroborates the algorithm.", "abstract": "Computing the optimal transport distance between statistical distributions is a fundamental task in machine learning. One remarkable recent advancement is entropic regularization and the Sinkhorn algorithm, which utilizes only matrix scaling and guarantees an approximated solution with near-linear runtime. Despite the success of the Sinkhorn algorithm, its runtime may still be slow due to the potentially large number of iterations needed for convergence. To achieve possibly super-exponential convergence, we introduce Sinkhorn-Newton-Sparse (SNS), an extension to the Sinkhorn algorithm, by introducing early stopping for the matrix scaling steps and a second stage featuring a Newton-type subroutine. Adopting the variational viewpoint that the Sinkhorn algorithm maximizes a concave Lyapunov potential, we offer the insight that the Hessian matrix of the potential function is approximately sparse. Sparsification of the Hessian results in a fast O(n^2) per-iteration complexity, the same as the Sinkhorn algorithm. In terms of total iteration count, we observe that the SNS algorithm converges orders of magnitude faster across a wide range of practical cases, including optimal transportation between empirical distributions and calculating the Wasserstein W_1, W_2 distance of discretized continuous densities. The empirical performance is corroborated by a rigorous bound on the approximate sparsity of the Hessian matrix.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18878"}
{"video_file": "Kz3yckpCN5_39018099.mp4", "openreview_id": "Kz3yckpCN5", "slideslive_id": 39018099, "venue": "iclr2024", "title": "The False Promise of Imitating Proprietary Language Models", "status": "Spotlight", "keywords": "Language Models;Model Imitation;Distillation;Instruction-Tuning", "tldr": "We critically analyze the performance of large language models that are trained to imitate ChatGPT (e.g., Alpaca, Vicuna).", "abstract": "An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self-Instruct, and others). In this work, we critically analyze this approach of imitating language models. We first finetune a series of LMs that imitate ChatGPT using varying base model sizes (1.5B--13B), data sources, and imitation data amounts (0.3M--150M tokens). We then evaluate the models using crowd raters and canonical NLP benchmarks. Initially, we were surprised by the output quality of our imitation models---they appear far better at following instructions, and crowd workers rate their outputs as competitive with ChatGPT. However, when conducting more targeted automatic evaluations, we find that imitation models close little to none of the gap from the base LM to ChatGPT on tasks that are not heavily supported in the imitation data. We show that these performance discrepancies may slip past human raters because imitation models are adept at mimicking ChatGPT\u2019s style but not its factuality. Overall, we conclude that while model imitation can be useful for training models to follow instructions and avoid toxic outputs, it falls short its full promise in many ways. In particular, there exists a substantial capabilities gap between open and closed LMs that we find cannot be bridged merely by adding more imitation data. 
Instead, we find that fine-tuning more capable base LMs has a significantly more substantial effect on closing this gap. In turn, we argue that the higher leverage action for improving open-source models is to tackle the difficult challenge of developing better base LMs, rather than taking the shortcut of imitating proprietary systems.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/18877"} +{"video_file": "L0r0GphlIL_39018098.mp4", "openreview_id": "L0r0GphlIL", "slideslive_id": 39018098, "venue": "iclr2024", "title": "Improving Convergence and Generalization Using Parameter Symmetries", "status": "Oral", "keywords": "Symmetry;optimization;generalization", "tldr": "We provide theoretical guarantees that teleportation accelerates the convergence rate, show that teleportation can be used to improve generalization, and integrate teleportation into various optimization algorithms such as meta-learning.", "abstract": "In many neural networks, different values of the parameters may result in the same loss value. Parameter space symmetries are loss-invariant transformations that change the model parameters. Teleportation applies such transformations to accelerate optimization. However, the exact mechanism behind this algorithm's success is not well understood. In this paper, we show that teleportation not only speeds up optimization in the short-term, but gives overall faster time to convergence. Additionally, teleporting to minima with different curvatures improves generalization, which suggests a connection between the curvature of the minimum and generalization ability. Finally, we show that integrating teleportation into a wide range of optimization algorithms and optimization-based meta-learning improves convergence. Our results showcase the versatility of teleportation and demonstrate the potential of incorporating symmetry in optimization.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18876"} +{"video_file": "L8UNn7Llt4_39018094.mp4", "openreview_id": "L8UNn7Llt4", "slideslive_id": 39018094, "venue": "iclr2024", "title": "ODICE: Revealing the Mystery of Distribution Correction Estimation via Orthogonal-gradient Update", "status": "Spotlight", "keywords": "offline reinforcement learning;imitation learning;distribution correction estimation", "tldr": "A simple modification to DICE-based method could achieve SOTA performance and great robustness.", "abstract": "In this study, we investigate the DIstribution Correction Estimation (DICE) methods, an important line of work in offline reinforcement learning (RL) and imitation learning (IL). DICE-based methods impose state-action-level behavior constraint, which is an ideal choice for offline learning. However, they typically perform much worse than current state-of-the-art (SOTA) methods that solely use action-level behavior constraint. After revisiting DICE-based methods, we find there exist two gradient terms when learning the value function using true-gradient update: forward gradient (taken on the current state) and backward gradient (taken on the next state). Using forward gradient bears a large similarity to many offline RL methods, and thus can be regarded as applying action-level constraint. However, directly adding the backward gradient may degenerate or cancel out its effect if these two gradients have conflicting directions. 
To resolve this issue, we propose a simple yet effective modification that projects the backward gradient onto the normal plane of the forward gradient, resulting in an orthogonal-gradient update, a new learning rule for DICE-based methods. We conduct thorough theoretical analyses and find that the projected backward gradient brings state-level behavior regularization, which reveals the mystery of DICE-based methods: the value learning objective does try to impose state-action-level constraint, but needs to be used in a corrected way. Through toy examples and extensive experiments on complex offline RL and IL tasks, we demonstrate that DICE-based methods using orthogonal-gradient updates achieve SOTA performance and great robustness.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18871"} +{"video_file": "L9U5MJJleF_39018943.mp4", "openreview_id": "L9U5MJJleF", "slideslive_id": 39018943, "venue": "iclr2024", "title": "Concept Bottleneck Generative Models", "status": "Poster", "keywords": "Interpretability;generative models", "tldr": "We extend Concept bottleneck models to generative models.", "abstract": "We introduce a generative model with an intrinsically interpretable layer---a concept bottleneck layer---that constrains the model to encode human-understandable concepts. The concept bottleneck layer partitions the generative model into three parts: the pre-concept bottleneck portion, the CB layer, and the post-concept bottleneck portion. To train CB generative models, we complement the traditional task-based loss function for training generative models with a concept loss and an orthogonality loss. The CB layer and these loss terms are model agnostic, which we demonstrate by applying the CB layer to three different families of generative models: generative adversarial networks, variational autoencoders, and diffusion models. On multiple datasets across different types of generative models, steering a generative model, with the CB layer, outperforms all baselines---in some cases, it is \\textit{10 times} more effective. In addition, we show how the CB layer can be used to interpret the output of the generative model and debug the model during or post training.", "primary_area": "visualization or interpretation of learned representations", "site": "https://iclr.cc/virtual/2024/poster/18870"} +{"video_file": "LEYUkvdUhq_39018093.mp4", "openreview_id": "LEYUkvdUhq", "slideslive_id": 39018093, "venue": "iclr2024", "title": "ZipIt! Merging Models from Different Tasks without Training", "status": "Poster", "keywords": "Model Merging;Mode Connectivity;Classification;Deep Learning", "tldr": "We merge models trained on completely different tasks, without retraining.", "abstract": "Typical deep visual recognition models are capable of performing the one task they were trained on. In this paper, we tackle the extremely difficult problem of combining distinct models with different initializations, each solving a separate task, into one multi-task model without any additional training. Prior work in model merging permutes one model to the space of the other then averages them together. While this works for models trained on the same task, we find that this fails to account for the differences in models trained on disjoint tasks. Thus, we introduce \"ZipIt!\", a general method for merging two arbitrary models of the same architecture that incorporates two simple strategies. 
First, in order to account for features that aren't shared between models, we expand the model merging problem to allow for merging features within each model by defining a general \"zip\" operation. Second, we add support for partially zipping the models up until a specified layer, naturally creating a multi-head model. We find that these two changes combined account for 20-60% improvement over prior work, making it more feasible to merge models trained on disjoint tasks without retraining.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18869"} +{"video_file": "LSYhE2hLWG_39018089.mp4", "openreview_id": "LSYhE2hLWG", "slideslive_id": 39018089, "venue": "iclr2024", "title": "SineNet: Learning Temporal Dynamics in Time-Dependent Partial Differential Equations", "status": "Poster", "keywords": "Partial differential equations;Physics simulation;Dynamics learning", "tldr": "Multi-stage U-Net for time-evolving PDEs", "abstract": "We consider using deep neural networks to solve time-dependent partial differential equations (PDEs), where multi-scale processing is crucial for modeling complex, time-evolving dynamics. While the U-Net architecture with skip connections is commonly used by prior studies to enable multi-scale processing, our analysis shows that the need for features to evolve across layers results in temporally misaligned features in skip connections, which limits the model\u2019s performance. To address this limitation, we propose SineNet, consisting of multiple sequentially connected U-shaped network blocks, referred to as waves. In SineNet, high-resolution features are evolved progressively through multiple stages, thereby reducing the amount of misalignment within each stage. We furthermore analyze the role of skip connections in enabling both parallel and sequential processing of multi-scale information. Our method is rigorously tested on multiple PDE datasets, including the Navier-Stokes equations and shallow water equations, showcasing the advantages of our proposed approach over conventional U-Nets with a comparable parameter budget. We further demonstrate that increasing the number of waves in SineNet while maintaining the same number of parameters leads to a monotonically improved performance. The results highlight the effectiveness of SineNet and the potential of our approach in advancing the state-of-the-art in neural PDE solver design. Our code is available as part of AIRS (https://github.com/divelab/AIRS).", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18865"} +{"video_file": "LY3ukUANko_39018085.mp4", "openreview_id": "LY3ukUANko", "slideslive_id": 39018085, "venue": "iclr2024", "title": "Zoology: Measuring and Improving Recall in Efficient Language Models", "status": "Poster", "keywords": "nlp;language models;representation learning;in-context learning", "tldr": "We show fundamental differences between attention-based language models and increasingly-popular gated-convolution based ones.", "abstract": "Attention-free language models that combine gating and convolutions are growing in popularity due to their efficiency and increasingly competitive performance. 
To better understand these architectures, we pretrain a suite of 17 attention and gated-convolution language models, finding that SoTA gated-convolution architectures still underperform attention by up to 2.1 perplexity points on the Pile. In fine-grained analysis, we find 82% of the gap is explained by each model's ability to recall information that is previously mentioned in-context, e.g. \"Hakuna Matata means no worries Hakuna Matata it means no\" -> ??. On this task, termed \"associative recall\", we find that attention outperforms gated-convolutions by a large margin: a 70M parameter attention model outperforms a 1.4 billion parameter gated-convolution model on associative recall. This is surprising because prior work shows gated convolutions can perfectly solve synthetic tests for AR capability. To close the gap between synthetics and real language, we develop a new formalization of the task called multi-query associative recall (MQAR) that better reflects actual language. We perform an empirical and theoretical study of MQAR that elucidates differences in the parameter-efficiency of attention and gated-convolution recall. Informed by our analysis, we evaluate simple convolution-attention hybrids and show that hybrids with input-dependent sparse attention patterns can close 97.4% of the gap to attention, while maintaining sub-quadratic scaling. Code is at: https://github.com/HazyResearch/zoology.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18860"}
{"video_file": "LbJqRGNYCf_39018083.mp4", "openreview_id": "LbJqRGNYCf", "slideslive_id": 39018083, "venue": "iclr2024", "title": "JoMA: Demystifying Multilayer Transformers via Joint Dynamics of MLP and Attention", "status": "Poster", "keywords": "multilayer transformer;training dynamics;theoretical analysis;self-attention;interpretability;neural network understanding", "tldr": "We analyze the training dynamics of multilayer transformer, characterizing the role of self-attention, MLP nonlinearity, and the learning procedure of hierarchical structure, if the data follow hierarchical generative models.", "abstract": "We propose Joint MLP/Attention (JoMA) dynamics, a novel mathematical framework to understand the training procedure of multilayer Transformer architectures. This is achieved by integrating out the self-attention layer in Transformers, producing a modified dynamics of MLP layers only. JoMA removes unrealistic assumptions in previous analysis (e.g., lack of residual connection), and predicts that the attention first becomes sparse (to learn salient tokens), then dense (to learn less salient tokens) in the presence of nonlinear activations, while in the linear case, it is consistent with existing works. We leverage JoMA to qualitatively explain how tokens are combined to form hierarchies in multilayer Transformers, when the input tokens are generated by a latent hierarchical generative model. Experiments on models trained from real-world dataset (Wikitext2/Wikitext103) and various pre-trained models (OPT, Pythia) verify our theoretical findings.
The code is at https://github.com/facebookresearch/luckmatters/tree/yuandong3.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18857"}
{"video_file": "LemSSn8htt_39018081.mp4", "openreview_id": "LemSSn8htt", "slideslive_id": 39018081, "venue": "iclr2024", "title": "Delta-AI: Local objectives for amortized inference in sparse graphical models", "status": "Poster", "keywords": "amortized inference;variational inference;graphical models;Markov random fields;generative flow networks;GFlowNets", "tldr": "An objective for amortized samplers of sparse graphical models that achieves efficient credit assignment by matching local conditionals, applied to sampling energy models and training sparse latent variable models.", "abstract": "We present a new algorithm for amortized inference in sparse probabilistic graphical models (PGMs), which we call $\Delta$-amortized inference ($\Delta$-AI). Our approach is based on the observation that when the sampling of variables in a PGM is seen as a sequence of actions taken by an agent, sparsity of the PGM enables local credit assignment in the agent's policy learning objective. This yields a local constraint that can be turned into a local loss in the style of generative flow networks (GFlowNets) that enables off-policy training but avoids the need to instantiate all the random variables for each parameter update, thus speeding up training considerably. The $\Delta$-AI objective matches the conditional distribution of a variable given its Markov blanket in a tractable learned sampler, which has the structure of a Bayesian network, with the same conditional distribution under the target PGM. As such, the trained sampler recovers marginals and conditional distributions of interest and enables inference of partial subsets of variables. We illustrate $\Delta$-AI's effectiveness for sampling from synthetic PGMs and training latent variable models with sparse factor structure. Code: https://github.com/GFNOrg/Delta-AI.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18855"}
{"video_file": "LfmZh91tDI_39018079.mp4", "openreview_id": "LfmZh91tDI", "slideslive_id": 39018079, "venue": "iclr2024", "title": "Layer-wise linear mode connectivity", "status": "Poster", "keywords": "linear mode connectivity;layer-wise;federated averaging", "tldr": "Investigation of the layer-wise structure of the barriers on the line between two parametrizations of deep models", "abstract": "Averaging neural network parameters is an intuitive method for fusing the knowledge of two independent models. It is most prominently used in federated learning. If models are averaged at the end of training, this can only lead to a good performing model if the loss surface of interest is very particular, i.e., the loss in the midpoint between the two models needs to be sufficiently low. This is impossible to guarantee for the non-convex losses of state-of-the-art networks. For averaging models trained on vastly different datasets, it was proposed to average only the parameters of particular layers or combinations of layers, resulting in better performing models. To get a better understanding of the effect of layer-wise averaging, we analyse the performance of the models that result from averaging single layers, or groups of layers.
Based on our empirical and theoretical investigation, we introduce a novel notion of the layer-wise linear connectivity, and show that deep networks do not have layer-wise barriers between them.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18853"} +{"video_file": "LjeqMvQpen_39017480.mp4", "openreview_id": "LjeqMvQpen", "slideslive_id": 39017480, "venue": "iclr2024", "title": "Transformer Fusion with Optimal Transport", "status": "Poster", "keywords": "Fusion;Transformers;Model Merging", "tldr": "Fusing Transformers with Optimal Transport", "abstract": "Fusion is a technique for merging multiple independently-trained neural networks in order to combine their capabilities. Past attempts have been restricted to the case of fully-connected, convolutional, and residual networks. This paper presents a systematic approach for fusing two or more transformer-based networks exploiting Optimal Transport to (soft-)align the various architectural components. We flesh out an abstraction for layer alignment, that can generalize to arbitrary architectures -- in principle -- and we apply this to the key ingredients of Transformers such as multi-head self-attention, layer-normalization, and residual connections, and we discuss how to handle them via various ablation studies. Furthermore, our method allows the fusion of models of different sizes (heterogeneous fusion), providing a new and efficient way to compress Transformers. The proposed approach is evaluated on both image classification tasks via Vision Transformer and natural language modeling tasks using BERT. Our approach consistently outperforms vanilla fusion, and, after a surprisingly short finetuning, also outperforms the individual converged parent models. In our analysis, we uncover intriguing insights about the significant role of soft alignment in the case of Transformers. Our results showcase the potential of fusing multiple Transformers, thus compounding their expertise, in the budding paradigm of model fusion and recombination. Code is available at https://github.com/graldij/transformer-fusion.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18852"} +{"video_file": "LqRGsGWOTX_39018076.mp4", "openreview_id": "LqRGsGWOTX", "slideslive_id": 39018076, "venue": "iclr2024", "title": "Bilevel Optimization under Unbounded Smoothness: A New Algorithm and Convergence Analysis", "status": "Spotlight", "keywords": "Bilevel Optimization;Unbounded Smoothness;Deep Learning", "tldr": "This paper design and analyze a new algorithm for bilevel optimization under unbounded smoothness, with empirical validation on deep learning tasks.", "abstract": "Bilevel optimization is an important formulation for many machine learning problems, such as meta-learning and hyperparameter optimization. Current bilevel optimization algorithms assume that the gradient of the upper-level function is Lipschitz (i.e., the upper-level function has a bounded smoothness parameter). However, recent studies reveal that certain neural networks such as recurrent neural networks (RNNs) and long-short-term memory networks (LSTMs) exhibit potential unbounded smoothness, rendering conventional bilevel optimization algorithms unsuitable for these neural networks. In this paper, we design a new bilevel optimization algorithm, namely BO-REP, to address this challenge. 
This algorithm updates the upper-level variable using normalized momentum and incorporates two novel techniques for updating the lower-level variable: \textit{initialization refinement} and \textit{periodic updates}. Specifically, once the upper-level variable is initialized, a subroutine is invoked to obtain a refined estimate of the corresponding optimal lower-level variable, and the lower-level variable is updated only after every specific period instead of each iteration. When the upper-level problem is nonconvex and unbounded smooth, and the lower-level problem is strongly convex, we prove that our algorithm requires $\tilde{O}(1/\epsilon^4)$\footnote{Here $\tilde{O}(\cdot)$ compresses logarithmic factors of $1/\epsilon$ and $1/\delta$, where $\delta \in (0,1)$ denotes the failure probability.} iterations to find an $\epsilon$-stationary point in the stochastic setting, where each iteration involves calling a stochastic gradient or Hessian-vector product oracle. Notably, this result matches the state-of-the-art complexity results under the bounded smoothness setting and without mean-squared smoothness of the stochastic gradient, up to logarithmic factors. Our proof relies on novel technical lemmas for the periodically updated lower-level variable, which are of independent interest. Our experiments on hyper-representation learning, hyperparameter optimization, and data hyper-cleaning for text classification tasks demonstrate the effectiveness of our proposed algorithm. The code is available at https://github.com/MingruiLiu-ML-Lab/Bilevel-Optimization-under-Unbounded-Smoothness.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18848"}
{"video_file": "Lvf7GnaLru_39019171.mp4", "openreview_id": "Lvf7GnaLru", "slideslive_id": 39019171, "venue": "iclr2024", "title": "Unraveling the Key Components of OOD Generalization via Diversification", "status": "Poster", "keywords": "Algorithm Design;Diversity;OOD Generalization;Spurious Correlation;Understanding Neural Networks", "tldr": "We distill the critical design factors of current state-of-the-art methods (multi-hypotheses/diversification methods) for spurious correlation situations.", "abstract": "Supervised learning datasets may contain multiple cues that explain the training set equally well, i.e., learning any of them would lead to the correct predictions on the training data. However, many of them can be spurious, i.e., lose their predictive power under a distribution shift and consequently fail to generalize to out-of-distribution (OOD) data. Recently developed \"diversification\" methods (Lee et al., 2023; Pagliardini et al., 2023) approach this problem by finding multiple diverse hypotheses that rely on different features. This paper aims to study this class of methods and identify the key components contributing to their OOD generalization abilities.\nWe show that (1) diversification methods are highly sensitive to the distribution of the unlabeled data used for diversification and can underperform significantly when away from a method-specific sweet spot. (2) Diversification alone is insufficient for OOD generalization. The choice of the used learning algorithm, e.g., the model's architecture and pretraining, is crucial. In standard experiments (classification on Waterbirds and Office-Home datasets), using the second-best choice leads to an up to 20% absolute drop in accuracy. (3) The optimal choice of learning algorithm depends on the unlabeled data and vice versa i.e.
they are co-dependent. (4) Finally, we show that, in practice, the above pitfalls cannot be alleviated by increasing the number of diverse hypotheses, the major feature of diversification methods.\nThese findings provide a clearer understanding of the critical design factors influencing the OOD generalization abilities of diversification methods. They can guide practitioners in how to use the existing methods best and guide researchers in developing new, better ones.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18844"} +{"video_file": "MCl0TLboP1_39018068.mp4", "openreview_id": "MCl0TLboP1", "slideslive_id": 39018068, "venue": "iclr2024", "title": "Improving Offline RL by Blending Heuristics", "status": "Spotlight", "keywords": "offline RL;heuristic;RL;MDP;sequential decision-making", "tldr": "A method for improving many existing offline RL algorithms' performance by blending Monte-Carlo-based heuristic state value estimates into these algorithms' Bellman operators.", "abstract": "We propose Heuristic Blending (HUBL), a simple performance-improving technique for a broad class of offline RL algorithms based on value bootstrapping. HUBL modifies the Bellman operators used in these algorithms, partially replacing the bootstrapped values with heuristic ones that are estimated with Monte-Carlo returns. For trajectories with higher returns, HUBL relies more on the heuristic values and less on bootstrapping; otherwise, it leans more heavily on bootstrapping. HUBL is very easy to combine with many existing offline RL implementations by relabeling the offline datasets with adjusted rewards and discount factors. We derive a theory that explains HUBL's effect on offline RL as reducing offline RL's complexity and thus increasing its finite-sample performance. Furthermore, we empirically demonstrate that HUBL consistently improves the policy quality of four state-of-the-art bootstrapping-based offline RL algorithms (ATAC, CQL, TD3+BC, and IQL), by 9% on average over 27 datasets of the D4RL and Meta-World benchmarks.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18837"} +{"video_file": "MEGQGNUfPx_39018067.mp4", "openreview_id": "MEGQGNUfPx", "slideslive_id": 39018067, "venue": "iclr2024", "title": "The Effectiveness of Random Forgetting for Robust Generalization", "status": "Poster", "keywords": "Adversarial training;robust overfitting;forgetting;reinitialization;robust accuracy;generalization", "tldr": "A method alternates between the forgetting phase, which randomly forgets a subset of weights and regulates the model's information through weight reinitialization, and the relearning phase, which mitigate robust overfitting.", "abstract": "Deep neural networks are susceptible to adversarial attacks, which can compromise their performance and accuracy. Adversarial Training (AT) has emerged as a popular approach for protecting neural networks against such attacks. However, a key challenge of AT is robust overfitting, where the network's robust performance on test data deteriorates with further training, thus hindering generalization. Motivated by the concept of active forgetting in the brain, we introduce a novel learning paradigm called \"Forget to Mitigate Overfitting (FOMO)\". 
FOMO alternates between the forgetting phase, which randomly forgets a subset of weights and regulates the model's information through weight reinitialization, and the relearning phase, which emphasizes learning generalizable features. Our experiments on benchmark datasets and adversarial attacks show that FOMO alleviates robust overfitting by significantly reducing the gap between the best and last robust test accuracy while improving the state-of-the-art robustness. Furthermore, FOMO provides a better trade-off between the standard and robust accuracy outperforming baseline adversarial methods. Finally, our framework is robust to AutoAttacks and increases generalization in many real-world scenarios.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18836"} +{"video_file": "MFCjgEOLJT_39018910.mp4", "openreview_id": "MFCjgEOLJT", "slideslive_id": 39018910, "venue": "iclr2024", "title": "Learning interpretable control inputs and dynamics underlying animal locomotion", "status": "Poster", "keywords": "computational neuroscience;interpretable dynamics;motor control;animal behavior;dynamical systems;system identification;unsupervised learning;zebrafish", "tldr": "We proposed a novel approach to modeling time series of behavior observations by combining two existing methods in order to learn a reduced and interpretable model of behavioral dynamics", "abstract": "A central objective in neuroscience is to understand how the brain orchestrates movement. Recent advances in automated tracking technologies have made it possible to document behavior with unprecedented temporal resolution and scale, generating rich datasets which can be exploited to gain insights into the neural control of movement. One common approach is to identify stereotypical motor primitives using cluster analysis. However, this categorical description can limit our ability to model the effect of more continuous control schemes. Here we take a control theoretic approach to behavioral modeling and argue that movements can be understood as the output of a controlled dynamical system. Previously, models of movement dynamics, trained solely on behavioral data, have been effective in reproducing observed features of neural activity. These models addressed specific scenarios where animals were trained to execute particular movements upon receiving a prompt. In this study, we extend this approach to analyze the full natural locomotor repertoire of an animal: the zebrafish larva. Our findings demonstrate that this repertoire can be effectively generated through a sparse control signal driving a latent Recurrent Neural Network (RNN). Our model's learned latent space preserves key kinematic features and disentangles different categories of movements. To further interpret the latent dynamics, we used balanced model reduction to yield a simplified model. 
Collectively, our methods serve as a case study for interpretable system identification, and offer a novel framework for understanding neural activity in relation to movement.", "primary_area": "applications to neuroscience & cognitive science", "site": "https://iclr.cc/virtual/2024/poster/18835"}
{"video_file": "MIEnYtlGyv_39018066.mp4", "openreview_id": "MIEnYtlGyv", "slideslive_id": 39018066, "venue": "iclr2024", "title": "Symphony: Symmetry-Equivariant Point-Centered Spherical Harmonics for 3D Molecule Generation", "status": "Poster", "keywords": "molecule;spherical harmonics;equivariant;symmetry;generation", "tldr": "We propose a new method for generating molecules in a symmetry-preserving manner using spherical harmonic projections.", "abstract": "We present Symphony, an $E(3)$ equivariant autoregressive generative model for 3D molecular geometries that iteratively builds a molecule from molecular fragments. Existing autoregressive models such as G-SchNet and G-SphereNet for molecules utilize rotationally invariant features to respect the 3D symmetries of molecules. In contrast, Symphony uses message-passing with higher-degree $E(3)$-equivariant features. This allows a novel representation of probability distributions via spherical harmonic signals to efficiently model the 3D geometry of molecules. We show that Symphony is able to accurately generate small molecules from the QM9 dataset, outperforming existing autoregressive models and approaching the performance of diffusion models. Our code is available at https://github.com/atomicarchitects/symphony.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18833"}
{"video_file": "MLBdiWu4Fw_39018851.mp4", "openreview_id": "MLBdiWu4Fw", "slideslive_id": 39018851, "venue": "iclr2024", "title": "InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation", "status": "Spotlight", "keywords": "video-language dataset;video understanding;video generation;multimodal understanding;action recognition;video retrieval", "tldr": "This paper introduces InternVid, a large-scale video-centric multimodal dataset that enables learning powerful and transferable video-text representations for multimodal understanding and generation.", "abstract": "This paper introduces InternVid, a large-scale video-centric multimodal dataset that enables learning powerful and transferable video-text representations for multimodal understanding and generation. InternVid contains over 7 million videos lasting nearly 760K hours, yielding 234M video clips accompanied by detailed descriptions of total 4.1B words. Our core contribution is to develop a scalable approach to autonomously build a high-quality video-text dataset with large language models (LLM), thereby showcasing its efficacy in learning video-language representation at scale. Specifically, we utilize a multi-scale approach to generate video-related descriptions. Furthermore, we introduce ViCLIP, a video-text representation learning model based on ViT-L. Learned on InternVid via contrastive learning, this model demonstrates leading zero-shot action recognition and competitive video retrieval performance. Beyond basic video understanding tasks like recognition and retrieval, our dataset and model have broad applications.
They are particularly beneficial for generating interleaved video-text data for learning a video-centric dialogue system, advancing video-to-text and text-to-video generation research. These proposed resources provide a tool for researchers and practitioners interested in multimodal video understanding and generation.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/18829"} +{"video_file": "MN3yH2ovHb_39018062.mp4", "openreview_id": "MN3yH2ovHb", "slideslive_id": 39018062, "venue": "iclr2024", "title": "SyncDreamer: Generating Multiview-consistent Images from a Single-view Image", "status": "Spotlight", "keywords": "diffusion model; single-view reconstruction; 3D generation; generative models", "tldr": "SyncDreamer is able to generate multiview-consistent images for single-view 3D reconstruction of arbitary objects.", "abstract": "In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent images from a single-view image. Using pretrained large-scale 2D diffusion models, recent work Zero123 demonstrates the ability to generate plausible novel views from a single-view image of an object. However, maintaining consistency in geometry and colors for the generated images remains a challenge. To address this issue, we propose a synchronized multiview diffusion model that models the joint probability distribution of multiview images, enabling the generation of multiview-consistent images in a single reverse process. SyncDreamer synchronizes the intermediate states of all the generated images at every step of the reverse process through a 3D-aware feature attention mechanism that correlates the corresponding features across different views. Experiments show that SyncDreamer generates images with high consistency across different views, thus making it well-suited for various 3D generation tasks such as novel-view-synthesis, text-to-3D, and image-to-3D. Project page: https://liuyuan-pal.github.io/SyncDreamer/.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18828"} +{"video_file": "MNyOI3C7YB_39018060.mp4", "openreview_id": "MNyOI3C7YB", "slideslive_id": 39018060, "venue": "iclr2024", "title": "SEABO: A Simple Search-Based Method for Offline Imitation Learning", "status": "Poster", "keywords": "offline imitation learning; reward learning; reinforcement learning", "tldr": "We propose a simple yet effective search-based method for offline imitation learning.", "abstract": "Offline reinforcement learning (RL) has attracted much attention due to its ability in learning from static offline datasets and eliminating the need of interacting with the environment. Nevertheless, the success of offline RL relies heavily on the offline transitions annotated with reward labels. In practice, we often need to hand-craft the reward function, which is sometimes difficult, labor-intensive, or inefficient. To tackle this challenge, we set our focus on the offline imitation learning (IL) setting, and aim at getting a reward function based on the expert data and unlabeled data. To that end, we propose a simple yet effective search-based offline IL method, tagged SEABO. SEABO allocates a larger reward to the transition that is close to its closest neighbor in the expert demonstration, and a smaller reward otherwise, all in an unsupervised learning manner. 
Experimental results on a variety of D4RL datasets indicate that SEABO can achieve competitive performance to offline RL algorithms with ground-truth rewards, given only a single expert trajectory, and can outperform prior reward learning and offline IL methods across many tasks. Moreover, we demonstrate that SEABO also works well if the expert demonstrations contain only observations. Our code is publicly available at https://github.com/dmksjfl/SEABO.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18826"} +{"video_file": "MSe8YFbhUE_39018056.mp4", "openreview_id": "MSe8YFbhUE", "slideslive_id": 39018056, "venue": "iclr2024", "title": "DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization", "status": "Spotlight", "keywords": "Visual RL; Dormant Ratio", "tldr": "DrM, a visual RL algorithm, minimizes the dormant ratio to guide exploration-exploitation trade-offs, achieving significant improvements in sample efficiency and asymptotic performance across diverse domains.", "abstract": "Visual reinforcement learning (RL) has shown promise in continuous control tasks. Despite its progress, current algorithms are still unsatisfactory in virtually every aspect of the performance such as sample efficiency, asymptotic performance, and their robustness to the choice of random seeds. In this paper, we identify a major shortcoming in existing visual RL methods that is the agents often exhibit sustained inactivity during early training, thereby limiting their ability to explore effectively. Expanding upon this crucial observation, we additionally unveil a significant correlation between the agents' inclination towards motorically inactive exploration and the absence of neuronal activity within their policy networks. To quantify this inactivity, we adopt dormant ratio as a metric to measure inactivity in the RL agent's network. Empirically, we also recognize that the dormant ratio can act as a standalone indicator of an agent's activity level, regardless of the received reward signals. Leveraging the aforementioned insights, we introduce DrM, a method that uses three core mechanisms to guide agents' exploration-exploitation trade-offs by actively minimizing the dormant ratio. Experiments demonstrate that DrM achieves significant improvements in sample efficiency and asymptotic performance with no broken seeds (76 seeds in total) across three continuous control benchmark environments, including DeepMind Control Suite, MetaWorld, and Adroit. Most importantly, DrM is the first model-free algorithm that consistently solves tasks in both the Dog and Manipulator domains from the DeepMind Control Suite as well as three dexterous hand manipulation tasks without demonstrations in Adroit, all based on pixel observations.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18821"} +{"video_file": "MY0qlcFcUg_39018054.mp4", "openreview_id": "MY0qlcFcUg", "slideslive_id": 39018054, "venue": "iclr2024", "title": "Denoising Task Routing for Diffusion Models", "status": "Poster", "keywords": "Diffusion Model Architecture;Multi-Task Learning (MTL);Diffusion Models", "tldr": "Simple add-on strategy improves diffusion model architectures by explicitly routing denoising tasks in diffusion models.", "abstract": "Diffusion models generate highly realistic images by learning a multi-step denoising process, naturally embodying the principles of multi-task learning (MTL). 
Despite the inherent connection between diffusion models and MTL, there remains an unexplored area in designing neural architectures that explicitly incorporate MTL into the framework of diffusion models. In this paper, we present Denoising Task Routing (DTR), a simple add-on strategy for existing diffusion model architectures to establish distinct information pathways for individual tasks within a single architecture by selectively activating subsets of channels in the model. What makes DTR particularly compelling is its seamless integration of prior knowledge of denoising tasks into the framework: (1) Task Affinity: DTR activates similar channels for tasks at adjacent timesteps and shifts activated channels as sliding windows through timesteps, capitalizing on the inherent strong affinity between tasks at adjacent timesteps. (2) Task Weights: During the early stages (higher timesteps) of the denoising process, DTR assigns a greater number of task-specific channels, leveraging the insight that diffusion models prioritize reconstructing global structure and perceptually rich contents in earlier stages, and focus on simple noise removal in later stages. Our experiments reveal that DTR not only consistently boosts diffusion models' performance across different evaluation protocols without adding extra parameters but also accelerates training convergence. Finally, we show the complementarity between our architectural approach and existing MTL optimization techniques, providing a more complete view of MTL in the context of diffusion training. Significantly, by leveraging this complementarity, we attain matched performance of DiT-XL using the smaller DiT-L with a reduction in training iterations from 7M to 2M. Our project page is available at https://byeongjun-park.github.io/DTR/", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18818"} +{"video_file": "MbfAK4s61A_39018856.mp4", "openreview_id": "MbfAK4s61A", "slideslive_id": 39018856, "venue": "iclr2024", "title": "GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher", "status": "Poster", "keywords": "Safety Alignment;Jailbreak", "tldr": "We propose a novel framework CipherChat to systematically examine the generalizability of safety alignment to non-natural languages -- ciphers.", "abstract": "Safety lies at the core of the development of Large Language Models (LLMs). There is ample work on aligning LLMs with human ethics and preferences, including data filtering in pretraining, supervised fine-tuning, reinforcement learning from human feedback, red teaming, etc. In this study, we discover that chat in cipher can bypass the safety alignment techniques of LLMs, which are mainly conducted in natural languages. We propose a novel framework CipherChat to systematically examine the generalizability of safety alignment to non-natural languages -- ciphers. CipherChat enables humans to chat with LLMs through cipher prompts topped with system role descriptions and few-shot enciphered demonstrations. We use CipherChat to assess state-of-the-art LLMs, including ChatGPT and GPT-4 for different representative human ciphers across 11 safety domains in both English and Chinese. Experimental results show that certain ciphers succeed almost 100% of the time in bypassing the safety alignment of GPT-4 in several safety domains, demonstrating the necessity of developing safety alignment for non-natural languages. 
Notably, we identify that LLMs seem to have a ''secret cipher'', and propose a novel SelfCipher that uses only role play and several unsafe demonstrations in natural language to evoke this capability. SelfCipher surprisingly outperforms existing human ciphers in almost all cases.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/18817"} +{"video_file": "MiRPBbQNHv_39018048.mp4", "openreview_id": "MiRPBbQNHv", "slideslive_id": 39018048, "venue": "iclr2024", "title": "COCO-Periph: Bridging the Gap Between Human and Machine Perception in the Periphery", "status": "Poster", "keywords": "peripheral vision;object detection;dataset;foveation;psychophysics", "tldr": "comparing DNNs to human peripheral vision", "abstract": "Evaluating deep neural networks (DNNs) as models of human perception has given rich insights into both human visual processing and representational properties of DNNs. We extend this work by analyzing how well DNNs perform compared to humans when constrained by peripheral vision -- which limits human performance on a variety of tasks, but also benefits the visual system significantly. We evaluate this by (1) modifying the Texture Tiling Model (TTM), a well tested model of peripheral vision to be more flexibly used with DNNs, (2) generating a large dataset which we call COCO-Periph that contains images transformed to capture the information available in human peripheral vision, and (3) comparing DNNs to humans at peripheral object detection using a psychophysics experiment. Our results show that common DNNs underperform at object detection compared to humans when simulating peripheral vision with TTM. Training on COCO-Periph begins to reduce the gap between human and DNN performance and leads to small increases in corruption robustness, but DNNs still struggle to capture human-like sensitivity to peripheral clutter. Our work brings us closer to accurately modeling human vision, and paves the way for DNNs to mimic and sometimes benefit from properties of human visual processing.", "primary_area": "applications to neuroscience & cognitive science", "site": "https://iclr.cc/virtual/2024/poster/18811"} +{"video_file": "MrYiwlDRQO_39018046.mp4", "openreview_id": "MrYiwlDRQO", "slideslive_id": 39018046, "venue": "iclr2024", "title": "PeFLL: Personalized Federated Learning by Learning to Learn", "status": "Poster", "keywords": "Personalized Federated Learning;Learning-to-Learn", "tldr": "PeFLL is a hypernetwork-based approach for personalized federated learning. Based on a learning-to-learn approach it efficiently generates accurate individual models for current as well as future clients.", "abstract": "We present PeFLL, a new personalized federated learning algorithm that improves over the state-of-the-art in three aspects: 1) it produces more accurate models, especially in the low-data regime, and not only for clients present during its training phase, but also for any that may emerge in the future; 2) it reduces the amount of on-client computation and client-server communication by providing future clients with ready-to-use personalized models that require no additional finetuning or optimization; 3) it comes with theoretical guarantees that establish generalization from the observed clients to future ones. At the core of PeFLL lies a learning-to-learn approach that jointly trains an embedding network and a hypernetwork. 
The embedding network is used to represent clients in a latent descriptor space in a way that reflects their similarity to each other. The hypernetwork takes as input such descriptors and outputs the parameters of fully personalized client models. In combination, both networks constitute a learning algorithm that achieves state-of-the-art performance in several personalized federated learning benchmarks.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/18806"}
{"video_file": "My7lkRNnL9_39017203.mp4", "openreview_id": "My7lkRNnL9", "slideslive_id": 39017203, "venue": "iclr2024", "title": "Forward Learning with Top-Down Feedback: Empirical and Analytical Characterization", "status": "Poster", "keywords": "Forward-only learning;Biologically inspired learning;Artificial neural networks;Analytical characterization", "tldr": "We discuss \"forward-only\" algorithms, provide an analytical characterization and test strategies to improve their performance.", "abstract": "\"Forward-only\" algorithms, which train neural networks while avoiding a backward pass, have recently gained attention as a way of solving the biologically unrealistic aspects of backpropagation. Here, we first address compelling challenges related to the \"forward-only\" rules, which include reducing the performance gap with backpropagation and providing an analytical understanding of their dynamics. To this end, we show that the forward-only algorithm with top-down feedback is well-approximated by an \"adaptive-feedback-alignment\" algorithm, and we analytically track its performance during learning in a prototype high-dimensional setting. Then, we compare different versions of forward-only algorithms, focusing on the Forward-Forward and PEPITA frameworks, and we show that they share the same learning principles. Overall, our work unveils the connections between three key neuro-inspired learning rules, providing a link between \"forward-only\" algorithms, i.e., Forward-Forward and PEPITA, and an approximation of backpropagation, i.e., Feedback Alignment.", "primary_area": "applications to neuroscience & cognitive science", "site": "https://iclr.cc/virtual/2024/poster/18804"}
{"video_file": "N0gT4A0jNV_39017692.mp4", "openreview_id": "N0gT4A0jNV", "slideslive_id": 39017692, "venue": "iclr2024", "title": "Low Rank Matrix Completion via Robust Alternating Minimization in Nearly Linear Time", "status": "Poster", "keywords": "matrix completion", "tldr": "We provide a framework for matrix completion using fast alternating minimization that runs in nearly linear time in terms of verifying the solution", "abstract": "Given a matrix $M \in \mathbb{R}^{m \times n}$, the low rank matrix completion problem asks us to find a rank-$k$ approximation of $M$ as $UV^\top$ for $U \in \mathbb{R}^{m \times k}$ and $V \in \mathbb{R}^{n \times k}$ by only observing a few entries specified by a set of entries $\Omega \subseteq [m] \times [n]$. In particular, we examine an approach that is widely used in practice --- the alternating minimization framework. Jain, Netrapalli and Sanghavi showed that if $M$ has incoherent rows and columns, then alternating minimization provably recovers the matrix $M$ by observing a nearly linear in $n$ number of entries. While the sample complexity has been subsequently improved, alternating minimization steps are required to be computed exactly.
This hinders the development of more efficient algorithms and fails to depict the practical implementation of alternating minimization, where the updates are usually performed approximately in favor of efficiency.\nIn this paper, we take a major step towards a more efficient and error-robust alternating minimization framework. To this end, we develop an analytical framework for alternating minimization that can tolerate a moderate amount of errors caused by approximate updates. Moreover, our algorithm runs in time $\tilde{O}(|\Omega| k)$, which is nearly linear in the time to verify the solution while preserving the sample complexity. This improves upon all prior known alternating minimization approaches which require $\tilde{O}(|\Omega| k^2)$ time.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18801"}
{"video_file": "N0nTk5BSvO_39018045.mp4", "openreview_id": "N0nTk5BSvO", "slideslive_id": 39018045, "venue": "iclr2024", "title": "TESTAM: A Time-Enhanced Spatio-Temporal Attention Model with Mixture of Experts", "status": "Poster", "keywords": "Traffic Prediction;Deep Learning;Spatio-Temporal data modeling", "tldr": "We propose a novel mixture-of-experts model named TESTAM that enables in-situ modeling of the traffic data", "abstract": "Accurate traffic forecasting is challenging due to the complex dependency on road networks, various types of roads, and the abrupt speed change due to the events. Recent works mainly focus on dynamic spatial modeling with adaptive graph embedding or graph attention having less consideration for temporal characteristics and in-situ modeling. In this paper, we propose a novel deep learning model named TESTAM, which individually models recurring and non-recurring traffic patterns by a mixture-of-experts model with three experts on temporal modeling, spatio-temporal modeling with static graph, and dynamic spatio-temporal dependency modeling with dynamic graph. By introducing different experts and properly routing them, TESTAM could better model various circumstances, including spatially isolated nodes, highly related nodes, and recurring and non-recurring events. For the proper routing, we reformulate a gating problem into a classification problem with pseudo labels. Experimental results on three public traffic network datasets, METR-LA, PEMS-BAY, and EXPY-TKY, demonstrate that TESTAM achieves a better indication and modeling of recurring and non-recurring traffic.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18800"}
{"video_file": "N23A4ybMJr_39018043.mp4", "openreview_id": "N23A4ybMJr", "slideslive_id": 39018043, "venue": "iclr2024", "title": "Win-Win: Training High-Resolution Vision Transformers from Two Windows", "status": "Poster", "keywords": "Vision transformers;High resolution;Dense tasks;Optical flow", "tldr": "WinWin enables to train vanilla ViTs for high-resolution dense pixelwise tasks at a fraction of the (quadratic) cost", "abstract": "Transformers have become the standard in state-of-the-art vision architectures, achieving impressive performance on both image-level and dense pixelwise tasks. However, training vision transformers for high-resolution pixelwise tasks has a prohibitive cost. Typical solutions boil down to hierarchical architectures, fast and approximate attention, or training on low-resolution crops.
This latter solution does not constrain architectural choices, but it leads to a clear performance drop when testing at resolutions significantly higher than that used for training, thus requiring ad-hoc and slow post-processing schemes. In this paper, we propose a novel strategy for efficient training and inference of high-resolution vision transformers. The key principle is to mask out most of the high-resolution inputs during training, keeping only N random windows. This allows the model to learn local interactions between tokens inside each window, and global interactions between tokens from different windows. As a result, the model can directly process the high-resolution input at test time without any special trick. We show that this strategy is effective when using relative positional embedding such as rotary embeddings. It is 4 times faster to train than a full-resolution network, and it is straightforward to use at test time compared to existing approaches. We apply this strategy to three dense prediction tasks with high-resolution data. First, we show on the task of semantic segmentation that a simple setting with 2 windows performs best, hence the name of our method: Win-Win. Second, we confirm this result on the task of monocular depth prediction. Third, to demonstrate the generality of our contribution, we further extend it to the binocular task of optical flow, reaching state-of-the-art performance on the Spring benchmark that contains Full-HD images with an order of magnitude faster inference than the best competitor", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18799"} +{"video_file": "NLevOah0CJ_39018036.mp4", "openreview_id": "NLevOah0CJ", "slideslive_id": 39018036, "venue": "iclr2024", "title": "Hindsight PRIORs for Reward Learning from Human Preferences", "status": "Poster", "keywords": "preference based reinforcement learning;world models;return redistribution", "tldr": "Presents a method to address credit assignment problem in preference-based reinforcement learning by guiding rewards to key states according to relative state importance.", "abstract": "Preference based Reinforcement Learning (PbRL) removes the need to hand specify a reward function by learning one from preference feedback over policy behaviors. Current approaches to PbRL do not address the credit assignment problem inherent in determining which parts of a behavior most contributed to a preference resulting in data intensive approaches and subpar reward models. We address such limitations by introducing a credit assignment strategy (PRIOR) that uses a forward dynamics world model to approximate state importance within a trajectory and then guides rewards to be proportional to state importance through an auxiliary predicted return redistribution objective. Incorporating state importance into reward learning improves the speed of policy learning, overall policy performance, and reward recovery on both locomotion and manipulation tasks. For example, PRIOR achieves 80% success rate with half the amount of data compared to baselines. 
The performance gains and our ablations demonstrate the benefits even a simple credit assignment strategy can have on reward learning and that state importance in forward dynamics prediction is a strong proxy for a state's contribution to a preference decision.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18790"}
{"video_file": "NgaLU2fP5D_39018028.mp4", "openreview_id": "NgaLU2fP5D", "slideslive_id": 39018028, "venue": "iclr2024", "title": "Predictive, scalable and interpretable knowledge tracing on structured domains", "status": "Spotlight", "keywords": "knowledge tracing;interpretable representations;knowledge graphs;probabilistic models;variational inference;continual learning", "tldr": "Performant and scalable knowledge tracing with the interpretability that personalized education needs.", "abstract": "Intelligent tutoring systems optimize the selection and timing of learning materials to enhance understanding and long-term retention. This requires estimates of both the learner's progress (\"knowledge tracing\"; KT), and the prerequisite structure of the learning domain (\"knowledge mapping\"). While recent deep learning models achieve high KT accuracy, they do so at the expense of the interpretability of psychologically-inspired models. In this work, we present a solution to this trade-off. PSI-KT is a hierarchical generative approach that explicitly models how both individual cognitive traits and the prerequisite structure of knowledge influence learning dynamics, thus achieving interpretability by design. Moreover, by using scalable Bayesian inference, PSI-KT targets the real-world need for efficient personalization even with a growing body of learners and interaction data. Evaluated on three datasets from online learning platforms, PSI-KT achieves superior multi-step predictive accuracy and scalable inference in continual-learning settings, all while providing interpretable representations of learner-specific traits and the prerequisite structure of knowledge that causally supports learning. In sum, predictive, scalable and interpretable knowledge tracing with solid knowledge mapping lays a key foundation for effective personalized learning to make education accessible to a broad, global audience.", "primary_area": "applications to neuroscience & cognitive science", "site": "https://iclr.cc/virtual/2024/poster/18778"}
{"video_file": "NjNfLdxr3A_39019059.mp4", "openreview_id": "NjNfLdxr3A", "slideslive_id": 39019059, "venue": "iclr2024", "title": "VeRA: Vector-based Random Matrix Adaptation", "status": "Poster", "keywords": "Parameter-efficient fine-tuning;Transfer learning;Low-rank;NLP", "tldr": "3 good", "abstract": "Low-rank adaptation (LoRA) is a popular method that reduces the number of trainable parameters when finetuning large language models, but still faces acute storage challenges when scaling to even larger models or deploying numerous per-user or per-task adapted models. In this work, we present Vector-based Random Matrix Adaptation (VeRA), which significantly reduces the number of trainable parameters compared to LoRA, yet maintains the same performance. It achieves this by using a single pair of low-rank matrices shared across all layers and learning small scaling vectors instead. We demonstrate its effectiveness on the GLUE and E2E benchmarks, image classification tasks, and show its application in instruction-tuning of 7B and 13B language models.
Website: https://dkopi.github.io/vera", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/18775"} +{"video_file": "NkmJotfL42_39018659.mp4", "openreview_id": "NkmJotfL42", "slideslive_id": 39018659, "venue": "iclr2024", "title": "Fantastic Generalization Measures are Nowhere to be Found", "status": "Poster", "keywords": "overparametrization;generalization", "tldr": "We uncover some of the reasons behind the failure of many generalization measures to estimate the performance of overparametrized models.", "abstract": "We study the notion of a generalization bound being uniformly tight, meaning that the difference between the bound and the population loss is small for all learning algorithms and all population distributions. Numerous generalization bounds have been proposed in the literature as potential explanations for the ability of neural networks to generalize in the overparameterized setting. However, in their paper \"Fantastic Generalization Measures and Where to Find Them,\" Jiang et al. (2020) examine more than a dozen generalization bounds, and show empirically that none of them are uniformly tight. This raises the question of whether uniformly-tight generalization bounds are at all possible in the overparameterized setting. We consider two types of generalization bounds: (1) bounds that may depend on the training set and the learned hypothesis (e.g., margin bounds). We prove mathematically that no such bound can be uniformly tight in the overparameterized setting; (2) bounds that may in addition also depend on the learning algorithm (e.g., stability bounds). For these bounds, we show a trade-off between the algorithm's performance and the bound's tightness. Namely, if the algorithm achieves good accuracy on certain distributions, then no generalization bound can be uniformly tight for it in the overparameterized setting. We explain how these formal results can, in our view, inform research on generalization bounds for neural networks, while stressing that other interpretations of these results are also possible.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/18773"} +{"video_file": "NnyD0Rjx2B_39017038.mp4", "openreview_id": "NnyD0Rjx2B", "slideslive_id": 39017038, "venue": "iclr2024", "title": "fairret: a Framework for Differentiable Fairness Regularization Terms", "status": "Poster", "keywords": "fairness;statistics;differentiation;regularization;classification", "tldr": "We represent fairness notions as an equality between statistics with a general (linear-fractional) definition. We propose differentiable regularization terms to then pursue these fairness notions in a modular, simple pipeline.", "abstract": "Current tools for machine learning fairness only admit a limited range of fairness definitions and have seen little integration with automatic differentiation libraries, despite the central role these libraries play in modern machine learning pipelines.\nWe introduce a framework of fairness regularization terms (fairret) which quantify bias as modular objectives that are easily integrated in automatic differentiation pipelines. By employing a general definition of fairness in terms of linear-fractional statistics, a wide class of fairrets can be computed efficiently. Experiments show the behavior of their gradients and their utility in enforcing fairness with minimal loss of predictive power compared to baselines. 
Our contribution includes a PyTorch implementation of the fairret framework.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/18770"} +{"video_file": "Nq45xeghcL_39018022.mp4", "openreview_id": "Nq45xeghcL", "slideslive_id": 39018022, "venue": "iclr2024", "title": "Intelligent Switching for Reset-Free RL", "status": "Poster", "keywords": "Reset-Free RL", "tldr": "Intelligently switching between controllers leads to state-of-the-art performance on Reset-Free RL.", "abstract": "In the real world, the strong episode resetting mechanisms that are needed to train agents in simulation are unavailable. The resetting assumption limits the potential of reinforcement learning in the real world, as providing resets to an agent usually requires the creation of additional handcrafted mechanisms or human interventions. Recent work aims to train agents (forward) with learned resets by constructing a second (backward) agent that returns the forward agent to the initial state. We find that the termination and timing of the transitions between these two agents are crucial for algorithm success. With this in mind, we create a new algorithm, Reset Free RL with Intelligently Switching Controller (RISC) which intelligently switches between the two agents based on the agent\u2019s confidence in achieving its current goal. Our new method achieves state-of-the-art performance on several challenging environments for reset-free RL.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18769"} +{"video_file": "Nshk5YpdWE_39018021.mp4", "openreview_id": "Nshk5YpdWE", "slideslive_id": 39018021, "venue": "iclr2024", "title": "Lagrangian Flow Networks for Conservation Laws", "status": "Spotlight", "keywords": "Physics-informed Neural Network;Fluid Dynamics;Conservation Law;Partial Differential Equation;Conditional Normalizing Flows;Bird-Migration", "tldr": "An approach for modeling fluid densities and velocities continuously in space and time while satisfying the continuity equation by construction.", "abstract": "We introduce Lagrangian Flow Networks (LFlows) for modeling fluid densities and velocities continuously in space and time. By construction, the proposed LFlows satisfy the continuity equation, a PDE describing mass conservation in its differential form. Our model is based on the insight that solutions to the continuity equation can be expressed as time-dependent density transformations via differentiable and invertible maps. This follows from classical theory of the existence and uniqueness of Lagrangian flows for smooth vector fields. Hence, we model fluid densities by transforming a base density with parameterized diffeomorphisms conditioned on time. The key benefit compared to methods relying on numerical ODE solvers or PINNs is that the analytic expression of the velocity is always consistent with changes in density. Furthermore, we require neither expensive numerical solvers, nor additional penalties to enforce the PDE. LFlows show higher predictive accuracy in density modeling tasks compared to competing models in 2D and 3D, while being computationally efficient. 
As a real-world application, we model bird migration based on sparse weather radar measurements.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18767"} +{"video_file": "NvbeD9Ttkx_39018018.mp4", "openreview_id": "NvbeD9Ttkx", "slideslive_id": 39018018, "venue": "iclr2024", "title": "FOSI: Hybrid First and Second Order Optimization", "status": "Poster", "keywords": "convex optimization;nonconvex optimization;first order optimization;second order optimization;deep learning", "tldr": "FOSI is a novel meta-algorithm that improves the performance of any first-order optimizer by efficiently incorporating second-order information.", "abstract": "Popular machine learning approaches forgo second-order information due to the difficulty of computing curvature in high dimensions. We present FOSI, a novel meta-algorithm that improves the performance of any base first-order optimizer by efficiently incorporating second-order information during the optimization process. In each iteration, FOSI implicitly splits the function into two quadratic functions defined on orthogonal subspaces, then uses a second-order method to minimize the first, and the base optimizer to minimize the other. We formally analyze FOSI's convergence and the conditions under which it improves a base optimizer. Our empirical evaluation demonstrates that FOSI improves the convergence rate and optimization time of first-order methods such as Heavy-Ball and Adam, and outperforms second-order methods (K-FAC and L-BFGS).", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18763"} +{"video_file": "Ny8NiVfi95_39018016.mp4", "openreview_id": "Ny8NiVfi95", "slideslive_id": 39018016, "venue": "iclr2024", "title": "Masked Audio Generation using a Single Non-Autoregressive Transformer", "status": "Poster", "keywords": "Audio modeling;audio generation;music generation;non-autoregressive models", "tldr": "We present MAGNeT, a fully non-autoregressive model for music and audio generation. MAGNeT reaches comparable performance to autoregressive models (e.g., MusicGen) but x7 faster during inference time.", "abstract": "We introduce MAGNeT, a masked generative sequence modeling method that operates directly over several streams of audio tokens. Unlike prior work, MAGNeT is comprised of a single-stage, non-autoregressive transformer. During training, we predict spans of masked tokens obtained from a masking scheduler, while during inference we gradually construct the output sequence using several decoding steps. To further enhance the quality of the generated audio, we introduce a novel rescoring method in which, we leverage an external pre-trained model to rescore and rank predictions from MAGNeT, which will be then used for later decoding steps. Lastly, we explore a hybrid version of MAGNeT, in which we fuse between autoregressive and non-autoregressive models to generate the first few seconds in an autoregressive manner while the rest of the sequence is being decoded in parallel. We demonstrate the efficiency of MAGNeT for the task of text-to-music and text-to-audio generation and conduct an extensive empirical evaluation, considering both objective metrics and human studies. The proposed approach is comparable to the evaluated baselines, while being significantly faster (x7 faster than the autoregressive baseline). 
Through ablation studies and analysis, we shed light on the importance of each of the components comprising MAGNeT, together with pointing to the trade-offs between autoregressive and non-autoregressive modeling, considering latency, throughput, and generation quality. Samples are available on our demo page https://pages.cs.huji.ac.il/adiyoss-lab/MAGNeT.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18760"} +{"video_file": "O9PArxKLe1_39018759.mp4", "openreview_id": "O9PArxKLe1", "slideslive_id": 39018759, "venue": "iclr2024", "title": "Leveraging Optimization for Adaptive Attacks on Image Watermarks", "status": "Poster", "keywords": "watermarking;adaptive attacks;optimization;stable diffusion", "tldr": "We leverage optimization to break five image watermarks for Stable Diffusion models using adaptive, learnable attacks.", "abstract": "Untrustworthy users can misuse image generators to synthesize high-quality deepfakes and engage in unethical activities. Watermarking deters misuse by marking generated content with a hidden message, enabling its detection using a secret watermarking key. A core security property of watermarking is robustness, which states that an attacker can only evade detection by substantially degrading image quality. Assessing robustness requires designing an adaptive attack for the specific watermarking algorithm. When evaluating watermarking algorithms and their (adaptive) attacks, it is challenging to determine whether an adaptive attack is optimal, i.e., the best possible attack. We solve this problem by defining an objective function and then approach adaptive attacks as an optimization problem. The core idea of our adaptive attacks is to replicate secret watermarking keys locally by creating surrogate keys that are differentiable and can be used to optimize the attack's parameters. We demonstrate for Stable Diffusion models that such an attacker can break all five surveyed watermarking methods at no visible degradation in image quality. Optimizing our attacks is efficient and requires less than 1 GPU hour to reduce the detection accuracy to 6.3% or less. Our findings emphasize the need for more rigorous robustness testing against adaptive, learnable attackers.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/18755"} +{"video_file": "OEL4FJMg1b_39018011.mp4", "openreview_id": "OEL4FJMg1b", "slideslive_id": 39018011, "venue": "iclr2024", "title": "DragonDiffusion: Enabling Drag-style Manipulation on Diffusion Models", "status": "Spotlight", "keywords": "Diffusion model;Image editing;Image generation", "tldr": "A tuning-free diffusion method for general and drag-style image editing.", "abstract": "Despite the ability of text-to-image (T2I) diffusion models to generate high-quality images, transferring this ability to accurate image editing remains a challenge. In this paper, we propose a novel image editing method, DragonDiffusion, enabling Drag-style manipulation on Diffusion models. Specifically, we treat image editing as the change of feature correspondence in a pre-trained diffusion model. By leveraging feature correspondence, we develop energy functions that align with the editing target, transforming image editing operations into gradient guidance. Based on this guidance approach, we also construct multi-scale guidance that considers both semantic and geometric alignment. 
Furthermore, we incorporate a visual cross-attention strategy based on a memory bank design to ensure consistency between the edited result and original image. Benefiting from these efficient designs, all content editing and consistency operations come from the feature correspondence without extra model fine-tuning. Extensive experiments demonstrate that our method has promising performance on various image editing tasks, including within a single image (e.g., object moving, resizing, and content dragging) or across images (e.g., appearance replacing and object pasting). Code is available at https://github.com/MC-E/DragonDiffusion.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18751"} +{"video_file": "OI3RoHoWAN_39018009.mp4", "openreview_id": "OI3RoHoWAN", "slideslive_id": 39018009, "venue": "iclr2024", "title": "GenSim: Generating Robotic Simulation Tasks via Large Language Models", "status": "Spotlight", "keywords": "LLM Code Generation;Robotic Simulation;Multi-task Policy Learning", "tldr": "We investigated LLM's capability to generate over 100 simulation tasks for training language-conditioned multitask robotic manipulation policy, which demonstrates task-level generalization in both simulation and the real world.", "abstract": "Collecting large amounts of real-world interaction data to train general robotic policies is often prohibitively expensive, thus motivating the use of simulation data. However, existing methods for data generation have generally focused on scene-level diversity (e.g., object instances and poses) rather than task-level diversity, due to the human effort required to come up with and verify novel tasks. This has made it challenging for policies trained on simulation data to demonstrate significant task-level generalization. In this paper, we propose to automatically generate rich simulation environments and expert demonstrations by exploiting a large language models' (LLM) grounding and coding ability. Our approach, dubbed GenSim, has two modes: goal-directed generation, wherein a target task is given to the LLM and the LLM proposes a task curriculum to solve the target task, and exploratory generation, wherein the LLM bootstraps from previous tasks and iteratively proposes novel tasks that would be helpful in solving more complex tasks. We use GPT4 to expand the existing benchmark by ten times to over 100 tasks, on which we conduct supervised finetuning and evaluate several LLMs including finetuned GPTs and Code Llama on code generation for robotic simulation tasks. Furthermore, we observe that LLMs-generated simulation programs can enhance task-level generalization significantly when used for multitask policy training. We further find that with minimal sim-to-real adaptation, the multitask policies pretrained on GPT4-generated simulation tasks exhibit stronger transfer to unseen long-horizon tasks in the real world and outperform baselines by 25%. 
See our project website (https://gen-sim.github.io) and demo (https://huggingface.co/spaces/Gen-Sim/Gen-Sim) for visualizations and open-source models and datasets.", "primary_area": "applications to robotics, autonomy, planning", "site": "https://iclr.cc/virtual/2024/poster/18747"} +{"video_file": "OIsahq1UYC_39018008.mp4", "openreview_id": "OIsahq1UYC", "slideslive_id": 39018008, "venue": "iclr2024", "title": "Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory optimization", "status": "Poster", "keywords": "probabilistic inference;sampling;stochastic optimal control;gflownets", "tldr": "DGFS is an algorithm which learns a stochastic process to sample from unnormalized densities, and can update parameters without full specification of diffusion chains.", "abstract": "We tackle the problem of sampling from intractable high-dimensional density functions, a fundamental task that often appears in machine learning and statistics. We extend recent sampling-based approaches that leverage controlled stochastic processes to model approximate samples from these target densities.\nThe main drawback of these approaches is that the training objective requires full trajectories to compute, resulting in sluggish credit assignment issues due to use of entire trajectories and a learning signal present only at the terminal time. In this work, we present Diffusion Generative Flow Samplers (DGFS), a sampling-based framework where the learning process can be tractably broken down into short partial trajectory segments, via parameterizing an additional ``flow function''. Our method takes inspiration from the theory developed for generative flow networks (GFlowNets), allowing us to make use of intermediate learning signals. Through various challenging experiments, we demonstrate that DGFS achieves more accurate estimates of the normalization constant than closely-related prior methods.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18746"} +{"video_file": "OUkZXbbwQr_39018001.mp4", "openreview_id": "OUkZXbbwQr", "slideslive_id": 39018001, "venue": "iclr2024", "title": "Reward Design for Justifiable Sequential Decision-Making", "status": "Poster", "keywords": "reinforcement learning;reward design;alignment;preference-based learning", "tldr": "This paper proposes the use of a reward model defined as the outcome of a two-player zero-sum debate game, where agents compete to justify made decisions with supporting evidence according to human preferences.", "abstract": "Equipping agents with the capacity to justify made decisions using supporting evidence represents a cornerstone of accountable decision-making. Furthermore, ensuring that justifications are in line with human expectations and societal norms is vital, especially in high-stakes situations such as healthcare. In this work, we propose the use of a debate-based reward model for reinforcement learning agents, where the outcome of a zero-sum debate game quantifies the justifiability of a decision in a particular state. This reward model is then used to train a justifiable policy, whose decisions can be more easily corroborated with supporting evidence. In the debate game, two argumentative agents take turns providing supporting evidence for two competing decisions. Given the proposed evidence, a proxy of a human judge evaluates which decision is better justified. 
We demonstrate the potential of our approach in learning policies for prescribing and justifying treatment decisions of septic patients. We show that augmenting the reward with the feedback signal generated by the debate-based reward model yields policies highly favored by the judge when compared to the policy obtained solely from the environment rewards, while hardly sacrificing any performance. Moreover, in terms of the overall performance and justifiability of trained policies, the debate-based feedback is comparable to the feedback obtained from an ideal judge proxy that evaluates decisions using the full information encoded in the state. This suggests that the debate game outputs key information contained in states that is most relevant for evaluating decisions, which in turn substantiates the practicality of combining our approach with human-in-the-loop evaluations. Lastly, we showcase that agents trained via multi-agent debate learn to propose evidence that is resilient to refutations and closely aligns with human preferences.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18740"} +{"video_file": "OdpIjS0vkO_39017999.mp4", "openreview_id": "OdpIjS0vkO", "slideslive_id": 39017999, "venue": "iclr2024", "title": "More is Better: when Infinite Overparameterization is Optimal and Overfitting is Obligatory", "status": "Poster", "keywords": "overparameterization;interpolation;random feature regression;kernel regression;generalization;overfitting", "tldr": "We show that (a) random feature regression strictly benefits from additional features and (b) a realistic class of kernel learning task requires (near-)zero regularization to reach optimal performance.", "abstract": "In our era of enormous neural networks, empirical progress has been driven by the philosophy that more is better. Recent deep learning practice has found repeatedly that larger model size, more data, and more computation (resulting in lower training loss) optimizing to near-interpolation improves performance. In this paper, we give theoretical backing to these empirical observations by showing that these three properties hold in random feature (RF) regression, a class of models equivalent to shallow networks with only the last layer trained.\nConcretely, we first show that the test risk of RF regression decreases monotonically with both the number of features and samples, provided the ridge penalty is tuned optimally. In particular, this implies that infinite width RF architectures are preferable to those of any finite width. We then proceed to demonstrate that, for a large class of tasks characterized by powerlaw eigenstructure, training to near-zero training loss is obligatory: near-optimal performance can only be achieved when the training error is much smaller than the test error. Grounding our theory in real-world data, we find empirically that standard computer vision tasks with convolutional neural kernels clearly fall into this class. 
Taken together, our results tell a simple, testable story of the benefits of overparameterization and overfitting in random feature models.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/18736"} +{"video_file": "OeQE9zsztS_39017997.mp4", "openreview_id": "OeQE9zsztS", "slideslive_id": 39017997, "venue": "iclr2024", "title": "Spectrally Transformed Kernel Regression", "status": "Spotlight", "keywords": "Learning Theory;Unlabeled Data;Kernel Methods;Semi-supervised Learning;Representation Learning;Label Propagation", "tldr": "STKR leverages unlabeled data by mixing the information from a kernel and data distribution via diffusion. We provide new STKR estimators applicable to the inductive setting, together with statistical guarantees and complexity analysis.", "abstract": "Unlabeled data is a key component of modern machine learning. In general, the role of unlabeled data is to impose a form of smoothness, usually from the similarity information encoded in a base kernel, such as the \u03f5-neighbor kernel or the adjacency matrix of a graph. This work revisits the classical idea of spectrally transformed kernel regression (STKR), and provides a new class of general and scalable STKR estimators able to leverage unlabeled data. Intuitively, via spectral transformation, STKR exploits the data distribution for which unlabeled data can provide additional information. First, we show that STKR is a principled and general approach, by characterizing a universal type of \u201ctarget smoothness\u201d, and proving that any sufficiently smooth function can be learned by STKR. Second, we provide scalable STKR implementations for the inductive setting and a general transformation function, while prior work is mostly limited to the transductive setting. Third, we derive statistical guarantees for two scenarios: STKR with a known polynomial transformation, and STKR with kernel PCA when the transformation is unknown. Overall, we believe that this work helps deepen our understanding of how to work with unlabeled data, and its generality makes it easier to inspire new methods.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/18734"} +{"video_file": "OfXqQ5TRwp_39017995.mp4", "openreview_id": "OfXqQ5TRwp", "slideslive_id": 39017995, "venue": "iclr2024", "title": "ALAM: Averaged Low-Precision Activation for Memory-Efficient Training of Transformer Models", "status": "Poster", "keywords": "Memory efficient training;Activation-compressed training;Average Quantization;NLP;Transformer", "tldr": "ALAM compresses activations to their group average with a lightweight sensitivity calculation, achieving up to a 10x activation memory reduction in LLMs.", "abstract": "One of the key challenges in deep neural network training is the substantial amount of GPU memory required to store activations obtained in the forward pass. Various Activation-Compressed Training (ACT) schemes have been proposed to mitigate this issue; however, it is challenging to adopt those approaches in recent transformer-based large language models (LLMs), which experience significant performance drops when the activations are deeply compressed during training. In this paper, we introduce ALAM, a novel ACT framework that utilizes average quantization and a lightweight sensitivity calculation scheme, enabling large memory saving in LLMs while maintaining training performance. We first demonstrate that compressing activations into their group average values minimizes the gradient variance. 
Employing this property, we propose Average Quantization which provides high-quality deeply compressed activations with an effective precision of less than 1 bit and improved flexibility of precision allocation. In addition, we present a cost-effective yet accurate sensitivity calculation algorithm that solely relies on the L2 norm of parameter gradients, substantially reducing memory overhead due to sensitivity calculation. In experiments, the ALAM framework significantly reduces activation memory without compromising accuracy, achieving up to a 10\u00d7 compression rate in LLMs.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18732"} +{"video_file": "Oju2Qu9jvn_39018898.mp4", "openreview_id": "Oju2Qu9jvn", "slideslive_id": 39018898, "venue": "iclr2024", "title": "Estimating Conditional Mutual Information for Dynamic Feature Selection", "status": "Poster", "keywords": "dynamic feature selection;adaptive;feature selection;mutual information;information theory", "tldr": "We develop a method for dynamic feature selection by directly predicting the conditional mutual information with the response variable", "abstract": "Dynamic feature selection, where we sequentially query features to make accurate predictions with a minimal budget, is a promising paradigm to reduce feature acquisition costs and provide transparency into the prediction process. The problem is challenging, however, as it requires both making predictions with arbitrary feature sets and learning a policy to identify the most valuable selections. Here, we take an information-theoretic perspective and prioritize features based on their mutual information with the response variable. The main challenge is implementing this policy, and we design a new approach that estimates the mutual information in a discriminative rather than a generative fashion. Building on our learning approach, we introduce several further improvements: allowing variable feature budgets across samples, enabling non-uniform costs between features, incorporating prior information, and exploring modern architectures to handle partial input information. We find that our method provides consistent gains over recent state-of-the-art methods across a variety of datasets.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18731"} +{"video_file": "OrOd8PxOO2_39017994.mp4", "openreview_id": "OrOd8PxOO2", "slideslive_id": 39017994, "venue": "iclr2024", "title": "Universal Humanoid Motion Representations for Physics-Based Control", "status": "Spotlight", "keywords": "humanoid control;motion generation;physics simulation", "tldr": "We present a universal motion latent space for physics-based humanoid control.", "abstract": "We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control. Due to the high dimensionality of humanoids and the inherent difficulties in reinforcement learning, prior methods have focused on learning skill embeddings for a narrow range of movement styles (e.g. locomotion, game characters) from specialized motion datasets. This limited scope hampers their applicability in complex tasks. We close this gap by significantly increasing the coverage of our motion representation space. To achieve this, we first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset. 
We then create our motion representation by distilling skills directly from the imitator. This is achieved by using an encoder-decoder structure with a variational information bottleneck. Additionally, we jointly learn a prior conditioned on proprioception (humanoid's own pose and velocities) to improve model expressiveness and sampling efficiency for downstream tasks. By sampling from the prior, we can generate long, stable, and diverse human motions. Using this latent space for hierarchical RL, we show that our policies solve tasks using human-like behavior. We demonstrate the effectiveness of our motion representation by solving generative tasks (e.g. strike, terrain traversal) and motion tracking using VR controllers.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18728"} +{"video_file": "OuV9ZrkQlc_39018883.mp4", "openreview_id": "OuV9ZrkQlc", "slideslive_id": 39018883, "venue": "iclr2024", "title": "ImagenHub: Standardizing the evaluation of conditional image generation models", "status": "Poster", "keywords": "Image Generation;Image Editing;Evaluation;Benchmark;Diffusion Model", "tldr": "We present ImagenHub, which is a continuous effort to standardize the inference and evaluation of all the existing conditional image generation models. We present lots of key takeaways in the paper to help develop better image generation models.", "abstract": "Recently, a myriad of conditional image generation and editing models have been developed to serve different downstream tasks, including text-to-image generation, text-guided image editing, subject-driven image generation, control-guided image generation, etc. However, we observe huge inconsistencies in experimental conditions: datasets, inference, and evaluation metrics -- render fair comparisons difficult.\nThis paper proposes ImagenHub, which is a one-stop library to standardize the inference and evaluation of all the conditional image generation models. Firstly, we define seven prominent tasks and curate high-quality evaluation datasets for them. Secondly, we built a unified inference pipeline to ensure fair comparison. Thirdly, we design two human evaluation scores, i.e. Semantic Consistency and Perceptual Quality, along with comprehensive guidelines to evaluate generated images. We train expert raters to evaluate the model outputs based on the proposed metrics. Our human evaluation achieves a high inter-worker agreement of Krippendorff\u2019s alpha on 76% models with a value higher than 0.4. We comprehensively evaluated a total of around 30 models and observed three key takeaways: (1) the existing models\u2019 performance is generally unsatisfying except for Text-guided Image Generation and Subject-driven Image Generation, with 74% models achieving an overall score lower than 0.5. (2) we examined the claims from published papers and found 83% of them hold with a few exceptions. (3) None of the existing automatic metrics has a Spearman's correlation higher than 0.2 except subject-driven image generation. 
Moving forward, we will continue our efforts to evaluate newly published models and update our leaderboard to keep track of the progress in conditional image generation.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/18726"} +{"video_file": "Ouj6p4ca60_39017992.mp4", "openreview_id": "Ouj6p4ca60", "slideslive_id": 39017992, "venue": "iclr2024", "title": "Amortizing intractable inference in large language models", "status": "Oral", "keywords": "large language models;LLMs;Bayesian inference;chain-of-thought reasoning;latent variable models;generative flow networks;GFlowNets", "tldr": "We fine-tune LLMs to sample from intractable posteriors for tasks such as infilling, chain-of-thought reasoning, and tool-augmented inference.", "abstract": "Autoregressive large language models (LLMs) compress knowledge from their training data through next-token conditional distributions. This limits tractable querying of this knowledge to start-to-end autoregressive sampling. However, many tasks of interest---including sequence continuation, infilling, and other forms of constrained generation---involve sampling from intractable posterior distributions. We address this limitation by using amortized Bayesian inference to sample from these intractable posteriors. Such amortization is algorithmically achieved by fine-tuning LLMs via diversity-seeking reinforcement learning algorithms: generative flow networks (GFlowNets). We empirically demonstrate that this distribution-matching paradigm of LLM fine-tuning can serve as an effective alternative to maximum-likelihood training and reward-maximizing policy optimization. As an important application, we interpret chain-of-thought reasoning as a latent variable modeling problem and demonstrate that our approach enables data-efficient adaptation of LLMs to tasks that require multi-step rationalization and tool use.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18725"} +{"video_file": "OvlcyABNQT_39019069.mp4", "openreview_id": "OvlcyABNQT", "slideslive_id": 39019069, "venue": "iclr2024", "title": "Augmented Bayesian Policy Search", "status": "Poster", "keywords": "Reinforcement learning;Policy search;Bayesian optimization;Gaussian Processes", "tldr": "A novel mean function for Gaussian processes that bridges the gap between reinforcement learning and Bayesian optimization (BO) by leveraging the performance difference lemma to augment BO schemes with an action-value function.", "abstract": "Deterministic policies are often preferred over stochastic ones when implemented on physical systems. They can prevent erratic and harmful behaviors while being easier to implement and interpret. However, in practice, exploration is largely performed by stochastic policies. First-order Bayesian Optimization (BO) methods offer a principled way of performing exploration using deterministic policies. This is done through a learned probabilistic model of the objective function and its gradient. Nonetheless, such approaches treat policy search as a black-box problem, and thus, neglect the reinforcement learning nature of the problem. In this work, we leverage the performance difference lemma to introduce a novel mean function for the probabilistic model. This results in augmenting BO methods with the action-value function. Hence, we call our method Augmented Bayesian Search (ABS). 
Interestingly, this new mean function enhances the posterior gradient with the deterministic policy gradient, effectively bridging the gap between BO and policy gradient methods. The resulting algorithm combines the convenience of the direct policy search with the scalability of reinforcement learning. We validate ABS on high-dimensional locomotion problems and demonstrate competitive performance compared to existing direct policy search schemes.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18724"} +{"video_file": "OwtMhMSybu_39017991.mp4", "openreview_id": "OwtMhMSybu", "slideslive_id": 39017991, "venue": "iclr2024", "title": "Unlocking the Power of Representations in Long-term Novelty-based Exploration", "status": "Spotlight", "keywords": "Deep RL;exploration;density estimation;representation learning", "tldr": "We introduce a new novelty estimator for exploration in deep RL which can preserve long-term memory and be used with any representation learning techniques", "abstract": "We introduce Robust Exploration via Clustering-based Online Density Estimation (RECODE), a non-parametric method for novelty-based exploration that estimates visitation counts for clusters of states based on their similarity in a chosen embedding space. By adapting classical clustering to the nonstationary setting of Deep RL, RECODE can efficiently track state visitation counts over thousands of episodes. We further propose a novel generalization of the inverse dynamics loss, which leverages masked transformer architectures for multi-step prediction; which in conjunction with \\DETOCS achieves a new state-of-the-art in a suite of challenging 3D-exploration tasks in DM-Hard-8. RECODE also sets new state-of-the-art in hard exploration Atari games, and is the first agent to reach the end screen in \"Pitfall!\"", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18723"} +{"video_file": "P15CHILQlg_39017989.mp4", "openreview_id": "P15CHILQlg", "slideslive_id": 39017989, "venue": "iclr2024", "title": "Learning Energy Decompositions for Partial Inference in GFlowNets", "status": "Oral", "keywords": "Generative flow networks;reinforcement learning;generative models", "tldr": "We investigate a learning-based approach to produce informative local credits that facilitate the partial inference of GFlowNet", "abstract": "This paper studies generative flow networks (GFlowNets) to sample objects from the Boltzmann energy distribution via a sequence of actions. In particular, we focus on improving GFlowNet with partial inference: training flow functions with the evaluation of the intermediate states or transitions. To this end, the recently developed forward-looking GFlowNet reparameterizes the flow functions based on evaluating the energy of intermediate states. However, such an evaluation of intermediate energies may (i) be too expensive or impossible to evaluate and (ii) even provide misleading training signals under large energy fluctuations along the sequence of actions. To resolve this issue, we propose learning energy decompositions for GFlowNets (LED-GFN). Our main idea is to (i) decompose the energy of an object into learnable potential functions defined on state transitions and (ii) reparameterize the flow functions using the potential functions. In particular, to produce informative local credits, we propose to regularize the potential to change smoothly over the sequence of actions. 
It is also noteworthy that training GFlowNet with our learned potential can preserve the optimal policy. We empirically verify the superiority of LED-GFN in five problems including the generation of unstructured and maximum independent sets, molecular graphs, and RNA sequences.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18721"} +{"video_file": "P1ANzoGg3W_39017988.mp4", "openreview_id": "P1ANzoGg3W", "slideslive_id": 39017988, "venue": "iclr2024", "title": "H2O-SDF: Two-phase Learning for 3D Indoor Reconstruction using Object Surface Fields", "status": "Spotlight", "keywords": "3D reconstruction;Neural implicit surface learning", "tldr": "For 3D indoor scene reconstruction, we present a novel two-phase learning approach, which comprises holistic surface learning and object surface learning.", "abstract": "Advanced techniques using Neural Radiance Fields (NeRF), Signed Distance Fields (SDF), and Occupancy Fields have recently emerged as solutions for 3D indoor scene reconstruction. We introduce a novel two-phase learning approach, H2O-SDF, that discriminates between object and non-object regions within indoor environments. This method achieves a nuanced balance, carefully preserving the geometric integrity of room layouts while also capturing intricate surface details of specific objects. A cornerstone of our two-phase learning framework is the introduction of the Object Surface Field (OSF), a novel concept designed to mitigate the persistent vanishing gradient problem that has previously hindered the capture of high-frequency details in other methods. Our proposed approach is validated through several experiments that include ablation studies.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18720"} +{"video_file": "P1aobHnjjj_39017986.mp4", "openreview_id": "P1aobHnjjj", "slideslive_id": 39017986, "venue": "iclr2024", "title": "Implicit bias of SGD in $L_2$-regularized linear DNNs: One-way jumps from high to low rank", "status": "Spotlight", "keywords": "implicit bias;SGD;low-rank;linear networks", "tldr": "Low-rank bias of SGD in L2 reg. linear nets: SGD jumps from high to low rank minima, with no probability of jumping back.", "abstract": "The L_2-regularized loss of Deep Linear Networks (DLNs) with more than one hidden layers has multiple local minima, corresponding to matrices with different ranks. In tasks such as matrix completion, the goal is to converge to the local minimum with the smallest rank that still fits the training data. While rank-underestimating minima can be avoided since they do not fit the data, GD might get stuck at rank-overestimating minima. We show that with SGD, there is always a probability to jump from a higher rank minimum to a lower rank one, but the probability of jumping back is zero. More precisely, we define a sequence of sets B_1 \u2282 B_2 \u2282 \u22ef \u2282 B_R so that B_r contains all minima of rank r or less (and not more) that are absorbing for small enough ridge parameters \u03bb and learning rates \u03b7: SGD has prob. 0 of leaving B_r, and from any starting point there is a non-zero prob. for SGD to go in B_r.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/18719"} +{"video_file": "PHLVmV88Zy_39017983.mp4", "openreview_id": "PHLVmV88Zy", "slideslive_id": 39017983, "venue": "iclr2024", "title": "Unconstrained Stochastic CCA: Unifying Multiview and Self-Supervised Learning", "status": "Poster", "keywords": "Canonical Correlation Analysis;Multiview Learning;Self-Supervised Learning", "tldr": "our work unifies CCA, Deep CCA and PLS through a Generalised Eigenvalue Problem framework, introduces faster algorithms with SGD, and sets new benchmarks.", "abstract": "The Canonical Correlation Analysis (CCA) family of methods is foundational in multiview learning. Regularised linear CCA methods can be seen to generalise Partial Least Squares (PLS) and be unified with a Generalized Eigenvalue Problem (GEP) framework. However, classical algorithms for these linear methods are computationally infeasible for large-scale data. Extensions to Deep CCA show great promise, but current training procedures are slow and complicated. First we propose a novel unconstrained objective that characterizes the top subspace of GEPs. Our core contribution is a family of fast algorithms for stochastic PLS, stochastic CCA, and Deep CCA, simply obtained by applying stochastic gradient descent (SGD) to the corresponding CCA objectives. Our algorithms show far faster convergence and recover higher correlations than the previous state-of-the-art on all standard CCA and Deep CCA benchmarks. These improvements allow us to perform a first-of-its-kind PLS analysis of an extremely large biomedical dataset from the UK Biobank, with over 33,000 individuals and 500,000 features. Finally, we apply our algorithms to match the performance of `CCA-family' Self-Supervised Learning (SSL) methods on CIFAR-10 and CIFAR-100 with minimal hyper-parameter tuning, and also present theory to clarify the links between these methods and classical CCA, laying the groundwork for future insights.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18714"} +{"video_file": "PJVUWpPnZC_39017982.mp4", "openreview_id": "PJVUWpPnZC", "slideslive_id": 39017982, "venue": "iclr2024", "title": "Reinforcement Symbolic Regression Machine", "status": "Poster", "keywords": "symbolic regression;reinforcement learning;equation discovery", "tldr": "Proposed a novel Reinforcement Symbolic Regression Machine (RSRM) that masters the capability of uncovering complex math equations from only scarce data.", "abstract": "In nature, the behavior of many complex systems can be described by parsimonious math equations. Symbolic Regression (SR) is defined as the task of automatically distilling equations from limited data. Keen efforts have been placed on tackling this issue and demonstrated success in SR. However, there still exist bottlenecks that current methods struggle to break, when the expressions we need to explore tend toward infinity and especially when the underlying math formula is intricate. To this end, we propose a novel Reinforcement Symbolic Regression Machine (RSRM) that masters the capability of uncovering complex math equations from only scarce data. 
The RSRM model is composed of three key modules: (1) a Monte Carlo tree search (MCTS) agent, designed for exploration, that explores optimal math expression trees consisting of pre-defined math operators and variables, (2) a Double Q-learning block, designed for exploitation, that helps reduce the feasible search space of MCTS via properly understanding the distribution of reward, and (3) a modulated sub-tree discovery block that heuristically learns and defines new math operators to improve representation ability of math expression trees. Binding of these modules yields the SOTA performance of RSRM in SR as demonstrated by multiple benchmark datasets. The RSRM shows clear superiority over several representative baseline models.", "primary_area": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18713"} +{"video_file": "PJwAkg0z7h_39019149.mp4", "openreview_id": "PJwAkg0z7h", "slideslive_id": 39019149, "venue": "iclr2024", "title": "EasyTPP: Towards Open Benchmarking Temporal Point Processes", "status": "Poster", "keywords": "Event sequence;Temporal point process;Open benchmarking", "tldr": "We present a code base along with datasets for benchmarking temporal point process.", "abstract": "Continuous-time event sequences play a vital role in real-world domains such as healthcare, finance, online shopping, social networks, and so on. To model such data, temporal point processes (TPPs) have emerged as the most natural and competitive models, making a significant impact in both academic and application communities. Despite the emergence of many powerful models in recent years, there hasn't been a central benchmark for these models and future research endeavors. This lack of standardization impedes researchers and practitioners from comparing methods and reproducing results, potentially slowing down progress in this field. In this paper, we present EasyTPP, the first central repository of research assets (e.g., data, models, evaluation programs, documentations) in the area of event sequence modeling. Our EasyTPP makes several unique contributions to this area: a unified interface of using existing datasets and adding new datasets; a wide range of evaluation programs that are easy to use and extend as well as facilitate reproducible research; implementations of popular neural TPPs, together with a rich library of modules by composing which one could quickly build complex models. We will actively maintain this benchmark and welcome contributions from other researchers and practitioners. Our benchmark will help promote reproducible research in this field, thus accelerating research progress as well as making more significant real-world impacts. 
The code and data are available at \\url{https://github.com/ant-research/EasyTemporalPointProcess}.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/18712"} +{"video_file": "PLoWVP7Mjc_39017980.mp4", "openreview_id": "PLoWVP7Mjc", "slideslive_id": 39017980, "venue": "iclr2024", "title": "Embarrassingly Simple Dataset Distillation", "status": "Poster", "keywords": "Dataset Distillation;Data Condensation", "tldr": "We propose a new simple method for dataset distillation to achieve state-of-art results on a vast majority of benchmarks, providing further improvements through a boosted variant.", "abstract": "Dataset distillation extracts a small set of synthetic training samples from a large dataset with the goal of achieving competitive performance on test data when trained on this sample. In this work, we tackle dataset distillation at its core by treating it directly as a bilevel optimization problem. Re-examining the foundational back-propagation through time method, we study the pronounced variance in the gradients, computational burden, and long-term dependencies. We introduce an improved method: Random Truncated Backpropagation Through Time (RaT-BPTT) to address them. RaT-BPTT incorporates a truncation coupled with a random window, effectively stabilizing the gradients and speeding up the optimization while covering long dependencies. This allows us to establish new state-of-the-art for a variety of standard dataset benchmarks. A deeper dive into the nature of distilled data unveils pronounced intercorrelation. In particular, subsets of distilled datasets tend to exhibit much worse performance than directly distilled smaller datasets of the same size. Leveraging RaT-BPTT, we devise a boosting mechanism that generates distilled datasets that contain subsets with near optimal performance across different data budgets.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/18710"} +{"video_file": "PXD3FAVHJT_39017976.mp4", "openreview_id": "PXD3FAVHJT", "slideslive_id": 39017976, "venue": "iclr2024", "title": "Understanding the Effects of RLHF on LLM Generalisation and Diversity", "status": "Poster", "keywords": "reinforcement learning;large language models;rlhf;ood generalisation;diversity", "tldr": "We analyse the effects of RLHF fine-tuning on LLMs in terms of OOD generalisation and output diversity, finding that RLHF makes better-generalising but less diverse models", "abstract": "Large language models (LLMs) fine-tuned with reinforcement learning from human feedback (RLHF) have been used in some of the most widely deployed AI models to date, such as OpenAI's ChatGPT or Anthropic's Claude. While there has been significant work developing these methods, our understanding of the benefits and downsides of each stage in RLHF is still limited. To fill this gap, we present an extensive analysis of how each stage of the process (i.e. supervised fine-tuning (SFT), reward modelling, and RLHF) affects two key properties: out-of-distribution (OOD) generalisation and output diversity. OOD generalisation is crucial given the wide range of real-world scenarios in which these models are being used, while output diversity refers to the model's ability to generate varied outputs and is important for a variety of use cases. We perform our analysis across two base models on both summarisation and instruction following tasks, the latter being highly relevant for current LLM use cases. 
We find that RLHF generalises better than SFT to new inputs, particularly as the distribution shift between train and test becomes larger. However, RLHF significantly reduces output diversity compared to SFT across a variety of measures, implying a tradeoff in current LLM fine-tuning methods between generalisation and diversity. Our results provide guidance on which fine-tuning method should be used depending on the application, and show that more research is needed to improve the tradeoff between generalisation and diversity.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18704"} +{"video_file": "PXNrncg2DF_39017975.mp4", "openreview_id": "PXNrncg2DF", "slideslive_id": 39017975, "venue": "iclr2024", "title": "SOHES: Self-supervised Open-world Hierarchical Entity Segmentation", "status": "Poster", "keywords": "self-supervised learning;open-world learning;segmentation", "tldr": "We propose a self-supervised approach for open-world entity segmentation with hierarchical structures.", "abstract": "Open-world entity segmentation, as an emerging computer vision task, aims at segmenting entities in images without being restricted by pre-defined classes, offering impressive generalization capabilities on unseen images and concepts. Despite its promise, existing entity segmentation methods like Segment Anything Model (SAM) rely heavily on costly expert annotators. This work presents Self-supervised Open-world Hierarchical Entity Segmentation (SOHES), a novel approach that eliminates the need for human annotations. SOHES operates in three phases: self-exploration, self-instruction, and self-correction. Given a pre-trained self-supervised representation, we produce abundant high-quality pseudo-labels through visual feature clustering. Then, we train a segmentation model on the pseudo-labels, and rectify the noises in pseudo-labels via a teacher-student mutual-learning procedure. Beyond segmenting entities, SOHES also captures their constituent parts, providing a hierarchical understanding of visual entities. Using raw images as the sole training data, our method achieves unprecedented performance in self-supervised open-world segmentation, marking a significant milestone towards high-quality open-world entity segmentation in the absence of human-annotated masks. Project page: https://SOHES.github.io.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18703"} +{"video_file": "PcxQgtHGj2_39018839.mp4", "openreview_id": "PcxQgtHGj2", "slideslive_id": 39018839, "venue": "iclr2024", "title": "Pre-training with Synthetic Data Helps Offline Reinforcement Learning", "status": "Poster", "keywords": "Deep Reinforcement Learning;Offline Reinforcement Learning;Pretraining", "tldr": "We show pre-training with synthetic Markov Chain and MDP data can significantly improve offline DRL performance.", "abstract": "Recently, it has been shown that for offline deep reinforcement learning (DRL), pre-training Decision Transformer with a large language corpus can improve downstream performance (Reid et al., 2022). A natural question to ask is whether this performance gain can only be achieved with language pre-training, or can be achieved with simpler pre-training schemes which do not involve language. 
In this paper, we first show that language is not essential for improved performance, and indeed pre-training with synthetic IID data for a small number of updates can match the performance gains from pre-training with a large language corpus; moreover, pre-training with data generated by a one-step Markov chain can further improve the performance. Inspired by these experimental results, we then consider pre-training Conservative Q-Learning (CQL), a popular offline DRL algorithm, which is Q-learning-based and typically employs a Multi-Layer Perceptron (MLP) backbone. Surprisingly, pre-training with simple synthetic data for a small number of updates can also improve CQL, providing consistent performance improvement on D4RL Gym locomotion datasets. The results of this paper not only illustrate the importance of pre-training for offline DRL but also show that the pre-training data can be synthetic and generated with remarkably simple mechanisms.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18700"} +{"video_file": "PdaPky8MUn_39017971.mp4", "openreview_id": "PdaPky8MUn", "slideslive_id": 39017971, "venue": "iclr2024", "title": "Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors", "status": "Oral", "keywords": "Pre Training;Transformers;State Space Models;Long Range Models;Fair Evaluation", "tldr": "Training a model directly on a dataset from sctrach can lead to grossly under-estimated performance. For proper evaluation, one must first pretrain on the dataset and then finetune.", "abstract": "Modeling long-range dependencies across sequences is a longstanding goal in machine learning and has led to architectures, such as state space models, that dramatically outperform Transformers on long sequences. However, these impressive empirical gains have been by and large demonstrated on benchmarks (e.g. Long Range Arena), where models are randomly initialized and trained to predict a target label from an input sequence. In this work, we show that random initialization leads to gross overestimation of the differences between architectures and that pretraining with standard denoising objectives, using only the downstream task data, leads to dramatic gains across multiple architectures and to very small gaps between Transformers and state space models (SSMs). In stark contrast to prior works, we find vanilla Transformers to match the performance of S4 on Long Range Arena when properly pretrained, and we improve the best reported results of SSMs on the PathX-256 task by 20 absolute points. Subsequently, we analyze the utility of previously-proposed structured parameterizations for SSMs and show they become mostly redundant in the presence of data-driven initialization obtained through pretraining. 
Our work shows that, when evaluating different architectures on supervised tasks, incorporation of data-driven priors via pretraining is essential for reliable performance estimation, and can be done efficiently.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18698"} +{"video_file": "PfPnugdxup_39017141.mp4", "openreview_id": "PfPnugdxup", "slideslive_id": 39017141, "venue": "iclr2024", "title": "From Molecules to Materials: Pre-training Large Generalizable Models for Atomic Property Prediction", "status": "Poster", "keywords": "atomic property prediction;pre-training;3D atomic pre-training;graph neural networks;multi-task learning;molecules;materials", "tldr": "We pre-train a large model on multiple chemical datasets in a multi-task learning framework to generate transferable atomic representations that can be fine-tuned for SOTA results across various tasks.", "abstract": "Foundation models have been transformational in machine learning fields such as natural language processing and computer vision. Similar success in atomic property prediction has been limited due to the challenges of training effective models across multiple chemical domains. To address this, we introduce Joint Multi-domain Pre-training (JMP), a supervised pre-training strategy that simultaneously trains on multiple datasets from different chemical domains, treating each dataset as a unique pre-training task within a multi-task framework. Our combined training dataset consists of \u223c120M systems from OC20, OC22, ANI-1x, and Transition-1x. We evaluate performance and generalization by fine-tuning over a diverse set of downstream tasks and datasets including: QM9, rMD17, MatBench, QMOF, SPICE, and MD22. JMP demonstrates an average improvement of 59% over training from scratch and matches or sets state-of-the-art on 34 out of 40 tasks. Our work highlights the potential of pre-training strategies that utilize diverse data to advance property prediction across chemical domains, especially for low-data tasks.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18696"} +{"video_file": "PnR1MNen7u_39018971.mp4", "openreview_id": "PnR1MNen7u", "slideslive_id": 39018971, "venue": "iclr2024", "title": "Deep Geodesic Canonical Correlation Analysis for Covariance-Based Neuroimaging Data", "status": "Spotlight", "keywords": "Geometric Deep Learning;Self-Supervised Learning;Brain-Computer Interfaces;Neuroimaging;Neuroscience", "tldr": "A geometric deep learning-based approach to learn the SPD matrix-valued latent representation for paired covariance-based neuroimaging modalities under the self-supervised learning framework.", "abstract": "In human neuroimaging, multi-modal imaging techniques are frequently combined to enhance our comprehension of whole-brain dynamics and improve diagnosis in clinical practice. Modalities like electroencephalography and functional magnetic resonance imaging provide distinct views to the brain dynamics due to diametral spatiotemporal sensitivities and underlying neurophysiological coupling mechanisms. These distinct views pose a considerable challenge to learning a shared representation space, especially when dealing with covariance-based data characterized by their geometric structure. 
To capitalize on the geometric structure, we introduce a measure called geodesic correlation which expands traditional correlation consistency to covariance-based data on the symmetric positive definite (SPD) manifold. This measure is derived from classical canonical correlation analysis and serves to evaluate the consistency of latent representations obtained from paired views. For multi-view, self-supervised learning where one or both latent views are SPD we propose an innovative geometric deep learning framework termed DeepGeoCCA. Its primary objective is to enhance the geodesic correlation of unlabeled, paired data, thereby generating novel representations while retaining the geometric structures. In simulations and experiments with multi-view and multi-modal human neuroimaging data, we find that DeepGeoCCA learns latent representations with high geodesic correlation for unseen data while retaining relevant information for downstream tasks.", "primary_area": "applications to neuroscience & cognitive science", "site": "https://iclr.cc/virtual/2024/poster/18694"} +{"video_file": "PsDFgTosqb_39018905.mp4", "openreview_id": "PsDFgTosqb", "slideslive_id": 39018905, "venue": "iclr2024", "title": "Learning to Solve Bilevel Programs with Binary Tender", "status": "Poster", "keywords": "Deep Learning;Bilevel Program;Binary Tender;Enhanced Sampling;Input Supermodular Neural Network", "tldr": "We develop a enhanced sampling method and a novel input supermodular neural network to solve bilevel programs with binary tender", "abstract": "Bilevel programs (BPs) find a wide range of applications in fields such as energy, transportation, and machine learning. As compared to BPs with continuous (linear/convex) optimization problems in both levels, the BPs with discrete decision variables have received much less attention, largely due to the ensuing computational intractability and the incapability of gradient-based algorithms for handling discrete optimization formulations. In this paper, we develop deep learning techniques to address this challenge. Specifically, we consider a BP with binary tender, wherein the upper and lower levels are linked via binary variables. We train a neural network to approximate the optimal value of the lower-level problem, as a function of the binary tender. Then, we obtain a single-level reformulation of the BP through a mixed-integer representation of the value function. Furthermore, we conduct a comparative analysis between two types of neural networks: general neural networks and the novel input supermodular neural networks, studying their representational capacities. To solve high-dimensional BPs, we introduce an enhanced sampling method to generate higher-quality samples and implement an iterative process to refine solutions. 
We demonstrate the performance of these approaches through extensive numerical experiments, whose lower-level problems are linear and mixed-integer programs, respectively.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18692"} +{"video_file": "PvJnX3dwsD_39017967.mp4", "openreview_id": "PvJnX3dwsD", "slideslive_id": 39017967, "venue": "iclr2024", "title": "Quadratic models for understanding catapult dynamics of neural networks", "status": "Poster", "keywords": "quadratic models;wide neural networks;catapult phase;optimization dynamics", "tldr": "Quadratic models capture properties of wide neural networks in both optimization and generalization.", "abstract": "While neural networks can be approximated by linear models as their width increases, certain properties of wide neural networks cannot be captured by linear models. In this work we show that recently proposed Neural Quadratic Models can exhibit the \"catapult phase\" Lewkowycz et al. (2020) that arises when training such models with large learning rates. We then empirically show that the behaviour of quadratic models parallels that of neural networks in generalization, especially in the catapult phase regime. Our analysis further demonstrates that quadratic models are an effective tool for analysis of neural networks.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18689"} +{"video_file": "PxoFut3dWW_39018855.mp4", "openreview_id": "PxoFut3dWW", "slideslive_id": 39018855, "venue": "iclr2024", "title": "A Simple and Effective Pruning Approach for Large Language Models", "status": "Poster", "keywords": "network pruning;sparsity;large language models;network architectures;outlier features", "tldr": "We propose a simple and effective method to prune LLMs by weights and activations.", "abstract": "As their size increases, Large Languages Models (LLMs) are natural candidates for network pruning methods: approaches that drop a subset of network weights while striving to preserve performance. Existing methods, however, require either retraining, which is rarely affordable for billion-scale LLMs, or solving a weight reconstruction problem reliant on second-order information, which may also be computationally expensive. In this paper, we introduce a novel, straightforward yet effective pruning method, termed Wanda (Pruning by Weights and activations), designed to induce sparsity in pretrained LLMs. Motivated by the recent observation of emergent large magnitude features in LLMs, our approach prunes weights with the smallest magnitudes multiplied by the corresponding input activations, on a per-output basis. Notably, Wanda requires no retraining or weight update, and the pruned LLM can be used as is. We conduct a thorough evaluation of our method Wanda on LLaMA and LLaMA-2 across various language benchmarks. 
Wanda significantly outperforms the established baseline of magnitude pruning and performs competitively against recent method involving intensive weight update.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18687"} +{"video_file": "Q1u25ahSuy_39017966.mp4", "openreview_id": "Q1u25ahSuy", "slideslive_id": 39017966, "venue": "iclr2024", "title": "SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression", "status": "Poster", "keywords": "quantization;sparsity;large language models", "tldr": "Almost-lossless 3-4 bit quantization for LLMs through a novel sparse-quantized representation.", "abstract": "Recent advances in large language model (LLM) pretraining have led to high-quality LLMs with impressive abilities. By compressing such LLMs via quantization to 3-4 bits per parameter, they can fit into memory-limited devices such as laptops and mobile phones, enabling personalized use. Quantizing models to 3-4 bits per parameter can lead to moderate to high accuracy losses, especially for smaller models (1-10B parameters), which are suitable for edge deployment. To address this accuracy issue, we introduce the Sparse-Quantized Representation (SpQR), a new compressed format and quantization technique that enables for the first time \\emph{near-lossless} compression of LLMs across model scales while reaching similar compression levels to previous methods. SpQR works by identifying and isolating \\emph{outlier weights}, which cause particularly large quantization errors, and storing them in higher precision while compressing all other weights to 3-4 bits, and achieves relative accuracy losses of less than\n1\nin perplexity for highly-accurate LLaMA and Falcon LLMs. This makes it possible to run a 33B parameter LLM on a single 24 GB consumer GPU without performance degradation at 15% speedup, thus making powerful LLMs available to consumers without any downsides. SpQR comes with efficient algorithms for both encoding weights into its format, as well as decoding them efficiently at runtime. Specifically, we provide an efficient GPU inference algorithm for SpQR, which yields faster inference than 16-bit baselines at similar accuracy while enabling memory compression gains of more than 4x.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18686"} +{"video_file": "Q3YaCghZNt_39017964.mp4", "openreview_id": "Q3YaCghZNt", "slideslive_id": 39017964, "venue": "iclr2024", "title": "Lemur: Integrating Large Language Models in Automated Program Verification", "status": "Poster", "keywords": "Large Language Models;Formal verification", "tldr": "We present a general methodology for combining LLMs and formal verifiers for automated program verification.", "abstract": "The demonstrated code-understanding capability of LLMs raises the question of whether they can be used for automated program verification, a task that demands high-level abstract reasoning about program properties that is challenging for verification tools. We propose a general methodology to combine the power of LLMs and automated reasoners for automated program verification. We formally describe this methodology as a set of derivation rules and prove its soundness. 
We instantiate the calculus as a sound automated verification procedure, which led to practical improvements on a set of synthetic and competition benchmarks.", "primary_area": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18684"} +{"video_file": "QHROe7Mfcb_39017962.mp4", "openreview_id": "QHROe7Mfcb", "slideslive_id": 39017962, "venue": "iclr2024", "title": "Less is More: One-shot Subgraph Reasoning on Large-scale Knowledge Graphs", "status": "Poster", "keywords": "knowledge graph reasoning;graph sampling", "tldr": "We propose the one-shot subgraph reasoning to achieve efficient as well as adaptive reasoning on knowledge graphs", "abstract": "To deduce new facts on a knowledge graph (KG), a link predictor learns from the graph structure and collects local evidence to find the answer to a given query. However, existing methods suffer from a severe scalability problem due to the utilization of the whole KG for prediction, which hinders their promise on large scale KGs and cannot be directly addressed by vanilla sampling methods. In this work, we propose the one-shot-subgraph link prediction to achieve efficient and adaptive prediction. The design principle is that, instead of directly acting on the whole KG, the prediction procedure is decoupled into two steps, i.e., (i) extracting only one subgraph according to the query and (ii) predicting on this single, query dependent subgraph. We reveal that the non-parametric and computation-efficient heuristics Personalized PageRank (PPR) can effectively identify the potential answers and supporting evidence. With efficient subgraph-based prediction, we further introduce the automated searching of the optimal configurations in both data and model spaces. Empirically, we achieve promoted efficiency and leading performances on five large-scale benchmarks. The code is publicly available at: https://github.com/tmlr-group/one-shot-subgraph.", "primary_area": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18681"} +{"video_file": "QJGj07PD9C_39017961.mp4", "openreview_id": "QJGj07PD9C", "slideslive_id": 39017961, "venue": "iclr2024", "title": "Guaranteed Approximation Bounds for Mixed-Precision Neural Operators", "status": "Poster", "keywords": "neural operators", "tldr": "We theoretically and empirically demonstrate the power of a new mixed-precision training pipeline for neural operators.", "abstract": "Neural operators, such as Fourier Neural Operators (FNO), form a principled approach for learning solution operators for partial differential equations (PDE) and other mappings between function spaces. However, many real-world problems require high-resolution training data, and the training time and limited GPU memory pose big barriers. One solution is to train neural operators in mixed precision to reduce the memory requirement and increase training speed. However, existing mixed-precision training techniques are designed for standard neural networks, and we find that their direct application to FNO leads to numerical overflow and poor memory efficiency. Further, at first glance, it may appear that mixed precision in FNO will lead to drastic accuracy degradation since reducing the precision of the Fourier transform yields poor results in classical numerical solvers. 
We show that this is not the case; in fact, we prove that reducing the precision in FNO still guarantees a good approximation bound, when done in a targeted manner. Specifically, we build on the intuition that neural operator learning inherently induces an approximation error, arising from discretizing the infinite-dimensional ground-truth input function, implying that training in full precision is not needed. We formalize this intuition by rigorously characterizing the approximation and precision errors of FNO and bounding these errors for general input functions. We prove that the precision error is asymptotically comparable to the approximation error. Based on this, we design a simple method to optimize the memory-intensive half-precision tensor contractions by greedily finding the optimal contraction order. Through extensive experiments on different state-of-the-art neural operators, datasets, and GPUs, we demonstrate that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18680"} +{"video_file": "QrEHs9w5UF_39019248.mp4", "openreview_id": "QrEHs9w5UF", "slideslive_id": 39019248, "venue": "iclr2024", "title": "PRIME: Prioritizing Interpretability in Failure Mode Extraction", "status": "Poster", "keywords": "interpretability;failure modes;bias", "tldr": "We propose a new method to detect and explain failure mode of trained models in human-understandable terms.", "abstract": "In this work, we study the challenge of providing human-understandable descriptions for failure modes in trained image classification models. Existing works address this problem by first identifying clusters (or directions) of incorrectly classified samples in a latent space and then aiming to provide human-understandable text descriptions for them. We observe that in some cases, describing text does not match well with identified failure modes, partially owing to the fact that shared interpretable attributes of failure modes may not be captured using clustering in the feature space. To improve on these shortcomings, we propose a novel approach that prioritizes interpretability in this problem: we start by obtaining human-understandable concepts (tags) of images in the dataset and then analyze the model's behavior based on the presence or absence of combinations of these tags. Our method also ensures that the tags describing a failure mode form a minimal set, avoiding redundant and noisy descriptions. Through several experiments on different datasets, we show that our method successfully identifies failure modes and generates high-quality text descriptions associated with them. 
These results highlight the importance of prioritizing interpretability in understanding model failures.", "primary_area": "visualization or interpretation of learned representations", "site": "https://iclr.cc/virtual/2024/poster/18664"} +{"video_file": "QuIiLSktO4_39017950.mp4", "openreview_id": "QuIiLSktO4", "slideslive_id": 39017950, "venue": "iclr2024", "title": "Algorithms for Caching and MTS with reduced number of predictions", "status": "Poster", "keywords": "ML-Augmented Algorithms;Caching;Metrical Task Systems", "tldr": "We present 1-consistent, smooth, and robust algorithm for caching and consistent, smooth and robust algorithm for MTS with limited access to predictor", "abstract": "ML-augmented algorithms utilize predictions to achieve performance beyond their worst-case bounds. Producing these predictions might be a costly operation \u2013 this motivated Im et al. [2022] to introduce the study of algorithms which use predictions parsimoniously. We design parsimonious algorithms for caching and MTS with action predictions, proposed by Antoniadis et al. [2023], focusing on the parameters of consistency (performance with perfect predictions) and smoothness (dependence of their performance on prediction error). Our algorithm for caching is 1-consistent, robust, and its smoothness deteriorates with decreasing number of available predictions. We propose an algorithm for general MTS whose consistency and smoothness both scale linearly with the decreasing number of predictions. Without restriction on the number of available predictions, both algorithms match the earlier guarantees achieved by Antoniadis et al. [2023].", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/18663"} +{"video_file": "QxItoEAVMb_39017178.mp4", "openreview_id": "QxItoEAVMb", "slideslive_id": 39017178, "venue": "iclr2024", "title": "TorchRL: A data-driven decision-making library for PyTorch", "status": "Spotlight", "keywords": "Reinforcement Learning;pytorch;control;robotics", "tldr": "We present TorchRL, a new generalistic RL and control library for PyTorch. TorchRL offers a modular, lightweight, and agnostic tool for training reinforcement learning agents and other decision-making paradigms.", "abstract": "PyTorch has ascended as a premier machine learning framework, yet it lacks a native and comprehensive library for decision and control tasks suitable for large development teams dealing with complex real-world data and environments. To address this issue, we propose TorchRL, a generalistic control library for PyTorch that provides well-integrated, yet standalone components. We introduce a new and flexible PyTorch primitive, the TensorDict, which facilitates streamlined algorithm development across the many branches of Reinforcement Learning (RL) and control. We provide a detailed description of the building blocks and an extensive overview of the library across domains and tasks. Finally, we experimentally demonstrate its reliability and flexibility, and show comparative benchmarks to demonstrate its computational efficiency. TorchRL fosters long-term support and is publicly available on GitHub for greater reproducibility and collaboration within the research community. 
The code is open-sourced on GitHub.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18660"} +{"video_file": "R3Tf7LDdX4_39017945.mp4", "openreview_id": "R3Tf7LDdX4", "slideslive_id": 39017945, "venue": "iclr2024", "title": "Memory-Consistent Neural Networks for Imitation Learning", "status": "Poster", "keywords": "Imitation Learning;Behavior Cloning;Deep Learning", "tldr": "We develop a method to interpolate between nearest neighbours and neural networks for controlling the sub-optimality gap and improving performance in imitation learning.", "abstract": "Imitation learning considerably simplifies policy synthesis compared to alternative approaches by exploiting access to expert demonstrations. For such imitation policies, errors away from the training samples are particularly critical. Even rare slip-ups in the policy action outputs can compound quickly over time, since they lead to unfamiliar future states where the policy is still more likely to err, eventually causing task failures. We revisit simple supervised \"behavior cloning\" for conveniently training the policy from nothing more than pre-recorded demonstrations, but carefully design the model class to counter the compounding error phenomenon. Our \"memory-consistent neural network\" (MCNN) outputs are hard-constrained to stay within clearly specified permissible regions anchored to prototypical \"memory\" training samples. We provide a guaranteed upper bound for the sub-optimality gap induced by MCNN policies. Using MCNNs on 10 imitation learning tasks, with MLP, Transformer, and Diffusion backbones, spanning dexterous robotic manipulation and driving, proprioceptive inputs and visual inputs, and varying sizes and types of demonstration data, we find large and consistent gains in performance, validating that MCNNs are better-suited than vanilla deep neural networks for imitation learning applications. Website: https://sites.google.com/view/mcnn-imitation", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18656"} +{"video_file": "RIEW6M9YoV_39018845.mp4", "openreview_id": "RIEW6M9YoV", "slideslive_id": 39018845, "venue": "iclr2024", "title": "Graph Generation with $K^2$-trees", "status": "Poster", "keywords": "Graph generative models;graph neural networks", "tldr": "We propose a new graph generative model based on the \nK\n2\n-tree, which is a compact and hierarchical representation for graphs.", "abstract": "Generating graphs from a target distribution is a significant challenge across many domains, including drug discovery and social network analysis. In this work, we introduce a novel graph generation method leveraging\nK\n2\nrepresentation, originally designed for lossless graph compression. The\nK\n2\nrepresentation enables compact generation while concurrently capturing an inherent hierarchical structure of a graph. In addition, we make contributions by (1) presenting a sequential\nK\n2\nrepresentation that incorporates pruning, flattening, and tokenization processes and (2) introducing a Transformer-based architecture designed to generate the sequence by incorporating a specialized tree positional encoding scheme. 
Finally, we extensively evaluate our algorithm on four general and two molecular graph datasets to confirm its superiority for graph generation.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18652"} +{"video_file": "RIuevDSK5V_39018844.mp4", "openreview_id": "RIuevDSK5V", "slideslive_id": 39018844, "venue": "iclr2024", "title": "ConR: Contrastive Regularizer for Deep Imbalanced Regression", "status": "Poster", "keywords": "Deep imbalanced regression;Contrastive learning;Representation learning", "tldr": "We proposed a contrastive regularizer to address feature collapse in deep imbalanced regression.", "abstract": "Imbalanced distributions are ubiquitous in real-world data. They create constraints on Deep Neural Networks to represent the minority labels and avoid bias towards majority labels. The extensive body of imbalanced approaches address categorical label spaces but fail to effectively extend to regression problems where the label space is continuous. Local and global correlations among continuous labels provide valuable insights towards effectively modelling relationships in feature space. In this work, we propose ConR, a contrastive regularizer that models global and local label similarities in feature space and prevents the features of minority samples from being collapsed into their majority neighbours. ConR discerns the disagreements between the label space and feature space, and imposes a penalty on these disagreements. ConR minds the continuous nature of label space with two main strategies in a contrastive manner: incorrect proximities are penalized proportionate to the label similarities and the correct ones are encouraged to model local similarities. ConR consolidates essential considerations into a generic, easy-to-integrate, and efficient method that effectively addresses deep imbalanced regression. Moreover, ConR is orthogonal to existing approaches and smoothly extends to uni- and multi-dimensional label spaces. Our comprehensive experiments show that ConR significantly boosts the performance of all the state-of-the-art methods on four large-scale deep imbalanced regression benchmarks.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18649"} +{"video_file": "RJDjSXNuAZ_39019128.mp4", "openreview_id": "RJDjSXNuAZ", "slideslive_id": 39019128, "venue": "iclr2024", "title": "Weakly Supervised Virus Capsid Detection with Image-Level Annotations in Electron Microscopy Images", "status": "Poster", "keywords": "Weakly Supervised Object Detection;Limited Annotation Time;Bounding Box Regression;Electron Microscopy", "tldr": "We propose an optimization strategy with shrinking receptive field to extract virus capsids directly by bounding box regression from image level annotations.", "abstract": "Current state-of-the-art methods for object detection rely on annotated bounding boxes of large data sets for training. However, obtaining such annotations is expensive and can require up to hundreds of hours of manual labor. This poses a challenge, especially since such annotations can only be provided by experts, as they require knowledge about the scientific domain. To tackle this challenge, we propose a domain-specific weakly supervised object detection algorithm that only relies on image-level annotations, which are significantly easier to acquire. 
Our method distills the knowledge of a pre-trained model, on the task of predicting the presence or absence of a virus in an image, to obtain a set of pseudo-labels that can be used to later train a state-of-the-art object detection model. To do so, we use an optimization approach with a shrinking receptive field to extract virus particles directly without specific network architectures. Through a set of extensive studies, we show how the proposed pseudo-labels are easier to obtain, and, more importantly, are able to outperform other existing weak labeling methods, and even ground truth labels, in cases where the time to obtain the annotation is limited.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18648"} +{"video_file": "RR8y0WKrFv_39018879.mp4", "openreview_id": "RR8y0WKrFv", "slideslive_id": 39018879, "venue": "iclr2024", "title": "Ensemble Distillation for Unsupervised Constituency Parsing", "status": "Poster", "keywords": "Constituency Parsing;Unsupervised Grammar Induction;Knowledge Distillation", "tldr": "The paper proposes an ensemble method and multi-teacher distillation approach for unsupervised constituency parsing, demonstrating robustness and effectiveness.", "abstract": "We investigate the unsupervised constituency parsing task, which organizes words and phrases of a sentence into a hierarchical structure without using linguistically annotated data. We observe that existing unsupervised parsers capture different aspects of parsing structures, which can be leveraged to enhance unsupervised parsing performance. To this end, we propose a notion of \"tree averaging,\" based on which we further propose a novel ensemble method for unsupervised parsing. To improve inference efficiency, we further distill the ensemble knowledge into a student model; such an ensemble-then-distill process is an effective approach to mitigate the over-smoothing problem existing in common multi-teacher distilling methods. Experiments show that our method surpasses all previous approaches, consistently demonstrating its effectiveness and robustness across various runs, with different ensemble components, and under domain-shift conditions.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18643"} +{"video_file": "RXFVcynVe1_39017937.mp4", "openreview_id": "RXFVcynVe1", "slideslive_id": 39017937, "venue": "iclr2024", "title": "Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning", "status": "Poster", "keywords": "large language models (LLM);feature learning;text attributed graphs (TAG);graph neural networks (GNN)", "tldr": "We propose the first framework that leverages LLMs to enhance representation learning on text-attributed graphs, achieving SOTA results on four benchmark datasets.", "abstract": "Representation learning on text-attributed graphs (TAGs) has become a critical research problem in recent years. A typical example of a TAG is a paper citation graph, where the text of each paper serves as node attributes. Initial graph neural network (GNN) pipelines handled these text attributes by transforming them into shallow or hand-crafted features, such as skip-gram or bag-of-words features. Recent efforts have focused on enhancing these pipelines with language models (LMs), which typically demand intricate designs and substantial computational resources. 
With the advent of powerful large language models (LLMs) such as GPT or Llama2, which demonstrate an ability to reason and to utilize general knowledge, there is a growing need for techniques which combine the textual modelling abilities of LLMs with the structural learning capabilities of GNNs. Hence, in this work, we focus on leveraging LLMs to capture textual information as features, which can be used to boost GNN performance on downstream tasks. A key innovation is our use of \\emph{explanations as features}: we prompt an LLM to perform zero-shot classification, request textual explanations for its decision-making process, and design an \\emph{LLM-to-LM interpreter} to translate these explanations into informative features for downstream GNNs. Our experiments demonstrate that our method achieves state-of-the-art results on well-established TAG datasets, including \\texttt{Cora}, \\texttt{PubMed}, \\texttt{ogbn-arxiv}, as well as our newly introduced dataset, \\texttt{tape-arxiv23}. Furthermore, our method significantly speeds up training, achieving a 2.88 times improvement over the closest baseline on \\texttt{ogbn-arxiv}. Lastly, we believe the versatility of the proposed method extends beyond TAGs and holds the potential to enhance other tasks involving graph-text data~\\footnote{Our codes and datasets are available at: \\url{https://github.com/XiaoxinHe/TAPE}}.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/18640"} +{"video_file": "RsztjXcvUf_39017932.mp4", "openreview_id": "RsztjXcvUf", "slideslive_id": 39017932, "venue": "iclr2024", "title": "A Primal-Dual Approach to Solving Variational Inequalities with General Constraints", "status": "Poster", "keywords": "Variational Inequaly;optimization;constraints;primal-dual;interior-point method;Monotone operator;last iterate convergence", "tldr": "Novel first-order methods for solving constrained variational inequalities with convergence guarantees on monotone variational inequalities.", "abstract": "Yang et al. (2023) recently showed how to use first-order gradient methods to solve general variational inequalities (VIs) under a limiting assumption that analytic solutions of specific subproblems are available. In this paper, we circumvent this assumption via a warm-starting technique where we solve subproblems approximately and initialize variables with the approximate solution found at the previous iteration. We prove the convergence of this method and show that the gap function of the last iterate of the method decreases at a rate of\nO\n(\n1\nK\n)\nwhen the operator is\nL\n-Lipschitz and monotone. In numerical experiments, we show that this technique can converge much faster than its exact counterpart. Furthermore, for the cases when the inequality constraints are simple, we introduce an alternative variant of ACVI and establish its convergence under the same conditions. 
Finally, we relax the smoothness assumptions in Yang et al., yielding, to our knowledge, the first convergence result for VIs with general constraints that does not rely on the assumption that the operator is\nL\n-Lipschitz.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18631"} +{"video_file": "RtAct1E2zS_39017931.mp4", "openreview_id": "RtAct1E2zS", "slideslive_id": 39017931, "venue": "iclr2024", "title": "On Error Propagation of Diffusion Models", "status": "Poster", "keywords": "Diffusion Models;Error Propagation;Theoretical Explanation;Regularization", "tldr": "We present a theoretical framework to explain why error propagation happens to diffusion models and a regularization to address this problem", "abstract": "Although diffusion models (DMs) have shown promising performances in a number of tasks (e.g., speech synthesis and image generation), they might suffer from error propagation because of their sequential structure. However, this is not certain because some sequential models, such as Conditional Random Field (CRF), are free from this problem. To address this issue, we develop a theoretical framework to mathematically formulate error propagation in the architecture of DMs, The framework contains three elements, including modular error, cumulative error, and propagation equation. The modular and cumulative errors are related by the equation, which interprets that DMs are indeed affected by error propagation. Our theoretical study also suggests that the cumulative error is closely related to the generation quality of DMs. Based on this finding, we apply the cumulative error as a regularization term to reduce error propagation. Because the term is computationally intractable, we derive its upper bound and design a bootstrap algorithm to efficiently estimate the bound for optimization. We have conducted extensive experiments on multiple image datasets, showing that our proposed regularization reduces error propagation, significantly improves vanilla DMs, and outperforms previous baselines.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18630"} +{"video_file": "RthOl4jHw5_39017929.mp4", "openreview_id": "RthOl4jHw5", "slideslive_id": 39017929, "venue": "iclr2024", "title": "Meta-Evolve: Continuous Robot Evolution for One-to-many Policy Transfer", "status": "Poster", "keywords": "policy transfer;transfer learning;imitation learning;reinforcement learning", "tldr": "A method for efficiently transferring an expert policy from one robot to multiple different robots", "abstract": "We investigate the problem of transferring an expert policy from a source robot to multiple different robots. To solve this problem, we propose a method named Meta-Evolve that uses continuous robot evolution to efficiently transfer the policy to each target robot through a set of tree-structured evolutionary robot sequences. The robot evolution tree allows the robot evolution paths to be shared, so our approach can significantly outperform naive one-to-one policy transfer. We present a heuristic approach to determine an optimized robot evolution tree. Experiments have shown that our method is able to improve the efficiency of one-to-three transfer of manipulation policy by up to 3.2\n\u00d7\nand one-to-six transfer of agile locomotion policy by 2.4\n\u00d7\nin terms of simulation cost over the baseline of launching multiple independent one-to-one policy transfers. 
Supplementary videos available at the project website: https://sites.google.com/view/meta-evolve.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18628"} +{"video_file": "RwI7ZEfR27_39019093.mp4", "openreview_id": "RwI7ZEfR27", "slideslive_id": 39019093, "venue": "iclr2024", "title": "BrainLM: A foundation model for brain activity recordings", "status": "Poster", "keywords": "foundation model;fMRI", "tldr": "Trained an masked autoencoder on the largest fMRI dataset", "abstract": "We introduce the Brain Language Model (BrainLM), a foundation model for brain activity dynamics trained on 6,700 hours of fMRI recordings. Utilizing self-supervised masked-prediction training, BrainLM demonstrates proficiency in both fine-tuning and zero-shot inference tasks. Fine-tuning allows for the accurate prediction of clinical variables like age, anxiety, and PTSD as well as forecasting of future brain states. Critically, the model generalizes well to entirely new external cohorts not seen during training. In zero-shot inference mode, BrainLM can identify intrinsic functional networks directly from raw fMRI data without any network-based supervision during training. The model also generates interpretable latent representations that reveal relationships between brain activity patterns and cognitive states. Overall, BrainLM offers a versatile and interpretable framework for elucidating the complex spatiotemporal dynamics of human brain activity. It serves as a powerful \"lens\" through which massive repositories of fMRI data can be analyzed in new ways, enabling more effective interpretation and utilization at scale. The work demonstrates the potential of foundation models to advance computational neuroscience research.", "primary_area": "applications to neuroscience & cognitive science", "site": "https://iclr.cc/virtual/2024/poster/18625"} +{"video_file": "SA19ijj44B_39017922.mp4", "openreview_id": "SA19ijj44B", "slideslive_id": 39017922, "venue": "iclr2024", "title": "A Study of Bayesian Neural Network Surrogates for Bayesian Optimization", "status": "Poster", "keywords": "Bayesian Optimization;Gaussian Processes;Bayesian Neural Networks", "tldr": "We conduct a study of Bayesian neural networks as surrogate models for Bayesian optimization.", "abstract": "Bayesian optimization is a highly efficient approach to optimizing objective functions which are expensive to query. These objectives are typically represented by Gaussian process (GP) surrogate models which are easy to optimize and support exact inference. While standard GP surrogates have been well-established in Bayesian optimization, Bayesian neural networks (BNNs) have recently become practical function approximators, with many benefits over standard GPs such as the ability to naturally handle non-stationarity and learn representations for high-dimensional data. In this paper, we study BNNs as alternatives to standard GP surrogates for optimization. We consider a variety of approximate inference procedures for finite-width BNNs, including high-quality Hamiltonian Monte Carlo, low-cost stochastic MCMC, and heuristics such as deep ensembles. We also consider infinite-width BNNs, linearized Laplace approximations, and partially stochastic models such as deep kernel learning. We evaluate this collection of surrogate models on diverse problems with varying dimensionality, number of objectives, non-stationarity, and discrete and continuous inputs. 
We find: (i) the ranking of methods is highly problem dependent, suggesting the need for tailored inductive biases; (ii) HMC is the most successful approximate inference procedure for fully stochastic BNNs; (iii) full stochasticity may be unnecessary as deep kernel learning is relatively competitive; (iv) deep ensembles perform relatively poorly; (v) infinite-width BNNs are particularly promising, especially in high dimensions.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18615"} +{"video_file": "SBj2Qdhgew_39017921.mp4", "openreview_id": "SBj2Qdhgew", "slideslive_id": 39017921, "venue": "iclr2024", "title": "Demystifying Local & Global Fairness Trade-offs in Federated Learning Using Partial Information Decomposition", "status": "Poster", "keywords": "Fairness;Federated Learning;Machine Learning;Information Theory", "tldr": "This work presents an information-theoretic perspective to group fairness trade-offs in federated learning (FL)", "abstract": "This work presents an information-theoretic perspective to group fairness trade-offs in federated learning (FL) with respect to sensitive attributes, such as gender, race, etc. Existing works often focus on either\nglobal fairness\n(overall disparity of the model across all clients) or\nlocal fairness\n(disparity of the model at each client), without always considering their trade-offs. There is a lack of understanding regarding the interplay between global and local fairness in FL, particularly under data heterogeneity, and if and when one implies the other. To address this gap, we leverage a body of work in information theory called partial information decomposition (PID), which first identifies three sources of unfairness in FL, namely,\nUnique Disparity\n,\nRedundant Disparity\n, and\nMasked Disparity\n. We demonstrate how these three disparities contribute to global and local fairness using canonical examples. This decomposition helps us derive fundamental limits on the trade-off between global and local fairness, highlighting where they agree or disagree. We introduce the\nAccuracy and Global-Local Fairness Optimality Problem\n(AGLFOP), a convex optimization that defines the theoretical limits of accuracy and fairness trade-offs, identifying the best possible performance any FL strategy can attain given a dataset and client distribution. We also present experimental results on synthetic datasets and the ADULT dataset to support our theoretical findings.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/18614"} +{"video_file": "SIZWiya7FE_39017097.mp4", "openreview_id": "SIZWiya7FE", "slideslive_id": 39017097, "venue": "iclr2024", "title": "Label-Agnostic Forgetting: A Supervision-Free Unlearning in Deep Models", "status": "Poster", "keywords": "Machine Unlearning;Unsupervised Learning;Deep Learning", "tldr": "A supervision-free framework for deep model unlearning via variational inference and contrastive loss", "abstract": "Machine unlearning aims to remove information derived from forgotten data while preserving that of the remaining dataset in a well-trained model. With the increasing emphasis on data privacy, several approaches to machine unlearning have emerged. However, these methods typically rely on complete supervision throughout the unlearning process. 
Unfortunately, obtaining such supervision, whether for the forgetting or remaining data, can be impractical due to the substantial cost associated with annotating real-world datasets. This challenge prompts us to propose a supervision-free unlearning approach that operates without the need for labels during the unlearning process. Specifically, we introduce a variational approach to approximate the distribution of representations for the remaining data. Leveraging this approximation, we adapt the original model to eliminate information from the forgotten data at the representation level. To further address the issue of lacking supervision information, which hinders alignment with ground truth, we introduce a contrastive loss to facilitate the matching of representations between the remaining data and those of the original model, thus preserving predictive performance. Experimental results across various unlearning tasks demonstrate the effectiveness of our proposed method, Label-Agnostic Forgetting (LAF) without using any labels, which achieves comparable performance to state-of-the-art methods that rely on full supervision information. Furthermore, our approach excels in semi-supervised scenarios, leveraging limited supervision information to outperform fully supervised baselines. This work not only showcases the viability of supervision-free unlearning in deep models but also opens up a new possibility for future research in unlearning at the representation level.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18610"} +{"video_file": "SLw9fp4yI6_39017916.mp4", "openreview_id": "SLw9fp4yI6", "slideslive_id": 39017916, "venue": "iclr2024", "title": "Controlled Text Generation via Language Model Arithmetic", "status": "Spotlight", "keywords": "Controlled text generation;LLM;Natural Language Processing", "tldr": "We provide a principled and intuitive way to combine multiple LLMs and bias them towards and away from attributes.", "abstract": "As Large Language Models (LLMs) are deployed more widely, customization with respect to vocabulary, style, and character becomes more important. In this work, we introduce model arithmetic, a novel inference framework for composing and biasing LLMs without the need for model (re)training or highly specific datasets. In addition, the framework allows for more precise control of generated text than direct prompting and prior controlled text generation (CTG) techniques. Using model arithmetic, we can express prior CTG techniques as simple formulas and naturally extend them to new and more effective formulations. Further, we show that speculative sampling, a technique for efficient LLM sampling, extends to our setting. This enables highly efficient text generation with multiple composed models with only marginal overhead over a single model. Our empirical evaluation demonstrates that model arithmetic allows fine-grained control of generated text while outperforming state-of-the-art on the task of toxicity reduction. 
We release an open source easy-to-use implementation of our framework at https://github.com/eth-sri/language-model-arithmetic.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18607"} +{"video_file": "SQpnEfv9WH_39017055.mp4", "openreview_id": "SQpnEfv9WH", "slideslive_id": 39017055, "venue": "iclr2024", "title": "Social-Transmotion: Promptable Human Trajectory Prediction", "status": "Poster", "keywords": "human trajectory prediction;robot navigation;autonomous driving;attention mechanism", "tldr": "We propose a generic Transformer-based model that integrates diverse visual cues as prompts, powered by masking technique to enhance human trajectory prediction.", "abstract": "Accurate human trajectory prediction is crucial for applications such as autonomous vehicles, robotics, and surveillance systems. Yet, existing models often fail to fully leverage the non-verbal social cues human subconsciously communicate when navigating the space. To address this, we introduce Social-Transmotion, a generic Transformer-based model that exploits diverse and numerous visual cues to predict human behavior. We translate the idea of a prompt from Natural Language Processing (NLP) to the task of human trajectory prediction, where a prompt can be a sequence of x-y coordinates on the ground, bounding boxes in the image plane, or body pose keypoints in either 2D or 3D. This, in turn, augments trajectory data, leading to enhanced human trajectory prediction. Using masking technique, our model exhibits flexibility and adaptability by capturing spatiotemporal interactions between agents based on the available visual cues. We delve into the merits of using 2D versus 3D poses, and a limited set of poses. Additionally, we investigate the spatial and temporal attention map to identify which keypoints and time-steps in the sequence are vital for optimizing human trajectory prediction. Our approach is validated on multiple datasets, including JTA, JRDB, Pedestrians and Cyclists in Road Traffic, and ETH-UCY. The code is publicly available: https://github.com/vita-epfl/social-transmotion.", "primary_area": "applications to robotics, autonomy, planning", "site": "https://iclr.cc/virtual/2024/poster/18604"} +{"video_file": "SQrHpTllXa_39018722.mp4", "openreview_id": "SQrHpTllXa", "slideslive_id": 39018722, "venue": "iclr2024", "title": "CABINET: Content Relevance-based Noise Reduction for Table Question Answering", "status": "Spotlight", "keywords": "Table Question Answering;Large Language Models;Noise Reduction;Unsupervised Relevance Scoring;Table Parsing;Relevant Cell Highlighting", "tldr": "A content relevance based noise reduction framework for table QA that weighs the table content based on its relevance to question without removing table content explicitly.", "abstract": "Table understanding capability of Large Language Models (LLMs) has been extensively studied through the task of question-answering (QA) over tables. Typically, only a small part of the whole table is relevant to derive the answer for a given question. The irrelevant parts act as noise and are distracting information, resulting in sub-optimal performance due to the vulnerability of LLMs to noise. To mitigate this, we propose CABINET (Content RelevAnce-Based NoIse ReductioN for TablE QuesTion-Answering) \u2013 a framework to enable LLMs to focus on relevant tabular data by suppressing extraneous information. 
CABINET comprises an Unsupervised Relevance Scorer (URS), trained differentially with the QA LLM, that weighs the table content based on its relevance to the input question before feeding it to the question answering LLM (QA LLM). To further aid the relevance scorer, CABINET employs a weakly supervised module that generates a parsing statement describing the criteria of rows and columns relevant to the question and highlights the content of corresponding table cells. CABINET significantly outperforms various tabular LLM baselines, as well as GPT3-based in-context learning methods, is more robust to noise, maintains outperformance on tables of varying sizes, and establishes new SoTA performance on WikiTQ, FeTaQA, and WikiSQL datasets. We release our code and datasets here.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18603"} +{"video_file": "SZzQz8ikwg_39017912.mp4", "openreview_id": "SZzQz8ikwg", "slideslive_id": 39017912, "venue": "iclr2024", "title": "Efficient local linearity regularization to overcome catastrophic overfitting", "status": "Poster", "keywords": "Fast Adversarial Training;Catastrophic Overfitting;Local Linearity", "tldr": "We propose a local-linearity based regularization term to efficiently avoid Catastrophic Overfitting in single-step Adversarial Training, even for large \n\u03f5\n and long training schedules.", "abstract": "Catastrophic overfitting (CO) in single-step adversarial training (AT) results in abrupt drops in the adversarial test accuracy (even down to\n0\n%). For models trained with multi-step AT, it has been observed that the loss function behaves locally linearly with respect to the input, this is however lost in single-step AT. To address CO in single-step AT, several methods have been proposed to enforce local linearity of the loss via regularization. However, these regularization terms considerably slow down training due to Double Backpropagation. Instead, in this work, we introduce a regularization term, called ELLE, to mitigate CO effectively and efficiently in classical AT evaluations, as well as some more difficult regimes, e.g., large adversarial perturbations and long training schedules. Our regularization term can be theoretically linked to curvature of the loss function and is computationally cheaper than previous methods by avoiding Double Backpropagation. Our thorough experimental validation demonstrates that our work does not suffer from CO, even in challenging settings where previous works suffer from it. We also notice that adapting our regularization parameter during training (ELLE-A) greatly improves the performance, specially in large\n\u03f5\nsetups. Our implementation is available in https://github.com/LIONS-EPFL/ELLE.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18598"} +{"video_file": "Sx7BIiPzys_39017901.mp4", "openreview_id": "Sx7BIiPzys", "slideslive_id": 39017901, "venue": "iclr2024", "title": "Variational Bayesian Last Layers", "status": "Spotlight", "keywords": "bayesian deep learning;variational methods;bayesian last layers;neural linear models", "tldr": "We introduce a deterministic variational formulation for training Bayesian last layer neural networks that improves accuracy and calibration for free.", "abstract": "We introduce a deterministic variational formulation for training Bayesian last layer neural networks. 
This yields a sampling-free, single-pass model and loss that effectively improves uncertainty estimation. Our variational Bayesian last layer (VBLL) can be trained and evaluated with only quadratic complexity in last layer width, and is thus (nearly) computationally free to add to standard architectures. We experimentally investigate VBLLs, and show that they improve predictive accuracy, calibration, and out of distribution detection over baselines across both regression and classification. Finally, we investigate combining VBLL layers with variational Bayesian feature learning, yielding a lower variance collapsed variational inference method for Bayesian neural networks.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18585"}
{"video_file": "TFKIfhvdmZ_39017899.mp4", "openreview_id": "TFKIfhvdmZ", "slideslive_id": 39017899, "venue": "iclr2024", "title": "Proximal Policy Gradient Arborescence for Quality Diversity Reinforcement Learning", "status": "Spotlight", "keywords": "Reinforcement Learning;Quality Diversity;Robotics;Machine Learning;Evolution Strategies", "tldr": "We present a novel QD-RL method that leverages on-policy RL and Differentiable Quality Diversity to discover a variety of high performing locomtion gaits on the challenging mujoco environment, including, for the first time in QD-RL, humanoid.", "abstract": "Training generally capable agents that thoroughly explore their environment and learn new and diverse skills is a long-term goal of robot learning. Quality Diversity Reinforcement Learning (QD-RL) is an emerging research area that blends the best aspects of both fields \u2013 Quality Diversity (QD) provides a principled form of exploration and produces collections of behaviorally diverse agents, while Reinforcement Learning (RL) provides a powerful performance improvement operator enabling generalization across tasks and dynamic environments. Existing QD-RL approaches have been constrained to sample efficient, deterministic off- policy RL algorithms and/or evolution strategies and struggle with highly stochastic environments. In this work, we, for the first time, adapt on-policy RL, specifically Proximal Policy Optimization (PPO), to the Differentiable Quality Diversity (DQD) framework and propose several changes that enable efficient optimization and discovery of novel skills on high-dimensional, stochastic robotics tasks. Our new algorithm, Proximal Policy Gradient Arborescence (PPGA), achieves state-of- the-art results, including a 4x improvement in best reward over baselines on the challenging humanoid domain.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18581"}
{"video_file": "THJEa8adBn_39018650.mp4", "openreview_id": "THJEa8adBn", "slideslive_id": 39018650, "venue": "iclr2024", "title": "Harnessing Density Ratios for Online Reinforcement Learning", "status": "Spotlight", "keywords": "reinforcement learning theory;online RL;offline RL;hybrid RL;density ratio;marginalized importance weight;weight function;general function approximation", "tldr": "The notion of density ratio modeling, an emerging topic in offline RL, has been largely absent from online RL. \nWe show a perhaps surprising result, that density ratio-based algorithms have online counterparts.", "abstract": "The theories of offline and online reinforcement learning, despite having evolved in parallel, have begun to show signs of the possibility for a unification, with algorithms and analysis techniques for one setting often having natural counterparts in the other. However, the notion of density ratio modeling, an emerging paradigm in offline RL, has been largely absent from online RL, perhaps for good reason: the very existence and boundedness of density ratios relies on access to an exploratory dataset with good coverage, but the core challenge in online RL is to collect such a dataset without having one to start.\nIn this work we show---perhaps surprisingly---that density ratio-based algorithms have online counterparts. Assuming only the existence of an exploratory distribution with good coverage, a structural condition known as coverability (Xie et al., 2023), we give a new algorithm (GLOW) that uses density ratio realizability and value function realizability to perform sample-efficient online exploration. GLOW addresses unbounded density ratios via careful use of truncation, and combines this with optimism to guide exploration. GLOW is computationally inefficient; we complement it with a more efficient counterpart, HyGLOW, for the Hybrid RL setting (Song et al., 2023) wherein online RL is augmented with additional offline data. HyGLOW is derived as a special case of a more general meta-algorithm that provides a provable black-box reduction from hybrid RL to offline RL, which may be of independent interest.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/18580"}
{"video_file": "THUBTfSAS2_39017898.mp4", "openreview_id": "THUBTfSAS2", "slideslive_id": 39017898, "venue": "iclr2024", "title": "Querying Easily Flip-flopped Samples for Deep Active Learning", "status": "Poster", "keywords": "active learning;uncertainty;closeness;disagree metric;diversity", "tldr": "The uncertainty-based active learning algorithm that queries samples easily flip-flopped by a small perturbation of the decision boundary.", "abstract": "Active learning, a paradigm within machine learning, aims to select and query unlabeled data to enhance model performance strategically. A crucial selection strategy leverages the model's predictive uncertainty, reflecting the informativeness of a data point. While the sample's distance to the decision boundary intuitively measures predictive uncertainty, its computation becomes intractable for complex decision boundaries formed in multiclass classification tasks. This paper introduces the least disagree metric (LDM), the smallest probability of predicted label disagreement. We propose an asymptotically consistent estimator for LDM under mild assumptions. The estimator boasts computational efficiency and straightforward implementation for deep learning models using parameter perturbation. 
The LDM-based active learning algorithm queries unlabeled data with the smallest LDM, achieving state-of-the-art overall performance across various datasets and deep architectures, as demonstrated by the experimental results.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18579"} +{"video_file": "TTrzgEZt9s_39017892.mp4", "openreview_id": "TTrzgEZt9s", "slideslive_id": 39017892, "venue": "iclr2024", "title": "Distributionally Robust Optimization with Bias and Variance Reduction", "status": "Spotlight", "keywords": "stochastic optimization;convex optimization;distributionally robust learning;spectral risk measures;incremental optimization", "tldr": "We propose a linearly convergent stochastic (a.k.a. incremental) algorithm that optimizes spectral risk measures, which are distributionally robust objectives that include the superquantile.", "abstract": "We consider the distributionally robust optimization (DRO) problem, wherein a learner optimizes the worst-case empirical risk achievable by reweighing the observed training examples. We present Prospect, a stochastic gradient-based algorithm that only requires tuning a single learning rate hyperparameter, and prove that it enjoys linear convergence for smooth regularized losses. This contrasts with previous algorithms that either require tuning multiple hyperparameters or potentially fail to converge due to biased gradient estimates or inadequate regularization. Empirically, we show that Prospect can converge 2-3x faster than baselines such as SGD and stochastic saddle-point methods on distribution shift and fairness benchmarks spanning tabular, vision, and language domains.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18571"} +{"video_file": "TVDUVpgu9s_39018666.mp4", "openreview_id": "TVDUVpgu9s", "slideslive_id": 39018666, "venue": "iclr2024", "title": "Zeroth-Order Optimization Meets Human Feedback: Provable Learning via Ranking Oracles", "status": "Poster", "keywords": "Learng from human feedback;zeroth-order optimization;Stable Diffusion;ranking and preferences", "tldr": "We invent the first zeroth-order algorithm for solving optimization problems with only ranking oracles of the objective function available.", "abstract": "In this study, we delve into an emerging optimization challenge involving a black-box objective function that can only be gauged via a ranking oracle\u2014a situation frequently encountered in real-world scenarios, especially when the function is evaluated by human judges. A prominent instance of such a situation is Reinforcement Learning with Human Feedback (RLHF), an approach recently employed to enhance the performance of Large Language Models (LLMs) using human guidance [Ouyang et al. 2022, Liu et al. 2023, OpenAI et al. 2022, Bai et al. 2022]. We introduce ZO-RankSGD, an innovative zeroth-order optimization algorithm designed to tackle this optimization problem, accompanied by theoretical assurances. Our algorithm utilizes a novel rank-based random estimator to determine the descent direction and guarantees convergence to a stationary point. Moreover, ZO-RankSGD is readily applicable to policy optimization problems in Reinforcement Learning (RL), particularly when only ranking oracles for the episode reward are available. 
Last but not least, we demonstrate the effectiveness of ZO-RankSGD in a novel application: improving the quality of images generated by a diffusion generative model with human ranking feedback. Throughout experiments, we found that ZO-RankSGD can significantly enhance the detail of generated images with only a few rounds of human feedback. Overall, our work advances the field of zeroth-order optimization by addressing the problem of optimizing functions with only ranking feedback, and offers a new and effective approach for aligning Artificial Intelligence (AI) with human intentions.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18570"} +{"video_file": "Tlsdsb6l9n_39019165.mp4", "openreview_id": "Tlsdsb6l9n", "slideslive_id": 39019165, "venue": "iclr2024", "title": "Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models", "status": "Poster", "keywords": "instruction dataset;large language models;biomolecular studies;molecule;protein", "tldr": "A large-scale biomolecular instruction dataset for large language models.", "abstract": "Large Language Models (LLMs), with their remarkable task-handling capabilities and innovative outputs, have catalyzed significant advancements across a spectrum of fields. However, their proficiency within specialized domains such as biomolecular studies remains limited. To address this challenge, we introduce Mol-Instructions, a comprehensive instruction dataset designed for the biomolecular domain. Mol-Instructions encompasses three key components: molecule-oriented instructions, protein-oriented instructions, and biomolecular text instructions. Each component aims to improve the understanding and prediction capabilities of LLMs concerning biomolecular features and behaviors. Through extensive instruction tuning experiments on LLMs, we demonstrate the effectiveness of Mol-Instructions in enhancing large models' performance in the intricate realm of biomolecular studies, thus fostering progress in the biomolecular research community. Mol-Instructions is publicly available for ongoing research and will undergo regular updates to enhance its applicability (https://github.com/zjunlp/Mol-Instructions).", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/18554"} +{"video_file": "Tr0lPx9woF_39017877.mp4", "openreview_id": "Tr0lPx9woF", "slideslive_id": 39017877, "venue": "iclr2024", "title": "Plug-and-Play: An Efficient Post-training Pruning Method for Large Language Models", "status": "Poster", "keywords": "Post-Training Pruning;Combinatorial Optimization;Large Language Models;Inference Acceleration", "tldr": "By integrating Relative Importance and Activations and Channel Permutation, we present a plug-and-play solution for post-training pruning of LLMs, which accelerates the inference speed of LLMs without performance degradation.", "abstract": "With the rapid growth of large language models (LLMs), there is increasing demand for memory and computation in LLMs. Recent efforts on post-training pruning of LLMs aim to reduce the model size and computation requirements, yet the performance is still sub-optimal. In this paper, we present a plug-and-play solution for post-training pruning of LLMs. 
The proposed solution has two innovative components: 1) Relative Importance and Activations (RIA), a new pruning metric that jointly considers the weight and activations efficiently on LLMs, and 2) Channel Permutation, a new approach to maximally preserves important weights under N:M sparsity. The two proposed components can be readily combined to further enhance the N:M semi-structured pruning of LLMs. Our empirical experiments show that RIA alone can already surpass all existing post-training pruning methods on prevalent LLMs, e.g., LLaMA ranging from 7B to 65B. Furthermore, N:M semi-structured pruning with channel permutation can even outperform the original LLaMA2-70B on zero-shot tasks, together with practical speed-up on specific hardware. Our code is available at: https://github.com/biomedical-cybernetics/Relative-importance-and-activation-pruning", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18549"} +{"video_file": "Ts95eXsPBc_39017876.mp4", "openreview_id": "Ts95eXsPBc", "slideslive_id": 39017876, "venue": "iclr2024", "title": "Spatially-Aware Transformers for Embodied Agents", "status": "Spotlight", "keywords": "Episodic Memory;Spatial Inference;Prediction;Generation;Reinforcement Learning", "tldr": "We propose a transformer-based episodic memory model, the Spatially-Aware Episodic Transformer, that incorporates both temporal and spatial dimensions to improve memory utilization and downstream task accuracy.", "abstract": "Episodic memory plays a crucial role in various cognitive processes, such as the ability to mentally recall past events. While cognitive science emphasizes the significance of spatial context in the formation and retrieval of episodic memory, the current primary approach to implementing episodic memory in AI systems is through transformers that store temporally ordered experiences, which overlooks the spatial dimension. As a result, it is unclear how the underlying structure could be extended to incorporate the spatial axis beyond temporal order alone and thereby what benefits can be obtained. To address this, this paper explores the use of Spatially-Aware Transformer models that incorporate spatial information. These models enable the creation of place-centric episodic memory that considers both temporal and spatial dimensions. Adopting this approach, we demonstrate that memory utilization efficiency can be improved, leading to enhanced accuracy in various place-centric downstream tasks. Additionally, we propose the Adaptive Memory Allocator, a memory management method based on reinforcement learning that aims to optimize efficiency of memory utilization. Our experiments demonstrate the advantages of our proposed model in various environments and across multiple downstream tasks, including prediction, generation, reasoning, and reinforcement learning. 
The source code for our models and experiments will be available at \\href{https://github.com/spatially_aware_transformer}{https://github.com/spatially_aware_transformer}.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18546"} +{"video_file": "TskzCtpMEO_39017875.mp4", "openreview_id": "TskzCtpMEO", "slideslive_id": 39017875, "venue": "iclr2024", "title": "Training Bayesian Neural Networks with Sparse Subspace Variational Inference", "status": "Poster", "keywords": "Bayesian neural networks;sparse Bayesian learning;variational inference", "tldr": "We propose the first fully sparse Bayesian training framework that achieves state-of-the-art performance in the realm of sparse Bayesian neural networks.", "abstract": "Bayesian neural networks (BNNs) offer uncertainty quantification but come with the downside of substantially increased training and inference costs. Sparse BNNs have been investigated for efficient inference, typically by either slowly introducing sparsity throughout the training or by post-training compression of dense BNNs. The dilemma of how to cut down massive training costs remains, particularly given the requirement to learn about the uncertainty. To solve this challenge, we introduce Sparse Subspace Variational Inference (SSVI), the first fully sparse BNN framework that maintains a consistently sparse Bayesian model throughout the training and inference phases. Starting from a randomly initialized low-dimensional sparse subspace, our approach alternately optimizes the sparse subspace basis selection and its associated parameters. While basis selection is characterized as a non-differentiable problem, we approximate the optimal solution with a removal-and-addition strategy, guided by novel criteria based on weight distribution statistics. Our extensive experiments show that SSVI sets new benchmarks in crafting sparse BNNs, achieving, for instance, a 10-20\u00d7 compression in model size with under 3% performance drop, and up to 20\u00d7 FLOPs reduction during training. Remarkably, SSVI also demonstrates enhanced robustness to hyperparameters, reducing the need for intricate tuning in VI and occasionally even surpassing VI-trained dense BNNs.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18545"} +{"video_file": "Tvwf4Vsi5F_39017873.mp4", "openreview_id": "Tvwf4Vsi5F", "slideslive_id": 39017873, "venue": "iclr2024", "title": "PubDef: Defending Against Transfer Attacks From Public Models", "status": "Poster", "keywords": "adversarial robustness;adversarial examples;transfer attack;security", "tldr": "We propose a new practical threat model, transfer attacks from public models (TAPM), and build a defense that provides higher robustness than adversarial training with almost no drop in the clean accuracy compared to undefended models.", "abstract": "Adversarial attacks have been a looming and unaddressed threat in the industry. However, through a decade-long history of the robustness evaluation literature, we have learned that mounting a strong or optimal attack is challenging. It requires both machine learning and domain expertise. In other words, the white-box threat model, religiously assumed by a large majority of the past literature, is unrealistic. In this paper, we propose a new practical threat model where the adversary relies on transfer attacks through publicly available surrogate models. 
We argue that this setting will become the most prevalent for security-sensitive applications in the future. We evaluate the transfer attacks in this setting and propose a specialized defense method based on a game-theoretic perspective. The defenses are evaluated under 24 public models and 11 attack algorithms across three datasets (CIFAR-10, CIFAR-100, and ImageNet). Under this threat model, our defense, PubDef, outperforms the state-of-the-art white-box adversarial training by a large margin with almost no loss in the normal accuracy. For instance, on ImageNet, our defense achieves 62% accuracy under the strongest transfer attack vs only 36% of the best adversarially trained model. Its accuracy when not under attack is only 2% lower than that of an undefended model (78% vs 80%). We release our code at https://github.com/wagner-group/pubdef.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/18543"} +{"video_file": "TyFrPOKYXw_39018993.mp4", "openreview_id": "TyFrPOKYXw", "slideslive_id": 39018993, "venue": "iclr2024", "title": "Safe RLHF: Safe Reinforcement Learning from Human Feedback", "status": "Spotlight", "keywords": "Safe Reinforcement Learning;Reinforcement Learning from Human Feedback;Large Language Model;AI Safety", "tldr": "Safe Reinforcement Learning from Human Feedback", "abstract": "With the development of large language models (LLMs), striking a balance between the performance and safety of AI systems has never been more critical. However, the inherent tension between the objectives of helpfulness and harmlessness presents a significant challenge during LLM training. To address this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly decouples human preferences regarding helpfulness and harmlessness, effectively avoiding the crowd workers' confusion about the tension and allowing us to train separate reward and cost models. We formalize the safety concern of LLMs as an optimization task of maximizing the reward function while satisfying specified cost constraints. Leveraging the Lagrangian method to solve this constrained problem, Safe RLHF dynamically adjusts the balance between the two objectives during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we demonstrate a superior ability to mitigate harmful responses while enhancing model performance compared to existing value-aligned algorithms. 
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with collected human preferences, significantly improving its helpfulness and harmlessness according to human evaluations.\nCode is available at https://github.com/PKU-Alignment/safe-rlhf.\nWarning: This paper contains example data that may be offensive or harmful.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18540"} +{"video_file": "TzoHLiGVMo_39018962.mp4", "openreview_id": "TzoHLiGVMo", "slideslive_id": 39018962, "venue": "iclr2024", "title": "ODEFormer: Symbolic Regression of Dynamical Systems with Transformers", "status": "Spotlight", "keywords": "symbolic regression;dynamical systems;differential equations;transformer", "tldr": "We introduce ODEFormer, a transformer model able of inferring dynamical systems in symbolic form from observational data with state-of-the-art performance.", "abstract": "We introduce ODEFormer, the first transformer able to infer multidimensional ordinary differential equation (ODE) systems in symbolic form from the observation of a single solution trajectory. We perform extensive evaluations on two datasets: (i) the existing \u2018Strogatz\u2019 dataset featuring two-dimensional systems; (ii) ODEBench, a collection of one- to four-dimensional systems that we carefully curated from the literature to provide a more holistic benchmark. ODEFormer consistently outperforms existing methods while displaying substantially improved robustness to noisy and irregularly sampled observations, as well as faster inference. We release our code, model and benchmark at https://github.com/sdascoli/odeformer.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18537"} +{"video_file": "UBVNwD3hPN_39019158.mp4", "openreview_id": "UBVNwD3hPN", "slideslive_id": 39019158, "venue": "iclr2024", "title": "CivRealm: A Learning and Reasoning Odyssey in Civilization for Decision-Making Agents", "status": "Spotlight", "keywords": "Interactive Environments;Benchmark;Reinforcement Learning;Language Agent;Multi-agent", "tldr": "We introduce an interactive environment benchmark grounded in the Civilization game for reinforcement learning (RL) and language agents.", "abstract": "The generalization of decision-making agents encompasses two fundamental elements: learning from past experiences and reasoning in novel contexts. However, the predominant emphasis in most interactive environments is on learning, often at the expense of complexity in reasoning. In this paper, we introduce CivRealm, an environment inspired by the Civilization game. Civilization\u2019s profound alignment with human society requires sophisticated learning and prior knowledge, while its ever-changing space and action space demand robust reasoning for generalization. Particularly, CivRealm sets up an imperfect-information general-sum game with a changing number of players; it presents a plethora of complex features, challenging the agent to deal with open-ended stochastic environments that require diplomacy and negotiation skills. Within CivRealm, we provide interfaces for two typical agent types: tensor-based agents that focus on learning, and language-based agents that emphasize reasoning. To catalyze further research, we present initial results for both paradigms. The canonical RL-based agents exhibit reasonable performance in mini-games, whereas both RL- and LLM-based agents struggle to make substantial progress in the full game. 
Overall, CivRealm stands as a unique learning and reasoning challenge for decision-making agents. The code is available at https://github.com/bigai-ai/civrealm.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/18531"} +{"video_file": "UCfz492fM8_39017114.mp4", "openreview_id": "UCfz492fM8", "slideslive_id": 39017114, "venue": "iclr2024", "title": "CrossLoco: Human Motion Driven Control of Legged Robots via Guided Unsupervised Reinforcement Learning", "status": "Poster", "keywords": "Human Motion Driven Control;Legged Locomotion;Unsupervised Reinforcement Learning", "tldr": "a guided unsupervised reinforcement learning framework that simultaneously learns robot skills and their correspondence to human motions.", "abstract": "Human motion driven control (HMDC) is an effective approach for generating natural and compelling robot motions while preserving high-level semantics. However, establishing the correspondence between humans and robots with different body structures is not straightforward due to the mismatches in kinematics and dynamics properties, which causes intrinsic ambiguity to the problem. Many previous algorithms approach this motion retargeting problem with unsupervised learning, which requires the prerequisite skill sets. However, it will be extremely costly to learn all the skills without understanding the given human motions, particularly for high-dimensional robots. In this work, we introduce CrossLoco, a guided unsupervised reinforcement learning framework that simultaneously learns robot skills and their correspondence to human motions. Our key innovation is to introduce a cycle-consistency-based reward term designed to maximize the mutual information between human motions and robot states. We demonstrate that the proposed framework can generate compelling robot motions by translating diverse human motions, such as running, hopping, and dancing. We quantitatively compare our CrossLoco against the manually engineered and unsupervised baseline algorithms along with the ablated versions of our framework and demonstrate that our method translates human motions with better accuracy, diversity, and user preference. We also showcase its utility in other applications, such as synthesizing robot movements from language input and enabling interactive robot control.", "primary_area": "applications to robotics, autonomy, planning", "site": "https://iclr.cc/virtual/2024/poster/18530"} +{"video_file": "UMfcdRIotC_39019029.mp4", "openreview_id": "UMfcdRIotC", "slideslive_id": 39019029, "venue": "iclr2024", "title": "Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals", "status": "Poster", "keywords": "NLP;LLMs;Interpretability;Explanation;Causal Inference;Matching", "tldr": "Addressing challenges of black-box NLP model interpretability through two model-agnostic approaches of counterfactual explanation: generation and efficient matching.", "abstract": "Causal explanations of the predictions of NLP systems are essential to ensure safety and establish trust. Yet, existing methods often fall short of explaining model predictions effectively or efficiently and are often model-specific. In this paper, we address model-agnostic explanations, proposing two approaches for counterfactual (CF) approximation. The first approach is CF generation, where a large language model (LLM) is prompted to change a specific text concept while keeping confounding concepts unchanged. 
While this approach is demonstrated to be very effective, applying LLM at inference-time is costly. We hence present a second approach based on matching, and propose a method that is guided by an LLM at training-time and learns a dedicated embedding space. This space is faithful to a given causal graph and effectively serves to identify matches that approximate CFs. After showing theoretically that approximating CFs is required in order to construct faithful explanations, we benchmark our approaches and explain several models, including LLMs with billions of parameters. Our empirical results demonstrate the excellent performance of CF generation models as model-agnostic explainers. Moreover, our matching approach, which requires far less test-time resources, also provides effective explanations, surpassing many baselines. We also find that Top-K techniques universally improve every tested method. Finally, we showcase the potential of LLMs in constructing new benchmarks for model explanation and subsequently validate our conclusions. Our work illuminates new pathways for efficient and accurate approaches to interpreting NLP systems.", "primary_area": "visualization or interpretation of learned representations", "site": "https://iclr.cc/virtual/2024/poster/18527"} +{"video_file": "UPvufoBAIs_39017866.mp4", "openreview_id": "UPvufoBAIs", "slideslive_id": 39017866, "venue": "iclr2024", "title": "Source-Free and Image-Only Unsupervised Domain Adaptation for Category Level Object Pose Estimation", "status": "Poster", "keywords": "3D Pose Estimation;Unsupervised Learning;Neural Rendering;Analysis-by-Synthesis", "tldr": "We propose the first method to do UDA for 3D pose estimation using only images from target domain by incremental and selective vertex feature updates of a source neural mesh model of a feature level render and compare model.", "abstract": "We consider the problem of source-free unsupervised category-level 3D pose estimation from only RGB images to an non-annotated and unlabelled target domain without any access to source domain data or annotations during adaptation. Collecting and annotating real world 3D data and corresponding images is laborious, expensive yet unavoidable process since even 3D pose domain adaptation methods require 3D data in the target domain. We introduce a method which is capable of adapting to a nuisance ridden target domain without any 3D data or annotations. We represent object categories as simple cuboid meshes, and harness a generative model of neural feature activations modeled as a von Mises Fisher distribution at each mesh vertex learnt using differential rendering. We focus on individual mesh vertex features and iteratively update them based on their proximity to corresponding features in the target domain. Our key insight stems from the observation that specific object subparts remain stable across out-of-domain (OOD) scenarios, enabling strategic utilization of these invariant subcomponents for effective model updates. Our model is then trained in an EM fashion alternating between updating the vertex features and feature extractor. We show that our method simulates fine-tuning on a global-pseudo labelled dataset under mild assumptions which converges to the target domain asymptotically. Through extensive empirical validation, we demonstrate the potency of our simple approach in addressing the domain shift challenge and significantly enhancing pose estimation accuracy. 
By accentuating robust and less changed object subcomponents, our framework contributes to the evolution of UDA techniques in the context of 3D pose estimation using only images from the target domain.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18526"} +{"video_file": "Unb5CVPtae_39019020.mp4", "openreview_id": "Unb5CVPtae", "slideslive_id": 39019020, "venue": "iclr2024", "title": "Time-LLM: Time Series Forecasting by Reprogramming Large Language Models", "status": "Poster", "keywords": "time series forecasting;large language models;model reprogramming", "tldr": "We present Time-LLM, a reprogramming framework to repurpose LLMs for general time series forecasting with the backbone language models kept intact.", "abstract": "Time series forecasting holds significant importance in many real-world dynamic systems and has been extensively studied. Unlike natural language process (NLP) and computer vision (CV), where a single large model can tackle multiple tasks, models for time series forecasting are often specialized, necessitating distinct designs for different tasks and applications. While pre-trained foundation models have made impressive strides in NLP and CV, their development in time series domains has been constrained by data sparsity. Recent studies have revealed that large language models (LLMs) possess robust pattern recognition and reasoning abilities over complex sequences of tokens. However, the challenge remains in effectively aligning the modalities of time series data and natural language to leverage these capabilities. In this work, we present Time-LLM, a reprogramming framework to repurpose LLMs for general time series forecasting with the backbone language models kept intact. We begin by reprogramming the input time series with text prototypes before feeding it into the frozen LLM to align the two modalities. To augment the LLM's ability to reason with time series data, we propose Prompt-as-Prefix (PaP), which enriches the input context and directs the transformation of reprogrammed input patches. The transformed time series patches from the LLM are finally projected to obtain the forecasts. Our comprehensive evaluations demonstrate that \\method is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models. Moreover, Time-LLM excels in both few-shot and zero-shot learning scenarios. The code is made available at https://github.com/KimMeen/Time-LLM.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18518"} +{"video_file": "Uw8xvFqVAE_39019125.mp4", "openreview_id": "Uw8xvFqVAE", "slideslive_id": 39019125, "venue": "iclr2024", "title": "A representation-learning game for classes of prediction tasks", "status": "Poster", "keywords": "representation learning;semi-supervised learning;dimensionality-reduction;regret;minimax solution;mixed strategies;multiplicative weights update", "tldr": "We derive optimal representations for classes of prediction tasks. We establish the theoretically optimal randomized representation in the linear-MSE setting, and propose an iterative algorithm for optimal representation in the general setting.", "abstract": "We propose a game-based formulation for learning dimensionality-reducing representations of feature vectors, when only a prior knowledge on future prediction tasks is available. 
In this game, the first player chooses a representation, and then the second player adversarially chooses a prediction task from a given class, representing the prior knowledge. The first player aims to minimize, and the second player to maximize, the regret: The minimal prediction loss using the representation, compared to the same loss using the original features. We consider the canonical setting in which the representation, the response to predict and the predictors are all linear functions, and the loss function is the mean squared error. We derive the theoretically optimal representation in pure strategies, which shows the effectiveness of the prior knowledge, and the optimal regret in mixed strategies, which shows the usefulness of randomizing the representation. For general representation, prediction and loss functions, we propose an efficient algorithm to optimize a randomized representation. The algorithm only requires the gradients of the loss function, and is based on incrementally adding a representation rule to a mixture of such rules.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18514"} +{"video_file": "V1GM9xDvIY_39018893.mp4", "openreview_id": "V1GM9xDvIY", "slideslive_id": 39018893, "venue": "iclr2024", "title": "Neural structure learning with stochastic differential equations", "status": "Poster", "keywords": "Structure Learning;Causal Discovery;Generative Model;Variational Inference;Differential Equation", "tldr": "We propose a novel structure learning method that leverages stochastic differential equations and variational inference to model continuous temporal process and infer posterior distributions over possible structures with theoretical guarantees.", "abstract": "Discovering the underlying relationships among variables from temporal observations has been a longstanding challenge in numerous scientific disciplines, including biology, finance, and climate science. The dynamics of such systems are often best described using continuous-time stochastic processes. Unfortunately, most existing structure learning approaches assume that the underlying process evolves in discrete-time and/or observations occur at regular time intervals. These mismatched assumptions can often lead to incorrect learned structures and models. In this work, we introduce a novel structure learning method, SCOTCH, which combines neural stochastic differential equations (SDE) with variational inference to infer a posterior distribution over possible structures. This continuous-time approach can naturally handle both learning from and predicting observations at arbitrary time points. Theoretically, we establish sufficient conditions for an SDE and SCOTCH to be structurally identifiable, and prove its consistency under infinite data limits. 
Empirically, we demonstrate that our approach leads to improved structure learning performance on both synthetic and real-world datasets compared to relevant baselines under regular and irregular sampling intervals.", "primary_area": "causal reasoning", "site": "https://iclr.cc/virtual/2024/poster/18511"} +{"video_file": "V5tdi14ple_39017858.mp4", "openreview_id": "V5tdi14ple", "slideslive_id": 39017858, "venue": "iclr2024", "title": "Don't Trust: Verify -- Grounding LLM Quantitative Reasoning with Autoformalization", "status": "Poster", "keywords": "mathematical reasoning;autoformalization;automated theorem proving;quantitative reasoning", "tldr": "We show that automatically formalizing and verifying LLM generated quantitative reasoning solutions consistently outperforms vanilla majority voting.", "abstract": "Large language models (LLM), such as Google's Minerva and OpenAI's GPT families, are becoming increasingly capable of solving mathematical quantitative reasoning problems. However, they still make unjustified logical and computational errors in their reasoning steps and answers. In this paper, we leverage the fact that if the training corpus of LLMs contained sufficiently many examples of formal mathematics (e.g. in Isabelle, a formal theorem proving environment), they can be prompted to translate i.e. autoformalize informal mathematical statements into formal Isabelle code --- which can be verified automatically for internal consistency. This provides a mechanism to automatically reject solutions whose formalized versions are inconsistent within themselves or with the formalized problem statement. We evaluate our method on GSM8K, MATH and MultiArith datasets and demonstrate that our approach provides a consistently better heuristic than vanilla majority voting --- the previously best method to identify correct answers, by more than 12% on GSM8K. In our experiments it improves results consistently across all datasets and LLM model sizes. The code can be found at https://github.com/jinpz/dtv.", "primary_area": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18508"} +{"video_file": "VTYg5ykEGS_39017856.mp4", "openreview_id": "VTYg5ykEGS", "slideslive_id": 39017856, "venue": "iclr2024", "title": "ImageNet-OOD: Deciphering Modern Out-of-Distribution Detection Algorithms", "status": "Poster", "keywords": "Out-of-distribution Detection;Failure Detection;Object Discovery;Novelty Detection;Robustness", "tldr": "We introduce a clean semantic shift detection dataset to demonstrate that modern out-of-distributions are overly sensitive to covariate shifts.", "abstract": "The task of out-of-distribution (OOD) detection is notoriously ill-defined. Earlier works focused on new-class detection, aiming to identify label-altering data distribution shifts, also known as \"semantic shift.\" However, recent works argue for a focus on failure detection, expanding the OOD evaluation framework to account for label-preserving data distribution shifts, also known as \"covariate shift.\u201d Intriguingly, under this new framework, complex OOD detectors that were previously considered state-of-the-art now perform similarly to, or even worse than the simple maximum softmax probability baseline. This raises the question: what are the latest OOD detectors actually detecting? Deciphering the behavior of OOD detection algorithms requires evaluation datasets that decouples semantic shift and covariate shift. 
To aid our investigations, we present ImageNet-OOD, a clean semantic shift dataset that minimizes the interference of covariate shift. Through comprehensive experiments, we show that OOD detectors are more sensitive to covariate shift than to semantic shift, and the benefits of recent OOD detection algorithms on semantic shift detection is minimal. Our dataset and analyses provide important insights for guiding the design of future OOD detectors.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/18504"} +{"video_file": "Vja3ecieXY_39019113.mp4", "openreview_id": "Vja3ecieXY", "slideslive_id": 39019113, "venue": "iclr2024", "title": "Towards Green AI in Fine-tuning Large Language Models via Adaptive Backpropagation", "status": "Poster", "keywords": "Green AI;Large Language Models;Fine-Tuning;Adaptive Backpropagation", "tldr": "This paper presents a new technique of minimizing the FLOPs of LLM fine-tuning with respect to the Green AI requirements, by selecting the best trainable portions of the model based on their backpropagation costs.", "abstract": "Fine-tuning is essential to adapting pre-trained large language models to downstream applications. With the increasing popularity of LLM-enabled applications, fine-tuning has been performed intensively worldwide, incurring a tremendous amount of computing costs that correspond to big carbon footprint and environmental impact. Mitigating such environmental impact directly correlates to reducing the fine-tuning FLOPs. Existing fine-tuning schemes focus on either saving memory or reducing the overhead of computing weight updates, but cannot achieve sufficient FLOPs reduction due to their ignorance of the training cost in backpropagation. To address this limitation, in this paper we present GreenTrainer, a new technique that minimizes the FLOPs of LLM fine-tuning via adaptive backpropagation, which adaptively selects the most appropriate set of LLM tensors for fine-tuning based on their importance and backpropagation cost in training. Experiment results show that GreenTrainer can save up to 64% training FLOPs compared to full fine-tuning, without any noticeable accuracy loss. Compared to the existing schemes such as Prefix Tuning and LoRA, GreenTrainer can achieve up to 4% improvement of model accuracy, with on-par FLOPs reduction.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18496"} +{"video_file": "VoLDkQ6yR3_39017848.mp4", "openreview_id": "VoLDkQ6yR3", "slideslive_id": 39017848, "venue": "iclr2024", "title": "Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation", "status": "Poster", "keywords": "Dataset Distillation;Reconstruction Attacks;Neural Tangent Kernel", "tldr": "We analyze parameter-based reconstruction attacks from an NTK perspective and show that it is a variant of dataset distillation", "abstract": "Modern deep learning requires large volumes of data, which could contain sensitive or private information that cannot be leaked. Recent work has shown for homogeneous neural networks a large portion of this training data could be reconstructed with only access to the trained network parameters. While the attack was shown to work empirically, there exists little formal understanding of its effective regime and which datapoints are susceptible to reconstruction. 
In this work, we first build a stronger version of the dataset reconstruction attack and show how it can provably recover the \\emph{entire training set} in the infinite width regime. We then empirically study the characteristics of this attack on two-layer networks and reveal that its success heavily depends on deviations from the frozen infinite-width Neural Tangent Kernel limit. Next, we study the nature of easily-reconstructed images. We show that both theoretically and empirically, reconstructed images tend to ``outliers'' in the dataset, and that these reconstruction attacks can be used for \\textit{dataset distillation}, that is, we can retrain on reconstructed images and obtain high predictive accuracy.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18493"} +{"video_file": "W2d3LZbhhI_39017846.mp4", "openreview_id": "W2d3LZbhhI", "slideslive_id": 39017846, "venue": "iclr2024", "title": "A Unified Sampling Framework for Solver Searching of Diffusion Probabilistic Models", "status": "Poster", "keywords": "Diffusion Probabilistic Model;Diffusion Sampler;Solver Schedule", "tldr": "We provide a unified sampling framework for diffusion model and search for solver schedules based on it.", "abstract": "Recent years have witnessed the rapid progress and broad application of diffusion probabilistic models (DPMs). Sampling from DPMs can be viewed as solving an ordinary differential equation (ODE). Despite the promising performance, the generation of DPMs usually consumes much time due to the large number of function evaluations (NFE). Though recent works have accelerated the sampling to around 20 steps with high-order solvers, the sample quality with less than 10 NFE can still be improved. In this paper, we propose a unified sampling framework (USF) to study the optional strategies for solver. Under this framework, we further reveal that taking different solving strategies at different timesteps may help further decrease the truncation error, and a carefully designed \\emph{solver schedule} has the potential to improve the sample quality by a large margin. Therefore, we propose a new sampling framework based on the exponential integral formulation that allows free choices of solver strategy at each step and design specific decisions for the framework. Moreover, we propose S^3, a predictor-based search method that automatically optimizes the solver schedule to get a better time-quality trade-off of sampling. We demonstrate that S^3 can find outstanding solver schedules which outperform the state-of-the-art sampling methods on CIFAR-10, CelebA, ImageNet-64, and LSUN-Bedroom datasets. Specifically, we achieve 2.69 FID with 9 NFE and 6.86 FID with 5 NFE on CIFAR-10 dataset, outperforming the SOTA method significantly.
We further apply S^3 to Stable-Diffusion model and get an acceleration ratio of 2\u00d7, showing the feasibility of sampling in very few steps without retraining of the neural network.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18489"} +{"video_file": "WNQjN5HzXt_39017842.mp4", "openreview_id": "WNQjN5HzXt", "slideslive_id": 39017842, "venue": "iclr2024", "title": "AUGCAL: Improving Sim2Real Adaptation by Uncertainty Calibration on Augmented Synthetic Images", "status": "Poster", "keywords": "Unsupervised Domain Adaptation;Sim2Real", "tldr": "A method to reduce miscalibration for unsupervised Sim2Real adaptation by optimizing for calibrated predictions on augmented synthetic data.", "abstract": "Synthetic data (Sim) drawn from simulators have emerged as a popular alternative for training models where acquiring annotated real-world images is difficult. However, transferring models trained on synthetic images to real-world applications can be challenging due to appearance disparities. A commonly employed solution to counter this Sim2Real gap is unsupervised domain adaptation, where models are trained using labeled Sim data and unlabeled Real data. Mispredictions made by such Sim2Real adapted models are often associated with miscalibration \u2013 stemming from overconfident predictions on real data. In this paper, we introduce AUGCAL, a simple training-time patch for unsupervised adaptation that improves Sim2Real adapted models by \u2013 (1) reducing overall miscalibration, (2) reducing overconfidence in incorrect predictions and (3) improving confidence score reliability by better guiding misclassification detection \u2013 all while retaining or improving Sim2Real performance. Given a base Sim2Real adaptation algorithm, at training time, AUGCAL involves replacing vanilla Sim images with strongly augmented views (AUG intervention) and additionally optimizing for a training time calibration loss on augmented Sim predictions (CAL intervention). We motivate AUGCAL using a brief analytical justification of how to reduce miscalibration on unlabeled REAL data. Through our experiments, we empirically show the efficacy of AUGCAL across multiple adaptation methods, backbones, tasks and shifts.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/18481"} +{"video_file": "WQYHbr36Fo_39018933.mp4", "openreview_id": "WQYHbr36Fo", "slideslive_id": 39018933, "venue": "iclr2024", "title": "Mind Your Augmentation: The Key to Decoupling Dense Self-Supervised Learning", "status": "Poster", "keywords": "Dense Self-supervised Learning;representation learning", "tldr": "we are providing novel and practical solution for enhancing Dense SSL, offering valuable insights into disentangled feature representation within the realm of self-supervised learning.", "abstract": "Dense Self-Supervised Learning (SSL) creates positive pairs by building positive paired regions or points, thereby aiming to preserve local features, for example of individual objects. However, existing approaches tend to couple objects by leaking information from the neighboring contextual regions when the pairs have a limited overlap. In this paper, we first quantitatively identify and confirm the existence of such a coupling phenomenon. We then address it by developing a remarkably simple yet highly effective solution comprising a novel augmentation method, Region Collaborative Cutout (RCC), and a corresponding decoupling branch.
Importantly, our design is versatile and can be seamlessly integrated into existing SSL frameworks, whether based on Convolutional Neural Networks (CNNs) or Vision Transformers (ViTs). We conduct extensive experiments, incorporating our solution into two CNN-based and two ViT-based methods, with results confirming the effectiveness of our approach. Moreover, we provide empirical evidence that our method significantly contributes to the disentanglement of feature representations among objects, both in quantitative and qualitative terms.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18476"} +{"video_file": "WipsLtH77t_39018744.mp4", "openreview_id": "WipsLtH77t", "slideslive_id": 39018744, "venue": "iclr2024", "title": "Adaptive Self-training Framework for Fine-grained Scene Graph Generation", "status": "Poster", "keywords": "Scene Graph Generation;Scene Understanding;Imbalanced Classification;Self-training;Long-tailed Problem", "tldr": "This work proposes a novel self-training method for scene graph generation that assigns pseudo-labels for unannotated triplets to enhance the scene representation, which is challenging to adopt due to the unique nature of scene graph generation.", "abstract": "Scene graph generation (SGG) models have suffered from inherent problems regarding the benchmark datasets such as the long-tailed predicate distribution and missing annotation problems. In this work, we aim to alleviate the long-tailed problem of SGG by utilizing unannotated triplets. To this end, we introduce a Self-Training framework for SGG (ST-SGG) that assigns pseudo-labels for unannotated triplets based on which the SGG models are trained. While there has been significant progress in self-training for image recognition, designing a self-training framework for the SGG task is more challenging due to its inherent nature such as the semantic ambiguity and the long-tailed distribution of predicate classes. Hence, we propose a novel pseudo-labeling technique for SGG, called Class-specific Adaptive Thresholding with Momentum (CATM), which is a model-agnostic framework that can be applied to any existing SGG models. Furthermore, we devise a graph structure learner (GSL) that is beneficial when adopting our proposed self-training framework to the state-of-the-art message-passing neural network (MPNN)-based SGG models. Our extensive experiments verify the effectiveness of ST-SGG on various SGG models, particularly in enhancing the performance on fine-grained predicate classes.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18465"} +{"video_file": "WjRPZsfeBO_39017834.mp4", "openreview_id": "WjRPZsfeBO", "slideslive_id": 39017834, "venue": "iclr2024", "title": "A Statistical Analysis of Wasserstein Autoencoders for Intrinsically Low-dimensional Data", "status": "Poster", "keywords": "Wasserstein Autoencoders;Statistical Analysis;Error rates;Intrinsic Dimension", "tldr": "We show that WAE's can achieve an excess risk that, as a function of the number of samples, depends only on the intrinsic data dimensions rather than the high dimensions of the ambient feature-space.", "abstract": "Variational Autoencoders (VAEs) have gained significant popularity among researchers as a powerful tool for understanding unknown distributions based on limited samples. 
This popularity stems partly from their impressive performance and partly from their ability to provide meaningful feature representations in the latent space. Wasserstein Autoencoders (WAEs), a variant of VAEs, aim to not only improve model efficiency but also interpretability. However, there has been limited focus on analyzing their statistical guarantees. The matter is further complicated by the fact that the data distributions to which WAEs are applied - such as natural images - are often presumed to possess an underlying low-dimensional structure within a high-dimensional feature space, which current theory does not adequately account for, rendering known bounds inefficient. To bridge the gap between the theory and practice of WAEs, in this paper, we show that WAEs can learn the data distributions when the network architectures are properly chosen. We show that the convergence rates of the expected excess risk in the number of samples for WAEs are independent of the high feature dimension, instead relying only on the intrinsic dimension of the data distribution.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/18464"} +{"video_file": "X6tNkN6ate_39017832.mp4", "openreview_id": "X6tNkN6ate", "slideslive_id": 39017832, "venue": "iclr2024", "title": "Interpretable Diffusion via Information Decomposition", "status": "Poster", "keywords": "Diffusion Models;Information Theory;Interpretable Machine Learning", "tldr": "The study explores how denoising diffusion models understand data, revealing ways to measure relationships between images and text, and introducing techniques for object localization and directed image modifications.", "abstract": "Denoising diffusion models enable conditional generation and density modeling of complex relationships like images and text. However, the nature of the learned relationships is opaque making it difficult to understand precisely what relationships between words and parts of an image are captured, or to predict the effect of an intervention. We illuminate the fine-grained relationships learned by diffusion models by noticing a precise relationship between diffusion and information decomposition. Exact expressions for mutual information and conditional mutual information can be written in terms of the denoising model. Furthermore, pointwise estimates can be easily estimated as well, allowing us to ask questions about the relationships between specific images and captions. Decomposing information even further to understand which variables in a high-dimensional space carry information is a long-standing problem. For diffusion models, we show that a natural non-negative decomposition of mutual information emerges, allowing us to quantify informative relationships between words and pixels in an image. We exploit these new relations to measure the compositional understanding of diffusion models, to do unsupervised localization of objects in images, and to measure effects when selectively editing images through prompt interventions.", "primary_area": "visualization or interpretation of learned representations", "site": "https://iclr.cc/virtual/2024/poster/18458"} +{"video_file": "XIaS66XkNA_39017829.mp4", "openreview_id": "XIaS66XkNA", "slideslive_id": 39017829, "venue": "iclr2024", "title": "Idempotent Generative Network", "status": "Poster", "keywords": "Generative model;idempotent;energy based models", "tldr": "A new generative model, based on projection onto the data manifold, s.t.
f(f(z))=f(z).", "abstract": "We propose a new approach for generative modeling based on training a neural network to be idempotent. An idempotent operator is one that can be applied sequentially without changing the result beyond the initial application, namely f(f(z))=f(z). The proposed model f is trained to map a source distribution (e.g, Gaussian noise) to a target distribution (e.g. realistic images) using the following objectives: (1) Instances from the target distribution should map to themselves, namely f(x)=x. We define the target manifold as the set of all instances that f maps to themselves. (2) Instances that form the source distribution should map onto the defined target manifold. This is achieved by optimizing the idempotence term, f(f(z))=f(z), which encourages the range of f(z) to be on the target manifold. Under ideal assumptions such a process provably converges to the target distribution. This strategy results in a model capable of generating an output in one step, maintaining a consistent latent space, while also allowing sequential applications for refinement. Additionally, we find that by processing inputs from both target and source distributions, the model adeptly projects corrupted or modified data back to the target manifold. This work is a first step towards a ``global projector'' that enables projecting any input into a target data distribution.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18455"} +{"video_file": "XTHfNGI3zT_39017825.mp4", "openreview_id": "XTHfNGI3zT", "slideslive_id": 39017825, "venue": "iclr2024", "title": "Quantifying the Plausibility of Context Reliance in Neural Machine Translation", "status": "Poster", "keywords": "explainable AI;interpretability;feature attribution;machine translation;document-level machine translation;natural language generation", "tldr": "We introduce PECoRe, an end-to-end interpretability framework to evaluate the plausibility of context usage in language models generations.", "abstract": "Establishing whether language models can use contextual information in a human-plausible way is important to ensure their safe adoption in real-world settings. However, the questions of when and which parts of the context affect model generations are typically tackled separately, and current plausibility evaluations are practically limited to a handful of artificial benchmarks. To address this, we introduce Plausibility Evaluation of Context Reliance (PECoRe), an end-to-end interpretability framework designed to quantify context usage in language models' generations. Our approach leverages model internals to (i) contrastively identify context-sensitive target tokens in generated texts and (ii) link them to contextual cues justifying their prediction. We use PECoRe to quantify the plausibility of context-aware machine translation models, comparing model rationales with human annotations across several discourse-level phenomena.
Finally, we apply our method to unannotated model translations to identify context-mediated predictions and highlight instances of (im)plausible context usage throughout generation.", "primary_area": "visualization or interpretation of learned representations", "site": "https://iclr.cc/virtual/2024/poster/18449"} +{"video_file": "Xz13DtbOVW_39018647.mp4", "openreview_id": "Xz13DtbOVW", "slideslive_id": 39018647, "venue": "iclr2024", "title": "Balancing Act: Constraining Disparate Impact in Sparse Models", "status": "Poster", "keywords": "deep learning;sparsity;disparate impact;constrained optimization;pruning;fairness", "tldr": "We propose a constrained optimization method that directly reduces the disparate impact of pruning on accuracy across data sub-groups.", "abstract": "Model pruning is a popular approach to enable the deployment of large deep learning models on edge devices with restricted computational or storage capacities. Although sparse models achieve performance comparable to that of their dense counterparts at the level of the entire dataset, they exhibit high accuracy drops for some data sub-groups. Existing methods to mitigate this disparate impact induced by pruning (i) rely on surrogate metrics that address the problem indirectly and have limited interpretability; or (ii) scale poorly with the number of protected sub-groups in terms of computational cost. We propose a constrained optimization approach that directly addresses the disparate impact of pruning: our formulation bounds the accuracy change between the dense and sparse models, for each sub-group. This choice of constraints provides an interpretable success criterion to determine if a pruned model achieves acceptable disparity levels. Experimental results demonstrate that our technique scales reliably to problems involving large models and hundreds of protected sub-groups.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18437"} +{"video_file": "Y3wpuxd7u9_39018995.mp4", "openreview_id": "Y3wpuxd7u9", "slideslive_id": 39018995, "venue": "iclr2024", "title": "GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction", "status": "Poster", "keywords": "Information Extraction;Zero-Shot;Annotation Guidelines;Large Language Models;LLM;prompt", "tldr": "We propose GoLLIE (Guideline-following Large Language Model for IE), a model able to improve zero-shot results on unseen IE tasks by virtue of being fine-tuned to comply with annotation guidelines.", "abstract": "Large Language Models (LLMs) combined with instruction tuning have made significant progress when generalizing to unseen tasks. However, they have been less successful in Information Extraction (IE), lagging behind task-specific models. Typically, IE tasks are characterized by complex annotation guidelines which describe the task and give examples to humans. Previous attempts to leverage such information have failed, even with the largest models, as they are not able to follow the guidelines out-of-the-box. In this paper we propose GoLLIE (Guideline-following Large Language Model for IE), a model able to improve zero-shot results on unseen IE tasks by virtue of being fine-tuned to comply with annotation guidelines. Comprehensive evaluation empirically demonstrates that GoLLIE is able to generalize to and follow unseen guidelines, outperforming previous attempts at zero-shot information extraction. The ablation study shows that detailed guidelines is key for good results. 
Code, data and models will be made publicly available.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18435"}
+{"video_file": "YCPDFfmkFr_39017817.mp4", "openreview_id": "YCPDFfmkFr", "slideslive_id": 39017817, "venue": "iclr2024", "title": "Leveraging augmented-Lagrangian techniques for differentiating over infeasible quadratic programs in machine learning", "status": "Spotlight", "keywords": "Machine Learning;Optimization;Differentiable Optimization;Optimization layers", "tldr": "We propose a unified approach to differentiate over the closest feasible quadratic programming (QP) solutions. We show it enables learning a wider range of QP layers with better performance for some classic learning tasks", "abstract": "Optimization layers within neural network architectures have become increasingly popular for their ability to solve a wide range of machine learning tasks and to model domain-specific knowledge. However, designing optimization layers requires careful consideration as the underlying optimization problems might be infeasible during training. Motivated by applications in learning, control and robotics, this work focuses on convex quadratic programming (QP) layers. The specific structure of this type of optimization layer can be efficiently exploited for faster computations while still allowing rich modeling capabilities. We leverage primal-dual augmented Lagrangian techniques for computing derivatives of both feasible and infeasible QP solutions. More precisely, we propose a unified approach which tackles the differentiability of the closest feasible QP solutions in a classical \u2113_2 sense. We then harness this approach to enrich the expressive capabilities of existing QP layers. More precisely, we show how differentiating through infeasible QPs during training enables a new range of QP layers to be driven towards feasibility at test time. These layers notably demonstrate superior predictive performance in some conventional learning tasks. Additionally, we present alternative formulations that enhance numerical robustness, speed, and accuracy for training such layers. Along with these contributions, we provide an open-source C++ software package called QPLayer for differentiating feasible and infeasible convex QPs and which can be interfaced with modern learning frameworks.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18433"}
+{"video_file": "YHUGlwTzFB_39017813.mp4", "openreview_id": "YHUGlwTzFB", "slideslive_id": 39017813, "venue": "iclr2024", "title": "Active Test-Time Adaptation: Theoretical Analyses and An Algorithm", "status": "Poster", "keywords": "Domain adaptation;Test-time adaptation;Distribution shift;Catastrophic forgetting", "tldr": "We propose ATTA, an innovative setting, standing as a cost-effective option for efficiency and effectiveness between Test-Time Adaptation and Active Domain Adaptation.", "abstract": "Test-time adaptation (TTA) addresses distribution shifts for streaming test data in unsupervised settings. Currently, most TTA methods can only deal with minor shifts and rely heavily on heuristic and empirical studies. To advance TTA under domain shifts, we propose the novel problem setting of active test-time adaptation (ATTA) that integrates active learning within the fully TTA setting. We provide a learning theory analysis, demonstrating that incorporating limited labeled test instances enhances overall performances across test domains with a theoretical guarantee. 
We also present a sample entropy balancing for implementing ATTA while avoiding catastrophic forgetting (CF). We introduce a simple yet effective ATTA algorithm, known as SimATTA, using real-time sample selection techniques. Extensive experimental results confirm consistency with our theoretical analyses and show that the proposed ATTA method yields substantial performance improvements over TTA methods while maintaining efficiency and shares similar effectiveness to the more demanding active domain adaptation (ADA) methods. Our code is available at https://github.com/divelab/ATTA.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/18428"} +{"video_file": "YbZxT0SON4_39017806.mp4", "openreview_id": "YbZxT0SON4", "slideslive_id": 39017806, "venue": "iclr2024", "title": "Improving Intrinsic Exploration by Creating Stationary Objectives", "status": "Poster", "keywords": "Reinforcement Learning;Exploration;Intrinsic Rewards;Stationarity", "tldr": "We propose a framework to transform intrinsic reward methods into stationary learning signals, which enables better policy learning across many challenging environments.", "abstract": "Exploration bonuses in reinforcement learning guide long-horizon exploration by defining custom intrinsic objectives. Count-based methods use the frequency of state visits to derive an exploration bonus. In this paper, we identify that any intrinsic reward function derived from count-based methods is non-stationary and hence induces a difficult objective to optimize for the agent. The key contribution of our work lies in transforming the original non-stationary rewards into stationary rewards through an augmented state representation. For this purpose, we introduce the Stationary Objectives For Exploration (SOFE) framework. SOFE requires identifying sufficient statistics for different exploration bonuses and finding an efficient encoding of these statistics to use as input to a deep network. SOFE is based on proposing state augmentations that expand the state space but hold the promise of simplifying the optimization of the agent's objective. Our experiments show that SOFE improves the agents' performance in challenging exploration problems, including sparse-reward tasks, pixel-based observations, 3D navigation, and procedurally generated environments.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18419"} +{"video_file": "YcW8i9VCf5_39017805.mp4", "openreview_id": "YcW8i9VCf5", "slideslive_id": 39017805, "venue": "iclr2024", "title": "Adversarial Causal Bayesian Optimization", "status": "Poster", "keywords": "causality;bayesian optimization", "tldr": "A causal Bayesian optimization algorithm for when other agents can also intervene on the system", "abstract": "In Causal Bayesian Optimization (CBO), an agent intervenes on a structural causal model with known graph but unknown mechanisms to maximize a downstream reward variable. In this paper, we consider the generalization where other agents or external events also intervene on the system, which is key for enabling adaptiveness to non-stationarities such as weather changes, market forces, or adversaries. We formalize this generalization of CBO as Adversarial Causal Bayesian Optimization (ACBO) and introduce the first algorithm for ACBO with bounded regret: Causal Bayesian Optimization with Multiplicative Weights (CBO-MW). 
Our approach combines a classical online learning strategy with causal modeling of the rewards. To achieve this, it computes optimistic counterfactual reward estimates by propagating uncertainty through the causal graph. We derive regret bounds for CBO-MW that naturally depend on graph-related quantities. We further propose a scalable implementation for the case of combinatorial interventions and submodular rewards. Empirically, CBO-MW outperforms non-causal and non-adversarial Bayesian optimization methods on synthetic environments and environments based on real-world data. Our experiments include a realistic demonstration of how CBO-MW can be used to learn users' demand patterns in a shared mobility system and reposition vehicles in strategic areas.", "primary_area": "causal reasoning", "site": "https://iclr.cc/virtual/2024/poster/18417"}
+{"video_file": "YrXHEb2qMb_39017078.mp4", "openreview_id": "YrXHEb2qMb", "slideslive_id": 39017078, "venue": "iclr2024", "title": "Posterior Sampling Based on Gradient Flows of the MMD with Negative Distance Kernel", "status": "Poster", "keywords": "Bayesian inverse Problems;MMD;Gradient Flows;Deep Learning", "tldr": "We establish a negative distance kernel MMD flow to the joint distribution, which allows for posterior sampling in Bayesian inverse problems.", "abstract": "We propose conditional flows of the maximum mean discrepancy (MMD) with the negative distance kernel for posterior sampling and conditional generative modelling. This MMD, which is also known as energy distance, has several advantageous properties like efficient computation via slicing and sorting. We approximate the joint distribution of the ground truth and the observations using discrete Wasserstein gradient flows and establish an error bound for the posterior distributions. Further, we prove that our particle flow is indeed a Wasserstein gradient flow of an appropriate functional. The power of our method is demonstrated by numerical examples including conditional image generation and inverse problems like superresolution, inpainting and computed tomography in low-dose and limited-angle settings.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18412"}
+{"video_file": "Z8UfDs4J46_39018623.mp4", "openreview_id": "Z8UfDs4J46", "slideslive_id": 39018623, "venue": "iclr2024", "title": "Addressing Signal Delay in Deep Reinforcement Learning", "status": "Spotlight", "keywords": "Deep Reinforcement Learning;Signal Delay;Robotic Control;Continuous Control", "tldr": "This paper formalizes and addresses signal delay in deep reinforcement learning, introducing effective strategies that maintain high performance in robotic control tasks despite substantial delays.", "abstract": "Despite the notable advancements in deep reinforcement learning (DRL) in recent years, a prevalent issue that is often overlooked is the impact of signal delay. Signal delay occurs when there is a lag between an agent's perception of the environment and its corresponding actions. In this paper, we first formalize delayed-observation Markov decision processes (DOMDP) by extending the standard MDP framework to incorporate signal delays. Next, we elucidate the challenges posed by the presence of signal delay in DRL, showing that trivial DRL algorithms and generic methods for partially observable tasks suffer greatly from delays. Lastly, we propose effective strategies to overcome these challenges. 
Our methods achieve remarkable performance in continuous robotic control tasks with large delays, yielding results comparable to those in non-delayed cases. Overall, our work contributes to a deeper understanding of DRL in the presence of signal delays and introduces novel approaches to address the associated challenges.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18410"} +{"video_file": "ZEZ0CPmoSI_39017800.mp4", "openreview_id": "ZEZ0CPmoSI", "slideslive_id": 39017800, "venue": "iclr2024", "title": "Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization", "status": "Poster", "keywords": "Optimization;First-order optimization;Non-convex optimization;Distributed optimization", "tldr": "We propose two matrix stepsized sketch gradient descent algorithms for minimizing matrix-smooth non-convex objectives.", "abstract": "This paper introduces a new method for minimizing matrix-smooth non-convex objectives through the use of novel Compressed Gradient Descent (CGD) algorithms enhanced with a matrix-valued stepsize. The proposed algorithms are theoretically analyzed first in the single-node and subsequently in the distributed settings. Our theoretical results reveal that the matrix stepsize in CGD can capture the objective\u2019s structure and lead to faster convergence compared to a scalar stepsize. As a byproduct of our general results, we emphasize the importance of selecting the compression mechanism and the matrix stepsize in a layer-wise manner, taking advantage of model structure. Moreover, we provide theoretical guarantees for free compression, by designing specific layer-wise compressors for the non-convex matrix smooth objectives. Our findings are supported with empirical evidence.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18407"} +{"video_file": "ZKEuFKfCKA_39018857.mp4", "openreview_id": "ZKEuFKfCKA", "slideslive_id": 39018857, "venue": "iclr2024", "title": "A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging", "status": "Spotlight", "keywords": "federated learning;partial client participation;adaptation;aggregation weights", "tldr": "We present the FedAU algorithm and its analysis, which improves federated averaging (FedAvg) by adaptively weighting the client updates, based on online estimates of the optimal weights without knowing the statistics of client participation.", "abstract": "In federated learning (FL), clients usually have diverse participation statistics that are unknown a priori, which can significantly harm the performance of FL if not handled properly. Existing works aiming at addressing this problem are usually based on global variance reduction, which requires a substantial amount of additional memory in a multiplicative factor equal to the total number of clients. An important open problem is to find a lightweight method for FL in the presence of clients with unknown participation rates. In this paper, we address this problem by adapting the aggregation weights in federated averaging (FedAvg) based on the participation history of each client. We first show that, with heterogeneous participation statistics, FedAvg with non-optimal aggregation weights can diverge from the optimal solution of the original FL objective, indicating the need of finding optimal aggregation weights. However, it is difficult to compute the optimal weights when the participation statistics are unknown. 
To address this problem, we present a new algorithm called FedAU, which improves FedAvg by adaptively weighting the client updates based on online estimates of the optimal weights without knowing the statistics of client participation. We provide a theoretical convergence analysis of FedAU using a novel methodology to connect the estimation error and convergence. Our theoretical results reveal important and interesting insights, while showing that FedAU converges to an optimal solution of the original objective and has desirable properties such as linear speedup. Our experimental results also verify the advantage of FedAU over baseline methods with various participation patterns.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18403"} +{"video_file": "ZMv6zKYYUs_39017796.mp4", "openreview_id": "ZMv6zKYYUs", "slideslive_id": 39017796, "venue": "iclr2024", "title": "Learning semilinear neural operators: A unified recursive framework for prediction and data assimilation.", "status": "Poster", "keywords": "Neural operator;PDEs;semi-linear evolution;sequential learning;filtering;data assimilation", "tldr": "We propose a flexible recursive neural operator approach for prediction and data assimilation with semilinear PDEs.", "abstract": "Recent advances in the theory of Neural Operators (NOs) have enabled fast and accurate computation of the solutions to complex systems described by partial differential equations (PDEs). Despite their great success, current NO-based solutions face important challenges when dealing with spatio-temporal PDEs over long time scales. Specifically, the current theory of NOs does not present a systematic framework to perform data assimilation and efficiently correct the evolution of PDE solutions over time based on sparsely sampled noisy measurements. In this paper, we propose a learning-based state-space approach to compute the solution operators to infinite-dimensional semilinear PDEs. Exploiting the structure of semilinear PDEs and the theory of nonlinear observers in function spaces, we develop a flexible recursive method that allows for both prediction and data assimilation by combining prediction and correction operations. The proposed framework is capable of producing fast and accurate predictions over long time horizons, dealing with irregularly sampled noisy measurements to correct the solution, and benefits from the decoupling between the spatial and temporal dynamics of this class of PDEs. We show through experiments on the Kuramoto-Sivashinsky, Navier-Stokes and Korteweg-de Vries equations that the proposed model is robust to noise and can leverage arbitrary amounts of measurements to correct its prediction over a long time horizon with little computational overhead.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18401"} +{"video_file": "ZPdZLlNXSm_39017795.mp4", "openreview_id": "ZPdZLlNXSm", "slideslive_id": 39017795, "venue": "iclr2024", "title": "Mean Field Theory in Deep Metric Learning", "status": "Poster", "keywords": "deep metric learning;image retrieval;mean field theory", "tldr": "Application of statistical physics to deep metric learning", "abstract": "In this paper, we explore the application of mean field theory, a technique from statistical physics, to deep metric learning and address the high training complexity commonly associated with conventional metric learning loss functions. 
By adapting mean field theory for deep metric learning, we develop an approach to design classification-based loss functions from pair-based ones, which can be considered complementary to the proxy-based approach. Applying the mean field theory to two pair-based loss functions, we derive two new loss functions, MeanFieldContrastive and MeanFieldClassWiseMultiSimilarity losses, with reduced training complexity. We extensively evaluate these derived loss functions on three image-retrieval datasets and demonstrate that our loss functions outperform baseline methods in two out of the three datasets.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18398"}
+{"video_file": "ZULjcYLWKe_39017792.mp4", "openreview_id": "ZULjcYLWKe", "slideslive_id": 39017792, "venue": "iclr2024", "title": "DMBP: Diffusion model-based predictor for robust offline reinforcement learning against state observation perturbations", "status": "Poster", "keywords": "Robust Reinforcement Learning;Offline Reinforcement Learning;Diffusion Models", "tldr": "For state-based reinforcement learning tasks with state observation perturbations, we propose a new framework that recovers the actual states with offline-trained conditional diffusion models.", "abstract": "Offline reinforcement learning (RL), which aims to fully explore offline datasets for training without interaction with environments, has attracted growing recent attention. A major challenge for the real-world application of offline RL stems from the robustness against state observation perturbations, e.g., as a result of sensor errors or adversarial attacks. Unlike online robust RL, agents cannot be adversarially trained in the offline setting. In this work, we propose Diffusion Model-Based Predictor (DMBP) in a new framework that recovers the actual states with conditional diffusion models for state-based RL tasks. To mitigate the error accumulation issue in model-based estimation resulting from the classical training of conventional diffusion models, we propose a non-Markovian training objective to minimize the sum entropy of denoised states in RL trajectory. Experiments on standard benchmark problems demonstrate that DMBP can significantly enhance the robustness of existing offline RL algorithms against different scales of random noises and adversarial attacks on state observations. Further, the proposed framework can effectively deal with incomplete state observations with random combinations of multiple unobserved dimensions in the test. Our implementation is available at https://github.com/zhyang2226/DMBP.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18394"}
+{"video_file": "ZZTkLDRmkg_39017787.mp4", "openreview_id": "ZZTkLDRmkg", "slideslive_id": 39017787, "venue": "iclr2024", "title": "BENO: Boundary-embedded Neural Operators for Elliptic PDEs", "status": "Poster", "keywords": "AI for PDEs; physical simulation;neural operators;boundary-embedded", "tldr": "We introduce a boundary-embedded neural operator that incorporates complex boundary shape and inhomogeneous boundary values into the solving of Elliptic PDEs", "abstract": "Elliptic partial differential equations (PDEs) are a major class of time-independent PDEs that play a key role in many scientific and engineering domains such as fluid dynamics, plasma physics, and solid mechanics. 
Recently, neural operators have emerged as a promising technique to solve elliptic PDEs more efficiently by directly mapping the input to solutions. However, existing networks typically neglect complex geometries and inhomogeneous boundary values present in the real world. Here we introduce Boundary-Embedded Neural Operators (BENO), a novel neural operator architecture that embeds the complex geometries and inhomogeneous boundary values into the solving of elliptic PDEs. Inspired by classical Green's function, BENO consists of two Graph Neural Networks (GNNs) for interior source term and boundary values, respectively. Furthermore, a Transformer encoder maps the global boundary geometry into a latent vector which influences each message passing layer of the GNNs. We test our model and strong baselines extensively in elliptic PDEs with complex boundary conditions. We show that all existing baseline methods fail to learn the solution operator. In contrast, our model, endowed with boundary-embedded architecture, outperforms state-of-the-art neural operators and strong baselines by an average of 60.96%.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18389"}
+{"video_file": "Zh2iqiOtMt_39017786.mp4", "openreview_id": "Zh2iqiOtMt", "slideslive_id": 39017786, "venue": "iclr2024", "title": "Towards the Fundamental Limits of Knowledge Transfer over Finite Domains", "status": "Poster", "keywords": "knowledge transfer;classification;minimax optimality;density estimation;knowledge distillation", "tldr": "We settle the sample complexity of knowledge transfer at various levels of privileged information in the tabular setting.", "abstract": "We characterize the statistical efficiency of knowledge transfer through n samples from a teacher to a probabilistic student classifier with input space S over labels A. We show that privileged information at three progressive levels accelerates the transfer. At the first level, only samples with hard labels are known, via which the maximum likelihood estimator attains the minimax rate \u221a(|S||A|/n). The second level has the teacher probabilities of sampled labels available in addition, which turns out to boost the convergence rate lower bound to |S||A|/n. However, under this second data acquisition protocol, minimizing a naive adaptation of the cross-entropy loss results in an asymptotically biased student. We overcome this limitation and achieve the fundamental limit by using a novel empirical variant of the squared error logit loss. The third level further equips the student with the soft labels (complete logits) on A given every sampled input, thereby provably enables the student to enjoy a rate |S|/n free of |A|. We find any Kullback-Leibler divergence minimizer to be optimal in the last case. 
Numerical simulations distinguish the four learners and corroborate our theory.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/18387"}
+{"video_file": "ZlQRiFmq7Y_39017785.mp4", "openreview_id": "ZlQRiFmq7Y", "slideslive_id": 39017785, "venue": "iclr2024", "title": "Retrieval-based Disentangled Representation Learning with Natural Language Supervision", "status": "Spotlight", "keywords": "Disentangled representation learning;information retriever;sparse retriever", "tldr": "We extend the functionality of lexical retriever to handle multi-modal data and harness this enhancement to facilitate disentangled representation learning.", "abstract": "Disentangled representation learning remains challenging as the underlying factors of variation in the data do not naturally exist. The inherent complexity of real-world data makes it unfeasible to exhaustively enumerate and encapsulate all its variations within a finite set of factors. However, it is worth noting that most real-world data have linguistic equivalents, typically in the form of textual descriptions. These linguistic counterparts can represent the data and be effortlessly decomposed into distinct tokens. In light of this, we present Vocabulary Disentangled Retrieval (VDR), a retrieval-based framework that harnesses natural language as proxies of the underlying data variation to drive disentangled representation learning. Our approach employs a bi-encoder model to represent both data and natural language in a vocabulary space, enabling the model to distinguish dimensions that capture intrinsic characteristics within data through its natural language counterpart, thus facilitating disentanglement. We extensively assess the performance of VDR across 15 retrieval benchmark datasets, covering text-to-text and cross-modal retrieval scenarios, as well as human evaluation. Our experimental results compellingly demonstrate the superiority of VDR over previous bi-encoder retrievers with comparable model size and training costs, achieving an impressive 8.7% improvement in NDCG@10 on the BEIR benchmark, a 5.3% increase on MS COCO, and a 6.0% increase on Flickr30k in terms of mean recall in the zero-shot setting. Moreover, the results from human evaluation indicate that the interpretability of our method is on par with SOTA captioning models.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18384"}
+{"video_file": "aBUidW4Nkd_39017780.mp4", "openreview_id": "aBUidW4Nkd", "slideslive_id": 39017780, "venue": "iclr2024", "title": "Object-Centric Learning with Slot Mixture Module", "status": "Poster", "keywords": "Object-centric representations;Gaussian Mixture Model;Slot Attention;Set Prediction Task", "tldr": "We proposed a generalization of a slot-based approach for object-centric representations as a Slot Mixture Model that allows state-of-the-art performance in the set property prediction and object discovery tasks.", "abstract": "Object-centric architectures usually apply a differentiable module to the entire feature map to decompose it into sets of entity representations called slots. Some of these methods structurally resemble clustering algorithms, where the cluster's center in latent space serves as a slot representation. Slot Attention is an example of such a method, acting as a learnable analog of the soft k-means algorithm. Our work employs a learnable clustering method based on the Gaussian Mixture Model. 
Unlike other approaches, we represent slots not only as centers of clusters but also incorporate information about the distance between clusters and assigned vectors, leading to more expressive slot representations. Our experiments demonstrate that using this approach instead of Slot Attention improves performance in object-centric scenarios, achieving state-of-the-art results in the set property prediction task.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18374"} +{"video_file": "aGH43rjoe4_39019185.mp4", "openreview_id": "aGH43rjoe4", "slideslive_id": 39019185, "venue": "iclr2024", "title": "Multi-modal Gaussian Process Variational Autoencoders for Neural and Behavioral Data", "status": "Poster", "keywords": "Gaussian Processes;Latent Variable Models;Variational Autoencoders;Neuroscience", "tldr": "A novel latent variable model that extracts smoothly evolving latent subspaces that are shared between or independent to distinct data modalities for use with modern systems neuroscience experiments.", "abstract": "Characterizing the relationship between neural population activity and behavioral data is a central goal of neuroscience. While latent variable models (LVMs) are successful in describing high-dimensional data, they are typically only designed for a single type of data, making it difficult to identify structure shared across different experimental data modalities. Here, we address this shortcoming by proposing an unsupervised LVM which extracts shared and independent latents for distinct, simultaneously recorded experimental modalities. We do this by combining Gaussian Process Factor Analysis (GPFA), an interpretable LVM for neural spiking data with temporally smooth latent space, with Gaussian Process Variational Autoencoders (GP-VAEs), which similarly use a GP prior to characterize correlations in a latent space, but admit rich expressivity due to a deep neural network mapping to observations. We achieve interpretability in our model by partitioning latent variability into components that are either shared between or independent to each modality. We parameterize the latents of our model in the Fourier domain, and show improved latent identification using this approach over standard GP-VAE methods. We validate our model on simulated multi-modal data consisting of Poisson spike counts and MNIST images that scale and rotate smoothly over time. We show that the multi-modal GP-VAE (MM-GPVAE) is able to not only identify the shared and independent latent structure across modalities accurately, but provides good reconstructions of both images and neural rates on held-out trials. 
Finally, we demonstrate our framework on two real-world multi-modal experimental settings: Drosophila whole-brain calcium imaging alongside tracked limb positions, and Manduca sexta spike train measurements from ten wing muscles as the animal tracks a visual stimulus.", "primary_area": "applications to neuroscience & cognitive science", "site": "https://iclr.cc/virtual/2024/poster/18372"}
+{"video_file": "aIok3ZD9to_39017213.mp4", "openreview_id": "aIok3ZD9to", "slideslive_id": 39017213, "venue": "iclr2024", "title": "LLMCarbon: Modeling the End-to-End Carbon Footprint of Large Language Models", "status": "Oral", "keywords": "carbon footprint modeling;large language models", "tldr": "we propose a carbon footprint modeling tool for large language models.", "abstract": "The carbon footprint associated with large language models (LLMs) is a significant concern, encompassing emissions from their training, inference, experimentation, and storage processes, including operational and embodied carbon emissions. An essential aspect is accurately estimating the carbon impact of emerging LLMs even before their training, which heavily relies on GPU usage. Existing studies have reported the carbon footprint of LLM training, but only one tool, mlco2, can predict the carbon footprint of new neural networks prior to physical training. However, mlco2 has several serious limitations. It cannot extend its estimation to dense or mixture-of-experts (MoE) LLMs, disregards critical architectural parameters, focuses solely on GPUs, and cannot model embodied carbon footprints. Addressing these gaps, we introduce LLMCarbon, an end-to-end carbon footprint projection model designed for both dense and MoE LLMs. Compared to mlco2, LLMCarbon significantly enhances the accuracy of carbon footprint estimations for various LLMs. The source code is released at https://github.com/SotaroKaneda/MLCarbon.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/18370"}
+{"video_file": "aKJEHWmBEf_39018998.mp4", "openreview_id": "aKJEHWmBEf", "slideslive_id": 39018998, "venue": "iclr2024", "title": "Approximately Piecewise E(3) Equivariant Point Networks", "status": "Poster", "keywords": "E(3) equivariant networks", "tldr": "A network design for functions satisfying bounded approximation error of piecewise E(3) equivariance", "abstract": "Integrating a notion of symmetry into point cloud neural networks is a provably effective way to improve their generalization capability. Of particular interest are E(3) equivariant point cloud networks where Euclidean transformations applied to the inputs are preserved in the outputs. Recent efforts aim to extend networks that are equivariant with respect to a single global E(3) transformation, to accommodate inputs made of multiple parts, each of which exhibits local E(3) symmetry. In practical settings, however, the partitioning into individually transforming regions is unknown a priori. Errors in the partition prediction would unavoidably map to errors in respecting the true input symmetry. Past works have proposed different ways to predict the partition, which may exhibit uncontrolled errors in their ability to maintain equivariance to the actual partition. To this end, we introduce APEN: a general framework for constructing approximate piecewise-E(3) equivariant point networks. 
Our framework offers an adaptable design with guaranteed bounds on the resulting piecewise E(3) equivariance approximation errors. Our primary insight is that functions which are equivariant with respect to a finer partition (compared to the unknown true partition) will also maintain equivariance in relation to the true partition. Leveraging this observation, we propose a compositional design for a partition prediction model. It initiates with a fine partition and incrementally transitions towards a coarser subpartition of the true one, consistently maintaining piecewise equivariance in relation to the current partition. As a result, the equivariance approximation error can be bounded solely in terms of (i) uncertainty quantification of the partition prediction, and (ii) bounds on the probability of failing to suggest a proper subpartition of the ground truth one. We demonstrate the practical effectiveness of APEN using two data types exemplifying part-based symmetry: (i) real-world scans of room scenes containing multiple furniture-type objects; and, (ii) human motions, characterized by articulated parts exhibiting rigid movement. Our empirical results demonstrate the advantage of integrating piecewise E(3) symmetry into network design, showing a distinct improvement in generalization accuracy compared to prior works for both classification and segmentation tasks.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/18369"}
+{"video_file": "aZH1dM3GOX_39017777.mp4", "openreview_id": "aZH1dM3GOX", "slideslive_id": 39017777, "venue": "iclr2024", "title": "Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts", "status": "Poster", "keywords": "Reinforcement Learning;Multi-Task Learning;Mixture of Experts", "tldr": "A novel method for Multi-Task Reinforcement Learning that encourages diversity across the shared representations extracted by a mixture of experts.", "abstract": "Multi-Task Reinforcement Learning (MTRL) tackles the long-standing problem of endowing agents with skills that generalize across a variety of problems. To this end, sharing representations plays a fundamental role in capturing both unique and common characteristics of the tasks. Tasks may exhibit similarities in terms of skills, objects, or physical properties while leveraging their representations eases the achievement of a universal policy. Nevertheless, the pursuit of learning a shared set of diverse representations is still an open challenge. In this paper, we introduce a novel approach for representation learning in MTRL that encapsulates common structures among the tasks using orthogonal representations to promote diversity. Our method, named Mixture Of Orthogonal Experts (MOORE), leverages a Gram-Schmidt process to shape a shared subspace of representations generated by a mixture of experts. When task-specific information is provided, MOORE generates relevant representations from this shared subspace. 
We assess the effectiveness of our approach on two MTRL benchmarks, namely MiniGrid and MetaWorld, showing that MOORE surpasses related baselines and establishes a new state-of-the-art result on MetaWorld.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18365"}
+{"video_file": "aaBnFAyW9O_39017776.mp4", "openreview_id": "aaBnFAyW9O", "slideslive_id": 39017776, "venue": "iclr2024", "title": "Soft Mixture Denoising: Beyond the Expressive Bottleneck of Diffusion Models", "status": "Poster", "keywords": "Diffusion Models;Expressive Bottleneck;Soft Mixture Denoising (SMD)", "tldr": "We proved that current diffusion models have limited approximation capabilities and proposed soft mixture denoising (SMD), an expressive and efficient backward denoising model.", "abstract": "Because diffusion models have shown impressive performances in a number of tasks, such as image synthesis, there is a trend in recent works to prove (with certain assumptions) that these models have strong approximation capabilities. In this paper, we show that current diffusion models actually have an expressive bottleneck in backward denoising and some assumption made by existing theoretical guarantees is too strong. Based on this finding, we prove that diffusion models have unbounded errors in both local and global denoising. In light of our theoretical studies, we introduce soft mixture denoising (SMD), an expressive and efficient model for backward denoising. SMD not only permits diffusion models to well approximate any Gaussian mixture distributions in theory, but also is simple and efficient for implementation. Our experiments on multiple image datasets show that SMD significantly improves different types of diffusion models (e.g., DDPM), especially in the situation of few backward iterations.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18364"}
+{"video_file": "adSGeugiuj_39017774.mp4", "openreview_id": "adSGeugiuj", "slideslive_id": 39017774, "venue": "iclr2024", "title": "On the Posterior Distribution in Denoising: Application to Uncertainty Quantification", "status": "Poster", "keywords": "Gaussian Denoising;Posterior Moments Estimation;Uncertainty Quantification;Uncertainty Visualization", "tldr": "We derive a fundamental property of the posterior distribution in Gaussian denoising, and use it to propose a new way for uncertainty visualization, which requires no training or fine-tuning.", "abstract": "Denoisers play a central role in many applications, from noise suppression in low-grade imaging sensors, to empowering score-based generative models. The latter category of methods makes use of Tweedie's formula, which links the posterior mean in Gaussian denoising (i.e., the minimum MSE denoiser) with the score of the data distribution. Here, we derive a fundamental relation between the higher-order central moments of the posterior distribution, and the higher-order derivatives of the posterior mean. We harness this result for uncertainty quantification of pre-trained denoisers. Particularly, we show how to efficiently compute the principal components of the posterior distribution for any desired region of an image, as well as to approximate the full marginal distribution along those (or any other) one-dimensional directions. Our method is fast and memory-efficient, as it does not explicitly compute or store the high-order moment tensors and it requires no training or fine tuning of the denoiser. 
Code and examples are available on the project website.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18362"} +{"video_file": "ag3o2T51Ht_39017771.mp4", "openreview_id": "ag3o2T51Ht", "slideslive_id": 39017771, "venue": "iclr2024", "title": "Circumventing Concept Erasure Methods For Text-To-Image Generative Models", "status": "Poster", "keywords": "Model Editing;Diffusion Model;Concept Erasure", "tldr": "Post hoc concept erasure in generative models provides a false sense of security.", "abstract": "Text-to-image generative models can produce photo-realistic images for an extremely broad range of concepts, and their usage has proliferated widely among the general public. On the flip side, these models have numerous drawbacks, including their potential to generate images featuring sexually explicit content, mirror artistic styles without permission, or even hallucinate (or deepfake) the likenesses of celebrities. Consequently, various methods have been proposed in order to \"erase\" sensitive concepts from text-to-image models. In this work, we examine seven recently proposed concept erasure methods, and show that targeted concepts are not fully excised from any of these methods. Specifically, we leverage the existence of special learned word embeddings that can retrieve \"erased\" concepts from the sanitized models with no alterations to their weights. Our results highlight the brittleness of post hoc concept erasure methods, and call into question their use in the algorithmic toolkit for AI safety.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18359"} +{"video_file": "anzIzGZuLi_39018696.mp4", "openreview_id": "anzIzGZuLi", "slideslive_id": 39018696, "venue": "iclr2024", "title": "Making Pre-trained Language Models Great on Tabular Prediction", "status": "Spotlight", "keywords": "language models;classification and regression;model pre-training;tabular data", "tldr": "A language model adaption approach for precise tabular data classification and regression.", "abstract": "The transferability of deep neural networks (DNNs) has made significant progress in image and language processing. However, due to the heterogeneity among tables, such DNN bonus is still far from being well exploited on tabular data prediction (e.g., regression or classification tasks). Condensing knowledge from diverse domains, language models (LMs) possess the capability to comprehend feature names from various tables, potentially serving as versatile learners in transferring knowledge across distinct tables and diverse prediction tasks, but their discrete text representation space is inherently incompatible with numerical feature values in tables. In this paper, we present TP-BERTa, a specifically pre-trained LM for tabular data prediction. Concretely, a novel relative magnitude tokenization converts scalar numerical feature values to finely discrete, high-dimensional tokens, and an intra-feature attention approach integrates feature values with the corresponding feature names. 
Comprehensive experiments demonstrate that our pre-trained TP-BERTa leads the performance among tabular DNNs and is competitive with Gradient Boosted Decision Tree models in typical tabular data regime.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/18356"} +{"video_file": "b3kDP3IytM_39017763.mp4", "openreview_id": "b3kDP3IytM", "slideslive_id": 39017763, "venue": "iclr2024", "title": "KITAB: Evaluating LLMs on Constraint Satisfaction for Information Retrieval", "status": "Poster", "keywords": "language model evaluation;benchmark;constraint satisfaction;information retrieval;retrieval-augmented architecture", "tldr": "State-of-the-art LLMs struggle with answering constraint satisfaction queries for finding information. We contribute a dataset and an evaluation study to support progress in this emerging ability.", "abstract": "We study the ability of state-of-the art models to answer constraint satisfaction queries for information retrieval (e.g., \u201ca list of ice cream shops in San Diego\u201d). In the past, such queries were considered as tasks that could only be solved via web-search or knowledge bases. More recently, large language models (LLMs) have demonstrated initial emergent abilities in this task. However, many current retrieval benchmarks are either saturated or do not measure constraint satisfaction. Motivated by rising concerns around factual incorrectness and hallucinations of LLMs, we present KITAB, a new dataset for measuring constraint satisfaction abilities of language models. KITAB consists of book-related data across more than 600 authors and 13,000 queries, and also offers an associated dynamic data collection and constraint verification approach for acquiring similar test data for other authors. Our extended experiments on GPT4 and GPT3.5 characterize and decouple common failure modes across dimensions such as information popularity, constraint types, and context availability. Results show that in the absence of context, models exhibit severe limitations as measured by irrelevant information, factual errors, and incompleteness, many of which exacerbate as information popularity decreases. While context availability mitigates irrelevant information, it is not helpful for satisfying constraints, identifying fundamental barriers to constraint satisfaction. We open source our contributions to foster further research on improving constraint satisfaction abilities of future models.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/18346"} +{"video_file": "bRLed9prWC_39017753.mp4", "openreview_id": "bRLed9prWC", "slideslive_id": 39017753, "venue": "iclr2024", "title": "Future Language Modeling from Temporal Document History", "status": "Poster", "keywords": "Future Language Modeling;Future Language Model;Temporal Document History", "tldr": "We propose the task of future language modeling and develop several models for this task, outperforming strong non-temporal baselines.", "abstract": "Predicting the future is of great interest across many aspects of human activity. Businesses are interested in future trends, traders are interested in future stock prices, and companies are highly interested in future technological breakthroughs. While there are many automated systems for predicting future numerical data, such as weather, stock prices, and demand for products, there is relatively little work in automatically predicting textual data. 
Humans are interested in textual data predictions because it is a natural format for our consumption, and experts routinely make predictions in a textual format (Christensen et al., 2004; Tetlock & Gardner, 2015; Frick, 2015). However, there has been relatively little formalization of this general problem in the machine learning or natural language processing communities. To address this gap, we introduce the task of future language modeling: probabilistic modeling of texts in the future based on a temporal history of texts. To our knowledge, our work is the first work to formalize the task of predicting the future in this way. We show that it is indeed possible to build future language models that improve upon strong non-temporal language model baselines, opening the door to working on this important, and widely applicable problem.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18332"}
+{"video_file": "bWNJFD1l8M_39017751.mp4", "openreview_id": "bWNJFD1l8M", "slideslive_id": 39017751, "venue": "iclr2024", "title": "Transferring Learning Trajectories of Neural Networks", "status": "Poster", "keywords": "neural networks;learning dynamics;permutation symmetry;loss landscape", "tldr": "We formulate the problem of transferring learning trajectories between neural networks, and derive the first algorithm to approximately solve it.", "abstract": "Training deep neural networks (DNNs) is computationally expensive, which is problematic especially when performing duplicated or similar training runs in model ensemble or fine-tuning pre-trained models, for example. Once we have trained one DNN on some dataset, we have its learning trajectory (i.e., a sequence of intermediate parameters during training) which may potentially contain useful information for learning the dataset. However, there has been no attempt to utilize such information of a given learning trajectory for another training. In this paper, we formulate the problem of "transferring" a given learning trajectory from one initial parameter to another one (named learning transfer problem) and derive the first algorithm to approximately solve it by matching gradients successively along the trajectory via permutation symmetry. We empirically show that the transferred parameters achieve non-trivial accuracy before any direct training, and can be trained significantly faster than training from scratch.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18330"}
+{"video_file": "bbCL5aRjUx_39017749.mp4", "openreview_id": "bbCL5aRjUx", "slideslive_id": 39017749, "venue": "iclr2024", "title": "Multilinear Operator Networks", "status": "Poster", "keywords": "Polynomial Networks;Image recognition", "tldr": "We introduce a network based solely on multilinear operations.", "abstract": "Despite the remarkable capabilities of deep neural networks in image recognition, the dependence on activation functions remains a largely unexplored area and has yet to be eliminated. On the other hand, Polynomial Networks is a class of models that does not require activation functions, but has yet to perform on par with modern architectures. In this work, we aim to close this gap and propose MONet, which relies solely on multilinear operators. The core layer of MONet, called Mu-Layer, captures multiplicative interactions of the elements of the input token. 
MONet captures high-degree interactions of the input elements and we demonstrate the efficacy of our approach on a series of image recognition and scientific computing benchmarks. The proposed model outperforms prior polynomial networks and performs on par with modern architectures. We believe that MONet can inspire further research on models that use entirely multilinear operations.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18326"} +{"video_file": "bkdWThqE6q_39019216.mp4", "openreview_id": "bkdWThqE6q", "slideslive_id": 39019216, "venue": "iclr2024", "title": "A Simple Interpretable Transformer for Fine-Grained Image Classification and Analysis", "status": "Poster", "keywords": "Explainability;Interpretability;Transformer;Fine-grained recognition;Attribute discovery", "tldr": "Transformer based Interpretable Image recognition where each query in the decoder will learn class specific features.", "abstract": "We present a novel usage of Transformers to make image classification interpretable. Unlike mainstream classifiers that wait until the last fully connected layer to incorporate class information to make predictions, we investigate a proactive approach, asking each class to search for itself in an image. We realize this idea via a Transformer encoder-decoder inspired by DEtection TRansformer (DETR). We learn ''class-specific'' queries (one for each class) as input to the decoder, enabling each class to localize its patterns in an image via cross-attention. We name our approach INterpretable TRansformer (INTR), which is fairly easy to implement and exhibits several compelling properties. We show that INTR intrinsically encourages each class to attend distinctively; the cross-attention weights thus provide a faithful interpretation of the prediction. Interestingly, via ''multi-head'' cross-attention, INTR could identify different ''attributes'' of a class, making it particularly suitable for fine-grained classification and analysis, which we demonstrate on eight datasets. Our code and pre-trained models are publicly accessible at the Imageomics Institute GitHub site: https://github.com/Imageomics/INTR.", "primary_area": "visualization or interpretation of learned representations", "site": "https://iclr.cc/virtual/2024/poster/18324"} +{"video_file": "c7DND1iIgb_39017028.mp4", "openreview_id": "c7DND1iIgb", "slideslive_id": 39017028, "venue": "iclr2024", "title": "Democratizing Fine-grained Visual Recognition with Large Language Models", "status": "Poster", "keywords": "Vision-Language Models;Large Language Models;Prompting;Multimodal;Fine-grained Visual Recognition", "tldr": "We propose Fine-grained Semantic Category Reasoning (FineR) system to address fine-grained visual recognition without needing expert annotations. FineR leverages the world knowledge of large language models to reason fine-grained category names.", "abstract": "Identifying subordinate-level categories from images is a longstanding task in computer vision and is referred to as fine-grained visual recognition (FGVR). It has tremendous significance in real-world applications since an average layperson does not excel at differentiating species of birds or mushrooms due to subtle differences among the species. A major bottleneck in developing FGVR systems is caused by the need of high-quality paired expert annotations. 
To circumvent the need of expert knowledge we propose Fine-grained Semantic Category Reasoning (FineR) that internally leverages the world knowledge of large language models (LLMs) as a proxy in order to reason about fine-grained category names. In detail, to bridge the modality gap between images and LLM, we extract part-level visual attributes from images as text and feed that information to a LLM. Based on the visual attributes and its internal world knowledge the LLM reasons about the subordinate-level category names. Our training-free FineR outperforms several state-of-the-art FGVR and language and vision assistant models and shows promise in working in the wild and in new domains where gathering expert annotation is arduous.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18308"} +{"video_file": "cINwAhrgLf_39017739.mp4", "openreview_id": "cINwAhrgLf", "slideslive_id": 39017739, "venue": "iclr2024", "title": "Aux-NAS: Exploiting Auxiliary Labels with Negligibly Extra Inference Cost", "status": "Poster", "keywords": "Auxiliary Learning; Neural Architecture Search; Soft Parameter Sharing; Multi-Task Learning; Single Task Inference Cost", "tldr": "We propose a novel soft-parameter sharing architecture-based method optimized by Neural Architecture Search, which exploits auxiliary task labels to boost the primary task performance without increasing the inference cost for the primary task.", "abstract": "We aim at exploiting additional auxiliary labels from an independent (auxiliary) task to boost the primary task performance which we focus on, while preserving a single task inference cost of the primary task. While most existing auxiliary learning methods are optimization-based relying on loss weights/gradients manipulation, our method is architecture-based with a flexible asymmetric structure for the primary and auxiliary tasks, which produces different networks for training and inference. Specifically, starting from two single task networks/branches (each representing a task), we propose a novel method with evolving networks where only primary-to-auxiliary links exist as the cross-task connections after convergence. These connections can be removed during the primary task inference, resulting in a single-task inference cost. We achieve this by formulating a Neural Architecture Search (NAS) problem, where we initialize bi-directional connections in the search space and guide the NAS optimization converging to an architecture with only the single-side primary-to-auxiliary connections. Moreover, our method can be incorporated with optimization-based auxiliary learning approaches. Extensive experiments with six tasks on NYU v2, CityScapes, and Taskonomy datasets using VGG, ResNet, and ViT backbones validate the promising performance. 
The codes are available at https://github.com/ethanygao/Aux-NAS.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18302"} +{"video_file": "cUSNs8nGaV_39018718.mp4", "openreview_id": "cUSNs8nGaV", "slideslive_id": 39018718, "venue": "iclr2024", "title": "GlucoBench: Curated List of Continuous Glucose Monitoring Datasets with Prediction Benchmarks", "status": "Poster", "keywords": "diabetes management;continuous glucose monitors (CGM);glucose trajectory prediction;artificial pancreas systems;public datasets;standardized tasks;benchmark models;glycemic control", "tldr": "The paper introduces a comprehensive resource for CGM-based glucose forecasting, including a curated list of public datasets, a standardized task list, and benchmark models.", "abstract": "The rising rates of diabetes necessitate innovative methods for its management. Continuous glucose monitors (CGM) are small medical devices that measure blood glucose levels at regular intervals providing insights into daily patterns of glucose variation. Forecasting of glucose trajectories based on CGM data holds the potential to substantially improve diabetes management, by both refining artificial pancreas systems and enabling individuals to make adjustments based on predictions to maintain optimal glycemic range. Despite numerous methods proposed for CGM-based glucose trajectory prediction, these methods are typically evaluated on small, private datasets, impeding reproducibility, further research, and practical adoption. The absence of standardized prediction tasks and systematic comparisons between methods has led to uncoordinated research efforts, obstructing the identification of optimal tools for tackling specific challenges. As a result, only a limited number of prediction methods have been implemented in clinical practice.\nTo address these challenges, we present a comprehensive resource that provides (1) a consolidated repository of curated publicly available CGM datasets to foster reproducibility and accessibility; (2) a standardized task list to unify research objectives and facilitate coordinated efforts; (3) a set of benchmark models with established baseline performance, enabling the research community to objectively gauge new methods' efficacy; and (4) a detailed analysis of performance-influencing factors for model development. We anticipate these resources to propel collaborative research endeavors in the critical domain of CGM-based glucose predictions. Our code is available online at github.com/IrinaStatsLab/GlucoBench.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/18296"} +{"video_file": "cWdAYDLmPa_39018727.mp4", "openreview_id": "cWdAYDLmPa", "slideslive_id": 39018727, "venue": "iclr2024", "title": "State Representation Learning Using an Unbalanced Atlas", "status": "Poster", "keywords": "Self-supervised learning;State representation learning", "tldr": "We introduce a new self-supervised learning paradigm using an unbalanced atlas to represent a manifold and design a state representation learning method based on the paradigm.", "abstract": "The manifold hypothesis posits that high-dimensional data often lies on a lower-dimensional manifold and that utilizing this manifold as the target space yields more efficient representations. 
While numerous traditional manifold-based techniques exist for dimensionality reduction, their application in self-supervised learning has witnessed slow progress. The recent MSimCLR method combines manifold encoding with SimCLR but requires extremely low target encoding dimensions to outperform SimCLR, limiting its applicability. This paper introduces a novel learning paradigm using an unbalanced atlas (UA), capable of surpassing state-of-the-art self-supervised learning approaches. We investigated and engineered the DeepInfomax with an unbalanced atlas (DIM-UA) method by adapting the Spatiotemporal DeepInfomax (ST-DIM) framework to align with our proposed UA paradigm. The efficacy of DIM-UA is demonstrated through training and evaluation on the Atari Annotated RAM Interface (AtariARI) benchmark, a modified version of the Atari 2600 framework that produces annotated image samples for representation learning. The UA paradigm improves existing algorithms significantly as the number of target encoding dimensions grows. For instance, the mean F1 score averaged over categories of DIM-UA is~75% compared to ~70% of ST-DIM when using 16384 hidden units.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18294"} +{"video_file": "cdUpf6t6LZ_39017734.mp4", "openreview_id": "cdUpf6t6LZ", "slideslive_id": 39017734, "venue": "iclr2024", "title": "Robust NAS under adversarial training: benchmark, theory, and beyond", "status": "Poster", "keywords": "neural architecture search;robustness;benchmark;generalization theory", "tldr": "To facilitate the neural architecture search for robust architecture, we release a benchmark under adversarial training and study the robust generalization theory", "abstract": "Recent developments in neural architecture search (NAS) emphasize the significance of considering robust architectures against malicious data. However, there is a notable absence of benchmark evaluations and theoretical guarantees for searching these robust architectures, especially when adversarial training is considered. In this work, we aim to address these two challenges, making twofold contributions. First, we release a comprehensive data set that encompasses both clean accuracy and robust accuracy for a vast array of adversarially trained networks from the NAS-Bench-201 search space on image datasets. Then, leveraging the neural tangent kernel (NTK) tool from deep learning theory, we establish a generalization theory for searching architecture in terms of clean accuracy and robust accuracy under multi-objective adversarial training. 
We firmly believe that our benchmark and theoretical insights will significantly benefit the NAS community through reliable reproducibility, efficient assessment, and theoretical foundation, particularly in the pursuit of robust architectures.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/18286"} +{"video_file": "cmcD05NPKa_39017732.mp4", "openreview_id": "cmcD05NPKa", "slideslive_id": 39017732, "venue": "iclr2024", "title": "Learning the greatest common divisor: explaining transformer predictions", "status": "Spotlight", "keywords": "mathematics;arithmetic;transformers;explainability", "tldr": "Transformers can learn to predict greatest common divisors, their predictions can be fully explained, training distribution matters", "abstract": "The predictions of small transformers, trained to calculate the greatest common divisor (GCD) of two positive integers, can be fully characterized by looking at model inputs and outputs. As training proceeds, the model learns a list D of integers, products of divisors of the base used to represent integers and small primes, and predicts the largest element of D that divides both inputs. Training distributions impact performance. Models trained from uniform operands only learn a handful of GCD (up to 38 GCD \u2264 100). Log-uniform operands boost performance to 73 GCD \u2264 100, and a log-uniform distribution of outcomes (i.e. GCD) to 91. However, training from uniform (balanced) GCD breaks explainability.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18284"} +{"video_file": "coIaBY8EVF_39017731.mp4", "openreview_id": "coIaBY8EVF", "slideslive_id": 39017731, "venue": "iclr2024", "title": "Decongestion by Representation: Learning to Improve Economic Welfare in Marketplaces", "status": "Poster", "keywords": "congestion;decongestion;online marketplaces;learning in economic settings;efficient allocation", "tldr": "Online marketplaces suffer from congestion---when many users are interested in the same indivisible goods; we propose that platforms decongest by appropriately *representing* items, and propose a differentiable learning framework for doing so.", "abstract": "Congestion is a common failure mode of markets, where consumers compete inefficiently on the same subset of goods (e.g., chasing the same small set of properties on a vacation rental platform). The typical economic story is that prices decongest by balancing supply and demand. But in modern online marketplaces, prices are typically set in a decentralized way by sellers, and the information about items is inevitably partial. The power of a platform is limited to controlling representations---the subset of information about items presented by default to users. This motivates the present study of decongestion by representation, where a platform seeks to learn representations that reduce congestion and thus improve social welfare. The technical challenge is twofold: relying only on revealed preferences from the choices of consumers, rather than true preferences; and the combinatorial problem associated with representations that determine the features to reveal in the default view. We tackle both challenges by proposing a differentiable proxy of welfare that can be trained end-to-end on consumer choice data. 
We develop sufficient conditions for when decongestion promotes welfare, and present the results of extensive experiments on both synthetic and real data that demonstrate the utility of our approach.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/18283"} +{"video_file": "cphhnHjCvC_39017196.mp4", "openreview_id": "cphhnHjCvC", "slideslive_id": 39017196, "venue": "iclr2024", "title": "End-to-End (Instance)-Image Goal Navigation through Correspondence as an Emergent Phenomenon", "status": "Poster", "keywords": "Navigation;Embodied AI;Perception", "tldr": "In an ImageGoal navigation context, we propose two pre-text tasks which let correspondence emerge as a solution and train a dual visual encoder based on a binocular transforme", "abstract": "Most recent work in goal oriented visual navigation resorts to large-scale machine learning in simulated environments. The main challenge lies in learning compact representations generalizable to unseen environments and in learning high-capacity perception modules capable of reasoning on high-dimensional input. The latter is particularly difficult when the goal is not given as a category (\"ObjectNav\") but as an exemplar image (\"ImageNav\"), as the perception module needs to learn a comparison strategy requiring to solve an underlying visual correspondence problem. This has been shown to be difficult from reward alone or with standard auxiliary tasks. We address this problem through a sequence of two pretext tasks, which serve as a prior for what we argue is one of the main bottleneck in perception, extremely wide-baseline relative pose estimation and visibility prediction in complex scenes. The first pretext task, cross-view completion is a proxy for the underlying visual correspondence problem, while the second task addresses goal detection and finding directly. We propose a new dual encoder with a large-capacity binocular ViT model and show that correspondence solutions naturally emerge from the training signals. Experiments show significant improvements and SOTA performance on the two benchmarks, ImageNav and the Instance-ImageNav variant, where camera intrinsics and height differ between observation and goal.", "primary_area": "applications to robotics, autonomy, planning", "site": "https://iclr.cc/virtual/2024/poster/18282"} +{"video_file": "csukJcpYDe_39017033.mp4", "openreview_id": "csukJcpYDe", "slideslive_id": 39017033, "venue": "iclr2024", "title": "Generalized Policy Iteration using Tensor Approximation for Hybrid Control", "status": "Spotlight", "keywords": "Optimal Control;Hybrid Actions;Robotics;Approximate Dynamic Programming;Tensor Approximation", "tldr": "The paper proposes a novel approximate dynamic programming algorithm that can handle hybrid action space", "abstract": "Control of dynamic systems involving hybrid actions is a challenging task in robotics. To address this, we present a novel algorithm called Generalized Policy Iteration using Tensor Train (TTPI) that belongs to the class of Approximate Dynamic Programming (ADP). We use a low-rank tensor approximation technique called Tensor Train (TT) to approximate the state-value and advantage function which enables us to efficiently handle hybrid systems. We demonstrate the superiority of our approach over previous baselines for some benchmark problems with hybrid action spaces. 
Additionally, the robustness and generalization of the policy for hybrid systems are showcased through a real-world robotics experiment involving a non-prehensile manipulation task which is considered to be a highly challenging control problem.", "primary_area": "applications to robotics, autonomy, planning", "site": "https://iclr.cc/virtual/2024/poster/18281"} +{"video_file": "cuAxSHcsSX_39017730.mp4", "openreview_id": "cuAxSHcsSX", "slideslive_id": 39017730, "venue": "iclr2024", "title": "On Differentially Private Federated Linear Contextual Bandits", "status": "Poster", "keywords": "linear contextual bandits;federated learning;differential privacy", "tldr": "Identify the fundamental gaps in state-of-the-art and propose a generic framework to not only fix them but achieve improved results", "abstract": "We consider cross-silo federated linear contextual bandit (LCB) problem under differential privacy, where multiple silos interact with their respective local users and communicate via a central server to realize collaboration without sacrificing each user's privacy. We identify three issues in the state-of-the-art~\\citep{dubey2020differentially}: (i) failure of claimed privacy protection, (ii) incorrect regret bound due to noise miscalculation and (iii) ungrounded communication cost. To resolve these issues, we take a two-step approach. First, we design an algorithmic framework consisting of a generic federated LCB algorithm and flexible privacy protocols. Then, leveraging the proposed framework, we study federated LCBs under two different privacy constraints. We first establish privacy and regret guarantees under silo-level local differential privacy, which fix the issues present in state-of-the-art algorithm. To further improve the regret performance, we next consider shuffle model of differential privacy, under which we show that our algorithm can achieve nearly ``optimal'' regret without a trusted server. We accomplish this via two different schemes -- one relies on a new result on privacy amplification via shuffling for DP mechanisms and another one leverages the integration of a shuffle protocol for vector sum into the tree-based mechanism, both of which might be of independent interest. Finally, we support our theoretical results with numerical evaluations over contextual bandit instances generated from both synthetic and real-life data.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18280"} +{"video_file": "cxfPefbu1s_39017729.mp4", "openreview_id": "cxfPefbu1s", "slideslive_id": 39017729, "venue": "iclr2024", "title": "Procedural Fairness Through Decoupling Objectionable Data Generating Components", "status": "Spotlight", "keywords": "Procedural Fairness;Decouple Objectionable Component;Reference Point;Causal Fairness;Data Generating Process;Bias Mitigation", "tldr": "We reveal and address the frequently overlooked issue of disguised procedural unfairness, and propose a framework to decouple objectionable data generating components to achieve procedural fairness.", "abstract": "We reveal and address the frequently overlooked yet important issue of disguised procedural unfairness, namely, the potentially inadvertent alterations on the behavior of neutral (i.e., not problematic) aspects of data generating process, and/or the lack of procedural assurance of the greatest benefit of the least advantaged individuals. 
Inspired by John Rawls's advocacy for pure procedural justice (Rawls, 1971; 2001), we view automated decision-making as a microcosm of social institutions, and consider how the data generating process itself can satisfy the requirements of procedural fairness. We propose a framework that decouples the objectionable data generating components from the neutral ones by utilizing reference points and the associated value instantiation rule. Our findings highlight the necessity of preventing disguised procedural unfairness, drawing attention not only to the objectionable data generating components that we aim to mitigate, but also more importantly, to the neutral components that we intend to keep unaffected.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/18279"} +{"video_file": "d6tUsZeVs7_39017724.mp4", "openreview_id": "d6tUsZeVs7", "slideslive_id": 39017724, "venue": "iclr2024", "title": "Energy-guided Entropic Neural Optimal Transport", "status": "Poster", "keywords": "energy-based model;generative model;optimal transport;entropic optimal transport;general optimal transport cost function", "tldr": "We propose a novel energy-based method to compute entropic optimal transport with general cost functions.", "abstract": "Energy-based models (EBMs) are known in the Machine Learning community for decades. Since the seminal works devoted to EBMs dating back to the noughties, there have been a lot of efficient methods which solve the generative modelling problem by means of energy potentials (unnormalized likelihood functions). In contrast, the realm of Optimal Transport (OT) and, in particular, neural OT solvers is much less explored and limited by few recent works (excluding WGAN-based approaches which utilize OT as a loss function and do not model OT maps themselves). In our work, we bridge the gap between EBMs and Entropy-regularized OT. We present a novel methodology which allows utilizing the recent developments and technical improvements of the former in order to enrich the latter. From the theoretical perspective, we prove generalization bounds for our technique. In practice, we validate its applicability in toy 2D and image domains. To showcase the scalability, we empower our method with a pre-trained StyleGAN and apply it to high-res AFHQ\n512\n\u00d7\n512\nunpaired I2I translation. For simplicity, we choose simple short- and long-run EBMs as a backbone of our Energy-guided Entropic OT approach, leaving the application of more sophisticated EBMs for future research. Our code is available at: https://github.com/PetrMokrov/Energy-guided-Entropic-OT", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18274"} +{"video_file": "d94x0gWTUX_39019100.mp4", "openreview_id": "d94x0gWTUX", "slideslive_id": 39019100, "venue": "iclr2024", "title": "Tool-Augmented Reward Modeling", "status": "Spotlight", "keywords": "Reward Model;Large Language Model;Tool Learning;Augmented Language Model", "tldr": "This paper introduces tool-augmented reward models for reinforcement learning from human feedback (RLHF), improving precision and interpretability and contributing a comprehensive dataset from seven diverse tool APIs to advance the field.", "abstract": "Reward modeling (a.k.a., preference modeling) is instrumental for aligning large language models with human preferences, particularly within the context of reinforcement learning from human feedback (RLHF). 
While conventional reward models (RMs) have exhibited remarkable scalability, they oft struggle with fundamental functionality such as arithmetic computation, code execution, and factual lookup. In this paper, we propose a tool-augmented preference modeling approach, named Themis, to address these limitations by empowering RMs with access to external environments, including calculators and search engines. This approach not only fosters synergy between tool utilization and reward grading but also enhances interpretive capacity and scoring reliability. Our study delves into the integration of external tools into RMs, enabling them to interact with diverse external sources and construct task-specific tool engagement and reasoning traces in an autoregressive manner. We validate our approach across a wide range of domains, incorporating seven distinct external tools. Our experimental results demonstrate a noteworthy overall improvement of 17.7% across eight tasks in preference ranking. Furthermore, our approach outperforms Gopher 280B by 7.3% on TruthfulQA task in zero-shot evaluation. In human evaluations, RLHF trained with Themis attains an average win rate of 32% when compared to baselines across four distinct tasks. Additionally, we provide a comprehensive collection of tool-related RM datasets, incorporating data from seven distinct tool APIs, totaling 15,000 instances. We have made the code, data, and model checkpoints publicly available to facilitate and inspire further research advancements (https://github.com/ernie-research/Tool-Augmented-Reward-Model).", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18272"} +{"video_file": "dKl6lMwbCy_39017721.mp4", "openreview_id": "dKl6lMwbCy", "slideslive_id": 39017721, "venue": "iclr2024", "title": "Peering Through Preferences: Unraveling Feedback Acquisition for Aligning Large Language Models", "status": "Poster", "keywords": "LLMs;Sparse Feedback;Ratings;Rankings;Inconsistency;Evaluation", "tldr": "We investigate how the choice of sparse feedback, such as ratings and rankings, impact the alignment and evaluation of large language models.", "abstract": "Aligning large language models (LLMs) with human values and intents critically involves the use of human or AI feedback. While dense feedback annotations are expensive to acquire and integrate, sparse feedback presents a structural design choice between ratings (e.g., score Response A on a scale of 1-7) and rankings (e.g., is Response A better than Response B?). In this work, we analyze the effect of this design choice for the alignment and evaluation of LLMs. We uncover an inconsistency problem wherein the preferences inferred from ratings and rankings significantly disagree 60% for both human and AI annotators. Our subsequent analysis identifies various facets of annotator biases that explain this phenomena such as human annotators would rate denser responses higher while preferring accuracy during pairwise judgments, for a particular comparison instance. To our surprise, we observe that the choice of feedback protocol has a significant effect on the evaluation of aligned LLMs. In particular, we find that LLMs that leverage rankings data for alignment (say model X) are preferred over those that leverage ratings data (say model Y), with a rank-based evaluation protocol (is X/Y's response better than reference response?) 
but not with a rating-based evaluation protocol (score Rank X/Y's response on a scale of 1-7). Our findings thus shed light on critical gaps in methods for evaluating the real-world utility of language models and their strong dependence on the feedback protocol used for alignment. Our code and data are available at \\url{https://github.com/Hritikbansal/sparse_feedback}.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18266"} +{"video_file": "dPHLbUqGbr_39019010.mp4", "openreview_id": "dPHLbUqGbr", "slideslive_id": 39019010, "venue": "iclr2024", "title": "Fast, Expressive $\\mathrm{SE}(n)$ Equivariant Networks through Weight-Sharing in Position-Orientation Space", "status": "Poster", "keywords": "Equivariance;Point Clouds;Message Passing Neural Network;Molecules;Diffusion Model", "tldr": "We propose efficient SE(d) equivariant networks using group convolutions over position-orientation space and achieve state-of-the-art performance on 2D and 3D data.", "abstract": "Based on the theory of homogeneous spaces we derive geometrically optimal edge attributes to be used within the flexible message-passing framework. We formalize the notion of weight sharing in convolutional networks as the sharing of message functions over point-pairs that should be treated equally. We define equivalence classes of point-pairs that are identical up to a transformation in the group and derive attributes that uniquely identify these classes. Weight sharing is then obtained by conditioning message functions on these attributes. As an application of the theory, we develop an efficient equivariant group convolutional network for processing 3D point clouds. The theory of homogeneous spaces tells us how to do group convolutions with feature maps over the homogeneous space of positions $\\mathbb{R}^3$, position and orientations $\\mathbb{R}^3 \\times S^2$, and the group $\\mathrm{SE}(3)$ itself. Among these, $\\mathbb{R}^3 \\times S^2$ is an optimal choice due to the ability to represent directional information, which $\\mathbb{R}^3$ methods cannot, and it significantly enhances computational efficiency compared to indexing features on the full $\\mathrm{SE}(3)$ group. We support this claim with state-of-the-art results \u2014in accuracy and speed\u2014 on five different benchmarks in 2D and 3D, including interatomic potential energy prediction, trajectory forecasting in N-body systems, and generating molecules via equivariant diffusion models.\nCode available at https://github.com/ebekkers/ponita", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18259"} +{"video_file": "dbQH9AOVd5_39019284.mp4", "openreview_id": "dbQH9AOVd5", "slideslive_id": 39019284, "venue": "iclr2024", "title": "Stable Anisotropic Regularization", "status": "Poster", "keywords": "isotropy;LLMs;outlier dimensions", "tldr": "We propose I-STAR: IsoScore*-based STable Anisotropic Regularization and show that decreasing isotropy in model representations improves downstream performance.", "abstract": "Given the success of Large Language Models (LLMs), there has been considerable interest in studying the properties of model activations. The literature overwhelmingly agrees that LLM representations are dominated by a few ``outlier dimensions'' with exceedingly high variance and magnitude. 
Several studies in Natural Language Processing (NLP) have sought to mitigate the impact of such outlier dimensions and force LLMs to be isotropic (i.e., have uniform variance across all dimensions in embedding space). Isotropy is thought to be a desirable property for LLMs that improves model performance and more closely aligns textual representations with human intuition. However, many claims regarding isotropy in NLP have been based on the average cosine similarity of embeddings, which has recently been shown to be a flawed measure of isotropy. In this paper, we propose I-STAR: IsoScore\n\u22c6\n-based STable Anisotropic Regularization, a novel regularization method that can be used to increase or decrease levels of isotropy in embedding space during training. I-STAR uses IsoScore\n\u22c6\n, the first accurate measure of isotropy that is both differentiable and stable on mini-batch computations. In contrast to several previous works, we find that \\textit{decreasing} isotropy in contextualized embeddings improves performance on the majority of tasks and models considered in this paper.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18254"} +{"video_file": "duyA42HlCK_39017703.mp4", "openreview_id": "duyA42HlCK", "slideslive_id": 39017703, "venue": "iclr2024", "title": "HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion", "status": "Poster", "keywords": "Human Image Generation;Latent Structural Diffusion", "tldr": "We propose a unified framework, HyperHuman with latent structural diffusion, that generates in-the-wild human images of high realism and diverse layouts.", "abstract": "Despite significant advances in large-scale text-to-image models, achieving hyper-realistic human image generation remains a desirable yet unsolved task. Existing models like Stable Diffusion and DALL\u00b7E 2 tend to generate human images with incoherent parts or unnatural poses. To tackle these challenges, our key insight is that human image is inherently structural over multiple granularities, from the coarse-level body skeleton to fine-grained spatial geometry. Therefore, capturing such correlations between the explicit appearance and latent structure in one model is essential to generate coherent and natural human images. To this end, we propose a unified framework, HyperHuman, that generates in-the-wild human images of high realism and diverse layouts. Specifically, 1) we first build a large-scale human-centric dataset, named HumanVerse, which consists of 340M images with comprehensive annotations like human pose, depth, and surface normal. 2) Next, we propose a Latent Structural Diffusion Model that simultaneously denoises the depth and surface normal along with the synthesized RGB image. Our model enforces the joint learning of image appearance, spatial relationship, and geometry in a unified network, where each branch in the model complements to each other with both structural awareness and textural richness. 3) Finally, to further boost the visual quality, we propose a Structure-Guided Refiner to compose the predicted conditions for more detailed generation of higher resolution. 
Extensive experiments demonstrate that our framework yields the state-of-the-art performance, generating hyper-realistic human images under diverse scenarios.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18239"} +{"video_file": "dyrGMhicMw_39018617.mp4", "openreview_id": "dyrGMhicMw", "slideslive_id": 39018617, "venue": "iclr2024", "title": "Initializing Models with Larger Ones", "status": "Spotlight", "keywords": "Deep Learning;Neural Networks;Weight Initialization;Small Models;Computer Vision", "tldr": "Selecting weights from a pretrained large model to initialize a small model improves accuracy and reduces training time", "abstract": "Weight initialization plays an important role in neural network training. Widely used initialization methods are proposed and evaluated for networks that are trained from scratch. However, the growing number of pretrained models now offers new opportunities for tackling this classical problem of weight initialization. In this work, we introduce weight selection, a method for initializing smaller models by selecting a subset of weights from a pretrained larger model. This enables the transfer of knowledge from pretrained weights to smaller models. Our experiments demonstrate that weight selection can significantly enhance the performance of small models and reduce their training time. Notably, it can also be used together with knowledge distillation. Weight selection offers a new approach to leverage the power of pretrained models in resource-constrained settings, and we hope it can be a useful tool for training small models in the large-model era.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18236"} +{"video_file": "e4xS9ZarDr_39017699.mp4", "openreview_id": "e4xS9ZarDr", "slideslive_id": 39017699, "venue": "iclr2024", "title": "Lion Secretly Solves a Constrained Optimization: As Lyapunov Predicts", "status": "Spotlight", "keywords": "Lion;Optimization;Lyapunov Analysis", "tldr": "This work shows that the lion optimizer is performing a constrained optimization, and the key design choices of lion is equivalent to performing a hamiltonian mirror descent.", "abstract": "Lion (Evolved Sign Momentum), a new optimizer discovered through program search, has shown promising results in training large AI models. It achieves results comparable to AdamW but with greater memory efficiency. As what we can expect from the result of the random search, Lion blends a number of elements from existing algorithms, including signed momentum, decoupled weight decay, Polyak and Nesterov momentum, but doesn't fit into any existing category of theoretically grounded optimizers. Thus, even though Lion appears to perform well as a general-purpose optimizer for a wide range of tasks, its theoretical basis remains uncertain. This absence of theoretical clarity limits opportunities to further enhance and expand Lion's efficacy. This work aims to demystify Lion. Using both continuous-time and discrete-time analysis, we demonstrate that Lion is a novel and theoretically grounded approach for minimizing a general loss function f(x) while enforcing a bound constraint ||x||\u221e \u2264 1/\u03bb. Lion achieves this through the incorporation of decoupled weight decay, where \u03bb represents the weight decay coefficient. Our analysis is facilitated by the development of a new Lyapunov function for the Lion updates. 
It applies to a wide range of Lion-\u03d5 algorithms, where the sign(\u22c5) operator in Lion is replaced by the subgradient of a convex function \u03d5, leading to the solution of the general composite optimization problem min_x f(x) + \u03d5\u2217(x). Our findings provide valuable insights into the dynamics of Lion and pave the way for further enhancements and extensions of Lion-related algorithms.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18232"} +{"video_file": "eMHn77ZKOp_39017695.mp4", "openreview_id": "eMHn77ZKOp", "slideslive_id": 39017695, "venue": "iclr2024", "title": "Combinatorial Bandits for Maximum Value Reward Function under Value-Index Feedback", "status": "Poster", "keywords": "Combinatorial multi-armed bandit;$k$-MAX bandit;value-index feedback;maximum reward function", "tldr": "We studied a combinatorial MAB problem for max reward function under a new feedback structure.", "abstract": "We investigate the combinatorial multi-armed bandit problem where an action is to select k arms from a set of base arms, and its reward is the maximum of the sample values of these k arms, under a weak feedback structure that only returns the value and index of the arm with the maximum value. This novel feedback structure is much weaker than the semi-bandit feedback previously studied and is only slightly stronger than the full-bandit feedback, and thus it presents a new challenge for the online learning task. We propose an algorithm and derive a regret bound for instances where arm outcomes follow distributions with finite supports. Our algorithm introduces a novel concept of biased arm replacement to address the weak feedback challenge, and it achieves a distribution-dependent regret bound of O((k/\u0394)log(T)) and a distribution-independent regret bound of O~(T), where \u0394 is the reward gap and T is the time horizon. Notably, our regret bound is comparable to the bounds obtained under the more informative semi-bandit feedback. We demonstrate the effectiveness of our algorithm through experimental results.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/18225"} +{"video_file": "eT6oLkm1cm_39019028.mp4", "openreview_id": "eT6oLkm1cm", "slideslive_id": 39019028, "venue": "iclr2024", "title": "Annealing Self-Distillation Rectification Improves Adversarial Training", "status": "Poster", "keywords": "Adversarial training;Adversarial robustness", "tldr": "We propose a data-driven label rectification technique to mitigate robust overfitting.", "abstract": "In standard adversarial training, models are optimized to fit invariant one-hot labels for adversarial data when the perturbations are within allowable budgets. However, the overconfident target harms generalization and causes the problem of robust overfitting. To address this issue and enhance adversarial robustness, we analyze the characteristics of robust models and identify that robust models tend to produce smoother and well-calibrated outputs. Based on the observation, we propose a simple yet effective method, Annealing Self-Distillation Rectification (ADR), which generates soft labels as a better guidance mechanism that reflects the underlying distribution of data. By utilizing ADR, we can obtain rectified labels that improve model robustness without the need for pre-trained models or extensive extra computation. 
Moreover, our method facilitates seamless plug-and-play integration with other adversarial training techniques by replacing the hard labels in their objectives. We demonstrate the efficacy of ADR through extensive experiments and strong performances across datasets.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/18220"} +{"video_file": "eY7sLb0dVF_39018908.mp4", "openreview_id": "eY7sLb0dVF", "slideslive_id": 39018908, "venue": "iclr2024", "title": "Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs", "status": "Poster", "keywords": "Time Series Generation;Koopman Theory; Variational Autoencoder; Generative Modeling", "tldr": "We introduce Koopman VAE (KoVAE) for time series generation, which is based on a novel design for the model prior, and which can be optimized for either regular and irregular training data.", "abstract": "Generating realistic time series data is important for many engineering and scientific applications. Existing work tackles this problem using generative adversarial networks (GANs). However, GANs are unstable during training, and they can suffer from mode collapse. While variational autoencoders (VAEs) are known to be more robust to the these issues, they are (surprisingly) less considered for time series generation. In this work, we introduce Koopman VAE (KoVAE), a new generative framework that is based on a novel design for the model prior, and that can be optimized for either regular and irregular training data. Inspired by Koopman theory, we represent the latent conditional prior dynamics using a linear map. Our approach enhances generative modeling with two desired features: (i) incorporating domain knowledge can be achieved by leveraging spectral tools that prescribe constraints on the eigenvalues of the linear map; and (ii) studying the qualitative behavior and stability of the system can be performed using tools from dynamical systems theory. Our results show that KoVAE outperforms state-of-the-art GAN and VAE methods across several challenging synthetic and real-world time series generation benchmarks. Whether trained on regular or irregular data, KoVAE generates time series that improve both discriminative and predictive metrics. We also present visual evidence suggesting that KoVAE learns probability density functions that better approximate the empirical ground truth distribution.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18218"} +{"video_file": "efFmBWioSc_39017424.mp4", "openreview_id": "efFmBWioSc", "slideslive_id": 39017424, "venue": "iclr2024", "title": "Multimodal Web Navigation with Instruction-Finetuned Foundation Models", "status": "Poster", "keywords": "Web Navigation;Foundation Models;Large Language Models;Instruction Finetuning;Decision Making;Multimodal Document Understanding", "tldr": "We propose an offline multimodal agent for autonomous web navigation based on instruction-finetuned large language models, that achieves comparable performance to humans and RL-finetuned SoTA agents.", "abstract": "The progress of autonomous web navigation has been hindered by the dependence on billions of exploratory interactions via online reinforcement learning, and domain-specific model designs that make it difficult to leverage generalization from rich out-of-domain data. In this work, we study data-driven offline training for web agents with vision-language foundation models. 
We propose an instruction-following multimodal agent, WebGUM, that observes both webpage screenshots and HTML pages and outputs web navigation actions, such as click and type. WebGUM is trained by jointly finetuning an instruction-finetuned language model and a vision encoder with temporal and local perception on a large corpus of demonstrations. We empirically demonstrate this recipe improves the agent's ability of grounded multimodal perception, HTML comprehension, and multi-step reasoning, outperforming prior works by a significant margin. On the MiniWoB, we improve over the previous best offline methods by more than 45.8%, even outperforming online-finetuned SoTA, humans, and GPT-4-based agent. On the WebShop benchmark, our 3-billion-parameter model achieves superior performance to the existing SoTA, PaLM-540B. Furthermore, WebGUM exhibits strong positive transfer to the real-world planning tasks on the Mind2Web. We also collect 347K high-quality demonstrations using our trained models, 38 times larger than prior work, and make them available to promote future research in this direction.", "primary_area": "applications to robotics, autonomy, planning", "site": "https://iclr.cc/virtual/2024/poster/18215"} +{"video_file": "eo9dHwtTFt_39017687.mp4", "openreview_id": "eo9dHwtTFt", "slideslive_id": 39017687, "venue": "iclr2024", "title": "Consciousness-Inspired Spatio-Temporal Abstractions for Better Generalization in Reinforcement Learning", "status": "Poster", "keywords": "Reinforcement Learning;Planning;Neural Networks;Temporal Difference Learning;Generalization;Deep Reinforcement Learning", "tldr": "Planning for better generalization by using abstraction in both space and time", "abstract": "Inspired by human conscious planning, we propose Skipper, a model-based reinforcement learning framework utilizing spatio-temporal abstractions to generalize better in novel situations. It automatically decomposes the given task into smaller, more manageable subtasks, and thus enables sparse decision-making and focused computation on the relevant parts of the environment. The decomposition relies on the extraction of an abstracted proxy problem represented as a directed graph, in which vertices and edges are learned end-to-end from hindsight. Our theoretical analyses provide performance guarantees under appropriate assumptions and establish where our approach is expected to be helpful. Generalization-focused experiments validate Skipper\u2019s significant advantage in zero-shot generalization, compared to some existing state-of-the-art hierarchical planning methods.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18208"} +{"video_file": "ey3GhWXQ97_39019235.mp4", "openreview_id": "ey3GhWXQ97", "slideslive_id": 39019235, "venue": "iclr2024", "title": "Sample-Efficiency in Multi-Batch Reinforcement Learning: The Need for Dimension-Dependent Adaptivity", "status": "Poster", "keywords": "RL Theory;Low adaptive RL;Linear RL;Multi-Batch RL", "tldr": "We show that dimension-dependent adaptivity is necessary for sample efficiency in linear RL.", "abstract": "We theoretically explore the relationship between sample-efficiency and adaptivity in reinforcement learning. An algorithm is sample-efficient if it uses a number of queries\nn\nto the environment that is polynomial in the dimension\nd\nof the problem. Adaptivity refers to the frequency at which queries are sent and feedback is processed to update the querying strategy. 
To investigate this interplay, we employ a learning framework that allows sending queries in K batches, with feedback being processed and queries updated after each batch. This model encompasses the whole adaptivity spectrum, ranging from non-adaptive `offline' (K=1) to fully adaptive (K=n) scenarios, and regimes in between. For the problems of policy evaluation and best-policy identification under d-dimensional linear function approximation, we establish \u03a9(log log d) lower bounds on the number of batches K required for sample-efficient algorithms with n = O(poly(d)) queries. Our results show that just having adaptivity (K>1) does not necessarily guarantee sample-efficiency. Notably, the adaptivity-boundary for sample-efficiency is not between offline reinforcement learning (K=1), where sample-efficiency was known to not be possible, and adaptive settings. Instead, the boundary lies between different regimes of adaptivity and depends on the problem dimension.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18203"} +{"video_file": "farT6XXntP_39017668.mp4", "openreview_id": "farT6XXntP", "slideslive_id": 39017668, "venue": "iclr2024", "title": "A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models", "status": "Poster", "keywords": "Machine Translation;Language Language Models;Multilingual", "tldr": "We introduce a novel training recipe for decoder-only LLMs in translation which beats NLLB-54B and GPT-3.5 with 7B/13B model size.", "abstract": "Generative Large Language Models (LLMs) have achieved remarkable advancements in various NLP tasks. However, these advances have not been reflected in the translation task, especially those with moderate model sizes (i.e., 7B or 13B parameters), which still lag behind conventional supervised encoder-decoder translation models. Previous studies have attempted to improve the translation capabilities of these LLMs, but their gains have been limited. In this study, we propose a novel fine-tuning approach for LLMs that is specifically designed for the translation task, eliminating the need for the abundant parallel data that traditional translation models usually depend on. Our approach consists of two fine-tuning stages: initial fine-tuning on monolingual data followed by subsequent fine-tuning on a small set of high-quality parallel data. We introduce the LLM developed through this strategy as Advanced Language Model-based trAnslator (ALMA). Based on LLaMA-2 as our underlying model, our results show that the model can achieve an average improvement of more than 12 BLEU and 12 COMET over its zero-shot performance across 10 translation directions from the WMT'21 (2 directions) and WMT'22 (8 directions) test datasets. The performance is significantly better than all prior work and even superior to the NLLB-54B model \\citep{nllb} and GPT-3.5-text-davinci-003, with only 7B or 13B parameters. 
This method establishes the foundation for a novel training paradigm in machine translation.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18182"} +{"video_file": "fe6ANBxcKM_39017667.mp4", "openreview_id": "fe6ANBxcKM", "slideslive_id": 39017667, "venue": "iclr2024", "title": "Federated Q-Learning: Linear Regret Speedup with Low Communication Cost", "status": "Poster", "keywords": "Federated Learning;Reinforcement Learning;Q-Learning;Regret;Communication Cost", "tldr": "This paper proposes two model-free federated reinforcement learning algorithms that achieve linear regret speedup with logarithmic communication cost.", "abstract": "In this paper, we consider federated reinforcement learning for tabular episodic Markov Decision Processes (MDP) where, under the coordination of a central server, multiple agents collaboratively explore the environment and learn an optimal policy without sharing their raw data. While linear speedup in the number of agents has been achieved for some metrics, such as convergence rate and sample complexity, in similar settings, it is unclear whether it is possible to design a model-free algorithm to achieve linear regret speedup with low communication cost. We propose two federated Q-Learning algorithms termed as FedQ-Hoeffding and FedQ-Bernstein, respectively, and show that the corresponding total regrets achieve a linear speedup compared with their single-agent counterparts, while the communication cost scales logarithmically in the total number of time steps\nT\n. Those results rely on an event-triggered synchronization mechanism between the agents and the server, a novel step size selection when the server aggregates the local estimates of the state-action values to form the global estimates, and a set of new concentration inequalities to bound the sum of non-martingale differences. This is the first work showing that linear regret speedup and logarithmic communication cost can be achieved by model-free algorithms in federated reinforcement learning.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18180"} +{"video_file": "fgKjiVrm6u_39017665.mp4", "openreview_id": "fgKjiVrm6u", "slideslive_id": 39017665, "venue": "iclr2024", "title": "REFACTOR: Learning to Extract Theorems from Proofs", "status": "Poster", "keywords": "theorem extraction;mathematical reasoning;theorem proving", "tldr": "We extract useful mathematical theorems using graph neural networks, evaluating on several downstream tasks to demonstrate their great utility.", "abstract": "Human mathematicians are often good at recognizing modular and reusable theorems that make complex mathematical results within reach. In this paper, we propose a novel method called theoREm-from-prooF extrACTOR (REFACTOR) for training neural networks to mimic this ability in formal mathematical theorem proving. We show on a set of unseen proofs, REFACTOR is able to extract 19.6% of the theorems that humans would use to write the proofs. When applying the model to the existing Metamath library, REFACTOR extracted 16 new theorems. With newly extracted theorems, we show that the existing proofs in the MetaMath database can be refactored. The new theorems are used very frequently after refactoring, with an average usage of 733.5 times, and help shorten the proof lengths. 
Lastly, we demonstrate that the prover trained on the new-theorem refactored dataset proves more test theorems and outperforms state-of-the-art baselines by frequently leveraging a diverse set of newly extracted theorems. Code can be found at https://github.com/jinpz/refactor.", "primary_area": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18178"} +{"video_file": "fjpfCOV4ru_39017660.mp4", "openreview_id": "fjpfCOV4ru", "slideslive_id": 39017660, "venue": "iclr2024", "title": "Ito Diffusion Approximation of Universal Ito Chains for Sampling, Optimization and Boosting", "status": "Poster", "keywords": "markov chains;diffusion processes", "tldr": "Approximating General Markov Chains by Diffusion Processes without assuming Gaussian noise.", "abstract": "In this work, we consider a rather general and broad class of Markov chains, Ito chains, that look like Euler-Maruyama discretization of some Stochastic Differential Equation. The chain we study is a unified framework for theoretical analysis. It comes with almost arbitrary isotropic and state-dependent noise instead of normal and state-independent one as in most related papers. Moreover, in our chain the drift and diffusion coefficient can be inexact in order to cover a wide range of applications such as Stochastic Gradient Langevin Dynamics, sampling, Stochastic Gradient Descent or Stochastic Gradient Boosting. We prove the bound in W_2-distance between the laws of our Ito chain and the corresponding differential equation. These results improve or cover most of the known estimates. And for some particular cases, our analysis is the first.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18173"} +{"video_file": "fwCoLe3TAX_39019148.mp4", "openreview_id": "fwCoLe3TAX", "slideslive_id": 39019148, "venue": "iclr2024", "title": "Improving Generalization of Alignment with Human Preferences through Group Invariant Learning", "status": "Spotlight", "keywords": "alignment;language model;invariant learning", "tldr": "This paper introduces a novel method for aligning AI assistants with human preferences, boosting RLHF training stability and improving the model\u2019s generalization across various domains.", "abstract": "The success of AI assistants based on language models (LLMs) hinges crucially on Reinforcement Learning from Human Feedback (RLHF), which enables the generation of responses more aligned with human preferences. As universal AI assistants, there's a growing expectation for them to perform consistently across various domains. However, previous work shows that Reinforcement Learning (RL) often exploits shortcuts to attain high rewards and overlooks challenging samples. This focus on quick reward gains undermines both the stability in training and the model's ability to generalize to new, unseen data. In this work, we propose a novel approach that can learn a consistent policy via RL across various data groups or domains. Given the challenges associated with acquiring group annotations, our method automatically classifies data into different groups, deliberately maximizing performance variance. Then, we optimize the policy to perform well on challenging groups. Lastly, leveraging the established groups, our approach adaptively adjusts the exploration space, allocating more learning capacity to more challenging data and preventing the model from over-optimizing on simpler data. 
Experimental results indicate that our approach significantly enhances training stability and model generalization.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18165"} +{"video_file": "g52tgL8jy6_39017652.mp4", "openreview_id": "g52tgL8jy6", "slideslive_id": 39017652, "venue": "iclr2024", "title": "A Progressive Training Framework for Spiking Neural Networks with Learnable Multi-hierarchical Model", "status": "Poster", "keywords": "Spiking Neural Networks;Learnable Multi-hierarchical Model;Spatio-Temporal Back-propagation", "tldr": "We propose Learnable Multi-hierarchical (LM-H) neuron, which is an advanced model that can dynamically regulate the extraction ratio between historical and current representation.", "abstract": "Spiking Neural Networks (SNNs) have garnered considerable attention due to their energy efficiency and unique biological characteristics. However, the widely adopted Leaky Integrate-and-Fire (LIF) model, as the mainstream neuron model in current SNN research, has been revealed to exhibit significant deficiencies in deep-layer gradient calculation and capturing global information on the time dimension. In this paper, we propose the Learnable Multi-hierarchical (LM-H) model to address these issues by dynamically regulating its membrane-related factors. We point out that the LM-H model fully encompasses the information representation range of the LIF model while offering the flexibility to adjust the extraction ratio between historical and current information. Additionally, we theoretically demonstrate the effectiveness of the LM-H model and the functionality of its internal parameters, and propose a progressive training algorithm tailored specifically for the LM-H model. Furthermore, we devise an efficient training framework for our novel advanced model, encompassing hybrid training and time-slicing online training. Through extensive experiments on various datasets, we validate the remarkable superiority of our model and training algorithm compared to previous state-of-the-art approaches. Code is available at https://github.com/hzc1208/STBP_LMH.", "primary_area": "applications to neuroscience & cognitive science", "site": "https://iclr.cc/virtual/2024/poster/18160"} +{"video_file": "g6rZtxaXRm_39017650.mp4", "openreview_id": "g6rZtxaXRm", "slideslive_id": 39017650, "venue": "iclr2024", "title": "Follow-Up Differential Descriptions: Language Models Resolve Ambiguities for Image Classification", "status": "Poster", "keywords": "representation learning for computer vision;image classification;vision-language models;large language models;CLIP;GPT-3;LLAMA", "tldr": "We propose a novel zero-shot approach that uses natural language to provide vision-language models with differentiating information about classes in downstream image recognition tasks.", "abstract": "A promising approach for improving the performance of vision-language models like CLIP for image classification is to extend the class descriptions (i.e., prompts) with related attributes, e.g., using brown sparrow instead of sparrow. However, current zero-shot methods select a subset of attributes regardless of commonalities between the target classes, potentially providing no useful information that would have helped to distinguish between them. For instance, they may use color instead of bill shape to distinguish between sparrows and wrens, which are both brown. 
We propose Follow-up Differential Descriptions (FuDD), a zero-shot approach that tailors the class descriptions to each dataset and leads to additional attributes that better differentiate the target classes. FuDD first identifies the ambiguous classes for each image, and then uses a Large Language Model (LLM) to generate new class descriptions that differentiate between them. The new class descriptions resolve the initial ambiguity and help predict the correct label. In our experiments, FuDD consistently outperforms generic description ensembles and naive LLM-generated descriptions on 12 datasets. We show that differential descriptions are an effective tool to resolve class ambiguities, which otherwise significantly degrade the performance. We also show that high quality natural language class descriptions produced by FuDD result in comparable performance to few-shot adaptation methods.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18158"} +{"video_file": "g8sGBSQjYk_39017648.mp4", "openreview_id": "g8sGBSQjYk", "slideslive_id": 39017648, "venue": "iclr2024", "title": "On the Parameterization of Second-Order Optimization Effective towards the Infinite Width", "status": "Poster", "keywords": "Deep learning;Second-order optimization;K-FAC;Feature learning;Infinite width;Maximum update parameterization", "tldr": "We uncover the parameterization of second-order optimization methods that enable feature learning in infinite-width neural networks and demonstrate its benefits.", "abstract": "Second-order optimization has been developed to accelerate the training of deep neural networks and it is being applied to increasingly larger-scale models. In this study, towards training on further larger scales, we identify a specific parameterization for second-order optimization that promotes feature learning in a stable manner even if the network width increases significantly. Inspired by a maximal update parametrization, we consider a one-step update of the gradient and reveal the appropriate scales of hyperparameters including random initialization, learning rates, and damping terms. Our approach covers two major second-order optimization algorithms, K-FAC and Shampoo, and we demonstrate that our parametrization achieves higher generalization performance in feature learning. In particular, it enables us to transfer the hyperparameters across models with different widths.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18156"} +{"video_file": "g90ysX1sVs_39017647.mp4", "openreview_id": "g90ysX1sVs", "slideslive_id": 39017647, "venue": "iclr2024", "title": "Adaptive Rational Activations to Boost Deep Reinforcement Learning", "status": "Spotlight", "keywords": "Deep Reinforcement Learning;Neural Plasticity;Activation Functions;Rational Functions", "tldr": "We exhibit the importance of plasticity for reinforcement learning, and propose to use learnable rational activation functions to augment agents' plasticity.", "abstract": "Latest insights from biology show that intelligence not only emerges from the connections between neurons, but that individual neurons shoulder more computational responsibility than previously anticipated. Specifically, neural plasticity should be critical in the context of constantly changing reinforcement learning (RL) environments, yet current approaches still primarily employ static activation functions. 
In this work, we motivate the use of adaptable activation functions in RL and show that rational activation functions are particularly suitable for augmenting plasticity. Inspired by residual networks, we derive a condition under which rational units are closed under residual connections and formulate a naturally regularised version. The proposed joint-rational activation allows for desirable degrees of flexibility, yet regularises plasticity to an extent that avoids overfitting by leveraging a mutual set of activation function parameters across layers. We demonstrate that equipping popular algorithms with (joint) rational activations leads to consistent improvements on different games from the Atari Learning Environment benchmark, notably making DQN competitive to DDQN and Rainbow.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18155"} +{"video_file": "g9diuvxN6D_39019280.mp4", "openreview_id": "g9diuvxN6D", "slideslive_id": 39019280, "venue": "iclr2024", "title": "Evaluating the Zero-shot Robustness of Instruction-tuned Language Models", "status": "Spotlight", "keywords": "Instruction Tuning;Robustness;Large Language Models", "tldr": "An evaluation of O.O.D. robustness of instruction-tuned LLMs with different instructions w.r.t. the tuning process", "abstract": "Instruction fine-tuning has recently emerged as a promising approach for improving the zero-shot capabilities of Large Language Models (LLMs) on new tasks. This technique has shown particular strength in improving the performance of modestly sized LLMs, sometimes inducing performance competitive with much larger model variants. In this paper, we ask two questions: (1) How sensitive are instruction-tuned models to the particular phrasings of instructions, and, (2) How can we make them more robust to such natural language variation? To answer the former, we collect a set of 319 instructions manually written by NLP practitioners for over 80 unique tasks included in widely used benchmarks, and we evaluate the variance and average performance of these instructions as compared to instruction phrasings observed during instruction fine-tuning. We find that using novel (unobserved) but appropriate instruction phrasings consistently degrades model performance, sometimes substantially so. Further, such natural instructions yield a wide variance in downstream performance, despite their semantic equivalence. Put another way, instruction-tuned models are not especially robust to instruction re-phrasings. We propose a simple method to mitigate this issue by introducing ``soft prompt'' embedding parameters and optimizing these to maximize the similarity between representations of semantically equivalent instructions. We show that this method consistently improves the robustness of instruction-tuned models.", "primary_area": "visualization or interpretation of learned representations", "site": "https://iclr.cc/virtual/2024/poster/18154"} +{"video_file": "gjeQKFxFpZ_39017634.mp4", "openreview_id": "gjeQKFxFpZ", "slideslive_id": 39017634, "venue": "iclr2024", "title": "Can LLMs Express Their Uncertainty? 
An Empirical Evaluation of Confidence Elicitation in LLMs", "status": "Poster", "keywords": "uncertainty quantification;uncertainty estimation;calibration;failure prediction;large language models;black-box language models;LLM evaluation", "tldr": "We propose a framework for eliciting confidence in black-box LLMs, revealing that while its calibration improves with model capacity, failure prediction remains a challenge.", "abstract": "Empowering large language models (LLMs) to accurately express confidence in their answers is essential for reliable and trustworthy decision-making. Previous confidence elicitation methods, which primarily rely on white-box access to internal model information or model fine-tuning, have become less suitable for LLMs, especially closed-source commercial APIs. This leads to a growing need to explore the untapped area of black-box approaches for LLM uncertainty estimation. To better break down the problem, we define a systematic framework with three components: prompting strategies for eliciting verbalized confidence, sampling methods for generating multiple responses, and aggregation techniques for computing consistency. We then benchmark these methods on two key tasks\u2014confidence calibration and failure prediction\u2014across five types of datasets (e.g., commonsense and arithmetic reasoning) and five widely-used LLMs including GPT-4 and LLaMA 2 Chat. Our analysis uncovers several key insights: 1) LLMs, when verbalizing their confidence, tend to be overconfident, potentially imitating human patterns of expressing confidence. 2) As model capability scales up, both calibration and failure prediction performance improve, yet still far from ideal performance. 3) Employing our proposed strategies, such as human-inspired prompts, consistency among multiple responses, and better aggregation strategies can help mitigate this overconfidence from various perspectives. 4) Comparisons with white-box methods indicate that while white-box methods perform better, the gap is narrow, e.g., 0.522 to 0.605 in AUROC. Despite these advancements, none of these techniques consistently outperform others, and all investigated methods struggle in challenging tasks, such as those requiring professional knowledge, indicating significant scope for improvement. We believe this study can serve as a strong baseline and provide insights for eliciting confidence in black-box LLMs. The code is publicly available at https://github.com/MiaoXiong2320/llm-uncertainty.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18135"} +{"video_file": "gppLqZLQeY_39017631.mp4", "openreview_id": "gppLqZLQeY", "slideslive_id": 39017631, "venue": "iclr2024", "title": "Efficient Subgraph GNNs by Learning Effective Selection Policies", "status": "Poster", "keywords": "Graph Neural Networks;Subgraphs;Expressive power;Sampling", "tldr": "We propose a novel framework that learns to select subgraphs sequentially in order to reduce the computational cost of Subgraph GNNs.", "abstract": "Subgraph GNNs are provably expressive neural architectures that learn graph representations from sets of subgraphs. Unfortunately, their applicability is hampered by the computational complexity associated with performing message passing on many subgraphs. In this paper, we consider the problem of learning to select a small subset of the large set of possible subgraphs in a data-driven fashion. 
We first motivate the problem by proving that there are families of WL-indistinguishable graphs for which there exist efficient subgraph selection policies: small subsets of subgraphs that can already identify all the graphs within the family. We then propose a new approach, called Policy-Learn, that learns how to select subgraphs in an iterative manner. We prove that, unlike popular random policies and prior work addressing the same problem, our architecture is able to learn the efficient policies mentioned above. Our experimental results demonstrate that Policy-Learn outperforms existing baselines across a wide range of datasets.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/18129"} +{"video_file": "gx2BT0a9MQ_39017628.mp4", "openreview_id": "gx2BT0a9MQ", "slideslive_id": 39017628, "venue": "iclr2024", "title": "ZeRO++: Extremely Efficient Collective Communication for Large Model Training", "status": "Poster", "keywords": "low-precision LLM pretraining;2 bits;auto compression;low memory pretraining", "tldr": "Efficient collective communication design for large model training", "abstract": "Zero Redundancy Optimizer (ZeRO) has been used to train a wide range of large language models on massive GPU clusters due to its ease of use, efficiency, and good scalability. However, when training on low-bandwidth clusters, and/or when small batch size per GPU is used, ZeRO\u2019s effective throughput is limited due to communication overheads. To alleviate this limitation, this paper introduces ZeRO++ composing of three communication volume reduction techniques (lowprecision all-gather, data remapping, and low-precision gradient averaging) to significantly reduce the communication volume up to 4x that enables up to 2.16x better throughput at 384 GPU scale. Our results also show ZeRO++ can speedup the RLHF by 3.3x compared to vanilla ZeRO. To verify the convergence of ZeRO++, we test up to 13B model for pretraining with 8/6-bits all gather and up to 30B model for finetuning with 4/2-bits all gather, and demonstrate on-par accuracy as original ZeRO (aka standard training). As a byproduct, the model trained with ZeRO++ is naturally weight-quantized, which can be directly used for inference without post-training quantization or quantization-aware training.", "primary_area": "infrastructure, software libraries, hardware, etc.", "site": "https://iclr.cc/virtual/2024/poster/18124"} +{"video_file": "h05eQniJsQ_39018950.mp4", "openreview_id": "h05eQniJsQ", "slideslive_id": 39018950, "venue": "iclr2024", "title": "Understanding Certified Training with Interval Bound Propagation", "status": "Poster", "keywords": "Certified Robustness;Adversarial Robustness;Neural Network Verification;Certified Training", "tldr": "We theoretically investigate certified training with Interval Bound Propagation, using a novel metric measuring the tightness of the resulting bounds.", "abstract": "As robustness verification methods are becoming more precise, training certifiably robust neural networks is becoming ever more relevant. To this end, certified training methods compute and then optimize an upper bound on the worst-case loss over a robustness specification. Curiously, training methods based on the imprecise interval bound propagation (IBP) consistently outperform those leveraging more precise bounds. Still, we lack a theoretical understanding of the mechanisms making IBP so successful. 
In this work, we investigate these mechanisms by leveraging a novel metric measuring the tightness of IBP bounds. We first show theoretically that, for deep linear models (DLNs), tightness decreases with width and depth at initialization, but improves with IBP training. We, then, derive sufficient and necessary conditions on weight matrices for IBP bounds to become exact and demonstrate that these impose strong regularization, providing an explanation for the observed robustness-accuracy trade-off. Finally, we show how these results on DLNs transfer to ReLU networks, before conducting an extensive empirical study, (i) confirming this transferability and yielding state-of-the-art certified accuracy, (ii) finding that while all IBP-based training methods lead to high tightness, this increase is dominated by the size of the propagated input regions rather than the robustness specification, and finally (iii) observing that non-IBP-based methods do not increase tightness. Together, these results help explain the success of recent certified training methods and may guide the development of new ones.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/18118"} +{"video_file": "h922Qhkmx1_39017617.mp4", "openreview_id": "h922Qhkmx1", "slideslive_id": 39017617, "venue": "iclr2024", "title": "Multi-Source Diffusion Models for Simultaneous Music Generation and Separation", "status": "Oral", "keywords": "source separation;probabilistic diffusion models;music generation", "tldr": "In this work, we define a diffusion-based generative model which is the first to be capable of both music generation and source separation. We also introduce the partial generation task, where we generate a subset of the sources given the others.", "abstract": "In this work, we define a diffusion-based generative model capable of both music generation and source separation by learning the score of the joint probability density of sources sharing a context. Alongside the classic total inference tasks (i.e., generating a mixture, separating the sources), we also introduce and experiment on the partial generation task of source imputation, where we generate a subset of the sources given the others (e.g., play a piano track that goes well with the drums). Additionally, we introduce a novel inference method for the separation task based on Dirac likelihood functions. We train our model on Slakh2100, a standard dataset for musical source separation, provide qualitative results in the generation settings, and showcase competitive quantitative results in the source separation setting. Our method is the first example of a single model that can handle both generation and separation tasks, thus representing a step toward general audio models.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18110"} +{"video_file": "hB7SlfEmze_39017614.mp4", "openreview_id": "hB7SlfEmze", "slideslive_id": 39017614, "venue": "iclr2024", "title": "PhyloGFN: Phylogenetic inference with generative flow networks", "status": "Poster", "keywords": "Phylogenetic Inference;GFlowNets;Bayesian Inference;Deep Generative Modeling", "tldr": "We use generative flow networks as amortized samplers of phylogenetic trees, achieving strong results in both Bayesian and parsimony-based phylogenetic inference.", "abstract": "Phylogenetics is a branch of computational biology that studies the evolutionary relationships among biological entities. 
Its long history and numerous applications notwithstanding, inference of phylogenetic trees from sequence data remains challenging: the high complexity of tree space poses a significant obstacle for the current combinatorial and probabilistic techniques. In this paper, we adopt the framework of generative flow networks (GFlowNets) to tackle two core problems in phylogenetics: parsimony-based and Bayesian phylogenetic inference. Because GFlowNets are well-suited for sampling complex combinatorial structures, they are a natural choice for exploring and sampling from the multimodal posterior distribution over tree topologies and evolutionary distances. We demonstrate that our amortized posterior sampler, PhyloGFN, produces diverse and high-quality evolutionary hypotheses on real benchmark datasets. PhyloGFN is competitive with prior works in marginal likelihood estimation and achieves a closer fit to the target distribution than state-of-the-art variational inference methods.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18107"} +{"video_file": "hILVmJ4Uvu_39017109.mp4", "openreview_id": "hILVmJ4Uvu", "slideslive_id": 39017109, "venue": "iclr2024", "title": "True Knowledge Comes from Practice: Aligning Large Language Models with Embodied Environments via Reinforcement Learning", "status": "Poster", "keywords": "Reinforcement Learning;Large Language Models;Parameter-Efficient Fine-Tuning", "tldr": "Using reinforcement learning to train large language models to align their knowledge with embodied interactive environments.", "abstract": "Despite the impressive performance across numerous tasks, large language models (LLMs) often fail in solving simple decision-making tasks due to the misalignment of the knowledge in LLMs with environments. On the contrary, reinforcement learning (RL) agents learn policies from scratch, which makes them always align with environments but difficult to incorporate prior knowledge for efficient explorations. To narrow the gap, we propose TWOSOME, a novel general online framework that deploys LLMs as decision-making agents to efficiently interact and align with embodied environments via RL without requiring any prepared datasets or prior knowledge of the environments. Firstly, we query the joint probabilities of each valid action with LLMs to form behavior policies. Then, to enhance the stability and robustness of the policies, we propose two normalization methods and summarize four prompt design principles. Finally, we design a novel parameter-efficient training architecture where the actor and critic share one frozen LLM equipped with low-rank adapters (LoRA) updated by PPO. We conduct extensive experiments to evaluate TWOSOME. i) TWOSOME exhibits significantly better sample efficiency and performance compared to the conventional RL method, PPO, and prompt tuning method, SayCan, in both classical decision-making environment, Overcooked, and simulated household environment, VirtualHome. ii) Benefiting from LLMs' open-vocabulary feature, TWOSOME shows superior generalization ability to unseen tasks. 
iii) Under our framework, there is no significant loss of the LLMs' original ability during online PPO finetuning.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18102"} +{"video_file": "hOMVq57Ce0_39017611.mp4", "openreview_id": "hOMVq57Ce0", "slideslive_id": 39017611, "venue": "iclr2024", "title": "Piecewise Linear Parametrization of Policies: Towards Interpretable Deep Reinforcement Learning", "status": "Poster", "keywords": "Reinforcement learning;interpretability;control;navigation;transparency;discrete", "tldr": "We propose a neural policy constrained to express a small number of linear behaviors, and show that it leads to improved interpretability while performing comparably to baselines in several control and navigation tasks.", "abstract": "Learning inherently interpretable policies is a central challenge in the path to developing autonomous agents that humans can trust. Linear policies can justify their decisions while interacting in a dynamic environment, but their reduced expressivity prevents them from solving hard tasks. Instead, we argue for the use of piecewise-linear policies. We carefully study to what extent they can retain the interpretable properties of linear policies while reaching competitive performance with neural baselines. In particular, we propose the HyperCombinator (HC), a piecewise-linear neural architecture expressing a policy with a controllably small number of sub-policies. Each sub-policy is linear with respect to interpretable features, shedding light on the decision process of the agent without requiring an additional explanation model. We evaluate HC policies in control and navigation experiments, visualize the improved interpretability of the agent and highlight its trade-off with performance. Moreover, we validate that the restricted model class that the HyperCombinator belongs to is compatible with the algorithmic constraints of various reinforcement learning algorithms.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18099"} +{"video_file": "hj9ZuNimRl_39017603.mp4", "openreview_id": "hj9ZuNimRl", "slideslive_id": 39017603, "venue": "iclr2024", "title": "Better Neural PDE Solvers Through Data-Free Mesh Movers", "status": "Poster", "keywords": "neural PDE solvers;adaptive moving mesh;neural operators;Monge-Amp\u00e8re equation", "tldr": "This paper introduces a neural-network-based mesh adapter called Data-free Mesh Mover (DMM), which is trained in a physics-informed data-free way. The DMM can be embedded into the neural PDE solver through proper architectural design, called MM-PDE.", "abstract": "Recently, neural networks have been extensively employed to solve partial differential equations (PDEs) in physical system modeling. While major studies focus on learning system evolution on predefined static mesh discretizations, some methods utilize reinforcement learning or supervised learning techniques to create adaptive and dynamic meshes, due to the dynamic nature of these systems. However, these approaches face two primary challenges: (1) the need for expensive optimal mesh data, and (2) the change of the solution space's degree of freedom and topology during mesh refinement. To address these challenges, this paper proposes a neural PDE solver with a neural mesh adapter. To begin with, we introduce a novel data-free neural mesh adaptor, called Data-free Mesh Mover (DMM), with two main innovations. 
Firstly, it is an operator that maps the solution to adaptive meshes and is trained using the Monge-Amp\u00e8re equation without optimal mesh data. Secondly, it dynamically changes the mesh by moving existing nodes rather than adding or deleting nodes and edges. Theoretical analysis shows that meshes generated by DMM have the lowest interpolation error bound. Based on DMM, to efficiently and accurately model dynamic systems, we develop a moving mesh based neural PDE solver (MM-PDE) that embeds the moving mesh with a two-branch architecture and a learnable interpolation framework to preserve information within the data. Empirical experiments demonstrate that our method generates suitable meshes and considerably enhances accuracy when modeling widely considered PDE systems. The code can be found at: https://github.com/Peiyannn/MM-PDE.git.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18088"} +{"video_file": "huGECz8dPp_39018664.mp4", "openreview_id": "huGECz8dPp", "slideslive_id": 39018664, "venue": "iclr2024", "title": "Information Bottleneck Analysis of Deep Neural Networks via Lossy Compression", "status": "Poster", "keywords": "deep learning;information bottleneck principle;stochastic neural networks;lossy compression", "tldr": "We propose and justify a lossy compression step to overcome the obstacles associated with high dimensionality in the information-theoretic approah to DNNs", "abstract": "The Information Bottleneck (IB) principle offers an information-theoretic framework for analyzing the training process of deep neural networks (DNNs). Its essence lies in tracking the dynamics of two mutual information (MI) values: between the hidden layer output and the DNN input/target. According to the hypothesis put forth by Shwartz-Ziv & Tishby (2017), the training process consists of two distinct phases: fitting and compression. The latter phase is believed to account for the good generalization performance exhibited by DNNs. Due to the challenging nature of estimating MI between high-dimensional random vectors, this hypothesis was only partially verified for NNs of tiny sizes or specific types, such as quantized NNs. In this paper, we introduce a framework for conducting IB analysis of general NNs. Our approach leverages the stochastic NN method proposed by Goldfeld et al. (2019) and incorporates a compression step to overcome the obstacles associated with high dimensionality. In other words, we estimate the MI between the compressed representations of high-dimensional random vectors. The proposed method is supported by both theoretical and practical justifications. Notably, we demonstrate the accuracy of our estimator through synthetic experiments featuring predefined MI values and comparison with MINE (Belghazi et al., 2018). 
Finally, we perform IB analysis on a close-to-real-scale convolutional DNN, which reveals new features of the MI dynamics.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/18081"} +{"video_file": "i9wDX850jR_39017592.mp4", "openreview_id": "i9wDX850jR", "slideslive_id": 39017592, "venue": "iclr2024", "title": "Feature emergence via margin maximization: case studies in algebraic tasks", "status": "Spotlight", "keywords": "inductive bias;margin maximization;feature learning;mechanistic interpretability", "tldr": "We prove that algebraic structure in training data leads to emergent Fourier/representation theoretic features in neural networks", "abstract": "Understanding the internal representations learned by neural networks is a cornerstone challenge in the science of machine learning. While there have been significant recent strides in some cases towards understanding how neural networks implement specific target functions, this paper explores a complementary question -- why do networks arrive at particular computational strategies? Our inquiry focuses on the algebraic learning tasks of modular addition, sparse parities, and finite group operations. Our primary theoretical findings analytically characterize the features learned by stylized neural networks for these algebraic tasks. Notably, our main technique demonstrates how the principle of margin maximization alone can be used to fully specify the features learned by the network. Specifically, we prove that the trained networks utilize Fourier features to perform modular addition and employ features corresponding to irreducible group-theoretic representations to perform compositions in general groups, aligning closely with the empirical observations of Nanda et al. (2023) and Chughtai et al. (2023). More generally, we hope our techniques can help to foster a deeper understanding of why neural networks adopt specific computational strategies.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/18073"} +{"video_file": "iAW2EQXfwb_39017591.mp4", "openreview_id": "iAW2EQXfwb", "slideslive_id": 39017591, "venue": "iclr2024", "title": "Negatively Correlated Ensemble Reinforcement Learning for Online Diverse Game Level Generation", "status": "Poster", "keywords": "Level Generation;Video Games;Deep Reinforcement Learning;Ensemble Learning;Regularisation", "tldr": "This paper proposes a regularised ensemble reinforcement learning approach with policy regularisation theorems to train generators that generates diverse and promising game levels in real-time.", "abstract": "Deep reinforcement learning has recently been successfully applied to online procedural content generation in which a policy determines promising game-level segments. However, existing methods can hardly discover diverse level patterns, while the lack of diversity makes the gameplay boring. This paper proposes an ensemble reinforcement learning approach that uses multiple negatively correlated sub-policies to generate different alternative level segments, and stochastically selects one of them following a selector model. A novel policy regularisation technique is integrated into the approach to diversify the generated alternatives. In addition, we develop theorems to provide general methodologies for optimising policy regularisation in a Markov decision process. 
The proposed approach is compared with several state-of-the-art policy ensemble methods and classic methods on a well-known level generation benchmark, with two different reward functions expressing game-design goals from different perspectives. Results show that our approach boosts level diversity notably with competitive performance in terms of the reward. Furthermore, by varying the regularisation coefficient, the trained generators form a well-spread Pareto front, allowing explicit trade-offs between diversity and rewards of generated levels.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18072"} +{"video_file": "iHcTLIor0m_39019271.mp4", "openreview_id": "iHcTLIor0m", "slideslive_id": 39019271, "venue": "iclr2024", "title": "Poly-View Contrastive Learning", "status": "Poster", "keywords": "Contrastive learning;Self-Supervised Learning;SimCLR;Multi-View;Augmentations;Multiplicity;InfoMax;Sufficient Statistics", "tldr": "We look at contrastive learning with more than two related views per sample and find that for some objectives, performance can be improved compared to two views (e.g. SimCLR) at no computational cost.", "abstract": "Contrastive learning typically matches pairs of related views among a number of unrelated negative views. Views can be generated (e.g. by augmentations) or be observed. We investigate matching when there are more than two related views which we call poly-view tasks, and derive new representation learning objectives using information maximization and sufficient statistics. We show that with unlimited computation, one should maximize the number of related views, and with a fixed compute budget, it is beneficial to decrease the number of unique samples whilst increasing the number of views of those samples. In particular, poly-view contrastive models trained for 128 epochs with batch size 256 outperform SimCLR trained for 1024 epochs at batch size 4096 on ImageNet1k, challenging the belief that contrastive models require large batch sizes and many training epochs.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18069"} +{"video_file": "iPWxqnt2ke_39017587.mp4", "openreview_id": "iPWxqnt2ke", "slideslive_id": 39017587, "venue": "iclr2024", "title": "Identifying Policy Gradient Subspaces", "status": "Poster", "keywords": "reinforcement learning;policy gradients;gradient subspaces", "tldr": "We investigate the potential of gradient subspaces in deep reinforcement learning.", "abstract": "Policy gradient methods hold great potential for solving complex continuous control tasks. Still, their training efficiency can be improved by exploiting structure within the optimization problem. Recent work indicates that supervised learning can be accelerated by leveraging the fact that gradients lie in a low-dimensional and slowly-changing subspace. In this paper, we conduct a thorough evaluation of this phenomenon for two popular deep policy gradient methods on various simulated benchmark tasks. Our results demonstrate the existence of such gradient subspaces despite the continuously changing data distribution inherent to reinforcement learning. 
These findings reveal promising directions for future work on more efficient reinforcement learning, e.g., through improving parameter-space exploration or enabling second-order optimization.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18066"} +{"video_file": "ijK5hyxs0n_39017578.mp4", "openreview_id": "ijK5hyxs0n", "slideslive_id": 39017578, "venue": "iclr2024", "title": "Graph Metanetworks for Processing Diverse Neural Architectures", "status": "Spotlight", "keywords": "Metanetwork;graph;equivariance;expressivity", "tldr": "We develop metanetworks that allow expressive, permutation equivariant processing of diverse neural network architectures.", "abstract": "Neural networks efficiently encode learned information within their parameters. Consequently, many tasks can be unified by treating neural networks themselves as input data. When doing so, recent studies demonstrated the importance of accounting for the symmetries and geometry of parameter spaces. However, those works developed architectures tailored to specific networks such as MLPs and CNNs without normalization layers, and generalizing such architectures to other types of networks can be challenging. In this work, we overcome these challenges by building new metanetworks --- neural networks that take weights from other neural networks as input. Put simply, we carefully build graphs representing the input neural networks and process the graphs using graph neural networks. Our approach, Graph Metanetworks (GMNs), generalizes to neural architectures where competing methods struggle, such as multi-head attention layers, normalization layers, convolutional layers, ResNet blocks, and group-equivariant linear layers. We prove that GMNs are expressive and equivariant to parameter permutation symmetries that leave the input neural network functions unchanged. We validate the effectiveness of our method on several metanetwork tasks over diverse neural network architectures.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/18054"} +{"video_file": "ijoqFqSC7p_39019079.mp4", "openreview_id": "ijoqFqSC7p", "slideslive_id": 39019079, "venue": "iclr2024", "title": "FreeNoise: Tuning-Free Longer Video Diffusion via Noise Rescheduling", "status": "Poster", "keywords": "diffusion;video diffusion;video generation;tuning-free", "tldr": "A tuning-free and time-efficient paradigm for longer video generation based on pretrained video diffusion models", "abstract": "With the availability of large-scale video datasets and the advances of diffusion models, text-driven video generation has achieved substantial progress. However, existing video generation models are typically trained on a limited number of frames, resulting in the inability to generate high-fidelity long videos during inference. Furthermore, these models only support single-text conditions, whereas real-life scenarios often require multi-text conditions as the video content changes over time. To tackle these challenges, this study explores the potential of extending the text-driven capability to generate longer videos conditioned on multiple texts. 1) We first analyze the impact of initial noise in video diffusion models. Then building upon the observation of noise, we propose FreeNoise, a tuning-free and time-efficient paradigm to enhance the generative capabilities of pretrained video diffusion models while preserving content consistency. 
Specifically, instead of initializing noises for all frames, we reschedule a sequence of noises for long-range correlation and perform temporal attention over them by window-based fusion. 2) Additionally, we design a novel motion injection method to support the generation of videos conditioned on multiple text prompts. Extensive experiments validate the superiority of our paradigm in extending the generative capabilities of video diffusion models. It is noteworthy that compared with the previous best-performing method which brought about 255% extra time cost, our method incurs only negligible time cost of approximately 17%. Generated video samples are available at our website: http://haonanqiu.com/projects/FreeNoise.html.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18053"} +{"video_file": "itGkF993gz_39019199.mp4", "openreview_id": "itGkF993gz", "slideslive_id": 39019199, "venue": "iclr2024", "title": "MAPE-PPI: Towards Effective and Efficient Protein-Protein Interaction Prediction via Microenvironment-Aware Protein Embedding", "status": "Spotlight", "keywords": "Bioinformatics;Protein-Protein Interaction;Protein Sequence-Structure Co-Modeling", "tldr": "In this paper, we propose Microenvironment-Aware Protein Embedding for PPI prediction (MPAE-PPI), which encodes microenvironments into chemically meaningful discrete codes via a sufficiently large microenvironment ``vocabulary\" (i.e., codebook).", "abstract": "Protein-Protein Interactions (PPIs) are fundamental in various biological processes and play a key role in life activities. The growing demand and cost of experimental PPI assays require computational methods for efficient PPI prediction. While existing methods rely heavily on protein sequence for PPI prediction, it is the protein structure that is the key to determine the interactions. To take both protein modalities into account, we define the microenvironment of an amino acid residue by its sequence and structural contexts, which describe the surrounding chemical properties and geometric features. In addition, microenvironments defined in previous work are largely based on experimentally assayed physicochemical properties, for which the \"vocabulary\" is usually extremely small. This makes it difficult to cover the diversity and complexity of microenvironments. In this paper, we propose Microenvironment-Aware Protein Embedding for PPI prediction (MPAE-PPI), which encodes microenvironments into chemically meaningful discrete codes via a sufficiently large microenvironment \"vocabulary\" (i.e., codebook). Moreover, we propose a novel pre-training strategy, namely Masked Codebook Modeling (MCM), to capture the dependencies between different microenvironments by randomly masking the codebook and reconstructing the input. With the learned microenvironment codebook, we can reuse it as an off-the-shelf tool to efficiently and effectively encode proteins of different sizes and functions for large-scale PPI prediction. 
Extensive experiments show that MAPE-PPI can scale to PPI prediction with millions of PPIs with superior trade-offs between effectiveness and computational efficiency than the state-of-the-art competitors.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/18046"} +{"video_file": "ix7rLVHXyY_39017573.mp4", "openreview_id": "ix7rLVHXyY", "slideslive_id": 39017573, "venue": "iclr2024", "title": "Learning Performance-Improving Code Edits", "status": "Spotlight", "keywords": "Large Language Models;Retrieval Augmented Generation;Program Synthesis;Program Optimization;Fine-Tuning;Goal-Conditioning;Data Augmentation;Self-Play;Synthetic Dataset;Performance Optimization;Machine Learning for Code Optimization;Dataset", "tldr": "We introduce a benchmark for reproducible research on neural program optimization, evaluate the capabilities of LLMs, and present three effective strategies for program optimization, achieving up to average 6.86X times speedup with our best model", "abstract": "With the decline of Moore's law, optimizing program performance has become a major focus of software research. However, high-level optimizations such as API and algorithm changes remain elusive due to the difficulty of understanding the semantics of code. Simultaneously, pretrained large language models (LLMs) have demonstrated strong capabilities at solving a wide range of programming tasks. To that end, we introduce a framework for adapting LLMs to high-level program optimization. First, we curate a dataset of performance-improving edits made by human programmers of over 77,000 competitive C++ programming submission pairs, accompanied by extensive unit tests. A major challenge is the significant variability of measuring performance on commodity hardware, which can lead to spurious \"improvements.\" To isolate and reliably evaluate the impact of program optimizations, we design an environment based on the gem5 full system simulator, the de facto simulator used in academia and industry. Next, we propose a broad range of adaptation strategies for code optimization; for prompting, these include retrieval-based few-shot prompting and chain-of-thought, and for finetuning, these include performance-conditioned generation and synthetic data augmentation based on self-play. A combination of these techniques achieves a mean speedup of 6.86\n\u00d7\nwith eight generations, higher than average optimizations from individual programmers (3.66\n\u00d7\n). Using our model's fastest generations, we set a new upper limit on the fastest speedup possible for our dataset at 9.64\n\u00d7\ncompared to using the fastest human submissions available (9.56\n\u00d7\n).", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/18045"} +{"video_file": "izrOLJov5y_39017572.mp4", "openreview_id": "izrOLJov5y", "slideslive_id": 39017572, "venue": "iclr2024", "title": "Spoken Question Answering and Speech Continuation Using Spectrogram-Powered LLM", "status": "Poster", "keywords": "Speech Continuation;Spoken Question Answering", "tldr": "Spoken question answering and speech continuation leveraging pre-trained language model operating in the spectrogram domain", "abstract": "We present Spectron, a novel approach to adapting pre-trained large language models (LLMs) to perform spoken question answering (QA) and speech continuation. 
By endowing the LLM with a pre-trained speech encoder, our model becomes able to take speech inputs and generate speech outputs. The entire system is trained end-to-end and operates directly on spectrograms, simplifying our architecture. Key to our approach is a training objective that jointly supervises speech recognition, text continuation, and speech synthesis using only paired speech-text pairs, enabling a `cross-modal' chain-of-thought within a single decoding pass. Our method surpasses existing spoken language models in speaker preservation and semantic coherence. Furthermore, the proposed model improves upon direct initialization in retaining the knowledge of the original LLM as demonstrated through spoken QA datasets. We release our audio samples and spoken QA dataset via our website.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/18042"} +{"video_file": "j8hdRqOUhN_39019211.mp4", "openreview_id": "j8hdRqOUhN", "slideslive_id": 39019211, "venue": "iclr2024", "title": "Solving Inverse Problems with Latent Diffusion Models via Hard Data Consistency", "status": "Spotlight", "keywords": "Diffusion models;inverse problems", "tldr": "We show how to effectively leverage latent (or stable) diffusion as generative priors for solving general inverse problems.", "abstract": "Latent diffusion models have been demonstrated to generate high-quality images, while offering efficiency in model training compared to diffusion models operating in the pixel space. However, incorporating latent diffusion models to solve inverse problems remains a challenging problem due to the nonlinearity of the encoder and decoder. To address these issues, we propose ReSample, an algorithm that can solve general inverse problems with pre-trained latent diffusion models. Our algorithm incorporates data consistency by solving an optimization problem during the reverse sampling process, a concept that we term as hard data consistency. Upon solving this optimization problem, we propose a novel resampling scheme to map the measurement-consistent sample back onto the noisy data manifold and theoretically demonstrate its benefits. Lastly, we apply our algorithm to solve a wide range of linear and nonlinear inverse problems in both natural and medical images, demonstrating that our approach outperforms existing state-of-the-art approaches, including those based on pixel-space diffusion models.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18037"} +{"video_file": "jFJPd9kIiF_39018824.mp4", "openreview_id": "jFJPd9kIiF", "slideslive_id": 39018824, "venue": "iclr2024", "title": "Compressing Latent Space via Least Volume", "status": "Poster", "keywords": "Autoencoder;Representation Learning;Dimension Reduction", "tldr": "This paper introduces a volume-based regularization that automatically reduces the dimensionality of latent space and order the latent dimensions..", "abstract": "This paper introduces Least Volume---a simple yet effective regularization inspired by geometric intuition---that can reduce the necessary number of latent dimensions needed by an autoencoder without requiring any prior knowledge of the intrinsic dimensionality of the dataset. We show that the Lipschitz continuity of the decoder is the key to making it work, provide a proof that PCA is just a linear special case of it, and reveal that it has a similar PCA-like importance ordering effect when applied to nonlinear models. 
We demonstrate the intuition behind the regularization on some pedagogical toy problems, and its effectiveness on several benchmark problems, including MNIST, CIFAR-10 and CelebA.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/18035"} +{"video_file": "jId5PXbBbX_39017565.mp4", "openreview_id": "jId5PXbBbX", "slideslive_id": 39017565, "venue": "iclr2024", "title": "Provably Efficient UCB-type Algorithms For Learning Predictive State Representations", "status": "Poster", "keywords": "Reinforcement learning;Sequential decision-making problem;Predictive state representation;POMDP;UCB;online and offline", "tldr": "We developed a provably efficient upper confidence bound algorithm for online and offline low rank decision-making problem, which is modeled by predictive state representation.", "abstract": "The general sequential decision-making problem, which includes Markov decision processes (MDPs) and partially observable MDPs (POMDPs) as special cases, aims at maximizing a cumulative reward by making a sequence of decisions based on a history of observations and actions over time. Recent studies have shown that the sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs). Despite these advancements, existing approaches typically involve oracles or steps that are computationally intractable. On the other hand, the upper confidence bound (UCB) based approaches, which have served successfully as computationally efficient methods in bandits and MDPs, have not been investigated for more general PSRs, due to the difficulty of optimistic bonus design in these more challenging settings. This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models. We further characterize the sample complexity bounds for our designed UCB-type algorithms for both online and offline PSRs. In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, last-iterate guaranteed near-optimal policy, and guaranteed model accuracy.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/18032"} +{"video_file": "jODehvtTDx_39017559.mp4", "openreview_id": "jODehvtTDx", "slideslive_id": 39017559, "venue": "iclr2024", "title": "Analyzing and Improving Optimal-Transport-based Adversarial Networks", "status": "Poster", "keywords": "Optimal Transport;Generative Adversarial Networks", "tldr": "We analyze and improve Optimal-Transport-based adversarial networks.", "abstract": "Optimal Transport (OT) problem aims to find a transport plan that bridges two distributions while minimizing a given cost function. OT theory has been widely utilized in generative modeling. In the beginning, OT distance has been used as a measure for assessing the distance between data and generated distributions. Recently, OT transport map between data and prior distributions has been utilized as a generative model. These OT-based generative models share a similar adversarial training objective. In this paper, we begin by unifying these OT-based adversarial methods within a single framework. Then, we elucidate the role of each component in training dynamics through a comprehensive analysis of this unified framework. 
Moreover, we suggest a simple but novel method that improves the previously best-performing OT-based model. Intuitively, our approach conducts a gradual refinement of the generated distribution, progressively aligning it with the data distribution. Our approach achieves a FID score of 2.51 on CIFAR-10 and 5.99 on CelebA-HQ-256, outperforming unified OT-based adversarial approaches.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/18024"} +{"video_file": "jhPvuc7kxB_39017551.mp4", "openreview_id": "jhPvuc7kxB", "slideslive_id": 39017551, "venue": "iclr2024", "title": "Look, Remember and Reason: Grounded Reasoning in Videos with Language Models", "status": "Poster", "keywords": "Grounding;Reasoning;Language Models", "tldr": "We show the effectiveness of off-the-shelf language models for reasoning on videos, when grounded using surrogate tasks.", "abstract": "Multi-modal language models (LM) have recently shown promising performance in high-level reasoning tasks on videos. However, existing methods still fall short in tasks like causal or compositional spatiotemporal reasoning over actions, in which model predictions need to be grounded in fine-grained low-level details, such as object motions and object interactions. In this work, we propose training an LM end-to-end on low-level surrogate tasks, including object detection, re-identification, and tracking, to endow the model with the required low-level visual capabilities. We show that a two-stream video encoder with spatiotemporal attention is effective at capturing the required static and motion-based cues in the video. By leveraging the LM's ability to perform the low-level surrogate tasks, we can cast reasoning in videos as the three-step process of Look, Remember, Reason, wherein visual information is extracted using low-level visual skills step-by-step and then integrated to arrive at a final answer. We demonstrate the effectiveness of our framework on diverse visual reasoning tasks from the ACRE, CATER, Something-Else and STAR datasets. Our approach is trainable end-to-end and surpasses state-of-the-art task-specific methods across these tasks by a large margin.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/18014"} +{"video_file": "jj5ZjZsWJe_39017549.mp4", "openreview_id": "jj5ZjZsWJe", "slideslive_id": 39017549, "venue": "iclr2024", "title": "Stochastic Controlled Averaging for Federated Learning with Communication Compression", "status": "Spotlight", "keywords": "federated learning;communication compression;data heterogeneity;controlled averaging", "tldr": "We propose SCALLION and SCAFCOM that are built on a new formulation of stochastic controlled averaging and reach the SOTA performance among compressed FL algorithms.", "abstract": "Communication compression has been an important topic in Federated Learning (FL) for alleviating the communication overhead. However, communication compression brings forth new challenges in FL due to the interplay of compression-incurred information distortion and inherent characteristics of FL such as partial participation and data heterogeneity. Despite the recent development, the existing approaches either cannot accommodate arbitrary data heterogeneity or partial participation, or require stringent conditions on compression. 
In this paper, we revisit the seminal stochastic controlled averaging method by proposing an equivalent but more efficient/simplified formulation with halved uplink communication costs, building upon which we propose two compressed FL algorithms, SCALLION and SCAFCOM, to support unbiased and biased compression, respectively. Both the proposed methods outperform the existing compressed FL methods in terms of communication and computation complexities. Moreover, SCALLION and SCAFCOM attain fast convergence rates under arbitrary data heterogeneity without any additional assumptions on compression errors. Experiments show that SCALLION and SCAFCOM outperform recent compressed FL methods under the same communication budget.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/18012"} +{"video_file": "jzzEHTBFOT_39018868.mp4", "openreview_id": "jzzEHTBFOT", "slideslive_id": 39018868, "venue": "iclr2024", "title": "C-TPT: Calibrated Test-Time Prompt Tuning for Vision-Language Models via Text Feature Dispersion", "status": "Poster", "keywords": "Calibration;Test-time adaptation;CLIP;Prompt tuning", "tldr": "We address the critical yet under-explored challenge of achieving calibrated zero-shot inference during test-time prompt tuning in large-scale vision-language models.", "abstract": "In deep learning, test-time adaptation has gained attention as a method for model fine-tuning without the need for labeled data. A prime exemplification is the recently proposed test-time prompt tuning for large-scale vision-language models such as CLIP. Unfortunately, these prompts have been mainly developed to improve accuracy, overlooking the importance of calibration, which is a crucial aspect for quantifying prediction uncertainty. However, traditional calibration methods rely on substantial amounts of labeled data, making them impractical for test-time scenarios. To this end, this paper explores calibration during test-time prompt tuning by leveraging the inherent properties of CLIP. Through a series of observations, we find that the prompt choice significantly affects the calibration in CLIP, where the prompts leading to higher text feature dispersion result in better-calibrated predictions. Introducing the Average Text Feature Dispersion (ATFD), we establish its relationship with calibration error and present a novel method, Calibrated Test-time Prompt Tuning (C-TPT), for optimizing prompts during test-time with enhanced calibration. Through extensive experiments on different CLIP architectures and datasets, we show that C-TPT can effectively improve the calibration of test-time prompt tuning without needing labeled data. The code is publicly accessible at https://github.com/hee-suk-yoon/C-TPT.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/17996"} +{"video_file": "kIP0duasBb_39017528.mp4", "openreview_id": "kIP0duasBb", "slideslive_id": 39017528, "venue": "iclr2024", "title": "Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models", "status": "Poster", "keywords": "Vision-Language Models;Zero-Shot Generalization;Test-Time Adaptation;CLIP reward", "tldr": "We propose to improve the zero-shot generalization capacity of vision-language models on the fly with CLIP as the feedback source.", "abstract": "One fascinating aspect of pre-trained vision-language models (VLMs) learning under language supervision is their impressive zero-shot generalization capability. 
However, this ability is hindered by distribution shifts between the training and testing data. Previous test time adaptation (TTA) methods for VLMs in zero-shot classification rely on minimizing the entropy of model outputs, tending to be stuck in incorrect model predictions. In this work, we propose TTA with feedback to rectify the model output and prevent the model from becoming blindly confident. Specifically, a CLIP model is adopted as the reward model during TTA and provides feedback for the VLM. Given a single test sample, the VLM is forced to maximize the CLIP reward between the input and sampled results from the VLM output distribution. The proposed \\textit{reinforcement learning with CLIP feedback~(RLCF)} framework is highly flexible and universal. Beyond the classification task, with task-specific sampling strategies and a proper reward baseline choice, RLCF can be easily extended to not only discrimination tasks like retrieval but also generalization tasks like image captioning, improving the zero-shot generalization capacity of VLMs. According to the characteristics of these VL tasks, we build different fully TTA pipelines with RLCF to improve the zero-shot generalization ability of various VLMs. Extensive experiments along with promising empirical results demonstrate the effectiveness of RLCF. The code is available at https://github.com/mzhaoshuai/RLCF.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/17984"} +{"video_file": "kJ0qp9Xdsh_39017526.mp4", "openreview_id": "kJ0qp9Xdsh", "slideslive_id": 39017526, "venue": "iclr2024", "title": "Towards Aligned Layout Generation via Diffusion Model with Aesthetic Constraints", "status": "Poster", "keywords": "Diffusion model;Layout generation;Constrained Optimization", "tldr": "Unified model for layout generation using constrained diffusion.", "abstract": "Controllable layout generation refers to the process of creating a plausible visual arrangement of elements within a graphic design (e.g., document and web designs) with constraints representing design intentions. Although recent diffusion-based models have achieved state-of-the-art FID scores, they tend to exhibit more pronounced misalignment compared to earlier transformer-based models. In this work, we propose the LAyout Constraint diffusion modEl (LACE), a unified model to handle a broad range of layout generation tasks, such as arranging elements with specified attributes and refining or completing a coarse layout design. The model is based on continuous diffusion models. Compared with existing methods that use discrete diffusion models, continuous state-space design can enable the incorporation of continuous aesthetic constraint functions in training more naturally. For conditional generation, we propose injecting layout conditions in the form of masks or gradient guidance during inference. Empirical results show that LACE produces high-quality layouts and outperforms existing state-of-the-art baselines. 
We will release our source code and model checkpoints.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/17981"}
{"video_file": "kUCgHbmO11_39017520.mp4", "openreview_id": "kUCgHbmO11", "slideslive_id": 39017520, "venue": "iclr2024", "title": "SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation", "status": "Poster", "keywords": "Source-free domain adaptation;Data augmentation", "tldr": "We propose a novel source-free domain adaptation method that leverages intuitions derived from data augmentation", "abstract": "In the face of the deep learning model's vulnerability to domain shift, source-free domain adaptation (SFDA) methods have been proposed to adapt models to new, unseen target domains without requiring access to source domain data. Although the potential benefits of applying data augmentation to SFDA are attractive, several challenges arise such as the dependence on prior knowledge of class-preserving transformations and the increase in memory and computational requirements. In this paper, we propose Source-free Domain Adaptation Through the Lens of Data Augmentation (SF(DA)$^2$), a novel approach that leverages the benefits of data augmentation without suffering from these challenges. We construct an augmentation graph in the feature space of the pretrained model using the neighbor relationships between target features and propose spectral neighborhood clustering to identify partitions in the prediction space. Furthermore, we propose implicit feature augmentation and feature disentanglement as regularization loss functions that effectively utilize class semantic information within the feature space. These regularizers simulate the inclusion of an unlimited number of augmented target features into the augmentation graph while minimizing computational and memory demands. Our method shows superior adaptation performance in SFDA scenarios, including 2D image and 3D point cloud datasets and a highly imbalanced dataset.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/17973"}
{"video_file": "kUuKFW7DIF_39017519.mp4", "openreview_id": "kUuKFW7DIF", "slideslive_id": 39017519, "venue": "iclr2024", "title": "Multi-resolution HuBERT: Multi-resolution Speech Self-Supervised Learning with Masked Unit Prediction", "status": "Spotlight", "keywords": "Speech Representation Learning;Self-supervised Learning;Multi-resolution", "tldr": "We propose a multi-resolution framework for speech representation learning, which demonstrate significant gain in performance and efficiency.", "abstract": "Existing Self-Supervised Learning (SSL) models for speech typically process speech signals at a fixed resolution of 20 milliseconds. This approach overlooks the varying informational content present at different resolutions in speech signals. In contrast, this paper aims to incorporate multi-resolution information into speech self-supervised representation learning. We introduce an SSL model that leverages a hierarchical Transformer architecture, complemented by HuBERT-style masked prediction objectives, to process speech at multiple resolutions. Experimental results indicate that the proposed model not only achieves more efficient inference but also exhibits superior or comparable performance to the original HuBERT model over various tasks. 
Specifically, significant performance improvements over the original HuBERT have been observed in fine-tuning experiments on the LibriSpeech speech recognition benchmark as well as in evaluations using the Speech Universal PERformance Benchmark (SUPERB) and Multilingual SUPERB (ML-SUPERB).", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/17972"} +{"video_file": "kmn0BhQk7p_39017513.mp4", "openreview_id": "kmn0BhQk7p", "slideslive_id": 39017513, "venue": "iclr2024", "title": "Beyond Memorization: Violating Privacy via Inference with Large Language Models", "status": "Spotlight", "keywords": "Privacy;Large Language Models", "tldr": "We present the first comprehensive study on the capabilities of pretrained LLMs to infer personal attributes from texts given at inference.", "abstract": "Current privacy research on large language models (LLMs) primarily focuses on the issue of extracting memorized training data. At the same time, models\u2019 inference capabilities have increased drastically. This raises the key question of whether current LLMs could violate individuals\u2019 privacy by inferring personal attributes from text given at inference time. In this work, we present the first comprehensive study on the capabilities of pretrained LLMs to infer personal attributes from text. We construct a dataset consisting of real Reddit profiles, and show that current LLMs can infer a wide range of personal attributes (e.g., location, income, sex), achieving up to 85% top-1 and 95% top-3 accuracy at a fraction of the cost (100x) and time (240x) required by humans. As people increasingly interact with LLM-powered chatbots across all aspects of life, we also explore the emerging threat of privacy-invasive chatbots trying to extract personal information through seemingly benign questions. Finally, we show that common mitigations, i.e., text anonymization and model alignment, are currently ineffective at protecting user privacy against LLM inference. Our findings highlight that current LLMs can infer personal data at a previously unattainable scale. In the absence of working defenses, we advocate for a broader discussion around LLM privacy implications beyond memorization, striving for stronger and wider privacy protection.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/17964"} +{"video_file": "krx55l2A6G_39017512.mp4", "openreview_id": "krx55l2A6G", "slideslive_id": 39017512, "venue": "iclr2024", "title": "Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning", "status": "Poster", "keywords": "Privacy;Federated Learning;Gradient Leakage", "tldr": "We study detectability of malicious server attacks in federated learning, show that prior attacks are detectable, and propose SEER, a novel attack framework that reconstructs data from large batch sizes and is by design harder to detect.", "abstract": "Malicious server (MS) attacks have enabled the scaling of data stealing in federated learning to large batch sizes and secure aggregation, settings previously considered private. However, many concerns regarding the client-side detectability of MS attacks were raised, questioning their practicality. In this work, for the first time, we thoroughly study client-side detectability. 
We first demonstrate that all prior MS attacks are detectable by principled checks, and formulate a necessary set of requirements that a practical MS attack must satisfy. Next, we propose SEER, a novel attack framework that satisfies these requirements. The key insight of SEER is the use of a secret decoder, jointly trained with the shared model. We show that SEER can steal user data from gradients of realistic networks, even for large batch sizes of up to 512 and under secure aggregation. Our work is a promising step towards assessing the true vulnerability of federated learning in real-world settings.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/17962"} +{"video_file": "kvByNnMERu_39017508.mp4", "openreview_id": "kvByNnMERu", "slideslive_id": 39017508, "venue": "iclr2024", "title": "Estimating Shape Distances on Neural Representations with Limited Samples", "status": "Poster", "keywords": "representational geometry;shape metrics;dissimilarity metrics", "tldr": "Novel estimator of geometric similarity with tunable bias-variance tradeoff, outperforms standard estimators in high-dimensional settings.", "abstract": "Measuring geometric similarity between high-dimensional network representations is a topic of longstanding interest to neuroscience and deep learning. Although many methods have been proposed, only a few works have rigorously analyzed their statistical efficiency or quantified estimator uncertainty in data-limited regimes. Here, we derive upper and lower bounds on the worst-case convergence of standard estimators of shape distance\u2014a measure of representational dissimilarity proposed by Williams et al. (2021). These bounds reveal the challenging nature of the problem in high-dimensional feature spaces. To overcome these challenges, we introduce a novel method-of-moments estimator with a tunable bias-variance tradeoff parameterized by an upper bound on bias. We show that this estimator achieves superior performance to standard estimators in simulation and on neural data, particularly in high-dimensional settings. Our theoretical work and estimator thus respectively define and dramatically expand the scope of neural data for which geometric similarity can be accurately measured.", "primary_area": "applications to neuroscience & cognitive science", "site": "https://iclr.cc/virtual/2024/poster/17956"} +{"video_file": "l3qtSNsPvC_39018621.mp4", "openreview_id": "l3qtSNsPvC", "slideslive_id": 39018621, "venue": "iclr2024", "title": "A Poincar\u00e9 Inequality and Consistency Results for Signal Sampling on Large Graphs", "status": "Spotlight", "keywords": "large-scale graphs;signal sampling;graphons", "tldr": "We formulate sampling problems in the graphon limit to discover intrinsic structures of large graph, with both theoretical guarantees and empirical evidence.", "abstract": "Large-scale graph machine learning is challenging as the complexity of learning models scales with the graph size. Subsampling the graph is a viable alternative, but sampling on graphs is nontrivial as graphs are non-Euclidean. Existing graph sampling techniques require not only computing the spectra of large matrices but also repeating these computations when the graph changes, e.g., grows. In this paper, we introduce a signal sampling theory for a type of graph limit---the graphon. 
We prove a Poincar\u00e9 inequality for graphon signals and show that complements of node subsets satisfying this inequality are unique sampling sets for Paley-Wiener spaces of graphon signals. Exploiting connections with spectral clustering and Gaussian elimination, we prove that such sampling sets are consistent in the sense that unique sampling sets on a convergent graph sequence converge to unique sampling sets on the graphon. We then propose a related graphon signal sampling algorithm for large graphs, and demonstrate its good empirical performance on graph machine learning tasks.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/17949"}
{"video_file": "lF2aip4Scn_39019189.mp4", "openreview_id": "lF2aip4Scn", "slideslive_id": 39019189, "venue": "iclr2024", "title": "Demonstration-Regularized RL", "status": "Poster", "keywords": "reinforcement learning;regularization in reinforcement learning;learning with demonstrations;reinforcement learning with human feedback", "tldr": "We showed a theoretically efficient way to inject expert demonstrations into RL agent and, moreover, into RLHF.", "abstract": "Incorporating expert demonstrations has empirically helped to improve the sample efficiency of reinforcement learning (RL). This paper quantifies theoretically to what extent this extra information reduces RL's sample complexity. In particular, we study the demonstration-regularized reinforcement learning framework that leverages the expert demonstrations by KL-regularization for a policy learned by behavior cloning. Our findings reveal that using $N^{\\mathrm{E}}$ expert demonstrations enables the identification of an optimal policy at a sample complexity of order $\\widetilde{O}(\\mathrm{Poly}(S,A,H)/(\\varepsilon^2 N^{\\mathrm{E}}))$ in finite and $\\widetilde{O}(\\mathrm{Poly}(d,H)/(\\varepsilon^2 N^{\\mathrm{E}}))$ in linear Markov decision processes, where $\\varepsilon$ is the target precision, $H$ the horizon, $A$ the number of actions, $S$ the number of states in the finite case and $d$ the dimension of the feature space in the linear case. As a by-product, we provide tight convergence guarantees for the behavior cloning procedure under general assumptions on the policy classes. Additionally, we establish that demonstration-regularized methods are provably efficient for reinforcement learning from human feedback (RLHF). In this respect, we provide theoretical evidence showing the benefits of KL-regularization for RLHF in tabular and linear MDPs. Interestingly, we avoid pessimism injection by employing computationally feasible regularization to handle reward estimation uncertainty, thus setting our approach apart from the prior works.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/17944"}
{"video_file": "lR3rk7ysXz_39017494.mp4", "openreview_id": "lR3rk7ysXz", "slideslive_id": 39017494, "venue": "iclr2024", "title": "On Diffusion Modeling for Anomaly Detection", "status": "Spotlight", "keywords": "Diffusion based models;Anomaly detection;Probabilistic Inference", "tldr": "Identify anomalies in a dataset by estimating the diffusion time, anomalies have higher diffusion times", "abstract": "Known for their impressive performance in generative modeling, diffusion models are attractive candidates for density-based anomaly detection. This paper investigates different variations of diffusion modeling for unsupervised and semi-supervised anomaly detection. 
In particular, we find that Denoising Diffusion Probability Models (DDPM) are performant on anomaly detection benchmarks yet computationally expensive. By simplifying DDPM in application to anomaly detection, we are naturally led to an alternative approach called Diffusion Time Estimation (DTE). DTE estimates the distribution over diffusion time for a given input and uses the mode or mean of this distribution as the anomaly score. We derive an analytical form for this density and leverage a deep neural network to improve inference efficiency. Through empirical evaluations on the ADBench benchmark, we demonstrate that all diffusion-based anomaly detection methods perform competitively for both semi-supervised and unsupervised settings. Notably, DTE achieves orders of magnitude faster inference time than DDPM, while outperforming it on this benchmark. These results establish diffusion-based anomaly detection as a scalable alternative to traditional methods and recent deep-learning techniques for standard unsupervised and semi-supervised anomaly detection settings.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17930"} +{"video_file": "ldJXXxPE0L_39017491.mp4", "openreview_id": "ldJXXxPE0L", "slideslive_id": 39017491, "venue": "iclr2024", "title": "The Cost of Scaling Down Large Language Models: Reducing Model Size Affects Memory before In-context Learning", "status": "Poster", "keywords": "large language model;scaling;in-context learning;pruning", "tldr": "Moderate down-scaling harms fact recall, and yet the ability to learn from a few input-output examples from context withstands aggressive down-scaling.", "abstract": "We study how down-scaling large language model (LLM) size impacts LLM capabilities. We begin by measuring the effects of weight pruning \u2013 a popular technique for reducing model size \u2013 on the two abilities of LLMs: (a) recalling facts presented during pre-training and (b) processing information presented in context. Surprisingly, we find that existing pruning techniques affect these two abilities of LLMs differently. For example, pruning more than 30% of weights significantly decreases an LLM\u2019s ability to recall facts presented during pre-training. Yet pruning 60-70% of weights largely preserves an LLM\u2019s ability to process information in-context, ranging from retrieving answers based on information presented in context to learning parameterized functions such as a linear classifier based on a few examples. Moderate pruning impairs LLM\u2019s ability to recall facts learnt from pre-training. However, its effect on model\u2019s ability to process information presented in context is much less pronounced. The said disparate effects similarly arise when replacing the original model with a smaller dense one with reduced width and depth. 
This similarity suggests that model size reduction in general underpins the said disparity.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/17926"}
{"video_file": "likXVjmh3E_39017488.mp4", "openreview_id": "likXVjmh3E", "slideslive_id": 39017488, "venue": "iclr2024", "title": "The Expressive Power of Low-Rank Adaptation", "status": "Poster", "keywords": "LoRA;expressive power;parameter-efficient fine-tuning;adaptation;neural networks;transformer", "tldr": "This paper takes the first step to theoretically analyzing the expressive power of Low-Rank Adaptation (LoRA).", "abstract": "Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method that leverages low-rank adaptation of weight matrices, has emerged as a prevalent technique for fine-tuning pre-trained models such as large language models and diffusion models. Despite its huge success in practice, the theoretical underpinnings of LoRA have largely remained unexplored. This paper takes the first step to bridge this gap by theoretically analyzing the expressive power of LoRA. We prove that, for fully connected neural networks, LoRA can adapt any model $f$ to accurately represent any smaller target model $\\bar{f}$ if LoRA-rank $\\geq (\\text{width of } f) \\times \\frac{\\text{depth of } \\bar{f}}{\\text{depth of } f}$, under a mild assumption. We also quantify the approximation error when the LoRA-rank is lower than the threshold. For Transformer networks, we show any model can be adapted to a target model of the same size with rank-$(\\frac{\\text{embedding size}}{2})$ LoRA adapters. All our theoretical insights are validated by numerical experiments.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/17923"}
{"video_file": "loYSzjSaAK_39018641.mp4", "openreview_id": "loYSzjSaAK", "slideslive_id": 39018641, "venue": "iclr2024", "title": "Submodular Reinforcement Learning", "status": "Spotlight", "keywords": "Reinforcement learning;Non-Markovian rewards;Submodular optimization;Policy gradient;Complex objectives in RL", "tldr": "The paper introduces SubRL, a paradigm for policy optimization under submodular reward functions. It discusses the difficulty of approximating SubRL and proposes SubPO, a simple PG-based algorithm inspired by submodular optimization's greedy strategy", "abstract": "In reinforcement learning (RL), rewards of states are typically considered additive, and following the Markov assumption, they are independent of states visited previously. In many important applications, such as coverage control, experiment design and informative path planning, rewards naturally have diminishing returns, i.e., their value decreases in light of similar states visited previously. To tackle this, we propose Submodular RL (subRL), a paradigm which seeks to optimize more general, non-additive (and history-dependent) rewards modelled via submodular set functions, which capture diminishing returns. Unfortunately, in general, even in tabular settings, we show that the resulting optimization problem is hard to approximate. On the other hand, motivated by the success of greedy algorithms in classical submodular optimization, we propose subPO, a simple policy gradient-based algorithm for subRL that handles non-additive rewards by greedily maximizing marginal gains. Indeed, under some assumptions on the underlying Markov Decision Process (MDP), subPO recovers optimal constant factor approximations of submodular bandits. 
Moreover, we derive a natural policy gradient approach for locally optimizing subRL instances even in large state- and action- spaces. We showcase the versatility of our approach by applying subPO to several applications, such as biodiversity monitoring, Bayesian experiment design, informative path planning, and coverage maximization. Our results demonstrate sample efficiency, as well as scalability to high-dimensional state-action spaces.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/17918"}
{"video_file": "m2NVG4Htxs_39017481.mp4", "openreview_id": "m2NVG4Htxs", "slideslive_id": 39017481, "venue": "iclr2024", "title": "To the Cutoff... and Beyond? A Longitudinal Perspective on LLM Data Contamination", "status": "Poster", "keywords": "contamination;memorization;llm;codeforces;project euler;datasets;benchmarks;training cutoff", "tldr": "We present the first thorough study of LLM data contamination on datasets released over time, showing that LLMs\u2019 ability to solve coding problems changes dramatically as a function of metrics such as release date and GitHub popularity.", "abstract": "Recent claims about the impressive abilities of large language models (LLMs) are often supported by evaluating publicly available benchmarks. Since LLMs train on wide swaths of the internet, this practice raises concerns of data contamination, i.e., evaluating on examples that are explicitly or implicitly included in the training data. Data contamination remains notoriously challenging to measure and mitigate, even with partial attempts like controlled experimentation of training data, canary strings, or embedding similarities. In this work, we conduct the first thorough longitudinal analysis of data contamination in LLMs by using the natural experiment of training cutoffs in GPT models to look at benchmarks released over time. Specifically, we consider two code/mathematical problem-solving datasets, Codeforces and Project Euler, and find statistically significant trends among LLM pass rate vs. GitHub popularity and release date that provide strong evidence of contamination. By open-sourcing our dataset, raw results, and evaluation framework, our work paves the way for rigorous analyses of data contamination in modern models. We conclude with a discussion of best practices and future steps for publicly releasing benchmark in the age of LLMs which train on webscale data.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/17911"}
{"video_file": "mGHJAyR8w0_39019187.mp4", "openreview_id": "mGHJAyR8w0", "slideslive_id": 39019187, "venue": "iclr2024", "title": "Rethinking the Benefits of Steerable Features in 3D Equivariant Graph Neural Networks", "status": "Poster", "keywords": "Steerable features;Equivariant graph neural networks;Message passing", "tldr": "We discuss the benefits of steerable features of different types for 3D equivariant graph neural networks", "abstract": "Theoretical and empirical comparisons have been made to assess the expressive power and performance of invariant and equivariant GNNs. However, there is currently no theoretical result comparing the expressive power of $k$-hop invariant GNNs and equivariant GNNs. Additionally, little is understood about whether the performance of equivariant GNNs, employing steerable features up to type-$L$, increases as $L$ grows -- especially when the feature dimension is held constant. 
In this study, we introduce a key lemma that allows us to analyze steerable features by examining their corresponding invariant features. The lemma facilitates us in understanding the limitations of $k$-hop invariant GNNs, which fail to capture the global geometric structure due to the loss of geometric information between local structures. Furthermore, we investigate the invariant features associated with different types of steerable features and demonstrate that the expressiveness of steerable features is primarily determined by their dimension -- independent of their irreducible decomposition. This suggests that when the feature dimension is constant, increasing $L$ does not lead to essentially improved performance in equivariant GNNs employing steerable features up to type-$L$. We substantiate our theoretical insights with numerical evidence.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/17900"}
{"video_file": "mQ72XRfYRZ_39017472.mp4", "openreview_id": "mQ72XRfYRZ", "slideslive_id": 39017472, "venue": "iclr2024", "title": "A Hierarchical Bayesian Model for Few-Shot Meta Learning", "status": "Spotlight", "keywords": "Bayesian models;Meta learning;Few-shot learning", "tldr": "A novel hierarchical Bayesian model for the few-shot meta learning problem, with the efficient one-time episodic learning algorithm that can scale up to modern architectures (eg, ViT).", "abstract": "We propose a novel hierarchical Bayesian model for the few-shot meta learning problem. We consider episode-wise random variables to model episode-specific generative processes, where these local random variables are governed by a higher-level global random variable. The global variable captures information shared across episodes, while controlling how much the model needs to be adapted to new episodes in a principled Bayesian manner. Within our framework, prediction on a novel episode/task can be seen as a Bayesian inference problem. For tractable training, we need to be able to relate each local episode-specific solution to the global higher-level parameters. We propose a Normal-Inverse-Wishart model, for which establishing this local-global relationship becomes feasible due to the approximate closed-form solutions for the local posterior distributions. The resulting algorithm is more attractive than the MAML in that it does not maintain a costly computational graph for the sequence of gradient descent steps in an episode. 
Our approach is also different from existing Bayesian meta learning methods in that rather than modeling a single random variable for all episodes, it leverages a hierarchical structure that exploits the local-global relationships desirable for principled Bayesian learning with many related tasks.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17894"} +{"video_file": "mYWsyTuiRp_39017470.mp4", "openreview_id": "mYWsyTuiRp", "slideslive_id": 39017470, "venue": "iclr2024", "title": "Analyzing Feed-Forward Blocks in Transformers through the Lens of Attention Maps", "status": "Spotlight", "keywords": "Transformer;Attention map;Feed-forward;Contextualization;Interpretation;Analysis;Pre-trained models;Masked language models;Causal language models", "tldr": "We analyze the input contextualization effects of Feed-Forward blocks in Transformer-based models by rendering them in the attention maps.", "abstract": "Transformers are ubiquitous in wide tasks. Interpreting their internals is a pivotal goal. Nevertheless, their particular components, feed-forward (FF) blocks, have typically been less analyzed despite their substantial parameter amounts. We analyze the input contextualization effects of FF blocks by rendering them in the attention maps as a human-friendly visualization scheme. Our experiments with both masked- and causal-language models reveal that FF networks modify the input contextualization to emphasize specific types of linguistic compositions. In addition, FF and its surrounding components tend to cancel out each other's effects, suggesting potential redundancy in the processing of the Transformer layer.", "primary_area": "visualization or interpretation of learned representations", "site": "https://iclr.cc/virtual/2024/poster/17891"} +{"video_file": "mqVgBbNCm9_39018863.mp4", "openreview_id": "mqVgBbNCm9", "slideslive_id": 39018863, "venue": "iclr2024", "title": "Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation", "status": "Poster", "keywords": "large language model;efficient inference;data-centric optimization;parallel generation;prompt engineering;planning", "tldr": "As a data-centric efficiency technique, SoT decreases the generation latency by guiding the LLM itself to organize the output data and generate multiple segments in parallel.", "abstract": "This work aims at decreasing the end-to-end generation latency of large language models (LLMs). One of the major causes of the high generation latency is the sequential decoding approach adopted by almost all state-of-the-art LLMs. In this work, motivated by the thinking and writing process of humans, we propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the skeleton of the answer, and then conducts parallel API calls or batched decoding to complete the contents of each skeleton point in parallel. Not only does SoT provide considerable speed-ups across 12 LLMs, but it can also potentially improve the answer quality on several question categories. 
SoT is an initial attempt at data-centric optimization for inference efficiency, and showcases the potential of eliciting high-quality answers by explicitly planning the answer structure in language.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/17880"} +{"video_file": "ms0VgzSGF2_39017461.mp4", "openreview_id": "ms0VgzSGF2", "slideslive_id": 39017461, "venue": "iclr2024", "title": "Bridging State and History Representations: Understanding Self-Predictive RL", "status": "Poster", "keywords": "Reinforcement Learning;Representation Learning;POMDPs;Information States;Self-supervised Learning", "tldr": "We offer theoretical insights into learning self-predictive representations in POMDPs and validate our theories with our simplified algorithm across several benchmarks.", "abstract": "Representations are at the core of all deep reinforcement learning (RL) methods for both Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). Many representation learning methods and theoretical frameworks have been developed to understand what constitutes an effective representation. However, the relationships between these methods and the shared properties among them remain unclear. In this paper, we show that many of these seemingly distinct methods and frameworks for state and history abstractions are, in fact, based on a common idea of self-predictive abstraction. Furthermore, we provide theoretical insights into the widely adopted objectives and optimization, such as the stop-gradient technique, in learning self-predictive representations. These findings together yield a minimalist algorithm to learn self-predictive representations for states and histories. We validate our theories by applying our algorithm to standard MDPs, MDPs with distractors, and POMDPs with sparse rewards. These findings culminate in a set of preliminary guidelines for RL practitioners.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/17879"} +{"video_file": "msXxrttLOi_39017126.mp4", "openreview_id": "msXxrttLOi", "slideslive_id": 39017126, "venue": "iclr2024", "title": "FedCompass: Efficient Cross-Silo Federated Learning on Heterogeneous Client Devices Using a Computing Power-Aware Scheduler", "status": "Poster", "keywords": "Federated Learning;Device Heterogeneity;Cross-silo Federated Learning", "tldr": "We propose FedCompass, a semi-asynchronous federated learning algorithm for faster convergence on heterogeneous clients and data.", "abstract": "Cross-silo federated learning offers a promising solution to collaboratively train robust and generalized AI models without compromising the privacy of local datasets, e.g., healthcare, financial, as well as scientific projects that lack a centralized data facility. Nonetheless, because of the disparity of computing resources among different clients (i.e., device heterogeneity), synchronous federated learning algorithms suffer from degraded efficiency when waiting for straggler clients. Similarly, asynchronous federated learning algorithms experience degradation in the convergence rate and final model accuracy on non-identically and independently distributed (non-IID) heterogeneous datasets due to stale local models and client drift. 
To address these limitations in cross-silo federated learning with heterogeneous clients and data, we propose FedCompass, an innovative semi-asynchronous federated learning algorithm with a computing power-aware scheduler on the server side, which adaptively assigns varying amounts of training tasks to different clients using the knowledge of the computing power of individual clients. FedCompass ensures that multiple locally trained models from clients are received almost simultaneously as a group for aggregation, effectively reducing the staleness of local models. At the same time, the overall training process remains asynchronous, eliminating prolonged waiting periods from straggler clients. Using diverse non-IID heterogeneous distributed datasets, we demonstrate that FedCompass achieves faster convergence and higher accuracy than other asynchronous algorithms while remaining more efficient than synchronous algorithms when performing federated learning on heterogeneous clients. The source code for FedCompass is available at https://github.com/APPFL/FedCompass.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/17878"} +{"video_file": "mw1PWNSWZP_39018956.mp4", "openreview_id": "mw1PWNSWZP", "slideslive_id": 39018956, "venue": "iclr2024", "title": "OctoPack: Instruction Tuning Code Large Language Models", "status": "Spotlight", "keywords": "large language models;large code models;instruction tuning", "tldr": "Data, models and evaluation for instruction tuning code large language models", "abstract": "Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/17875"} +{"video_file": "nFI3wFM9yN_39017120.mp4", "openreview_id": "nFI3wFM9yN", "slideslive_id": 39017120, "venue": "iclr2024", "title": "Communication-Efficient Federated Non-Linear Bandit Optimization", "status": "Poster", "keywords": "federated optimization;communication cost;non-linear bandit;bandit optimization;cumulative regret", "tldr": "federated bandit optimization of generic non-linear objective function", "abstract": "Federated optimization studies the problem of collaborative function optimization among multiple clients (e.g. mobile devices or organizations) under the coordination of a central server. 
Since the data is collected separately by each client and always remains decentralized, federated optimization preserves data privacy and allows for large-scale computing, which makes it a promising decentralized machine learning paradigm. Though it is often deployed for tasks that are online in nature, e.g., next-word prediction on keyboard apps, most works formulate it as an offline problem. The few exceptions that consider federated bandit optimization are limited to very simplistic function classes, e.g., linear, generalized linear, or non-parametric function class with bounded RKHS norm, which severely hinders its practical usage. In this paper, we propose a new algorithm, named Fed-GO-UCB, for federated bandit optimization with generic non-linear objective function. Under some mild conditions, we rigorously prove that Fed-GO-UCB is able to achieve sub-linear rate for both cumulative regret and communication cost. At the heart of our theoretical analysis are distributed regression oracle and individual confidence set construction, which can be of independent interests. Empirical evaluations also demonstrate the effectiveness of the proposed algorithm.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/17866"} +{"video_file": "nJnky5K944_39019005.mp4", "openreview_id": "nJnky5K944", "slideslive_id": 39019005, "venue": "iclr2024", "title": "Are Transformers with One Layer Self-Attention Using Low-Rank Weight Matrices Universal Approximators?", "status": "Poster", "keywords": "Transformer;Self-Attention;Memorization;Universal Approximation Theorem;Contextual Mapping", "tldr": "One-layer and single-head self-attention with low-rank weight matrices is expressive enough to be a contextual mapping.", "abstract": "Existing analyses of the expressive capacity of Transformer models have required excessively deep layers for data memorization, leading to a discrepancy with the Transformers actually used in practice. This is primarily due to the interpretation of the softmax function as an approximation of the hardmax function. By clarifying the connection between the softmax function and the Boltzmann operator, we prove that a single layer of self-attention with low-rank weight matrices possesses the capability to perfectly capture the context of an entire input sequence. As a consequence, we show that one-layer and single-head Transformers have a memorization capacity for finite samples, and that Transformers consisting of one self-attention layer with two feed-forward neural networks are universal approximators for continuous functions on a compact domain.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/17862"} +{"video_file": "nLWiR5P3wr_39017193.mp4", "openreview_id": "nLWiR5P3wr", "slideslive_id": 39017193, "venue": "iclr2024", "title": "Input-gradient space particle inference for neural network ensembles", "status": "Spotlight", "keywords": "deep ensembles;diversity;input gradient;robustness;covariate shift;particle variational inference", "tldr": "We learn an ensemble of neural networks that is diverse with respect to their input gradients.", "abstract": "Deep Ensembles (DEs) demonstrate improved accuracy, calibration and robustness to perturbations over single neural networks partly due to their functional diversity. Particle-based variational inference (ParVI) methods enhance diversity by formalizing a repulsion term based on a network similarity kernel. 
However, weight-space repulsion is inefficient due to over-parameterization, while direct function-space repulsion has been found to produce little improvement over DEs. To sidestep these difficulties, we propose First-order Repulsive Deep Ensemble (FoRDE), an ensemble learning method based on ParVI, which performs repulsion in the space of first-order input gradients. As input gradients uniquely characterize a function up to translation and are much smaller in dimension than the weights, this method guarantees that ensemble members are functionally different. Intuitively, diversifying the input gradients encourages each network to learn different features, which is expected to improve the robustness of an ensemble. Experiments on image classification datasets and transfer learning tasks show that FoRDE significantly outperforms the gold-standard DEs and other ensemble methods in accuracy and calibration under covariate shift due to input perturbations.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17861"} +{"video_file": "nO344avRib_39017449.mp4", "openreview_id": "nO344avRib", "slideslive_id": 39017449, "venue": "iclr2024", "title": "A Simple and Scalable Representation for Graph Generation", "status": "Poster", "keywords": "Graph generative models;graph neural networks;graph representation", "tldr": "We propose a simple and scalable edge list-based graph representation, gap encoded edge list (GEEL).", "abstract": "Recently, there has been a surge of interest in employing neural networks for graph generation, a fundamental statistical learning problem with critical applications like molecule design and community analysis. However, most approaches encounter significant limitations when generating large-scale graphs. This is due to their requirement to output the full adjacency matrices whose size grows quadratically with the number of nodes. In response to this challenge, we introduce a new, simple, and scalable graph representation named gap encoded edge list (GEEL) that has a small representation size that aligns with the number of edges. In addition, GEEL significantly reduces the vocabulary size by incorporating the gap encoding and bandwidth restriction schemes. GEEL can be autoregressively generated with the incorporation of node positional encoding, and we further extend GEEL to deal with attributed graphs by designing a new grammar. Our findings reveal that the adoption of this compact representation not only enhances scalability but also bolsters performance by simplifying the graph generation process. 
We conduct a comprehensive evaluation across ten non-attributed and two molecular graph generation tasks, demonstrating the effectiveness of GEEL.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/17859"}
{"video_file": "nnicaG5xiH_39018679.mp4", "openreview_id": "nnicaG5xiH", "slideslive_id": 39018679, "venue": "iclr2024", "title": "Interpretable Meta-Learning of Physical Systems", "status": "Poster", "keywords": "meta-learning;physical systems;multi-task learning;interpretable deep learning;identifiability;electrostatics;robotics;control;reinforcement learning;scientific discovery", "tldr": "We propose a new multi-environment meta-learning architecture for physical systems called CAMEL, that learns and generalizes at minimal cost and with interpretable weights.", "abstract": "Machine learning methods can be a valuable aid in the scientific process, but they need to face challenging settings where data come from inhomogeneous experimental conditions. Recent meta-learning methods have made significant progress in multi-task learning, but they rely on black-box neural networks, resulting in high computational costs and limited interpretability. We introduce CAMEL, a new meta-learning architecture capable of learning efficiently from multiple environments, with an affine structure with respect to the learning task. We prove that CAMEL can identify the physical parameters of the system, enabling interpretable learning. We demonstrate the competitive generalization performance and the low computational cost of our method by comparing it to state-of-the-art algorithms on physical systems, ranging from toy models to complex, non-analytical systems. The interpretability of our method is illustrated with original applications to parameter identification and to adaptive control and system identification.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17843"}
{"video_file": "nsNyDvNQTc_39018700.mp4", "openreview_id": "nsNyDvNQTc", "slideslive_id": 39018700, "venue": "iclr2024", "title": "Leveraging Uncertainty Estimates To Improve Classifier Performance", "status": "Poster", "keywords": "Uncertainty estimation;binary classification;imbalanced classification;score recalibration;uncertainty based decision making;classification decision boundary;bin packing;estimation bias;posterior networks", "tldr": "2D decision boundary on model score & uncertainty space boosts binary classification performance", "abstract": "Binary classification typically involves predicting the label of an instance based on whether the model score for the positive class exceeds a threshold chosen based on the application requirements (e.g., maximizing recall for a precision bound). However, model scores are often not aligned with true positivity rate. This is especially true when the training involves a differential sampling of classes or there is distributional drift between train and test settings. In this paper, we provide theoretical analysis and empirical evidence of the dependence of estimation bias on both uncertainty and model score. Further, we formulate the decision boundary selection using both model score and uncertainty, prove that it is NP-hard, and present algorithms based on dynamic programming and isotonic regression. 
Evaluation of the proposed algorithms on three real-world datasets yield 25%-40% improvement in recall at high precision bounds over the traditional approach of using model score alone, highlighting the benefits of leveraging uncertainty.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17840"}
{"video_file": "oAMArMMQxb_39017391.mp4", "openreview_id": "oAMArMMQxb", "slideslive_id": 39017391, "venue": "iclr2024", "title": "Sampling Multimodal Distributions with the Vanilla Score: Benefits of Data-Based Initialization", "status": "Poster", "keywords": "sampling;score matching;contrastive divergence;langevin dynamics", "tldr": "We show that sampling multimodal distributions with the vanilla score is provably fixed by data-based initialization.", "abstract": "There is a long history, as well as a recent explosion of interest, in statistical and generative modeling approaches based on score functions --- derivatives of the log-likelihood of a distribution. In seminal works, Hyv\u00e4rinen proposed vanilla score matching as a way to learn distributions from data by computing an estimate of the score function of the underlying ground truth, and established connections between this method and established techniques like Contrastive Divergence and Pseudolikelihood estimation. It is by now well-known that vanilla score matching has significant difficulties learning multimodal distributions. Although there are various ways to overcome this difficulty, the following question has remained unanswered --- is there a natural way to sample multimodal distributions using just the vanilla score? Inspired by a long line of related experimental works, we prove that the Langevin diffusion with early stopping, initialized at the empirical distribution, and run on a score function estimated from data successfully generates natural multimodal distributions (mixtures of log-concave distributions).", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/17830"}
{"video_file": "oEF7qExD9F_39018860.mp4", "openreview_id": "oEF7qExD9F", "slideslive_id": 39018860, "venue": "iclr2024", "title": "LMUFormer: Low Complexity Yet Powerful Spiking Model With Legendre Memory Units", "status": "Poster", "keywords": "Legendre Memory Unit;Spiking Neural Network;Recurrent Neural Network", "tldr": "In this paper, we propose a novel model named LMUFormer, which enhances the LMU with patch embedding, channel mixer and SNN, and is low-computational, and powerful in the STC and NLP fields.", "abstract": "Transformer models have demonstrated high accuracy in numerous applications but have high complexity and lack sequential processing capability making them ill-suited for many streaming applications at the edge where devices are heavily resource-constrained. Thus motivated, many researchers have proposed reformulating the transformer models as RNN modules which modify the self-attention computation with explicit states. However, these approaches often incur significant performance degradation. The ultimate goal is to develop a model that has the following properties: parallel training, streaming and low-cost inference, and state-of-the-art (SOTA) performance. In this paper, we propose a new direction to achieve this goal. We show how architectural modifications to a fully-sequential recurrent model can help push its performance toward Transformer models while retaining its sequential processing capability. 
Specifically, inspired by the recent success of Legendre Memory Units (LMU) in sequence learning tasks, we propose LMUFormer, which augments the LMU with convolutional patch embedding and convolutional channel mixer. Moreover, we present a spiking version of this architecture, which introduces the benefit of states within the patch embedding and channel mixer modules while simultaneously reducing the computing complexity. We evaluated our architectures on multiple sequence datasets. Of particular note is our performance on the Speech Commands V2 dataset (35 classes). In comparison to SOTA transformer-based models within the ANN domain, our LMUFormer demonstrates comparable performance while necessitating a remarkable 70\u00d7 reduction in parameters and a substantial 140\u00d7 decrement in FLOPs. Furthermore, when benchmarked against extant low-complexity SNN variants, our model establishes a new SOTA with an accuracy of 96.12%. Additionally, owing to our model's proficiency in real-time data processing, we are able to achieve a 32.03% reduction in sequence length, all while incurring an inconsequential decline in performance.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/17828"}
{"video_file": "oGNdBvymod_39017389.mp4", "openreview_id": "oGNdBvymod", "slideslive_id": 39017389, "venue": "iclr2024", "title": "Entropy-MCMC: Sampling from Flat Basins with Ease", "status": "Poster", "keywords": "MCMC;Bayesian Deep Learning;Flatness-aware Learning", "tldr": "We propose a practical MCMC algorithm to sample from the flat basins of deep neural network posteriors.", "abstract": "Bayesian deep learning counts on the quality of posterior distribution estimation. However, the posterior of deep neural networks is highly multi-modal in nature, with local modes exhibiting varying generalization performance. Given a practical budget, targeting at the original posterior can lead to suboptimal performance, as some samples may become trapped in \"bad\" modes and suffer from overfitting. Leveraging the observation that \"good\" modes with low generalization error often reside in flat basins of the energy landscape, we propose to bias sampling on the posterior toward these flat regions. Specifically, we introduce an auxiliary guiding variable, the stationary distribution of which resembles a smoothed posterior free from sharp modes, to lead the MCMC sampler to flat basins. By integrating this guiding variable with the model parameter, we create a simple joint distribution that enables efficient sampling with minimal computational overhead. We prove the convergence of our method and further show that it converges faster than several existing flatness-aware methods in the strongly convex setting. 
Empirical results demonstrate that our method can successfully sample from flat basins of the posterior, and outperforms all compared baselines on multiple benchmarks including classification, calibration, and out-of-distribution detection.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17827"}
{"video_file": "oMLQB4EZE1_39017386.mp4", "openreview_id": "oMLQB4EZE1", "slideslive_id": 39017386, "venue": "iclr2024", "title": "DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genomes", "status": "Poster", "keywords": "DNA;Genome;Language Model;Foundation Model;Benchmark", "tldr": "An efficient and effective foundation model for multi-species genomes.", "abstract": "Decoding the linguistic intricacies of the genome is a crucial problem in biology, and pre-trained foundational models such as DNABERT and Nucleotide Transformer have made significant strides in this area. Existing works have largely hinged on k-mer, fixed-length permutations of A, T, C, and G, as the token of the genome language due to its simplicity. However, we argue that the computation and sample inefficiencies introduced by k-mer tokenization are primary obstacles in developing large genome foundational models. We provide conceptual and empirical insights into genome tokenization, building on which we propose to replace k-mer tokenization with Byte Pair Encoding (BPE), a statistics-based data compression algorithm that constructs tokens by iteratively merging the most frequent co-occurring genome segment in the corpus. We demonstrate that BPE not only overcomes the limitations of k-mer tokenization but also benefits from the computational efficiency of non-overlapping tokenization. Based on these insights, we introduce DNABERT-2, a refined genome foundation model that adapts an efficient tokenizer and employs multiple strategies to overcome input length constraints, reduce time and memory expenditure, and enhance model capability. Furthermore, we identify the absence of a comprehensive and standardized benchmark for genome understanding as another significant impediment to fair comparative analysis. In response, we propose the Genome Understanding Evaluation (GUE), a comprehensive multi-species genome classification dataset that amalgamates 36 distinct datasets across 9 tasks, with input lengths ranging from 70 to 10000. Through comprehensive experiments on the GUE benchmark, we demonstrate that DNABERT-2 achieves comparable performance to the state-of-the-art model with 21\u00d7 fewer parameters and approximately 92\u00d7 less GPU time in pre-training. Compared to DNABERT, while being 3\u00d7 more efficient, DNABERT-2 outperforms it on 23 out of 28 datasets, with an average improvement of 6 absolute scores on GUE. 
The code, data, and pre-trained model are available at https://github.com/MAGICS-LAB/DNABERT_2.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17823"}
{"video_file": "oMNkj4ER7V_39017385.mp4", "openreview_id": "oMNkj4ER7V", "slideslive_id": 39017385, "venue": "iclr2024", "title": "A Unified Framework for Bayesian Optimization under Contextual Uncertainty", "status": "Poster", "keywords": "Bayesian optimization;Gaussian processes", "tldr": "Generalization of distributionally robust BO to other notions of risk such as value-at-risk, mean-variance tradeoff etc., along with a general algorithm with a regret bound.", "abstract": "Bayesian optimization under contextual uncertainty (BOCU) is a family of BO problems in which the learner makes a decision prior to observing the context and must manage the risks involved. Distributionally robust BO (DRBO) is a subset of BOCU that affords robustness against context distribution shift, and includes the optimization of expected values and worst-case values as special cases. By considering the first derivatives of the DRBO objective, we generalize DRBO to one that includes several other uncertainty objectives studied in the BOCU literature such as worst-case sensitivity (and thus notions of risk such as variance, range, and conditional value-at-risk) and mean-risk tradeoffs. We develop a general Thompson sampling algorithm that is able to optimize any objective within the BOCU framework, analyze its theoretical properties, and compare it to suitable baselines across different experimental settings and uncertainty objectives.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17822"}
{"video_file": "oO6FsMyDBt_39018919.mp4", "openreview_id": "oO6FsMyDBt", "slideslive_id": 39018919, "venue": "iclr2024", "title": "Graph Neural Networks for Learning Equivariant Representations of Neural Networks", "status": "Oral", "keywords": "Deep weight space;Graph neural networks;Transformers;Permutation equivariance;Implicit neural representations;Networks for networks;Neural graphs", "tldr": "We propose graph neural networks that learn permutation equivariant representations of other neural networks", "abstract": "Neural networks that process the parameters of other neural networks find applications in domains as diverse as classifying implicit neural representations, generating neural network weights, and predicting generalization errors. However, existing approaches either overlook the inherent permutation symmetry in the neural network or rely on intricate weight-sharing patterns to achieve equivariance, while ignoring the impact of the network architecture itself. In this work, we propose to represent neural networks as computational graphs of parameters, which allows us to harness powerful graph neural networks and transformers that preserve permutation symmetry. Consequently, our approach enables a single model to encode neural computational graphs with diverse architectures. We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations, predicting generalization performance, and learning to optimize, while consistently outperforming state-of-the-art methods. 
The source code is open-sourced at https://github.com/mkofinas/neural-graphs.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/17821"} +{"video_file": "oOwDQl8haC_39017383.mp4", "openreview_id": "oOwDQl8haC", "slideslive_id": 39017383, "venue": "iclr2024", "title": "Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators", "status": "Poster", "keywords": "Deep Neural Networks;Quantized Neural Networks;Network Quantization;Accumulators;Accelerators;Inference;Computer Vision;Language Models", "tldr": "We show that model can be fine tuned for inference with 12 bit accumulators, and develop methods for training with even smaller accumulators.", "abstract": "The majority of the research on the quantization of Deep Neural Networks (DNNs) is focused on reducing the precision of tensors visible by high-level frameworks (e.g., weights, activations, and gradients). However, current hardware still relies on high-accuracy core operations. Most significant is the operation of accumulating products. This high-precision accumulation operation is gradually becoming the main computational bottleneck. This is because, so far, the usage of low-precision accumulators led to a significant degradation in performance. In this work, we present a simple method to train and fine-tune DNNs, to allow, for the first time, utilization of cheaper,\n12\n-bits accumulators, with no significant degradation in accuracy. Lastly, we show that as we decrease the accumulation precision further, using fine-grained gradient approximations can improve the DNN accuracy.", "primary_area": "infrastructure, software libraries, hardware, etc.", "site": "https://iclr.cc/virtual/2024/poster/17819"} +{"video_file": "oTRwljRgiv_39017045.mp4", "openreview_id": "oTRwljRgiv", "slideslive_id": 39017045, "venue": "iclr2024", "title": "ExeDec: Execution Decomposition for Compositional Generalization in Neural Program Synthesis", "status": "Oral", "keywords": "Program Synthesis;Programming By Example;Generalization;Compositional Generalization", "tldr": "We describe different forms of compositional generalization that are desirable in program synthesis, and present a decomposition-based approach to synthesis achieving higher compositional generalization on two domains compared to prior approaches.", "abstract": "When writing programs, people have the ability to tackle a new complex task by decomposing it into smaller and more familiar subtasks. While it is difficult to measure whether neural program synthesis methods have similar capabilities, we can measure whether they compositionally generalize, that is, whether a model that has been trained on the simpler subtasks is subsequently able to solve more complex tasks. In this paper, we characterize several different forms of compositional generalization that are desirable in program synthesis, forming a meta-benchmark which we use to create generalization tasks for two popular datasets, RobustFill and DeepCoder. We then propose ExeDec, a novel decomposition-based synthesis strategy that predicts execution subgoals to solve problems step-by-step informed by program execution at each step. When used with Transformer models trained from scratch, ExeDec has better synthesis performance and greatly improved compositional generalization ability compared to baselines. 
Finally, we use our benchmarks to demonstrate that LLMs struggle to compositionally generalize when asked to do programming-by-example in a few-shot setting, but an ExeDec-style prompting approach can improve the generalization ability and overall performance.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/17817"}
+{"video_file": "oYjPk8mqAV_39017381.mp4", "openreview_id": "oYjPk8mqAV", "slideslive_id": 39017381, "venue": "iclr2024", "title": "Magnushammer: A Transformer-Based Approach to Premise Selection", "status": "Poster", "keywords": "transformers;interactive theorem proving;automated reasoning;contrastive learning;premise selection", "tldr": "Contrastively trained transformers outperform state-of-the-art symbolic methods for premise selection, a challenging reasoning task of selecting relevant facts for proving new theorems in formal mathematics.", "abstract": "This paper presents a novel approach to premise selection, a crucial reasoning task in automated theorem proving. Traditionally, symbolic methods that rely on extensive domain knowledge and engineering effort are applied to this task. In contrast, this work demonstrates that contrastive training with the transformer architecture can achieve higher-quality retrieval of relevant premises, without the knowledge or feature engineering overhead. Our method, Magnushammer, outperforms the most advanced and widely used automation tool in interactive theorem proving called Sledgehammer. On the PISA and miniF2f benchmarks Magnushammer achieves 59.5 (against 38.3) and 34.0 (against 20.9) success rates, respectively. By combining Magnushammer with a language-model-based automated theorem prover, we further improve the state-of-the-art proof success rate from 57.0 to 71.0 on the PISA benchmark using 4x fewer parameters. Moreover, we develop and open source a novel dataset for premise selection, containing textual representations of (proof state, relevant premise) pairs. To the best of our knowledge, this is the largest available premise selection dataset, and the first dataset of this kind for the Isabelle proof assistant.", "primary_area": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17814"}
+{"video_file": "ojIJZDNIBj_39017378.mp4", "openreview_id": "ojIJZDNIBj", "slideslive_id": 39017378, "venue": "iclr2024", "title": "Copula Conformal prediction for multi-step time series prediction", "status": "Poster", "keywords": "Conformal Prediction;time series;uncertainty quantification;calibration;RNN", "tldr": "significantly improve efficiency/sharpness of conformal prediction confidence intervals, for multi-step time series forecasting, by modeling dependence of time steps using copulas", "abstract": "Accurate uncertainty measurement is a key step in building robust and reliable machine learning systems. Conformal prediction is a distribution-free uncertainty quantification framework popular for its ease of implementation, finite-sample coverage guarantees, and generality for underlying prediction algorithms. However, existing conformal prediction approaches for time series are limited to single-step prediction without considering the temporal dependency. In this paper, we propose the Copula Conformal Prediction algorithm for multivariate, multi-step Time Series forecasting, CopulaCPTS. We prove that CopulaCPTS has finite-sample validity guarantee. 
On four synthetic and real-world multivariate time series datasets, we show that CopulaCPTS produces more calibrated and efficient confidence intervals for multi-step prediction tasks than existing techniques. Our code is open-sourced at https://github.com/Rose-STL-Lab/CopulaCPTS.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17807"} +{"video_file": "okYdj8Ysru_39018921.mp4", "openreview_id": "okYdj8Ysru", "slideslive_id": 39018921, "venue": "iclr2024", "title": "A Lie Group Approach to Riemannian Batch Normalization", "status": "Poster", "keywords": "Lie Groups;Riemannian Batch Normalization;SPD Neural Networks", "tldr": "We propose a general framework for Riemannian batch normalization over Lie groups (LieBN), and showcase our LieBN on diverse Lie groups of SPD manifolds.", "abstract": "Manifold-valued measurements exist in numerous applications within computer vision and machine learning. Recent studies have extended Deep Neural Networks (DNNs) to manifolds, and concomitantly, normalization techniques have also been adapted to several manifolds, referred to as Riemannian normalization. Nonetheless, most of the existing Riemannian normalization methods have been derived in an ad hoc manner and only apply to specific manifolds. This paper establishes a unified framework for Riemannian Batch Normalization (RBN) techniques on Lie groups. Our framework offers the theoretical guarantee of controlling both the Riemannian mean and variance. Empirically, we focus on Symmetric Positive Definite (SPD) manifolds, which possess three distinct types of Lie group structures. Using the deformation concept, we generalize the existing Lie groups on SPD manifolds into three families of parameterized Lie groups. Specific normalization layers induced by these Lie groups are then proposed for SPD neural networks. We demonstrate the effectiveness of our approach through three sets of experiments: radar recognition, human action recognition, and electroencephalography (EEG) classification. The code is available at https://github.com/GitZH-Chen/LieBN.git.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/17806"} +{"video_file": "owziuM1nsR_39018836.mp4", "openreview_id": "owziuM1nsR", "slideslive_id": 39018836, "venue": "iclr2024", "title": "Recursive Generalization Transformer for Image Super-Resolution", "status": "Poster", "keywords": "Transformer;image super-resolution", "tldr": "Recursive Generalization Transformer for Image Super-Resolution", "abstract": "Transformer architectures have exhibited remarkable performance in image super-resolution (SR). Since the quadratic computational complexity of the self-attention (SA) in Transformer, existing methods tend to adopt SA in a local region to reduce overheads. However, the local design restricts the global context exploitation, which is crucial for accurate image reconstruction. In this work, we propose the Recursive Generalization Transformer (RGT) for image SR, which can capture global spatial information and is suitable for high-resolution images. Specifically, we propose the recursive-generalization self-attention (RG-SA). It recursively aggregates input features into representative feature maps, and then utilizes cross-attention to extract global information. 
Meanwhile, the channel dimensions of attention matrices (query, key, and value) are further scaled to mitigate the redundancy in the channel domain. Furthermore, we combine the RG-SA with local self-attention to enhance the exploitation of the global context, and propose the hybrid adaptive integration (HAI) for module integration. The HAI allows the direct and effective fusion between features at different levels (local or global). Extensive experiments demonstrate that our RGT outperforms recent state-of-the-art methods quantitatively and qualitatively. Code and pre-trained models are available at https://github.com/zhengchen1999/RGT.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/17801"}
+{"video_file": "ox2ATRM90I_39017374.mp4", "openreview_id": "ox2ATRM90I", "slideslive_id": 39017374, "venue": "iclr2024", "title": "Yet Another ICU Benchmark: A Flexible Multi-Center Framework for Clinical ML", "status": "Poster", "keywords": "ICU;Intensive Care Unit;EHR;ML;Time Series;Patient Monitoring;Clinical ML;Benchmark;Multi-Center;MIMIC;eICU;HiRID;AmsterdamUMCdb", "tldr": "We introduce Yet Another ICU Benchmark: a flexible, holistic framework for the standardization of clinical prediction model experiments.", "abstract": "Medical applications of machine learning (ML) have experienced a surge in popularity in recent years. Given the abundance of available data from electronic health records, the intensive care unit (ICU) is a natural habitat for ML. Models have been proposed to address numerous ICU prediction tasks like the early detection of complications. While authors frequently report state-of-the-art performance, it is challenging to verify claims of superiority. Datasets and code are not always published, and cohort definitions, preprocessing pipelines, and training setups are difficult to reproduce. This work introduces Yet Another ICU Benchmark (YAIB), a modular framework that allows researchers to define reproducible and comparable clinical ML experiments; we offer an end-to-end solution from cohort definition to model evaluation. The framework natively supports most open-access ICU datasets (MIMIC III/IV, eICU, HiRID, AUMCdb) and is easily adaptable to future ICU datasets. Combined with a transparent preprocessing pipeline and extensible training code for multiple ML and deep learning models, YAIB enables unified model development, transfer, and evaluation. Our benchmark comes with five predefined established prediction tasks (mortality, acute kidney injury, sepsis, kidney function, and length of stay) developed in collaboration with clinicians. Adding further tasks is straightforward by design. Using YAIB, we demonstrate that the choice of dataset, cohort definition, and preprocessing have a major impact on the prediction performance \u2014 often more so than model class \u2014 indicating an urgent need for YAIB as a holistic benchmarking tool. 
We provide our work to the clinical ML community to accelerate method development and enable real-world clinical implementations.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/17800"}
+{"video_file": "pB1FeRSQxh_39019203.mp4", "openreview_id": "pB1FeRSQxh", "slideslive_id": 39019203, "venue": "iclr2024", "title": "Near-Optimal Quantum Algorithm for Minimizing the Maximal Loss", "status": "Poster", "keywords": "Quantum Algorithms;Quantum Query Complexity;Convex Optimization;Minimizing Loss", "tldr": "We conduct a systematic study of quantum algorithms and lower bounds for minimizing the maximum of N convex, Lipschitz functions.", "abstract": "The problem of minimizing the maximum of N convex, Lipschitz functions plays significant roles in optimization and machine learning. It has a series of results, with the most recent one requiring O(N\u03f5^{\u22122/3} + \u03f5^{\u22128/3}) queries to a first-order oracle to compute an \u03f5-suboptimal point. On the other hand, quantum algorithms for optimization are rapidly advancing with speedups shown on many important optimization problems. In this paper, we conduct a systematic study of quantum algorithms and lower bounds for minimizing the maximum of N convex, Lipschitz functions. On one hand, we develop quantum algorithms with an improved complexity bound of O~(N\u03f5^{\u22125/3} + \u03f5^{\u22128/3}). On the other hand, we prove that quantum algorithms must take \u03a9~(N\u03f5^{\u22122/3}) queries to a first-order quantum oracle, showing that our dependence on N is optimal up to poly-logarithmic factors.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/17789"}
+{"video_file": "pFOoOdaiue_39017358.mp4", "openreview_id": "pFOoOdaiue", "slideslive_id": 39017358, "venue": "iclr2024", "title": "Robust Adversarial Reinforcement Learning via Bounded Rationality Curricula", "status": "Spotlight", "keywords": "reinforcement learning;adversarial;bounded rationality;curriculum", "tldr": "A novel approach for adversarial reinforcement learning that adopts a game-theoretical perspective based on bounded rationality to improve the robustness of obtained policies.", "abstract": "Robustness against adversarial attacks and distribution shifts is a long-standing goal of Reinforcement Learning (RL). To this end, Robust Adversarial Reinforcement Learning (RARL) trains a protagonist against destabilizing forces exercised by an adversary in a competitive zero-sum Markov game, whose optimal solution, i.e., rational strategy, corresponds to a Nash equilibrium. However, finding Nash equilibria requires facing complex saddle point optimization problems, which can be prohibitive to solve, especially for high-dimensional control. In this paper, we propose a novel approach for adversarial RL based on entropy regularization to ease the complexity of the saddle point optimization problem. We show that the solution of this entropy-regularized problem corresponds to a Quantal Response Equilibrium (QRE), a generalization of Nash equilibria that accounts for bounded rationality, i.e., agents sometimes play random actions instead of optimal ones. Crucially, the connection between the entropy-regularized objective and QRE enables free modulation of the rationality of the agents by simply tuning the temperature coefficient. 
We leverage this insight to propose our novel algorithm, Quantal Adversarial RL (QARL), which gradually increases the rationality of the adversary in a curriculum fashion until it is fully rational, easing the complexity of the optimization problem while retaining robustness. We provide extensive evidence of QARL outperforming RARL and recent baselines across several MuJoCo locomotion and navigation problems in overall performance and robustness.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/17780"} +{"video_file": "pzElnMrgSD_39017338.mp4", "openreview_id": "pzElnMrgSD", "slideslive_id": 39017338, "venue": "iclr2024", "title": "How I Warped Your Noise: a Temporally-Correlated Noise Prior for Diffusion Models", "status": "Oral", "keywords": "diffusion models; temporal coherency; Gaussian noise field; continuous white noise; noise transport", "tldr": "We propose a method to warp a Gaussian noise sample while keeping it Gaussian and apply it to diffusion models to help temporal coherency.", "abstract": "Video editing and generation methods often rely on pre-trained image-based diffusion models. During the diffusion process, however, the reliance on rudimentary noise sampling techniques that do not preserve correlations present in subsequent frames of a video is detrimental to the quality of the results. This either produces high-frequency flickering, or texture-sticking artifacts that are not amenable to post-processing. With this in mind, we propose a novel method for preserving temporal correlations in a sequence of noise samples. This approach is materialized by a novel noise representation, dubbed\n\u222b\n-noise (integral noise), that reinterprets individual noise samples as a continuously integrated noise field: pixel values do not represent discrete values, but are rather the integral of an underlying infinite-resolution noise over the pixel area. Additionally, we propose a carefully tailored transport method that uses\n\u222b\n-noise to accurately advect noise samples over a sequence of frames, maximizing the correlation between different frames while also preserving the noise properties. Our results demonstrate that the proposed\n\u222b\n-noise can be used for a variety of tasks, such as video restoration, surrogate rendering, and conditional video generation.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/17756"} +{"video_file": "qL9gogRepu_39017328.mp4", "openreview_id": "qL9gogRepu", "slideslive_id": 39017328, "venue": "iclr2024", "title": "Zero and Few-shot Semantic Parsing with Ambiguous Inputs", "status": "Poster", "keywords": "semantic parsing;text-to-code;ambiguity;NLP;calibration", "tldr": "Ambiguity is pervasive in natural language but often is ignored when translating between natural and formal languages. We propose a new dataset and data framework for testing models on ambiguous language and test current models.", "abstract": "Despite the frequent challenges posed by ambiguity when representing meaning via natural language, it is often ignored or deliberately removed in tasks mapping language to formally-designed representations, which generally assume a one-to-one mapping between linguistic and formal representations. We attempt to address this shortcoming by introducing AmP, a framework, dataset, and challenge for translating ambiguous natural language to formal representations like logic and code. We define templates and generate data for five well-documented linguistic ambiguities. 
Using AmP, we investigate how several few-shot text-to-code systems handle ambiguity, introducing three new metrics. We find that large pre-trained models perform poorly at capturing the distribution of possible meanings without deliberate instruction. However, models are able to capture the distribution well when ambiguity is attested in their inputs. These results motivate a call for including ambiguity explicitly in datasets and promote considering the distribution of possible outputs when evaluating systems. We release our data and code.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/17736"} +{"video_file": "qPFsIbF3V6_39018781.mp4", "openreview_id": "qPFsIbF3V6", "slideslive_id": 39018781, "venue": "iclr2024", "title": "Guess & Sketch: Language Model Guided Transpilation", "status": "Poster", "keywords": "transpilation;program translation;assembly code;language model;neurosymbolic;machine learning", "tldr": "We introduce a neurosymbolic approach to assembly language transpilation that outperforms GPT-4, an engineered transpiler, and fine-tuned language models.", "abstract": "Maintaining legacy software requires many software and systems engineering hours. Assembly code programs, which demand low-level control over the computer machine state and have no variable names, are particularly difficult for humans to analyze. Existing conventional program translators guarantee correctness, but are hand-engineered for the source and target programming languages in question. Learned transpilation, i.e. automatic translation of code, offers an alternative to manual re-writing and engineering efforts. Automated symbolic program translation approaches guarantee correctness but struggle to scale to longer programs due to the exponentially large search space. Their rigid rule-based systems also limit their expressivity, so they can only reason about a reduced space of programs. Probabilistic neural language models (LMs) produce plausible outputs for every input, but do so at the cost of guaranteed correctness. In this work, we leverage the strengths of LMs and symbolic solvers in a neurosymbolic approach to learned transpilation for assembly code. Assembly code is an appropriate setting for a neurosymbolic approach, since assembly code can be divided into shorter non-branching basic blocks amenable to the use of symbolic methods. Guess & Sketch extracts alignment and confidence information from features of the LM then passes it to a symbolic solver to resolve semantic equivalence of the transpilation input and output. We test Guess & Sketch on three different test sets of assembly transpilation tasks, varying in difficulty, and show that it successfully transpiles 57.6% more examples than GPT-4 and 39.6% more examples than an engineered transpiler. We also share a training and evaluation dataset for this task.", "primary_area": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17733"} +{"video_file": "qV83K9d5WB_39017324.mp4", "openreview_id": "qV83K9d5WB", "slideslive_id": 39017324, "venue": "iclr2024", "title": "Large Language Models as Tool Makers", "status": "Poster", "keywords": "large language models;tool making;tool using;serving efficiency", "tldr": "No ethics review needed.", "abstract": "Recent research has highlighted the potential of large language models (LLMs) to improve their problem-solving capabilities with the aid of suitable external tools. 
In our work, we further advance this concept by introducing a closed-loop framework, referred to as LLMs As Tool Makers (LATM), where LLMs create their own reusable tools for problem-solving. Our approach consists of two phases: 1) tool making: an LLM acts as the tool maker that crafts tools for a set of tasks, where a tool is implemented as a Python utility function. 2) tool using: another LLM acts as the tool user, which applies the tool built by the tool maker for problem-solving. The tool user can be either the same or a different LLM from the tool maker. On the problem-solving server side, tool-making enables continual tool generation and caching as new requests emerge. This framework enables subsequent requests to access cached tools via their corresponding APIs, enhancing the efficiency of task resolution. Beyond enabling LLMs to create their own tools, our framework also uncovers intriguing opportunities to optimize the serving cost of LLMs: Recognizing that tool-making requires more sophisticated capabilities, we assign this task to a powerful, albeit resource-intensive, model. Conversely, the simpler tool-using phase is delegated to a lightweight model. This strategic division of labor allows the once-off cost of tool-making to be spread over multiple instances of tool-using, significantly reducing average costs while maintaining strong performance. Furthermore, our method offers a functional cache through the caching and reuse of tools, which stores the functionality of a class of requests instead of the natural language responses from LLMs, thus extending the applicability of the conventional cache mechanism. We evaluate our approach across various complex reasoning tasks, including Big-Bench tasks. With GPT-4 as the tool maker and GPT-3.5 as the tool user, LATM demonstrates performance equivalent to using GPT-4 for both roles, but with a significantly reduced inference cost.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/17729"}
+{"video_file": "qmXedvwrT1_39017297.mp4", "openreview_id": "qmXedvwrT1", "slideslive_id": 39017297, "venue": "iclr2024", "title": "Learning Stackable and Skippable LEGO Bricks for Efficient, Reconfigurable, and Variable-Resolution Diffusion Modeling", "status": "Poster", "keywords": "Efficient diffusion models;short-span attention;local-feature enrichment;global-content orchestration;multi-scale generation", "tldr": "Exploring local-feature enrichment and global-content orchestration to construct efficient and flexible diffusion models", "abstract": "Diffusion models excel at generating photo-realistic images but come with significant computational costs in both training and sampling. While various techniques address these computational challenges, a less-explored issue is designing an efficient and adaptable network backbone for iterative refinement. Current options like U-Net and Vision Transformer often rely on resource-intensive deep networks and lack the flexibility needed for generating images at variable resolutions or with a smaller network than used in training. This study introduces LEGO bricks, which seamlessly integrate Local-feature Enrichment and Global-content Orchestration. These bricks can be stacked to create a test-time reconfigurable diffusion backbone, allowing selective skipping of bricks to reduce sampling costs and generate higher-resolution images than the training data. 
LEGO bricks enrich local regions with an MLP and transform them using a Transformer block while maintaining a consistent full-resolution image across all bricks. Experimental results demonstrate that LEGO bricks enhance training efficiency, expedite convergence, and facilitate variable-resolution image generation while maintaining strong generative performance. Moreover, LEGO significantly reduces sampling time compared to other methods, establishing it as a valuable enhancement for diffusion models. Our code and project page are available at https://jegzheng.github.io/LEGODiffusion.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/17718"} +{"video_file": "ruGY8v10mK_39019027.mp4", "openreview_id": "ruGY8v10mK", "slideslive_id": 39019027, "venue": "iclr2024", "title": "A Data-Driven Measure of Relative Uncertainty for Misclassification Detection", "status": "Poster", "keywords": "Misclassification detection;Uncertainty estimation;Trustworthy AI;Safety", "tldr": "We introduce a data-driven measure of relative uncertainty as a new method for detecting samples misclassified by machine learning classification models.", "abstract": "Misclassification detection is an important problem in machine learning, as it allows for the identification of instances where the model's predictions are unreliable. However, conventional uncertainty measures such as Shannon entropy do not provide an effective way to infer the real uncertainty associated with the model's predictions. In this paper, we introduce a novel data-driven measure of uncertainty relative to an observer for misclassification detection. By learning patterns in the distribution of soft-predictions, our uncertainty measure can identify misclassified samples based on the predicted class probabilities. Interestingly, according to the proposed measure, soft-predictions corresponding to misclassified instances can carry a large amount of uncertainty, even though they may have low Shannon entropy. We demonstrate empirical improvements over multiple image classification tasks, outperforming state-of-the-art misclassification detection methods.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/17676"} +{"video_file": "sMoifbuxjB_39017266.mp4", "openreview_id": "sMoifbuxjB", "slideslive_id": 39017266, "venue": "iclr2024", "title": "Towards Meta-Pruning via Optimal Transport", "status": "Spotlight", "keywords": "Pruning;Fusion", "tldr": "Marrying Pruning and Fusion", "abstract": "Structural pruning of neural networks conventionally relies on identifying and discarding less important neurons, a practice often resulting in significant accuracy loss that necessitates subsequent fine-tuning efforts. This paper introduces a novel approach named Intra-Fusion, challenging this prevailing pruning paradigm. Unlike existing methods that focus on designing meaningful neuron importance metrics, Intra-Fusion redefines the overlying pruning procedure. Through utilizing the concepts of model fusion and Optimal Transport, we leverage an agnostically given importance metric to arrive at a more effective sparse model representation. Notably, our approach achieves substantial accuracy recovery without the need for resource-intensive fine-tuning, making it an efficient and promising tool for neural network compression. 
Additionally, we explore how fusion can be added to the pruning process to significantly decrease the training time while maintaining competitive performance. We benchmark our results for various networks on commonly used datasets such as CIFAR-10, CIFAR-100, and ImageNet. More broadly, we hope that the proposed Intra-Fusion approach invigorates exploration into a fresh alternative to the predominant compression approaches. Our code is available here.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/17651"} +{"video_file": "sSyytcewxe_39017035.mp4", "openreview_id": "sSyytcewxe", "slideslive_id": 39017035, "venue": "iclr2024", "title": "Divide and not forget: Ensemble of selectively trained experts in Continual Learning", "status": "Poster", "keywords": "continual learning;class incremental learning", "tldr": "Class Incremental Learning, no exemplars, training from scratch. We design an ensemble of experts method, where in each task only one expert is finetuned, but all contribute during inference. We show it is a promising approach for CIL.", "abstract": "Class-incremental learning is becoming more popular as it helps models widen their applicability while not forgetting what they already know. A trend in this area is to use a mixture-of-expert technique, where different models work together to solve the task. However, the experts are usually trained all at once using whole task data, which makes them all prone to forgetting and increasing computational burden. To address this limitation, we introduce a novel approach named SEED. SEED selects only one, the most optimal expert for a considered task, and uses data from this task to fine-tune only this expert. For this purpose, each expert represents each class with a Gaussian distribution, and the optimal expert is selected based on the similarity of those distributions. Consequently, SEED increases diversity and heterogeneity within the experts while maintaining the high stability of this ensemble method. The extensive experiments demonstrate that SEED achieves state-of-the-art performance in exemplar-free settings across various scenarios, showing the potential of expert diversification through data in continual learning.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/17645"} +{"video_file": "samyfu6G93_39017260.mp4", "openreview_id": "samyfu6G93", "slideslive_id": 39017260, "venue": "iclr2024", "title": "NeuroBack: Improving CDCL SAT Solving using Graph Neural Networks", "status": "Poster", "keywords": "Propositional satisfiability;Graph Neural Networks;CDCL SAT Solving;Backbone;Phase Prediction", "tldr": "The paper applies GNN to predict the backbone in an offline manner to obtain the phase information of important variables for improving CDCL solving.", "abstract": "Propositional satisfiability (SAT) is an NP-complete problem that impacts many research fields, such as planning, verification, and security. Mainstream modern SAT solvers are based on the Conflict-Driven Clause Learning (CDCL) algorithm. Recent work aimed to enhance CDCL SAT solvers using Graph Neural Networks (GNNs). However, so far this approach either has not made solving more effective, or required substantial GPU resources for frequent online model inferences. 
Aiming to make GNN improvements practical, this paper proposes an approach called NeuroBack, which builds on two insights: (1) predicting phases (i.e., values) of variables appearing in the majority (or even all) of the satisfying assignments are essential for CDCL SAT solving, and (2) it is sufficient to query the neural model only once for the predictions before the SAT solving starts. Once trained, the offline model inference allows NeuroBack to execute exclusively on the CPU, removing its reliance on GPU resources. To train NeuroBack, a new dataset called DataBack containing 120,286 data samples is created. Finally, NeuroBack is implemented as an enhancement to a state-of-the-art SAT solver called Kissat. As a result, it allowed Kissat to solve 5.2% more problems on the recent SAT competition problem set, SATCOMP-2022. NeuroBack therefore shows how machine learning can be harnessed to improve SAT solving in an effective and practical manner.", "primary_area": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17641"} +{"video_file": "skcTCdJz0f_39017257.mp4", "openreview_id": "skcTCdJz0f", "slideslive_id": 39017257, "venue": "iclr2024", "title": "Probabilistic Self-supervised Representation Learning via Scoring Rules Minimization", "status": "Poster", "keywords": "Self-supervised Learning;Probablistic Machine Learning;Proper Scoring Rule", "tldr": "We propose a novel probabilistic self-supervised learning via scoring rule minimization (ProSMin) to enhance representation quality and mitigate collapsing representations.", "abstract": "% Self-supervised learning methods have shown promising results across a wide range of tasks in computer vision, natural language processing, and multimodal analysis. However, self-supervised approaches come with a notable limitation, dimensional collapse, where a model doesn't fully utilize its capacity to encode information optimally. Motivated by this, we propose ProSMin, a novel probabilistic self-supervised learning approach that leverages the power of probabilistic models to enhance representation quality and mitigate collapsing representations. Our proposed approach involves two neural networks, the online network and the target network, which collaborate and learn the diverse distribution of representations from each other through probabilistic knowledge distillation. The two networks are trained via our new loss function based on proper scoring rules. We provide a theoretical justification for ProSMin and demonstrate its modified scoring rule. This insight validates the method's optimization process and contributes to its robustness and effectiveness in improving representation quality. We evaluate our probabilistic model on various downstream tasks, such as in-distribution generalization, out-of-distribution detection, dataset corruption, low-shot learning, and transfer learning. Our method achieves superior accuracy and calibration, outperforming the self-supervised baseline in a variety of experiments on large datasets such as ImageNet-O and ImageNet-C. ProSMin thus demonstrates its scalability and real-world applicability. 
Our code is publicly available: https://github.com/amirvhd/SSL-sore-rule.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/17637"}
+{"video_file": "slSmYGc8ee_39019112.mp4", "openreview_id": "slSmYGc8ee", "slideslive_id": 39019112, "venue": "iclr2024", "title": "How connectivity structure shapes rich and lazy learning in neural circuits", "status": "Poster", "keywords": "Computational neuroscience;recurrent neural networks;learning;connectivity structure;inductive bias;rich and lazy learning;deep learning theory;neural representations", "tldr": "We examine how initial connectivity structures influence the inclination toward different learning regimes in neural circuits.", "abstract": "In theoretical neuroscience, recent work leverages deep learning tools to explore how some network attributes critically influence its learning dynamics. Notably, initial weight distributions with small (resp. large) variance may yield a rich (resp. lazy) regime, where significant (resp. minor) changes to network states and representation are observed over the course of learning. However, in biology, neural circuit connectivity generally has a low-rank structure and therefore differs markedly from the random initializations generally used for these studies. As such, here we investigate how the structure of the initial weights \u2014 in particular their effective rank \u2014 influences the network learning regime. Through both empirical and theoretical analyses, we discover that high-rank initializations typically yield smaller network changes indicative of lazier learning, a finding we also confirm with experimentally-driven initial connectivity in recurrent neural networks. Conversely, low-rank initialization biases learning towards richer learning. Importantly, however, as an exception to this rule, we find lazier learning can still occur with a low-rank initialization that aligns with task and data statistics. Our research highlights the pivotal role of initial weight structures in shaping learning regimes, with implications for metabolic costs of plasticity and risks of catastrophic forgetting.", "primary_area": "applications to neuroscience & cognitive science", "site": "https://iclr.cc/virtual/2024/poster/17636"}
+{"video_file": "spvaV5LELF_39017254.mp4", "openreview_id": "spvaV5LELF", "slideslive_id": 39017254, "venue": "iclr2024", "title": "Measuring Vision-Language STEM Skills of Neural Models", "status": "Poster", "keywords": "Benchmark;STEM;Multimodal;Vision-language models;Language models", "tldr": "We introduce a new challenge to test the STEM skills of neural models.", "abstract": "We introduce a new challenge to test the STEM skills of neural models. The problems in the real world often require solutions, combining knowledge from STEM (science, technology, engineering, and math). Unlike existing datasets, our dataset requires the understanding of multimodal vision-language information of STEM. Our dataset features one of the largest and most comprehensive datasets for the challenge. It includes 448 skills and 1,073,146 questions spanning all STEM subjects. Compared to existing datasets that often focus on examining expert-level ability, our dataset includes fundamental skills and questions designed based on the K-12 curriculum. We also add state-of-the-art foundation models such as CLIP and GPT-3.5-Turbo to our benchmark. 
Results show that the recent model advances only help master a very limited number of lower grade-level skills (2.5% in the third grade) in our dataset. In fact, these models are still well below (averaging 54.7%) the performance of elementary students, not to mention near expert-level performance. To understand and increase the performance on our dataset, we teach the models on a training split of our dataset. Even though we observe improved performance, the model performance remains relatively low compared to average elementary students. To solve STEM problems, we will need novel algorithmic innovations from the community.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/17631"}
+{"video_file": "t3vnnLeajU_39019081.mp4", "openreview_id": "t3vnnLeajU", "slideslive_id": 39019081, "venue": "iclr2024", "title": "Controlling Vision-Language Models for Multi-Task Image Restoration", "status": "Poster", "keywords": "Image restoration;vision-language model;low-level vision", "tldr": "Controlling vision-language models to understand image degradation and improve image restoration.", "abstract": "Vision-language models such as CLIP have shown great impact on diverse downstream tasks for zero-shot or label-free predictions. However, when it comes to low-level vision such as image restoration their performance deteriorates dramatically due to corrupted inputs. In this paper, we present a degradation-aware vision-language model (DA-CLIP) to better transfer pretrained vision-language models to low-level vision tasks as a multi-task framework for image restoration. More specifically, DA-CLIP trains an additional controller that adapts the fixed CLIP image encoder to predict high-quality feature embeddings. By integrating the embedding into an image restoration network via cross-attention, we are able to pilot the model to learn a high-fidelity image reconstruction. The controller itself will also output a degradation feature that matches the real corruptions of the input, yielding a natural classifier for different degradation types. In addition, we construct a mixed degradation dataset with synthetic captions for DA-CLIP training. Our approach advances state-of-the-art performance on both degradation-specific and unified image restoration tasks, showing a promising direction of prompting image restoration with large-scale pretrained vision-language models. Our code is available at https://github.com/Algolzw/daclip-uir.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/17626"}
+{"video_file": "t8eO0CiZJV_39017250.mp4", "openreview_id": "t8eO0CiZJV", "slideslive_id": 39017250, "venue": "iclr2024", "title": "Tailoring Self-Rationalizers with Multi-Reward Distillation", "status": "Poster", "keywords": "large language models;rationalization;explanation generation;explainability;rationale generation", "tldr": "Multi-reward conditioned algorithm that makes small LMs stronger rationalizers.", "abstract": "Large language models (LMs) are capable of generating free-text rationales to aid question answering. However, prior work 1) suggests that useful self-rationalization is emergent only at significant scales (e.g., 175B parameter GPT-3); and 2) focuses largely on downstream performance, ignoring the semantics of the rationales themselves, e.g., are they faithful, true, and helpful for humans? 
In this work, we enable small-scale LMs (\u223c200x smaller than GPT-3) to generate rationales that not only improve downstream task performance, but are also more plausible, consistent, and diverse, assessed both by automatic and human evaluation. Our method, MaRio (Multi-rewArd RatIOnalization), is a multi-reward conditioned self-rationalization algorithm that optimizes multiple distinct properties like plausibility, diversity and consistency. Results on three difficult question-answering datasets StrategyQA, QuaRel and OpenBookQA show that not only does MaRio improve task accuracy, but it also improves the self-rationalization quality of small LMs across the aforementioned axes better than a supervised fine-tuning (SFT) baseline. Extensive human evaluations confirm that MaRio rationales are preferred vs. SFT rationales, as well as qualitative improvements in plausibility and consistency.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/17624"} +{"video_file": "tGQirjzddO_39018989.mp4", "openreview_id": "tGQirjzddO", "slideslive_id": 39018989, "venue": "iclr2024", "title": "Reasoning with Latent Diffusion in Offline Reinforcement Learning", "status": "Poster", "keywords": "Reinforcement Learning;Diffusion Models", "tldr": "We leverage latent diffusion models to learn skill representations with which we learn high value policies from offline datasets.", "abstract": "Offline reinforcement learning (RL) holds promise as a means to learn high-reward policies from a static dataset, without the need for further environment interactions. However, a key challenge in offline RL lies in effectively stitching portions of suboptimal trajectories from the static dataset while avoiding extrapolation errors arising due to a lack of support in the dataset. Existing approaches use conservative methods that are tricky to tune and struggle with multi-modal data or rely on noisy Monte Carlo return-to-go samples for reward conditioning. In this work, we propose a novel approach that leverages the expressiveness of latent diffusion to model in-support trajectory sequences as compressed latent skills. This facilitates learning a Q-function while avoiding extrapolation error via batch-constraining. The latent space is also expressive and gracefully copes with multi-modal data. We show that the learned temporally-abstract latent space encodes richer task-specific information for offline RL tasks as compared to raw state-actions. This improves credit assignment and facilitates faster reward propagation during Q-learning. 
Our method demonstrates state-of-the-art performance on the D4RL benchmarks, particularly excelling in long-horizon, sparse-reward tasks.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/17620"}
+{"video_file": "tUVG9nGzgE_39017244.mp4", "openreview_id": "tUVG9nGzgE", "slideslive_id": 39017244, "venue": "iclr2024", "title": "Learning Conditional Invariances through Non-Commutativity", "status": "Poster", "keywords": "Invariance Learning;Domain Adaptation", "tldr": "Non-commutatively mapping source domain samples to the representation space of the target domain can efficiently learn conditional invariances, satisfying the sample-complexity needs for generalization on the target with samples from the source.", "abstract": "Invariance learning algorithms that conditionally filter out domain-specific random variables as distractors, do so based only on the data semantics, and not the target domain under evaluation. We show that a provably optimal and sample-efficient way of learning conditional invariances is by relaxing the invariance criterion to be non-commutatively directed towards the target domain. Under domain asymmetry, i.e., when the target domain contains semantically relevant information absent in the source, the risk of the encoder \u03c6\u2217 that is optimal on average across domains is strictly lower-bounded by the risk of the target-specific optimal encoder \u03a6\u03c4\u2217. We prove that non-commutativity steers the optimization towards \u03a6\u03c4\u2217 instead of \u03c6\u2217, bringing the H-divergence between domains down to zero, leading to a stricter bound on the target risk. Both our theory and experiments demonstrate that non-commutative invariance (NCI) can leverage source domain samples to meet the sample complexity needs of learning \u03a6\u03c4\u2217, surpassing SOTA invariance learning algorithms for domain adaptation, at times by over 2%, approaching the performance of an oracle. Implementation is available at https://github.com/abhrac/nci.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/17615"}
+{"video_file": "tVMPfEGT2w_39017242.mp4", "openreview_id": "tVMPfEGT2w", "slideslive_id": 39017242, "venue": "iclr2024", "title": "Provable Offline Preference-Based Reinforcement Learning", "status": "Spotlight", "keywords": "reinforcement learning theory;offline reinforcement learning", "tldr": "PAC offline reinforcement learning from preference feedback over trajectories using general function approximation.", "abstract": "In this paper, we investigate the problem of offline Preference-based Reinforcement Learning (PbRL) with human feedback where feedback is available in the form of preference between trajectory pairs rather than explicit rewards. Our proposed algorithm consists of two main steps: (1) estimate the implicit reward using Maximum Likelihood Estimation (MLE) with general function approximation from offline data and (2) solve a distributionally robust planning problem over a confidence set around the MLE. We consider the general reward setting where the reward can be defined over the whole trajectory and provide a novel guarantee that allows us to learn any target policy with a polynomial number of samples, as long as the target policy is covered by the offline data. This guarantee is the first of its kind with general function approximation. 
To measure the coverage of the target policy, we introduce a new single-policy concentrability coefficient, which can be upper bounded by the per-trajectory concentrability coefficient. We also establish lower bounds that highlight the necessity of such concentrability and the difference from standard RL, where state-action-wise rewards are directly observed. We further extend and analyze our algorithm when the feedback is given over action pairs.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/17613"} +{"video_file": "tiiAzqi6Ol_39017237.mp4", "openreview_id": "tiiAzqi6Ol", "slideslive_id": 39017237, "venue": "iclr2024", "title": "Compositional Preference Models for Aligning LMs", "status": "Poster", "keywords": "language model alignment;preference model;Reinforcement Learning from Human Feedback (RLHF);overoptimization;interpretability;scalable oversight;reward hacking", "tldr": "We decompose the preference assessment into interpretable features using a prompted LM and aggregate their scores using a logistic regression classifier.", "abstract": "As language models (LMs) become more capable, it is increasingly important to align them with human preferences. However, the dominant paradigm for training Preference Models (PMs) for that purpose suffers from fundamental limitations, such as lack of transparency and scalability, along with susceptibility to overfitting the preference dataset. We propose Compositional Preference Models (CPMs), a novel PM framework that decomposes one global preference assessment into several interpretable features, obtains scalar scores for these features from a prompted LM, and aggregates these scores using a logistic regression classifier. Through these simple steps, CPMs allow to control which properties of the preference data are used to train the preference model and to build it based on features that are believed to underlie the human preference judgment. Our experiments show that CPMs not only improve generalization and are more robust to overoptimization than standard PMs, but also that best-of-n samples obtained using CPMs tend to be preferred over samples obtained using conventional PMs. Overall, our approach demonstrates the benefits of endowing PMs with priors about which features determine human preferences while relying on LM capabilities to extract those features in a scalable and robust way.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/17607"} +{"video_file": "tplXNcHZs1_39017230.mp4", "openreview_id": "tplXNcHZs1", "slideslive_id": 39017230, "venue": "iclr2024", "title": "Diffusion Posterior Sampling for Linear Inverse Problem Solving: A Filtering Perspective", "status": "Poster", "keywords": "Diffusion Models;linear Inverse problem;Bayesian posterior sampling;Bayesian filtering;importance sampling", "tldr": "We reveal a link between Bayesian posterior sampling and Bayesian filtering in diffusion models, and propose a consistent Filtering Posterior Sampling (FPS) model which has state-of-the-art performance.", "abstract": "Diffusion models have achieved tremendous success in generating high-dimensional data like images, videos and audio. These models provide powerful data priors that can solve linear inverse problems in zero shot through Bayesian posterior sampling. However, exact posterior sampling for diffusion models is intractable. Current solutions often hinge on approximations that are either computationally expensive or lack strong theoretical guarantees. 
In this work, we introduce an efficient diffusion sampling algorithm for linear inverse problems that is guaranteed to be asymptotically accurate. We reveal a link between Bayesian posterior sampling and Bayesian filtering in diffusion models, proving the former as a specific instance of the latter. Our method, termed filtering posterior sampling, leverages sequential Monte Carlo methods to solve the corresponding filtering problem. It seamlessly integrates with all Markovian diffusion samplers, requires no model re-training, and guarantees accurate samples from the Bayesian posterior as particle counts rise. Empirical tests demonstrate that our method generates better or comparable results than leading zero-shot diffusion posterior samplers on tasks like image inpainting, super-resolution, and deblurring.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/17600"} +{"video_file": "tqh1zdXIra_39018920.mp4", "openreview_id": "tqh1zdXIra", "slideslive_id": 39018920, "venue": "iclr2024", "title": "Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How", "status": "Oral", "keywords": "Finetuning;pretrained model hubs;transfer learning;hyperparameter optimization;meta-learning", "tldr": "We learn to jointly and efficiently select pretrained models to finetune and their hyperparameters.", "abstract": "With the ever-increasing number of pretrained models, machine learning practitioners are continuously faced with which pretrained model to use, and how to finetune it for a new dataset. In this paper, we propose a methodology that jointly searches for the optimal pretrained model and the hyperparameters for finetuning it. Our method transfers knowledge about the performance of many pretrained models with multiple hyperparameter configurations on a series of datasets. To this aim, we evaluated over 20k hyperparameter configurations for finetuning 24 pretrained image classification models on 87 datasets to generate a large-scale meta-dataset. We meta-learn a gray-box performance predictor on the learning curves of this meta-dataset and use it for fast hyperparameter optimization on new datasets. We empirically demonstrate that our resulting approach can quickly select an accurate pretrained model for a new dataset together with its optimal hyperparameters.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/17599"} +{"video_file": "ttXg3SKAg5_39018881.mp4", "openreview_id": "ttXg3SKAg5", "slideslive_id": 39018881, "venue": "iclr2024", "title": "Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data", "status": "Poster", "keywords": "multi-modal contrastive learning;captioning;text-to-image generation", "tldr": "Our work explains the geometry of multi-modal contrastive representation space and introduces a three-step method to bridge the modality gap, achieving state-of-the-art results in zero-shot captioning and text-to-image generation.", "abstract": "Building cross-modal applications is challenging due to limited paired multi-modal data. Recent works have shown that leveraging a pre-trained multi-modal contrastive representation space enables cross-modal tasks to be learned from uni-modal data. This is based on the assumption that contrastive optimization makes embeddings from different modalities interchangeable. However, this assumption is under-explored due to the poorly understood geometry of the multi-modal contrastive space, where a modality gap exists. 
In our study, we provide a theoretical explanation of this space's geometry and introduce a three-step method, C^3 (Connect, Collapse, Corrupt), to bridge the modality gap, enhancing the interchangeability of embeddings. Our C^3 method significantly improves cross-modal learning from uni-modal data, achieving state-of-the-art results on zero-shot image / audio / video captioning and text-to-image generation.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/17596"}
+{"video_file": "u3dHl287oB_39019126.mp4", "openreview_id": "u3dHl287oB", "slideslive_id": 39019126, "venue": "iclr2024", "title": "The Joint Effect of Task Similarity and Overparameterization on Catastrophic Forgetting \u2014 An Analytical Model", "status": "Poster", "keywords": "deep learning;continual learning;overparameterization;task similarity;catastrophic forgetting;theory", "tldr": "Analyze the joint effect of task similarity and overparameterization on catastrophic forgetting by deriving an exact analytical expression for forgetting in a two-task random orthogonal transformation problem.", "abstract": "In continual learning, catastrophic forgetting is affected by multiple aspects of the tasks. Previous works have analyzed separately how forgetting is affected by either task similarity or overparameterization. In contrast, our paper examines how task similarity and overparameterization jointly affect forgetting in an analyzable model. Specifically, we focus on two-task continual linear regression, where the second task is a random orthogonal transformation of an arbitrary first task (an abstraction of random permutation tasks). We derive an exact analytical expression for the expected forgetting \u2014 and uncover a nuanced pattern. In highly overparameterized models, intermediate task similarity causes the most forgetting. However, near the interpolation threshold, forgetting decreases monotonically with the expected task similarity. We validate our findings with linear regression on synthetic data, and with neural networks on established permutation task benchmarks.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/17591"}
+{"video_file": "u6imHU4Ebu_39017228.mp4", "openreview_id": "u6imHU4Ebu", "slideslive_id": 39017228, "venue": "iclr2024", "title": "Large Language Models as Generalizable Policies for Embodied Tasks", "status": "Poster", "keywords": "Embodied AI;Reinforcement Learning;Large Language Models;Foundational Models", "tldr": "Reinforcement Learned Policies for multi-task embodied AI problems, when initialized from LLMs, demonstrate strong generalization properties across novel task and novel ways of describing tasks.", "abstract": "We show that large language models (LLMs) can be adapted to be generalizable policies for embodied visual tasks. Our approach, called Large LAnguage model Reinforcement Learning Policy (LLaRP), adapts a pre-trained frozen LLM to take as input text instructions and visual egocentric observations and output actions directly in the environment. Using reinforcement learning, we train LLaRP to see and act solely through environmental interactions. We show that LLaRP is robust to complex paraphrasings of task instructions and can generalize to new tasks that require novel optimal behavior. 
In particular, on 1,000 unseen tasks it achieves 42% success rate, 1.7x the success rate of other common learned baselines or zero-shot applications of LLMs. Finally, to aid the community in studying language conditioned, massively multi-task, embodied AI problems we release a novel benchmark, Language Rearrangement, consisting of 150,000 training and 1,000 testing tasks for language-conditioned rearrangement.", "primary_area": "applications to robotics, autonomy, planning", "site": "https://iclr.cc/virtual/2024/poster/17588"} +{"video_file": "u859gX7ADC_39017226.mp4", "openreview_id": "u859gX7ADC", "slideslive_id": 39017226, "venue": "iclr2024", "title": "Augmenting Transformers with Recursively Composed Multi-grained Representations", "status": "Poster", "keywords": "NLP; recursive neural network; multi-grain representation; compositional representation;span labeling;relation extraction; grammar induction;language understanding", "tldr": "Augmenting transformers with recursively composed multi-grained representations", "abstract": "We present ReCAT, a recursive composition augmented Transformer that is able to explicitly model hierarchical syntactic structures of raw texts without relying on gold trees during both learning and inference. Existing research along this line restricts data to follow a hierarchical tree structure and thus lacks inter-span communications. To overcome the problem, we propose a novel contextual inside-outside (CIO) layer that learns contextualized representations of spans through bottom-up and top-down passes, where a bottom-up pass forms representations of high-level spans by composing low-level spans, while a top-down pass combines information inside and outside a span. By stacking several CIO layers between the embedding layer and the attention layers in Transformer, the ReCAT model can perform both deep intra-span and deep inter-span interactions, and thus generate multi-grained representations fully contextualized with other spans. Moreover, the CIO layers can be jointly pre-trained with Transformers, making ReCAT enjoy scaling ability, strong performance, and interpretability at the same time. We conduct experiments on various sentence-level and span-level tasks. Evaluation results indicate that ReCAT can significantly outperform vanilla Transformer models on all span-level tasks and recursive models on natural language inference tasks. More interestingly, the hierarchical structures induced by ReCAT exhibit strong consistency with human-annotated syntactic trees, indicating good interpretability brought by the CIO layers.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/17586"} +{"video_file": "uKB4cFNQFg_39017219.mp4", "openreview_id": "uKB4cFNQFg", "slideslive_id": 39017219, "venue": "iclr2024", "title": "BEND: Benchmarking DNA Language Models on Biologically Meaningful Tasks", "status": "Poster", "keywords": "Biological sequence analysis;enhancer annotation;gene finding;gene annotation;Language model;genome modelling;benchmark;LLM;embeddings;representations;DNA", "tldr": "A dataset and downstream tasks for benchmarking emerging DNA language models with realistic and biologically meaningful tasks", "abstract": "The genome sequence contains the blueprint for governing cellular processes. 
While the availability of genomes has vastly increased over the last decades, experimental annotation of the various functional, non-coding and regulatory elements encoded in the DNA sequence remains both expensive and challenging. This has sparked interest in unsupervised language modeling of genomic DNA, a paradigm that has seen great success for protein sequence data. Although various DNA language models have been proposed, evaluation tasks often differ between individual works, and might not fully recapitulate the fundamental challenges of genome annotation, including the length, scale and sparsity of the data. In this study, we introduce BEND, a BENchmark for DNA language models, featuring a collection of realistic and biologically meaningful downstream tasks defined on the human genome. We find that embeddings from current DNA LMs can approach performance of expert methods on some tasks, but only capture limited information about long-range features. BEND is available at https://github.com/frederikkemarin/BEND.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/17578"}
+{"video_file": "uNrFpDPMyo_39017217.mp4", "openreview_id": "uNrFpDPMyo", "slideslive_id": 39017217, "venue": "iclr2024", "title": "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs", "status": "Oral", "keywords": "Large Language Model;Efficient Inference;Generative Inference;Key-Value Cache", "tldr": "We introduce adaptive KV cache compression, a plug-and-play method that reduces the memory footprint of generative inference for Large Language Models (LLMs) and accelerates its generation throughput.", "abstract": "In this study, we introduce adaptive KV cache compression, a plug-and-play method that reduces the memory footprint of generative inference for Large Language Models (LLMs). Different from the conventional KV cache that retains key and value vectors for all context tokens, we conduct targeted profiling to discern the intrinsic structure of attention modules. Based on the recognized structure, we then construct the KV cache in an adaptive manner: evicting long-range contexts on attention heads emphasizing local contexts, discarding non-special tokens on attention heads centered on special tokens, and only employing the standard KV cache for attention heads that broadly attend to all tokens. Moreover, with the lightweight attention profiling used to guide the construction of the adaptive KV cache, FastGen can be deployed without resource-intensive fine-tuning or re-training. In our experiments across various tasks, FastGen demonstrates substantial reduction on GPU memory consumption with negligible generation quality loss. We will release our code and the compatible CUDA kernel for reproducibility.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/17575"}
+{"video_file": "ulaUJFd96G_39019096.mp4", "openreview_id": "ulaUJFd96G", "slideslive_id": 39019096, "venue": "iclr2024", "title": "Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs", "status": "Poster", "keywords": "Large language models;Long context handling;Token pruning", "tldr": "We propose a computationally efficient method to extend the context limit of large language models.", "abstract": "Large language models (LLMs) have shown remarkable performance in various natural language processing tasks. 
However, a primary constraint they face is the context limit, i.e., the maximum number of tokens they can process. Previous works have explored architectural changes and modifications in positional encoding to relax the constraint, but they often require expensive training or do not address the computational demands of self-attention. In this paper, we present Hierarchical cOntext MERging (HOMER), a new training-free scheme designed to overcome the limitations. HOMER uses a divide-and-conquer algorithm, dividing long inputs into manageable chunks. Each chunk is then processed collectively, employing a hierarchical strategy that merges adjacent chunks at progressive transformer layers. A token reduction technique precedes each merging, ensuring memory usage efficiency. We also propose an optimized computational order reducing the memory requirement to logarithmically scale with respect to input length, making it especially favorable for environments with tight memory restrictions. Our experiments demonstrate the proposed method's superior performance and memory efficiency, enabling the broader use of LLMs in contexts requiring extended context. Code is available at https://github.com/alinlab/HOMER.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/17565"} +{"video_file": "uqxBTcWRnj_39019139.mp4", "openreview_id": "uqxBTcWRnj", "slideslive_id": 39019139, "venue": "iclr2024", "title": "Bridging Neural and Symbolic Representations with Transitional Dictionary Learning", "status": "Poster", "keywords": "Unsupervised Learning;Compositional representation;neural-symbolic learning", "tldr": "We propose Transitional Dictionary Learning to learn symbolic knowledge in representation. Experiments on abstract objects show our method largely outperforms unsupervised segmentation baselines and proposed metrics align well with human evaluations.", "abstract": "This paper introduces a novel Transitional Dictionary Learning (TDL) framework that can implicitly learn symbolic knowledge, such as visual parts and relations, by reconstructing the input as a combination of parts with implicit relations. We propose a game-theoretic diffusion model to decompose the input into visual parts using the dictionaries learned by the Expectation Maximization (EM) algorithm, implemented as the online prototype clustering, based on the decomposition results. Additionally, two metrics, clustering information gain, and heuristic shape score are proposed to evaluate the model. Experiments are conducted on three abstract compositional visual object datasets, which require the model to utilize the compositionality of data instead of simply exploiting visual features. Then, three tasks on symbol grounding to predefined classes of parts and relations, as well as transfer learning to unseen classes, followed by a human evaluation, were carried out on these datasets. The results show that the proposed method discovers compositional patterns, which significantly outperforms the state-of-the-art unsupervised part segmentation methods that rely on visual features from pre-trained backbones. 
Furthermore, the proposed metrics are consistent with human evaluations.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/17562"}
+{"video_file": "uvFhCUPjtI_39017431.mp4", "openreview_id": "uvFhCUPjtI", "slideslive_id": 39017431, "venue": "iclr2024", "title": "Beyond Spatio-Temporal Representations: Evolving Fourier Transform for Temporal Graphs", "status": "Poster", "keywords": "Temporal Dynamic Graphs;Spectral Transform;GNN", "tldr": "First work that proposes a concept to transform an evolving temporal graph to its frequency domain; we call it \"Evolving Graph Fourier Transform (EFT)\".", "abstract": "We present the Evolving Graph Fourier Transform (EFT), the first invertible spectral transform that captures evolving representations on temporal graphs. We motivate our work by the inadequacy of existing methods for capturing the evolving graph spectra, which are also computationally expensive due to the temporal aspect along with the graph vertex domain. We view the problem as an optimization over the Laplacian of the continuous time dynamic graph. Additionally, we propose pseudo-spectrum relaxations that decompose the transformation process, making it highly computationally efficient. The EFT method adeptly captures the evolving graph's structural and positional properties, making it effective for downstream tasks on evolving graphs. Hence, as a reference implementation, we develop a simple neural model induced with EFT for capturing evolving graph spectra. We empirically validate our theoretical findings on a number of large-scale and standard temporal graph benchmarks and demonstrate that our model achieves state-of-the-art performance.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/17560"}
+{"video_file": "v1VvCWJAL8_39017427.mp4", "openreview_id": "v1VvCWJAL8", "slideslive_id": 39017427, "venue": "iclr2024", "title": "Towards Characterizing Domain Counterfactuals for Invertible Latent Causal Models", "status": "Poster", "keywords": "counterfactual;domain;causal representation learning", "tldr": "We build generative models by learning latent causal models from data observed from different domains for the purpose of generating domain counterfactuals.", "abstract": "Answering counterfactual queries has important applications such as explainability, robustness, and fairness but is challenging when the causal variables are unobserved and the observations are non-linear mixtures of these latent variables, such as pixels in images. One approach is to recover the latent Structural Causal Model (SCM), which may be infeasible in practice due to requiring strong assumptions, e.g., linearity of the causal mechanisms or perfect atomic interventions. Meanwhile, more practical ML-based approaches using naive domain translation models to generate counterfactual samples lack theoretical grounding and may construct invalid counterfactuals. In this work, we strive to strike a balance between practicality and theoretical guarantees by analyzing a specific type of causal query called domain counterfactuals, which hypothesizes what a sample would have looked like if it had been generated in a different domain (or environment). We show that recovering the latent SCM is unnecessary for estimating domain counterfactuals, thereby sidestepping some of the theoretic challenges. 
By assuming invertibility and sparsity of intervention, we prove domain counterfactual estimation error can be bounded by a data fit term and intervention sparsity term. Building upon our theoretical results, we develop a theoretically grounded practical algorithm that simplifies the modeling process to generative model estimation under autoregressive and shared parameter constraints that enforce intervention sparsity. Finally, we show an improvement in counterfactual estimation over baseline methods through extensive simulated and image-based experiments.", "primary_area": "causal reasoning", "site": "https://iclr.cc/virtual/2024/poster/17554"} +{"video_file": "v3K5TVP8kZ_39017426.mp4", "openreview_id": "v3K5TVP8kZ", "slideslive_id": 39017426, "venue": "iclr2024", "title": "AutomaTikZ: Text-Guided Synthesis of Scientific Vector Graphics with TikZ", "status": "Poster", "keywords": "Vector Graphics Generation;Code Generation;Science Generation;TikZ;Text-to-Image", "tldr": "We train large language models on TikZ code, conditioned on captions, to automatically generate scientific vector graphics.", "abstract": "Generating bitmap graphics from text has gained considerable attention, yet for scientific figures, vector graphics are often preferred. Given that vector graphics are typically encoded using low-level graphics primitives, generating them directly is difficult. To address this, we propose the use of TikZ, a well-known abstract graphics language that can be compiled to vector graphics, as an intermediate representation of scientific figures. TikZ offers human-oriented, high-level commands, thereby facilitating conditional language modeling with any large language model. To this end, we introduce DaTikZ the first large-scale TikZ dataset, consisting of 120k TikZ drawings aligned with captions. We fine-tune LLaMA on DaTikZ, as well as our new model CLiMA, which augments LLaMA with multimodal CLIP embeddings. In both human and automatic evaluation, CLiMA and LLaMA outperform commercial GPT-4 and Claude 2 in terms of similarity to human-created figures, with CLiMA additionally improving text-image alignment. Our detailed analysis shows that all models generalize well and are not susceptible to memorization. GPT-4 and Claude 2, however, tend to generate more simplistic figures compared to both humans and our models. We make our framework, AutomaTikZ, along with model weights and datasets, publicly available.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/17553"} +{"video_file": "v3XXtxWKi6_39018858.mp4", "openreview_id": "v3XXtxWKi6", "slideslive_id": 39018858, "venue": "iclr2024", "title": "RLCD: Reinforcement Learning from Contrastive Distillation for LM Alignment", "status": "Poster", "keywords": "Language Model;RLHF;Alignment;Instruction Tuning", "tldr": "We propose a new method for simulating preference data in RLHF alignment pipelines based on generating preference pairs from two contrasting prompts, with strong downstream performance on three diverse alignment tasks and multiple LLaMA model scales.", "abstract": "We propose Reinforcement Learning from Contrastive Distillation (RLCD), a method for aligning language models to follow principles expressed in natural language (e.g., to be more harmless) without using human feedback. RLCD creates preference pairs from two contrasting model outputs, one using a positive prompt designed to encourage following the given principles, and one using a negative prompt designed to encourage violating them. 
Using two different prompts causes model outputs to be more differentiated on average, resulting in cleaner preference labels in the absence of human annotations. We then use the preference pairs to train a preference model, which is in turn used to improve a base unaligned language model via reinforcement learning. Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context distillation (Huang et al., 2022) baselines across three diverse alignment tasks\u2014harmlessness, helpfulness, and story outline generation\u2014and when using both 7B and 30B model scales for simulating preference data", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/17552"} +{"video_file": "vE5MyzpP92_39017060.mp4", "openreview_id": "vE5MyzpP92", "slideslive_id": 39017060, "venue": "iclr2024", "title": "Threshold-Consistent Margin Loss for Open-World Deep Metric Learning", "status": "Poster", "keywords": "Deep metric learning;Open-world visual recognition;Threshold consistency", "tldr": "Quantify the notion of threshold inconsistency in deep metric learning through a novel variance-based metric, and introduce a simple yet effective regularization loss to improve threshold consistency for DML models.", "abstract": "Existing losses used in deep metric learning (DML) for image retrieval often lead to highly non-uniform intra-class and inter-class representation structures across test classes and data distributions. When combined with the common practice of using a fixed threshold to declare a match, this gives rise to significant performance variations in terms of false accept rate (FAR) and false reject rate (FRR) across test classes and data distributions. We define this issue in DML as threshold inconsistency. In real-world applications, such inconsistency often complicates the threshold selection process when deploying large-scale image retrieval systems. To measure this inconsistency, we propose a novel variance-based metric called Operating-Point-Inconsistency-Score (OPIS) that quantifies the variance in the operating characteristics across classes. Using the OPIS metric, we find that achieving high accuracy levels in a DML model does not automatically guarantee threshold consistency. In fact, our investigation reveals a Pareto frontier in the high-accuracy regime, where existing methods to improve accuracy often lead to degradation in threshold consistency. To address this trade-off, we introduce the Threshold-Consistent Margin (TCM) loss, a simple yet effective regularization technique that promotes uniformity in representation structures across classes by selectively penalizing hard sample pairs. 
Large-scale experiments demonstrate TCM's effectiveness in enhancing threshold consistency while preserving accuracy, simplifying the threshold selection process in practical DML settings.", "primary_area": "metric learning, kernel learning, and sparse coding", "site": "https://iclr.cc/virtual/2024/poster/17544"} +{"video_file": "vEfmVS5ywF_39019276.mp4", "openreview_id": "vEfmVS5ywF", "slideslive_id": 39019276, "venue": "iclr2024", "title": "Learning in reverse causal strategic environments with ramifications on two sided markets", "status": "Poster", "keywords": "Strategic Classification;Performative Prediction;Labor Market", "tldr": "We develop and study an example of performative prediction that is applicable to economic models of labor markets.", "abstract": "Motivated by equilibrium models of labor markets, we develop a formulation of causal strategic classification in which strategic agents can directly manipulate their outcomes. As an application, we consider employers that seek to anticipate the strategic response of a labor force when developing a hiring policy. We show theoretically that employers with performatively optimal hiring policies improve employer reward, labor force skill level, and labor force equity (compared to employers that do not anticipate the strategic labor force response) in the classic Coate-Loury labor market model. Empirically, we show that these desirable properties of performative hiring policies do generalize to our own formulation of a general equilibrium labor market. On the other hand, we also observe that the benefits of performatively optimal hiring policies are brittle in some aspects. We demonstrate that in our formulation a performative employer both harms workers by reducing their aggregate welfare and fails to prevent discrimination when more sophisticated wage and cost structures are introduced.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/17542"} +{"video_file": "vLJcd43U7a_39019032.mp4", "openreview_id": "vLJcd43U7a", "slideslive_id": 39019032, "venue": "iclr2024", "title": "SYMBOL: Generating Flexible Black-Box Optimizers through Symbolic Equation Learning", "status": "Poster", "keywords": "Black-Box Optimization;Meta-Black-Box Optimization;Deep Reinforcement Learning;Symbolic Equation Learning", "tldr": "We propose SYMBOL, a novel framework that promotes the automated discovery of state-of-the-art black-box optimizers through symbolic equation learning.", "abstract": "Recent Meta-learning for Black-Box Optimization (MetaBBO) methods harness neural networks to meta-learn configurations of traditional black-box optimizers. Despite their success, they are inevitably restricted by the limitations of predefined hand-crafted optimizers. In this paper, we present SYMBOL, a novel framework that promotes the automated discovery of black-box optimizers through symbolic equation learning. Specifically, we propose a Symbolic Equation Generator (SEG) that allows closed-form optimization rules to be dynamically generated for specific tasks and optimization steps. Within SYMBOL, we then develop three distinct strategies based on reinforcement learning, so as to meta-learn the SEG efficiently. 
Extensive experiments reveal that the optimizers generated by SYMBOL not only surpass the state-of-the-art BBO and MetaBBO baselines, but also exhibit exceptional zero-shot generalization abilities across entirely unseen tasks with different problem dimensions, population sizes, and optimization horizons. Furthermore, we conduct in-depth analyses of our SYMBOL framework and the optimization rules that it generates, underscoring its desirable flexibility and interpretability.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/17539"}
+{"video_file": "vZZ4hhniJU_39017413.mp4", "openreview_id": "vZZ4hhniJU", "slideslive_id": 39017413, "venue": "iclr2024", "title": "Learning Multi-Agent Communication with Contrastive Learning", "status": "Poster", "keywords": "Multi-Agent Reinforcement Learning;Emergent Communication;Contrastive Learning", "tldr": "A novel approach to learning communication in decentralized MARL based on a multi-view contrastive learning perspective by treating messages as agents' encodings of the state", "abstract": "Communication is a powerful tool for coordination in multi-agent RL. But inducing an effective, common language is a difficult challenge, particularly in the decentralized setting. In this work, we introduce an alternative perspective where communicative messages sent between agents are considered as different incomplete views of the environment state. By examining the relationship between messages sent and received, we propose to learn to communicate using contrastive learning to maximize the mutual information between messages of a given trajectory. In communication-essential environments, our method outperforms previous work in both performance and learning speed. Using qualitative metrics and representation probing, we show that our method induces more symmetric communication and captures global state information from the environment. Overall, we show the power of contrastive learning and the importance of leveraging messages as encodings for effective communication.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/17527"}
+{"video_file": "vePdNU3u6n_39017411.mp4", "openreview_id": "vePdNU3u6n", "slideslive_id": 39017411, "venue": "iclr2024", "title": "Towards Robust and Efficient Cloud-Edge Elastic Model Adaptation via Selective Entropy Distillation", "status": "Poster", "keywords": "model adaptation;cloud-edge model deployment;cloud-edge model collaboration;test-time adaptation", "tldr": "We propose a Cloud-Edge Elastic Model Adaptation (CEMA) paradigm to conduct robust and efficient test-time model adaptation in a collaborative way.", "abstract": "The conventional deep learning paradigm often involves training a deep model on a server and then deploying the model or its distilled ones to resource-limited edge devices. Usually, the models shall remain fixed once deployed (at least for some period) due to the potential high cost of model adaptation for both the server and edge sides. However, in many real-world scenarios, the test environments may change dynamically (known as distribution shifts), which often results in degraded performance. Thus, one has to adapt the edge models promptly to attain promising performance. Moreover, with the increasing data collected at the edge, this paradigm also fails to further adapt the cloud model for better performance. 
To address these, we encounter two primary challenges: 1) the edge model has limited computation power and may only support forward propagation; 2) the data transmission budget between cloud and edge devices is limited in latency-sensitive scenarios. In this paper, we establish a Cloud-Edge Elastic Model Adaptation (CEMA) paradigm in which the edge models only need to perform forward propagation and the edge models can be adapted online. In our CEMA, to reduce the communication burden, we devise two criteria to exclude unnecessary samples from uploading to the cloud, i.e., dynamic unreliable and low-informative sample exclusion. Based on the uploaded samples, we update and distribute the affine parameters of normalization layers by distilling from the stronger foundation model to the edge model with a sample replay strategy. Extensive experimental results on ImageNet-C and ImageNet-R verify the effectiveness of our CEMA.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/17525"} +{"video_file": "viftsX50Rt_39017409.mp4", "openreview_id": "viftsX50Rt", "slideslive_id": 39017409, "venue": "iclr2024", "title": "General Graph Random Features", "status": "Poster", "keywords": "Graphs;kernels;random walks;Laplacian;adjacency matrix;kernel learning;ordinary differential equation;neural network", "tldr": "A novel random walk-based algorithm for unbiased estimation of arbitrary functions of a weighted adjacency matrix", "abstract": "We propose a novel random walk-based algorithm for unbiased estimation of arbitrary functions of a weighted adjacency matrix, coined general graph random features (g-GRFs). This includes many of the most popular examples of kernels defined on the nodes of a graph. Our algorithm enjoys subquadratic time complexity with respect to the number of nodes, overcoming the notoriously prohibitive cubic scaling of exact graph kernel evaluation. It can also be trivially distributed across machines, permitting learning on much larger networks. At the heart of the algorithm is a modulation function which upweights or downweights the contribution from different random walks depending on their lengths. We show that by parameterising it with a neural network we can obtain g-GRFs that give higher-quality kernel estimates or perform efficient, scalable kernel learning. We provide robust theoretical analysis and support our findings with experiments including pointwise estimation of fixed graph kernels, solving non-homogeneous graph ordinary differential equations, node clustering and kernel regression on triangular meshes.", "primary_area": "learning on graphs and other geometries & topologies", "site": "https://iclr.cc/virtual/2024/poster/17523"} +{"video_file": "w1JanwReU6_39017399.mp4", "openreview_id": "w1JanwReU6", "slideslive_id": 39017399, "venue": "iclr2024", "title": "Are Models Biased on Text without Gender-related Language?", "status": "Poster", "keywords": "Large language models;bias evaluation;gender bias;gender co-occurring words;gender-invariant;pretraining data statistics", "tldr": "\"Do large language models still favor one gender over the other in non-stereotypical settings? 
We study this question in the gender pronoun setting and show that, surprisingly, 20 popular LLMs still exhibit gender bias in 50-90% of the examples\"", "abstract": "Gender bias research has been pivotal in revealing undesirable behaviors in large language models, exposing serious gender stereotypes associated with occupations, and emotions. A key observation in prior work is that models reinforce stereotypes as a consequence of the gendered correlations that are present in the training data. In this paper, we focus on bias where the effect from training data is unclear, and instead address the question: Do language models still exhibit gender bias in non-stereotypical settings? To do so, we introduce UnStereoEval (USE), a novel framework tailored for investigating gender bias in stereotype-free scenarios. USE defines a sentence-level score based on pretraining data statistics to determine if the sentence contain minimal word-gender associations. To systematically benchmark the fairness of popular language models in stereotype-free scenarios, we utilize USE to automatically generate benchmarks without any gender-related language. By leveraging USE's sentence-level score, we also repurpose prior gender bias benchmarks (Winobias and Winogender) for non-stereotypical evaluation. Surprisingly, we find low fairness across all 28 tested models. Concretely, models demonstrate fair behavior in only 9%-41% of stereotype-free sentences, suggesting that bias does not solely stem from the presence of gender-related words. These results raise important questions about where underlying model biases come from and highlight the need for more systematic and comprehensive bias evaluation. We release the full dataset and code at ucinlp.github.io/unstereo-eval.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/17511"} +{"video_file": "wG12xUSqrI_39018595.mp4", "openreview_id": "wG12xUSqrI", "slideslive_id": 39018595, "venue": "iclr2024", "title": "Score-based generative models break the curse of dimensionality in learning a family of sub-Gaussian distributions", "status": "Poster", "keywords": "score-based generative models;Barron space;curse of dimensionality", "tldr": "Score-based generative models can estimate an exponential tilting of the Gaussian distribution without the curse of dimensionality.", "abstract": "While score-based generative models (SGMs) have achieved remarkable successes in enormous image generation tasks, their mathematical foundations are still limited. In this paper, we analyze the approximation and generalization of SGMs in learning a family of sub-Gaussian probability distributions. We introduce a measure of complexity for probability distributions in terms of their relative density with respect to the standard Gaussian measure. We prove that if the log-relative density can be locally approximated by a neural network whose parameters can be suitably bounded, then the distribution generated by empirical score matching approximates the target distribution in total variation with a dimension-independent rate. We illustrate our theory through examples, which include certain mixtures of Gaussians. 
An essential ingredient of our proof is to derive a dimension-free deep network approximation rate for the true score function associated to the forward process, which is interesting in its own right.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/17500"} +{"video_file": "wISvONp3Kq_39018593.mp4", "openreview_id": "wISvONp3Kq", "slideslive_id": 39018593, "venue": "iclr2024", "title": "Learning No-Regret Sparse Generalized Linear Models with Varying Observation(s)", "status": "Spotlight", "keywords": "Generalized Linear Models;Learning with Varying Data;Differential Equations", "tldr": "Add:", "abstract": "Generalized Linear Models (GLMs) encompass a wide array of regression and classification models, where prediction is a function of a linear combination of the input variables. Often in real-world scenarios, a number of observations would be added into or removed from the existing training dataset, necessitating the development of learning systems that can efficiently train optimal models with varying observations in an online (sequential) manner instead of retraining from scratch. Despite the significance of data-varying scenarios, most existing approaches to sparse GLMs concentrate on offline batch updates, leaving online solutions largely underexplored. In this work, we present the first algorithm without compromising accuracy for GLMs regularized by sparsity-enforcing penalties trained on varying observations. Our methodology is capable of handling the addition and deletion of observations simultaneously, while adaptively updating data-dependent regularization parameters to ensure the best statistical performance. Specifically, we recast sparse GLMs as a bilevel optimization objective upon varying observations and characterize it as an explicit gradient flow in the underlying space for the inner and outer subproblems we are optimizing over, respectively. We further derive a set of rules to ensure a proper transition at regions of non-smoothness, and establish the guarantees of theoretical consistency and finite convergence. Encouraging results are exhibited on real-world benchmarks.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/17497"} +{"video_file": "wYvuY60SdD_39018588.mp4", "openreview_id": "wYvuY60SdD", "slideslive_id": 39018588, "venue": "iclr2024", "title": "Mixture of Weak and Strong Experts on Graphs", "status": "Poster", "keywords": "Graph Neural Networks;Mixture of experts;Node classification", "tldr": "We propose a system to combine a weak MLP expert and a strong GNN expert, so that the powerful GNN model can be better optimized by decoupling the feature and structure modalities of the graph.", "abstract": "Realistic graphs contain both (1) rich self-features of nodes and (2) informative structures of neighborhoods, jointly handled by a Graph Neural Network (GNN) in the typical setup. We propose to decouple the two modalities by Mixture of weak and strong experts (Mowst), where the weak expert is a light-weight Multi-layer Perceptron (MLP), and the strong expert is an off-the-shelf GNN. To adapt the experts' collaboration to different target nodes, we propose a \"confidence\" mechanism based on the dispersion of the weak expert's prediction logits. The strong expert is conditionally activated in the low-confidence region when either the node's classification relies on neighborhood information, or the weak expert has low model quality. 
We reveal interesting training dynamics by analyzing the influence of the confidence function on loss: our training algorithm encourages the specialization of each expert by effectively generating soft splitting of the graph. In addition, our \"confidence\" design imposes a desirable bias toward the strong expert to benefit from GNN's better generalization capability. Mowst is easy to optimize and achieves strong expressive power, with a computation cost comparable to a single GNN. Empirically, Mowst on 4 backbone GNN architectures show significant accuracy improvement on 6 standard node classification benchmarks, including both homophilous and heterophilous graphs (https://github.com/facebookresearch/mowst-gnn).", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/17490"}
+{"video_file": "wg8NPfeMF9_39018583.mp4", "openreview_id": "wg8NPfeMF9", "slideslive_id": 39018583, "venue": "iclr2024", "title": "$\\texttt{NAISR}$: A 3D Neural Additive Model for Interpretable Shape Representation", "status": "Spotlight", "keywords": "Shape Modeling;Medical Shape Analysis;Interpretable Representation;AI4Science", "tldr": "We propose NAISR, the first shape representation method to investigate an atlas-based representation of 3D shapes in a deformable, disentangleable, transferable and evolvable way.", "abstract": "Deep implicit functions (DIFs) have emerged as a powerful paradigm for many computer vision tasks such as 3D shape reconstruction, generation, registration, completion, editing, and understanding. However, given a set of 3D shapes with associated covariates there is at present no shape representation method which allows to precisely represent the shapes while capturing the individual dependencies on each covariate. Such a method would be of high utility to researchers to discover knowledge hidden in a population of shapes. For scientific shape discovery purpose, we propose a 3D Neural Additive Model for Interpretable Shape Representation (NAISR) which describes individual shapes by deforming a shape atlas in accordance to the effect of disentangled covariates. Our approach captures shape population trends and allows for patient-specific predictions through shape transfer. NAISR is the first approach to combine the benefits of deep implicit shape representations with an atlas deforming according to specified covariates. We evaluate NAISR with respect to shape reconstruction, shape disentanglement, shape evolution, and shape transfer on three datasets, i.e. 1) Starman, a simulated 2D shape dataset; 2) ADNI hippocampus 3D shape dataset; 3) pediatric airway 3D shape dataset. Our experiments demonstrate that NAISR achieves competitive shape reconstruction performance while retaining interpretability. 
Our code is available at https://github.com/uncbiag/NAISR.", "primary_area": "visualization or interpretation of learned representations", "site": "https://iclr.cc/virtual/2024/poster/17483"} +{"video_file": "wsRXwlwx4w_39018612.mp4", "openreview_id": "wsRXwlwx4w", "slideslive_id": 39018612, "venue": "iclr2024", "title": "Consistency-guided Prompt Learning for Vision-Language Models", "status": "Poster", "keywords": "Zero-shot Learning;Few-shot Learning;Prompt Learning;Vision-language Model", "tldr": "We propose a new prompt-tuning method that enforces a consistency constraint to learn a new task in the few-shot setting without losing the zero-shot generalizability of the foundation model.", "abstract": "We propose Consistency-guided Prompt learning (CoPrompt), a new fine-tuning method for vision-language models. Our approach improves the generalization of large foundation models when fine-tuned on downstream tasks in a few-shot setting. The basic idea of CoPrompt is to enforce a consistency constraint in the prediction of the trainable and pre-trained models to prevent overfitting on the downstream task. Additionally, we introduce the following two components into our consistency constraint to further boost the performance: enforcing consistency on two perturbed inputs and combining two dominant paradigms of tuning, prompting and adapter. Enforcing consistency on perturbed input serves to further regularize the consistency constraint, thereby improving generalization. Moreover, the integration of adapters and prompts not only enhances performance on downstream tasks but also offers increased tuning flexibility in both input and output spaces. This facilitates more effective adaptation to downstream tasks in a few-shot learning setting. Experiments show that CoPrompt outperforms existing methods on a range of evaluation suites, including base-to-novel generalization, domain generalization, and cross-dataset evaluation. On generalization, CoPrompt improves the state-of-the-art on zero-shot tasks and the overall harmonic mean over 11 datasets. Detailed ablation studies show the effectiveness of each of the components in CoPrompt. We make our code available at https://github.com/ShuvenduRoy/CoPrompt.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/17475"} +{"video_file": "x1ptaXpOYa_39018574.mp4", "openreview_id": "x1ptaXpOYa", "slideslive_id": 39018574, "venue": "iclr2024", "title": "ADOPD: A Large-Scale Document Page Decomposition Dataset", "status": "Poster", "keywords": "Document Understanding;Dataset;Segmentation;Detection;OCR;Captioning", "tldr": "A Large-Scale Document Page Decomposition Dataset", "abstract": "Research in document image understanding is hindered by limited high-quality document data. To address this, we introduce ADOPD, a comprehensive dataset for document page decomposition. ADOPD stands out with its data-driven approach for document taxonomy discovery during data collection, complemented by dense annotations. Our approach integrates large-scale pretrained models with a human-in-the-loop process to guarantee diversity and balance in the resulting data collection. Leveraging our data-driven document taxonomy, we collect and densely annotate document images, addressing four document image understanding tasks: Doc2Mask, Doc2Box, Doc2Tag, and Doc2Seq. 
Specifically, for each image, the annotations include human-labeled entity masks, text bounding boxes, as well as automatically generated tags and captions that have been manually cleaned. We conduct comprehensive experimental analyses to validate our data and assess the four tasks using various models. We envision ADOPD as a foundational dataset with the potential to drive future research in document understanding.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/17472"} +{"video_file": "x7d1qXEn1e_39018570.mp4", "openreview_id": "x7d1qXEn1e", "slideslive_id": 39018570, "venue": "iclr2024", "title": "A Restoration Network as an Implicit Prior", "status": "Poster", "keywords": "computational imaging;inverse problems;deep learning;plug-and-play priors", "tldr": "A new method and theory for using deep restoration networks as implicit priors for solving inverse problems.", "abstract": "Image denoisers have been shown to be powerful priors for solving inverse problems in imaging. In this work, we introduce a generalization of these methods that allows any image restoration network to be used as an implicit prior. The proposed method uses priors specified by deep neural networks pre-trained as general restoration operators. The method provides a principled approach for adapting state-of-the-art restoration models for other inverse problems. Our theoretical result analyzes its convergence to a stationary point of a global functional associated with the restoration operator. Numerical results show that the method using a super-resolution prior achieves state-of-the-art performance both quantitatively and qualitatively. Overall, this work offers a step forward for solving inverse problems by enabling the use of powerful pre-trained restoration models as priors.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/17467"} +{"video_file": "xHmCdSArUC_39018566.mp4", "openreview_id": "xHmCdSArUC", "slideslive_id": 39018566, "venue": "iclr2024", "title": "Correlated Noise Provably Beats Independent Noise for Differentially Private Learning", "status": "Poster", "keywords": "differentially private optimization;stochastic gradient descent;linear regression theory;private deep learning", "tldr": "We prove the benefits of correlated noise for DP optimization in linear regression. Using the theory, we derive an orders-of-magnitude more efficient correlated noise generation algorithm that nearly matches SOTA for private deep learning.", "abstract": "Differentially private learning algorithms inject noise into the learning process. While the most common private learning algorithm, DP-SGD, adds independent Gaussian noise in each iteration, recent work on matrix factorization mechanisms has shown empirically that introducing correlations in the noise can greatly improve their utility. We characterize the asymptotic learning utility for any choice of the correlation function, giving precise analytical bounds for linear regression and as the solution to a convex program for general convex functions. We show, using these bounds, how correlated noise provably improves upon vanilla DP-SGD as a function of problem parameters such as the effective dimension and condition number. Moreover, our analytical expression for the near-optimal correlation function circumvents the cubic complexity of the semi-definite program used to optimize the noise correlation matrix in previous work. 
We validate these theoretical results with experiments on private deep learning. Our work matches or outperforms prior work while being efficient both in terms of computation and memory.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/17459"} +{"video_file": "xJ5N8qrEPl_39017064.mp4", "openreview_id": "xJ5N8qrEPl", "slideslive_id": 39017064, "venue": "iclr2024", "title": "Constrained Bi-Level Optimization: Proximal Lagrangian Value Function Approach and Hessian-free Algorithm", "status": "Spotlight", "keywords": "Bi-level Optimization;Constrained Optimization;Hessian-free;Single-loop;Value Function;Convergence Analysis", "tldr": "This paper presents a new approach and algorithm for solving a class of constrained Bi-Level Optimization problems in which the lower-level problem involves constraints coupling both upper-level and lower-level variables.", "abstract": "This paper presents a new approach and algorithm for solving a class of constrained Bi-Level Optimization (BLO) problems in which the lower-level problem involves constraints coupling both upper-level and lower-level variables. Such problems have recently gained significant attention due to their broad applicability in machine learning. However, conventional gradient-based methods unavoidably rely on computationally intensive calculations related to the Hessian matrix. To address this challenge, we devise a smooth proximal Lagrangian value function to handle the constrained lower-level problem. Utilizing this construct, we introduce a single-level reformulation for constrained BLOs that transforms the original BLO problem into an equivalent optimization problem with smooth constraints. Enabled by this reformulation, we develop a Hessian-free gradient-based algorithm\u2014termed proximal Lagrangian Value function-based Hessian-free Bi-level Algorithm (LV-HBA)\u2014that is straightforward to implement in a single loop manner. Consequently, LV-HBA is especially well-suited for machine learning applications. Furthermore, we offer non-asymptotic convergence analysis for LV-HBA, eliminating the need for traditional strong convexity assumptions for the lower-level problem while also being capable of accommodating non-singleton scenarios. Empirical results substantiate the algorithm's superior practical performance.", "primary_area": "optimization", "site": "https://iclr.cc/virtual/2024/poster/17456"} +{"video_file": "xJbsmB8UMx_39018564.mp4", "openreview_id": "xJbsmB8UMx", "slideslive_id": 39018564, "venue": "iclr2024", "title": "SALMON: Self-Alignment with Instructable Reward Models", "status": "Poster", "keywords": "AI Alignment;Large Language Models;Scalable Oversight", "tldr": "We introduce a new AI alignment paradigm where an instructable reward model is trained to effectively and flexibly align language models with human values and intentions.", "abstract": "Supervised Fine-Tuning (SFT) on response demonstrations combined with Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful paradigm for aligning LLM-based AI agents. However, a significant limitation of such an approach is its dependency on high-quality human annotations, making its application to intricate tasks challenging due to difficulties in obtaining consistent response demonstrations and in-distribution response preferences. 
This paper presents a novel approach, namely SALMON, to align base language models with minimal human supervision, using only a small set of human-defined principles, yet achieving superior performance. Central to our approach is an instructable reward model. Trained on synthetic preference data, this model can generate reward scores based on arbitrary human-defined principles. By merely adjusting these principles during the RL training phase, we gain full control over the preferences with the instructable reward model, subsequently influencing the behavior of the RL-trained policy models, and reducing the reliance on the collection of online human preferences. Applying our method to the LLaMA-2-70b base language model, we developed an AI assistant named Dromedary-2. With only 6 exemplars for in-context learning and 31 human-defined principles, Dromedary-2 significantly surpasses the performance of several state-of-the-art AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have open-sourced the code and model weights to encourage further research into aligning LLM-based AI agents with enhanced supervision efficiency, improved controllability, and scalable oversight.", "primary_area": "generative models", "site": "https://iclr.cc/virtual/2024/poster/17454"}
+{"video_file": "xUzWmFdglP_39018562.mp4", "openreview_id": "xUzWmFdglP", "slideslive_id": 39018562, "venue": "iclr2024", "title": "Privacy Amplification for Matrix Mechanisms", "status": "Spotlight", "keywords": "differential privacy;privacy amplification;matrix mechanism", "tldr": "We propose an algorithm for computing privacy guarantees of the matrix mechanism with privacy amplification.", "abstract": "Privacy amplification exploits randomness in data selection to provide tighter differential privacy (DP) guarantees. This analysis is key to DP-SGD's success in machine learning (ML), but, is not readily applicable to the newer state-of-the-art (SOTA) algorithms. This is because these algorithms, known as DP-FTRL, use the matrix mechanism to add correlated noise instead of independent noise as in DP-SGD.\nIn this paper, we propose \"MMCC\" (matrix mechanism conditional composition), the first algorithm to analyze privacy amplification via sampling for any generic matrix mechanism. MMCC is nearly tight in that it approaches a lower bound as \u03f5 \u2192 0. To analyze correlated outputs in MMCC, we prove that they can be analyzed as if they were independent, by conditioning them on prior outputs. Our \"conditional composition theorem\" has broad utility: we use it to show that the noise added to binary-tree-DP-FTRL can asymptotically match the noise added to DP-SGD with amplification. Our algorithm also has practical empirical utility. 
We show that amplification leads to significant improvement in the privacy/utility trade-offs for DP-FTRL style algorithms for standard benchmark tasks.", "primary_area": "societal considerations including fairness, safety, privacy", "site": "https://iclr.cc/virtual/2024/poster/17452"} +{"video_file": "xZDWO0oejD_39018561.mp4", "openreview_id": "xZDWO0oejD", "slideslive_id": 39018561, "venue": "iclr2024", "title": "Tell Your Model Where to Attend: Post-hoc Attention Steering for LLMs", "status": "Poster", "keywords": "Attention steering;Post-hoc;Contextual emphasizing", "tldr": "We propose PASTA, a post-hoc attention steering approach to enable users to hightlight specific information for LLMs and steers models to interpret the user-specified texts like human readers.", "abstract": "In human-written articles, we often leverage the subtleties of text style, such as bold and italics, to guide the attention of readers. These textual emphases are vital for the readers to grasp the conveyed information. When interacting with large language models (LLMs), we have a similar need -- steering the model to pay closer attention to user-specified information, e.g., an instruction. Existing methods, however, are constrained to process plain text and do not support such a mechanism. This motivates us to introduce PASTA -- Post-hoc Attention STeering Approach, a method that allows LLMs to read text with user-specified emphasis marks. To this end, PASTA identifies a small subset of attention heads and applies precise attention reweighting on them, directing the model attention to user-specified parts. Like prompting, PASTA is applied at inference time and does not require changing any model parameters. Experiments demonstrate that PASTA can substantially enhance an LLM's ability to follow user instructions or integrate new knowledge from user inputs, leading to a significant performance improvement on a variety of tasks, e.g., an average accuracy improvement of 22% for LLAMA-7B. Our code is publicly available at https://github.com/QingruZhang/PASTA .", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/17451"} +{"video_file": "xcMmebCT7s_39019083.mp4", "openreview_id": "xcMmebCT7s", "slideslive_id": 39019083, "venue": "iclr2024", "title": "Learning to design protein-protein interactions with enhanced generalization", "status": "Poster", "keywords": "protein-protein interactions;protein design;generalization;self-supervised learning;equivariant 3D representations", "tldr": "We introduce PPIRef dataset and PPIformer model to predict mutation effects on protein-protein interactions, achieving state-of-the-art performance on standard data and practical case studies in SARS-CoV-2 antibody design and thrombolytic engineering", "abstract": "Discovering mutations enhancing protein-protein interactions (PPIs) is critical for advancing biomedical research and developing improved therapeutics. While machine learning approaches have substantially advanced the field, they often struggle to generalize beyond training data in practical scenarios. The contributions of this work are three-fold. First, we construct PPIRef, the largest and non-redundant dataset of 3D protein-protein interactions, enabling effective large-scale learning. Second, we leverage the PPIRef dataset to pre-train PPIformer, a new SE(3)-equivariant model generalizing across diverse protein-binder variants. 
We fine-tune PPIformer to predict effects of mutations on protein-protein interactions via a thermodynamically motivated adjustment of the pre-training loss function. Finally, we demonstrate the enhanced generalization of our new PPIformer approach by outperforming other state-of-the-art methods on new, non-leaking splits of standard labeled PPI mutational data and independent case studies optimizing a human antibody against SARS-CoV-2 and increasing the thrombolytic activity of staphylokinase.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17449"} +{"video_file": "xkXdE81mOK_39019164.mp4", "openreview_id": "xkXdE81mOK", "slideslive_id": 39019164, "venue": "iclr2024", "title": "Federated Recommendation with Additive Personalization", "status": "Poster", "keywords": "Federated Learning;Federated Recommendation System", "tldr": "We present a novel federated recommendation system, named FedRAP, incorporating additive personalization to enhance the performance of recommendation systems in a federated setting.", "abstract": "Building recommendation systems via federated learning (FL) is a new emerging challenge for next-generation Internet service. Existing FL models share item embedding across clients while keeping the user embedding private and local on the client side. However, identical item embedding cannot capture users' individual differences in perceiving the same item and may lead to poor personalization. Moreover, dense item embedding in FL results in expensive communication costs and latency. To address these challenges, we propose Federated Recommendation with Additive Personalization (FedRAP), which learns a global view of items via FL and a personalized view locally on each user. FedRAP encourages a sparse global view to save FL's communication cost and enforces the two views to be complementary via two regularizers. We propose an effective curriculum to learn the local and global views progressively with increasing regularization weights. To produce recommendations for a user, FedRAP adds the two views together to obtain a personalized item embedding. FedRAP achieves the best performance in the FL setting on multiple benchmarks. It outperforms recent federated recommendation methods and several ablation study baselines. Our code is available at https://github.com/mtics/FedRAP.", "primary_area": "general machine learning (i.e., none of the above)", "site": "https://iclr.cc/virtual/2024/poster/17446"} +{"video_file": "xt9Bu66rqv_39018557.mp4", "openreview_id": "xt9Bu66rqv", "slideslive_id": 39018557, "venue": "iclr2024", "title": "Dual RL: Unification and New Methods for Reinforcement and Imitation Learning", "status": "Spotlight", "keywords": "Robot Learning;Offline Imitation Learning;Offline Reinforcement Learning;Deep Reinforcement Learning", "tldr": "A unification of RL and IL methods through the lens of duality that allows us to propose new methods for discriminator-free imitation learning and stable offline reinforcement learning.", "abstract": "The goal of reinforcement learning (RL) is to find a policy that maximizes the expected cumulative return. It has been shown that this objective can be represented as an optimization problem of state-action visitation distribution under linear constraints. The dual problem of this formulation, which we refer to as dual RL, is unconstrained and easier to optimize. 
In this work, we first cast several state-of-the-art offline RL and offline imitation learning (IL) algorithms as instances of dual RL approaches with shared structures. Such unification allows us to identify the root cause of the shortcomings of prior methods. For offline IL, our analysis shows that prior methods are based on a restrictive coverage assumption that greatly limits their performance in practice. To fix this limitation, we propose a new discriminator-free method ReCOIL that learns to imitate from arbitrary off-policy data to obtain near-expert performance. For offline RL, our analysis frames a recent offline RL method XQL in the dual framework, and we further propose a new method\nf\n-DVL that provides alternative choices to the Gumbel regression loss that fixes the known training instability issue of XQL. The performance improvements by both of our proposed methods, ReCOIL and\nf\n-DVL, in IL and RL are validated on an extensive suite of simulated robot locomotion and manipulation tasks.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/17440"} +{"video_file": "xtOydkE1Ku_39019176.mp4", "openreview_id": "xtOydkE1Ku", "slideslive_id": 39019176, "venue": "iclr2024", "title": "TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series", "status": "Poster", "keywords": "time series;forecasting;probabilistic;multivariate;copula;transformer;density estimation", "tldr": "A flexible model for multivariate probabilistic time series prediction, simplifying the training of attentional copulas, with state-of-the-art accuracy on diverse forecasting tasks, while supporting interpolation and learning from irregular data.", "abstract": "We introduce a new model for multivariate probabilistic time series prediction, designed to flexibly address a range of tasks including forecasting, interpolation, and their combinations. Building on copula theory, we propose a simplified objective for the recently-introduced transformer-based attentional copulas (TACTiS), wherein the number of distributional parameters now scales linearly with the number of variables instead of factorially. The new objective requires the introduction of a training curriculum, which goes hand-in-hand with necessary changes to the original architecture. We show that the resulting model has significantly better training dynamics and achieves state-of-the-art performance across diverse real-world forecasting tasks, while maintaining the flexibility of prior work, such as seamless handling of unaligned and unevenly-sampled time series. Code is made available at https://github.com/ServiceNow/TACTiS.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17439"} +{"video_file": "xuY33XhEGR_39018743.mp4", "openreview_id": "xuY33XhEGR", "slideslive_id": 39018743, "venue": "iclr2024", "title": "ClimODE: Climate and Weather Forecasting with Physics-informed Neural ODEs", "status": "Oral", "keywords": "neural ODE;time-series forecasting;climate prediction;physics-informed ML", "tldr": "We introduce a novel climate and weather modeling approach, inspired by physics, using ODEs that capture underlying inductive biases and allow for uncertainty quantification in predictions.", "abstract": "Climate and weather prediction traditionally relies on complex numerical simulations of atmospheric physics. 
Deep learning approaches, such as transformers, have recently challenged the simulation paradigm with complex network forecasts. However, they often act as data-driven black-box models that neglect the underlying physics and lack uncertainty quantification. We address these limitations with ClimODE, a spatiotemporal continuous-time process that implements a key principle of advection from statistical mechanics, namely, weather changes due to a spatial movement of quantities over time. ClimODE models precise weather evolution with value-conserving dynamics, learning global weather transport as a neural flow, which also enables estimating the uncertainty in predictions. Our approach outperforms existing data-driven methods in global and regional forecasting with an order of magnitude smaller parameterization, establishing a new state of the art.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17438"} +{"video_file": "xyxU99Nutg_39018750.mp4", "openreview_id": "xyxU99Nutg", "slideslive_id": 39018750, "venue": "iclr2024", "title": "Un-Mixing Test-Time Normalization Statistics: Combatting Label Temporal Correlation", "status": "Poster", "keywords": "test-time adaptation;batch normalization;distribution shift", "tldr": "We propose a new test-time normalization layer to combat label temporal correlation.", "abstract": "Recent test-time adaptation methods heavily rely on nuanced adjustments of batch normalization (BN) parameters. However, one critical assumption often goes overlooked: that of independently and identically distributed (i.i.d.) test batches with respect to unknown labels. This oversight leads to skewed BN statistics and undermines the reliability of the model under non-i.i.d. scenarios. To tackle this challenge, this paper presents a novel method termed 'Un-Mixing Test-Time Normalization Statistics' (UnMix-TNS). Our method re-calibrates the statistics for each instance within a test batch by mixing it with multiple distinct statistics components, thus inherently simulating the i.i.d. scenario. The core of this method hinges on a distinctive online unmixing procedure that continuously updates these statistics components by incorporating the most similar instances from new test batches. Remarkably generic in its design, UnMix-TNS seamlessly integrates with a wide range of leading test-time adaptation methods and pre-trained architectures equipped with BN layers. Empirical evaluations corroborate the robustness of UnMix-TNS under varied scenarios\u2014ranging from single to continual and mixed domain shifts, particularly excelling with temporally correlated test data and corrupted non-i.i.d. real-world streams. This adaptability is maintained even with very small batch sizes or single instances. Our results highlight UnMix-TNS's capacity to markedly enhance stability and performance across various benchmarks. 
Our code is publicly available at https://github.com/devavratTomar/unmixtns.", "primary_area": "transfer learning, meta learning, and lifelong learning", "site": "https://iclr.cc/virtual/2024/poster/17431"} +{"video_file": "y21ZO6M86t_39017160.mp4", "openreview_id": "y21ZO6M86t", "slideslive_id": 39017160, "venue": "iclr2024", "title": "PolyGCL: GRAPH CONTRASTIVE LEARNING via Learnable Spectral Polynomial Filters", "status": "Spotlight", "keywords": "Graph Contrastive Learning;Spectral Graph Neural Networks;Polynomial Filter;Heterophilic Graph Representation Learning", "tldr": "We introduce spectral polynomial filters into graph contrastive learning to model heterophilic graphs.", "abstract": "Recently, Graph Contrastive Learning (GCL) has achieved significantly superior performance in self-supervised graph representation learning. However, the existing GCL technique has inherent smooth characteristics because of its low-pass GNN encoder and objective based on homophily assumption, which poses a challenge when applying it to heterophilic graphs. In supervised learning tasks, spectral GNNs with polynomial approximation excel in both homophilic and heterophilic settings by adaptively fitting graph filters of arbitrary shapes. Yet, their applications in unsupervised learning are rarely explored. Based on the above analysis, a natural question arises: Can we incorporate the excellent properties of spectral polynomial filters into graph contrastive learning? In this paper, we address the question by studying the necessity of introducing high-pass information for heterophily from a spectral perspective. We propose PolyGCL, a GCL pipeline that utilizes polynomial filters to achieve contrastive learning between the low-pass and high-pass views. Specifically, PolyGCL utilizes polynomials with learnable filter functions to generate different spectral views and an objective that incorporates high-pass information through a linear combination. We theoretically prove that PolyGCL outperforms previous GCL paradigms when applied to graphs with varying levels of homophily. We conduct extensive experiments on both synthetic and real-world datasets, which demonstrate the promising performance of PolyGCL on homophilic and heterophilic graphs.", "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning", "site": "https://iclr.cc/virtual/2024/poster/17428"} +{"video_file": "yN4Wv17ss3_39018548.mp4", "openreview_id": "yN4Wv17ss3", "slideslive_id": 39018548, "venue": "iclr2024", "title": "Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining", "status": "Poster", "keywords": "transformers;in-context learning;reinforcement learning;learning theory", "tldr": "We prove that transformers can provably implement various reinforcement learning algorithms in context, and learn them through supervised pretraining.", "abstract": "Large transformer models pretrained on offline reinforcement learning datasets have demonstrated remarkable in-context reinforcement learning (ICRL) capabilities, where they can make good decisions when prompted with interaction trajectories from unseen environments. However, when and how transformers can be trained to perform ICRL have not been theoretically well-understood. 
In particular, it is unclear which reinforcement-learning algorithms transformers can perform in context, and how distribution mismatch in offline training data affects the learned algorithms.\nThis paper provides a theoretical framework that analyzes supervised pretraining for ICRL. This includes two recently proposed training methods --- algorithm distillation and decision-pretrained transformers. First, assuming model realizability, we prove the supervised-pretrained transformer will imitate the conditional expectation of the expert algorithm given the observed trajectory. The generalization error will scale with model capacity and a distribution divergence factor between the expert and offline algorithms. Second, we show transformers with ReLU attention can efficiently approximate near-optimal online reinforcement learning algorithms like LinUCB and Thompson sampling for stochastic linear bandits, and UCB-VI for tabular Markov decision processes. This provides the first quantitative analysis of the ICRL capabilities of transformers pretrained from offline trajectories.", "primary_area": "learning theory", "site": "https://iclr.cc/virtual/2024/poster/17421"} +{"video_file": "yTBXeXdbMf_39018545.mp4", "openreview_id": "yTBXeXdbMf", "slideslive_id": 39018545, "venue": "iclr2024", "title": "Provable Reward-Agnostic Preference-Based Reinforcement Learning", "status": "Spotlight", "keywords": "reinforcement learning theory;reward-agnostic learning", "tldr": "PAC reward-agnostic reinforcement learning from preference feedback over trajectories with function approximation.", "abstract": "Preference-based Reinforcement Learning (PbRL) is a paradigm in which an RL agent learns to optimize a task using pair-wise preference-based feedback over trajectories, rather than explicit reward signals. While PbRL has demonstrated practical success in fine-tuning language models, existing theoretical work focuses on regret minimization and fails to capture most of the practical frameworks. In this study, we fill in such a gap between theoretical PbRL and practical algorithms by proposing a theoretical reward-agnostic PbRL framework where exploratory trajectories that enable accurate learning of hidden reward functions are acquired before collecting any human feedback. Theoretical analysis demonstrates that our algorithm requires less human feedback for learning the optimal policy under preference-based models with linear parameterization and unknown transitions, compared to the existing theoretical literature. Specifically, our framework can incorporate linear and low-rank MDPs with efficient sample complexity. 
Additionally, we investigate reward-agnostic RL with action-based comparison feedback and introduce an efficient querying algorithm tailored to this scenario.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/17417"} +{"video_file": "yV6fD7LYkF_39018691.mp4", "openreview_id": "yV6fD7LYkF", "slideslive_id": 39018691, "venue": "iclr2024", "title": "ValUES: A Framework for Systematic Validation of Uncertainty Estimation in Semantic Segmentation", "status": "Oral", "keywords": "uncertainty;segmentation;validation", "tldr": "We address the flawed validation in uncertainty estimation for segmentation by introducing a framework that explores uncertainty types, essential components, and effective methods, with empirical results from simulated and real-world data.", "abstract": "Uncertainty estimation is an essential and heavily-studied component for the reliable application of semantic segmentation methods. While various studies exist claiming methodological advances on the one hand, and successful application on the other hand, the field is currently hampered by a gap between theory and practice leaving fundamental questions unanswered: Can data-related and model-related uncertainty really be separated in practice? Which components of an uncertainty method are essential for real-world performance? Which uncertainty method works well for which application? In this work, we link this research gap to a lack of systematic and comprehensive evaluation of uncertainty methods. Specifically, we identify three key pitfalls in current literature and present an evaluation framework that bridges the research gap by providing 1) a controlled environment for studying data ambiguities as well as distribution shifts, 2) systematic ablations of relevant method components, and 3) test-beds for the five predominant uncertainty applications: OoD-detection, active learning, failure detection, calibration, and ambiguity modeling. Empirical results on simulated as well as real-world data demonstrate how the proposed framework is able to answer the predominant questions in the field revealing for instance that 1) separation of uncertainty types works on simulated data but does not necessarily translate to real-world data, 2) aggregation of scores is a crucial but currently neglected component of uncertainty methods, 3) While ensembles are performing most robustly across the different downstream tasks and settings, test-time augmentation often constitutes a light-weight alternative. Code is at: https://github.com/IML-DKFZ/values", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/17416"} +{"video_file": "ycF7mKfVGO_39019133.mp4", "openreview_id": "ycF7mKfVGO", "slideslive_id": 39019133, "venue": "iclr2024", "title": "Towards Assessing and Benchmarking Risk-Return Tradeoff of Off-Policy Evaluation", "status": "Poster", "keywords": "off-policy evaluation;offline reinforcement learning;offline policy selection;risk-return tradeoff", "tldr": "We propose a new evaluation metric for OPE called SharpeRatio@k, which measures the efficiency of policy portfolios formed by an OPE estimator taking its risk-return tradeoff into consideration.", "abstract": "Off-Policy Evaluation (OPE) aims to assess the effectiveness of counterfactual policies using offline logged data and is frequently utilized to identify the top-\nk\npromising policies for deployment in online A/B tests. 
Existing evaluation metrics for OPE estimators primarily focus on the \"accuracy\" of OPE or that of downstream policy selection, neglecting risk-return tradeoff and efficiency in subsequent online policy deployment. To address this issue, we draw inspiration from portfolio evaluation in finance and develop a new metric, called SharpeRatio@k, which measures the risk-return tradeoff and efficiency of policy portfolios formed by an OPE estimator under varying online evaluation budgets (\nk\n). We first demonstrate, in two example scenarios, that our proposed metric can clearly distinguish between conservative and high-stakes OPE estimators and reliably identify the most efficient estimator capable of forming superior portfolios of candidate policies that maximize return with minimal risk during online deployment, while existing evaluation metrics produce only degenerate results. To facilitate a quick, accurate, and consistent evaluation of OPE via SharpeRatio@k, we have also implemented the proposed metric in an open-source software. Using SharpeRatio@k and the software, we conduct a benchmark experiment of various OPE estimators regarding their risk-return tradeoff, presenting several future directions for OPE research.", "primary_area": "datasets and benchmarks", "site": "https://iclr.cc/virtual/2024/poster/17412"} +{"video_file": "yxKZGQLzOP_39018537.mp4", "openreview_id": "yxKZGQLzOP", "slideslive_id": 39018537, "venue": "iclr2024", "title": "Generating Pragmatic Examples to Train Neural Program Synthesizers", "status": "Poster", "keywords": "program synthesis;pragmatics;self-play", "tldr": "Pragmatic program synthesis in a realistic program space without human supervision in training", "abstract": "Programming-by-example is the task of synthesizing a program that is consistent with a set of user-provided input-output examples. As examples are often an under-specification of one's intent, a good synthesizer must choose the intended program from the many that are consistent with the given set of examples. Prior work frames program synthesis as a cooperative game between a listener (that synthesizes programs) and a speaker (a user choosing examples), and shows that models of computational pragmatic inference are effective in choosing the user intended programs. However, these models require counterfactual reasoning over a large set of programs and examples, which is infeasible in realistic program spaces. In this paper, we propose PraX, a novel way to amortize this search with neural networks. We sample pairs of programs and examples via self-play between listener and speaker models, and use pragmatic inference to choose informative training examples from this sample. We then use the informative dataset to train models to improve the synthesizer's ability to disambiguate user-provided examples without human supervision. 
We validate PraX on the challenging task of synthesizing regular expressions from example strings, and find that our method (1) outperforms models trained without choosing pragmatic examples by 23% (a 51% relative increase) (2) matches the performance of supervised learning on a dataset of pragmatic examples provided by humans, despite using no human data in training.", "primary_area": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17397"} +{"video_file": "z6KS9D1dxt_39019004.mp4", "openreview_id": "z6KS9D1dxt", "slideslive_id": 39019004, "venue": "iclr2024", "title": "Byzantine Robust Cooperative Multi-Agent Reinforcement Learning as a Bayesian Game", "status": "Poster", "keywords": "Multi-agent reinforcement learning;Robustness;Game Theory;Adversarial Attack", "tldr": "We study robust cooperative MARL against Byzantine adversary using a Bayesian game approach", "abstract": "In this study, we explore the robustness of cooperative multi-agent reinforcement learning (c-MARL) against Byzantine failures, where any agent can enact arbitrary, worst-case actions due to malfunction or adversarial attack. To address the uncertainty that any agent can be adversarial, we propose a Bayesian Adversarial Robust Dec-POMDP (BARDec-POMDP) framework, which views Byzantine adversaries as nature-dictated types, represented by a separate transition. This allows agents to learn policies grounded on their posterior beliefs about the type of other agents, fostering collaboration with identified allies and minimizing vulnerability to adversarial manipulation. We define the optimal solution to the BARDec-POMDP as an ex interim robust Markov perfect Bayesian equilibrium, which we prove to exist, and the corresponding policy weakly dominates previous approaches as time goes to infinity. To realize this equilibrium, we put forward a two-timescale actor-critic algorithm with almost sure convergence under specific conditions. Experiments on matrix game, Level-based Foraging and StarCraft II indicate that our method successfully acquires intricate micromanagement skills and adaptively aligns with allies under worst-case perturbations, showing resilience against non-oblivious adversaries, random allies, observation-based attacks, and transfer-based attacks.", "primary_area": "reinforcement learning", "site": "https://iclr.cc/virtual/2024/poster/17392"} +{"video_file": "zMvMwNvs4R_39018530.mp4", "openreview_id": "zMvMwNvs4R", "slideslive_id": 39018530, "venue": "iclr2024", "title": "Error Norm Truncation: Robust Training in the Presence of Data Noise for Text Generation Models", "status": "Spotlight", "keywords": "language generation;language modeling;machine translation;robustness;estimating data quality", "tldr": "We propose to truncate tokens with high L2 error norm to improve robustness of text generation models to noise.", "abstract": "Text generation models are notoriously vulnerable to errors in the training data. With the wide-spread availability of massive amounts of web-crawled data becoming more commonplace, how can we enhance the robustness of models trained on a massive amount of noisy web-crawled text? In our work, we propose Error Norm Truncation (ENT), a robust enhancement method to the standard training objective that truncates noisy data. 
Compared to methods that only use the negative log-likelihood loss to estimate data quality, our method provides a more accurate estimation by considering the distribution of non-target tokens, which is often overlooked by previous work. Through comprehensive experiments across language modeling, machine translation, and text summarization, we show that equipping text generation models with ENT improves generation quality over standard training and previous soft and hard truncation methods. Furthermore, we show that our method improves the robustness of models against two of the most detrimental types of noise in machine translation, resulting in an increase of more than 2 BLEU points over the MLE baseline when up to 50% of noise is added to the data.", "primary_area": "representation learning for computer vision, audio, language, and other modalities", "site": "https://iclr.cc/virtual/2024/poster/17384"} +{"video_file": "ziDFH8TPPK_39019250.mp4", "openreview_id": "ziDFH8TPPK", "slideslive_id": 39019250, "venue": "iclr2024", "title": "Long-Term Typhoon Trajectory Prediction: A Physics-Conditioned Approach Without Reanalysis Data", "status": "Spotlight", "keywords": "Weather Forecasting;Typhoon Trajectory Forecasting;Tropical Cyclone;Climate Change", "tldr": "Real-time 72-hour typhoon trajectory prediction using the NWP model.", "abstract": "In the face of escalating climate changes, typhoon intensities and their ensuing damage have surged. Accurate trajectory prediction is crucial for effective damage control. Traditional physics-based models, while comprehensive, are computationally intensive and rely heavily on the expertise of forecasters. Contemporary data-driven methods often rely on reanalysis data, which can be considered to be the closest to the true representation of weather conditions. However, reanalysis data is not produced in real-time and requires time for adjustment since prediction models are calibrated with observational data. This reanalysis data, such as ERA5, falls short in challenging real-world situations. Optimal preparedness necessitates predictions at least 72 hours in advance, beyond the capabilities of standard physics models. In response to these constraints, we present an approach that harnesses real-time Unified Model (UM) data, sidestepping the limitations of reanalysis data. Our model provides predictions at 6-hour intervals for up to 72 hours in advance and outperforms both state-of-the-art data-driven methods and numerical weather prediction models. In line with our efforts to mitigate adversities inflicted by typhoons, we release our preprocessed PHYSICS TRACK dataset, which includes ERA5 reanalysis data, typhoon best-track, and UM forecast data.", "primary_area": "applications to physical sciences (physics, chemistry, biology, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17373"} +{"video_file": "zlkXLb3wpF_39018996.mp4", "openreview_id": "zlkXLb3wpF", "slideslive_id": 39018996, "venue": "iclr2024", "title": "Fast and unified path gradient estimators for normalizing flows", "status": "Poster", "keywords": "Normalizing Flows;Gradient Estimators;Lattice Field Theory;Variational Inference", "tldr": "New low variance gradient estimator for normalizing flows", "abstract": "Recent work shows that path gradient estimators for normalizing flows have lower variance compared to standard estimators, resulting in improved training. 
However, they are often prohibitively more expensive from a computational point of view and cannot be applied to maximum likelihood training in a scalable manner, which severely hinders their widespread adoption. In this work, we overcome these crucial limitations. Specifically, we propose a fast path gradient estimator which works for all normalizing flow architectures of practical relevance for sampling from an unnormalized target distribution. We then show that this estimator can also be applied to maximum likelihood training and empirically establish its superior performance for several natural sciences applications.", "primary_area": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)", "site": "https://iclr.cc/virtual/2024/poster/17372"} +{"video_file": "01XV5Za56k_39027005.mp4", "openreview_id": "01XV5Za56k", "slideslive_id": 39027005, "venue": "nips2024", "title": "Testing Calibration in Nearly-Linear Time", "status": "Poster", "keywords": "Calibration;Property testing;Linear programming", "tldr": "We propose a property testing problem associated with measuring calibration of a predictor, and give a near-linear time algorithm for it based on minimum-cost flow.", "abstract": "In the recent literature on machine learning and decision making, calibration has emerged as a desirable and widely-studied statistical property of the outputs of binary prediction models. However, the algorithmic aspects of measuring model calibration have remained relatively less well-explored. Motivated by Blasiok et al '23, which proposed a rigorous framework for measuring distances to calibration, we initiate the algorithmic study of calibration through the lens of property testing. We define the problem of calibration testing from samples where given n draws from a distribution D on (predictions, binary outcomes), our goal is to distinguish between the cases where D is perfectly calibrated or \u03f5-far from calibration. We make the simple observation that the empirical smooth calibration linear program can be reformulated as an instance of minimum-cost flow on a highly-structured graph, and design an exact dynamic programming-based solver for it which runs in time O(n log^2(n)), and solves the calibration testing problem information-theoretically optimally in the same time. This improves upon state-of-the-art black-box linear program solvers requiring \u03a9(n^\u03c9) time, where \u03c9 > 2 is the exponent of matrix multiplication. We also develop algorithms for tolerant variants of our testing problem improving upon black-box linear program solvers, and give sample complexity lower bounds for alternative calibration measures to the one considered in this work. 
Finally, we present experiments showing the testing problem we define faithfully captures standard notions of calibration, and that our algorithms scale efficiently to accommodate large sample sizes.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/96961"} +{"video_file": "01s5ODIHKd_39025842.mp4", "openreview_id": "01s5ODIHKd", "slideslive_id": 39025842, "venue": "nips2024", "title": "FreqMark: Invisible Image Watermarking via Frequency Based Optimization in Latent Space", "status": "Poster", "keywords": "deep watermarking;latent frequency optimization", "tldr": "We propose a novel method for invisible watermark by optimizing the latent frequency space of images, named FreqMark, providing remarkable robustness and flexibility.", "abstract": "Invisible watermarking is essential for safeguarding digital content, enabling copyright protection and content authentication. However, existing watermarking methods fall short in robustness against regeneration attacks. In this paper, we propose a novel method called FreqMark that involves unconstrained optimization of the image latent frequency space obtained after VAE encoding. Specifically, FreqMark embeds the watermark by optimizing the latent frequency space of the images and then extracts the watermark through a pre-trained image encoder. This optimization allows a flexible trade-off between image quality with watermark robustness and effectively resists regeneration attacks. Experimental results demonstrate that FreqMark offers significant advantages in image quality and robustness, permits flexible selection of the encoding bit number, and achieves a bit accuracy exceeding 90% when encoding a 48-bit hidden message under various attack scenarios.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96959"} +{"video_file": "06JRFVK88O_39028540.mp4", "openreview_id": "06JRFVK88O", "slideslive_id": 39028540, "venue": "nips2024", "title": "Mimicking To Dominate: Imitation Learning Strategies for Success in Multiagent Games", "status": "Poster", "keywords": "Multi-agent Reinforcement Learning;Imitation Learning", "tldr": "An approach to utilize imitation learning techniques to improve multi-agent reinforcement learning.", "abstract": "Training agents in multi-agent games presents significant challenges due to their intricate nature. These challenges are exacerbated by dynamics influenced not only by the environment but also by strategies of opponents. Existing methods often struggle with slow convergence and instability. To address these challenges, we harness the potential of imitation learning (IL) to comprehend and anticipate actions of the opponents, aiming to mitigate uncertainties with respect to the game dynamics. Our key contributions include: (i) a new multi-agent IL model for predicting next moves of the opponents - our model works with hidden actions of opponents and local observations; (ii) a new multi-agent reinforcement learning (MARL) algorithm that combines our IL model and policy training into one single training process; and (iii) extensive experiments in three challenging game environments, including an advanced version of the Star-Craft multi-agent challenge (i.e., SMACv2). 
Experimental results show that our approach achieves superior performance compared to state-of-the-art MARL algorithms.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96954"} +{"video_file": "06Vt6f2js7_39024371.mp4", "openreview_id": "06Vt6f2js7", "slideslive_id": 39024371, "venue": "nips2024", "title": "SyncTweedies: A General Generative Framework Based on Synchronized Diffusions", "status": "Poster", "keywords": "Diffusion Models;Synchronization;Texturing;3D Gaussian Splatting;Mesh;Panorama", "tldr": "A general framework for diffusion synchronization", "abstract": "We introduce a general diffusion synchronization framework for generating diverse visual content, including ambiguous images, panorama images, 3D mesh textures, and 3D Gaussian splats textures, using a pretrained image diffusion model. We first present an analysis of various scenarios for synchronizing multiple diffusion processes through a canonical space. Based on the analysis, we introduce a synchronized diffusion method, SyncTweedies, which averages the outputs of Tweedie\u2019s formula while conducting denoising in multiple instance spaces. Compared to previous work that achieves synchronization through finetuning, SyncTweedies is a zero-shot method that does not require any finetuning, preserving the rich prior of diffusion models trained on Internet-scale image datasets without overfitting to specific domains. We verify that SyncTweedies offers the broadest applicability to diverse applications and superior performance compared to the previous state-of-the-art for each application. Our project page is at https://synctweedies.github.io.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96953"} +{"video_file": "08GbdALmEs_39028523.mp4", "openreview_id": "08GbdALmEs", "slideslive_id": 39028523, "venue": "nips2024", "title": "Learning Versatile Skills with Curriculum Masking", "status": "Poster", "keywords": "reinforcement learning;unsupervised pretraining;masked prediction;curriculum learning", "tldr": "We propose a curriculum-based masked prediction approach for unsupervised RL pretraining, which acquires skills at different complexity and achieves superior performance in various downstream tasks.", "abstract": "Masked prediction has emerged as a promising pretraining paradigm in offline reinforcement learning (RL) due to its versatile masking schemes, enabling flexible inference across various downstream tasks with a unified model. Despite the versatility of masked prediction, it remains unclear how to balance the learning of skills at different levels of complexity. To address this, we propose CurrMask, a curriculum masking pretraining paradigm for sequential decision making. Motivated by how humans learn by organizing knowledge in a curriculum, CurrMask adjusts its masking scheme during pretraining for learning versatile skills. Through extensive experiments, we show that CurrMask exhibits superior zero-shot performance on skill prompting tasks, goal-conditioned planning tasks, and competitive finetuning performance on offline RL tasks. 
Additionally, our analysis of training dynamics reveals that CurrMask gradually acquires skills of varying complexity by dynamically adjusting its masking scheme.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96950"} +{"video_file": "09nyBqSdUz_39024909.mp4", "openreview_id": "09nyBqSdUz", "slideslive_id": 39024909, "venue": "nips2024", "title": "RefDrop: Controllable Consistency in Image or Video Generation via Reference Feature Guidance", "status": "Poster", "keywords": "Consistent image generation;Diverse image generation;Improve temporal-consistency;Feature injection from multiple images", "tldr": "Our novel self-attention layers boosts control over feature injection from a single or multiple reference images, enhancing both image and video generation for diffusion model.", "abstract": "There is a rapidly growing interest in controlling consistency across multiple generated images using diffusion models. Among various methods, recent works have found that simply manipulating attention modules by concatenating features from multiple reference images provides an efficient approach to enhancing consistency without fine-tuning. Despite its popularity and success, few studies have elucidated the underlying mechanisms that contribute to its effectiveness. In this work, we reveal that the popular approach is a linear interpolation of image self-attention and cross-attention between synthesized content and reference features, with a constant rank-1 coefficient. Motivated by this observation, we find that a rank-1 coefficient is not necessary and simplifies the controllable generation mechanism. The resulting algorithm, which we coin as RefDrop, allows users to control the influence of reference context in a direct and precise manner. Besides further enhancing consistency in single-subject image generation, our method also enables more interesting applications, such as the consistent generation of multiple subjects, suppressing specific features to encourage more diverse content, and high-quality personalized video generation by boosting temporal consistency. Even compared with state-of-the-art image-prompt-based generators, such as IP-Adapter, RefDrop is competitive in terms of controllability and quality while avoiding the need to train a separate image encoder for feature injection from reference images, making it a versatile plug-and-play solution for any image or video diffusion model.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96948"} +{"video_file": "0DE1dLMW2b_39024590.mp4", "openreview_id": "0DE1dLMW2b", "slideslive_id": 39024590, "venue": "nips2024", "title": "Quantum algorithm for large-scale market equilibrium computation", "status": "Poster", "keywords": "market equilibrium computation;quantum algorithm", "tldr": "We provide the first quantum algorithm for market equilibrium computation with sub-linear performance.", "abstract": "Classical algorithms for market equilibrium computation such as proportional response dynamics face scalability issues with Internet-based applications such as auctions, recommender systems, and fair division, despite having an almost linear runtime in terms of the product of buyers and goods. In this work, we provide the first quantum algorithm for market equilibrium computation with sub-linear performance. 
Our algorithm provides a polynomial runtime speedup in terms of the product of the number of buyers and goods while reaching the same optimization objective value as the classical algorithm. Numerical simulations of a system with 16384 buyers and goods support our theoretical results that our quantum algorithm provides a significant speedup.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/96944"} +{"video_file": "0G0VpMjKyV_39026169.mp4", "openreview_id": "0G0VpMjKyV", "slideslive_id": 39026169, "venue": "nips2024", "title": "Sketching for Distributed Deep Learning: A Sharper Analysis", "status": "Poster", "keywords": "sketching;distributed learning;federated learning;optimization", "tldr": "We provide a tighter analysis of sketching in distributed learning that eliminates the dimension dependence without imposing unrealistic restrictive assumptions in the distributed learning setup.", "abstract": "The high communication cost between the server and the clients is a significant bottleneck in scaling distributed learning for overparametrized deep models. One popular approach for reducing this communication overhead is randomized sketching. However, existing theoretical analyses for sketching-based distributed learning (sketch-DL) either incur a prohibitive dependence on the ambient dimension or need additional restrictive assumptions such as heavy-hitters. Nevertheless, despite existing pessimistic analyses, empirical evidence suggests that sketch-DL is competitive with its uncompressed counterpart, thus motivating a sharper analysis. In this work, we introduce a sharper ambient dimension-independent convergence analysis for sketch-DL using the second-order geometry specified by the loss Hessian. Our results imply ambient dimension-independent communication complexity for sketch-DL. We present empirical results both on the loss Hessian and overall accuracy of sketch-DL supporting our theoretical results. Taken together, our results provide theoretical justification for the observed empirical success of sketch-DL.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/96942"} +{"video_file": "0KvYLaTBTE_39028679.mp4", "openreview_id": "0KvYLaTBTE", "slideslive_id": 39028679, "venue": "nips2024", "title": "Latent Plan Transformer for Trajectory Abstraction: Planning as Latent Space Inference", "status": "Poster", "keywords": "Generative models;Reinforcement learning;Decision transformer", "tldr": "We present the Latent Plan Transformer (LPT), a novel generative model designed to explore planning and maintain temporal consistency through the latent space.", "abstract": "In tasks aiming for long-term returns, planning becomes essential. We study generative modeling for planning with datasets repurposed from offline reinforcement learning. Specifically, we identify temporal consistency in the absence of step-wise rewards as one key technical challenge. We introduce the Latent Plan Transformer (LPT), a novel model that leverages a latent variable to connect a Transformer-based trajectory generator and the final return. LPT can be learned with maximum likelihood estimation on trajectory-return pairs. In learning, posterior sampling of the latent variable naturally integrates sub-trajectories to form a consistent abstraction despite the finite context. At test time, the latent variable is inferred from an expected return before policy execution, realizing the idea of planning as inference. 
Our experiments demonstrate that LPT can discover improved decisions from sub-optimal trajectories, achieving competitive performance across several benchmarks, including Gym-Mujoco, Franka Kitchen, Maze2D, and Connect Four. It exhibits capabilities in nuanced credit assignments, trajectory stitching, and adaptation to environmental contingencies. These results validate that latent variable inference can be a strong alternative to step-wise reward prompting.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96937"} +{"video_file": "0LXotew9Du_39025943.mp4", "openreview_id": "0LXotew9Du", "slideslive_id": 39025943, "venue": "nips2024", "title": "KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization", "status": "Poster", "keywords": "Quantization;KV Cache;LLM Inference;Compression;Long Context Length", "tldr": "We quantize the KV cache accurately to ultra-low precision (e.g., 2-bit) to enable efficient long context length inference.", "abstract": "LLMs are seeing growing use for applications which require large context windows, and with these large context windows KV cache activations surface as the dominant contributor to memory consumption during inference. Quantization is a promising approach for compressing KV cache activations; however, existing solutions fail to represent activations accurately in sub-4-bit precision. Our work, KVQuant, facilitates low precision KV cache quantization by incorporating several novel methods: (i) Per-Channel Key Quantization, where we adjust the dimension along which we quantize the Key activations to better match the distribution; (ii) Pre-RoPE Key Quantization, where we quantize Key activations before the rotary positional embedding to mitigate its impact on quantization; (iii) Non-Uniform KV Cache Quantization, where we derive per-layer sensitivity-weighted non-uniform datatypes that better represent the distributions; and (iv) Per-Vector Dense-and-Sparse Quantization, where we isolate outliers separately for each vector to minimize skews in quantization ranges. By applying our method to the LLaMA, Llama-2, Llama-3, and Mistral models, we achieve < 0.1 perplexity degradation with 3-bit quantization on both Wikitext-2 and C4, outperforming existing approaches. Our method enables serving LLaMA-7B with a context length of up to 1 million on a single A100-80GB GPU and up to 10 million on an 8-GPU system. We develop custom CUDA kernels for KVQuant, showing that we can achieve up to ~1.7x speedups, compared to baseline fp16 matrix-vector multiplications, for the LLaMA-7B model.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96936"} +{"video_file": "0MXzbAv8xy_39026191.mp4", "openreview_id": "0MXzbAv8xy", "slideslive_id": 39026191, "venue": "nips2024", "title": "GFT: Graph Foundation Model with Transferable Tree Vocabulary", "status": "Poster", "keywords": "Graph Foundation Model;Transferability;Computation Tree;Graph Neural Network", "tldr": "We investigate the transferability of computation tree in the graph, and build a graph foundation model based on that.", "abstract": "Inspired by the success of foundation models in applications such as ChatGPT, as graph data has been ubiquitous, one can envision the far-reaching impacts that can be brought by Graph Foundation Models (GFMs) with broader applications in the areas such as scientific research, social network analysis, drug discovery, and e-commerce. 
Despite the significant progress of pre-trained graph neural networks, there haven\u2019t been GFMs that can achieve desired performance on various graph-learning-related tasks. Building GFMs may rely on a vocabulary that encodes transferable patterns shared among different tasks and domains. Unlike image and text, defining such transferable patterns for graphs remains an open question. In this paper, we aim to bridge this gap by rethinking the transferable patterns on graphs as computation trees -- i.e., tree structures derived from the message-passing process. Based on this insight, we propose a cross-task, cross-domain graph foundation model named GFT, short for Graph Foundation model with transferable Tree vocabulary. By treating computation trees as tokens within the transferable vocabulary, GFT improves model generalization and reduces the risk of negative transfer. The theoretical analyses and extensive experimental studies have demonstrated the transferability of computation trees and shown the effectiveness of GFT across diverse tasks and domains in graph learning. The open source code and data are available at https://github.com/Zehong-Wang/GFT.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/96932"} +{"video_file": "0SRJBtTNhX_39025353.mp4", "openreview_id": "0SRJBtTNhX", "slideslive_id": 39025353, "venue": "nips2024", "title": "IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors", "status": "Poster", "keywords": "Graph Machine Learning;Graph Data Augmentation;Graph Neural Networks", "tldr": "This paper introduces IntraMix, a graph augmentation method that employs Intra-Class Mixup and high-confidence neighbor selection. IntraMix addresses both the issues of scarce high-quality labels and missing neighborhoods in most graphs.", "abstract": "Graph Neural Networks (GNNs) have shown great performance in various tasks, with the core idea of learning from data labels and aggregating messages within the neighborhood of nodes. However, the common challenges in graphs are twofold: insufficient accurate (high-quality) labels and limited neighbors for nodes, resulting in weak GNNs. Existing graph augmentation methods typically address only one of these challenges, often adding training costs or relying on oversimplified or knowledge-intensive strategies, limiting their generalization. To simultaneously address both challenges faced by graphs in a generalized way, we propose an elegant method called IntraMix. Considering the incompatibility of vanilla Mixup with the complex topology of graphs, IntraMix innovatively employs Mixup among inaccurate labeled data of the same class, generating high-quality labeled data at minimal cost. Additionally, it finds data with high confidence of being clustered into the same group as the generated data to serve as their neighbors, thereby enriching the neighborhoods of graphs. IntraMix efficiently tackles both issues faced by graphs and challenges the prior notion of the limited effectiveness of Mixup in node classification. IntraMix is a theoretically grounded plug-in-play method that can be readily applied to all GNNs. Extensive experiments demonstrate the effectiveness of IntraMix across various GNNs and datasets. 
Our code is available at: https://github.com/Zhengsh123/IntraMix.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/96930"} +{"video_file": "0TUMAAb3of_39027114.mp4", "openreview_id": "0TUMAAb3of", "slideslive_id": 39027114, "venue": "nips2024", "title": "Queueing Matching Bandits with Preference Feedback", "status": "Poster", "keywords": "Bandits;Queue;Preference Feedback", "tldr": "Bandit algorithms for queueing matching under a preference model", "abstract": "In this study, we consider multi-class multi-server asymmetric queueing systems consisting of N queues on one side and K servers on the other side, where jobs randomly arrive in queues at each time. The service rate of each job-server assignment is unknown and modeled by a feature-based Multinomial Logit (MNL) function. At each time, a scheduler assigns jobs to servers, and each server stochastically serves at most one job based on its preferences over the assigned jobs. The primary goal of the algorithm is to stabilize the queues in the system while learning the service rates of servers. To achieve this goal, we propose algorithms based on UCB and Thompson Sampling, which achieve system stability with an average queue length bound of O(min{N, K}/\u03f5) for a large time horizon T, where \u03f5 is a traffic slackness of the system. Furthermore, the algorithms achieve sublinear regret bounds of O~(min{T Q_max, T^{3/4}}), where Q_max represents the maximum queue length over agents and times. Lastly, we provide experimental results to demonstrate the performance of our algorithms.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/96929"} +{"video_file": "0WCFI2Qx85_39026309.mp4", "openreview_id": "0WCFI2Qx85", "slideslive_id": 39026309, "venue": "nips2024", "title": "ScaleKD: Strong Vision Transformers Could Be Excellent Teachers", "status": "Poster", "keywords": "knowledge distillation;model compression;training acceleration;vision transformer;convolutional neural network;multi-layer perceptron", "tldr": "In this paper, we present ScaleKD, showing that pre-trained ViT models could be used as teachers preserving scalable properties to advance cross-architecture knowledge distillation research.", "abstract": "In this paper, we question if well pre-trained vision transformer (ViT) models could be used as teachers that exhibit scalable properties to advance cross architecture knowledge distillation research, in the context of adopting mainstream large-scale visual recognition datasets for evaluation. To make this possible, our analysis underlines the importance of seeking effective strategies to align (1) feature computing paradigm differences, (2) model scale differences, and (3) knowledge density differences. By combining three closely coupled components namely cross attention projector, dual-view feature mimicking and teacher parameter perception tailored to address the alignment problems stated above, we present a simple and effective knowledge distillation method, called ScaleKD. Our method can train student backbones that span across a variety of convolutional neural network (CNN), multi-layer perceptron (MLP), and ViT architectures on image classification datasets, achieving state-of-the-art knowledge distillation performance. 
For instance, taking a well pre-trained Swin-L as the teacher model, our method gets 75.15%|82.03%|84.16%|78.63%|81.96%|83.93%|83.80%|85.53% top-1 accuracies for MobileNet-V1|ResNet-50|ConvNeXt-T|Mixer-S/16|Mixer-B/16|ViT-S/16|Swin-T|ViT-B/16 models trained on ImageNet-1K dataset from scratch, showing 3.05%|3.39%|2.02%|4.61%|5.52%|4.03%|2.62%|3.73% absolute gains to the individually trained counterparts. Intriguingly, when scaling up the size of teacher models or their pre-training datasets, our method showcases the desired scalable properties, bringing increasingly larger gains to student models. We also empirically show that the student backbones trained by our method transfer well on downstream MS-COCO and ADE20K datasets. More importantly, our method could be used as a more efficient alternative to the time-intensive pre-training paradigm for any target student model on large-scale datasets if a strong pre-trained ViT is available, reducing the amount of viewed training samples up to 195\n\u00d7\n. The code is available at https://github.com/deep-optimization/ScaleKD.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/96927"} +{"video_file": "0XeNkkENuI_39024867.mp4", "openreview_id": "0XeNkkENuI", "slideslive_id": 39024867, "venue": "nips2024", "title": "The Road Less Scheduled", "status": "Oral", "keywords": "Stochastic Optimization;Optimization;Convex Optimization;Learning Rates;Learning Rate Schedules", "tldr": "Train without learning rate schedules", "abstract": "Existing learning rate schedules that do not require specification of the optimization stopping step $T$ are greatly out-performed by learning rate schedules that depend on $T$. We propose an approach that avoids the need for this stopping time by eschewing the use of schedules entirely, while exhibiting state-of-the-art performance compared to schedules across a wide family of problems ranging from convex problems to large-scale deep learning problems. Our Schedule-Free approach introduces no additional hyper-parameters over standard optimizers with momentum. Our method is a direct consequence of a new theory we develop that unifies scheduling and iterate averaging. An open source implementation of our method is available at https://github.com/facebookresearch/schedule_free. Schedule-Free AdamW is the core algorithm behind our winning entry to the MLCommons 2024 AlgoPerf Algorithmic Efficiency Challenge Self-Tuning track.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/96925"} +{"video_file": "0ZZMUjZJYF_39028519.mp4", "openreview_id": "0ZZMUjZJYF", "slideslive_id": 39028519, "venue": "nips2024", "title": "Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study", "status": "Poster", "keywords": "LLMs;Learning by Teaching;Reasoning;Mathematical Reasoning;Code Synthesis;Weak-to-Strong Generalization;In-Context Learning;Prompting;Knowledge Distillation;Education-Inspired", "tldr": "Aiming to improve LLM reasoning, we conduct a preliminary exploration of whether LLMs can \"learn by teaching\" -- a well-known paradigm in human learning", "abstract": "Teaching to improve student models (e.g., knowledge distillation) is an extensively studied methodology in LLMs. However, in human education, teaching enhances not only the students but also the teachers by fostering more rigorous and clearer reasoning, as well as deeper knowledge building. We ask: Can LLMs also learn by teaching (LbT) for better reasoning? 
If the answer is yes, we can potentially unlock the possibility of continuously advancing the models without solely relying on human-produced data or stronger models. In this paper, we provide a preliminary exploration of this question. We show that LbT ideas can be incorporated into existing LLM training/prompting pipelines and bring improvements. Specifically, we design three methods, each mimicking one of the three levels of LbT: observing students' feedback, learning from the feedback, and learning iteratively, with the goal of improving answer accuracy without training or improving models' inherent capability with fine-tuning. We reveal some findings: (1) Teaching materials that make it easier for students to learn (via in-context learning) have clearer and more accurate logic; (2) Weak-to-strong generalization: LbT might help improve strong models by teaching weak models; (3) Diversity in students might help: teaching multiple students could be better than teaching a single student or the teacher alone. We hope that our exploration can inspire future research on LbT and, more broadly, the adoption of advanced education techniques to improve LLMs. The code and website are at https://github.com/imagination-research/lbt and https://sites.google.com/view/llm-learning-by-teaching.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96924"} +{"video_file": "0ZeONp33f0_39024716.mp4", "openreview_id": "0ZeONp33f0", "slideslive_id": 39024716, "venue": "nips2024", "title": "Graph Neural Networks and Arithmetic Circuits", "status": "Poster", "keywords": "Machine Learning;Graph Neural Networks;Arithmetic Circuits;Computational Complexity", "tldr": "We obtain a characterization of the computational power/expressivity of graph neural networks in terms of arithmetic circuits over the reals.", "abstract": "We characterize the computational power of neural networks that follow the graph neural network (GNN) architecture, not restricted to aggregate-combine GNNs or other particular types. We establish an exact correspondence between the expressivity of GNNs using diverse activation functions and arithmetic circuits over real numbers. In our results the activation function of the network becomes a gate type in the circuit. Our result holds for families of constant depth circuits and networks, both uniformly and non-uniformly, for all common activation functions.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/96923"} +{"video_file": "0aN7VWwp4g_39026675.mp4", "openreview_id": "0aN7VWwp4g", "slideslive_id": 39026675, "venue": "nips2024", "title": "Fourier Amplitude and Correlation Loss: Beyond Using L2 Loss for Skillful Precipitation Nowcasting", "status": "Poster", "keywords": "Precipitation Nowcasting;Video Prediction;Fourier Analysis;Loss Function", "tldr": "We propose FACL, a loss function replacing MSE in precipitation nowcasting to achieve sharp and quality forecasts.", "abstract": "Deep learning approaches have been widely adopted for precipitation nowcasting in recent years. Previous studies mainly focus on proposing new model architectures to improve pixel-wise metrics. However, they frequently result in blurry predictions which provide limited utility to forecasting operations. In this work, we propose a new Fourier Amplitude and Correlation Loss (FACL) which consists of two novel loss terms: Fourier Amplitude Loss (FAL) and Fourier Correlation Loss (FCL). 
FAL regularizes the Fourier amplitude of the model prediction and FCL complements the missing phase information. The two loss terms work together to replace the traditional L2 losses such as MSE and weighted MSE for the spatiotemporal prediction problem on signal-based data. Our method is generic, parameter-free and efficient. Extensive experiments using one synthetic dataset and three radar echo datasets demonstrate that our method improves perceptual metrics and meteorology skill scores, with a small trade-off to pixel-wise accuracy and structural similarity. Moreover, to improve the error margin in meteorological skill scores such as Critical Success Index (CSI) and Fractions Skill Score (FSS), we propose and adopt the Regional Histogram Divergence (RHD), a distance metric that considers the patch-wise similarity between signal-based imagery patterns with tolerance to local transforms.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/96922"} +{"video_file": "0bFXbEMz8e_39028296.mp4", "openreview_id": "0bFXbEMz8e", "slideslive_id": 39028296, "venue": "nips2024", "title": "FlowLLM: Flow Matching for Material Generation with Large Language Models as Base Distributions", "status": "Poster", "keywords": "generative models;material generation;chemistry;flow matching;large language models", "tldr": "New generative model for materials combining Large Language Models and Riemannian Flow Matching, that significantly outperforms prior methods.", "abstract": "Material discovery is a critical area of research with the potential to revolutionize various fields, including carbon capture, renewable energy, and electronics. However, the immense scale of the chemical space makes it challenging to explore all possible materials experimentally. In this paper, we introduce FlowLLM, a novel generative model that combines large language models (LLMs) and Riemannian flow matching (RFM) to design novel crystalline materials. FlowLLM first fine-tunes an LLM to learn an effective base distribution of meta-stable crystals in a text representation. After converting to a graph representation, the RFM model takes samples from the LLM and iteratively refines the coordinates and lattice parameters. Our approach significantly outperforms state-of-the-art methods, increasing the generation rate of stable materials by over three times and increasing the rate for stable, unique, and novel crystals by\n\u223c\n50\n% \u2013 a huge improvement on a difficult problem. Additionally, the crystals generated by FlowLLM are much closer to their relaxed state when compared with another leading model, significantly reducing post-hoc computational cost.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96921"} +{"video_file": "0cgDDa4OFr_39028117.mp4", "openreview_id": "0cgDDa4OFr", "slideslive_id": 39028117, "venue": "nips2024", "title": "Sourcerer: Sample-based Maximum Entropy Source Distribution Estimation", "status": "Poster", "keywords": "source distribution estimation;maximum entropy;Sliced-Wasserstein distance;empirical Bayes;simulation-based inference", "tldr": "We propose a sample-based, maximum entropy approach to the source distribution estimation problem.", "abstract": "Scientific modeling applications often require estimating a distribution of parameters consistent with a dataset of observations - an inference task also known as source distribution estimation. 
This problem can be ill-posed, however, since many different source distributions might produce the same distribution of data-consistent simulations. To make a principled choice among many equally valid sources, we propose an approach which targets the maximum entropy distribution, i.e., prioritizes retaining as much uncertainty as possible. Our method is purely sample-based - leveraging the Sliced-Wasserstein distance to measure the discrepancy between the dataset and simulations - and thus suitable for simulators with intractable likelihoods. We benchmark our method on several tasks, and show that it can recover source distributions with substantially higher entropy than recent source estimation methods, without sacrificing the fidelity of the simulations. Finally, to demonstrate the utility of our approach, we infer source distributions for parameters of the Hodgkin-Huxley model from experimental datasets with hundreds of single-neuron measurements. In summary, we propose a principled method for inferring source distributions of scientific simulator parameters while retaining as much uncertainty as possible.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/96918"}
+{"video_file": "0d50Il6enG_39028894.mp4", "openreview_id": "0d50Il6enG", "slideslive_id": 39028894, "venue": "nips2024", "title": "Non-parametric classification via expand-and-sparsify representation", "status": "Poster", "keywords": "non-parametric regression;non-parametric classification;expand-and-sparsify representation;universal consistency;minimax-optimal convergence rate", "tldr": "Propose algorithms for non-parametric classification via expansion-and-sparsify representation and prove that the convergence rate is minimax-optimal.", "abstract": "In expand-and-sparsify (EaS) representation, a data point in S^{d-1} is first randomly mapped to higher dimension R^m, where m > d, followed by a sparsification operation where the informative k \u226a m of the m coordinates are set to one and the rest are set to zero. We propose two algorithms for non-parametric classification using such EaS representation. For our first algorithm, we use winners-take-all operation for the sparsification step and show that the proposed classifier admits the form of a locally weighted average classifier and establish its consistency via Stone's Theorem. Further, assuming that the conditional probability function P(y=1|x) = \u03b7(x) is H\u00f6lder continuous and for optimal choice of m, we show that the convergence rate of this classifier is minimax-optimal. For our second algorithm, we use empirical k-thresholding operation for the sparsification step, and under the assumption that data lie on a low dimensional manifold of dimension d_0 \u226a d, we show that the convergence rate of this classifier depends only on d_0 and is again minimax-optimal. 
Empirical evaluations performed on real-world datasets corroborate our theoretical results.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/96917"} +{"video_file": "0dtA21q83C_39026385.mp4", "openreview_id": "0dtA21q83C", "slideslive_id": 39026385, "venue": "nips2024", "title": "DeNetDM: Debiasing by Network Depth Modulation", "status": "Poster", "keywords": "Trustworthy Machine Learning;Debiasing;Robustness", "tldr": "A method for obtaining a debiased classifier by modulating the network depth.", "abstract": "Neural networks trained on biased datasets tend to inadvertently learn spurious correlations, hindering generalization. We formally prove that (1) samples that exhibit spurious correlations lie on a lower rank manifold relative to the ones that do not; and (2) the depth of a network acts as an implicit regularizer on the rank of the attribute subspace that is encoded in its representations. Leveraging these insights, we present DeNetDM, a novel debiasing method that uses network depth modulation as a way of developing robustness to spurious correlations. Using a training paradigm derived from Product of Experts, we create both biased and debiased branches with deep and shallow architectures and then distill knowledge to produce the target debiased model. Our method requires no bias annotations or explicit data augmentation while performing on par with approaches that require either or both. We demonstrate that DeNetDM outperforms existing debiasing techniques on both synthetic and real-world datasets by 5%. The project page is available at https://vssilpa.github.io/denetdm/.", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/96916"} +{"video_file": "0feJEykDRx_39024610.mp4", "openreview_id": "0feJEykDRx", "slideslive_id": 39024610, "venue": "nips2024", "title": "Mobility-LLM: Learning Visiting Intentions and Travel Preference from Human Mobility Data with Large Language Models", "status": "Poster", "keywords": "spatial-temporal data mining;location-based service;check-in sequence;large language model", "tldr": "We propose a novel unified framework that reprograms the check-in sequence to let LLMs comprehensively understand human visiting intentions and their travel preferences", "abstract": "Location-based services (LBS) have accumulated extensive human mobility data on diverse behaviors through check-in sequences. These sequences offer valuable insights into users\u2019 intentions and preferences. Yet, existing models analyzing check-in sequences fail to consider the semantics contained in these sequences, which closely reflect human visiting intentions and travel preferences, leading to an incomplete comprehension. Drawing inspiration from the exceptional semantic understanding and contextual information processing capabilities of large language models (LLMs) across various domains, we present Mobility-LLM, a novel framework that leverages LLMs to analyze check-in sequences for multiple tasks. Since LLMs cannot directly interpret check-ins, we reprogram these sequences to help LLMs comprehensively understand the semantics of human visiting intentions and travel preferences. Specifically, we introduce a visiting intention memory network (VIMN) to capture the visiting intentions at each record, along with a shared pool of human travel preference prompts (HTPP) to guide the LLM in understanding users\u2019 travel preferences. 
These components enhance the model\u2019s ability to extract and leverage semantic information from human mobility data effectively. Extensive experiments on four benchmark datasets and three downstream tasks demonstrate that our approach significantly outperforms existing models, underscoring the effectiveness of Mobility-LLM in advancing our understanding of human mobility data within LBS contexts.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/96914"} +{"video_file": "0jld45XGgJ_39028310.mp4", "openreview_id": "0jld45XGgJ", "slideslive_id": 39028310, "venue": "nips2024", "title": "Neural collapse vs. low-rank bias: Is deep neural collapse really optimal?", "status": "Poster", "keywords": "neural collapse;deep neural collapse;unconstrained features model;deep unconstrained features model;low-rank bias", "tldr": "We show theoretically and empirically that deep neural collapse is not an optimal solution in the general multi-class non-linear deep unconstrained features model due to a low-rank bias of weight regularization.", "abstract": "Deep neural networks (DNNs) exhibit a surprising structure in their final layer known as neural collapse (NC), and a growing body of works is currently investigated the propagation of neural collapse to earlier layers of DNNs -- a phenomenon called deep neural collapse (DNC). However, existing theoretical results are restricted to either linear models, the last two layers or binary classification. In contrast, we focus on non-linear models of arbitrary depth in multi-class classification and reveal a surprising qualitative shift. As soon as we go beyond two layers or two classes, DNC stops being optimal for the deep unconstrained features model (DUFM) -- the standard theoretical framework for the analysis of collapse. The main culprit is the low-rank bias of multi-layer regularization schemes. This bias leads to optimal solutions of even lower rank than the neural collapse. We support our theoretical findings with experiments on both DUFM and real data, which show the emergence of the low-rank structure in the solution found by gradient descent.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/96913"} +{"video_file": "0m19blQT6y_39028770.mp4", "openreview_id": "0m19blQT6y", "slideslive_id": 39028770, "venue": "nips2024", "title": "BitsFusion: 1.99 bits Weight Quantization of Diffusion Model", "status": "Poster", "keywords": "Diffusion;Quantization;Stable Diffusion;Low bit", "tldr": "We propose BitsFusion, which quantizes the text-to-image model into 1.99 bits.", "abstract": "Diffusion-based image generation models have achieved great success in recent years by showing the capability of synthesizing high-quality content. However, these models contain a huge number of parameters, resulting in a significantly large model size. Saving and transferring them is a major bottleneck for various applications, especially those running on resource-constrained devices. In this work, we develop a novel weight quantization method that quantizes the UNet from Stable Diffusion v1.5 to\n1.99\nbits, achieving a model with\n7.9\n\u00d7\nsmaller size while exhibiting even better generation quality than the original one. Our approach includes several novel techniques, such as assigning optimal bits to each layer, initializing the quantized model for better performance, and improving the training strategy to dramatically reduce quantization error. 
Furthermore, we extensively evaluate our quantized model across various benchmark datasets and through human evaluation to demonstrate its superior generation quality.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96909"} +{"video_file": "0og7nmvDbe_39028701.mp4", "openreview_id": "0og7nmvDbe", "slideslive_id": 39028701, "venue": "nips2024", "title": "Confidence Regulation Neurons in Language Models", "status": "Poster", "keywords": "LLMs;Interpretability;Mechanistic Interpretability", "tldr": "We study how LLMs regulate uncertainty using entropy neurons, which modulate entropy via an unembedding null space, and token frequency neurons, which adjust logits based on token frequency.", "abstract": "Despite their widespread use, the mechanisms by which large language models (LLMs) represent and regulate uncertainty in next-token predictions remain largely unexplored. This study investigates two critical components believed to influence this uncertainty: the recently discovered entropy neurons and a new set of components that we term token frequency neurons. Entropy neurons are characterized by an unusually high weight norm and influence the final layer normalization (LayerNorm) scale to effectively scale down the logits. Our work shows that entropy neurons operate by writing onto an \\textit{unembedding null space}, allowing them to impact the residual stream norm with minimal direct effect on the logits themselves. We observe the presence of entropy neurons across a range of models, up to 7 billion parameters. On the other hand, token frequency neurons, which we discover and describe here for the first time, boost or suppress each token\u2019s logit proportionally to its log frequency, thereby shifting the output distribution towards or away from the unigram distribution. Finally, we present a detailed case study where entropy neurons actively manage confidence: the setting of induction, i.e. detecting and continuing repeated subsequences.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/96903"} +{"video_file": "0qb8KoPsej_39025925.mp4", "openreview_id": "0qb8KoPsej", "slideslive_id": 39025925, "venue": "nips2024", "title": "Accelerating Matroid Optimization through Fast Imprecise Oracles", "status": "Poster", "keywords": "matroid optimization;weak-strong models;learning-augmented algorithms;algorithms with predictions;query minimization;robustness", "tldr": "We study matroid basis problems in a two-oracle model, where we have access to a fast but potentially imprecise and to a slow but precise independence oracle.", "abstract": "Querying complex models for precise information (e.g. traffic models, database systems, large ML models) often entails intense computations and results in long response times. Thus, weaker models which give imprecise results quickly can be advantageous, provided inaccuracies can be resolved using few queries to a stronger model. In the fundamental problem of computing a maximum-weight basis of a matroid, a well-known generalization of many combinatorial optimization problems, algorithms have access to a clean oracle to query matroid information. We additionally equip algorithms with a fast but dirty oracle. We design and analyze practical algorithms which only use few clean queries w.r.t. the quality of the dirty oracle, while maintaining robustness against arbitrarily poor dirty oracles, approaching the performance of classic algorithms for the given problem. 
Notably, we prove that our algorithms are, in many respects, best-possible. Further, we outline extensions to other matroid oracle types, non-free dirty oracles and other matroid problems.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96902"} +{"video_file": "0uXtFk5KNJ_39028687.mp4", "openreview_id": "0uXtFk5KNJ", "slideslive_id": 39028687, "venue": "nips2024", "title": "BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models", "status": "Poster", "keywords": "block coordinate descent;large language models", "tldr": "This work designs a memory efficient block coordinate descent optimization method for finetuning LLMs, which effectively finetunes Llama 3-8B and Llama 3-70B using a single RTX3090-24GB GPU and 4 A100-80GB GPUs, respectively.", "abstract": "This work presents BAdam, an optimization method that leverages the block coordinate descent (BCD) framework with Adam's update rule. BAdam offers a memory efficient approach to the full parameter finetuning of large language models. We conduct a theoretical convergence analysis for BAdam in the deterministic case. Experimentally, we apply BAdam to finetune the Llama 3-8B and Llama 3-70B models using a single RTX3090-24GB GPU and 4 A100-80GB GPUs, respectively. The results confirm BAdam's efficiency in terms of memory usage, running time, and optimization capability. Furthermore, the downstream performance evaluation based on MT-bench and math benchmarks shows that BAdam outperforms existing memory efficient baselines such as LoRA. It also demonstrates that BAdam can achieve comparable or even superior performance compared to Adam. Finally, the ablation study using SGD's update rule illustrates the suitability of BCD for finetuning LLMs. Our code can be easily integrated into any PyTorch-based codebase and is available at https://github.com/Ledzy/BAdam.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96897"} +{"video_file": "0zFVhMBZHJ_39028527.mp4", "openreview_id": "0zFVhMBZHJ", "slideslive_id": 39028527, "venue": "nips2024", "title": "Mixture of Tokens: Continuous MoE through Cross-Example Aggregation", "status": "Poster", "keywords": "LLM;Mixture of Experts;MoE;conditional computation;fully-differentiable;language modeling", "tldr": "Introducing Mixture of Tokens, a fully-differentiable architecture retaining scalability of sparse MoE in language modeling.", "abstract": "Mixture of Experts (MoE) models based on Transformer architecture are pushing the boundaries of language and vision tasks. The allure of these models lies in their ability to substantially increase the parameter count without a corresponding increase in FLOPs. Most widely adopted MoE models are discontinuous with respect to their parameters - often referred to as sparse. At the same time, existing continuous MoE designs either lag behind their sparse counterparts or are incompatible with autoregressive decoding. Motivated by the observation that the adaptation of fully continuous methods has been an overarching trend in Deep Learning, we develop Mixture of Tokens (MoT), a simple, continuous architecture that is capable of scaling the number of parameters similarly to sparse MoE models. Unlike conventional methods, MoT assigns mixtures of tokens from different examples to each expert. This architecture is fully compatible with autoregressive training and generation. 
Our best models not only achieve a 3x increase in training speed over dense Transformer models in language pretraining but also match the performance of state-of-the-art MoE architectures. Additionally, a close connection between MoT and MoE is demonstrated through a novel technique we call transition tuning.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96896"} +{"video_file": "0zWzJj6lO3_39024377.mp4", "openreview_id": "0zWzJj6lO3", "slideslive_id": 39024377, "venue": "nips2024", "title": "Cooperate or Collapse: Emergence of Sustainable Cooperation in a Society of LLM Agents", "status": "Poster", "keywords": "cooperative AI;AI safety;LLM agents;cognitive science;language model evaluation;dynamic evaluation;alignment;agency;evolving benchmarks;multi-agent interactions", "tldr": "We build a simulation environment to test sustainability behavior in a society of LLMs, based on the economics theory of Governing the Commons.", "abstract": "As AI systems pervade human life, ensuring that large language models (LLMs) make safe decisions remains a significant challenge. We introduce the Governance of the Commons Simulation (GovSim), a generative simulation platform designed to study strategic interactions and cooperative decision-making in LLMs. In GovSim, a society of AI agents must collectively balance exploiting a common resource with sustaining it for future use. This environment enables the study of how ethical considerations, strategic planning, and negotiation skills impact cooperative outcomes. We develop an LLM-based agent architecture and test it with the leading open and closed LLMs. We find that all but the most powerful LLM agents fail to achieve a sustainable equilibrium in GovSim, with the highest survival rate below 54%. Ablations reveal that successful multi-agent communication between agents is critical for achieving cooperation in these cases. Furthermore, our analyses show that the failure to achieve sustainable cooperation in most LLMs stems from their inability to formulate and analyze hypotheses about the long-term effects of their actions on the equilibrium of the group. Finally, we show that agents that leverage \"Universalization\"-based reasoning, a theory of moral thinking, are able to achieve significantly better sustainability. Taken together, GovSim enables us to study the mechanisms that underlie sustainable self-government with specificity and scale. We open source the full suite of our research results, including the simulation environment, agent prompts, and a comprehensive web interface.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/96895"} +{"video_file": "105ZuvpdyW_39027053.mp4", "openreview_id": "105ZuvpdyW", "slideslive_id": 39027053, "venue": "nips2024", "title": "SegVol: Universal and Interactive Volumetric Medical Image Segmentation", "status": "Spotlight", "keywords": "Volumetric Medical Image Segmentation;3D Segmentation Foundation Model;Universal and Interactive 3D Segmentation", "tldr": "We propose a foundation model for universal and interactive volumetric medical image segmentation, trained on the collected 90K unlabeled and 6K labeled data.", "abstract": "Precise image segmentation provides clinical study with instructive information. 
Despite the remarkable progress achieved in medical image segmentation, there is still an absence of a 3D foundation segmentation model that can segment a wide range of anatomical categories with easy user interaction. In this paper, we propose a 3D foundation segmentation model, named SegVol, supporting universal and interactive volumetric medical image segmentation. By scaling up training data to 90K unlabeled Computed Tomography (CT) volumes and 6K labeled CT volumes, this foundation model supports the segmentation of over 200 anatomical categories using semantic and spatial prompts. To facilitate efficient and precise inference on volumetric images, we design a zoom-out-zoom-in mechanism. Extensive experiments on 22 anatomical segmentation tasks verify that SegVol outperforms the competitors in 19 tasks, with improvements up to 37.24% compared to the runner-up methods. We demonstrate the effectiveness and importance of specific designs by ablation study. We expect this foundation model can promote the development of volumetric medical image analysis. The model and code are publicly available at https://github.com/BAAI-DCAI/SegVol.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/96893"} +{"video_file": "1067784F6e_39028634.mp4", "openreview_id": "1067784F6e", "slideslive_id": 39028634, "venue": "nips2024", "title": "Data Distribution Valuation", "status": "Poster", "keywords": "Data distribution valuation;Huber model;Maximum mean discrepancy", "tldr": "We propose a maximum mean discrepancy-based valuation for data distribution and utilize a game-theoretic special case.", "abstract": "Data valuation is a class of techniques for quantitatively assessing the value of data for applications like pricing in data marketplaces. Existing data valuation methods define a value for a discrete dataset. However, in many use cases, users are interested in not only the value of the dataset, but that of the distribution from which the dataset was sampled. For example, consider a buyer trying to evaluate whether to purchase data from different vendors. The buyer may observe (and compare) only a small preview sample from each vendor, to decide which vendor's data distribution is most useful to the buyer and purchase. The core question is how should we compare the values of data distributions from their samples? Under a Huber characterization of the data heterogeneity across vendors, we propose a maximum mean discrepancy (MMD)-based valuation method which enables theoretically principled and actionable policies for comparing data distributions from samples. 
We empirically demonstrate that our method is sample-efficient and effective in identifying valuable data distributions against several existing baselines, on multiple real-world datasets (e.g., network intrusion detection, credit card fraud detection) and downstream applications (classification, regression).", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/96892"} +{"video_file": "164QnJsYjF_39026649.mp4", "openreview_id": "164QnJsYjF", "slideslive_id": 39026649, "venue": "nips2024", "title": "Dense Associative Memory Through the Lens of Random Features", "status": "Poster", "keywords": "Associative Memory;Kernels;Random Features;Hopfield Network", "tldr": "We can approximate Dense Associative Memory energies and dynamics using random features from kernel methods, making it possible to introduce new memories without increasing the number of weights.", "abstract": "Dense Associative Memories are high storage capacity variants of the Hopfield networks that are capable of storing a large number of memory patterns in the weights of the network of a given size. Their common formulations typically require storing each pattern in a separate set of synaptic weights, which leads to the increase of the number of synaptic weights when new patterns are introduced. In this work we propose an alternative formulation of this class of models using random features, commonly used in kernel methods. In this formulation the number of network's parameters remains fixed. At the same time, new memories can be added to the network by modifying existing weights. We show that this novel network closely approximates the energy function and dynamics of conventional Dense Associative Memories and shares their desirable computational properties.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/96886"} +{"video_file": "18RdkSv9h9_39028034.mp4", "openreview_id": "18RdkSv9h9", "slideslive_id": 39028034, "venue": "nips2024", "title": "FINALLY: fast and universal speech enhancement with studio-like quality", "status": "Poster", "keywords": "speech enhancement; generative models", "tldr": "2024 SoTA for speech enhancement", "abstract": "In this paper, we address the challenge of speech enhancement in real-world recordings, which often contain various forms of distortion, such as background noise, reverberation, and microphone artifacts. We revisit the use of Generative Adversarial Networks (GANs) for speech enhancement and theoretically show that GANs are naturally inclined to seek the point of maximum density within the conditional clean speech distribution, which, as we argue, is essential for speech enhancement task. We study various feature extractors for perceptual loss to facilitate the stability of adversarial training, developing a methodology for probing the structure of the feature space. This leads us to integrate WavLM-based perceptual loss into MS-STFT adversarial training pipeline, creating an effective and stable training procedure for the speech enhancement model. The resulting speech enhancement model, which we refer to as FINALLY, builds upon the HiFi++ architecture, augmented with a WavLM encoder and a novel training pipeline. Empirical results on various datasets confirm our model's ability to produce clear, high-quality speech at 48 kHz, achieving state-of-the-art performance in the field of speech enhancement. 
Demo page: https://samsunglabs.github.io/FINALLY-page/", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/96882"} +{"video_file": "1ELFGSNBGC_39028060.mp4", "openreview_id": "1ELFGSNBGC", "slideslive_id": 39028060, "venue": "nips2024", "title": "Multiview Scene Graph", "status": "Poster", "keywords": "scene representation; spatial understanding; place recognition; object correspondence; scene graph", "tldr": "building a place+object multiview scene graph from unposed images as a topological scene representation", "abstract": "A proper scene representation is central to the pursuit of spatial intelligence where agents can robustly reconstruct and efficiently understand 3D scenes. A scene representation is either metric, such as landmark maps in 3D reconstruction, 3D bounding boxes in object detection, or voxel grids in occupancy prediction, or topological, such as pose graphs with loop closures in SLAM or visibility graphs in SfM. In this work, we propose to build Multiview Scene Graphs (MSG) from unposed images, representing a scene topologically with interconnected place and object nodes. The task of building MSG is challenging for existing representation learning methods since it needs to jointly address both visual place recognition, object detection, and object association from images with limited fields of view and potentially large viewpoint changes. To evaluate any method tackling this task, we developed an MSG dataset and annotation based on a public 3D dataset. We also propose an evaluation metric based on the intersection-over-union score of MSG edges. Moreover, we develop a novel baseline method built on mainstream pretrained vision models, combining visual place recognition and object association into one Transformer decoder architecture. Experiments demonstrate that our method has superior performance compared to existing relevant baselines.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96878"} +{"video_file": "1MCseWaFZb_39027443.mp4", "openreview_id": "1MCseWaFZb", "slideslive_id": 39027443, "venue": "nips2024", "title": "CryoSPIN: Improving Ab-Initio Cryo-EM Reconstruction with Semi-Amortized Pose Inference", "status": "Poster", "keywords": "Cryo-EM 3D reconstruction;Pose estimation;Semi-Amortization;Multi-choice learning", "tldr": "We propose a new approach to ab-initio cryo-EM 3D reconstruction using semi-amortization to accelerate pose convergence and multi-head pose encoder to handle pose uncertainty.", "abstract": "Cryo-EM is an increasingly popular method for determining the atomic resolution 3D structure of macromolecular complexes (eg, proteins) from noisy 2D images captured by an electron microscope. The computational task is to reconstruct the 3D density of the particle, along with 3D pose of the particle in each 2D image, for which the posterior pose distribution is highly multi-modal. Recent developments in cryo-EM have focused on deep learning for which amortized inference has been used to predict pose. Here, we address key problems with this approach, and propose a new semi-amortized method, cryoSPIN, in which reconstruction begins with amortized inference and then switches to a form of auto-decoding to refine poses locally using stochastic gradient descent. 
Through evaluation on synthetic datasets, we demonstrate that cryoSPIN is able to handle multi-modal pose distributions during the amortized inference stage, while the later, more flexible stage of direct pose optimization yields faster and more accurate convergence of poses compared to baselines. On experimental data, we show that cryoSPIN outperforms the state-of-the-art cryoAI in speed and reconstruction quality.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/96869"} +{"video_file": "1PmsSugB87_39025568.mp4", "openreview_id": "1PmsSugB87", "slideslive_id": 39025568, "venue": "nips2024", "title": "Evidential Stochastic Differential Equations for Time-Aware Sequential Recommendation", "status": "Poster", "keywords": "Sequential recommendation;time-evolving behavior", "tldr": "We formulate a novel Evidential Neural Stochastic Differential Equation (E-NSDE) to seamlessly integrate NSDE and evidential learning for effective time-aware sequential recommendations.", "abstract": "Sequential recommender systems are designed to capture users' evolving interests over time. Existing methods typically assume a uniform time interval among consecutive user interactions and may not capture users' continuously evolving behavior in the short and long term. In reality, the actual time intervals of user interactions vary dramatically. Consequently, as the time interval between interactions increases, so does the uncertainty in user behavior. Intuitively, it is beneficial to establish a correlation between the interaction time interval and the model uncertainty to provide effective recommendations. To this end, we formulate a novel Evidential Neural Stochastic Differential Equation (E-NSDE) to seamlessly integrate NSDE and evidential learning for effective time-aware sequential recommendations. The NSDE enables the model to learn users' fine-grained time-evolving behavior by capturing continuous user representation while evidential learning quantifies both aleatoric and epistemic uncertainties considering interaction time interval to provide model confidence during prediction. Furthermore, we derive a mathematical relationship between the interaction time interval and model uncertainty to guide the learning process. Experiments on real-world data demonstrate the effectiveness of the proposed method compared to the SOTA methods.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/96864"} +{"video_file": "1cXdndzkxU_39025878.mp4", "openreview_id": "1cXdndzkxU", "slideslive_id": 39025878, "venue": "nips2024", "title": "An Adaptive Approach for Infinitely Many-armed Bandits under Generalized Rotting Constraints", "status": "Poster", "keywords": "bandits;rotting rewards;infinitely many arms", "tldr": "Infinitely many-armed bandits with rotting rewards under generalized rotting constraints.", "abstract": "In this study, we consider the infinitely many-armed bandit problems in a rested rotting setting, where the mean reward of an arm may decrease with each pull, while otherwise, it remains unchanged. We explore two scenarios regarding the rotting of rewards: one in which the cumulative amount of rotting is bounded by\nV\nT\n, referred to as the slow-rotting case, and the other in which the cumulative number of rotting instances is bounded by\nS\nT\n, referred to as the abrupt-rotting case. 
To address the challenge posed by rotting rewards, we introduce an algorithm that utilizes UCB with an adaptive sliding window, designed to manage the bias and variance trade-off arising due to rotting rewards. Our proposed algorithm achieves tight regret bounds for both slow and abrupt rotting scenarios. Lastly, we demonstrate the performance of our algorithm using numerical experiments.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/96859"} +{"video_file": "1e3MOwHSIX_39027082.mp4", "openreview_id": "1e3MOwHSIX", "slideslive_id": 39027082, "venue": "nips2024", "title": "MAGNET: Improving the Multilingual Fairness of Language Models with Adaptive Gradient-Based Tokenization", "status": "Poster", "keywords": "tokenization;multilingual LMs;over-segmentation;fariness", "tldr": "We develop gradient based tokenizers that promote uniform segmentation granularity across languages in multilingual LMs.", "abstract": "In multilingual settings, non-Latin scripts and low-resource languages are usually disadvantaged in terms of language models\u2019 utility, efficiency, and cost. Specifically, previous studies have reported multiple modeling biases that the current tokenization algorithms introduce to non-Latin script languages, the main one being over-segmentation. In this work, we propose MAGNET\u2014 multilingual adaptive gradient-based tokenization\u2014to reduce over-segmentation via adaptive gradient-based subword tokenization. MAGNET learns to predict segment boundaries between byte tokens in a sequence via sub-modules within the model, which act as internal boundary predictors (tokenizers). Previous gradient-based tokenization methods aimed for uniform compression across sequences by integrating a single boundary predictor during training and optimizing it end-to-end through stochastic reparameterization alongside the next token prediction objective. However, this approach still results in over-segmentation for non-Latin script languages in multilingual settings. In contrast, MAGNET offers a customizable architecture where byte-level sequences are routed through language-script-specific predictors, each optimized for its respective language script. This modularity enforces equitable segmentation granularity across different language scripts compared to previous methods. Through extensive experiments, we demonstrate that in addition to reducing segmentation disparities, MAGNET also enables faster language modeling and improves downstream utility.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96857"} +{"video_file": "1f82rnwCbl_39024913.mp4", "openreview_id": "1f82rnwCbl", "slideslive_id": 39024913, "venue": "nips2024", "title": "Learning to Discuss Strategically: A Case Study on One Night Ultimate Werewolf", "status": "Poster", "keywords": "Large Language Models;LLM-based Agent;Reinforcement Learning", "tldr": "We propose a novel RL-instructed language agent framework to play the One Night Ultimate Werewolf game.", "abstract": "Communication is a fundamental aspect of human society, facilitating the exchange of information and beliefs among people. Despite the advancements in large language models (LLMs), recent agents built with these often neglect the control over discussion tactics, which are essential in communication scenarios and games. 
As a variant of the famous communication game Werewolf, One Night Ultimate Werewolf (ONUW) requires players to develop strategic discussion policies due to the potential role changes that increase the uncertainty and complexity of the game. In this work, we first present the existence of the Perfect Bayesian Equilibria (PBEs) in two scenarios of the ONUW game: one with discussion and one without. The results showcase that the discussion greatly changes players' utilities by affecting their beliefs, emphasizing the significance of discussion tactics. Based on the insights obtained from the analyses, we propose an RL-instructed language agent framework, where a discussion policy trained by reinforcement learning (RL) is employed to determine appropriate discussion tactics to adopt. Our experimental results on several ONUW game settings demonstrate the effectiveness and generalizability of our proposed framework.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96856"} +{"video_file": "1iHmhMHNyA_39027774.mp4", "openreview_id": "1iHmhMHNyA", "slideslive_id": 39027774, "venue": "nips2024", "title": "Large Language Models as Urban Residents: An LLM Agent Framework for Personal Mobility Generation", "status": "Poster", "keywords": "LLM;human mobility;urban computing;trajectory generation", "tldr": "This paper introduces an LLM agent framework for personal mobility generation.", "abstract": "This paper introduces a novel approach using Large Language Models (LLMs) integrated into an agent framework for flexible and effective personal mobility generation. LLMs overcome the limitations of previous models by effectively processing semantic data and offering versatility in modeling various tasks. Our approach addresses three research questions: aligning LLMs with real-world urban mobility data, developing reliable activity generation strategies, and exploring LLM applications in urban mobility. The key technical contribution is a novel LLM agent framework that accounts for individual activity patterns and motivations, including a self-consistency approach to align LLMs with real-world activity data and a retrieval-augmented strategy for interpretable activity generation. We evaluate our LLM agent framework and compare it with state-of-the-art personal mobility generation approaches, demonstrating the effectiveness of our approach and its potential applications in urban mobility. Overall, this study marks the pioneering work of designing an LLM agent framework for activity generation based on real-world human activity data, offering a promising tool for urban mobility analysis.", "primary_area": "machine_learning_for_social_sciences", "site": "https://neurips.cc/virtual/2024/poster/96855"} +{"video_file": "1l9cEyFmxg_39025080.mp4", "openreview_id": "1l9cEyFmxg", "slideslive_id": 39025080, "venue": "nips2024", "title": "Unlocking the Capabilities of Masked Generative Models for Image Synthesis via Self-Guidance", "status": "Poster", "keywords": "Image synthesis;discrete diffusion models;masked generative models;sampling guidance;parameter- efficient fine-tuning", "tldr": "We propose a guided sampling techniques for masked generative models and empirically demonstrate its effectiveness for image generation.", "abstract": "Masked generative models (MGMs) have shown impressive generative ability while providing an order of magnitude efficient sampling steps compared to continuous diffusion models. 
However, MGMs still underperform in image synthesis compared to recent well-developed continuous diffusion models with similar size in terms of quality and diversity of generated samples. A key factor in the performance of continuous diffusion models stems from the guidance methods, which enhance the sample quality at the expense of diversity. In this paper, we extend these guidance methods to generalized guidance formulation for MGMs and propose a self-guidance sampling method, which leads to better generation quality. The proposed approach leverages an auxiliary task for semantic smoothing in vector-quantized token space, analogous to the Gaussian blur in continuous pixel space. Equipped with the parameter-efficient fine-tuning method and high-temperature sampling, MGMs with the proposed self-guidance achieve a superior quality-diversity trade-off, outperforming existing sampling methods in MGMs with more efficient training and sampling costs. Extensive experiments with the various sampling hyperparameters confirm the effectiveness of the proposed self-guidance.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96852"} +{"video_file": "1mAaewThcz_39024527.mp4", "openreview_id": "1mAaewThcz", "slideslive_id": 39024527, "venue": "nips2024", "title": "Theoretical and Empirical Insights into the Origins of Degree Bias in Graph Neural Networks", "status": "Poster", "keywords": "graph learning;fairness;degree", "tldr": "We provide an empirically-validated theoretical analysis of the origins of degree bias in message-passing graph neural networks.", "abstract": "Graph Neural Networks (GNNs) often perform better for high-degree nodes than low-degree nodes on node classification tasks. This degree bias can reinforce social marginalization by, e.g., privileging celebrities and other high-degree actors in social networks during social and content recommendation. While researchers have proposed numerous hypotheses for why GNN degree bias occurs, we find via a survey of 38 degree bias papers that these hypotheses are often not rigorously validated, and can even be contradictory. Thus, we provide an analysis of the origins of degree bias in message-passing GNNs with different graph filters. We prove that high-degree test nodes tend to have a lower probability of misclassification regardless of how GNNs are trained. Moreover, we show that degree bias arises from a variety of factors that are associated with a node's degree (e.g., homophily of neighbors, diversity of neighbors). Furthermore, we show that during training, some GNNs may adjust their loss on low-degree nodes more slowly than on high-degree nodes; however, with sufficiently many epochs of training, message-passing GNNs can achieve their maximum possible training accuracy, which is not significantly limited by their expressive power. Throughout our analysis, we connect our findings to previously-proposed hypotheses for the origins of degree bias, supporting and unifying some while drawing doubt to others. 
We validate our theoretical findings on 8 common real-world networks, and based on our theoretical and empirical insights, describe a roadmap to alleviate degree bias.", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/96851"}
+{"video_file": "1po4j1Tv7O_39026881.mp4", "openreview_id": "1po4j1Tv7O", "slideslive_id": 39026881, "venue": "nips2024", "title": "Sample-Efficient Constrained Reinforcement Learning with General Parameterization", "status": "Poster", "keywords": "Constrained MDP;Sample Complexity;Constraint Violation;Global Optimality.", "tldr": "New state-of-the-art sample complexity for CMDPs.", "abstract": "We consider a constrained Markov Decision Problem (CMDP) where the goal of an agent is to maximize the expected discounted sum of rewards over an infinite horizon while ensuring that the expected discounted sum of costs exceeds a certain threshold. Building on the idea of momentum-based acceleration, we develop the Primal-Dual Accelerated Natural Policy Gradient (PD-ANPG) algorithm that ensures an \u03f5 global optimality gap and \u03f5 constraint violation with \u00d5((1-\u03b3)^{-7}\u03f5^{-2}) sample complexity for general parameterized policies where \u03b3 denotes the discount factor. This improves the state-of-the-art sample complexity in general parameterized CMDPs by a factor of O((1-\u03b3)^{-1}\u03f5^{-2}) and achieves the theoretical lower bound in \u03f5^{-1}.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96849"}
+{"video_file": "1qfdCAXn6K_39028051.mp4", "openreview_id": "1qfdCAXn6K", "slideslive_id": 39028051, "venue": "nips2024", "title": "Wasserstein Distance Rivals Kullback-Leibler Divergence for Knowledge Distillation", "status": "Poster", "keywords": "Knowledge Distillation; Wasserstein Distance; Image Classification; Object Detection", "tldr": "We propose novel knowledge distillation methods based on Wasserstein Distance, which outperforms predominant KL divergence based ones and other state-of-the-art competitors.", "abstract": "Since pioneering work of Hinton et al., knowledge distillation based on Kullback-Leibler Divergence (KL-Div) has been predominant, and recently its variants have achieved compelling performance. However, KL-Div only compares probabilities of the corresponding category between the teacher and student while lacking a mechanism for cross-category comparison. Besides, KL-Div is problematic when applied to intermediate layers, as it cannot handle non-overlapping distributions and is unaware of geometry of the underlying manifold. To address these downsides, we propose a methodology of Wasserstein Distance (WD) based knowledge distillation. Specifically, we propose a logit distillation method called WKD-L based on discrete WD, which performs cross-category comparison of probabilities and thus can explicitly leverage rich interrelations among categories. Moreover, we introduce a feature distillation method called WKD-F, which uses a parametric method for modeling feature distributions and adopts continuous WD for transferring knowledge from intermediate layers. 
Comprehensive evaluations on image classification and object detection have shown (1) for logit distillation WKD-L outperforms very strong KL-Div variants; (2) for feature distillation WKD-F is superior to the KL-Div counterparts and state-of-the-art competitors.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96847"} +{"video_file": "1u3qkG7BkQ_39026781.mp4", "openreview_id": "1u3qkG7BkQ", "slideslive_id": 39026781, "venue": "nips2024", "title": "Language-Driven Interactive Traffic Trajectory Generation", "status": "Poster", "keywords": "Trajectory generation;Interaction;Language control", "tldr": "In this work, we propose InteractTraj, the first language-driven traffic trajectory generator that can generate interactive traffic trajectories.", "abstract": "Realistic trajectory generation with natural language control is pivotal for advancing autonomous vehicle technology. However, previous methods focus on individual traffic participant trajectory generation, thus failing to account for the complexity of interactive traffic dynamics. In this work, we propose InteractTraj, the first language-driven traffic trajectory generator that can generate interactive traffic trajectories. InteractTraj interprets abstract trajectory descriptions into concrete formatted interaction-aware numerical codes and learns a mapping between these formatted codes and the final interactive trajectories. To interpret language descriptions, we propose a language-to-code encoder with a novel interaction-aware encoding strategy. To produce interactive traffic trajectories, we propose a code-to-trajectory decoder with interaction-aware feature aggregation that synergizes vehicle interactions with the environmental map and the vehicle moves. Extensive experiments show our method demonstrates superior performance over previous SoTA methods, offering a more realistic generation of interactive traffic trajectories with high controllability via diverse natural language commands.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96845"} +{"video_file": "1v0BPTR3AA_39028812.mp4", "openreview_id": "1v0BPTR3AA", "slideslive_id": 39028812, "venue": "nips2024", "title": "Generalized Tensor Decomposition for Understanding Multi-Output Regression under Combinatorial Shifts", "status": "Poster", "keywords": "multi-output regression;tensor singular value decomposition;tensor completion", "tldr": "This paper tackles combinatorial distribution shift in multi-output regression using generalized tensor decomposition, Ft-SVD theorem, and a two-stage algorithm that exploits low-rank structure for better prediction under shifts.", "abstract": "In multi-output regression, we identify a previously neglected challenge that arises from the inability of training distribution to cover all combinations of input features, leading to combinatorial distribution shift (CDS). To the best of our knowledge, this is the first work to formally define and address this problem. We tackle it through a novel tensor decomposition perspective, proposing the Functional t-Singular Value Decomposition (Ft-SVD) theorem which extends the classical tensor SVD to infinite and continuous feature domains, providing a natural tool for representing and analyzing multi-output functions. 
Within the Ft-SVD framework, we formulate the multi-output regression problem under CDS as a low-rank tensor estimation problem under the missing not at random (MNAR) setting, and introduce a series of assumptions about the true functions, training and testing distributions, and spectral properties of the ground-truth embeddings, making the problem more tractable. To address the challenges posed by CDS in multi-output regression, we develop a tailored Double-Stage Empirical Risk Minimization (ERM-DS) algorithm that leverages the spectral properties of the embeddings and uses specific hypothesis classes in each frequency component to better capture the varying spectral decay patterns. We provide rigorous theoretical analyses that establish performance guarantees for the ERM-DS algorithm. This work lays a preliminary theoretical foundation for multi-output regression under CDS.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96844"} +{"video_file": "1v4gKsyGfe_39028352.mp4", "openreview_id": "1v4gKsyGfe", "slideslive_id": 39028352, "venue": "nips2024", "title": "Understanding Linear Probing then Fine-tuning Language Models from NTK Perspective", "status": "Poster", "keywords": "fine-tuning;transfer learning;neural tangent kernel", "tldr": "We analyze fine-tuning strategy of LP-FT from NTK perspective and demonstrates its effectiveness for language models.", "abstract": "The two-stage fine-tuning (FT) method, linear probing (LP) then fine-tuning (LP-FT), outperforms linear probing and FT alone. This holds true for both in-distribution (ID) and out-of-distribution (OOD) data. One key reason for its success is the preservation of pre-trained features, achieved by obtaining a near-optimal linear head during LP. However, despite the widespread use of large language models, there has been limited exploration of more complex architectures such as Transformers. In this paper, we analyze the training dynamics of LP-FT for classification tasks on the basis of the neural tangent kernel (NTK) theory. Our analysis decomposes the NTK matrix into two components. This decomposition highlights the importance of the linear head norm alongside the prediction accuracy at the start of the FT stage. We also observe a significant increase in the linear head norm during LP, which stems from training with the cross-entropy (CE) loss. This increase in the linear head norm effectively reduces changes in learned features. Furthermore, we find that this increased norm can adversely affect model calibration, which can be corrected using temperature scaling. Additionally, we extend our analysis with the NTK to the low-rank adaptation (LoRA) method and validate its effectiveness. Our experiments using a Transformer-based model on multiple natural language processing datasets confirm our theoretical analysis. Our study demonstrates the effectiveness of LP-FT for fine-tuning language models. 
Code is available at https://github.com/tom4649/lp-ft_ntk.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/96843"} +{"video_file": "1vPqOmqSfO_39028431.mp4", "openreview_id": "1vPqOmqSfO", "slideslive_id": 39028431, "venue": "nips2024", "title": "Sketched Lanczos uncertainty score: a low-memory summary of the Fisher information", "status": "Poster", "keywords": "Sketching;Uncertainty;Lanczos;Laplace", "tldr": "We propose a memory-efficient and provably-good approximation of an enstablished uncertainty score", "abstract": "Current uncertainty quantification is memory and compute expensive, which hinders practical uptake. To counter, we develop Sketched Lanczos Uncertainty (SLU): an architecture-agnostic uncertainty score that can be applied to pre-trained neural networks with minimal overhead. Importantly, the memory use of SLU only grows logarithmically with the number of model parameters. We combine Lanczos' algorithm with dimensionality reduction techniques to compute a sketch of the leading eigenvectors of a matrix. Applying this novel algorithm to the Fisher information matrix yields a cheap and reliable uncertainty score. Empirically, SLU yields well-calibrated uncertainties, reliably detects out-of-distribution examples, and consistently outperforms existing methods in the low-memory regime.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/96842"} +{"video_file": "1ziIqFo4Tj_39027389.mp4", "openreview_id": "1ziIqFo4Tj", "slideslive_id": 39027389, "venue": "nips2024", "title": "HOPE: Shape Matching Via Aligning Different K-hop Neighbourhoods", "status": "Poster", "keywords": "Shape analysis;correspondences;registration", "tldr": "We propose to use different k-hop neighboorhoods of vertices (nodes) for refining initialized maps for shape matching.", "abstract": "Accurate and smooth shape matching is very hard to achieve. This is because for accuracy, one needs unique descriptors (signatures) on shapes that distinguish different vertices on a mesh accurately while at the same time being invariant to deformations. However, most existing unique shape descriptors are generally not smooth on the shape and are not noise-robust thus leading to non-smooth matches. On the other hand, for smoothness, one needs descriptors that are smooth and continuous on the shape. However, existing smooth descriptors are generally not unique and as such lose accuracy as they match neighborhoods (for smoothness) rather than exact vertices (for accuracy). In this work, we propose to use different k-hop neighborhoods of vertices as pairwise descriptors for shape matching. We use these descriptors in conjunction with local map distortion (LMD) to refine an initialized map for shape matching. 
We validate the effectiveness of our pipeline on benchmark datasets such as SCAPE, TOSCA, TOPKIDS, and others.", "primary_area": "active_learning", "site": "https://neurips.cc/virtual/2024/poster/96838"} +{"video_file": "204YOrDHny_39024844.mp4", "openreview_id": "204YOrDHny", "slideslive_id": 39024844, "venue": "nips2024", "title": "Reparameterization invariance in approximate Bayesian inference", "status": "Spotlight", "keywords": "Approximate Bayesian inference;Laplace approximations;Differential geometry;Reparametrization invariance", "tldr": "We provide a mathematical analysis of reparametrizations in the context of Laplace approximations, and devise novel a reparametrization invariant Riemannian diffusion-based approximate posterior.", "abstract": "Current approximate posteriors in Bayesian neural networks (BNNs) exhibit a crucial limitation: they fail to maintain invariance under reparameterization, i.e. BNNs assign different posterior densities to different parametrizations of identical functions. This creates a fundamental flaw in the application of Bayesian principles as it breaks the correspondence between uncertainty over the parameters with uncertainty over the parametrized function. In this paper, we investigate this issue in the context of the increasingly popular linearized Laplace approximation. Specifically, it has been observed that linearized predictives alleviate the common underfitting problems of the Laplace approximation. We develop a new geometric view of reparametrizations from which we explain the success of linearization. Moreover, we demonstrate that these reparameterization invariance properties can be extended to the original neural network predictive using a Riemannian diffusion process giving a straightforward algorithm for approximate posterior sampling, which empirically improves posterior fit.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/96837"} +{"video_file": "20QgErW5zH_39025814.mp4", "openreview_id": "20QgErW5zH", "slideslive_id": 39025814, "venue": "nips2024", "title": "Drones Help Drones: A Collaborative Framework for Multi-Drone Object Trajectory Prediction and Beyond", "status": "Poster", "keywords": "Multi-drone Collaboration;Perception and Prediction", "tldr": "We present a collaborative framework for multi-drone object trajectory prediction, including a drone-specific BEV generation module and a selective interaction strategy based on the local feature discrepancy.", "abstract": "Collaborative trajectory prediction can comprehensively forecast the future motion of objects through multi-view complementary information. However, it encounters two main challenges in multi-drone collaboration settings. The expansive aerial observations make it difficult to generate precise Bird's Eye View (BEV) representations. Besides, excessive interactions can not meet real-time prediction requirements within the constrained drone-based communication bandwidth. To address these problems, we propose a novel framework named \"Drones Help Drones\" (DHD). Firstly, we incorporate the ground priors provided by the drone's inclined observation to estimate the distance between objects and drones, leading to more precise BEV generation. Secondly, we design a selective mechanism based on the local feature discrepancy to prioritize the critical information contributing to prediction tasks during inter-drone interactions. 
Additionally, we create the first dataset for multi-drone collaborative prediction, named \"Air-Co-Pred\", and conduct quantitative and qualitative experiments to validate the effectiveness of our DHD framework. The results demonstrate that compared to state-of-the-art approaches, DHD reduces position deviation in BEV representations by over 20% and requires only a quarter of the transmission ratio for interactions while achieving comparable prediction performance. Moreover, DHD also shows promising generalization to the collaborative 3D object detection in CoPerception-UAVs.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96836"} +{"video_file": "25Ioxw576r_39028571.mp4", "openreview_id": "25Ioxw576r", "slideslive_id": 39028571, "venue": "nips2024", "title": "You Only Cache Once: Decoder-Decoder Architectures for Language Models", "status": "Oral", "keywords": "Decoder-Decoder;Model Architecture", "tldr": "We introduce a decoder-decoder architecture, YOCO, for large language models, which only caches key-value pairs once.", "abstract": "We introduce a decoder-decoder architecture, YOCO, for large language models, which only caches key-value pairs once. It consists of two components, i.e., a cross-decoder stacked upon a self-decoder. The self-decoder efficiently encodes global key-value (KV) caches that are reused by the cross-decoder via cross-attention. The overall model behaves like a decoder-only Transformer, although YOCO only caches once. The design substantially reduces GPU memory demands, yet retains global attention capability. Additionally, the computation flow enables prefilling to early exit without changing the final output, thereby significantly speeding up the prefill stage. Experimental results demonstrate that YOCO achieves favorable performance compared to Transformer in various settings of scaling up model size and number of training tokens. We also extend YOCO to 1M context length with near-perfect needle retrieval accuracy. The profiling results show that YOCO improves inference memory, prefill latency, and throughput by orders of magnitude across context lengths and model sizes.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/96833"} +{"video_file": "26BdXIY3ik_39027955.mp4", "openreview_id": "26BdXIY3ik", "slideslive_id": 39027955, "venue": "nips2024", "title": "TFGDA: Exploring Topology and Feature Alignment in Semi-supervised Graph Domain Adaptation through Robust Clustering", "status": "Poster", "keywords": "Graph Transfer Learning;Graph Domain Adaptatipn;Graphs Semi-supervised Learning;Graph Node Classification", "tldr": "A nove graph transfer learning framework for semi-supervised graph domain adaptation.", "abstract": "Semi-supervised graph domain adaptation, as a branch of graph transfer learning, aims to annotate unlabeled target graph nodes by utilizing transferable knowledge learned from a label-scarce source graph. However, most existing studies primarily concentrate on aligning feature distributions directly to extract domain-invariant features, while ignoring the utilization of the intrinsic structure information in graphs. Inspired by the significance of data structure information in enhancing models' generalization performance, this paper aims to investigate how to leverage the structure information to assist graph transfer learning. To this end, we propose an innovative framework called TFGDA. 
Specially, TFGDA employs a structure alignment strategy named STSA to encode graphs' topological structure information into the latent space, greatly facilitating the learning of transferable features. To achieve a stable alignment of feature distributions, we also introduce a SDA strategy to mitigate domain discrepancy on the sphere. Moreover, to address the overfitting issue caused by label scarcity, a simple but effective RNC strategy is devised to guide the discriminative clustering of unlabeled nodes. Experiments on various benchmarks demonstrate the superiority of TFGDA over SOTA methods.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/96831"} +{"video_file": "2AIwiIkE0s_39025562.mp4", "openreview_id": "2AIwiIkE0s", "slideslive_id": 39025562, "venue": "nips2024", "title": "Vision Transformer Neural Architecture Search for Out-of-Distribution Generalization: Benchmark and Insights", "status": "Poster", "keywords": "Vision Transformer;Neural Architecture Search;Out-of-Distribution Generalization", "tldr": "This work introduces OoD-ViT-NAS, the first systematic ViT NAS benchmark designed for out-of-distribution (OoD) generalization, and provide analytical insights on how ViT architectures impact OoD performance.", "abstract": "While Vision Transformer (ViT) have achieved success across various machine learning tasks, deploying them in real-world scenarios faces a critical challenge: generalizing under Out-of-Distribution (OoD) shifts. A crucial research gap remains in understanding how to design ViT architectures \u2013 both manually and automatically \u2013 to excel in OoD generalization. To address this gap, we introduce OoD-ViT-NAS, the first systematic benchmark for ViT Neural Architecture Search (NAS) focused on OoD generalization. This comprehensive benchmark includes 3,000 ViT architectures of varying model computational budgets evaluated on common large-scale OoD datasets. With this comprehensive benchmark at hand, we analyze the factors that contribute to the OoD generalization of ViT architecture. Our analysis uncovers several key insights. Firstly, we show that ViT architecture designs have a considerable impact on OoD generalization. Secondly, we observe that In-Distribution (ID) accuracy might not be a very good indicator of OoD accuracy. This underscores the risk that ViT architectures optimized for ID accuracy might not perform well under OoD shifts. Thirdly, we conduct the first study to explore NAS for ViT\u2019s OoD robustness. Specifically, we study 9 Training-free NAS for their OoD generalization performance on our benchmark. We observe that existing Training-free NAS are largely ineffective in predicting OoD accuracy despite their effectiveness at predicting ID accuracy. Moreover, simple proxies like #Param or #Flop surprisingly outperform more complex Training-free NAS in predicting ViTs OoD accuracy. Finally, we study how ViT architectural attributes impact OoD generalization. We discover that increasing embedding dimensions of a ViT architecture generally can improve the OoD generalization. We show that ViT architectures in our benchmark exhibit a wide range of OoD accuracy, with up to 11.85% for some OoD shift, prompting the importance to study ViT architecture design for OoD. We firmly believe that our OoD-ViT-NAS benchmark and our analysis can catalyze and streamline important research on understanding how ViT architecture designs influence OoD generalization. 
Our OoD-NAS-ViT benchmark and code are available at https://hosytuyen.github.io/projects/OoD-ViT-NAS", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/96829"} +{"video_file": "2HvgvB4aWq_39027405.mp4", "openreview_id": "2HvgvB4aWq", "slideslive_id": 39027405, "venue": "nips2024", "title": "Differentiable Task Graph Learning: Procedural Activity Representation and Online Mistake Detection from Egocentric Videos", "status": "Spotlight", "keywords": "Task Graph;Procedural Sequences;Online Mistake Detection;Video Understanding", "tldr": "Differentiable learning of task graphs of procedural activities from action sequences, enabling end-to-end models with emerging video understanding capabilities, tested on online mistake action detection in procedural egocentric videos.", "abstract": "Procedural activities are sequences of key-steps aimed at achieving specific goals. They are crucial to build intelligent agents able to assist users effectively. In this context, task graphs have emerged as a human-understandable representation of procedural activities, encoding a partial ordering over the key-steps. While previous works generally relied on hand-crafted procedures to extract task graphs from videos, in this paper, we propose an approach based on direct maximum likelihood optimization of edges' weights, which allows gradient-based learning of task graphs and can be naturally plugged into neural network architectures. Experiments on the CaptainCook4D dataset demonstrate the ability of our approach to predict accurate task graphs from the observation of action sequences, with an improvement of +16.7% over previous approaches. Owing to the differentiability of the proposed framework, we also introduce a feature-based approach, aiming to predict task graphs from key-step textual or video embeddings, for which we observe emerging video understanding abilities. Task graphs learned with our approach are also shown to significantly enhance online mistake detection in procedural egocentric videos, achieving notable gains of +19.8% and +7.5% on the Assembly101-O and EPIC-Tent-O datasets. Code for replicating the experiments is available at https://github.com/fpv-iplab/Differentiable-Task-Graph-Learning.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96827"} +{"video_file": "2Inwtjvyx8_39028685.mp4", "openreview_id": "2Inwtjvyx8", "slideslive_id": 39028685, "venue": "nips2024", "title": "Revisiting Adversarial Patches for Designing Camera-Agnostic Attacks against Person Detection", "status": "Poster", "keywords": "Physical Adversarial Attack;Person Detection;Adversarial Patch;Camera ISP", "tldr": "We propose a camera-agnostic physical adversarial attack, CAP (Camera-Agnostic Patch), addressing the oversight of camera influence in existing physical adversarial attack methods.", "abstract": "Physical adversarial attacks can deceive deep neural networks (DNNs), leading to erroneous predictions in real-world scenarios. To uncover potential security risks, attacking the safety-critical task of person detection has garnered significant attention. However, we observe that existing attack methods overlook the pivotal role of the camera, involving capturing real-world scenes and converting them into digital images, in the physical adversarial attack workflow. This oversight leads to instability and challenges in reproducing these attacks. 
In this work, we revisit patch-based attacks against person detectors and introduce a camera-agnostic physical adversarial attack to mitigate this limitation. Specifically, we construct a differentiable camera Image Signal Processing (ISP) proxy network to compensate for the physical-to-digital transition gap. Furthermore, the camera ISP proxy network serves as a defense module, forming an adversarial optimization framework with the attack module. The attack module optimizes adversarial patches to maximize effectiveness, while the defense module optimizes the conditional parameters of the camera ISP proxy network to minimize attack effectiveness. These modules engage in an adversarial game, enhancing cross-camera stability. Experimental results demonstrate that our proposed Camera-Agnostic Patch (CAP) attack effectively conceals persons from detectors across various imaging hardware, including two distinct cameras and four smartphones.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/96825"} +{"video_file": "2KuZHYykkq_39028396.mp4", "openreview_id": "2KuZHYykkq", "slideslive_id": 39028396, "venue": "nips2024", "title": "Mini-Sequence Transformers: Optimizing Intermediate Memory for Long Sequences Training", "status": "Poster", "keywords": "Long-Context;Foundation Models;Systems for ML;LLM Training;GPUs;Memory-efficient Training", "tldr": "We propose Mini Sequence to reduce intermediate memory overhead for long sequence training, with 12X longer than the standard implementation of LLaMA3-8b training on a single A100 device.", "abstract": "We introduce Mini-Sequence Transformer (MsT), a simple and effective methodology for highly efficient and accurate LLM training with extremely long sequences. MsT partitions input sequences and iteratively processes mini-sequences to reduce intermediate memory usage. Integrated with activation recomputation, it enables significant memory savings in both forward and backward passes. In experiments with the Llama3-8B model, with MsT, we measure no degradation in throughput or convergence even with 12x longer sequences than standard implementations. MsT is fully general, implementation-agnostic, and requires minimal code changes to integrate with existing LLM training frameworks. Integrated with the huggingface library, MsT successfully extends the maximum context length of Qwen, Mistral, and Gemma-2 by 12-24x.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/96824"} +{"video_file": "2LctgfN6Ty_39024503.mp4", "openreview_id": "2LctgfN6Ty", "slideslive_id": 39024503, "venue": "nips2024", "title": "Distributional Preference Alignment of LLMs via Optimal Transport", "status": "Poster", "keywords": "LLM Alignment;Optimal Transport;stochastic dominance", "tldr": "We propose AOT a distributional alignment of LLMs via Optimal Transport.", "abstract": "Current LLM alignment techniques use pairwise human preferences at a sample level, and as such, they do not imply an alignment on the distributional level. We propose in this paper Alignment via Optimal Transport (AOT), a novel method for distributional preference alignment of LLMs. AOT aligns LLMs on unpaired preference data by making the reward distribution of the positive samples stochastically dominant in the first order on the distribution of negative samples. We introduce a convex relaxation of this first-order stochastic dominance and cast it as an optimal transport problem with a smooth and convex cost. 
Thanks to the one-dimensional nature of the resulting optimal transport problem and the convexity of the cost, it has a closed-form solution via sorting on empirical measures. We fine-tune LLMs with this AOT objective, which enables alignment by penalizing the violation of the stochastic dominance of the reward distribution of the positive samples on the reward distribution of the negative samples. We analyze the sample complexity of AOT by considering the dual of the OT problem and show that it converges at the parametric rate. Empirically, we show on a diverse set of alignment datasets and LLMs that AOT leads to state-of-the-art models in the 7B family of models when evaluated with Open LLM Benchmarks and AlpacaEval. Code for\nAOT\nis available in the Hugging Face TRL library \\url{https://ibm.biz/AOT_TRL}.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96822"} +{"video_file": "2NfBBpbN9x_39025188.mp4", "openreview_id": "2NfBBpbN9x", "slideslive_id": 39025188, "venue": "nips2024", "title": "Utilizing Image Transforms and Diffusion Models for Generative Modeling of Short and Long Time Series", "status": "Poster", "keywords": "Time Series;Generative Models;Long Sequences", "tldr": "Towards a unified generative model for varying-length time series, we propose transforming time series data into images via invertible transforms and utilizing generative frameworks for time series generation.", "abstract": "Lately, there has been a surge in interest surrounding generative modeling of time series data. Most existing approaches are designed either to process short sequences or to handle long-range sequences. This dichotomy can be attributed to gradient issues with recurrent networks, computational costs associated with transformers, and limited expressiveness of state space models. Towards a unified generative model for varying-length time series, we propose in this work to transform sequences into images. By employing invertible transforms such as the delay embedding and the short-time Fourier transform, we unlock three main advantages: i) We can exploit advanced diffusion vision models; ii) We can remarkably process short- and long-range inputs within the same framework; and iii) We can harness recent and established tools proposed in the time series to image literature. We validate the effectiveness of our method through a comprehensive evaluation across multiple tasks, including unconditional generation, interpolation, and extrapolation. We show that our approach achieves consistently state-of-the-art results against strong baselines. In the unconditional generation tasks, we show remarkable mean improvements of\n58.17\n% over previous diffusion models in the short discriminative score and\n132.61\n% in the (ultra-)long classification scores. 
Code is at https://github.com/azencot-group/ImagenTime.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96819"} +{"video_file": "2RS0fL7Eet_39025418.mp4", "openreview_id": "2RS0fL7Eet", "slideslive_id": 39025418, "venue": "nips2024", "title": "Stochastic Optimization Algorithms for Instrumental Variable Regression with Streaming Data", "status": "Poster", "keywords": "instrumental variable regression;stochastic gradient descent;stochastic approximation;2SLS", "tldr": "We propose the first stochastic approximation algorithm for instrumental variable regression with streaming data and provide analysis.", "abstract": "We develop and analyze algorithms for instrumental variable regression by viewing the problem as a conditional stochastic optimization problem. In the context of least-squares instrumental variable regression, our algorithms neither require matrix inversions nor mini-batches thereby providing a fully online approach for performing instrumental variable regression with streaming data. When the true model is linear, we derive rates of convergence in expectation, that are of order\nO\n(\nlog\n\u2061\nT\n/\nT\n)\nand\nO\n(\n1\n/\nT\n1\n\u2212\n\u03f5\n)\nfor any\n\u03f5\n>\n0\n, respectively under the availability of two-sample and one-sample oracles respectively. Importantly, under the availability of the two-sample oracle, the aforementioned rate is actually agnostic to the relationship between confounder and the instrumental variable demonstrating the flexibility of the proposed approach in alleviating the need for explicit model assumptions required in recent works based on reformulating the problem as min-max optimization problems. Experimental validation is provided to demonstrate the advantages of the proposed algorithms over classical approaches like the 2SLS method.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/96817"} +{"video_file": "2UJLv3KPGO_39025768.mp4", "openreview_id": "2UJLv3KPGO", "slideslive_id": 39025768, "venue": "nips2024", "title": "Automating Data Annotation under Strategic Human Agents: Risks and Potential Solutions", "status": "Poster", "keywords": "Strategic Classification;Long-term Fairness", "tldr": "This paper studies the dynamics of welfare and fairness where strategic agents interact with an ML system retrained over time with model-annotated and human-annotated samples.", "abstract": "As machine learning (ML) models are increasingly used in social domains to make consequential decisions about humans, they often have the power to reshape data distributions. Humans, as strategic agents, continuously adapt their behaviors in response to the learning system. As populations change dynamically, ML systems may need frequent updates to ensure high performance. However, acquiring high-quality human-annotated samples can be highly challenging and even infeasible in social domains. A common practice to address this issue is using the model itself to annotate unlabeled data samples. This paper investigates the long-term impacts when ML models are retrained with model-annotated samples when they incorporate human strategic responses. We first formalize the interactions between strategic agents and the model and then analyze how they evolve under such dynamic interactions. We find that agents are increasingly likely to receive positive decisions as the model gets retrained, whereas the proportion of agents with positive labels may decrease over time. 
We thus propose a refined retraining process to stabilize the dynamics. Last, we examine how algorithmic fairness can be affected by these retraining processes and find that enforcing common fairness constraints at every round may not benefit the disadvantaged group in the long run. Experiments on (semi-)synthetic and real data validate the theoretical findings.", "primary_area": "machine_learning_for_social_sciences", "site": "https://neurips.cc/virtual/2024/poster/96814"} +{"video_file": "2YSHEBRRol_39028382.mp4", "openreview_id": "2YSHEBRRol", "slideslive_id": 39028382, "venue": "nips2024", "title": "Aligning Individual and Collective Objectives in Multi-Agent Cooperation", "status": "Poster", "keywords": "Mixed-motive cooperation;Mixed-motive game;cooperative AI", "tldr": "We propose a novel optimization method, AgA, that employs gradient adjustments to progressively align individual and collective objectives in the mixed-motive setting.", "abstract": "Among the research topics in multi-agent learning, mixed-motive cooperation is one of the most prominent challenges, primarily due to the mismatch between individual and collective goals. The cutting-edge research is focused on incorporating domain knowledge into rewards and introducing additional mechanisms to incentivize cooperation. However, these approaches often face shortcomings such as the effort on manual design and the absence of theoretical groundings. To close this gap, we model the mixed-motive game as a differentiable game for the ease of illuminating the learning dynamics towards cooperation. More detailed, we introduce a novel optimization method named \\textbf{\\textit{A}}ltruistic \\textbf{\\textit{G}}radient \\textbf{\\textit{A}}djustment (\\textbf{\\textit{AgA}}) that employs gradient adjustments to progressively align individual and collective objectives. Furthermore, we theoretically prove that AgA effectively attracts gradients to stable fixed points of the collective objective while considering individual interests, and we validate these claims with empirical evidence. We evaluate the effectiveness of our algorithm AgA through benchmark environments for testing mixed-motive collaboration with small-scale agents such as the two-player public good game and the sequential social dilemma games, Cleanup and Harvest, as well as our self-developed large-scale environment in the game StarCraft II.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96810"} +{"video_file": "2bdSnxeQcW_39027683.mp4", "openreview_id": "2bdSnxeQcW", "slideslive_id": 39027683, "venue": "nips2024", "title": "Exclusively Penalized Q-learning for Offline Reinforcement Learning", "status": "Spotlight", "keywords": "Deep reinforcement learning;offline RL;Q-learning;overestimation reduction", "tldr": "We propose a novel penalizing method, which efficiently reduces overestimation for offline reinforcement learning", "abstract": "Constraint-based offline reinforcement learning (RL) involves policy constraints or imposing penalties on the value function to mitigate overestimation errors caused by distributional shift. This paper focuses on a limitation in existing offline RL methods with penalized value function, indicating the potential for underestimation bias due to unnecessary bias introduced in the value function. 
To address this concern, we propose Exclusively Penalized Q-learning (EPQ), which reduces estimation bias in the value function by selectively penalizing states that are prone to inducing estimation errors. Numerical results show that our method significantly reduces underestimation bias and improves performance in various offline control tasks compared to other offline RL methods.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96808"} +{"video_file": "2cFUYnNL1m_39028014.mp4", "openreview_id": "2cFUYnNL1m", "slideslive_id": 39028014, "venue": "nips2024", "title": "Weight Diffusion for Future: Learn to Generalize in Non-Stationary Environments", "status": "Poster", "keywords": "domain generalization;evolving pattern;diffuison model;weight generation;domain-incremental", "tldr": "A weight diffusion approach that leverages the conditional diffusion model to learn the evolving pattern of parameters across domains and further generated customized classifiers for evolving domain generalization in the domain-incremental setting.", "abstract": "Enabling deep models to generalize in non-stationary environments is vital for real-world machine learning, as data distributions are often found to continually change. Recently, evolving domain generalization (EDG) has emerged to tackle the domain generalization in a time-varying system, where the domain gradually evolves over time in an underlying continuous structure. Nevertheless, it typically assumes multiple source domains simultaneously ready. It still remains an open problem to address EDG in the domain-incremental setting, where source domains are non-static and arrive sequentially to mimic the evolution of training domains. To this end, we propose Weight Diffusion (W-Diff), a novel framework that utilizes the conditional diffusion model in the parameter space to learn the evolving pattern of classifiers during the domain-incremental training process. Specifically, the diffusion model is conditioned on the classifier weights of different historical domain (regarded as a reference point) and the prototypes of current domain, to learn the evolution from the reference point to the classifier weights of current domain (regarded as the anchor point). In addition, a domain-shared feature encoder is learned by enforcing prediction consistency among multiple classifiers, so as to mitigate the overfitting problem and restrict the evolving pattern to be reflected in the classifier as much as possible. During inference, we adopt the ensemble manner based on a great number of target domain-customized classifiers, which are cheaply obtained via the conditional diffusion model, for robust prediction. 
Comprehensive experiments on both synthetic and real-world datasets show the superior generalization performance of W-Diff on unseen domains in the future.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/96806"} +{"video_file": "2cczgOfMP4_39024825.mp4", "openreview_id": "2cczgOfMP4", "slideslive_id": 39024825, "venue": "nips2024", "title": "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs", "status": "Poster", "keywords": "large language model;reasoning;tree of thought", "tldr": "A novel method called Chain of Preference Optimization (CPO), which leverages the supervision generated by the self-reasoning process (i.e., Tree-of-Thought) to enhance the reasoning ability of LLMs.", "abstract": "The recent development of chain-of-thought (CoT) decoding has enabled large language models (LLMs) to generate explicit logical reasoning paths for complex problem-solving. However, research indicates that these paths are not always deliberate and optimal. The tree-of-thought (ToT) method employs tree-searching to extensively explore the reasoning space and find better reasoning paths that CoT decoding might overlook. This deliberation, however, comes at the cost of significantly increased inference complexity. In this work, we demonstrate that fine-tuning LLMs leveraging the search tree constructed by ToT allows CoT to achieve similar or better performance, thereby avoiding the substantial inference burden. This is achieved through \\emph{Chain of Preference Optimization} (CPO), where LLMs are fine-tuned to align each step of the CoT reasoning paths with those of ToT using the inherent preference information in the tree-search process. Extensive experimental results show that CPO significantly improves LLM performance in solving a variety of complex problems, including question answering, fact verification, and arithmetic reasoning, demonstrating its effectiveness. Our code is available at https://github.com/sail-sg/CPO.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96804"} +{"video_file": "2kZMtdjzSV_39026429.mp4", "openreview_id": "2kZMtdjzSV", "slideslive_id": 39026429, "venue": "nips2024", "title": "Beyond task diversity: provable representation transfer for sequential multitask linear bandits", "status": "Poster", "keywords": "Bandits;multi-task;meta-learning;representation learning;online learning;task diversity", "tldr": "We study lifelong learning in linear bandits, where a learner interacts with a sequence of linear bandit tasks that share a low-rank representation without requiring the Task Diversity assumption.", "abstract": "We study lifelong learning in linear bandits, where a learner interacts with a sequence of linear bandit tasks whose parameters lie in an $m$-dimensional subspace of $\\mathbb{R}^d$, thereby sharing a low-rank representation. Current literature typically assumes that the tasks are diverse, i.e., their parameters uniformly span the $m$-dimensional subspace. This assumption allows the low-rank representation to be learned before all tasks are revealed, which can be unrealistic in real-world applications. In this work, we present the first nontrivial result for sequential multi-task linear bandits without the task diversity assumption. We develop an algorithm that efficiently learns and transfers low-rank representations. 
When facing $N$ tasks, each played over $\\tau$ rounds, our algorithm achieves a regret guarantee of $\\tilde{O}\\big (Nm \\sqrt{\\tau} + N^{\\frac{2}{3}} \\tau^{\\frac{2}{3}} d m^{\\frac13} + Nd^2 + \\tau m d \\big)$ under the ellipsoid action set assumption. This result can significantly improve upon the baseline of $\\tilde{O} \\left (Nd \\sqrt{\\tau}\\right)$ that does not leverage the low-rank structure when the number of tasks $N$ is sufficiently large and $m \\ll d$. We also demonstrate empirically on synthetic data that our algorithm outperforms baseline algorithms, which rely on the task diversity assumption.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/96798"} +{"video_file": "2lL7s5ESTj_39027706.mp4", "openreview_id": "2lL7s5ESTj", "slideslive_id": 39027706, "venue": "nips2024", "title": "Replicability in Learning: Geometric Partitions and KKM-Sperner Lemma", "status": "Poster", "keywords": "replicability;learning;geometric partitions;sperner lemma;KKM lemma;sample complexity;list complexity", "tldr": "This paper studies replicability in machine learning tasks from a geometric viewpoint.", "abstract": "This paper studies replicability in machine learning tasks from a geometric viewpoint. Recent works have revealed the role of geometric partitions and Sperner's lemma (and its variations) in designing replicable learning algorithms and in establishing impossibility results.\nA partition\nP\nof\nR\nd\nis called a\n(\nk\n,\n\u03f5\n)\n-secluded partition if for every\np\n\u2192\n\u2208\nR\nd\n, an\n\u03b5\n-radius ball (with respect to the\n\u2113\n\u221e\nnorm) centered at\np\n\u2192\nintersects at most\nk\nmembers of\nP\n. In relation to replicable learning, the parameter\nk\nis closely related to the\nlist complexity\n, and the parameter\n\u03b5\nis related to the sample complexity of the replicable learner. Construction of secluded partitions with better parameters (small\nk\nand large\n\u03b5\n) will lead to replicable learning algorithms with small list and sample complexities.\nMotivated by this connection, we undertake a comprehensive study of secluded partitions and establish near-optimal relationships between\nk\nand\n\u03b5\n.\nWe show that for any\n(\nk\n,\n\u03f5\n)\n-secluded partition where each member has at most unit measure, it must be that\nk\n\u2265\n(\n1\n+\n2\n\u03b5\n)\nd\n, and consequently, for the interesting regime\nk\n\u2208\n[\n2\nd\n]\nit must be that\n\u03f5\n\u2264\nlog\n4\n\u2061\n(\nk\n)\nd\n.\nTo complement this upper bound on\n\u03f5\n, we show that for each\nd\n\u2208\nN\nand each viable\nk\n\u2208\n[\n2\nd\n]\n, a construction of a\n(\nk\n,\n\u03f5\n)\n-secluded (unit cube) partition with\n\u03f5\n\u2265\nlog\n4\n\u2061\n(\nk\n)\nd\n\u22c5\n1\n8\nlog\n4\n\u2061\n(\nd\n+\n1\n)\n. This establishes the optimality of\n\u03f5\nwithin a logarithmic factor.\nFinally, we adapt our proof techniques to obtain a new ``neighborhood'' variant of the cubical KKM lemma (or cubical Sperner's lemma): For any coloring of\n[\n0\n,\n1\n]\nd\nin which no color is used on opposing faces, it holds for each\n\u03f5\n\u2208\n(\n0\n,\n1\n2\n]\nthat there is a point where the open\n\u03f5\n-radius\n\u2113\n\u221e\n-ball intersects at least\n(\n1\n+\n2\n3\n\u03f5\n)\nd\ncolors. 
While the classical Sperner/KKM lemma guarantees the existence of a point that is \"adjacent\" to points with\n(\nd\n+\n1\n)\ndistinct colors, the neighborhood version guarantees the existence of a small neighborhood with exponentially many points with distinct colors.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96797"} +{"video_file": "2nisrxMMQR_39024627.mp4", "openreview_id": "2nisrxMMQR", "slideslive_id": 39024627, "venue": "nips2024", "title": "Meta-Exploiting Frequency Prior for Cross-Domain Few-Shot Learning", "status": "Poster", "keywords": "Meta-learning Few-shot learning Cross-domain few-shot learning", "tldr": "We introduce a novel framework, which is crafted to comprehensively exploit the cross-domain transferable image prior.", "abstract": "Meta-learning offers a promising avenue for few-shot learning (FSL), enabling models to glean a generalizable feature embedding through episodic training on synthetic FSL tasks in a source domain. Yet, in practical scenarios where the target task diverges from that in the source domain, meta-learning based method is susceptible to over-fitting. To overcome this, we introduce a novel framework, Meta-Exploiting Frequency Prior for Cross-Domain Few-Shot Learning, which is crafted to comprehensively exploit the cross-domain transferable image prior that each image can be decomposed into complementary low-frequency content details and high-frequency robust structural characteristics. Motivated by this insight, we propose to decompose each query image into its high-frequency and low-frequency components, and parallel incorporate them into the feature embedding network to enhance the final category prediction. More importantly, we introduce a feature reconstruction prior and a prediction consistency prior to separately encourage the consistency of the intermediate feature as well as the final category prediction between the original query image and its decomposed frequency components. This allows for collectively guiding the network's meta-learning process with the aim of learning generalizable image feature embeddings, while not introducing any extra computational cost in the inference phase. Our framework establishes new state-of-the-art results on multiple cross-domain few-shot learning benchmarks.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96793"} +{"video_file": "2oZea6pKhl_39026148.mp4", "openreview_id": "2oZea6pKhl", "slideslive_id": 39026148, "venue": "nips2024", "title": "RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar", "status": "Poster", "keywords": "4D Imaging Radar;3D Occupancy Prediction;Scene Understanding;Autonomous Driving", "tldr": "This paper proposes a pioneering method that utilizes 4D imaging radar sensors for robust 3D occupancy prediction even against adverse weathers.", "abstract": "3D occupancy-based perception pipeline has significantly advanced autonomous driving by capturing detailed scene descriptions and demonstrating strong generalizability across various object categories and shapes. Current methods predominantly rely on LiDAR or camera inputs for 3D occupancy prediction. These methods are susceptible to adverse weather conditions, limiting the all-weather deployment of self-driving cars. To improve perception robustness, we leverage the recent advances in automotive radars and introduce a novel approach that utilizes 4D imaging radar sensors for 3D occupancy prediction. 
Our method, RadarOcc, circumvents the limitations of sparse radar point clouds by directly processing the 4D radar tensor, thus preserving essential scene details. RadarOcc innovatively addresses the challenges associated with the voluminous and noisy 4D radar data by employing Doppler bins descriptors, sidelobe-aware spatial sparsification, and range-wise self-attention mechanisms. To minimize the interpolation errors associated with direct coordinate transformations, we also devise a spherical-based feature encoding followed by spherical-to-Cartesian feature aggregation. We benchmark various baseline methods based on distinct modalities on the public K-Radar dataset. The results demonstrate RadarOcc's state-of-the-art performance in radar-based 3D occupancy prediction and promising results even when compared with LiDAR- or camera-based methods. Additionally, we present qualitative evidence of the superior performance of 4D radar in adverse weather conditions and explore the impact of key pipeline components through ablation studies.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/96791"} +{"video_file": "2pgc5xDJ1b_39025605.mp4", "openreview_id": "2pgc5xDJ1b", "slideslive_id": 39025605, "venue": "nips2024", "title": "Externally Valid Policy Evaluation from Randomized Trials Using Additional Observational Data", "status": "Poster", "keywords": "Policy Evaluation;Randomized Trials;External Validity", "tldr": "We propose a nonparametric method that uses trial data, adjusted with additional covariates from the target population, to provide certifiably and externally valid policy evaluations.", "abstract": "Randomized trials are widely considered as the gold standard for evaluating the effects of decision policies. Trial data is, however, drawn from a population which may differ from the intended target population and this raises a problem of external validity (aka. generalizability). In this paper we seek to use trial data to draw valid inferences about the outcome of a policy on the target population. Additional covariate data from the target population is used to model the sampling of individuals in the trial study. We develop a method that yields certifiably valid trial-based policy evaluations under any specified range of model miscalibrations. The method is nonparametric and the validity is assured even with finite samples. The certified policy evaluations are illustrated using both simulated and real data.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/96790"} +{"video_file": "2squ766Iq4_39028198.mp4", "openreview_id": "2squ766Iq4", "slideslive_id": 39028198, "venue": "nips2024", "title": "Towards Understanding Extrapolation: a Causal Lens", "status": "Poster", "keywords": "generalization;extrapolation;identification;adaptation", "tldr": "We investigate the theoretical aspects of extrapolation under a causal model formulation.", "abstract": "Canonical work handling distribution shifts typically necessitates an entire target distribution that lands inside the training distribution. However, practical scenarios often involve only a handful target samples, potentially lying outside the training support, which requires the capability of extrapolation. In this work, we aim to provide a theoretical understanding of when extrapolation is possible and offer principled methods to achieve it without requiring an on-support target distribution. 
To this end, we formulate the extrapolation problem with a latent-variable model that embodies the minimal change principle in causal mechanisms. Under this formulation, we cast the extrapolation problem into a latent-variable identification problem. We provide realistic conditions on shift properties and the estimation objectives that lead to identification even when only one off-support target sample is available, tackling the most challenging scenarios. Our theory reveals the intricate interplay between the underlying manifold's smoothness and the shift properties. We showcase how our theoretical results inform the design of practical adaptation algorithms. Through experiments on both synthetic and real-world data, we validate our theoretical findings and their practical implications.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/96789"} +{"video_file": "2vMvh5XP0P_39025136.mp4", "openreview_id": "2vMvh5XP0P", "slideslive_id": 39025136, "venue": "nips2024", "title": "Subsurface Scattering for Gaussian Splatting", "status": "Poster", "keywords": "gaussian splatting;inverse rendering;relighting;subsurface scattering;computer graphics;brdf decomposition;pbr;differentiable rendering;nerf", "tldr": "Realtime relighting of subsurface scattering objects in 3D gaussian splatting scenes", "abstract": "3D reconstruction and relighting of objects made from scattering materials present a significant challenge due to the complex light transport beneath the surface. 3D Gaussian Splatting introduced high-quality novel view synthesis at real-time speeds. While 3D Gaussians efficiently approximate an object's surface, they fail to capture the volumetric properties of subsurface scattering. We propose a framework for optimizing an object's shape together with the radiance transfer field given multi-view OLAT (one light at a time) data. Our method decomposes the scene into an explicit surface represented as 3D Gaussians, with a spatially varying BRDF, and an implicit volumetric representation of the scattering component. A learned incident light field accounts for shadowing. We optimize all parameters jointly via ray-traced differentiable rendering. Our approach enables material editing, relighting, and novel view synthesis at interactive rates. We show successful application on synthetic data and contribute a newly acquired multi-view multi-light dataset of objects in a light-stage setup. Compared to previous work we achieve comparable or better results at a fraction of optimization and rendering time while enabling detailed control over material attributes.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96787"} +{"video_file": "2wlNnIqCb7_39027989.mp4", "openreview_id": "2wlNnIqCb7", "slideslive_id": 39027989, "venue": "nips2024", "title": "Bridging semantics and pragmatics in information-theoretic emergent communication", "status": "Poster", "keywords": "semantics;pragmatics;emergent communication;information theory;artificial agents", "tldr": "Co-evolution of semantics and pragmatics in artificial agents", "abstract": "Human languages support both semantic categorization and local pragmatic interactions that require context-sensitive reasoning about meaning. While semantics and pragmatics are two fundamental aspects of language, they are typically studied independently and their co-evolution is largely under-explored. 
Here, we aim to bridge this gap by studying how a shared lexicon may emerge from local pragmatic interactions. To this end, we extend a recent information-theoretic framework for emergent communication in artificial agents, which integrates utility maximization, associated with pragmatics, with general communicative constraints that are believed to shape human semantic systems. Specifically, we show how to adapt this framework to train agents via unsupervised pragmatic interactions, and then evaluate their emergent lexical semantics. We test this approach in a rich visual domain of naturalistic images, and find that key human-like properties of the lexicon emerge when agents are guided by both context-specific utility and general communicative pressures, suggesting that both aspects are crucial for understanding how language may evolve in humans and in artificial agents.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/96783"} +{"video_file": "2zWbzx50mH_39026340.mp4", "openreview_id": "2zWbzx50mH", "slideslive_id": 39026340, "venue": "nips2024", "title": "Compact Proofs of Model Performance via Mechanistic Interpretability", "status": "Poster", "keywords": "mechanistic interpretability;verification;proof;guarantees;interpretability", "tldr": "We prototype using mechanistic interpretability to derive and formally verify guarantees on model performance in a toy setting.", "abstract": "We propose using mechanistic interpretability -- techniques for reverse engineering model weights into human-interpretable algorithms -- to derive and compactly prove formal guarantees on model performance. We prototype this approach by formally proving accuracy lower bounds for a small transformer trained on Max-of-\nK\n, validating proof transferability across 151 random seeds and four values of\nK\n. We create 102 different computer-assisted proof strategies and assess their length and tightness of bound on each of our models. Using quantitative metrics, we find that shorter proofs seem to require and provide more mechanistic understanding. Moreover, we find that more faithful mechanistic understanding leads to tighter performance bounds. We confirm these connections by qualitatively examining a subset of our proofs. Finally, we identify compounding structureless errors as a key challenge for using mechanistic interpretability to generate compact proofs on model performance.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/96781"} +{"video_file": "31xWlIdxTm_39024822.mp4", "openreview_id": "31xWlIdxTm", "slideslive_id": 39024822, "venue": "nips2024", "title": "Instance-adaptive Zero-shot Chain-of-Thought Prompting", "status": "Poster", "keywords": "Large Language Models;Chain-of-Thought Reasoning;Instance-adaptive;Zero-shot", "tldr": "This paper conducts analysis on LLM's zero-shot CoT, discovering an information flow pattern, furthermore, this paper proposes an instance-adaptive zero-shot prompting strategy to improve LLM's performance on several reasoning tasks.", "abstract": "Zero-shot Chain-of-Thought (CoT) prompting emerges as a simple and effective strategy for enhancing the performance of large language models (LLMs) in real-world reasoning tasks. 
Nonetheless, the efficacy of a singular, task-level prompt uniformly applied across the whole of instances is inherently limited since one prompt cannot be a good partner for all, a more appropriate approach should consider the interaction between the prompt and each instance meticulously. This work introduces an instance-adaptive prompting algorithm as an alternative zero-shot CoT reasoning scheme by adaptively differentiating good and bad prompts. Concretely, we first employ analysis on LLMs through the lens of information flow to detect the mechanism under zero-shot CoT reasoning, in which we discover that information flows from question to prompt and question to rationale jointly influence the reasoning results most. We notice that a better zero-shot CoT reasoning needs the prompt to obtain semantic information from the question then the rationale aggregates sufficient information from the question directly and via the prompt indirectly. On the contrary, lacking any of those would probably lead to a bad one. Stem from that, we further propose an instance-adaptive prompting strategy (IAP) for zero-shot CoT reasoning. Experiments conducted with LLaMA-2, LLaMA-3, and Qwen on math, logic, and commonsense reasoning tasks (e.g., GSM8K, MMLU, Causal Judgement) obtain consistent improvement, demonstrating that the instance-adaptive zero-shot CoT prompting performs better than other task-level methods with some curated prompts or sophisticated procedures, showing the significance of our findings in the zero-shot CoT reasoning mechanism.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96779"} +{"video_file": "337dHOexCM_39027342.mp4", "openreview_id": "337dHOexCM", "slideslive_id": 39027342, "venue": "nips2024", "title": "Retrieval & Fine-Tuning for In-Context Tabular Models", "status": "Poster", "keywords": "in-context learning;tabular data;retrieval;foundation models;transformers", "tldr": "We use retrieval with fine-tuning to improve in-context learning on tabular data and show large improvement with better scaling", "abstract": "Tabular data is a pervasive modality spanning a wide range of domains, and this inherent diversity poses a considerable challenge for deep learning. Recent advancements using transformer-based in-context learning have shown promise on smaller and less complex tabular datasets, but have struggled to scale to larger and more complex ones. To address this limitation, we propose a combination of retrieval and fine-tuning: we can adapt the transformer to a local subset of the data by collecting nearest neighbours, and then perform task-specific fine-tuning with this retrieved set of neighbours in context. Using TabPFN as the base model -- currently the best tabular in-context learner -- and applying our retrieval and fine-tuning scheme on top results in what we call a locally-calibrated PFN, or LoCalPFN. We conduct extensive evaluation on 95 datasets curated by TabZilla from OpenML, upon which we establish a new state-of-the-art with LoCalPFN -- even with respect to tuned tree-based models. 
Notably, we show a significant boost in performance compared to the base in-context model, demonstrating the efficacy of our approach and advancing the frontier of deep learning in tabular data.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/96776"} +{"video_file": "348hfcprUs_39025490.mp4", "openreview_id": "348hfcprUs", "slideslive_id": 39025490, "venue": "nips2024", "title": "Fast Best-of-N Decoding via Speculative Rejection", "status": "Poster", "keywords": "alignment;large language models;rejection sampling;best-of-n;acceleration", "tldr": "We accelerate the Best-of-N algorithm, an inference-time alignment strategy", "abstract": "The safe and effective deployment of Large Language Models (LLMs) involves a critical step called alignment, which ensures that the model's responses are in accordance with human preferences. Prevalent alignment techniques, such as DPO, PPO and their variants, align LLMs by changing the pre-trained model weights during a phase called post-training. While predominant, these post-training methods add substantial complexity before LLMs can be deployed. Inference-time alignment methods avoid the complex post-training step and instead bias the generation towards responses that are aligned with human preferences. The best-known inference-time alignment method, called Best-of-N, is as effective as the state-of-the-art post-training procedures. Unfortunately, Best-of-N requires vastly more resources at inference time than standard decoding strategies, which makes it computationally not viable. In this work, we introduce Speculative Rejection, a computationally-viable inference-time alignment algorithm. It generates high-scoring responses according to a given reward model, like Best-of-N does, while being between 16 to 32 times more computationally efficient.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96774"} +{"video_file": "35DAviqMFo_39027796.mp4", "openreview_id": "35DAviqMFo", "slideslive_id": 39027796, "venue": "nips2024", "title": "Understanding Emergent Abilities of Language Models from the Loss Perspective", "status": "Poster", "keywords": "pretrained language model;emergent ability;scaling law", "tldr": "We demonstrate the existence of emergent abilities in the lens of pre-training loss regardless of the continuity of evaluation metrics", "abstract": "Recent studies have put into question the belief that emergent abilities in language models are exclusive to large models. This skepticism arises from two observations: 1) smaller models can also exhibit high performance on emergent abilities and 2) there is doubt on the discontinuous metrics used to measure these abilities. In this paper, we propose to study emergent abilities in the lens of pre-training loss, instead of model size or training compute. We demonstrate that the Transformer models with the same pre-training loss, but different model and data sizes, generate the same performance on various downstream tasks, with a fixed data corpus, tokenization, and model architecture. We also discover that a model exhibits emergent abilities on certain tasks\u2014regardless of the continuity of metrics\u2014when its pre-training loss falls below a specific threshold. Before reaching this threshold, its performance remains at the level of random guessing. 
This inspires us to redefine emergent abilities as those that manifest in models with lower pre-training losses, highlighting that these abilities cannot be predicted by merely extrapolating the performance trends of models with higher pre-training losses.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96773"} +{"video_file": "35WwZhkush_39026145.mp4", "openreview_id": "35WwZhkush", "slideslive_id": 39026145, "venue": "nips2024", "title": "BetterDepth: Plug-and-Play Diffusion Refiner for Zero-Shot Monocular Depth Estimation", "status": "Poster", "keywords": "Monocular depth estimation;diffusion model;zero-shot transfer;plug-and-play", "tldr": "We present BetterDepth to efficiently achieve robust affine-invariant monocular depth estimation with fine-grained details.", "abstract": "By training over large-scale datasets, zero-shot monocular depth estimation (MDE) methods show robust performance in the wild but often suffer from insufficient detail. Although recent diffusion-based MDE approaches exhibit a superior ability to extract details, they struggle in geometrically complex scenes that challenge their geometry prior, trained on less diverse 3D data. To leverage the complementary merits of both worlds, we propose BetterDepth to achieve geometrically correct affine-invariant MDE while capturing fine details. Specifically, BetterDepth is a conditional diffusion-based refiner that takes the prediction from pre-trained MDE models as depth conditioning, in which the global depth layout is well-captured, and iteratively refines details based on the input image. For the training of such a refiner, we propose global pre-alignment and local patch masking methods to ensure BetterDepth remains faithful to the depth conditioning while learning to add fine-grained scene details. With efficient training on small-scale synthetic datasets, BetterDepth achieves state-of-the-art zero-shot MDE performance on diverse public datasets and on in-the-wild scenes. Moreover, BetterDepth can improve the performance of other MDE models in a plug-and-play manner without further re-training.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96772"} +{"video_file": "36tMV15dPO_39024888.mp4", "openreview_id": "36tMV15dPO", "slideslive_id": 39024888, "venue": "nips2024", "title": "X-Ray: A Sequential 3D Representation For Generation", "status": "Spotlight", "keywords": "3D generation;3D representation;3D reconstruciton;diffusion model", "tldr": "We propose a novel 3D representation that enhances 3D synthesis by capturing intricate object details, both inside and outside!", "abstract": "We introduce X-Ray, a novel 3D sequential representation inspired by the penetrability of x-ray scans. X-Ray transforms a 3D object into a series of surface frames at different layers, making it suitable for generating 3D models from images. Our method utilizes ray casting from the camera center to capture geometric and textured details, including depth, normal, and color, across all intersected surfaces. This process efficiently condenses the whole 3D object into a multi-frame video format, motivating the utilize of a network architecture similar to those in video diffusion models. This design ensures an efficient 3D representation by focusing solely on surface information. Also, we propose a two-stage pipeline to generate 3D objects from X-Ray Diffusion Model and Upsampler. 
We demonstrate the practicality and adaptability of our X-Ray representation by synthesizing the complete visible and hidden surfaces of a 3D object from a single input image. Experimental results reveal the state-of-the-art superiority of our representation in enhancing the accuracy of 3D generation, paving the way for new 3D representation research and practical applications. Our project page is in \\url{https://tau-yihouxiang.github.io/projects/X-Ray/X-Ray.html}.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96771"} +{"video_file": "37CyA1K0vV_39025343.mp4", "openreview_id": "37CyA1K0vV", "slideslive_id": 39025343, "venue": "nips2024", "title": "Aggregating Quantitative Relative Judgments: From Social Choice to Ranking Prediction", "status": "Poster", "keywords": "Algorithmic Game Theory;Ranking and Preference Learning", "tldr": "We propose new rules for aggregating quantitative relative judgments, study their computational complexity, and evaluate their empirical performance.", "abstract": "Quantitative Relative Judgment Aggregation (QRJA) is a new research topic in (computational) social choice. In the QRJA model, agents provide judgments on the relative quality of different candidates, and the goal is to aggregate these judgments across all agents. In this work, our main conceptual contribution is to explore the interplay between QRJA in a social choice context and its application to ranking prediction. We observe that in QRJA, judges do not have to be people with subjective opinions; for example, a race can be viewed as a ``judgment'' on the contestants' relative abilities. This allows us to aggregate results from multiple races to evaluate the contestants' true qualities. At a technical level, we introduce new aggregation rules for QRJA and study their structural and computational properties. We evaluate the proposed methods on data from various real races and show that QRJA-based methods offer effective and interpretable ranking predictions.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/96770"} +{"video_file": "38UFpdt3Tr_39028653.mp4", "openreview_id": "38UFpdt3Tr", "slideslive_id": 39028653, "venue": "nips2024", "title": "Exploiting Activation Sparsity with Dense to Dynamic-k Mixture-of-Experts Conversion", "status": "Poster", "keywords": "inference efficiency;activation sparsity;dynamic-k gating;mixture-of-experts;conditional computation;dynamic neural networks", "tldr": "We speed up inference by novel mixture-of-experts conversion method", "abstract": "Transformer models can face practical limitations due to their high computational requirements. At the same time, such models exhibit significant activation sparsity, which can be leveraged to reduce the inference cost by converting parts of the network into equivalent Mixture-of-Experts (MoE) layers. Despite the crucial role played by activation sparsity, its impact on this process remains unexplored. We demonstrate that the efficiency of the conversion can be significantly enhanced by a proper regularization of the activation sparsity of the base model. Moreover, motivated by the high variance of the number of activated neurons for different inputs, we introduce a more effective dynamic-\nk\nexpert selection rule that adjusts the number of executed experts on a per-token basis. To achieve further savings, we extend this approach to multi-head attention projections. 
Finally, we develop an efficient implementation that translates these computational savings into actual wall-clock speedup. The proposed method, Dense to Dynamic-\nk\nMixture-of-Experts (D2DMoE), outperforms existing approaches on common NLP and vision tasks, reducing inference cost by up to 60% without significantly impacting performance.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/96769"} +{"video_file": "3ADBiWNUBb_39028449.mp4", "openreview_id": "3ADBiWNUBb", "slideslive_id": 39028449, "venue": "nips2024", "title": "Graph Structure Inference with BAM: Neural Dependency Processing via Bilinear Attention", "status": "Poster", "keywords": "Graph Structure Inference;Causal Inference;Supervised Deep Learning;Geometric Deep Learning", "tldr": "We present a neural network with a novel bilinear attention mechanism for supervised graph structure inference, mapping observational data to their dependency structures with enhanced performance and robust generalizability.", "abstract": "Detecting dependencies among variables is a fundamental task across scientific disciplines. We propose a novel neural network model for graph structure inference, which aims to learn a mapping from observational data to the corresponding underlying dependence structures. The model is trained with variably shaped and coupled simulated input data and requires only a single forward pass through the trained network for inference. Central to our approach is a novel bilinear attention mechanism (BAM) operating on covariance matrices of transformed data while respecting the geometry of the manifold of symmetric positive definite (SPD) matrices. Inspired by graphical lasso methods, our model optimizes over continuous graph representations in the SPD space, where inverse covariance matrices encode conditional independence relations. Empirical evaluations demonstrate the robustness of our method in detecting diverse dependencies, excelling in undirected graph estimation and showing competitive performance in completed partially directed acyclic graph estimation via a novel two-step approach. The trained model effectively detects causal relationships and generalizes well across different functional forms of nonlinear dependencies.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/96766"} +{"video_file": "3BNPUDvqMt_39026717.mp4", "openreview_id": "3BNPUDvqMt", "slideslive_id": 39026717, "venue": "nips2024", "title": "Better by default: Strong pre-tuned MLPs and boosted trees on tabular data", "status": "Poster", "keywords": "tabular data;benchmark;default parameters;neural networks;deep learning;multilayer perceptron;gradient-boosted decision trees", "tldr": "We propose better default parameters for boosted decision trees and improved neural networks on tabular data, evaluate them on separate large benchmarks, and show that they can achieve excellent results with moderate runtime.", "abstract": "For classification and regression on tabular data, the dominance of gradient-boosted decision trees (GBDTs) has recently been challenged by often much slower deep learning methods with extensive hyperparameter tuning. We address this discrepancy by introducing (a) RealMLP, an improved multilayer perceptron (MLP), and (b) strong meta-tuned default parameters for GBDTs and RealMLP. 
We tune RealMLP and the default parameters on a meta-train benchmark with 118 datasets and compare them to hyperparameter-optimized versions on a disjoint meta-test benchmark with 90 datasets, as well as the GBDT-friendly benchmark by Grinsztajn et al. (2022). Our benchmark results on medium-to-large tabular datasets (1K--500K samples) show that RealMLP offers a favorable time-accuracy tradeoff compared to other neural baselines and is competitive with GBDTs in terms of benchmark scores. Moreover, a combination of RealMLP and GBDTs with improved default parameters can achieve excellent results without hyperparameter tuning. Finally, we demonstrate that some of RealMLP's improvements can also considerably improve the performance of TabR with default parameters.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/96765"} +{"video_file": "3HpCVZV9it_39025921.mp4", "openreview_id": "3HpCVZV9it", "slideslive_id": 39025921, "venue": "nips2024", "title": "Geometric-Averaged Preference Optimization for Soft Preference Labels", "status": "Poster", "keywords": "Reinforcement Learning from Human Feedback;Alignment;Soft Preference Labels;Large Language Models", "tldr": "We propose the weighted geometric averaging of LLM output likelihood that can be applied to any DPO-based method. Geometric averaging consistently outperforms previous binary and conservative soft preference methods in offline and online alignment.", "abstract": "Many algorithms for aligning LLMs with human preferences assume that human preferences are binary and deterministic. However, human preferences can vary across individuals, and therefore should be represented distributionally. In this work, we introduce the distributional soft preference labels and improve Direct Preference Optimization (DPO) with a weighted geometric average of the LLM output likelihood in the loss function. This approach adjusts the scale of learning loss based on the soft labels such that the loss would approach zero when the responses are closer to equally preferred. This simple modification can be easily applied to any DPO-based methods and mitigate over-optimization and objective mismatch, which prior works suffer from. Our experiments simulate the soft preference labels with AI feedback from LLMs and demonstrate that geometric averaging consistently improves performance on standard benchmarks for alignment research. In particular, we observe more preferable responses than binary labels and significant improvements where modestly-confident labels are in the majority.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96758"} +{"video_file": "3LZHatxUa9_39025144.mp4", "openreview_id": "3LZHatxUa9", "slideslive_id": 39025144, "venue": "nips2024", "title": "On the Impact of Feature Heterophily on Link Prediction with Graph Neural Networks", "status": "Poster", "keywords": "graph neural networks;heterophily;link prediction", "tldr": "We formalize and analyze heterophilic link prediction with GNNs, when connected nodes have dissimilar features. We identify real-world heterophilic benchmarks and show that learnable decoders and separated node embeddings are crucial for such graphs.", "abstract": "Heterophily, or the tendency of connected nodes in networks to have different class labels or dissimilar features, has been identified as challenging for many Graph Neural Network (GNN) models. 
While the challenges of applying GNNs for node classification when class labels display strong heterophily are well understood, it is unclear how heterophily affects GNN performance in other important graph learning tasks where class labels are not available. In this work, we focus on the link prediction task and systematically analyze the impact of heterophily in node features on GNN performance. We first introduce formal definitions of homophilic and heterophilic link prediction tasks, and present a theoretical framework that highlights the different optimizations needed for the respective tasks. We then analyze how different link prediction encoders and decoders adapt to varying levels of feature homophily and introduce designs for improved performance. Based on our definitions, we identify and analyze six real-world benchmarks spanning from homophilic to heterophilic link prediction settings, with graphs containing up to 30M edges. Our empirical analysis on a variety of synthetic and real-world datasets confirms our theoretical insights and highlights the importance of adopting learnable decoders and GNN encoders with ego- and neighbor-embedding separation in message passing for link prediction tasks beyond homophily.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/96753"} +{"video_file": "3O5YCEWETq_39025826.mp4", "openreview_id": "3O5YCEWETq", "slideslive_id": 39025826, "venue": "nips2024", "title": "Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series", "status": "Poster", "keywords": "time series;foundation models;pretrained models;forecasting;time-series;time;tsfm;light-weight;forecasters", "tldr": "Fast and Light-weight Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series", "abstract": "Large pre-trained models excel in zero/few-shot learning for language and vision tasks but face challenges in multivariate time series (TS) forecasting due to diverse data characteristics. Consequently, recent research efforts have focused on developing pre-trained TS forecasting models. These models, whether built from scratch or adapted from large language models (LLMs), excel in zero/few-shot forecasting tasks. However, they are limited by slow performance, high computational demands, and neglect of cross-channel and exogenous correlations. To address this, we introduce Tiny Time Mixers (TTM), a compact model (starting from 1M parameters) with effective transfer learning capabilities, trained exclusively on public TS datasets. TTM, based on the light-weight TSMixer architecture, incorporates innovations like adaptive patching, diverse resolution sampling, and resolution prefix tuning to handle pre-training on varied dataset resolutions with minimal model capacity. Additionally, it employs multi-level modeling to capture channel correlations and infuse exogenous signals during fine-tuning. TTM outperforms existing popular benchmarks in zero/few-shot forecasting by (4-40%), while reducing computational requirements significantly. Moreover, TTMs are lightweight and can be executed even on CPU-only machines, enhancing usability and fostering wider adoption in resource-constrained environments. 
The model weights for reproducibility and research use are available at https://huggingface.co/ibm/ttm-research-r2/, while enterprise-use weights under the Apache license can be accessed as follows: the initial TTM-Q variant at https://huggingface.co/ibm-granite/granite-timeseries-ttm-r1, and the latest variants (TTM-B, TTM-E, TTM-A) weights are available at https://huggingface.co/ibm-granite/granite-timeseries-ttm-r2. The source code for the TTM model along with the usage scripts are available at https://github.com/ibm-granite/granite-tsfm/tree/main/tsfm_public/models/tinytimemixer", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/96748"} +{"video_file": "3Odq2tGSpp_39027338.mp4", "openreview_id": "3Odq2tGSpp", "slideslive_id": 39027338, "venue": "nips2024", "title": "Stylus: Automatic Adapter Selection for Diffusion Models", "status": "Oral", "keywords": "Stable Diffusion;Diffusion-based Models;Computer Vision;Artificial Intelligence;RAG;Retrieval;Adapters;LoRA", "tldr": "Stylus automatically selects and composes adapters from a vast database of adapters for diffusion-based models.", "abstract": "Beyond scaling base models with more data or parameters, fine-tuned adapters provide an alternative way to generate high fidelity, custom images at reduced costs. As such, adapters have been widely adopted by open-source communities, accumulating a database of over 100K adapters\u2014most of which are highly customized with insufficient descriptions. To generate high quality images, this paper explores the problem of matching the prompt to a Stylus of relevant adapters, built on recent work that highlight the performance gains of composing adapters. We introduce Stylus, which efficiently selects and automatically composes task-specific adapters based on a prompt's keywords. Stylus outlines a three-stage approach that first summarizes adapters with improved descriptions and embeddings, retrieves relevant adapters, and then further assembles adapters based on prompts' keywords by checking how well they fit the prompt. To evaluate Stylus, we developed StylusDocs, a curated dataset featuring 75K adapters with pre-computed adapter embeddings. In our evaluation on popular Stable Diffusion checkpoints, Stylus achieves greater CLIP/FID Pareto efficiency and is twice as preferred, with humans and multimodal models as evaluators, over the base model.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96747"} +{"video_file": "3RxcarQFRn_39026743.mp4", "openreview_id": "3RxcarQFRn", "slideslive_id": 39026743, "venue": "nips2024", "title": "Generative Adversarial Model-Based Optimization via Source Critic Regularization", "status": "Poster", "keywords": "Offline Optimization;Bayesian Optimization;Surrogate Objectives", "tldr": "We show that source critic adversarial networks can effectively regularize offline optimizers for generative tasks in medicine and the sciences.", "abstract": "Offline model-based optimization seeks to optimize against a learned surrogate model without querying the true oracle objective function during optimization. Such tasks are commonly encountered in protein design, robotics, and clinical medicine where evaluating the oracle function is prohibitively expensive. However, inaccurate surrogate model predictions are frequently encountered along offline optimization trajectories. 
To address this limitation, we propose generative adversarial model-based optimization using adaptive source critic regularization (aSCR)\u2014a task- and optimizer- agnostic framework for constraining the optimization trajectory to regions of the design space where the surrogate function is reliable. We propose a computationally tractable algorithm to dynamically adjust the strength of this constraint, and show how leveraging aSCR with standard Bayesian optimization outperforms existing methods on a suite of offline generative design tasks. Our code is available at https://github.com/michael-s-yao/gabo.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/96744"} +{"video_file": "3Tzcot1LKb_39028698.mp4", "openreview_id": "3Tzcot1LKb", "slideslive_id": 39028698, "venue": "nips2024", "title": "SimPO: Simple Preference Optimization with a Reference-Free Reward", "status": "Poster", "keywords": "Language Models;Preference Optimization;Reinforcement Learning from Human Feedback", "tldr": "We propose SimPO, a simple and effective preference optimization algorithm.", "abstract": "Direct Preference Optimization (DPO) is a widely used offline preference optimization algorithm that reparameterizes reward functions in reinforcement learning from human feedback (RLHF) to enhance simplicity and training stability. In this work, we propose SimPO, a simpler yet more effective approach. The effectiveness of SimPO is attributed to a key design: using the average log probability of a sequence as the implicit reward. This reward formulation better aligns with model generation and eliminates the need for a reference model, making it more compute and memory efficient. Additionally, we introduce a target reward margin to the Bradley-Terry objective to encourage a larger margin between the winning and losing responses, further improving the algorithm's performance. We compare SimPO to DPO and its latest variants across various state-of-the-art training setups, including both base and instruction-tuned models such as Mistral, Llama 3, and Gemma 2. We evaluate on extensive chat-based evaluation benchmarks, including AlpacaEval 2, MT-Bench, and Arena-Hard. Our results demonstrate that SimPO consistently and significantly outperforms existing approaches without substantially increasing response length. Specifically, SimPO outperforms DPO by up to 6.4 points on AlpacaEval 2 and by up to 7.5 points on Arena-Hard. Our top-performing model, built on Gemma-2-9B-it, achieves a 72.4% length-controlled win rate on AlpacaEval 2, a 59.1% win rate on Arena-Hard, and ranks 1st on Chatbot Arena among\n<\n10B models with real user votes.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96741"} +{"video_file": "3XnBVK9sD6_39027218.mp4", "openreview_id": "3XnBVK9sD6", "slideslive_id": 39027218, "venue": "nips2024", "title": "InfoRM: Mitigating Reward Hacking in RLHF via Information-Theoretic Reward Modeling", "status": "Poster", "keywords": "Reward Hacking;Reward Overoptimization;Reinforcement Learning from Human Feedback;Large Language Models", "tldr": "Add:", "abstract": "Despite the success of reinforcement learning from human feedback (RLHF) in aligning language models with human values, reward hacking, also termed reward overoptimization, remains a critical challenge. This issue primarily arises from reward misgeneralization, where reward models (RMs) compute reward using spurious features that are irrelevant to human preferences. 
In this work, we tackle this problem from an information-theoretic perspective and propose a framework for reward modeling, namely InfoRM, by introducing a variational information bottleneck objective to filter out irrelevant information. Notably, we further identify a correlation between overoptimization and outliers in the IB latent space of InfoRM, establishing it as a promising tool for detecting reward overoptimization. Inspired by this finding, we propose the Cluster Separation Index (CSI), which quantifies deviations in the IB latent space, as an indicator of reward overoptimization to facilitate the development of online mitigation strategies. Extensive experiments on a wide range of settings and RM scales (70M, 440M, 1.4B, and 7B) demonstrate the effectiveness of InfoRM. Further analyses reveal that InfoRM's overoptimization detection mechanism is not only effective but also robust across a broad range of datasets, signifying a notable advancement in the field of RLHF. The code will be released upon acceptance.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96739"} +{"video_file": "3YIyB82rjX_39026607.mp4", "openreview_id": "3YIyB82rjX", "slideslive_id": 39026607, "venue": "nips2024", "title": "Handling Learnwares from Heterogeneous Feature Spaces with Explicit Label Exploitation", "status": "Poster", "keywords": "learnware;heterogeneous feature spaces;subspace", "tldr": "This paper solves the problem of managing and reusing models developed from heterogeneous feature spaces by explicitly utilizing label information.", "abstract": "The learnware paradigm aims to help users leverage numerous existing high-performing models instead of starting from scratch, where a learnware consists of a well-trained model and the specification describing its capability. Numerous learnwares are accommodated by a learnware dock system. When users solve tasks with the system, models that fully match the task feature space are often rare or even unavailable. However, models with heterogeneous feature space can still be helpful. This paper finds that label information, particularly model outputs, is helpful yet previously less exploited in the accommodation of heterogeneous learnwares. We extend the specification to better leverage model pseudo-labels and subsequently enrich the unified embedding space for better specification evolvement. With label information, the learnware identification can also be improved by additionally comparing conditional distributions. Experiments demonstrate that, even without a model explicitly tailored to user tasks, the system can effectively handle tasks by leveraging models from diverse feature spaces.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/96738"} +{"video_file": "3Z0LTDjIM0_39027222.mp4", "openreview_id": "3Z0LTDjIM0", "slideslive_id": 39027222, "venue": "nips2024", "title": "Faster Local Solvers for Graph Diffusion Equations", "status": "Poster", "keywords": "Graph Diffusion Equation;Personalized PageRank;Heat Kernel;Katz;Local Solvers;Graph Neural Networks", "tldr": "We propose a general local iterative framework to accelerate the computation of graph diffusion equations which can be applied to GNN models.", "abstract": "Efficient computation of graph diffusion equations (GDEs), such as Personalized PageRank, Katz centrality, and the Heat kernel, is crucial for clustering, training neural networks, and many other graph-related problems. 
Standard iterative methods require accessing the whole graph per iteration, making them time-consuming for large-scale graphs. While existing local solvers approximate diffusion vectors through heuristic local updates, they often operate sequentially and are typically designed for specific diffusion types, limiting their applicability. Given that diffusion vectors are highly localizable, as measured by the participation ratio, this paper introduces a novel framework for approximately solving GDEs using a local diffusion process. This framework reveals the suboptimality of existing local solvers. Furthermore, our approach effectively localizes standard iterative solvers by designing simple and provably sublinear time algorithms. These new local solvers are highly parallelizable, making them well-suited for implementation on GPUs. We demonstrate the effectiveness of our framework in quickly obtaining approximate diffusion vectors, achieving up to a hundred-fold speed improvement, and its applicability to large-scale dynamic graphs. Our framework could also facilitate more efficient local message-passing mechanisms for GNNs.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/96736"} +{"video_file": "3ZAfFoAcUI_39026452.mp4", "openreview_id": "3ZAfFoAcUI", "slideslive_id": 39026452, "venue": "nips2024", "title": "On the Inductive Bias of Stacking Towards Improving Reasoning", "status": "Poster", "keywords": "stacking;language model;reasoning;inductive bias;efficient training", "tldr": "An intriguing inductive bias of efficient training methods like stacking towards particularly improving tasks that require reasoning, despite having the same pretraining perplexity", "abstract": "Given the increasing scale of model sizes, efficient training strategies like gradual stacking have garnered interest. Stacking enables efficient training by gradually growing the depth of a model in stages and using layers from a smaller model in an earlier stage to initialize the next stage. Although efficient for training, the model biases induced by such growing approaches are largely unexplored. In this work, we examine this fundamental aspect of gradual stacking, going beyond its efficiency benefits. We propose a variant of gradual stacking called MIDAS that can speed up language model training by up to 40%. Furthermore we discover an intriguing phenomenon: MIDAS is not only training-efficient but surprisingly also has an inductive bias towards improving downstream tasks, especially tasks that require reasoning abilities like reading comprehension and math problems, despite having similar or slightly worse perplexity compared to baseline training. To further analyze this inductive bias, we construct {\\em reasoning primitives} \u2013 simple synthetic tasks that are building blocks for reasoning \u2013 and find that a model pretrained with stacking is significantly better than standard pretraining on these primitives, with and without fine-tuning. This provides stronger and more robust evidence for this inductive bias towards reasoning. These findings of training efficiency and inductive bias towards reasoning are verified at 1B, 2B and 8B parameter language models. 
Finally, we conjecture the underlying reason for this inductive bias by exploring the connection of stacking to looped models and provide strong supporting empirical analysis.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/96735"} +{"video_file": "3apt5AJ5QN_39028172.mp4", "openreview_id": "3apt5AJ5QN", "slideslive_id": 39028172, "venue": "nips2024", "title": "Global Rewards in Restless Multi-Armed Bandits", "status": "Poster", "keywords": "Restless Bandits;Multi-Armed Bandit;Submodular;Food Rescue", "tldr": "We extend traditional restless multi-armed bandits to incorporate global rewards", "abstract": "Restless multi-armed bandits (RMAB) extend multi-armed bandits so arm pulls impact future arm states. Despite the success of RMABs, a key limiting assumption is the separability of rewards into a sum across arms. We address this deficiency by proposing restless-multi-armed bandit with global rewards (RMAB-G), a generalization of RMABs to global non-separable rewards. To solve RMAB-G, we develop the Linear-Whittle and Shapley-Whittle indices, which extend Whittle indices from RMABs to RMAB-Gs. We prove approximation bounds which demonstrate how Linear and Shapley-Whittle indices fail for non-linear rewards. To overcome this limitation, we propose two sets of adaptive policies: the first computes indices iteratively and the second combines indices with Monte-Carlo Tree Search (MCTS). Empirically, we demonstrate that adaptive policies outperform both pre-computed index policies and baselines in synthetic and real-world food rescue datasets.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/96734"} +{"video_file": "3cL2XDyaEB_39024615.mp4", "openreview_id": "3cL2XDyaEB", "slideslive_id": 39024615, "venue": "nips2024", "title": "EGonc : Energy-based Open-Set Node Classification with substitute Unknowns", "status": "Poster", "keywords": "Open-set classification;Energy-based models;Graph learning", "tldr": "An energy-based open graph learning method for open-set node classification.", "abstract": "Open-set Classification (OSC) is a critical requirement for safely deploying machine learning models in the open world, which aims to classify samples from known classes and reject samples from out-of-distribution (OOD). Existing methods exploit the feature space of trained network and attempt at estimating the uncertainty in the predictions. However, softmax-based neural networks are found to be overly confident in their predictions even on data they have never seen before and the immense diversity of the OOD examples also makes such methods fragile. To this end, we follow the idea of estimating the underlying density of the training data to decide whether a given input is close to the in-distribution (IND) data and adopt Energy-based models (EBMs) as density estimators. A novel energy-based generative open-set node classification method, \\textit{EGonc}, is proposed to achieve open-set graph learning. Specifically, we generate substitute unknowns to mimic the distribution of real open-set samples firstly, based on the information of graph structures. Then, an additional energy logit representing the virtual OOD class is learned from the residual of the feature against the principal space, and matched with the original logits by a constant scaling. This virtual logit serves as the indicator of OOD-ness. 
EGonc has nice theoretical properties that guarantee an overall distinguishable margin between the detection scores for IND and OOD samples. Comprehensive experimental evaluations of EGonc also demonstrate its superiority.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/96733"} +{"video_file": "3cb6pF3Tvf_39025274.mp4", "openreview_id": "3cb6pF3Tvf", "slideslive_id": 39025274, "venue": "nips2024", "title": "Learning-Augmented Algorithms for the Bahncard Problem", "status": "Poster", "keywords": "algorithms with predictions;competitive analysis;consistency;robustness", "tldr": "We develop an improved learning-augmented algorithm for the Bahncard problem and derive its competitive ratio under any prediction errors.", "abstract": "In this paper, we study learning-augmented algorithms for the Bahncard problem. The Bahncard problem is a generalization of the ski-rental problem, where a traveler needs to irrevocably and repeatedly decide between a cheap short-term solution and an expensive long-term one with an unknown future. Even though the problem is canonical, only a primal-dual-based learning-augmented algorithm was explicitly designed for it. We develop a new learning-augmented algorithm, named PFSUM, that incorporates both history and short-term future to improve online decision making. We derive the competitive ratio of PFSUM as a function of the prediction error and conduct extensive experiments to show that PFSUM outperforms the primal-dual-based algorithm.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96732"} +{"video_file": "3csuL7TVpV_39025376.mp4", "openreview_id": "3csuL7TVpV", "slideslive_id": 39025376, "venue": "nips2024", "title": "Decoding-Time Language Model Alignment with Multiple Objectives", "status": "Poster", "keywords": "multi-objective alignment;decoding-time algorithms;RLHF", "tldr": "We propose a training-free, simple yet effective decoding-time algorithm for multi-objective alignment of language models, with optimality guarantees.", "abstract": "Aligning language models (LMs) to human preferences has emerged as a critical pursuit, enabling these models to better serve diverse user needs. Existing methods primarily focus on optimizing LMs for a single reward function, limiting their adaptability to varied objectives. Here, we propose\nmulti-objective decoding (MOD)\n, a decoding-time algorithm that outputs the next token from a linear combination of predictions of all base models, for any given weighting over different objectives. We exploit a common form among a family of\nf\n-divergence regularized alignment approaches (such as PPO, DPO, and their variants) to identify a closed-form solution by Legendre transform, and derive an efficient decoding strategy. Theoretically, we show why existing approaches can be sub-optimal even in natural settings and obtain optimality guarantees for our method. Empirical results demonstrate the effectiveness of the algorithm. For example, compared to a parameter-merging baseline, MOD achieves 12.8% overall reward improvement when equally optimizing towards\n3\nobjectives. Moreover, we experiment with MOD on combining three fully-finetuned LMs of different model sizes, each aimed at different objectives such as safety, coding, and general user preference. 
Unlike traditional methods that require careful curation of a mixture of datasets to achieve comprehensive improvement, we can quickly experiment with preference weightings using MOD to find the best combination of models. Our best combination reduces toxicity on Toxigen to nearly 0% and achieves 7.9--33.3% improvement across three other metrics (\ni.e.\n, Codex@1, GSM-COT, BBH-COT).", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96731"} +{"video_file": "3dn1hINA6o_39025717.mp4", "openreview_id": "3dn1hINA6o", "slideslive_id": 39025717, "venue": "nips2024", "title": "The Edge-of-Reach Problem in Offline Model-Based Reinforcement Learning", "status": "Poster", "keywords": "Offline Reinforcement Learning;Model-Based Reinforcement Learning", "tldr": "Existing offline model-based methods fail under the true dynamics. This reveals the previously-unnoticed \u201cedge-of-reach\u201d problem, and leads us to propose RAVL - a new method that continues to work even in the absence of model uncertainty.", "abstract": "Offline reinforcement learning (RL) aims to train agents from pre-collected datasets. However, this comes with the added challenge of estimating the value of behaviors not covered in the dataset. Model-based methods offer a potential solution by training an approximate dynamics model, which then allows collection of additional synthetic data via rollouts in this model. The prevailing theory treats this approach as online RL in an approximate dynamics model, and any remaining performance gap is therefore understood as being due to dynamics model errors. In this paper, we analyze this assumption and investigate how popular algorithms perform as the learned dynamics model is improved. In contrast to both intuition and theory, if the learned dynamics model is replaced by the true error-free dynamics, existing model-based methods completely fail. This reveals a key oversight: The theoretical foundations assume sampling of full horizon rollouts in the learned dynamics model; however, in practice, the number of model-rollout steps is aggressively reduced to prevent accumulating errors. We show that this truncation of rollouts results in a set of edge-of-reach states at which we are effectively \"bootstrapping from the void.\" This triggers pathological value overestimation and complete performance collapse. We term this the edge-of-reach problem. Based on this new insight, we fill important gaps in existing theory, and reveal how prior model-based methods are primarily addressing the edge-of-reach problem, rather than model-inaccuracy as claimed. Finally, we propose Reach-Aware Value Learning (RAVL), a simple and robust method that directly addresses the edge-of-reach problem and hence - unlike existing methods - does not fail as the dynamics model is improved. 
Since world models will inevitably improve, we believe this is a key step towards future-proofing offline RL.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96730"} +{"video_file": "3f8i9GlBzu_39025789.mp4", "openreview_id": "3f8i9GlBzu", "slideslive_id": 39025789, "venue": "nips2024", "title": "Can Transformers Smell Like Humans?", "status": "Spotlight", "keywords": "Representational Alignment;Olfactory;Transformers;Representation Learning;Neuroscience", "tldr": "In this work, we evaluate representational alignment between human olfactory perception and representations extracted from pre-trained transformers.", "abstract": "The human brain encodes stimuli from the environment into representations that form a sensory perception of the world. Despite recent advances in understanding visual and auditory perception, olfactory perception remains an under-explored topic in the machine learning community due to the lack of large-scale datasets annotated with labels of human olfactory perception. In this work, we ask the question of whether pre-trained transformer models of chemical structures encode representations that are aligned with human olfactory perception, i.e., can transformers smell like humans? We demonstrate that representations encoded from transformers pre-trained on general chemical structures are highly aligned with human olfactory perception. We use multiple datasets and different types of perceptual representations to show that the representations encoded by transformer models are able to predict: (i) labels associated with odorants\u200c\u200c provided by experts; (ii) continuous ratings provided by human participants with respect to pre-defined descriptors; and (iii) similarity ratings between odorants provided by human participants. Finally, we evaluate the extent to which this alignment is associated with physicochemical features of odorants known to be relevant for olfactory decoding.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/96729"} +{"video_file": "3hcn0UxP72_39028010.mp4", "openreview_id": "3hcn0UxP72", "slideslive_id": 39028010, "venue": "nips2024", "title": "Topological obstruction to the training of shallow ReLU neural networks", "status": "Poster", "keywords": "learning dynamics;topology;two-layer neural networks;ReLU networks;geometry;symmetry;loss landscape;gradient flow", "tldr": "For some initializations, the space of possible gradient flow trajectories of a shallow ReLU neural network is disconnected, resulting in the training being impossible.", "abstract": "Studying the interplay between the geometry of the loss landscape and the optimization trajectories of simple neural networks is a fundamental step for understanding their behavior in more complex settings. This paper reveals the presence of topological obstruction in the loss landscape of shallow ReLU neural networks trained using gradient flow. We discuss how the homogeneous nature of the ReLU activation function constrains the training trajectories to lie on a product of quadric hypersurfaces whose shape depends on the particular initialization of the network's parameters. When the neural network's output is a single scalar, we prove that these quadrics can have multiple connected components, limiting the set of reachable parameters during training. 
We analytically compute the number of these components and discuss the possibility of mapping one to the other through neuron rescaling and permutation. In this simple setting, we find that the non-connectedness results in a topological obstruction, which, depending on the initialization, can make the global optimum unreachable. We validate this result with numerical experiments.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96726"} +{"video_file": "3ie8NWA1El_39028662.mp4", "openreview_id": "3ie8NWA1El", "slideslive_id": 39028662, "venue": "nips2024", "title": "HyperPrism: An Adaptive Non-linear Aggregation Framework for Distributed Machine Learning over Non-IID Data and Time-varying Communication Links", "status": "Poster", "keywords": "Distributed Machine Learning;Time-varying Communication;Non-Linear Aggregation;HyperNetwork", "tldr": "Introduces Kolmogorov Means As A New Distributed Machine Learning Communication Primitive.", "abstract": "While Distributed Machine Learning (DML) has been widely used to achieve decent performance, it is still challenging to take full advantage of data and devices distributed at multiple vantage points to adapt and learn, especially it is non-trivial to address dynamic and divergence challenges based on the linear aggregation framework as follows: (1) heterogeneous learning data at different devices (i.e., non-IID data) resulting in model divergence and (2) in the case of time-varying communication links, the limited ability for devices to reconcile model divergence. In this paper, we contribute a non-linear class aggregation framework HyperPrism that leverages distributed mirror descent with averaging done in the mirror descent dual space and adapts the degree of Weighted Power Mean (WPM) used in each round. Moreover, HyperPrism could adaptively choose different mapping for different layers of the local model with a dedicated hypernetwork per device, achieving automatic optimization of DML in high divergence settings. We perform rigorous analysis and experimental evaluations to demonstrate the effectiveness of adaptive, mirror-mapping DML. In particular, we extend the generalizability of existing related works and position them as special cases within HyperPrism. Our experimental results show that HyperPrism can improve the convergence speed up to 98.63% and scale well to more devices compared with the state-of-the-art, all with little additional computation overhead compared to traditional linear aggregation.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96724"} +{"video_file": "3j2nasmKkP_39026508.mp4", "openreview_id": "3j2nasmKkP", "slideslive_id": 39026508, "venue": "nips2024", "title": "Cluster-wise Graph Transformer with Dual-granularity Kernelized Attention", "status": "Spotlight", "keywords": "Graph Based Learning", "tldr": "We introduce the Cluster-wise Graph Transformer (Cluster-GT) with a novel Node-to-Cluster Attention (N2C-Attn) mechanism, utilizing a dual-granularity kernel to capture information at both node and cluster levels.", "abstract": "In the realm of graph learning, there is a category of methods that conceptualize graphs as hierarchical structures, utilizing node clustering to capture broader structural information. While generally effective, these methods often rely on a fixed graph coarsening routine, leading to overly homogeneous cluster representations and loss of node-level information. 
In this paper, we envision the graph as a network of interconnected node sets without compressing each cluster into a single embedding. To enable effective information transfer among these node sets, we propose the Node-to-Cluster Attention (N2C-Attn) mechanism. N2C-Attn incorporates techniques from Multiple Kernel Learning into the kernelized attention framework, effectively capturing information at both node and cluster levels. We then devise an efficient form for N2C-Attn using the cluster-wise message-passing framework, achieving linear time complexity. We further analyze how N2C-Attn combines bi-level feature maps of queries and keys, demonstrating its capability to merge dual-granularity information. The resulting architecture, Cluster-wise Graph Transformer (Cluster-GT), which uses node clusters as tokens and employs our proposed N2C-Attn module, shows superior performance on various graph-level tasks. Code is available at https://github.com/LUMIA-Group/Cluster-wise-Graph-Transformer.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/96721"} +{"video_file": "3l2HnZXNou_39024488.mp4", "openreview_id": "3l2HnZXNou", "slideslive_id": 39024488, "venue": "nips2024", "title": "Multi-Agent Coordination via Multi-Level Communication", "status": "Poster", "keywords": "multi-agent reinforcement learning", "tldr": "We propose SeqComm, a multi-agent communication scheme that enhances coordination by prioritizing actions and communication.", "abstract": "The partial observability and stochasticity in multi-agent settings can be mitigated by accessing more information about others via communication. However, the coordination problem still exists since agents cannot communicate actual actions with each other at the same time due to the circular dependencies. In this paper, we propose a novel multi-level communication scheme, Sequential Communication (SeqComm). SeqComm treats agents asynchronously (the upper-level agents make decisions before the lower-level ones) and has two communication phases. In the negotiation phase, agents determine the priority of decision-making by communicating hidden states of observations and comparing the value of intention, which is obtained by modeling the environment dynamics. In the launching phase, the upper-level agents take the lead in making decisions and then communicate their actions with the lower-level agents. Theoretically, we prove the policies learned by SeqComm are guaranteed to improve monotonically and converge. Empirically, we show that SeqComm outperforms existing methods in a variety of cooperative multi-agent tasks.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96719"} +{"video_file": "3lQgEPRxeu_39028522.mp4", "openreview_id": "3lQgEPRxeu", "slideslive_id": 39028522, "venue": "nips2024", "title": "Learning General Parameterized Policies for Infinite Horizon Average Reward Constrained MDPs via Primal-Dual Policy Gradient Algorithm", "status": "Poster", "keywords": "Average Reward MDP;Constraint Violation;Regret.", "tldr": "Regret and constraint violation analysis of infinite horizon average reward constraint MDP.", "abstract": "This paper explores the realm of infinite horizon average reward Constrained Markov Decision Processes (CMDPs). To the best of our knowledge, this work is the first to delve into the regret and constraint violation analysis of average reward CMDPs with a general policy parametrization. 
To address this challenge, we propose a primal dual-based policy gradient algorithm that adeptly manages the constraints while ensuring a low regret guarantee toward achieving a global optimal policy. In particular, our proposed algorithm achieves\nO\n~\n(\nT\n4\n/\n5\n)\nobjective regret and\nO\n~\n(\nT\n4\n/\n5\n)\nconstraint violation bounds.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96718"} +{"video_file": "3lic0JgPRZ_39026323.mp4", "openreview_id": "3lic0JgPRZ", "slideslive_id": 39026323, "venue": "nips2024", "title": "Learning to Decouple the Lights for 3D Face Texture Modeling", "status": "Poster", "keywords": "Face Texture;Light Decoupling;Neural Representation", "tldr": "Learning to recover face textures under illumination affected by external occlusions", "abstract": "Existing research has made impressive strides in reconstructing human facial shapes and textures from images with well-illuminated faces and minimal external occlusions. Nevertheless, it remains challenging to recover accurate facial textures from scenarios with complicated illumination affected by external occlusions, \\eg a face that is partially obscured by items such as a hat. Existing works based on the assumption of single and uniform illumination cannot correctly process these data. In this work, we introduce a novel approach to model 3D facial textures under such unnatural illumination. Instead of assuming single illumination, our framework learns to imitate the unnatural illumination as a composition of multiple separate light conditions combined with learned neural representations, named Light Decoupling. According to experiments on both single images and video sequences, we demonstrate the effectiveness of our approach in modeling facial textures under challenging illumination affected by occlusions.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96717"} +{"video_file": "3mCr7ZNdSw_39025500.mp4", "openreview_id": "3mCr7ZNdSw", "slideslive_id": 39025500, "venue": "nips2024", "title": "Privacy without Noisy Gradients: Slicing Mechanism for Generative Model Training", "status": "Poster", "keywords": "differential privacy;synthetic data generation;tabular data;GAN;f divergence;slicing", "tldr": "We enable private training of GANs using random slicing, avoiding the shortcomings and challenges of DPSGD.", "abstract": "Training generative models with differential privacy (DP) typically involves injecting noise into gradient updates or adapting the discriminator's training procedure. As a result, such approaches often struggle with hyper-parameter tuning and convergence. We consider the \\emph{slicing privacy mechanism} that injects noise into random low-dimensional projections of the private data, and provide strong privacy guarantees for it. These noisy projections are used for training generative models. To enable optimizing generative models using this DP approach, we introduce the \\emph{smoothed-sliced\nf\n-divergence} and show it enjoys statistical consistency.\nMoreover, we present a kernel-based estimator for this divergence, circumventing the need for adversarial training. Extensive numerical experiments demonstrate that our approach can generate synthetic data of higher quality compared with baselines. 
Beyond performance improvement, our method, by sidestepping the need for noisy gradients, offers data scientists the flexibility to adjust generator architecture and hyper-parameters, run the optimization over any number of epochs, and even restart the optimization process---all without incurring additional privacy costs.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96715"} +{"video_file": "3uQtNWNTwz_39025904.mp4", "openreview_id": "3uQtNWNTwz", "slideslive_id": 39025904, "venue": "nips2024", "title": "Zero-to-Hero: Enhancing Zero-Shot Novel View Synthesis via Attention Map Filtering", "status": "Poster", "keywords": "Novel View Synthesis;Image Generative Models;Diffusion Models", "tldr": "Training-free boosting of diffusion based novel view generation through attention map manipulations.", "abstract": "Generating realistic images from arbitrary views based on a single source image remains a significant challenge in computer vision, with broad applications ranging from e-commerce to immersive virtual experiences. Recent advancements in diffusion models, particularly the Zero-1-to-3 model, have been widely adopted for generating plausible views, videos, and 3D models. However, these models still struggle with inconsistencies and implausibility in new views generation, especially for challenging changes in viewpoint. In this work, we propose Zero-to-Hero, a novel test-time approach that enhances view synthesis by manipulating attention maps during the denoising process of Zero-1-to-3. By drawing an analogy between the denoising process and stochastic gradient descent (SGD), we implement a filtering mechanism that aggregates attention maps, enhancing generation reliability and authenticity. This process improves geometric consistency without requiring retraining or significant computational resources. Additionally, we modify the self-attention mechanism to integrate information from the source view, reducing shape distortions. These processes are further supported by a specialized sampling schedule. Experimental results demonstrate substantial improvements in fidelity and consistency, validated on a diverse set of out-of-distribution objects. Additionally, we demonstrate the general applicability and effectiveness of Zero-to-Hero in multi-view, and image generation conditioned on semantic maps and pose.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96707"} +{"video_file": "3vHfwL2stG_39028004.mp4", "openreview_id": "3vHfwL2stG", "slideslive_id": 39028004, "venue": "nips2024", "title": "The Ladder in Chaos: Improving Policy Learning by Harnessing the Parameter Evolving Path in A Low-dimensional Space", "status": "Poster", "keywords": "Reinforcement Learning;Policy Optimization", "tldr": "This paper reveals the parameter evolving path of policy network in a low-dimensional space and propose a general method to leverage the path for better learning performance.", "abstract": "Knowing the learning dynamics of policy is significant to unveiling the mysteries of Reinforcement Learning (RL). It is especially crucial yet challenging to Deep RL, from which the remedies to notorious issues like sample inefficiency and learning instability could be obtained. In this paper, we study how the policy networks of typical DRL agents evolve during the learning process by empirically investigating several kinds of temporal change for each policy parameter. 
In popular MuJoCo and DeepMind Control Suite (DMC) environments, we find common phenomena for TD3 and RAD agents: (1) the activity of policy network parameters is highly asymmetric and policy networks advance monotonically along a very limited number of major parameter directions; (2) severe detours occur in parameter update and harmonic-like changes are observed for all minor parameter directions. By performing a novel temporal SVD along the policy learning path, the major and minor parameter directions are identified as the columns of the right unitary matrix associated with dominant and insignificant singular values respectively. Driven by the discoveries above, we propose a simple and effective method, called Policy Path Trimming and Boosting (PPTB), as a general plug-in improvement to DRL algorithms. The key idea of PPTB is to trim the policy learning path by canceling the policy updates in minor parameter directions, and boost the learning path by encouraging the advance in major directions. In experiments, we demonstrate that our method improves the learning performance of TD3, RAD, and DoubleDQN regarding scores and efficiency in MuJoCo, DMC, and MinAtar tasks respectively.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96705"}
{"video_file": "3xHCaDdYcc_39028330.mp4", "openreview_id": "3xHCaDdYcc", "slideslive_id": 39028330, "venue": "nips2024", "title": "ETO:Efficient Transformer-based Local Feature Matching by Organizing Multiple Homography Hypotheses", "status": "Poster", "keywords": "local feature matching;3d vision;pose estimation", "tldr": "We reduce the time consumption of local feature matching by reducing the units that participate in the transformer, while using more complicated homography hypotheses to maintain the accuracy.", "abstract": "We tackle the efficiency problem of learning local feature matching. Recent advancements have given rise to purely CNN-based and transformer-based approaches, each augmented with deep learning techniques. While CNN-based methods often excel in matching speed, transformer-based methods tend to provide more accurate matches. We propose an efficient transformer-based network architecture for local feature matching. This technique is built on constructing multiple homography hypotheses to approximate the continuous correspondence in the real world and uni-directional cross-attention to accelerate the refinement. On the YFCC100M dataset, our matching accuracy is competitive with LoFTR, a state-of-the-art transformer-based architecture, while the inference speed is boosted to 4 times, even outperforming the CNN-based methods. Comprehensive evaluations on other open datasets such as Megadepth, ScanNet, and HPatches demonstrate our method's efficacy, highlighting its potential to significantly enhance a wide array of downstream applications.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96703"}
{"video_file": "41lovPOCo5_39026130.mp4", "openreview_id": "41lovPOCo5", "slideslive_id": 39026130, "venue": "nips2024", "title": "TableRAG: Million-Token Table Understanding with Language Models", "status": "Poster", "keywords": "large language model;large scale;tabular reasoning;retrieval;LLM", "tldr": "We introduce TableRAG, a framework that enables LLMs to handle large tables by retrieving only essential schema and cell data.
Additionally, we provide two new benchmarks, ArcadeQA and BirdQA, to evaluate performance on real-world large tables.", "abstract": "Recent advancements in language models (LMs) have notably enhanced their ability to reason with tabular data, primarily through program-aided mechanisms that manipulate and analyze tables. However, these methods often require the entire table as input, leading to scalability challenges due to the positional bias or context length constraints. In response to these challenges, we introduce TableRAG, a Retrieval-Augmented Generation (RAG) framework specifically designed for LM-based table understanding. TableRAG leverages query expansion combined with schema and cell retrieval to pinpoint crucial information before providing it to the LMs. This enables more efficient data encoding and precise retrieval, significantly reducing prompt lengths and mitigating information loss. We have developed two new million-token benchmarks from the Arcade and BIRD-SQL datasets to thoroughly evaluate TableRAG's effectiveness at scale. Our results demonstrate that TableRAG's retrieval design achieves the highest retrieval quality, leading to the new state-of-the-art performance on large-scale table understanding.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96701"}
{"video_file": "47loYmzxep_39027313.mp4", "openreview_id": "47loYmzxep", "slideslive_id": 39027313, "venue": "nips2024", "title": "E2E-MFD: Towards End-to-End Synchronous Multimodal Fusion Detection", "status": "Oral", "keywords": "Multimodal Fusion;Object detection", "tldr": "A novel end-to-end training algorithm for multimodal fusion detection", "abstract": "Multimodal image fusion and object detection are crucial for autonomous driving. While current methods have advanced the fusion of texture details and semantic information, their complex training processes hinder broader applications. Addressing this challenge, we introduce E2E-MFD, a novel end-to-end algorithm for multimodal fusion detection. E2E-MFD streamlines the process, achieving high performance with a single training phase. It employs synchronous joint optimization across components to avoid suboptimal solutions associated to individual tasks. Furthermore, it implements a comprehensive optimization strategy in the gradient matrix for shared parameters, ensuring convergence to an optimal fusion detection configuration. Our extensive testing on multiple public datasets reveals E2E-MFD's superior capabilities, showcasing not only visually appealing image fusion but also impressive detection outcomes, such as a 3.9% and 2.0% mAP50 increase on horizontal object detection dataset M3FD and oriented object detection dataset DroneVehicle, respectively, compared to state-of-the-art approaches.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96693"}
{"video_file": "483IPG0HWL_39026796.mp4", "openreview_id": "483IPG0HWL", "slideslive_id": 39026796, "venue": "nips2024", "title": "ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution", "status": "Poster", "keywords": "combinatorial optimization;hyper-heuristic;heuristic;neural combinatorial optimization;large language model;evolutionary algorithm", "tldr": "We use large language models to design heuristics for combinatorial optimization automatically.", "abstract": "The omnipresence of NP-hard combinatorial optimization problems (COPs) compels domain experts to engage in trial-and-error heuristic design.
The long-standing endeavor of design automation has gained new momentum with the rise of large language models (LLMs). This paper introduces Language Hyper-Heuristics (LHHs), an emerging variant of Hyper-Heuristics that leverages LLMs for heuristic generation, featuring minimal manual intervention and open-ended heuristic spaces. To empower LHHs, we present Reflective Evolution (ReEvo), a novel integration of evolutionary search for efficiently exploring the heuristic space, and LLM reflections to provide verbal gradients within the space. Across five heterogeneous algorithmic types, six different COPs, and both white-box and black-box views of COPs, ReEvo yields state-of-the-art and competitive meta-heuristics, evolutionary algorithms, heuristics, and neural solvers, while being more sample-efficient than prior LHHs.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96692"} +{"video_file": "4A5IQEjG8c_39028737.mp4", "openreview_id": "4A5IQEjG8c", "slideslive_id": 39028737, "venue": "nips2024", "title": "Slack-Free Spiking Neural Network Formulation for Hypergraph Minimum Vertex Cover", "status": "Poster", "keywords": "Neuromorphic computing;spiking neural network;hypergraph minimum vertex cover", "tldr": "We developed a novel SNN for hypergraph minimum vertex cover.", "abstract": "Neuromorphic computers open up the potential of energy-efficient computation using spiking neural networks (SNN), which consist of neurons that exchange spike-based information asynchronously. In particular, SNNs have shown promise in solving combinatorial optimization. Underpinning the SNN methods is the concept of energy minimization of an Ising model, which is closely related to quadratic unconstrained binary optimization (QUBO). Thus, the starting point for many SNN methods is reformulating the target problem as QUBO, then executing an SNN-based QUBO solver. For many combinatorial problems, the reformulation entails introducing penalty terms, potentially with slack variables, that implement feasibility constraints in the QUBO objective. For more complex problems such as hypergraph minimum vertex cover (HMVC), numerous slack variables are introduced which drastically increase the search domain and reduce the effectiveness of the SNN solver. In this paper, we propose a novel SNN formulation for HMVC. Rather than using penalty terms with slack variables, our SNN architecture introduces additional spiking neurons with a constraint checking and correction mechanism that encourages convergence to feasible solutions. In effect, our method obviates the need for reformulating HMVC as QUBO. 
Experiments on neuromorphic hardware show that our method consistently yielded high quality solutions for HMVC on real and synthetic instances where the SNN-based QUBO solver often failed, while consuming measurably less energy than global solvers on CPU.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96690"}
{"video_file": "4DA5vaPHFb_39026388.mp4", "openreview_id": "4DA5vaPHFb", "slideslive_id": 39026388, "venue": "nips2024", "title": "Expectile Regularization for Fast and Accurate Training of Neural Optimal Transport", "status": "Spotlight", "keywords": "Optimal Transport;Neural Optimal Transport;Expectile;Regularization;Kantorovich potentials", "tldr": "A 3-fold better and a 10-fold faster way to train neural optimal transport: all thanks to a new expectile regularizing loss function that sets an upper bound over the distribution of learning conjugate potentials.", "abstract": "We present a new approach for Neural Optimal Transport (NOT) training procedure, capable of accurately and efficiently estimating optimal transportation plan via specific regularization on dual Kantorovich potentials. The main bottleneck of existing NOT solvers is associated with the procedure of finding a near-exact approximation of the conjugate operator (i.e., the c-transform), which is done either by optimizing over non-convex max-min objectives or by the computationally intensive fine-tuning of the initial approximated prediction. We resolve both issues by proposing a new theoretically justified loss in the form of expectile regularization which enforces binding conditions on the learning process of the dual potentials. Such a regularization provides the upper bound estimation over the distribution of possible conjugate potentials and makes the learning stable, completely eliminating the need for additional extensive fine-tuning. Proposed method, called Expectile-Regularized Neural Optimal Transport (ENOT), outperforms previous state-of-the-art approaches in the established Wasserstein-2 benchmark tasks by a large margin (up to a 3-fold improvement in quality and up to a 10-fold improvement in runtime). Moreover, we showcase performance of ENOT for various cost functions in different tasks, such as image generation, demonstrating generalizability and robustness of the proposed algorithm.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/96684"}
{"video_file": "4DHoSjET4R_39025328.mp4", "openreview_id": "4DHoSjET4R", "slideslive_id": 39025328, "venue": "nips2024", "title": "Efficiency of the First-Price Auction in the Autobidding World", "status": "Poster", "keywords": "first-price auctions;ad auctions;autobidding;price of anarchy", "tldr": "We give tight price of anarchy bounds for first-price auctions in the autobidding world.", "abstract": "We study the price of anarchy of first-price auctions in the autobidding world, where bidders can be either utility maximizers (i.e., traditional bidders) or value maximizers (i.e., autobidders). We show that with autobidders only, the price of anarchy of first-price auctions is 1/2, and with both kinds of bidders, the price of anarchy degrades to about 0.457 (the precise number is given by an optimization). These results complement the recent result by [Jin and Lu, 2022] showing that the price of anarchy of first-price auctions with traditional bidders is 1 \u2212 1/e^2.
We further investigate a setting where the seller can utilize machine-learned advice to improve the efficiency of the auctions. There, we show that as the accuracy of the advice increases, the price of anarchy improves smoothly from about 0.457 to 1.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/96683"}
{"video_file": "4DcpFagQ9e_39026227.mp4", "openreview_id": "4DcpFagQ9e", "slideslive_id": 39026227, "venue": "nips2024", "title": "Score Distillation via Reparametrized DDIM", "status": "Poster", "keywords": "3D generation;diffusion;score distillation", "tldr": "We bridge a gap between 3D Score Distillation and 2D image generation both in terms of theory and qualitative results", "abstract": "While 2D diffusion models generate realistic, high-detail images, 3D shape generation methods like Score Distillation Sampling (SDS) built on these 2D diffusion models produce cartoon-like, over-smoothed shapes. To help explain this discrepancy, we show that the image guidance used in Score Distillation can be understood as the velocity field of a 2D denoising generative process, up to the choice of a noise term. In particular, after a change of variables, SDS resembles a high-variance version of Denoising Diffusion Implicit Models (DDIM) with a differently-sampled noise term: SDS introduces noise i.i.d. randomly at each step, while DDIM infers it from the previous noise predictions. This excessive variance can lead to over-smoothing and unrealistic outputs. We show that a better noise approximation can be recovered by inverting DDIM in each SDS update step. This modification makes SDS's generative process for 2D images almost identical to DDIM. In 3D, it removes over-smoothing, preserves higher-frequency detail, and brings the generation quality closer to that of 2D samplers. Experimentally, our method achieves better or similar 3D generation quality compared to other state-of-the-art Score Distillation methods, all without training additional neural networks or multi-view supervision, and providing useful insights into relationship between 2D and 3D asset generation with diffusion models.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96682"}
{"video_file": "4M9f8VMt2C_39026658.mp4", "openreview_id": "4M9f8VMt2C", "slideslive_id": 39026658, "venue": "nips2024", "title": "Long-form factuality in large language models", "status": "Poster", "keywords": "natural language processing;machine learning;factuality", "tldr": "We propose a dataset, evaluation method, and metric for benchmarking long-form factuality in large language models.", "abstract": "Large language models (LLMs) often generate content that contains factual errors when responding to fact-seeking prompts on open-ended topics. To benchmark a model\u2019s long-form factuality in open domains, we first use GPT-4 to generate LongFact, a prompt set comprising thousands of questions spanning 38 topics. We then propose that LLM agents can be used as automated evaluators for long-form factuality through a method which we call Search-Augmented Factuality Evaluator (SAFE). SAFE utilizes an LLM to break down a long-form response into a set of individual facts and to evaluate the accuracy of each fact using a multi-step reasoning process comprising sending search queries to Google Search and determining whether a fact is supported by the search results. Furthermore, we propose extending F1 score as an aggregated metric for long-form factuality.
To do so, we balance the percentage of supported facts in a response (precision) with the percentage of provided facts relative to a hyperparameter representing a user\u2019s preferred response length (recall).\nEmpirically, we demonstrate that LLM agents can outperform crowdsourced human annotators\u2014on a set of \u223c16k individual facts, SAFE agrees with crowdsourced human annotators 72% of the time, and on a random subset of 100 disagreement cases, SAFE wins 76% of the time. At the same time, SAFE is more than 20 times cheaper than human annotators. We also benchmark thirteen language models on LongFact across four model families (Gemini, GPT, Claude, and PaLM-2), finding that larger language models generally achieve better long-form factuality. LongFact, SAFE, and all experimental code are available at https://github.com/google-deepmind/long-form-factuality.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96675"}
{"video_file": "4NGlu45uyt_39025803.mp4", "openreview_id": "4NGlu45uyt", "slideslive_id": 39025803, "venue": "nips2024", "title": "Aligning Embeddings and Geometric Random Graphs: Informational Results and Computational Approaches for the Procrustes-Wasserstein Problem", "status": "Poster", "keywords": "unsupervised learning;alignment of embeddings;high dimensional statistics", "tldr": "We provide information-theoretic and computational results for the unsupervised matching of two high-dimensional point clouds.", "abstract": "The Procrustes-Wasserstein problem consists in matching two high-dimensional point clouds in an unsupervised setting, and has many applications in natural language processing and computer vision. We consider a planted model with two datasets X, Y that consist of n datapoints in R^d, where Y is a noisy version of X, up to an orthogonal transformation and a relabeling of the data points. This setting is related to the graph alignment problem in geometric models. In this work, we focus on the Euclidean transport cost between the point clouds as a measure of performance for the alignment. We first establish information-theoretic results, in the high (d \u226b log n) and low (d \u226a log n) dimensional regimes. We then study computational aspects and propose the \u2018Ping-Pong algorithm\u2019, alternatively estimating the orthogonal transformation and the relabeling, initialized via a Frank-Wolfe convex relaxation. We give sufficient conditions for the method to retrieve the planted signal after one single step. We provide experimental results to compare the proposed approach with the state-of-the-art method of Grave et al. (2019).", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96674"}
{"video_file": "4NGrHrhJPx_39028763.mp4", "openreview_id": "4NGrHrhJPx", "slideslive_id": 39028763, "venue": "nips2024", "title": "The Dormant Neuron Phenomenon in Multi-Agent Reinforcement Learning Value Factorization", "status": "Poster", "keywords": "dormant neurons; Multi-agent reinforcement learning", "tldr": "We studied the dormant neuron phenomenon in multi-agent reinforcement learning value decomposition and proposed a new method for recycling dormant neurons", "abstract": "In this work, we study the dormant neuron phenomenon in multi-agent reinforcement learning value factorization, where the mixing network suffers from reduced network expressivity caused by an increasing number of inactive neurons.
We demonstrate the presence of the dormant neuron phenomenon across multiple environments and algorithms, and show that this phenomenon negatively affects the learning process. We show that dormant neurons correlates with the existence of over-active neurons, which have large activation scores. To address the dormant neuron issue, we propose ReBorn, a simple but effective method that transfers the weights from over-active neurons to dormant neurons. We theoretically show that this method can ensure the learned action preferences are not forgotten after the weight-transferring procedure, which increases learning effectiveness. Our extensive experiments reveal that ReBorn achieves promising results across various environments and improves the performance of multiple popular value factorization approaches. The source code of ReBorn is available in \\url{https://github.com/xmu-rl-3dv/ReBorn}.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96673"} +{"video_file": "4NJBV6Wp0h_39026353.mp4", "openreview_id": "4NJBV6Wp0h", "slideslive_id": 39026353, "venue": "nips2024", "title": "LLM Evaluators Recognize and Favor Their Own Generations", "status": "Oral", "keywords": "LLMs;evaluations;benchmarking;situational-awareness", "tldr": "In two text-summarization tasks, we find evidence of a causal link between an LLM's self-recognition ability and bias toward its own outputs in evaluation.", "abstract": "Self-evaluation using large language models (LLMs) has proven valuable not only in benchmarking but also methods like reward modeling, constitutional AI, and self-refinement. But new biases are introduced due to the same LLM acting as both the evaluator and the evaluatee. One such bias is self-preference, where an LLM evaluator scores its own outputs higher than others\u2019 while human annotators consider them of equal quality. But do LLMs actually recognize their own outputs when they give those texts higher scores, or is it just a coincidence? In this paper, we investigate if self-recognition capability contributes to self-preference. We discover that, out of the box, LLMs such as GPT-4 and Llama 2 have non-trivial accuracy at distinguishing themselves from other LLMs and humans. By finetuning LLMs, we discover a linear correlation between self-recognition capability and the strength of self-preference bias; using controlled experiments, we show that the causal explanation resists straightforward confounders. We discuss how self-recognition can interfere with unbiased evaluations and AI safety more generally.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96672"} +{"video_file": "4NQ24cHnOi_39025798.mp4", "openreview_id": "4NQ24cHnOi", "slideslive_id": 39025798, "venue": "nips2024", "title": "Private Edge Density Estimation for Random Graphs: Optimal, Efficient and Robust", "status": "Spotlight", "keywords": "differential privacy;robustness;random graph;sum of squares;average-case complexity", "tldr": "A private, robust and polynomial-time algorithm based on sum-of-squares hierarchy that achieves optimal error rate for edge density estimation of random graphs", "abstract": "We give the first polynomial-time, differentially node-private, and robust algorithm for estimating the edge density of Erd\u0151s-R\u00e9nyi random graphs and their generalization, inhomogeneous random graphs. 
We further prove information-theoretical lower bounds, showing that the error rate of our algorithm is optimal up to logarithmic factors. Previous algorithms incur either exponential running time or suboptimal error rates.\nTwo key ingredients of our algorithm are (1) a new sum-of-squares algorithm for robust edge density estimation, and (2) the reduction from privacy to robustness based on sum-of-squares exponential mechanisms due to Hopkins et al. (STOC 2023).", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96671"}
{"video_file": "4OJdZhcwBb_39025519.mp4", "openreview_id": "4OJdZhcwBb", "slideslive_id": 39025519, "venue": "nips2024", "title": "A Method for Evaluating Hyperparameter Sensitivity in Reinforcement Learning", "status": "Poster", "keywords": "Reinforcement Learning;Hyperparameters;Empirical Methodology", "tldr": "An empirical methodology for evaluating hyperparameter sensitivity of reinforcement learning algorithms.", "abstract": "The performance of modern reinforcement learning algorithms critically relies on tuning ever increasing numbers of hyperparameters. Often, small changes in a hyperparameter can lead to drastic changes in performance, and different environments require very different hyperparameter settings to achieve state-of-the-art performance reported in the literature. We currently lack a scalable and widely accepted approach to characterizing these complex interactions. This work proposes a new empirical methodology for studying, comparing, and quantifying the sensitivity of an algorithm\u2019s performance to hyperparameter tuning for a given set of environments. We then demonstrate the utility of this methodology by assessing the hyperparameter sensitivity of several commonly used normalization variants of PPO. The results suggest that several algorithmic performance improvements may, in fact, be a result of an increased reliance on hyperparameter tuning.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96670"}
{"video_file": "4SAR7IRqmB_39024667.mp4", "openreview_id": "4SAR7IRqmB", "slideslive_id": 39024667, "venue": "nips2024", "title": "On the Complexity of Teaching a Family of Linear Behavior Cloning Learners", "status": "Poster", "keywords": "Machine Teaching;Behavior Cloning;Reinforcement Learning;Supervised Learning", "tldr": "We study optimal teaching complexity of a family of consistent linear behavior cloning learners.", "abstract": "We study optimal teaching for a family of Behavior Cloning learners that learn using a linear hypothesis class. In this setup, a knowledgeable teacher can demonstrate a dataset of state and action tuples and is required to teach an optimal policy to an entire family of BC learners using the smallest possible dataset. We analyze the linear family and design a novel teaching algorithm called `TIE' that achieves the instance optimal Teaching Dimension for the entire family. However, we show that this problem is NP-hard for action spaces with |A| > 2 and provide an efficient approximation algorithm with a log(|A| \u2212 1) guarantee on the optimal teaching size.
We present empirical results to demonstrate the effectiveness of our algorithm and compare it to various baselines in different teaching environments.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96669"} +{"video_file": "4TENzBftZR_39027312.mp4", "openreview_id": "4TENzBftZR", "slideslive_id": 39027312, "venue": "nips2024", "title": "iVideoGPT: Interactive VideoGPTs are Scalable World Models", "status": "Poster", "keywords": "world model;model-based reinforcement learning;video prediction;visual planning", "tldr": "We propose iVideoGPT, an autoregressive transformer architecture for scalable world models, pre-train it on millions of trajectories and adapt it to a wide range of tasks, including video prediction, visual planning, and model-based RL.", "abstract": "World models empower model-based agents to interactively explore, reason, and plan within imagined environments for real-world decision-making. However, the high demand for interactivity poses challenges in harnessing recent advancements in video generative models for developing world models at scale. This work introduces Interactive VideoGPT (iVideoGPT), a scalable autoregressive transformer framework that integrates multimodal signals\u2014visual observations, actions, and rewards\u2014into a sequence of tokens, facilitating an interactive experience of agents via next-token prediction. iVideoGPT features a novel compressive tokenization technique that efficiently discretizes high-dimensional visual observations. Leveraging its scalable architecture, we are able to pre-train iVideoGPT on millions of human and robotic manipulation trajectories, establishing a versatile foundation that is adaptable to serve as interactive world models for a wide range of downstream tasks. These include action-conditioned video prediction, visual planning, and model-based reinforcement learning, where iVideoGPT achieves competitive performance compared with state-of-the-art methods. Our work advances the development of interactive general world models, bridging the gap between generative video models and practical model-based reinforcement learning applications. Code and pre-trained models are available at https://thuml.github.io/iVideoGPT.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96668"} +{"video_file": "4TlUE0ufiz_39028552.mp4", "openreview_id": "4TlUE0ufiz", "slideslive_id": 39028552, "venue": "nips2024", "title": "Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity", "status": "Poster", "keywords": "Large Language Models;Conformal Prediction;Uncertainty Quantification;Foundation Models for Decision Making", "tldr": "This paper propose introspective planning to guide Large Language Models (LLMs) planning with uncertainty awareness, and achieve a tighter confidence bound with conformal prediction.", "abstract": "Large language models (LLMs) exhibit advanced reasoning skills, enabling robots to comprehend natural language instructions and strategically plan high-level actions through proper grounding. However, LLM hallucination may result in robots confidently executing plans that are misaligned with user goals or even unsafe in critical scenarios. Additionally, inherent ambiguity in natural language instructions can introduce uncertainty into the LLM's reasoning and planning. We propose introspective planning, a systematic approach that guides LLMs to refine their own uncertainty in alignment with inherent task ambiguity. 
Our approach constructs a knowledge base containing introspective reasoning examples as post-hoc rationalizations of human-selected safe and compliant plans, which are retrieved during deployment. Evaluations on three tasks, including a new safe mobile manipulation benchmark, indicate that introspection substantially improves both compliance and safety over state-of-the-art LLM-based planning methods. Additionally, we empirically show that introspective planning, in combination with conformal prediction, achieves tighter confidence bounds, maintaining statistical success guarantees while minimizing unnecessary user clarification requests.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96667"} +{"video_file": "4U18ZoRXTD_39027406.mp4", "openreview_id": "4U18ZoRXTD", "slideslive_id": 39027406, "venue": "nips2024", "title": "AV-GS: Learning Material and Geometry Aware Priors for Novel View Acoustic Synthesis", "status": "Poster", "keywords": "Spatial audio synthesis;Gaussian Splatting;Material characteristics;Geometry priors", "tldr": "An audio-visual Gaussian Splatting approach to learn holistic scene priors for novel view acoustic synthesis.", "abstract": "Novel view acoustic synthesis (NVAS) aims to render binaural audio at any target viewpoint, given a mono audio emitted by a sound source at a 3D scene. Existing methods have proposed NeRF-based implicit models to exploit visual cues as a condition for synthesizing binaural audio. However, in addition to low efficiency originating from heavy NeRF rendering, these methods all have a limited ability of characterizing the entire scene environment such as room geometry, material properties, and the spatial relation between the listener and sound source. To address these issues, we propose a novel Audio-Visual Gaussian Splatting (AV-GS) model. To obtain a material-aware and geometry-aware condition for audio synthesis, we learn an explicit point-based scene representation with audio-guidance parameters on locally initialized Gaussian points, taking into account the space relation from the listener and sound source. To make the visual scene model audio adaptive, we propose a point densification and pruning strategy to optimally distribute the Gaussian points, with the per-point contribution in sound propagation (e.g., more points needed for texture-less wall surfaces as they affect sound path diversion). Extensive experiments validate the superiority of our AV-GS over existing alternatives on the real-world RWAS and simulation-based SoundSpaces datasets. Project page: \\url{https://surrey-uplab.github.io/research/avgs/}", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/96666"} +{"video_file": "4ZH48aGD60_39027011.mp4", "openreview_id": "4ZH48aGD60", "slideslive_id": 39027011, "venue": "nips2024", "title": "Active, anytime-valid risk controlling prediction sets", "status": "Poster", "keywords": "distribution free;conformal prediction;e-process;confidence sequence", "tldr": "An extension of risk controlling prediction sets to anytime-valid and active labeling regime.", "abstract": "Rigorously establishing the safety of black-box machine learning models with respect to critical risk measures is important for providing guarantees about the behavior of the model. Recently, a notion of a risk controlling prediction set (RCPS) has been introduced by Bates et. al. 
(JACM '24) for producing prediction sets that are statistically guaranteed to have low risk from machine learning models. Our method extends this notion to the sequential setting, where we provide guarantees even when the data is collected adaptively, and ensures the risk guarantee is anytime-valid, i.e., simultaneously holds at all time steps. Further, we propose a framework for constructing RCPSes for active labeling, i.e., allowing one to use a labeling policy that chooses whether to query the true label for each received data point, and ensures the expected proportion of data points whose labels are queried is below a predetermined label budget. We also describe how to use predictors (e.g., the machine learning model we are providing risk control guarantees for) to further improve the utility of our RCPSes by estimating the expected risk conditioned on the covariates. We characterize the optimal choices of label policy and predictor under a fixed label budget, and show a regret result that relates the estimation error of the optimal labeling policy and predictor to the wealth process that underlies our RCPSes. Lastly, we present practical ways of formulating label policies and we empirically show that our label policies use fewer labels to reach higher utility than naive baseline labeling strategies on both simulations and real data.", "primary_area": "active_learning", "site": "https://neurips.cc/virtual/2024/poster/96655"}
{"video_file": "4Zt7S0B0Jp_39028274.mp4", "openreview_id": "4Zt7S0B0Jp", "slideslive_id": 39028274, "venue": "nips2024", "title": "Chain-of-Thought Reasoning Without Prompting", "status": "Poster", "keywords": "Reasoning;large language models;decoding", "tldr": "Large language models can reason without any specialized prompting, when alternative tokens are considered during the decoding stage.", "abstract": "In enhancing the reasoning capabilities of large language models (LLMs), prior research primarily focuses on specific prompting techniques such as few-shot or zero-shot chain-of-thought (CoT) prompting. These methods, while effective, often involve manually intensive prompt engineering. Our study takes a novel approach by asking: Can LLMs reason effectively without any prompting? Our findings reveal that, intriguingly, CoT reasoning paths can be elicited from pre-trained LLMs by simply altering the \\textit{decoding} process. Rather than conventional greedy decoding, we investigate the top-k alternative tokens, uncovering that CoT paths are frequently inherent in these sequences. This approach not only bypasses the confounders of prompting but also allows us to assess the LLMs' \\textit{intrinsic} reasoning abilities. Moreover, we observe that the presence of a CoT in the decoding path correlates with a higher confidence in the model's decoded answer. This confidence metric effectively differentiates between CoT and non-CoT paths.
Extensive empirical studies on various reasoning benchmarks show that the proposed CoT-decoding effectively elicits reasoning capabilities from language models, which were previously obscured by standard greedy decoding.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96654"} +{"video_file": "4bJufOS6No_39028861.mp4", "openreview_id": "4bJufOS6No", "slideslive_id": 39028861, "venue": "nips2024", "title": "On Learning Multi-Modal Forgery Representation for Diffusion Generated Video Detection", "status": "Poster", "keywords": "Video Forensics\uff0cMulti-Modal Large Language Model", "tldr": "We propose a powerful detector for diffusion video forensics.", "abstract": "Large numbers of synthesized videos from diffusion models pose threats to information security and authenticity, leading to an increasing demand for generated content detection. However, existing video-level detection algorithms primarily focus on detecting facial forgeries and often fail to identify diffusion-generated content with a diverse range of semantics. To advance the field of video forensics, we propose an innovative algorithm named Multi-Modal Detection(MM-Det) for detecting diffusion-generated videos. MM-Det utilizes the profound perceptual and comprehensive abilities of Large Multi-modal Models (LMMs) by generating a Multi-Modal Forgery Representation (MMFR) from LMM's multi-modal space, enhancing its ability to detect unseen forgery content. Besides, MM-Det leverages an In-and-Across Frame Attention (IAFA) mechanism for feature augmentation in the spatio-temporal domain. A dynamic fusion strategy helps refine forgery representations for the fusion. Moreover, we construct a comprehensive diffusion video dataset, called Diffusion Video Forensics (DVF), across a wide range of forgery videos. MM-Det achieves state-of-the-art performance in DVF, demonstrating the effectiveness of our algorithm. Both source code and DVF are available at https://github.com/SparkleXFantasy/MM-Det.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96651"} +{"video_file": "4cU9ZvOkBz_39028548.mp4", "openreview_id": "4cU9ZvOkBz", "slideslive_id": 39028548, "venue": "nips2024", "title": "What is my quantum computer good for? Quantum capability learning with physics-aware neural networks", "status": "Poster", "keywords": "GNN;Quantum Computing;Quantum Benchmarking", "tldr": "A new physics-aware neural network architecture is proposed for predicting which quantum circuits a quantum computer will run with high fidelity.", "abstract": "Quantum computers have the potential to revolutionize diverse fields, including quantum chemistry, materials science, and machine learning. However, contemporary quantum computers experience errors that often cause quantum programs run on them to fail. Until quantum computers can reliably execute large quantum programs, stakeholders will need fast and reliable methods for assessing a quantum computer\u2019s capability\u2014i.e., the programs it can run and how well it can run them. Previously, off-the-shelf neural network architectures have been used to model quantum computers' capabilities, but with limited success, because these networks fail to learn the complex quantum physics that determines real quantum computers' errors. We address this shortcoming with a new quantum-physics-aware neural network architecture for learning capability models. 
Our scalable architecture combines aspects of graph neural networks with efficient approximations to the physics of errors in quantum programs. This approach achieves up to \u223c50 reductions in mean absolute error on both experimental and simulated data, over state-of-the-art models based on convolutional neural networks, and scales to devices with 100+ qubits.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/96649"}
{"video_file": "4lGPSbGe11_39025699.mp4", "openreview_id": "4lGPSbGe11", "slideslive_id": 39025699, "venue": "nips2024", "title": "Is Cross-validation the Gold Standard to Estimate Out-of-sample Model Performance?", "status": "Poster", "keywords": "cross-validation;plug-in;uncertainty quantification;nonparametric models", "tldr": "Plug-in approaches perform no worse than cross-validation approaches when estimating out-of-sample model performance.", "abstract": "Cross-Validation (CV) is the default choice for estimating the out-of-sample performance of machine learning models. Despite its wide usage, its statistical benefits have remained half-understood, especially in challenging nonparametric regimes. In this paper we fill in this gap and show that, in terms of estimating the out-of-sample performances, for a wide spectrum of models, CV does not statistically outperform the simple ``plug-in'' approach where one reuses training data for testing evaluation. Specifically, in terms of both the asymptotic bias and coverage accuracy of the associated interval for out-of-sample evaluation, K-fold CV provably cannot outperform plug-in regardless of the rate at which the parametric or nonparametric models converge. Leave-one-out CV can have a smaller bias as compared to plug-in; however, this bias improvement is negligible compared to the variability of the evaluation, and in some important cases leave-one-out again does not outperform plug-in once this variability is taken into account. We obtain our theoretical comparisons via a novel higher-order Taylor analysis that dissects the limit theorems of testing evaluations, which applies to model classes that are not amenable to previously known sufficient conditions. Our numerical results demonstrate that plug-in performs indeed no worse than CV in estimating model performance across a wide range of examples.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96639"}
{"video_file": "4oAt5L4lYe_39027193.mp4", "openreview_id": "4oAt5L4lYe", "slideslive_id": 39027193, "venue": "nips2024", "title": "ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction", "status": "Poster", "keywords": "Machine Learning System;Large Language Model Inference;Key-Value Cache Eviction", "tldr": "We propose a method to dynamically evict some KV cache tokens (pages) or recall evicted ones for different query tokens, reducing the memory usage of the KV cache and the latency of attention computation with little accuracy loss.", "abstract": "Large Language Models (LLMs) are widely used in today's tasks of natural language processing. To support applications like multi-turn chats, document understanding, and content generation, models with long context lengths are growing in importance. However, managing long contexts brings substantial challenges due to the expansion of key-value cache (KV cache). Longer KV cache requires larger memory, limiting the batch-size thus decreasing throughput.
Also, computing attention over long KV cache incurs more memory access, hurting the end-to-end latency. Prior works find that it is sufficient to use only the recent and high-impact tokens for attention computation, allowing the eviction of less vital tokens to shrink cache size. Nonetheless, we observe a dynamic shift in token importance across different decoding steps. Tokens initially evicted might regain importance after certain decoding steps. To address this, we propose ArkVale, a page-based KV cache manager that can recognize and recall currently important tokens evicted before. We asynchronously copy the filled page into external memory (e.g., CPU memory) as backup and summarize it into a much smaller digest by constructing the bounding-volume of its keys. Before attention computation, we measure all pages' importance based on their digests, recall the important ones, evict the unimportant ones, and select the top-ranked pages for attention computation. Experiment results show that ArkVale performs well on various long context tasks with negligible accuracy loss under a 2k\u223c4k cache budget and can improve decoding latency to 2.2\u00d7 and batching throughput to 4.6\u00d7 because it applies attention on only a small subset of pages and reduces per-sample memory usage of KV cache.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/96635"}
{"video_file": "4syq5cgwA2_39027818.mp4", "openreview_id": "4syq5cgwA2", "slideslive_id": 39027818, "venue": "nips2024", "title": "Gradient-based Discrete Sampling with Automatic Cyclical Scheduling", "status": "Poster", "keywords": "MCMC;Discrete Spaces;Sampling;EBM", "tldr": "We improve gradient based discrete sampling by adding cyclical schedules and automated hyperparameter tuning algorithm", "abstract": "Discrete distributions, particularly in high-dimensional deep models, are often highly multimodal due to inherent discontinuities. While gradient-based discrete sampling has proven effective, it is susceptible to becoming trapped in local modes due to the gradient information. To tackle this challenge, we propose an automatic cyclical scheduling, designed for efficient and accurate sampling in multimodal discrete distributions. Our method contains three key components: (1) a cyclical step size schedule where large steps discover new modes and small steps exploit each mode; (2) a cyclical balancing schedule, ensuring \"balanced\" proposals for given step sizes and high efficiency of the Markov chain; and (3) an automatic tuning scheme for adjusting the hyperparameters in the cyclical schedules, allowing adaptability across diverse datasets with minimal tuning. We prove the non-asymptotic convergence and inference guarantee for our method in general discrete distributions.
Extensive experiments demonstrate the superiority of our method in sampling complex multimodal discrete distributions.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/96627"} +{"video_file": "4t3ox9hj3z_39027447.mp4", "openreview_id": "4t3ox9hj3z", "slideslive_id": 39027447, "venue": "nips2024", "title": "When are dynamical systems learned from time series data statistically accurate?", "status": "Poster", "keywords": "generalization;ergodic theory and dynamical systems;chaotic dynamics;scientific machine learning", "tldr": "We combine ergodic theory with generalization to advance the theory and practice of learning chaotic systems using dynamical data.", "abstract": "Conventional notions of generalization often fail to describe the ability of learned models to capture meaningful information from dynamical data. A neural network that learns complex dynamics with a small test error may still fail to reproduce its \\emph{physical} behavior, including associated statistical moments and Lyapunov exponents. To address this gap, we propose an ergodic theoretic approach to generalization of complex dynamical models learned from time series data. Our main contribution is to define and analyze generalization of a broad suite of neural representations of classes of ergodic systems, including chaotic systems, in a way that captures emulating underlying invariant, physical measures. Our results provide theoretical justification for why regression methods for generators of dynamical systems (Neural ODEs) fail to generalize, and why their statistical accuracy improves upon adding Jacobian information during training. We verify our results on a number of ergodic chaotic systems and neural network parameterizations, including MLPs, ResNets, Fourier Neural layers, and RNNs.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/96626"} +{"video_file": "4vp0edVY4o_39026151.mp4", "openreview_id": "4vp0edVY4o", "slideslive_id": 39026151, "venue": "nips2024", "title": "Continual Learning with Global Alignment", "status": "Poster", "keywords": "Continual Learning;Global Alignment;Composition Learning", "tldr": "To address interference when continually learning across tasks, we learn globally aligned data representations by interpolating pre-trained token representations; and apply probing first strategy to reduce interference caused by the classifier.", "abstract": "Continual learning aims to sequentially learn new tasks without forgetting previous tasks' knowledge (catastrophic forgetting). One factor that can cause forgetting is the interference between the gradients on losses from different tasks. When the gradients on the current task's loss are in opposing directions to those on previous tasks' losses, updating the model for the current task may cause performance degradation on previous tasks. In this paper, we first identify causes of the above interference, and hypothesize that correlations between data representations are a key factor of interference. We then propose a method for promoting appropriate correlations between arbitrary tasks' data representations (i.e., global alignment) in individual task learning. Specifically, we learn the data representation as a task-specific composition of pre-trained token representations shared across all tasks. Then the correlations between different tasks' data representations are grounded by correlations between pre-trained token representations. 
We explore different ways to learn such compositions. Without experience replay, our model achieves SOTA performance in continual learning tasks. It also achieves advanced class-incremental performance through task-incremental training.", "primary_area": "online_learning", "site": "https://neurips.cc/virtual/2024/poster/96625"}
{"video_file": "4ztP4PujOG_39024856.mp4", "openreview_id": "4ztP4PujOG", "slideslive_id": 39024856, "venue": "nips2024", "title": "Motion Graph Unleashed: A Novel Approach to Video Prediction", "status": "Poster", "keywords": "Video;Motion;Low-level computer vision;Frame synthesis", "tldr": "This work proposed a novel motion representation called motion graph to facilitate more accurate video prediction with significantly lower GPU consumptions and a smaller model size.", "abstract": "We introduce motion graph, a novel approach to address the video prediction problem, i.e., predicting future video frames from limited past data. The motion graph transforms patches of video frames into interconnected graph nodes, to comprehensively describe the spatial-temporal relationships among them. This representation overcomes the limitations of existing motion representations such as image differences, optical flow, and motion matrix that either fall short in capturing complex motion patterns or suffer from excessive memory consumption. We further present a video prediction pipeline empowered by motion graph, exhibiting substantial performance improvements and cost reductions. Extensive experiments on various datasets, including UCF Sports, KITTI and Cityscapes, highlight the strong representative ability of motion graph. Especially on UCF Sports, our method matches and outperforms the SOTA methods with a significant reduction in model size by 78% and a substantial decrease in GPU memory utilization by 47%.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96622"}
{"video_file": "50nEnmVLRb_39026005.mp4", "openreview_id": "50nEnmVLRb", "slideslive_id": 39026005, "venue": "nips2024", "title": "Gaussian Process Bandits for Top-k Recommendations", "status": "Poster", "keywords": "Gaussian processes;Bandit algorithms;Top-k recommendations;Linear Algebra;Iterative Algorithms", "tldr": "This paper explores Gaussian process bandit algorithms using Kendall kernels for top-k ranking, introducing a new variant of the Kendall kernel, enabling fast inference, and including a regret analysis.", "abstract": "Algorithms that utilize bandit feedback to optimize top-k recommendations are vital for online marketplaces, search engines, and content platforms. However, the combinatorial nature of this problem poses a significant challenge, as the possible number of ordered top-k recommendations from n items grows exponentially with k. As a result, previous work often relies on restrictive assumptions about the reward or bandit feedback models, such as assuming that the feedback discloses rewards for each recommended item rather than a single scalar feedback for the entire set of top-k recommendations. We introduce a novel contextual bandit algorithm for top-k recommendations, leveraging a Gaussian process with a Kendall kernel to model the reward function. Our algorithm requires only scalar feedback from the top-k recommendations and does not impose restrictive assumptions on the reward structure. Theoretical analysis confirms that the proposed algorithm achieves sub-linear regret in relation to the number of rounds and arms.
Additionally, empirical results using a bandit simulator demonstrate that the proposed algorithm outperforms other baselines across various scenarios.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/96620"} +{"video_file": "51HQpkQy3t_39024739.mp4", "openreview_id": "51HQpkQy3t", "slideslive_id": 39024739, "venue": "nips2024", "title": "DiTFastAttn: Attention Compression for Diffusion Transformer Models", "status": "Poster", "keywords": "Diffusion Transformer;Attention;Acceleration", "tldr": "Attention Compression for Diffusion Transformer Models", "abstract": "Diffusion Transformers (DiT) excel at image and video generation but face computational challenges due to the quadratic complexity of self-attention operators. We propose DiTFastAttn, a post-training compression method to alleviate the computational bottleneck of DiT. We identify three key redundancies in the attention computation during DiT inference: (1) spatial redundancy, where many attention heads focus on local information; (2) temporal redundancy, with high similarity between the attention outputs of neighboring steps; (3) conditional redundancy, where conditional and unconditional inferences exhibit significant similarity. We propose three techniques to reduce these redundancies: (1)\nWindow Attention with Residual Sharing\nto reduce spatial redundancy; (2)\nAttention Sharing across Timesteps\nto exploit the similarity between steps; (3)\nAttention Sharing across CFG\nto skip redundant computations during conditional generation.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96619"} +{"video_file": "52r4XJYzjg_39028387.mp4", "openreview_id": "52r4XJYzjg", "slideslive_id": 39028387, "venue": "nips2024", "title": "Improving Context-Aware Preference Modeling for Language Models", "status": "Poster", "keywords": "context-specific preference modeling;preference modeling;reward modeling;language modeling;rlaif;rlhf", "tldr": "We propose context-specific preference datasets and conduct experiments to investigate the potential of context-specific preference modeling.", "abstract": "While finetuning language models from pairwise preferences has proven remarkably effective, the underspecified nature of natural language presents critical challenges. Direct preference feedback is uninterpretable, difficult to provide where multidimensional criteria may apply, and often inconsistent, either because it is based on incomplete instructions or provided by diverse principals. To address these challenges, we consider the two-step preference modeling procedure that first resolves the under-specification by selecting a context, and then evaluates preference with respect to the chosen context. We decompose reward modeling error according to these two steps, which suggests that supervising context in addition to context-specific preference may be a viable approach to aligning models with diverse human preferences. For this to work, the ability of models to evaluate context-specific preference is critical. To this end, we contribute context-conditioned preference datasets and accompanying experiments that investigate the ability of language models to evaluate context-specific preference. Unlike past datasets, where context-specific preference is highly correlated with general preference, our \"preference reversal\" datasets disentangle context-specific and general preferences to isolate context-specific capabilities. 
We use our datasets to (1) show that existing preference models benefit from, but fail to fully consider, added context, (2) finetune a context-aware reward model with context-specific performance exceeding that of GPT-4 and Llama 3 70B, and (3) investigate the potential value of context-aware preference modeling.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96617"} +{"video_file": "5AeLrXb9sQ_39027161.mp4", "openreview_id": "5AeLrXb9sQ", "slideslive_id": 39027161, "venue": "nips2024", "title": "TARSS-Net: Temporal-Aware Radar Semantic Segmentation Network", "status": "Poster", "keywords": "Radar Semantic Segmentation;Temporal Relation Modeling", "tldr": "Discussions on current temporal modeling mechanisms; a novel temporal information learning paradigm for radar semantic segmentation.", "abstract": "Radar signal interpretation plays a crucial role in remote detection and ranging. With the gradual display of the advantages of neural network technology in signal processing, learning-based radar signal interpretation is becoming a research hot-spot and made great progress. And since radar semantic segmentation (RSS) can provide more fine-grained target information, it has become a more concerned direction in this field. However, the temporal information, which is an important clue for analyzing radar data, has not been exploited sufficiently in present RSS frameworks. In this work, we propose a novel temporal information learning paradigm, i.e., data-driven temporal information aggregation with learned target-history relations. Following this idea, a flexible learning module, called Temporal Relation-Aware Module (TRAM) is carefully designed. TRAM contains two main blocks: i) an encoder for capturing the target-history temporal relations (TH-TRE) and ii) a learnable temporal relation attentive pooling (TRAP) for aggregating temporal information. Based on TRAM, an end-to-end Temporal-Aware RSS Network (TARSS-Net) is presented, which has outstanding performance on publicly available and our collected real-measured datasets. Code and supplementary materials are available at https://github.com/zlw9161/TARSS-Net.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96608"} +{"video_file": "5BXXoJh0Vr_39027317.mp4", "openreview_id": "5BXXoJh0Vr", "slideslive_id": 39027317, "venue": "nips2024", "title": "CausalStock: Deep End-to-end Causal Discovery for News-driven Multi-stock Movement Prediction", "status": "Poster", "keywords": "Causal discovery;Stock movement prediction;Text mining", "tldr": "We propose a novel news-driven multi-stock movement prediction framework called CausalStock.", "abstract": "There are two issues in news-driven multi-stock movement prediction tasks that are not well solved in the existing works. On the one hand, \"relation discovery\" is a pivotal part when leveraging the price information of other stocks to achieve accurate stock movement prediction. Given that stock relations are often unidirectional, such as the \"supplier-consumer\" relationship, causal relations are more appropriate to capture the impact between stocks. On the other hand, there is substantial noise existing in the news data leading to extracting effective information with difficulty. With these two issues in mind, we propose a novel framework called CausalStock for news-driven multi-stock movement prediction, which discovers the temporal causal relations between stocks. 
We design a lag-dependent temporal causal discovery mechanism to model the temporal causal graph distribution. Then a Functional Causal Model is employed to encapsulate the discovered causal relations and predict the stock movements. Additionally, we propose a Denoised News Encoder by taking advantage of the excellent text evaluation ability of large language models (LLMs) to extract useful information from massive news data. The experiment results show that CausalStock outperforms the strong baselines for both news-driven multi-stock movement prediction and multi-stock movement prediction tasks on six real-world datasets collected from the US, China, Japan, and UK markets. Moreover, getting benefit from the causal relations, CausalStock could offer a clear prediction mechanism with good explainability.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96607"} +{"video_file": "5DJBBACqim_39027028.mp4", "openreview_id": "5DJBBACqim", "slideslive_id": 39027028, "venue": "nips2024", "title": "MeMo: Meaningful, Modular Controllers via Noise Injection", "status": "Poster", "keywords": "modular neural network policy;policy transfer;imitation learning;reinforcement learning", "tldr": "We present a method for pretraining modular controllers from a single robot and environment that significantly speeds up RL training for locomotion and grasping when reused on more complex morphologies.", "abstract": "Robots are often built from standardized assemblies, (e.g. arms, legs, or fingers), but each robot must be trained from scratch to control all the actuators of all the parts together. In this paper we demonstrate a new approach that takes a single robot and its controller as input and produces a set of modular controllers for each of these assemblies such that when a new robot is built from the same parts, its control can be quickly learned by reusing the modular controllers. We achieve this with a framework called MeMo which learns (Me)aningful, (Mo)dular controllers. Specifically, we propose a novel modularity objective to learn an appropriate division of labor among the modules. We demonstrate that this objective can be optimized simultaneously with standard behavior cloning loss via noise injection. We benchmark our framework in locomotion and grasping environments on simple to complex robot morphology transfer. We also show that the modules help in task transfer. On both structure and task transfer, MeMo achieves improved training efficiency to graph neural network and Transformer baselines.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/96605"} +{"video_file": "5FATPIlWUJ_39027578.mp4", "openreview_id": "5FATPIlWUJ", "slideslive_id": 39027578, "venue": "nips2024", "title": "Robust Gaussian Processes via Relevance Pursuit", "status": "Poster", "keywords": "Gaussian process;robust regression;Bayesian optimization;submodular", "tldr": "Robust Gaussian Processes with strong empirical performance, and surprising convexity and approximation guarantees.", "abstract": "Gaussian processes (GPs) are non-parametric probabilistic regression models that are popular due to their flexibility, data efficiency, and well-calibrated uncertainty estimates. However, standard GP models assume homoskedastic Gaussian noise, while many real-world applications are subject to non-Gaussian corruptions. 
Variants of GPs that are more robust to alternative noise models have been proposed, and entail significant trade-offs between accuracy and robustness, and between computational requirements and theoretical guarantees. In this work, we propose and study a GP model that achieves robustness against sparse outliers by inferring data-point-specific noise levels with a sequential selection procedure maximizing the log marginal likelihood that we refer to as relevance pursuit. We show, surprisingly, that the model can be parameterized such that the associated log marginal likelihood is strongly concave in the data-point-specific noise variances, a property rarely found in either robust regression objectives or GP marginal likelihoods. This in turn implies the weak submodularity of the corresponding subset selection problem, and thereby proves approximation guarantees for the proposed algorithm. We compare the model\u2019s performance relative to other approaches on diverse regression and Bayesian optimization tasks, including the challenging but common setting of sparse corruptions of the labels within or close to the function range.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/96603"} +{"video_file": "5FHzrRGOKR_39025749.mp4", "openreview_id": "5FHzrRGOKR", "slideslive_id": 39025749, "venue": "nips2024", "title": "Federated Behavioural Planes: Explaining the Evolution of Client Behaviour in Federated Learning", "status": "Poster", "keywords": "federated learning;explainable AI;counterfactuals;secure aggregation", "tldr": "We introduce Federated Behavioural Planes (FBPs), a novel method to analyse, visualise, and explain the dynamics of FL systems, showing how clients behave under two different lenses: predictive performance and decision-making processes.", "abstract": "Federated Learning (FL), a privacy-aware approach in distributed deep learning environments, enables many clients to collaboratively train a model without sharing sensitive data, thereby reducing privacy risks. However, enabling human trust and control over FL systems requires understanding the evolving behaviour of clients, whether beneficial or detrimental for the training, which still represents a key challenge in the current literature. To address this challenge, we introduce Federated Behavioural Planes (FBPs), a novel method to analyse, visualise, and explain the dynamics of FL systems, showing how clients behave under two different lenses: predictive performance (error behavioural space) and decision-making processes (counterfactual behavioural space). Our experiments demonstrate that FBPs provide informative trajectories describing the evolving states of clients and their contributions to the global model, thereby enabling the identification of clusters of clients with similar behaviours. Leveraging the patterns identified by FBPs, we propose a robust aggregation technique named Federated Behavioural Shields to detect malicious or noisy client models, thereby enhancing security and surpassing the efficacy of existing state-of-the-art FL defense mechanisms. 
Our code is publicly available on GitHub.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96602"} +{"video_file": "5GCgNFZSyo_39025291.mp4", "openreview_id": "5GCgNFZSyo", "slideslive_id": 39025291, "venue": "nips2024", "title": "Minimizing UCB: a Better Local Search Strategy in Local Bayesian Optimization", "status": "Poster", "keywords": "Bayesian Optimization;Local Optimization;High dimensional;Minimizing upper confidence Bound", "tldr": "We propose two local Bayesian optimziation algorithms to show that minimizing the UCB is better strategy than gradient descent in local search with a Gaussian process surrogate.", "abstract": "Local Bayesian optimization is a promising practical approach to solve the high dimensional black-box function optimization problem. Among them is the approximated gradient class of methods, which implements a strategy similar to gradient descent. These methods have achieved good experimental results and theoretical guarantees. However, given the distributional properties of the Gaussian processes applied on these methods, there may be potential to further exploit the information of the Gaussian processes to facilitate the BO search. In this work, we develop the relationship between the steps of the gradient descent method and one that minimizes the Upper Confidence Bound (UCB), and show that the latter can be a better strategy than direct gradient descent when a Gaussian process is applied as a surrogate. Through this insight, we propose a new local Bayesian optimization algorithm, MinUCB, which replaces the gradient descent step with minimizing UCB in GIBO. We further show that MinUCB maintains a similar convergence rate with GIBO. We then improve the acquisition function of MinUCB further through a look ahead strategy, and obtain a more efficient algorithm LA-MinUCB. We apply our algorithms on different synthetic and real-world functions, and the results show the effectiveness of our method. Our algorithms also illustrate improvements on local search strategies from an upper bound perspective in Bayesian optimization, and provides a new direction for future algorithm design.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96598"} +{"video_file": "5H4l37IsZ8_39026562.mp4", "openreview_id": "5H4l37IsZ8", "slideslive_id": 39026562, "venue": "nips2024", "title": "Task-recency bias strikes back: Adapting covariances in Exemplar-Free Class Incremental Learning", "status": "Poster", "keywords": "continual learning;exemplar free;exemplar free class incremental learning;class incremental learning;exemplar-free", "tldr": "Exemplar-free class incremental learning, we explain task-recency bias in modern methods and develop a method that adapts means and covariances of classes from task to task", "abstract": "Exemplar-Free Class Incremental Learning (EFCIL) tackles the problem of training a model on a sequence of tasks without access to past data. Existing state-of-the-art methods represent classes as Gaussian distributions in the feature extractor's latent space, enabling Bayes classification or training the classifier by replaying pseudo features. However, we identify two critical issues that compromise their efficacy when the feature extractor is updated on incremental tasks. First, they do not consider that classes' covariance matrices change and must be adapted after each task. Second, they are susceptible to a task-recency bias caused by dimensionality collapse occurring during training. 
In this work, we propose AdaGauss - a novel method that adapts covariance matrices from task to task and mitigates the task-recency bias owing to the additional anti-collapse loss function. AdaGauss yields state-of-the-art results on popular EFCIL benchmarks and datasets when training from scratch or starting from a pre-trained backbone.", "primary_area": "online_learning", "site": "https://neurips.cc/virtual/2024/poster/96596"} +{"video_file": "5HQhYiGnYb_39025931.mp4", "openreview_id": "5HQhYiGnYb", "slideslive_id": 39025931, "venue": "nips2024", "title": "FIDE: Frequency-Inflated Conditional Diffusion Model for Extreme-Aware Time Series Generation", "status": "Poster", "keywords": "Diffusion model;time series;extreme values", "tldr": "A frequency-inflated conditional diffusion model that enhances time series generation by preserving extreme values distribution", "abstract": "Time series generation is a crucial aspect of data analysis, playing a pivotal role in learning the temporal patterns and their underlying dynamics across diverse fields. Conventional time series generation methods often struggle to capture extreme values adequately, diminishing their value in critical applications such as scenario planning and management for healthcare, finance, climate change adaptation, and beyond. In this paper, we introduce a conditional diffusion model called FIDE to address the challenge of preserving the distribution of extreme values in generative modeling for time series. FIDE employs a novel high-frequency inflation strategy in the frequency domain, preventing premature fade-out of the extreme value. It also extends traditional diffusion-based model, enabling the generation of samples conditioned on the block maxima, thereby enhancing the model's capacity to capture extreme events. Additionally, the FIDE framework incorporates the Generalized Extreme Value (GEV) distribution within its generative modeling framework, ensuring fidelity to both block maxima and overall data distribution. Experimental results on real-world and synthetic data showcase the efficacy of FIDE over baseline methods, highlighting its potential in advancing Generative AI for time series analysis, specifically in accurately modeling extreme events.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/96595"} +{"video_file": "5IFeCNA7zR_39027126.mp4", "openreview_id": "5IFeCNA7zR", "slideslive_id": 39027126, "venue": "nips2024", "title": "DARG: Dynamic Evaluation of Large Language Models via Adaptive Reasoning Graph", "status": "Poster", "keywords": "Dynamic Evaluation;Large Language Model", "tldr": "We propose DARG, which dynamically evaluates LLMs by generating complex test data with adaptive reasoning graphs, overcoming static benchmarks' limitations and revealing performance declines and biases as task complexity rises.", "abstract": "The current paradigm of evaluating Large Language Models (LLMs) through static benchmarks comes with significant limitations, such as vulnerability to data contamination and a lack of adaptability to the evolving capabilities of LLMs. Therefore, evaluation methods that can adapt and generate evaluation data with controlled complexity are urgently needed. In this work, we introduce Dynamic Evaluation of LLMs via Adaptive Reasoning Graph Evolvement (DARG) to dynamically extend current benchmarks with controlled complexity and diversity. 
Specifically, we first extract the reasoning graphs of data points in current benchmarks and then perturb the reasoning graphs to generate novel testing data. Such newly generated test samples can have different levels of complexity while maintaining linguistic diversity similar to the original benchmarks. We further use a code-augmented LLM to ensure the label correctness of newly generated data. We apply our DARG framework to diverse reasoning tasks in four domains with 15 state-of-the-art LLMs. Experimental results show that almost all LLMs experience a performance decrease with increased complexity and certain LLMs exhibit significant drops. Additionally, we find that LLMs exhibit more biases when being evaluated via the data generated by DARG with higher complexity levels. These observations provide useful insights into how to dynamically and adaptively evaluate LLMs.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/96593"} +{"video_file": "5K3VeoBnqc_39026203.mp4", "openreview_id": "5K3VeoBnqc", "slideslive_id": 39026203, "venue": "nips2024", "title": "AED: Adaptable Error Detection for Few-shot Imitation Policy", "status": "Poster", "keywords": "adaptable error detection;few-shot imitation;policy learning", "tldr": "The novel adaptable error detection (AED) problem is formulated for monitoring few-shot imitation policies' behaviors, and we propose PrObe to address the challenging problem by learning from the policy's feature representations.", "abstract": "We introduce a new task called Adaptable Error Detection (AED), which aims to identify behavior errors in few-shot imitation (FSI) policies based on visual observations in novel environments. The potential to cause serious damage to surrounding areas limits the application of FSI policies in real-world scenarios. Thus, a robust system is necessary to notify operators when FSI policies are inconsistent with the intent of demonstrations. This task introduces three challenges: (1) detecting behavior errors in novel environments, (2) identifying behavior errors that occur without revealing notable changes, and (3) lacking complete temporal information of the rollout due to the necessity of online detection. However, the existing benchmarks cannot support the development of AED because their tasks do not present all these challenges. To this end, we develop a cross-domain AED benchmark, consisting of 322 base and 153 novel environments. Additionally, we propose Pattern Observer (PrObe) to address these challenges. PrObe is equipped with a powerful pattern extractor and guided by novel learning objectives to parse discernible patterns in the policy feature representations of normal or error states. Through our comprehensive evaluation, PrObe demonstrates superior capability to detect errors arising from a wide range of FSI policies, consistently surpassing strong baselines. Moreover, we conduct detailed ablations and a pilot study on error correction to validate the effectiveness of the proposed architecture design and the practicality of the AED task, respectively. 
The AED project page can be found at https://aed-neurips.github.io/.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96591"} +{"video_file": "5SUP6vUVkP_39024414.mp4", "openreview_id": "5SUP6vUVkP", "slideslive_id": 39024414, "venue": "nips2024", "title": "Conditional Density Estimation with Histogram Trees", "status": "Poster", "keywords": "Conditional density estimation;MDL principle;Decision Tree;Histogram", "tldr": "Intrinsically interpretable models for conditional density estimation, employing decision trees equipped with histogram density estimation", "abstract": "Conditional density estimation (CDE) goes beyond regression by modeling the full conditional distribution, providing a richer understanding of the data than just the conditional mean in regression. This makes CDE particularly useful in critical application domains. However, interpretable CDE methods are understudied. Current methods typically employ kernel-based approaches, using kernel functions directly for kernel density estimation or as basis functions in linear models. In contrast, despite their conceptual simplicity and visualization suitability, tree-based methods---which are arguably more comprehensible---have been largely overlooked for CDE tasks. Thus, we propose the Conditional Density Tree (CDTree), a fully non-parametric model consisting of a decision tree in which each leaf is formed by a histogram model. Specifically, we formalize the problem of learning a CDTree using the minimum description length (MDL) principle, which eliminates the need for tuning the hyperparameter for regularization. Next, we propose an iterative algorithm that, although greedily, searches the optimal histogram for every possible node split. Our experiments demonstrate that, in comparison to existing interpretable CDE methods, CDTrees are both more accurate (as measured by the log-loss) and more robust against irrelevant features. Further, our approach leads to smaller tree sizes than existing tree-based models, which benefits interpretability.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/96586"} +{"video_file": "5VE1iLeYOz_39026841.mp4", "openreview_id": "5VE1iLeYOz", "slideslive_id": 39026841, "venue": "nips2024", "title": "Efficient Centroid-Linkage Clustering", "status": "Poster", "keywords": "clustering;hierarchical agglomerative clustering;hac;centroid linkage;algorithm;dynamic nearest neighbor search;adaptive updates", "tldr": "We give an efficient algorithm for Centroid-Linkage Hierarchical Agglomerative Clustering (HAC), which computes a c-approximate clustering in n^{1+1/c^2+o(1)} time and obtains significant speedups over existing baselines.", "abstract": "We give an algorithm for Centroid-Linkage Hierarchical Agglomerative Clustering (HAC), which computes a c-approximate clustering in roughly n^{1+O(1/c^2)} time. We obtain our result by combining a new centroid-linkage HAC algorithm with a novel fully dynamic data structure for nearest neighbor search which works under adaptive updates.\nWe also evaluate our algorithm empirically. By leveraging a state-of-the-art nearest-neighbor search library, we obtain a fast and accurate centroid-linkage HAC algorithm. 
Compared to an existing state-of-the-art exact baseline, our implementation maintains the clustering quality while delivering up to a 36\u00d7 speedup due to performing fewer distance comparisons.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/96585"} +{"video_file": "5WoYFypPv0_39027461.mp4", "openreview_id": "5WoYFypPv0", "slideslive_id": 39027461, "venue": "nips2024", "title": "Deep Support Vectors", "status": "Poster", "keywords": "model inversion;privacy attacks;generative model;dataset distillation;support vector machines", "tldr": "This paper finds support vector in deep learning model. These vectors has same characteristics to conventional SVM. It can be used in dataset distillation, global explanation. In its beyond, it can be used as latent generative model.", "abstract": "Deep learning has achieved tremendous success. However, unlike SVMs, which provide direct decision criteria and can be trained with a small dataset, it still has significant weaknesses due to its requirement for massive datasets during training and the black-box characteristics on decision criteria. This paper addresses these issues by identifying support vectors in deep learning models. To this end, we propose the DeepKKT condition, an adaptation of the traditional Karush-Kuhn-Tucker (KKT) condition for deep learning models, and confirm that generated Deep Support Vectors (DSVs) using this condition exhibit properties similar to traditional support vectors. This allows us to apply our method to few-shot dataset distillation problems and alleviate the black-box characteristics of deep learning models. Additionally, we demonstrate that the DeepKKT condition can transform conventional classification models into generative models with high fidelity, particularly as latent generation models using class labels as latent variables. We validate the effectiveness of DSVs using common datasets (ImageNet, CIFAR10 and CIFAR100) on the general architectures (ResNet and ConvNet), proving their practical applicability.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96584"} +{"video_file": "5atraF1tbg_39028256.mp4", "openreview_id": "5atraF1tbg", "slideslive_id": 39028256, "venue": "nips2024", "title": "PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining", "status": "Poster", "keywords": "privacy;auditing;machine learning;differential privacy;membership inference attack", "tldr": "PANORAMIA is a practical privacy audit for ML models, that does not retrain the target model or alter its training set or training algorithm.", "abstract": "We present PANORAMIA, a privacy leakage measurement framework for machine learning models that relies on membership inference attacks using generated data as non-members. By relying on generated non-member data, PANORAMIA eliminates the common dependency of privacy measurement tools on in-distribution non-member data. As a result, PANORAMIA does not modify the model, training data, or training process, and only requires access to a subset of the training data. 
We evaluate PANORAMIA on ML models for image and tabular data classification, as well as on large-scale language models.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96581"} +{"video_file": "5cIRdGM1uG_39028591.mp4", "openreview_id": "5cIRdGM1uG", "slideslive_id": 39028591, "venue": "nips2024", "title": "Position Coupling: Improving Length Generalization of Arithmetic Transformers Using Task Structure", "status": "Poster", "keywords": "Length Generalization;Transformers;Position Coupling;Positional Encoding;Out-of-distribution Generalization;Arithmetic Tasks;Algorithmic Tasks", "tldr": "To tackle the length generalization problem of decoder-only Transformer for solving arithmetic/algorithmic tasks, we inject the structure of the task into the Transformer by using the same position IDs for relevant tokens.", "abstract": "Even for simple arithmetic tasks like integer addition, it is challenging for Transformers to generalize to longer sequences than those encountered during training. To tackle this problem, we propose position coupling, a simple yet effective method that directly embeds the structure of the tasks into the positional encoding of a (decoder-only) Transformer. Taking a departure from the vanilla absolute position mechanism assigning unique position IDs to each of the tokens, we assign the same position IDs to two or more \"relevant\" tokens; for integer addition tasks, we regard digits of the same significance as in the same position. On the empirical side, we show that with the proposed position coupling, our models trained on 1 to 30-digit additions can generalize up to 200-digit additions (6.67x of the trained length). On the theoretical side, we prove that a 1-layer Transformer with coupled positions can solve the addition task involving exponentially many digits, whereas any 1-layer Transformer without positional information cannot entirely solve it. We also demonstrate that position coupling can be applied to other algorithmic tasks such as Nx2 multiplication and a two-dimensional task. Our codebase is available at github.com/HanseulJo/position-coupling.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96579"} +{"video_file": "5d2eScRiRC_39027135.mp4", "openreview_id": "5d2eScRiRC", "slideslive_id": 39027135, "venue": "nips2024", "title": "Imitating Language via Scalable Inverse Reinforcement Learning", "status": "Poster", "keywords": "Language Modeling;Inverse Reinforcement Learning;Imitation Learning;Supervised Fine-tuning", "tldr": "Investigating an effective, computationally scalable reinforcement learning perspective to imitation for language modeling.", "abstract": "The majority of language model training builds on imitation learning. It covers pretraining, supervised fine-tuning, and affects the starting conditions for reinforcement learning from human feedback (RLHF). The simplicity and scalability of maximum likelihood estimation (MLE) for next token prediction led to its role as predominant paradigm. However, the broader field of imitation learning can more effectively utilize the sequential structure underlying autoregressive generation. We focus on investigating the inverse reinforcement learning (IRL) perspective to imitation, extracting rewards and directly optimizing sequences instead of individual token likelihoods and evaluate its benefits for fine-tuning large language models. 
We provide a new angle, reformulating inverse soft-Q-learning as a temporal difference regularized extension of MLE. This creates a principled connection between MLE and IRL and allows trading off added complexity with increased performance and diversity of generations in the supervised fine-tuning (SFT) setting. We find clear advantages for IRL-based imitation, in particular for retaining diversity while maximizing task performance, rendering IRL a strong alternative on fixed SFT datasets even without online data generation. Our analysis of IRL-extracted reward functions further indicates benefits for more robust reward functions via tighter integration of supervised and preference-based LLM post-training.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96578"} +{"video_file": "5fybcQZ0g4_39025070.mp4", "openreview_id": "5fybcQZ0g4", "slideslive_id": 39025070, "venue": "nips2024", "title": "Categorical Flow Matching on Statistical Manifolds", "status": "Poster", "keywords": "Generative Model;Flow Matching;Statistical Manifold", "tldr": "We proposed Statistical Flow Matching (SFM), a novel flow matching framework on the manifold of parameterized probability measures for discrete generation.", "abstract": "We introduce Statistical Flow Matching (SFM), a novel and mathematically rigorous flow-matching framework on the manifold of parameterized probability measures inspired by the results from information geometry. We demonstrate the effectiveness of our method on the discrete generation problem by instantiating SFM on the manifold of categorical distributions whose geometric properties remain unexplored in previous discrete generative models. Utilizing the Fisher information metric, we equip the manifold with a Riemannian structure whose intrinsic geometries are effectively leveraged by following the shortest paths of geodesics. We develop an efficient training and sampling algorithm that overcomes numerical stability issues with a diffeomorphism between manifolds. Our distinctive geometric perspective of statistical manifolds allows us to apply optimal transport during training and interpret SFM as following the steepest direction of the natural gradient. Unlike previous models that rely on variational bounds for likelihood estimation, SFM enjoys the exact likelihood calculation for arbitrary probability measures. We manifest that SFM can learn more complex patterns on the statistical manifold where existing models often fail due to strong prior assumptions. Comprehensive experiments on real-world generative tasks ranging from image, text to biological domains further demonstrate that SFM achieves higher sampling quality and likelihood than other discrete diffusion or flow-based models.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96577"} +{"video_file": "5jRU8ufi8H_39025484.mp4", "openreview_id": "5jRU8ufi8H", "slideslive_id": 39025484, "venue": "nips2024", "title": "Unlocking Tokens as Data Points for Generalization Bounds on Larger Language Models", "status": "Spotlight", "keywords": "Large language models;generalization bounds;generalization;compression", "tldr": "We compute token-level generalization bounds for large language models at the scale of LlaMa2 70B.", "abstract": "Large language models (LLMs) with billions of parameters excel at predicting the next token in a sequence. 
Recent work computes non-vacuous compression-based generalization bounds for LLMs, but these bounds are vacuous for large models at the billion-parameter scale. Moreover, these bounds are obtained through restrictive compression techniques, bounding compressed models that generate low-quality text. Additionally, the tightness of these existing bounds depends on the number of IID documents in a training set rather than the much larger number of non-IID constituent tokens, leaving untapped potential for tighter bounds. In this work, we instead use properties of martingales to derive generalization bounds that benefit from the vast number of tokens in LLM training sets. Since a dataset contains far more tokens than documents, our generalization bounds not only tolerate but actually benefit from far less restrictive compression schemes. With Monarch matrices, Kronecker factorizations, and post-training quantization, we achieve non-vacuous generalization bounds for LLMs as large as LLaMA2-70B. Unlike previous approaches, our work achieves the first non-vacuous bounds for models that are deployed in practice and generate high-quality text.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96574"} +{"video_file": "5jYFoldunM_39027972.mp4", "openreview_id": "5jYFoldunM", "slideslive_id": 39027972, "venue": "nips2024", "title": "On the Adversarial Robustness of Benjamini Hochberg", "status": "Poster", "keywords": "multiple testing;p-values;false discovery rate;adversarial robust", "tldr": "As BH's FDR control could be relied upon in some critical contexts, we investigate its adversarial robustness via easily implementable, adversarial algorithms, and show that BH's control can be significantly broken with perturbations to few tests.", "abstract": "The Benjamini-Hochberg (BH) procedure is widely used to control the false detection rate (FDR) in multiple testing. Applications of this control abound in drug discovery, forensics, anomaly detection, and, in particular, machine learning, ranging from nonparametric outlier detection to out-of-distribution detection and one-class classification methods. Considering this control could be relied upon in critical safety/security contexts, we investigate its adversarial robustness. More precisely, we study under what conditions BH does and does not exhibit adversarial robustness, we present a class of simple and easily implementable adversarial test-perturbation algorithms, and we perform computational experiments. With our algorithms, we demonstrate that there are conditions under which BH's control can be significantly broken with relatively few (even just one) test score perturbation(s), and provide non-asymptotic guarantees on the expected adversarial-adjustment to FDR. 
Our technical analysis involves a combinatorial reframing of the BH procedure as a ``balls into bins'' process, and drawing a connection to generalized ballot problems to facilitate an information-theoretic approach for deriving non-asymptotic lower bounds.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/96573"} +{"video_file": "5l5bhYexYO_39026357.mp4", "openreview_id": "5l5bhYexYO", "slideslive_id": 39026357, "venue": "nips2024", "title": "Reinforcement Learning Gradients as Vitamin for Online Finetuning Decision Transformers", "status": "Spotlight", "keywords": "Decision transformer;reinforcement learning", "tldr": "We found that introducing RL gradient would significantly improve online decision transformer's performance during online finetuning.", "abstract": "Decision Transformers have recently emerged as a new and compelling paradigm for offline Reinforcement Learning (RL), completing a trajectory in an autoregressive way. While improvements have been made to overcome initial shortcomings, online finetuning of decision transformers has been surprisingly under-explored. The widely adopted state-of-the-art Online Decision Transformer (ODT) still struggles when pretrained with low-reward offline data. In this paper, we theoretically analyze the online-finetuning of the decision transformer, showing that the commonly used Return-To-Go (RTG) that's far from the expected return hampers the online fine-tuning process. This problem, however, is well-addressed by the value function and advantage of standard RL algorithms. As suggested by our analysis, in our experiments, we hence find that simply adding TD3 gradients to the finetuning process of ODT effectively improves the online finetuning performance of ODT, especially if ODT is pretrained with low-reward offline data. These findings provide new directions to further improve decision transformers.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96569"} +{"video_file": "5lLb7aXRN9_39027432.mp4", "openreview_id": "5lLb7aXRN9", "slideslive_id": 39027432, "venue": "nips2024", "title": "Unconditional stability of a recurrent neural circuit implementing divisive normalization", "status": "Poster", "keywords": "Recurrent Networks;Theoretical Neuroscience;Dynamical Systems;Normalization", "tldr": "We prove unconditional stability of a neurodynamical model and demonstrate its performance on sequential datasets.", "abstract": "Stability in recurrent neural models poses a significant challenge, particularly in developing biologically plausible neurodynamical models that can be seamlessly trained. Traditional cortical circuit models are notoriously difficult to train due to expansive nonlinearities in the dynamical system, leading to an optimization problem with nonlinear stability constraints that are difficult to impose. Conversely, recurrent neural networks (RNNs) excel in tasks involving sequential data but lack biological plausibility and interpretability. In this work, we address these challenges by linking dynamic divisive normalization (DN) to the stability of \"oscillatory recurrent gated neural integrator circuits'' (ORGaNICs), a biologically plausible recurrent cortical circuit model that dynamically achieves DN and that has been shown to simulate a wide range of neurophysiological phenomena. 
By using the indirect method of Lyapunov, we prove the remarkable property of unconditional local stability for an arbitrary-dimensional ORGaNICs circuit when the recurrent weight matrix is the identity. We thus connect ORGaNICs to a system of coupled damped harmonic oscillators, which enables us to derive the circuit's energy function, providing a normative principle of what the circuit, and individual neurons, aim to accomplish. Further, for a generic recurrent weight matrix, we prove the stability of the 2D model and demonstrate empirically that stability holds in higher dimensions. Finally, we show that ORGaNICs can be trained by backpropagation through time without gradient clipping/scaling, thanks to its intrinsic stability property and adaptive time constants, which address the problems of exploding, vanishing, and oscillating gradients. By evaluating the model's performance on RNN benchmarks, we find that ORGaNICs outperform alternative neurodynamical models on static image classification tasks and perform comparably to LSTMs on sequential tasks.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/96568"} +{"video_file": "5pJfDlaSxV_39028374.mp4", "openreview_id": "5pJfDlaSxV", "slideslive_id": 39028374, "venue": "nips2024", "title": "Verifiably Robust Conformal Prediction", "status": "Poster", "keywords": "Conformal Prediction;Adversarial Attacks;Distribution Shift;Formal Verification", "tldr": "We introduce two verifiably robust conformal prediction frameworks that are robust against adversarial perturbations and always guaranteed to return the desired coverage in the presence of such adversarial attacks.", "abstract": "Conformal Prediction (CP) is a popular uncertainty quantification method that provides distribution-free, statistically valid prediction sets, assuming that training and test data are exchangeable. In such a case, CP's prediction sets are guaranteed to cover the (unknown) true test output with a user-specified probability. Nevertheless, this guarantee is violated when the data is subjected to adversarial attacks, which often result in a significant loss of coverage. Recently, several approaches have been put forward to recover CP guarantees in this setting. These approaches leverage variations of randomised smoothing to produce conservative sets which account for the effect of the adversarial perturbations. They are, however, limited in that they only support \u2113_2-bounded perturbations and classification tasks. This paper introduces VRCP (Verifiably Robust Conformal Prediction), a new framework that leverages recent neural network verification methods to recover coverage guarantees under adversarial attacks. Our VRCP method is the first to support perturbations bounded by arbitrary norms including \u2113_1, \u2113_2, and \u2113_\u221e, as well as regression tasks. We evaluate and compare our approach on image classification tasks (CIFAR10, CIFAR100, and TinyImageNet) and regression tasks for deep reinforcement learning environments. 
In every case, VRCP achieves above nominal coverage and yields significantly more efficient and informative prediction regions than the SotA.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/96567"} +{"video_file": "5pnhGedG98_39028681.mp4", "openreview_id": "5pnhGedG98", "slideslive_id": 39028681, "venue": "nips2024", "title": "Scalable and Effective Arithmetic Tree Generation for Adder and Multiplier Designs", "status": "Spotlight", "keywords": "Reinforcement Learning;Computer Arithmetic;Electronic Design Automation", "tldr": "Employing RL methods for arithmetic tree generation yields superior designs for adders and multipliers.", "abstract": "Across a wide range of hardware scenarios, the computational efficiency and physical size of the arithmetic units significantly influence the speed and footprint of the overall hardware system. Nevertheless, the effectiveness of prior arithmetic design techniques proves inadequate, as they do not sufficiently optimize speed and area, resulting in increased latency and larger module size. To boost computing performance, this work focuses on the two most common and fundamental arithmetic modules, adders and multipliers. We cast the design tasks as single-player tree generation games, leveraging reinforcement learning techniques to optimize their arithmetic tree structures. This tree generation formulation allows us to efficiently navigate the vast search space and discover superior arithmetic designs that improve computational efficiency and hardware size within just a few hours. Our proposed method, ArithTreeRL, achieves significant improvements for both adders and multipliers. For adders, our approach discovers designs of 128-bit adders that achieve Pareto optimality in theoretical metrics. Compared with PrefixRL, it reduces delay and size by up to 26% and 30%, respectively. For multipliers, compared to RL-MUL, our method enhances speed and reduces size by as much as 49% and 45%. Additionally, ArithTreeRL's flexibility and scalability enable seamless integration into 7nm technology. We believe our work will offer valuable insights into hardware design, further accelerating speed and reducing size through the refined search space and our tree generation methodologies.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/96566"} +{"video_file": "5qPmQtfvhy_39024765.mp4", "openreview_id": "5qPmQtfvhy", "slideslive_id": 39024765, "venue": "nips2024", "title": "Algorithmic progress in language models", "status": "Poster", "keywords": "Natural Language Processing", "tldr": "Progress in language model performance surpasses what we'd expect from merely increasing computing resources, occurring at a pace equivalent to doubling computational power every 2 to 22 months.", "abstract": "We investigate the rate at which algorithms for pre-training language models have improved since the advent of deep learning. Using a dataset of over 200 language model evaluations on Wikitext and Penn Treebank spanning 2012-2023, we find that the compute required to reach a set performance threshold has halved approximately every 8 months, with a 90% confidence interval of around 2 to 22 months, substantially faster than hardware gains per Moore's Law. We estimate augmented scaling laws, which enable us to quantify algorithmic progress and determine the relative contributions of scaling models versus innovations in training algorithms. 
Despite the rapid pace of algorithmic progress and the development of new architectures such as the transformer, our analysis reveals that the increase in compute made an even larger contribution to overall performance improvements over this time period. Though limited by noisy benchmark data, our analysis quantifies the rapid progress in language modeling, shedding light on the relative contributions from compute and algorithms.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/96565"} +{"video_file": "5sm8YDnWvC_39028103.mp4", "openreview_id": "5sm8YDnWvC", "slideslive_id": 39028103, "venue": "nips2024", "title": "Perceiving Longer Sequences With Bi-Directional Cross-Attention Transformers", "status": "Poster", "keywords": "Transformer;Neural Architecture;Efficient Attention;Architectures;Representation Learning;General Perception;Long Sequences", "tldr": "Linearly scaling Transformer architecture using bi-directional cross-attention as efficient means of information refinement, exploiting the observation of naturally emerging approximately symmetric cross-attention patterns", "abstract": "We present a novel bi-directional Transformer architecture (BiXT) which scales linearly with input size in terms of computational cost and memory consumption, but does not suffer the drop in performance or limitation to only one input modality seen with other efficient Transformer-based approaches. BiXT is inspired by the Perceiver architectures but replaces iterative attention with an efficient bi-directional cross-attention module in which input tokens and latent variables attend to each other simultaneously, leveraging a naturally emerging attention-symmetry between the two. This approach unlocks a key bottleneck experienced by Perceiver-like architectures and enables the processing and interpretation of both semantics ('what') and location ('where') to develop alongside each other over multiple layers -- allowing its direct application to dense and instance-based tasks alike. By combining efficiency with the generality and performance of a full Transformer architecture, BiXT can process longer sequences like point clouds, text or images at higher feature resolutions and achieves competitive performance across a range of tasks like point cloud part segmentation, semantic image segmentation, image classification, hierarchical sequence modeling and document retrieval. Our experiments demonstrate that BiXT models outperform larger competitors by leveraging longer sequences more efficiently on vision tasks like classification and segmentation, and perform on par with full Transformer variants on sequence modeling and document retrieval -- but require 28% fewer FLOPs and are up to 8.4\u00d7 faster.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/96564"} +{"video_file": "5tIG2KZogL_39024422.mp4", "openreview_id": "5tIG2KZogL", "slideslive_id": 39024422, "venue": "nips2024", "title": "Supervised Kernel Thinning", "status": "Poster", "keywords": "Kernel methods;distribution compression;non-parametric regression", "tldr": "We apply recent advances in distribution compression (Kernel Thinning) to speed up kernel smoothing and kernel ridge regression.", "abstract": "The kernel thinning algorithm of Dwivedi & Mackey (2024) provides a better-than-i.i.d. compression of a generic set of points. 
By generating high-fidelity coresets of size significantly smaller than the input points, KT is known to speed up unsupervised tasks like Monte Carlo integration, uncertainty quantification, and non-parametric hypothesis testing, with minimal loss in statistical accuracy. In this work, we generalize the KT algorithm to speed up supervised learning problems involving kernel methods. Specifically, we combine two classical algorithms---Nadaraya-Watson (NW) regression or kernel smoothing, and kernel ridge regression (KRR)---with KT to provide a quadratic speed-up in both training and inference times. We show how distribution compression with KT in each setting reduces to constructing an appropriate kernel, and introduce the Kernel-Thinned NW and Kernel-Thinned KRR estimators. We prove that KT-based regression estimators enjoy significantly superior computational efficiency over the full-data estimators and improved statistical efficiency over i.i.d. subsampling of the training data. En route, we also provide a novel multiplicative error guarantee for compressing with KT. We validate our design choices with both simulations and real data experiments.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96561"} +{"video_file": "61YYSy078Z_39027805.mp4", "openreview_id": "61YYSy078Z", "slideslive_id": 39027805, "venue": "nips2024", "title": "ECLipsE: Efficient Compositional Lipschitz Constant Estimation for Deep Neural Networks", "status": "Spotlight", "keywords": "Neural Networks;Lipschitz Constant;Robustness", "tldr": "We provide an extremely fast and scalable method to obtain tight bounds for the Lipschitz constant of deep neural networks.", "abstract": "The Lipschitz constant plays a crucial role in certifying the robustness of neural networks to input perturbations. Since calculating the exact Lipschitz constant is NP-hard, efforts have been made to obtain tight upper bounds on the Lipschitz constant. Typically, this involves solving a large matrix verification problem, the computational cost of which grows significantly for both deeper and wider networks. In this paper, we provide a compositional approach to estimate Lipschitz constants for deep feed-forward neural networks. We first obtain an exact decomposition of the large matrix verification problem into smaller sub-problems. Then, leveraging the underlying cascade structure of the network, we develop two algorithms. The first algorithm explores the geometric features of the problem and enables us to provide Lipschitz estimates that are comparable to existing methods by solving small semidefinite programs (SDPs) that are only as large as the size of each layer. The second algorithm relaxes these sub-problems and provides a closed-form solution to each sub-problem for extremely fast estimation, altogether eliminating the need to solve SDPs. The two algorithms represent different levels of trade-offs between efficiency and accuracy. Finally, we demonstrate that our approach provides a steep reduction in computation time (as much as several thousand times faster, depending on the algorithm for deeper networks) while yielding Lipschitz bounds that are very close to or even better than those achieved by state-of-the-art approaches in a broad range of experiments. 
In summary, our approach considerably advances the scalability and efficiency of certifying neural network robustness, making it particularly attractive for online learning tasks.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96554"} +{"video_file": "64V40K2fDv_39026628.mp4", "openreview_id": "64V40K2fDv", "slideslive_id": 39026628, "venue": "nips2024", "title": "Exploring Molecular Pretraining Model at Scale", "status": "Poster", "keywords": "Molecular Pretraining;Scaling Law;Molecular Property Prediction", "tldr": "we systematically investigate the scaling laws on molecular pretraining and scale the model to 1.1 billion parameters through pretraining on 0.8 billion conformations, making it the largest molecular pretraining model to date.", "abstract": "In recent years, pretraining models have made significant advancements in the fields of natural language processing (NLP), computer vision (CV), and life sciences. The significant advancements in NLP and CV are predominantly driven by the expansion of model parameters and data size, a phenomenon now recognized as the scaling laws. However, research exploring scaling law in molecular pretraining model remains unexplored. In this work, we present an innovative molecular pretraining model that leverages a two-track transformer to effectively integrate features at the atomic level, graph level, and geometry structure level. Along with this, we systematically investigate the scaling law within molecular pretraining models, examining the power-law correlations between validation loss and model size, dataset size, and computational resources. Consequently, we successfully scale the model to 1.1 billion parameters through pretraining on 800 million conformations, making it the largest molecular pretraining model to date. Extensive experiments show the consistent improvement on the downstream tasks as the model size grows up. The model with 1.1 billion parameters also outperform over existing methods, achieving an average 27% improvement on the QM9 and 14% on COMPAS-1D dataset.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/96550"} +{"video_file": "65UoJ0z7Kp_39025434.mp4", "openreview_id": "65UoJ0z7Kp", "slideslive_id": 39025434, "venue": "nips2024", "title": "SeTAR: Out-of-Distribution Detection with Selective Low-Rank Approximation", "status": "Poster", "keywords": "Out-of-distribution detection;CLIP;low-rank approximation;trustworthy AI", "tldr": "SeTAR, a training-free method that improves out-of-distribution detection by post-hoc modification of weight matrices, with extended fine-tuning version SeTAR+FT.", "abstract": "Out-of-distribution (OOD) detection is crucial for the safe deployment of neural networks. Existing CLIP-based approaches perform OOD detection by devising novel scoring functions or sophisticated fine-tuning methods. In this work, we propose SeTAR, a novel, training-free OOD detection method that leverages selective low-rank approximation of weight matrices in vision-language and vision-only models. SeTAR enhances OOD detection via post-hoc modification of the model's weight matrices using a simple greedy search algorithm. Based on SeTAR, we further propose SeTAR+FT, a fine-tuning extension optimizing model performance for OOD detection tasks. 
Extensive evaluations on ImageNet1K and Pascal-VOC benchmarks show SeTAR's superior performance, reducing the relatively false positive rate by up to 18.95% and 36.80% compared to zero-shot and fine-tuning baselines. Ablation studies further validate our approach's effectiveness, robustness, and generalizability across different model backbones. Our work offers a scalable, efficient solution for OOD detection, setting a new state-of-the-art in this area.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/96549"} +{"video_file": "6A29LUZhfv_39026342.mp4", "openreview_id": "6A29LUZhfv", "slideslive_id": 39026342, "venue": "nips2024", "title": "MixEval: Deriving Wisdom of the Crowd from LLM Benchmark Mixtures", "status": "Poster", "keywords": "LLM Evaluation;Approximating Human Preference;Dynamic Benchmarking;Benchmark Mixture;Web Query Detection", "tldr": "MixEval, a ground-truth-based dynamic benchmark which updates data points periodically, evaluates LLMs with a highly capable model ranking (i.e., 0.96 correlation with Chatbot Arena) while running locally and quickly (1/15 time of running MMLU).", "abstract": "Evaluating large language models (LLMs) is challenging. Traditional ground-truth-based benchmarks fail to capture the comprehensiveness and nuance of real-world queries, while LLM-as-judge benchmarks suffer from grading biases and limited query quantity. Both of them may also become contaminated over time. User-facing evaluation, such as Chatbot Arena, provides reliable signals but is costly and slow. In this work, we propose MixEval, a new paradigm for establishing efficient, gold-standard LLM evaluation by strategically mixing off-the-shelf benchmarks. It bridges (1) comprehensive and well-distributed real-world user queries and (2) efficient and fairly-graded ground-truth-based benchmarks, by matching queries mined from the web with similar queries from existing benchmarks. Based on MixEval, we further build MixEval-Hard, which offers more room for model improvement. Our benchmarks\u2019 advantages lie in (1) a 0.96 model ranking correlation with Chatbot Arena arising from the highly impartial query distribution and grading mechanism, (2) fast, cheap, and reproducible execution (6% of the time and cost of MMLU), and (3) dynamic evaluation enabled by the rapid and stable data update pipeline. We provide extensive meta-evaluation and analysis for our and existing LLM benchmarks to deepen the community\u2019s understanding of LLM evaluation and guide future research directions.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/96545"} +{"video_file": "6AeIDnrTN2_39027525.mp4", "openreview_id": "6AeIDnrTN2", "slideslive_id": 39027525, "venue": "nips2024", "title": "LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS", "status": "Spotlight", "keywords": "Gaussian Splatting;Efficient Rendering", "tldr": "LightGaussian effectively compresses 3D Gaussians while enhancing rendering speeds", "abstract": "Recent advances in real-time neural rendering using point-based techniques have enabled broader adoption of 3D representations. However, foundational approaches like 3D Gaussian Splatting impose substantial storage overhead, as Structure-from-Motion (SfM) points can grow to millions, often requiring gigabyte-level disk space for a single unbounded scene. This growth presents scalability challenges and hinders splatting efficiency. 
To address this, we introduce LightGaussian, a method for transforming 3D Gaussians into a more compact format. Inspired by Network Pruning, LightGaussian identifies Gaussians with minimal global significance on scene reconstruction, and applies a pruning and recovery process to reduce redundancy while preserving visual quality. Knowledge distillation and pseudo-view augmentation then transfer spherical harmonic coefficients to a lower degree, yielding compact representations. Gaussian Vector Quantization, based on each Gaussian\u2019s global significance, further lowers bitwidth with minimal accuracy loss. LightGaussian achieves an average 15 times compression rate while boosting FPS from 144 to 237 within the 3D-GS framework, enabling efficient complex scene representation on the Mip-NeRF 360 and Tank & Temple datasets. The proposed Gaussian pruning approach is also adaptable to other 3D representations (e.g., Scaffold-GS), demonstrating strong generalization capabilities.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96544"} +{"video_file": "6ArNmbMpKF_39025115.mp4", "openreview_id": "6ArNmbMpKF", "slideslive_id": 39025115, "venue": "nips2024", "title": "Noisy Dual Mirror Descent: A Near Optimal Algorithm for Jointly-DP Convex Resource Allocation", "status": "Poster", "keywords": "joint differential privacy;resource allocation;near optimal algorithm;mirror descent", "tldr": "we propose an algorithm to (near) optimally solve convex resource allocation problems under joint-DP", "abstract": "We study convex resource allocation problems with $m$ hard constraints under $(\\varepsilon, \\delta)$-joint differential privacy (Joint-DP or JDP) in an offline setting. To approximately solve the problem, we propose a generic algorithm called Noisy Dual Mirror Descent. The algorithm applies noisy Mirror Descent to a dual problem from relaxing the hard constraints for private shadow prices, and then uses the shadow prices to coordinate allocations in the primal problem. Leveraging weak duality theory, we show that the optimality gap is upper bounded by $O(\\sqrt{m\\ln(1/\\delta)}/\\varepsilon)$, and constraint violation is no more than $O(\\sqrt{m\\ln(1/\\delta)}/\\varepsilon)$ per constraint. When strong duality holds, both preceding results can be improved to $\\tilde{O}(\\sqrt{\\ln(1/\\delta)}/\\varepsilon)$ by better utilizing the geometric structure of the dual space, which is neglected by existing works. To complement our results under strong duality, we derive a minimax lower bound $\\Omega(\\sqrt{m}/\\varepsilon)$ for any JDP algorithm outputting feasible allocations. The lower bound matches our upper bounds up to some logarithmic factors for $\\varepsilon \\ge \\max(1, 1/(n\\gamma))$, where $n\\gamma$ is the available resource level.
Numerical studies further confirm the effectiveness of our algorithm.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96542"} +{"video_file": "6HUJoD3wTj_39025480.mp4", "openreview_id": "6HUJoD3wTj", "slideslive_id": 39025480, "venue": "nips2024", "title": "Separations in the Representational Capabilities of Transformers and Recurrent Architectures", "status": "Poster", "keywords": "expressivity;Transformers;RNNs;deep learning theory;communication complexity", "tldr": "We show that small-sized Transformers can theoretically express tasks like index lookup, nearest neighbor, and string matching, whereas RNNs require larger sizes; Also show limitations of one-layer Transformers on recognizing Dyck languages", "abstract": "Transformer architectures have been widely adopted in foundation models. Due to their high inference costs, there is renewed interest in exploring the potential of efficient recurrent architectures (RNNs). In this paper, we analyze the differences in the representational capabilities of Transformers and RNNs across several tasks of practical relevance, including index lookup, nearest neighbor, recognizing bounded Dyck languages, and string equality. For the tasks considered, our results show separations based on the size of the model required for different architectures. For example, we show that a one-layer Transformer of logarithmic width can perform index lookup, whereas an RNN requires a hidden state of linear size. Conversely, while constant-size RNNs can recognize bounded Dyck languages, we show that one-layer Transformers require a linear size for this task. Furthermore, we show that two-layer Transformers of logarithmic size can perform decision tasks such as string equality or disjointness, whereas both one-layer Transformers and recurrent models require linear size for these tasks. We also show that a log-size two-layer Transformer can implement the nearest neighbor algorithm in its forward pass; on the other hand recurrent models require linear size. Our constructions are based on the existence of $N$ nearly orthogonal vectors in $O(\\log N)$-dimensional space and our lower bounds are based on reductions from communication complexity problems. We supplement our theoretical results with experiments that highlight the differences in the performance of these architectures on practical-size sequences.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/96535"} +{"video_file": "6KDZHgrDhG_39025050.mp4", "openreview_id": "6KDZHgrDhG", "slideslive_id": 39025050, "venue": "nips2024", "title": "Compositional Automata Embeddings for Goal-Conditioned Reinforcement Learning", "status": "Poster", "keywords": "reinforcement learning;goal-conditioned reinforcement learning;formal methods;graph embeddings;representation learning", "tldr": "We show how to produce rich neural embeddings of temporal tasks for use in goal-conditioned RL.", "abstract": "Goal-conditioned reinforcement learning is a powerful way to control an AI agent's behavior at runtime. That said, popular goal representations, e.g., target states or natural language, are either limited to Markovian tasks or rely on ambiguous task semantics. We propose representing temporal goals using compositions of deterministic finite automata (cDFAs) and use cDFAs to guide RL agents. cDFAs balance the need for formal temporal semantics with ease of interpretation: if one can understand a flow chart, one can understand a cDFA.
On the other hand, cDFAs form a countably infinite concept class with Boolean semantics, and subtle changes to the automaton can result in very different tasks, making them difficult to condition agent behavior on. To address this, we observe that all paths through a DFA correspond to a series of reach-avoid tasks and propose pre-training graph neural network embeddings on \"reach-avoid derived\" DFAs. Through empirical evaluation, we demonstrate that the proposed pre-training method enables zero-shot generalization to various cDFA task classes and accelerated policy specialization without the myopic suboptimality of hierarchical methods.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96533"} +{"video_file": "6Kg26g1quR_39024991.mp4", "openreview_id": "6Kg26g1quR", "slideslive_id": 39024991, "venue": "nips2024", "title": "ROIDICE: Offline Return on Investment Maximization for Efficient Decision Making", "status": "Poster", "keywords": "Reinforcement Learning;Convex Optimization", "tldr": "Policy optimization that maximizes Return on Investment (ROI) of a policy", "abstract": "In this paper, we propose a novel policy optimization framework that maximizes Return on Investment (ROI) of a policy using a fixed dataset within a Markov Decision Process (MDP) equipped with a cost function. ROI, defined as the ratio between the return and the accumulated cost of a policy, serves as a measure of efficiency of the policy. Despite the importance of maximizing ROI in various applications, it remains a challenging problem due to its nature as a ratio of two long-term values: return and accumulated cost. To address this, we formulate the ROI maximizing reinforcement learning problem as a linear fractional programming. We then incorporate the stationary distribution correction (DICE) framework to develop a practical offline ROI maximization algorithm. Our proposed algorithm, ROIDICE, yields an efficient policy that offers a superior trade-off between return and accumulated cost compared to policies trained using existing frameworks.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96531"} +{"video_file": "6LVxO1C819_39026364.mp4", "openreview_id": "6LVxO1C819", "slideslive_id": 39026364, "venue": "nips2024", "title": "HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning", "status": "Poster", "keywords": "Federated Learning;Knowledge Distillation;Poisoning Attacks", "tldr": "We show that Knowledge Distillation amplifies model poisoning attacks in FL, and design our algorithm HYDRA-FL to mitigate attack ampification.", "abstract": "Data heterogeneity among Federated Learning (FL) users poses a significant challenge, resulting in reduced global model performance. The community has designed various techniques to tackle this issue, among which Knowledge Distillation (KD)-based techniques are common. While these techniques effectively improve performance under high heterogeneity, they inadvertently cause higher accuracy degradation under model poisoning attacks (known as \\emph{attack amplification}). This paper presents a case study to reveal this critical vulnerability in KD-based FL systems. We show why KD causes this issue through empirical evidence and use it as motivation to design a hybrid distillation technique. 
We introduce a novel algorithm, Hybrid Knowledge Distillation for Robust and Accurate FL (HYDRA-FL), which reduces the impact of attacks in attack scenarios by offloading some of the KD loss to a shallow layer via an auxiliary classifier. We model HYDRA-FL as a generic framework and adapt it to two KD-based FL algorithms, FedNTD and MOON. Using these two as case studies, we demonstrate that our technique outperforms baselines in attack settings while maintaining comparable performance in benign settings.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/96530"} +{"video_file": "6OK8Qy9yVu_39028666.mp4", "openreview_id": "6OK8Qy9yVu", "slideslive_id": 39028666, "venue": "nips2024", "title": "Why Go Full? Elevating Federated Learning Through Partial Network Updates", "status": "Poster", "keywords": "Federated Learning;Partial Network Updates;Convergence Efficiency;Computational and Communicational Overhead Reduction", "tldr": "We observe the layer mismatch problem in federated learning and propose a partial network update method to address it, improving convergence speed, accuracy, and reducing computational and communication overhead.", "abstract": "Federated learning is a distributed machine learning paradigm designed to protect user data privacy, which has been successfully implemented across various scenarios. In traditional federated learning, the entire parameter set of local models is updated and averaged in each training round. Although this full network update method maximizes knowledge acquisition and sharing for each model layer, it prevents the layers of the global model from cooperating effectively to complete the tasks of each client, a challenge we refer to as layer mismatch. This mismatch problem recurs after every parameter averaging, consequently slowing down model convergence and degrading overall performance. To address the layer mismatch issue, we introduce the FedPart method, which restricts model updates to either a single layer or a few layers during each communication round. Furthermore, to maintain the efficiency of knowledge acquisition and sharing, we develop several strategies to select trainable layers in each round, including sequential updating and multi-round cycle training. Through both theoretical analysis and experiments, our findings demonstrate that the FedPart method significantly surpasses conventional full network update strategies in terms of convergence speed and accuracy, while also reducing communication and computational overheads.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/96529"} +{"video_file": "6SSzMq3WTn_39025688.mp4", "openreview_id": "6SSzMq3WTn", "slideslive_id": 39025688, "venue": "nips2024", "title": "Improved Regret of Linear Ensemble Sampling", "status": "Poster", "keywords": "Linear Bandit;Ensemble Sampling", "tldr": "We prove an $O(d^{3/2}\\sqrt{T})$ frequentist regret bound for linear ensemble sampling.", "abstract": "In this work, we close the fundamental gap of theory and practice by providing an improved regret bound for linear ensemble sampling. We prove that with an ensemble size logarithmic in $T$, linear ensemble sampling can achieve a frequentist regret bound of $\\tilde{O}(d^{3/2}\\sqrt{T})$, matching state-of-the-art results for randomized linear bandit algorithms, where $d$ and $T$ are the dimension of the parameter and the time horizon respectively. Our approach introduces a general regret analysis framework for linear bandit algorithms.
Additionally, we reveal a significant relationship between linear ensemble sampling and Linear Perturbed-History Exploration (LinPHE), showing that LinPHE is a special case of linear ensemble sampling when the ensemble size equals $T$. This insight allows us to derive a new regret bound of $\\tilde{O}(d^{3/2}\\sqrt{T})$ for LinPHE, independent of the number of arms. Our contributions advance the theoretical foundation of ensemble sampling, bringing its regret bounds in line with the best known bounds for other randomized exploration algorithms.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/96524"} +{"video_file": "6VVgAgVfxW_39027173.mp4", "openreview_id": "6VVgAgVfxW", "slideslive_id": 39027173, "venue": "nips2024", "title": "Team-Fictitious Play for Reaching Team-Nash Equilibrium in Multi-team Games", "status": "Poster", "keywords": "Multi-agent reinforcement learning;fictitious play;multi-team games", "tldr": "A new variant of fictitious play provably converging to Team-Nash equilibrium in multi-team zero-sum games.", "abstract": "Multi-team games, prevalent in robotics and resource management, involve team members striving for a joint best response against other teams. Team-Nash equilibrium (TNE) predicts the outcomes of such coordinated interactions. However, can teams of self-interested agents reach TNE? We introduce Team-Fictitious Play (Team-FP), a new variant of fictitious play where agents respond to the last actions of team members and the beliefs formed about other teams with some inertia in action updates. This design is essential in team coordination beyond the classical fictitious play dynamics. We focus on zero-sum potential team games (ZSPTGs) where teams can interact pairwise while the team members do not necessarily have identical payoffs. We show that Team-FP reaches near TNE in ZSPTGs with a quantifiable error bound. We extend Team-FP dynamics to multi-team Markov games for model-based and model-free cases. The convergence analysis tackles the challenge of non-stationarity induced by evolving opponent strategies based on the optimal coupling lemma and stochastic differential inclusion approximation methods. Our work strengthens the foundation for using TNE to predict the behavior of decentralized teams and offers a practical rule for team learning in multi-team environments. We provide extensive simulations of Team-FP dynamics and compare its performance with other widely studied dynamics such as smooth fictitious play and multiplicative weights update. We further explore how different parameters impact the speed of convergence.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/96521"} +{"video_file": "6ZBHIEtdP4_39026198.mp4", "openreview_id": "6ZBHIEtdP4", "slideslive_id": 39026198, "venue": "nips2024", "title": "PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models", "status": "Spotlight", "keywords": "PEFT;LoRA;LLM;Finetune;SVD", "tldr": "PiSSA reinitializes LoRA's parameters by applying SVD to the base model, which enables faster convergence and ultimately superior performance.
Additionally, it can reduce quantization error compared to QLoRA.", "abstract": "To parameter-efficiently fine-tune (PEFT) large language models (LLMs), the low-rank adaptation (LoRA) method approximates the model changes $\\Delta W \\in \\mathbb{R}^{m \\times n}$ through the product of two matrices $A \\in \\mathbb{R}^{m \\times r}$ and $B \\in \\mathbb{R}^{r \\times n}$, where $r \\ll \\min(m, n)$, $A$ is initialized with Gaussian noise, and $B$ with zeros. LoRA freezes the original model $W$ and updates the \"Noise & Zero\" adapter, which may lead to slow convergence. To overcome this limitation, we introduce Principal Singular values and Singular vectors Adaptation (PiSSA). PiSSA shares the same architecture as LoRA, but initializes the adaptor matrices $A$ and $B$ with the principal components of the original matrix $W$, and put the remaining components into a residual matrix $W^{res} \\in \\mathbb{R}^{m \\times n}$ which is frozen during fine-tuning. Compared to LoRA, PiSSA updates the principal components while freezing the \"residual\" parts, allowing faster convergence and enhanced performance. Comparative experiments of PiSSA and LoRA across 11 different models, ranging from 184M to 70B, encompassing 5 NLG and 8 NLU tasks, reveal that PiSSA consistently outperforms LoRA under identical experimental setups. On the GSM8K benchmark, Gemma-7B fine-tuned with PiSSA achieves an accuracy of 77.7%, surpassing LoRA's 74.53% by 3.25%. Due to the same architecture, PiSSA is also compatible with quantization to further reduce the memory requirement of fine-tuning. Compared to QLoRA, QPiSSA (PiSSA with 4-bit quantization) exhibits smaller quantization errors in the initial stages. Fine-tuning LLaMA-3-70B on GSM8K, QPiSSA attains an accuracy of 86.05%, exceeding the performances of QLoRA at 81.73%. Leveraging a fast SVD technique, PiSSA can be initialized in only a few seconds, presenting a negligible cost for transitioning from LoRA to PiSSA.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/96517"} +{"video_file": "6cWDg9t3z5_39028695.mp4", "openreview_id": "6cWDg9t3z5", "slideslive_id": 39028695, "venue": "nips2024", "title": "Universal Rates of Empirical Risk Minimization", "status": "Poster", "keywords": "Statistical learning theory;Universal learning;Empirical risk minimization;PAC learning", "tldr": "We consider the problem of universal learning by ERM in the realizable case and study the possible universal rates.", "abstract": "The well-known\nempirical risk minimization\n(ERM) principle is the basis of many widely used machine learning algorithms, and plays an essential role in the classical PAC theory. A common description of a learning algorithm's performance is its so-called \u201clearning curve\u201d, that is, the decay of the expected error as a function of the input sample size. As the PAC model fails to explain the behavior of learning curves, recent research has explored an alternative universal learning model and has ultimately revealed a distinction between optimal universal and uniform learning rates (Bousquet et al., 2021). However, a basic understanding of such differences with a particular focus on the ERM principle has yet to be developed.\nIn this paper, we consider the problem of universal learning by ERM in the realizable case and study the possible universal rates. 
Our main result is a fundamental tetrachotomy: there are only four possible universal learning rates by ERM, namely, the learning curves of any concept class learnable by ERM decay either at $e^{-n}$, $1/n$, $\\log(n)/n$, or arbitrarily slow rates. Moreover, we provide a complete characterization of which concept classes fall into each of these categories, via new complexity structures. We also develop new combinatorial dimensions which supply sharp asymptotically-valid constant factors for these rates, whenever possible.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96512"} +{"video_file": "6cdYMkxxNt_39025033.mp4", "openreview_id": "6cdYMkxxNt", "slideslive_id": 39025033, "venue": "nips2024", "title": "Understanding the Transferability of Representations via Task-Relatedness", "status": "Poster", "keywords": "Transfer Learning Analysis;Distribution Shift", "tldr": "We rigorously analyze transfer learning in terms of relatedness between tasks and show that task-relatedness accurately predicts transferability in various practical scenarios.", "abstract": "The growing popularity of transfer learning, due to the availability of models pre-trained on vast amounts of data, makes it imperative to understand when the knowledge of these pre-trained models can be transferred to obtain high-performing models on downstream target tasks. However, the exact conditions under which transfer learning succeeds in a cross-domain cross-task setting are still poorly understood. To bridge this gap, we propose a novel analysis of the transferability of the representations of pre-trained models to downstream tasks in terms of their relatedness to a given reference task. Our analysis leads to an upper bound on transferability in terms of task-relatedness, quantified using the difference between the class priors, label sets, and features of the two tasks. Our experiments using state-of-the-art pre-trained models show the effectiveness of task-relatedness in explaining transferability on various vision and language tasks. The efficient computability of task-relatedness even without labels of the target task and its high correlation with the model's accuracy after end-to-end fine-tuning on the target task makes it a useful metric for transferability estimation. Our empirical results of using task-relatedness on the problem of selecting the best pre-trained model from a model zoo for a target task highlight its utility for practical problems.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/96511"} +{"video_file": "6ejpSVIiIl_39028906.mp4", "openreview_id": "6ejpSVIiIl", "slideslive_id": 39028906, "venue": "nips2024", "title": "Classifier Clustering and Feature Alignment for Federated Learning under Distributed Concept Drift", "status": "Poster", "keywords": "federated learning;concept drift;data heterogeneity", "tldr": "We propose FedCCFA, a federated learning framework with classifier clustering and feature alignment.", "abstract": "Data heterogeneity is one of the key challenges in federated learning, and many efforts have been devoted to tackling this problem. However, distributed concept drift with data heterogeneity, where clients may additionally experience different concept drifts, is a largely unexplored area. In this work, we focus on real drift, where the conditional distribution $P(Y|X)$ changes.
We first study how distributed concept drift affects the model training and find that the local classifier plays a critical role in drift adaptation. Moreover, to address data heterogeneity, we study the feature alignment under distributed concept drift, and find two factors that are crucial for feature alignment: the conditional distribution $P(Y|X)$ and the degree of data heterogeneity. Motivated by the above findings, we propose FedCCFA, a federated learning framework with classifier clustering and feature alignment. To enhance collaboration under distributed concept drift, FedCCFA clusters local classifiers at class-level and generates clustered feature anchors according to the clustering results. Assisted by these anchors, FedCCFA adaptively aligns clients' feature spaces based on the entropy of the label distribution $P(Y)$, alleviating the inconsistency in feature space. Our results demonstrate that FedCCFA significantly outperforms existing methods under various concept drift settings. Code is available at https://github.com/Chen-Junbao/FedCCFA.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/96509"} +{"video_file": "6eoGVqMiIj_39027890.mp4", "openreview_id": "6eoGVqMiIj", "slideslive_id": 39027890, "venue": "nips2024", "title": "DreamClear: High-Capacity Real-World Image Restoration with Privacy-Safe Dataset Curation", "status": "Poster", "keywords": "Image restoration;Dataset Curation;Diffusion transformer", "tldr": "A high-capacity image restoration model trained on large-scale, privacy-safe high-quality image dataset", "abstract": "Image restoration (IR) in real-world scenarios presents significant challenges due to the lack of high-capacity models and comprehensive datasets. To tackle these issues, we present a dual strategy: GenIR, an innovative data curation pipeline, and DreamClear, a cutting-edge Diffusion Transformer (DiT)-based image restoration model. GenIR, our pioneering contribution, is a dual-prompt learning pipeline that overcomes the limitations of existing datasets, which typically comprise only a few thousand images and thus offer limited generalizability for larger models. GenIR streamlines the process into three stages: image-text pair construction, dual-prompt based fine-tuning, and data generation & filtering. This approach circumvents the laborious data crawling process, ensuring copyright compliance and providing a cost-effective, privacy-safe solution for IR dataset construction. The result is a large-scale dataset of one million high-quality images. Our second contribution, DreamClear, is a DiT-based image restoration model. It utilizes the generative priors of text-to-image (T2I) diffusion models and the robust perceptual capabilities of multi-modal large language models (MLLMs) to achieve photorealistic restoration. To boost the model's adaptability to diverse real-world degradations, we introduce the Mixture of Adaptive Modulator (MoAM). It employs token-wise degradation priors to dynamically integrate various restoration experts, thereby expanding the range of degradations the model can address. Our exhaustive experiments confirm DreamClear's superior performance, underlining the efficacy of our dual strategy for real-world image restoration.
Code and pre-trained models are available at: https://github.com/shallowdream204/DreamClear.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96507"} +{"video_file": "6gMnj9oc6d_39026945.mp4", "openreview_id": "6gMnj9oc6d", "slideslive_id": 39026945, "venue": "nips2024", "title": "Scalable DP-SGD: Shuffling vs. Poisson Subsampling", "status": "Poster", "keywords": "DPSGD;Differential Privacy;Shuffling;Poisson subsampling", "tldr": "Establishes new lower bounds on privacy analysis of DP-SGD with shuffling and provides a first comparative study of DP-SGD with Shuffling vs Poisson Subsampling in the light of the gaps in privacy analysis between the two approaches.", "abstract": "We provide new lower bounds on the privacy guarantee of multi-epoch Adaptive Batch Linear Queries (ABLQ) mechanism with shuffled batch sampling, demonstrating substantial gaps when compared to Poisson subsampling; prior analysis was limited to a single epoch. Since the privacy analysis of Differentially Private Stochastic Gradient Descent (DP-SGD) is obtained by analyzing the ABLQ mechanism, this brings into serious question the common practice of implementing Shuffling based DP-SGD, but reporting privacy parameters as if Poisson subsampling was used. To understand the impact of this gap on the utility of trained machine learning models, we introduce a novel practical approach to implement Poisson subsampling at scale using massively parallel computation, and efficiently train models with the same. We provide a comparison between the utility of models trained with Poisson subsampling based DP-SGD, and the optimistic estimates of utility when using shuffling, via our new lower bounds on the privacy guarantee of ABLQ with shuffling.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96505"} +{"video_file": "6gzPSMUAz2_39028482.mp4", "openreview_id": "6gzPSMUAz2", "slideslive_id": 39028482, "venue": "nips2024", "title": "MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models", "status": "Poster", "keywords": "Large Language Models;Pretraining;Data Curation", "tldr": "We introduce model-aware data selection with data influence models (MATES), where a data influence model continuously adapts to the evolving data preferences of the pretraining model and selects the data to optimize the efficacy of the pretraining.", "abstract": "Pretraining data selection has the potential to improve language model pretraining efficiency by utilizing higher-quality data from massive web data corpora. Current data selection methods, which rely on either hand-crafted rules or larger reference models, are conducted statically and do not capture the evolving data preferences during pretraining. In this paper, we introduce model-aware data selection with data influence models (MATES), where a data influence model continuously adapts to the evolving data preferences of the pretraining model and then selects the data most effective for the current pretraining progress. Specifically, we collect oracle data influence by locally probing the pretraining model and fine-tune a small data influence model to approximate it accurately. The data influence model then predicts data influence over the whole pretraining corpus and selects the most influential data for the next pretraining stage. Experiments of pretraining 410M and 1B models on the C4 dataset demonstrate that MATES significantly outperforms random data selection on extensive downstream tasks. 
It doubles the gains achieved by the state-of-the-art data selection approach that leverages larger reference models and reduces the total FLOPs required to reach certain performances by half. Further analyses validate the effectiveness of the locally probed oracle data influence and the approximation with data influence models. Our code is open-sourced at https://github.com/cxcscmu/MATES.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96504"} +{"video_file": "6jOScqwdHU_39028068.mp4", "openreview_id": "6jOScqwdHU", "slideslive_id": 39028068, "venue": "nips2024", "title": "Fisher Flow Matching for Generative Modeling over Discrete Data", "status": "Poster", "keywords": "Flow matching;Generative models;Riemannian manifolds;Discrete data", "tldr": "We propose a novel flow matching approach for discrete data by continuously reparameterising categorical distributions on the hypersphere.", "abstract": "Generative modeling over discrete data has recently seen numerous success stories, with applications spanning language modeling, biological sequence design, and graph-structured molecular data. The predominant generative modeling paradigm for discrete data is still autoregressive, with more recent alternatives based on diffusion or flow-matching falling short of their impressive performance in continuous data settings, such as image or video generation. In this work, we introduce Fisher-Flow, a novel flow-matching model for discrete data. Fisher-Flow takes a manifestly geometric perspective by considering categorical distributions over discrete data as points residing on a statistical manifold equipped with its natural Riemannian metric: the \\emph{Fisher-Rao metric}. As a result, we demonstrate discrete data itself can be continuously reparameterised to points on the positive orthant of the $d$-hypersphere $S_+^d$, which allows us to define flows that map any source distribution to target in a principled manner by transporting mass along (closed-form) geodesics of $S_+^d$. Furthermore, the learned flows in Fisher-Flow can be further bootstrapped by leveraging Riemannian optimal transport leading to improved training dynamics. We prove that the gradient flow induced by Fisher-Flow is optimal in reducing the forward KL divergence. We evaluate Fisher-Flow on an array of synthetic and diverse real-world benchmarks, including designing DNA Promoter and DNA Enhancer sequences. Empirically, we find that Fisher-Flow improves over prior diffusion and flow-matching models on these benchmarks.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96502"} +{"video_file": "6lwKOvL3KN_39024577.mp4", "openreview_id": "6lwKOvL3KN", "slideslive_id": 39024577, "venue": "nips2024", "title": "Adaptive Visual Scene Understanding: Incremental Scene Graph Generation", "status": "Poster", "keywords": "Scene graph generation;Continual learning;In-context symbolic replay;Long tail distribution;Compositional scene graphs", "tldr": "We introduce a continual scene graph generation dataset and a model to tackle this problem.", "abstract": "Scene graph generation (SGG) analyzes images to extract meaningful information about objects and their relationships. In the dynamic visual world, it is crucial for AI systems to continuously detect new objects and establish their relationships with existing ones. Recently, numerous studies have focused on continual learning within the domains of object detection and image recognition.
However, a limited amount of research focuses on a more challenging continual learning problem in SGG. This increased difficulty arises from the intricate interactions and dynamic relationships among objects, and their associated contexts. Thus, in continual learning, SGG models are often required to expand, modify, retain, and reason scene graphs within the process of adaptive visual scene understanding. To systematically explore Continual Scene Graph Generation (CSEGG), we present a comprehensive benchmark comprising three learning regimes: relationship incremental, scene incremental, and relationship generalization. Moreover, we introduce a ``Replays via Analysis by Synthesis\" method named RAS. This approach leverages the scene graphs, decomposes and re-composes them to represent different scenes, and replays the synthesized scenes based on these compositional scene graphs. The replayed synthesized scenes act as a means to practice and refine proficiency in SGG in known and unknown environments. Our experimental results not only highlight the challenges of directly combining existing continual learning methods with SGG backbones but also demonstrate the effectiveness of our proposed approach, enhancing CSEGG efficiency while simultaneously preserving privacy and memory usage. All data and source code will be made public.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96501"} +{"video_file": "6osgTNnAZQ_39025492.mp4", "openreview_id": "6osgTNnAZQ", "slideslive_id": 39025492, "venue": "nips2024", "title": "Block Transformer: Global-to-Local Language Modeling for Fast Inference", "status": "Poster", "keywords": "language model;model architecture;efficient inference", "tldr": "We propose the Block Transformer architecture that adopts global-to-local modeling to autoregressive transformers to mitigate the bottlenecks of global self-attention and significantly improve inference throughput compared to vanilla transformers.", "abstract": "We introduce the Block Transformer which adopts hierarchical global-to-local modeling to autoregressive transformers to mitigate the inference bottlenecks associated with self-attention. Self-attention requires the key-value (KV) cache of all previous sequences to be retrieved from memory at every decoding step to retrieve context information, leading to two primary bottlenecks during batch inference. First, there is a significant delay in obtaining the first token, as the information of the entire prompt must first be processed to prefill the KV cache. Second, computation of subsequent tokens is bottlenecked by the high memory I/O demand of fetching the entire KV cache, which grows linearly with sequence length, incurring quadratic memory reads overall. We design the Block Transformer to strategically mitigate these costs, by incorporating coarsity and locality into an integrated global-to-local architecture. At the lower layers, we aggregate tokens into fixed size blocks to apply attention across the entire sequence at coarse-grained detail, to capture the global context while minimizing KV cache overhead. At upper layers, we apply attention within each block to decode individual tokens, to model fine-grained details with a lightweight local KV cache. 
We pretrain vanilla and Block Transformers from scratch and demonstrate that Block Transformers reach 10--20x inference throughput compared to vanilla transformers with equivalent perplexity and zero-shot task performance.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96498"} +{"video_file": "6qr3932RWe_39024933.mp4", "openreview_id": "6qr3932RWe", "slideslive_id": 39024933, "venue": "nips2024", "title": "Memorize What Matters: Emergent Scene Decomposition from Multitraverse", "status": "Spotlight", "keywords": "Autonomous Driving;Self-Supervised Learning;3D Gaussian Splatting", "tldr": "Self-supervised 2D ephemerality segmentation and 3D environmental mapping in urban scenes", "abstract": "Humans naturally retain memories of permanent elements, while ephemeral moments often slip through the cracks of memory. This selective retention is crucial for robotic perception, localization, and mapping. To endow robots with this capability, we introduce 3D Gaussian Mapping (3DGM), a self-supervised, camera-only offline mapping framework grounded in 3D Gaussian Splatting. 3DGM converts multitraverse RGB videos from the same region into a Gaussian-based environmental map while concurrently performing 2D ephemeral object segmentation. Our key observation is that the environment remains consistent across traversals, while objects frequently change. This allows us to exploit self-supervision from repeated traversals to achieve environment-object decomposition. More specifically, 3DGM formulates multitraverse environmental mapping as a robust 3D representation learning problem, treating pixels of the environment and objects as inliers and outliers, respectively. Using robust feature distillation, feature residual mining, and robust optimization, 3DGM simultaneously performs 2D segmentation and 3D mapping without human intervention. We build the Mapverse benchmark, sourced from the Ithaca365 and nuPlan datasets, to evaluate our method in unsupervised 2D segmentation, 3D reconstruction, and neural rendering. Extensive results verify the effectiveness and potential of our method for self-driving and robotics.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96496"} +{"video_file": "6uv9ViIoMj_39027808.mp4", "openreview_id": "6uv9ViIoMj", "slideslive_id": 39027808, "venue": "nips2024", "title": "Towards Next-Level Post-Training Quantization of Hyper-Scale Transformers", "status": "Poster", "keywords": "Hyper-scale;Compression;Quantization;Transformers;LLM", "tldr": "We have proposed a next-level post-training quantization scheme for Hyper-scale Transformer models", "abstract": "With the increasing complexity of generative AI models, post-training quantization (PTQ) has emerged as a promising solution for deploying hyper-scale models on edge devices such as mobile and TVs. Existing PTQ schemes, however, consume considerable time and resources, which could be a bottleneck in real situations where frequent model updates and multiple hyperparameter tunings are required. As a cost-effective alternative, learning-free PTQ schemes have been proposed. However, the performance is somewhat limited because they cannot consider the inter-layer dependency within the attention module, which is a significant feature of Transformers. In this paper, we thus propose a novel PTQ algorithm that balances accuracy and efficiency. 
The key idea of the proposed algorithm called aespa is to perform quantization layer-wise for efficiency while targeting attention-wise reconstruction to consider the cross-layer dependency. Through extensive experiments on various language models and complexity analysis, we demonstrate that aespa is accurate and efficient in quantizing Transformer models. The code will be available at https://github.com/SamsungLabs/aespa.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96493"} +{"video_file": "6zOKbzjBO4_39026485.mp4", "openreview_id": "6zOKbzjBO4", "slideslive_id": 39026485, "venue": "nips2024", "title": "Fast Rates for Bandit PAC Multiclass Classification", "status": "Poster", "keywords": "bandit;classification;multiclass", "tldr": "We establish optimal sample complexity bounds for agnostic PAC in bandit multiclass classification with finite hypothesis classes and Natarajan classes", "abstract": "We study multiclass PAC learning with bandit feedback, where inputs are classified into one of $K$ possible labels and feedback is limited to whether or not the predicted labels are correct. Our main contribution is in designing a novel learning algorithm for the agnostic $(\\varepsilon, \\delta)$-PAC version of the problem, with sample complexity of $O((\\mathrm{poly}(K) + 1/\\varepsilon^2)\\log(|H|/\\delta))$ for any finite hypothesis class $H$. In terms of the leading dependence on $\\varepsilon$, this improves upon existing bounds for the problem, that are of the form $O(K/\\varepsilon^2)$. We also provide an extension of this result to general classes and establish similar sample complexity bounds in which $\\log|H|$ is replaced by the Natarajan dimension. This matches the optimal rate in the full-information version of the problem and resolves an open question studied by Daniely, Sabato, Ben-David, and Shalev-Shwartz (2011) who demonstrated that the multiplicative price of bandit feedback in realizable PAC learning is $\\Theta(K)$. We complement this by revealing a stark contrast with the agnostic case, where the price of bandit feedback is only $O(1)$ as $\\varepsilon \\to 0$. Our algorithm utilizes a stochastic optimization technique to minimize a log-barrier potential based on Frank-Wolfe updates for computing a low-variance exploration distribution over the hypotheses, and is made computationally efficient provided access to an ERM oracle over $H$.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/96490"} +{"video_file": "74B6qX62vW_39025207.mp4", "openreview_id": "74B6qX62vW", "slideslive_id": 39025207, "venue": "nips2024", "title": "Sample-Efficient Private Learning of Mixtures of Gaussians", "status": "Spotlight", "keywords": "Differential Privacy;Density Estimation;Mixtures of Gaussians;Sample Complexity", "tldr": "We provide improved sample complexity bounds for learning mixtures of Gaussians with differential privacy, which are optimal for sufficiently large dimension or for one-dimensional Gaussian Mixtures.", "abstract": "We study the problem of learning mixtures of Gaussians with approximate differential privacy. We prove that roughly $kd^2 + k^{1.5}d^{1.75} + k^2d$ samples suffice to learn a mixture of $k$ arbitrary $d$-dimensional Gaussians up to low total variation distance, with differential privacy.
Our work improves over the previous best result (which required roughly $k^2d^4$ samples) and is provably optimal when $d$ is much larger than $k^2$. Moreover, we give the first optimal bound for privately learning mixtures of $k$ univariate (i.e., 1-dimensional) Gaussians. Importantly, we show that the sample complexity for learning mixtures of univariate Gaussians is linear in the number of components $k$, whereas the previous best sample complexity was quadratic in $k$. Our algorithms utilize various techniques, including the inverse sensitivity mechanism, sample compression for distributions, and methods for bounding volumes of sumsets.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96486"} +{"video_file": "76CZrhbMoo_39027044.mp4", "openreview_id": "76CZrhbMoo", "slideslive_id": 39027044, "venue": "nips2024", "title": "CLIPAway: Harmonizing focused embeddings for removing objects via diffusion models", "status": "Poster", "keywords": "Object Removal;Inpainting;Diffusion models", "tldr": "A training-free, plug-and-play method that uses CLIP embeddings to focus on background areas for seamless object removal in diffusion models without requiring specialized datasets.", "abstract": "Advanced image editing techniques, particularly inpainting, are essential for seamlessly removing unwanted elements while preserving visual integrity. Traditional GAN-based methods have achieved notable success, but recent advancements in diffusion models have produced superior results due to their training on large-scale datasets, enabling the generation of remarkably realistic inpainted images. Despite their strengths, diffusion models often struggle with object removal tasks without explicit guidance, leading to unintended hallucinations of the removed object. To address this issue, we introduce CLIPAway, a novel approach leveraging CLIP embeddings to focus on background regions while excluding foreground elements. CLIPAway enhances inpainting accuracy and quality by identifying embeddings that prioritize the background, thus achieving seamless object removal. Unlike other methods that rely on specialized training datasets or costly manual annotations, CLIPAway provides a flexible, plug-and-play solution compatible with various diffusion-based inpainting techniques.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96484"} +{"video_file": "77kCJzvpOa_39027671.mp4", "openreview_id": "77kCJzvpOa", "slideslive_id": 39027671, "venue": "nips2024", "title": "Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models", "status": "Poster", "keywords": "large-scale language models;lossless gradient compression", "tldr": "We show that LLMs can compress gradients in a zero-shot setting, demonstrating a promising direction toward general parameter priors.", "abstract": "Despite the widespread use of statistical prior models in various fields, such models for neural network gradients have long been overlooked. The inherent challenge stems from their high-dimensional structures and complex interdependencies, which complicate effective modeling. In this work, we demonstrate the potential of large language models (LLMs) to act as gradient priors in a zero-shot setting. We examine the property by considering lossless gradient compression -- a critical application in distributed learning -- that depends heavily on precise probability modeling.
To achieve this, we introduce LM-GC, a novel method that integrates LLMs with arithmetic coding. Our technique converts plain gradients into text-like formats, enhancing token efficiency by up to 38 times compared to their plain representations. We ensure that this data conversion maintains a close alignment with the structure of plain gradients and the symbols commonly recognized by LLMs. Our experiments indicate that LM-GC surpasses existing state-of-the-art lossless compression methods, improving compression rates by 10% up to 17.2% across various datasets and architectures. Additionally, our approach shows promising compatibility with lossy compression techniques such as quantization and sparsification. These findings highlight the significant potential of LLMs as a model for effectively handling gradients. Code is available at https://github.com/hui-po-wang/LM-GC.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96482"} +{"video_file": "792txRlKit_39025706.mp4", "openreview_id": "792txRlKit", "slideslive_id": 39025706, "venue": "nips2024", "title": "DataStealing: Steal Data from Diffusion Models in Federated Learning with Multiple Trojans", "status": "Poster", "keywords": "Federated Learning;Diffusion Models;DataStealing;Multiple Trojans;Adaptive Scale", "tldr": "This paper presents a new privacy vulnerability in training diffusion models with Federated Learning and proposes an adaptive Trojan attack to bypass advanced defenses and achieve DataStealing.", "abstract": "Federated Learning (FL) is commonly used to collaboratively train models with privacy preservation. In this paper, we found out that the popular diffusion models have introduced a new vulnerability to FL, which brings serious privacy threats. Despite stringent data management measures, attackers can steal massive private data from local clients through multiple Trojans, which control generative behaviors with multiple triggers. We refer to the new task as\nDataStealing\nand demonstrate that the attacker can achieve the purpose based on our proposed Combinatorial Triggers (ComboTs) in a vanilla FL system. However, advanced distance-based FL defenses are still effective in filtering the malicious update according to the distances between each local update. Hence, we propose an Adaptive Scale Critical Parameters (AdaSCP) attack to circumvent the defenses and seamlessly incorporate malicious updates into the global model. Specifically, AdaSCP evaluates the importance of parameters with the gradients in dominant timesteps of the diffusion model. Subsequently, it adaptively seeks the optimal scale factor and magnifies critical parameter updates before uploading to the server. As a result, the malicious update becomes similar to the benign update, making it difficult for distance-based defenses to identify. Extensive experiments reveal the risk of leaking thousands of images in training diffusion models with FL. Moreover, these experiments demonstrate the effectiveness of AdaSCP in defeating advanced distance-based defenses. We hope this work will attract more attention from the FL community to the critical privacy security issues of Diffusion Models. 
Code: https://github.com/yuangan/DataStealing.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96480"} +{"video_file": "79eWvkLjib_39028383.mp4", "openreview_id": "79eWvkLjib", "slideslive_id": 39028383, "venue": "nips2024", "title": "Zero-Shot Reinforcement Learning from Low Quality Data", "status": "Poster", "keywords": "reinforcement learning;offline reinforcement learning;unsupervised reinforcement learning;zero-shot reinforcement learning", "tldr": "We propose methods for improving the performance of zero-shot RL methods when trained on low quality offline datasets.", "abstract": "Zero-shot reinforcement learning (RL) promises to provide agents that can perform any task in an environment after an offline, reward-free pre-training phase. Methods leveraging successor measures and successor features have shown strong performance in this setting, but require access to large heterogenous datasets for pre-training which cannot be expected for most real problems. Here, we explore how the performance of zero-shot RL methods degrades when trained on small homogeneous datasets, and propose fixes inspired by conservatism, a well-established feature of performant single-task offline RL algorithms. We evaluate our proposals across various datasets, domains and tasks, and show that conservative zero-shot RL algorithms outperform their non-conservative counterparts on low quality datasets, and perform no worse on high quality datasets. Somewhat surprisingly, our proposals also outperform baselines that get to see the task during training. Our code is available via the project page https://enjeeneer.io/projects/zero-shot-rl/.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96479"} +{"video_file": "79q206xswc_39025595.mp4", "openreview_id": "79q206xswc", "slideslive_id": 39025595, "venue": "nips2024", "title": "Is Your LiDAR Placement Optimized for 3D Scene Understanding?", "status": "Spotlight", "keywords": "Autonomous Driving;LiDAR Semantic Segmentation;Sensor Placement", "tldr": "Place3D is a full-cycle pipeline that encompasses LiDAR placement optimization, data generation, and downstream evaluations.", "abstract": "The reliability of driving perception systems under unprecedented conditions is crucial for practical usage. Latest advancements have prompted increasing interest in multi-LiDAR perception. However, prevailing driving datasets predominantly utilize single-LiDAR systems and collect data devoid of adverse conditions, failing to capture the complexities of real-world environments accurately. Addressing these gaps, we proposed Place3D, a full-cycle pipeline that encompasses LiDAR placement optimization, data generation, and downstream evaluations. Our framework makes three appealing contributions. 1) To identify the most effective configurations for multi-LiDAR systems, we introduce the Surrogate Metric of the Semantic Occupancy Grids (M-SOG) to evaluate LiDAR placement quality. 2) Leveraging the M-SOG metric, we propose a novel optimization strategy to refine multi-LiDAR placements. 3) Centered around the theme of multi-condition multi-LiDAR perception, we collect a 280,000-frame dataset from both clean and adverse conditions. Extensive experiments demonstrate that LiDAR placements optimized using our approach outperform various baselines. 
We showcase exceptional results in both LiDAR semantic segmentation and 3D object detection tasks, under diverse weather and sensor failure conditions.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96478"} +{"video_file": "7ANmKBfP88_39025971.mp4", "openreview_id": "7ANmKBfP88", "slideslive_id": 39025971, "venue": "nips2024", "title": "Right this way: Can VLMs Guide Us to See More to Answer Questions?", "status": "Poster", "keywords": "visual accessibility;self-knowledge;vision language models", "tldr": "Our study explores VLMs capabilities in assessing answerability and guiding information acquisition, introducing a novel task with a benchmark dataset and a data-efficient training framework to improve the model's performance.", "abstract": "In question-answering scenarios, humans can assess whether the available information is sufficient and seek additional information if necessary, rather than providing a forced answer. In contrast, Vision Language Models (VLMs) typically generate direct, one-shot responses without evaluating the sufficiency of the information. To investigate this gap, we identify a critical and challenging task in the Visual Question Answering (VQA) scenario: can VLMs indicate how to adjust an image when the visual information is insufficient to answer a question? This capability is especially valuable for assisting visually impaired individuals who often need guidance to capture images correctly. To evaluate this capability of current VLMs, we introduce a human-labeled dataset as a benchmark for this task. Additionally, we present an automated framework that generates synthetic training data by simulating ``where to know'' scenarios. Our empirical results show significant performance improvements in mainstream VLMs when fine-tuned with this synthetic data. This study demonstrates the potential to narrow the gap between information assessment and acquisition in VLMs, bringing their performance closer to humans.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96477"} +{"video_file": "7Dep87TMJs_39027877.mp4", "openreview_id": "7Dep87TMJs", "slideslive_id": 39027877, "venue": "nips2024", "title": "Learning with Fitzpatrick Losses", "status": "Poster", "keywords": "loss functions;convex analysis;monotone operators", "tldr": "A new family of loss functions based on monotone operator theory that lower bound Fenchel-Young losses, such as the the logistic loss.", "abstract": "Fenchel-Young losses are a family of loss functions, encompassing the squared, logistic and sparsemax losses, among others. They are convex w.r.t. the model output and the target, separately. Each Fenchel-Young loss is implicitly associated with a link function, that maps model outputs to predictions. For instance, the logistic loss is associated with the soft argmax link function. Can we build new loss functions associated with the same link function as Fenchel-Young losses? In this paper, we introduce Fitzpatrick losses, a new family of separately convex loss functions based on the Fitzpatrick function. A well-known theoretical tool in maximal monotone operator theory, the Fitzpatrick function naturally leads to a refined Fenchel-Young inequality, making Fitzpatrick losses tighter than Fenchel- Young losses, while maintaining the same link function for prediction. As an example, we introduce the Fitzpatrick logistic loss and the Fitzpatrick sparsemax loss, counterparts of the logistic and the sparsemax losses. 
This yields two new tighter losses associated with the soft argmax and the sparse argmax, two of the most ubiquitous output layers used in machine learning. We study in details the properties of Fitzpatrick losses and, in particular, we show that they can be seen as Fenchel-Young losses using a modified, target-dependent generating function. We demonstrate the effectiveness of Fitzpatrick losses for label proportion estimation.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96472"} +{"video_file": "7ESHFpqjNO_39025786.mp4", "openreview_id": "7ESHFpqjNO", "slideslive_id": 39025786, "venue": "nips2024", "title": "Learning Place Cell Representations and Context-Dependent Remapping", "status": "Poster", "keywords": "Place cells;remapping;AI;neuroAI", "tldr": "We train neural networks to minimize a similarity-based objective function to learn joint encodings of space and context, and observe place cell-like representations and remapping in network responses", "abstract": "Hippocampal place cells are known for their spatially selective firing patterns, which has led to the suggestion that they encode an animal's location. However, place cells also respond to contextual cues, such as smell. Furthermore, they have the ability to remap, wherein the firing fields and rates of cells change in response to changes in the environment. How place cell responses emerge, and how these representations remap is not fully understood. In this work, we propose a similarity-based objective function that translates proximity in space, to proximity in representation. We show that a neural network trained to minimize the proposed objective learns place-like representations. We also show that the proposed objective is easily extended to include other sources of information, such as context information, in the same way. When trained to encode multiple contexts, networks learn distinct representations, exhibiting remapping behaviors between contexts. The proposed objective is invariant to orthogonal transformations. Such transformations of the original trained representation (e.g. rotations), therefore yield new representations distinct from the original, without explicit relearning, akin to remapping. Our findings shed new light on the formation and encoding properties of place cells, and also demonstrate an interesting case of representational reuse.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/96470"} +{"video_file": "7Fzx3Akdt5_39024612.mp4", "openreview_id": "7Fzx3Akdt5", "slideslive_id": 39024612, "venue": "nips2024", "title": "Harnessing Multiple Correlated Networks for Exact Community Recovery", "status": "Poster", "keywords": "Stochastic block model;community recovery;graph matching;correlated random graphs;information-theoretic limits", "tldr": "We derive the precise information-theoretic threshold for exact community recovery from any constant number of correlated stochastic block models, showcasing the power of integrative data analysis and quantifying the value of each additional graph.", "abstract": "We study the problem of learning latent community structure from multiple correlated networks, focusing on edge-correlated stochastic block models with two balanced communities. 
Recent work of Gaudio, R\u00e1cz, and Sridhar (COLT 2022) determined the precise information-theoretic threshold for exact community recovery using two correlated graphs; in particular, this showcased the subtle interplay between community recovery and graph matching. Here we study the natural setting of more than two graphs. The main challenge lies in understanding how to aggregate information across several graphs when none of the pairwise latent vertex correspondences can be exactly recovered. Our main result derives the precise information-theoretic threshold for exact community recovery using any constant number of correlated graphs, answering a question of Gaudio, R\u00e1cz, and Sridhar (COLT 2022). In particular, for every $K \\ge 3$ we uncover and characterize a region of the parameter space where exact community recovery is possible using $K$ correlated graphs, even though (1) this is information-theoretically impossible using any $K-1$ of them and (2) none of the latent matchings can be exactly recovered.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96468"} +{"video_file": "7Mo1NOosNT_39027088.mp4", "openreview_id": "7Mo1NOosNT", "slideslive_id": 39027088, "venue": "nips2024", "title": "COLD: Causal reasOning in cLosed Daily activities", "status": "Poster", "keywords": "Causal Common Sense;Causal NLP;LLMs;Commonsense Reasoning", "tldr": "We present a new perspective on causal reasoning in NLP by using a closed system (defined by real-world commonsense activities) and propose a framework with underlying causal graphs. The causal queries help validate causal reasoning in LLMs.", "abstract": "Large Language Models (LLMs) have shown state-of-the-art performance in a variety of tasks, including arithmetic and reasoning; however, to gauge the intellectual capabilities of LLMs, causal reasoning has become a reliable proxy for validating a general understanding of the mechanics and intricacies of the world similar to humans. Previous works in natural language processing (NLP) have either focused on open-ended causal reasoning via causal commonsense reasoning (CCR) or framed a symbolic representation-based question answering for theoretically backed-up analysis via a causal inference engine. The former adds an advantage of real-world grounding but lacks theoretically backed-up analysis/validation, whereas the latter is far from real-world grounding. In this work, we bridge this gap by proposing the COLD (Causal reasOning in cLosed Daily activities) framework, which is built upon human understanding of daily real-world activities to reason about the causal nature of events. We show that the proposed framework facilitates the creation of enormous causal queries (\u223c 9 million) and comes close to the mini-turing test, simulating causal reasoning to evaluate the understanding of a daily real-world task. We evaluate multiple LLMs on the created causal queries and find that causal reasoning is challenging even for activities trivial to humans. 
We further explore (the causal reasoning abilities of LLMs) using the backdoor criterion to determine the causal strength between events.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96459"} +{"video_file": "7O6KtaAr8n_39028762.mp4", "openreview_id": "7O6KtaAr8n", "slideslive_id": 39028762, "venue": "nips2024", "title": "Learning Social Welfare Functions", "status": "Spotlight", "keywords": "Social Choice Learning;Power Mean Functions;PAC Learning;Preference Learning;Statistical Learning Theory;Social Welfare Functions", "tldr": "We show how to learn a policy maker's implicit social welfare function from their past decisions by fitting power mean functions, with provable sample complexity guarantees even under noisy observations.", "abstract": "Is it possible to understand or imitate a policy maker's rationale by looking at past decisions they made? We formalize this question as the problem of learning social welfare functions belonging to the well-studied family of power mean functions. We focus on two learning tasks; in the first, the input is vectors of utilities of an action (decision or policy) for individuals in a group and their associated social welfare as judged by a policy maker, whereas in the second, the input is pairwise comparisons between the welfares associated with a given pair of utility vectors. We show that power mean functions are learnable with polynomial sample complexity in both cases, even if the social welfare information is noisy. Finally, we design practical algorithms for these tasks and evaluate their performance.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/96456"} +{"video_file": "7PORYhql4V_39024870.mp4", "openreview_id": "7PORYhql4V", "slideslive_id": 39024870, "venue": "nips2024", "title": "Great Minds Think Alike: The Universal Convergence Trend of Input Salience", "status": "Poster", "keywords": "explainable artificial intelligence;saliency maps;model distributions", "tldr": "Leveraging input saliency maps, we discover that with increasing capacities, the distributions of models converge to the almost-shared population mean, and thus the limiting of model can be estimated through the population mean of small models.", "abstract": "Uncertainty is introduced in optimized DNNs through stochastic algorithms, forming specific distributions. Training models can be seen as random sampling from this distribution of optimized models. In this work, we study the distribution of optimized DNNs as a family of functions by leveraging a pointwise approach. We focus on the input saliency maps, as the input gradient field is decisive to the models' mathematical essence. Our investigation of saliency maps reveals a counter-intuitive trend: two stochastically optimized models tend to resemble each other more as either of their capacities increases. Therefore, we hypothesize several properties of these distributions, suggesting that (1) Within the same model architecture (e.g., CNNs, ResNets), different family variants (e.g., varying capacities) tend to align in terms of their population mean directions of the input salience. And (2) the distributions of optimized models follow a convergence trend to their shared population mean as the capacity increases. Furthermore, we also propose semi-parametric distributions based on the Saw distribution to model the convergence trend, satisfying all the counter-intuitive observations. 
Our experiments shed light on the significant implications of our hypotheses in various application domains, including black-box attacks, deep ensembles, etc. These findings not only enhance our understanding of DNN behaviors but also offer valuable insights for their practical application in diverse areas of deep learning.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/96455"} +{"video_file": "7QG9R8urVy_39024754.mp4", "openreview_id": "7QG9R8urVy", "slideslive_id": 39024754, "venue": "nips2024", "title": "Doubly Mild Generalization for Offline Reinforcement Learning", "status": "Poster", "keywords": "offline reinforcement learning", "tldr": "This work proposes Doubly Mild Generalization, comprising mild action generalization and mild generalization propagation, to appropriately exploit generalization in offline RL.", "abstract": "Offline Reinforcement Learning (RL) suffers from the extrapolation error and value overestimation. From a generalization perspective, this issue can be attributed to the over-generalization of value functions or policies towards out-of-distribution (OOD) actions. Significant efforts have been devoted to mitigating such generalization, and recent in-sample learning approaches have further succeeded in entirely eschewing it. Nevertheless, we show that mild generalization beyond the dataset can be trusted and leveraged to improve performance under certain conditions. To appropriately exploit generalization in offline RL, we propose Doubly Mild Generalization (DMG), comprising (i) mild action generalization and (ii) mild generalization propagation. The former refers to selecting actions in a close neighborhood of the dataset to maximize the Q values. Even so, the potential erroneous generalization can still be propagated, accumulated, and exacerbated by bootstrapping. In light of this, the latter concept is introduced to mitigate the generalization propagation without impeding the propagation of RL learning signals. Theoretically, DMG guarantees better performance than the in-sample optimal policy in the oracle generalization scenario. Even under worst-case generalization, DMG can still control value overestimation at a certain level and lower bound the performance. Empirically, DMG achieves state-of-the-art performance across Gym-MuJoCo locomotion tasks and challenging AntMaze tasks. Moreover, benefiting from its flexibility in both generalization aspects, DMG enjoys a seamless transition from offline to online learning and attains strong online fine-tuning performance.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96454"} +{"video_file": "7Tir0u0ukg_39026855.mp4", "openreview_id": "7Tir0u0ukg", "slideslive_id": 39026855, "venue": "nips2024", "title": "Randomized Exploration in Cooperative Multi-Agent Reinforcement Learning", "status": "Poster", "keywords": "Multi-Agent Reinforcement Learning;Randomized Exploration;Deep Reinforcement Learning", "tldr": "This paper presents the first study on provably efficient randomized exploration in cooperative multi-agent reinforcement learning.", "abstract": "We present the first study on provably efficient randomized exploration in cooperative multi-agent reinforcement learning (MARL). 
We propose a unified algorithm framework for randomized exploration in parallel Markov Decision Processes (MDPs), and two Thompson Sampling (TS)-type algorithms, CoopTS-PHE and CoopTS-LMC, incorporating the perturbed-history exploration (PHE) strategy and the Langevin Monte Carlo exploration (LMC) strategy respectively, which are flexible in design and easy to implement in practice. For a special class of parallel MDPs where the transition is (approximately) linear, we theoretically prove that both CoopTS-PHE and CoopTS-LMC achieve a $\\tilde{O}(d^{3/2} H^2 \\sqrt{MK})$ regret bound with communication complexity $\\tilde{O}(d H M^2)$, where $d$ is the feature dimension, $H$ is the horizon length, $M$ is the number of agents, and $K$ is the number of episodes. This is the first theoretical result for randomized exploration in cooperative MARL. We evaluate our proposed method on multiple parallel RL environments, including a deep exploration problem (i.e., $N$-chain), a video game, and a real-world problem in energy systems. Our experimental results support that our framework can achieve better performance, even under conditions of misspecified transition models. Additionally, we establish a connection between our unified framework and the practical application of federated learning.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96449"} +{"video_file": "7U5MwUS3Rw_39025871.mp4", "openreview_id": "7U5MwUS3Rw", "slideslive_id": 39025871, "venue": "nips2024", "title": "Towards Harmless Rawlsian Fairness Regardless of Demographic Prior", "status": "Poster", "keywords": "Harmless fairness;demographics-free;reducing variance of losses", "tldr": "Harmless Rawlsian Fairness Regardless of Demographics via decreasing Variance of Losses", "abstract": "Due to privacy and security concerns, recent advancements in group fairness advocate for model training regardless of demographic information. However, most methods still require prior knowledge of demographics. In this study, we explore the potential for achieving fairness without compromising its utility when no prior demographics are provided to the training set, namely harmless Rawlsian fairness. We ascertain that such a fairness requirement with no prior demographic information essentially promotes training losses to exhibit a Dirac delta distribution. To this end, we propose a simple but effective method named VFair to minimize the variance of training losses inside the optimal set of empirical losses. This problem is then optimized by a tailored dynamic update approach that operates in both loss and gradient dimensions, directing the model towards relatively fairer solutions while preserving its intact utility. Our experimental findings indicate that regression tasks, which are relatively unexplored in the literature, can achieve significant fairness improvement through VFair regardless of any prior, whereas classification tasks usually do not because of their quantized utility measurements. 
The implementation of our method is publicly available at https://github.com/wxqpxw/VFair.", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/96448"} +{"video_file": "7UyBKTFrtd_39028444.mp4", "openreview_id": "7UyBKTFrtd", "slideslive_id": 39028444, "venue": "nips2024", "title": "Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE)", "status": "Poster", "keywords": "Interpretable Machine Learning;Dictionary Learning;Representation Learning;Multimodal Models;Interpretability;CLIP", "tldr": "We use dictionary learning to interpret CLIP embeddings by representing them as sparse combinations of semantic concepts, resulting in interpretability while maintaining high performance and unlocking novel use cases.", "abstract": "CLIP embeddings have demonstrated remarkable performance across a wide range of multimodal applications. However, these high-dimensional, dense vector representations are not easily interpretable, limiting our understanding of the rich structure of CLIP and its use in downstream applications that require transparency. In this work, we show that the semantic structure of CLIP's latent space can be leveraged to provide interpretability, allowing for the decomposition of representations into semantic concepts. We formulate this problem as one of sparse recovery and propose a novel method, Sparse Linear Concept Embeddings (SpLiCE), for transforming CLIP representations into sparse linear combinations of human-interpretable concepts. Distinct from previous work, SpLiCE is task-agnostic and can be used, without training, to explain and even replace traditional dense CLIP representations, maintaining high downstream performance while significantly improving their interpretability. We also demonstrate significant use cases of SpLiCE representations including detecting spurious correlations and model editing. Code is provided at https://github.com/AI4LIFE-GROUP/SpLiCE.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/96446"} +{"video_file": "7W0f7lifDk_39028676.mp4", "openreview_id": "7W0f7lifDk", "slideslive_id": 39028676, "venue": "nips2024", "title": "Human-3Diffusion: Realistic Avatar Creation via Explicit 3D Consistent Diffusion Models", "status": "Poster", "keywords": "3D Reconstruction;3D Human Reconstruction;Diffusion Models;3D Generative Models;2D Foundation Models", "tldr": "A novel 3D consistent diffusion framework that utilizes pretrained 2D diffusion prior for 3D reconstruction and uses reconstructed 3D to guide 2D sampling process", "abstract": "Creating realistic avatars from a single RGB image is an attractive yet challenging problem. To deal with challenging loose clothing or occlusion by interaction objects, we leverage powerful shape prior from 2D diffusion models pretrained on large datasets. Although 2D diffusion models demonstrate strong generalization capability, they cannot provide multi-view shape priors with guaranteed 3D consistency. We propose Human-3Diffusion: Realistic Avatar Creation via Explicit 3D Consistent Diffusion. Our key insight is that 2D multi-view diffusion and 3D reconstruction models provide complementary information for each other. By coupling them in a tight manner, we can fully leverage the potential of both models. 
We introduce a novel image-conditioned generative 3D Gaussian Splats reconstruction model that leverages the prior from 2D multi-view diffusion models, and provides an explicit 3D representation, which further guides the 2D reverse sampling process to have better 3D consistency. Experiments show that our proposed framework outperforms state-of-the-art methods and enables the creation of realistic avatars from a single RGB image, achieving high fidelity in both geometry and appearance. Extensive ablations also validate the efficacy of our design, (1) multi-view 2D priors conditioning in generative 3D reconstruction and (2) consistency refinement of sampling trajectory via the explicit 3D representation. Our code and models are released at https://yuxuan-xue.com/human-3diffusion/.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96444"} +{"video_file": "7WvwzuYkUq_39025447.mp4", "openreview_id": "7WvwzuYkUq", "slideslive_id": 39025447, "venue": "nips2024", "title": "Progressive Entropic Optimal Transport Solvers", "status": "Poster", "keywords": "Optimal Transport;Entropy Regularization", "tldr": "We propose a progressive algorithm for estimation of OT maps and plans. We prove its statistical consistency and demonstrate its performance on synthetic and single-cell data.", "abstract": "Optimal transport (OT) has profoundly impacted machine learning by providing theoretical and computational tools to realign datasets. In this context, given two large point clouds of sizes $n$ and $m$ in $\\mathbb{R}^d$, entropic OT (EOT) solvers have emerged as the most reliable tool to either solve the Kantorovich problem and output an $n \\times m$ coupling matrix, or to solve the Monge problem and learn a vector-valued push-forward map. While the robustness of EOT couplings/maps makes them a go-to choice in practical applications, EOT solvers remain difficult to tune because of a small but influential set of hyperparameters, notably the omnipresent entropic regularization strength $\\varepsilon$. Setting $\\varepsilon$ can be difficult, as it simultaneously impacts various performance metrics, such as compute speed, statistical performance, generalization, and bias. In this work, we propose a new class of EOT solvers (ProgOT), that can estimate both plans and transport maps. We take advantage of several opportunities to optimize the computation of EOT solutions by dividing mass displacement using a time discretization, borrowing inspiration from dynamic OT formulations, and conquering each of these steps using EOT with properly scheduled parameters. We provide experimental evidence demonstrating that ProgOT is a faster and more robust alternative to standard solvers when computing couplings at large scales, even outperforming neural network-based approaches. We also prove statistical consistency of our approach for estimating OT maps.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/96442"} +{"video_file": "7Ye12RLZ4P_39025131.mp4", "openreview_id": "7Ye12RLZ4P", "slideslive_id": 39025131, "venue": "nips2024", "title": "Asynchronous Perception Machine for Efficient Test Time Training", "status": "Poster", "keywords": "MORTAL COMPUTATION;GLOM;test time training;neural fields;implicit representation;distillation", "tldr": "APM introduces asynchronous patch-processing for test-time-training instead of parallel perception. APM leverages GLOM's islands of agreement. 
We present the scientific evidence towards validating GLOM's insight: if input percept is really a field.", "abstract": "In this work, we propose Asynchronous Perception Machine (APM), a computationally-efficient architecture for test-time-training (TTT). APM can process patches of an image one at a time in any order asymmetrically and still encode semantic-awareness in the net. We demonstrate APM's ability to recognize out-of-distribution images without dataset-specific pre-training, augmentation or any-pretext task. APM offers competitive performance over existing TTT approaches. To perform TTT, APM just distills test sample's representation once. APM possesses a unique property: it can learn using just this single representation and starts predicting semantically-aware features. APM demostrates potential applications beyond test-time-training: APM can scale up to a dataset of 2D images and yield semantic-clusterings in a single forward pass. APM also provides first empirical evidence towards validating GLOM's insight, i.e. input percept is a field. Therefore, APM helps us converge towards an implementation which can do both interpolation and perception on a shared-connectionist hardware. Our code is publicly available at https://rajatmodi62.github.io/apm_project_page/\nIt now appears that some of the ideas in GLOM could be made to work.\nhttps://www.technologyreview.com/2021/04/16/1021871/geoffrey-hinton-glom-godfather-ai-neural-networks/\nGLOM = Geoff's Latest Original Model.\n .-\"\"\"\"\"\"-.\n .' '.\n/ O O \\\n| O |\n \\ '------' /\n '. .'\n '-....-'\nSilent men in deep-contemplation.\nSilent men emerges only sometimes.\nSilent men love all.\nSilent men practice slow science.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/96438"} +{"video_file": "7arAADUK6D_39026017.mp4", "openreview_id": "7arAADUK6D", "slideslive_id": 39026017, "venue": "nips2024", "title": "Ensemble Learning for Heterogeneous Large Language Models with Deep Parallel Collaboration", "status": "Spotlight", "keywords": "Ensemble Learning;Large Language Model;Relative Representation", "tldr": "Enable the probability distribution averaging between heterogeneous large language models", "abstract": "Large language models (LLMs) exhibit complementary strengths in various tasks, motivating the research of LLM ensembling. However, existing work focuses on training an extra reward model or fusion model to select or combine all candidate answers, posing a great challenge to the generalization on unseen data distributions. Besides, prior methods use textual responses as communication media, ignoring the valuable information in the internal representations. In this work, we propose a training-free ensemble framework \\textsc{DeePEn}, fusing the informative probability distributions yielded by different LLMs at each decoding step. Unfortunately, the vocabulary discrepancy between heterogeneous LLMs directly makes averaging the distributions unfeasible due to the token misalignment. To address this challenge, \\textsc{DeePEn} maps the probability distribution of each model from its own probability space to a universal \\textit{relative space} based on the relative representation theory, and performs aggregation. Next, we devise a search-based inverse transformation to transform the aggregated result back to the probability space of one of the ensembling LLMs (main model), in order to determine the next token. 
We conduct extensive experiments on ensembles of different number of LLMs, ensembles of LLMs with different architectures, and ensembles between the LLM and the specialist model. Experimental results show that (i) \\textsc{DeePEn} achieves consistent improvements across six benchmarks covering subject examination, reasoning, and knowledge, (ii) a well-performing specialist model can benefit from a less effective LLM through distribution fusion, and (iii) \\textsc{DeePEn} has complementary strengths with other ensemble methods such as voting.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96435"} +{"video_file": "7eIaqYrpcs_39024775.mp4", "openreview_id": "7eIaqYrpcs", "slideslive_id": 39024775, "venue": "nips2024", "title": "Vidu4D: Single Generated Video to High-Fidelity 4D Reconstruction with Dynamic Gaussian Surfels", "status": "Poster", "keywords": "generative models;4D reconstruction;Gaussian splatting", "tldr": "Vidu4D, a reconstruction model that excels in accurately reconstructing dynamic gaussian surfels from single generated videos.", "abstract": "Video generative models are receiving particular attention given their ability to generate realistic and imaginative frames. Besides, these models are also observed to exhibit strong 3D consistency, significantly enhancing their potential to act as world simulators. In this work, we present Vidu4D, a novel reconstruction model that excels in accurately reconstructing 4D (i.e., sequential 3D) representations from single generated videos, addressing challenges associated with non-rigidity and frame distortion. This capability is pivotal for creating high-fidelity virtual contents that maintain both spatial and temporal coherence. At the core of Vidu4D is our proposed Dynamic Gaussian Surfels (DGS) technique. DGS optimizes time-varying warping functions to transform Gaussian surfels (surface elements) from a static state to a dynamically warped state. This transformation enables a precise depiction of motion and deformation over time. To preserve the structural integrity of surface-aligned Gaussian surfels, we design the warped-state geometric regularization based on continuous warping fields for estimating normals. Additionally, we learn refinements on rotation and scaling parameters of Gaussian surfels, which greatly alleviates texture flickering during the warping process and enhances the capture of fine-grained appearance details. Vidu4D also contains a novel initialization state that provides a proper start for the warping fields in DGS. Equipping Vidu4D with an existing video generative model, the overall framework demonstrates high-fidelity text-to-4D generation in both appearance and geometry.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96432"} +{"video_file": "7fScrgJ3An_39026007.mp4", "openreview_id": "7fScrgJ3An", "slideslive_id": 39026007, "venue": "nips2024", "title": "DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features", "status": "Poster", "keywords": "Autonomous Driving; Generalizable NeRF; Scene Representation Learning; Distillation", "tldr": "DistillNeRF, perceiving/reconstructing the 3D driving world without any labels or per-scene training!", "abstract": "We propose DistillNeRF, a self-supervised learning framework addressing the challenge of understanding 3D environments from limited 2D observations in outdoor autonomous driving scenes. 
Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs with limited view overlap, and is trained self-supervised with differentiable rendering to reconstruct RGB, depth, or feature images. Our first insight is to exploit per-scene optimized Neural Radiance Fields (NeRFs) by generating dense depth and virtual camera targets from them, which helps our model to learn enhanced 3D geometry from sparse non-overlapping image inputs. Second, to learn a semantically rich 3D representation, we propose distilling features from pre-trained 2D foundation models, such as CLIP or DINOv2, thereby enabling various downstream tasks without the need for costly 3D human annotations. To leverage these two insights, we introduce a novel model architecture with a two-stage lift-splat-shoot encoder and a parameterized sparse hierarchical voxel representation. Experimental results on the NuScenes and Waymo NOTR datasets demonstrate that DistillNeRF significantly outperforms existing comparable state-of-the-art self-supervised methods for scene reconstruction, novel view synthesis, and depth estimation; and it allows for competitive zero-shot 3D semantic occupancy prediction, as well as open-world scene understanding through distilled foundation model features. Demos and code will be available at https://distillnerf.github.io/.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96431"} +{"video_file": "7flSQgZ4RT_39026573.mp4", "openreview_id": "7flSQgZ4RT", "slideslive_id": 39026573, "venue": "nips2024", "title": "Navigable Graphs for High-Dimensional Nearest Neighbor Search: Constructions and Limits", "status": "Poster", "keywords": "vector databases;navigable graphs;near-neighbor search", "tldr": "We prove tight upper and lower bounds for the average degree of navigable graphs for high-dimensional point sets.", "abstract": "There has been significant recent interest in graph-based nearest neighbor search methods, many of which are centered on the construction of (approximately) \"navigable\" graphs over high-dimensional point sets. A graph is navigable if we can successfully move from any starting node to any target node using a greedy routing strategy where we always move to the neighbor that is closest to the destination according to the given distance function. The complete graph is obviously navigable for any point set, but the important question for applications is if sparser graphs can be constructed. While this question is fairly well understood in low dimensions, we establish some of the first upper and lower bounds for high-dimensional point sets. First, we give a simple and efficient way to construct a navigable graph with average degree $O(\\sqrt{n \\log n})$ for any set of $n$ points, in any dimension, for any distance function. We complement this result with a nearly matching lower bound: even under the Euclidean metric in $O(\\log n)$ dimensions, a random point set has no navigable graph with average degree $O(n^{\\alpha})$ for any $\\alpha < 1/2$. 
Our lower bound relies on sharp anti-concentration bounds for binomial random variables, which we use to show that the near-neighborhoods of a set of random points do not overlap significantly, forcing any navigable graph to have many edges.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/96430"} +{"video_file": "7gf6oGdKPU_39028754.mp4", "openreview_id": "7gf6oGdKPU", "slideslive_id": 39028754, "venue": "nips2024", "title": "Retrieval-Retro: Retrieval-based Inorganic Retrosynthesis with Expert Knowledge", "status": "Poster", "keywords": "Inorganic Synthesis;Inorganic Retrosynthesis;Material Science", "tldr": "Inorganic Retrosynthesis", "abstract": "While inorganic retrosynthesis planning is essential in the field of chemical science, the application of machine learning in this area has been notably less explored compared to organic retrosynthesis planning. In this paper, we propose Retrieval-Retro for inorganic retrosynthesis planning, which implicitly extracts the precursor information of reference materials that are retrieved from the knowledge base regarding domain expertise in the field. Specifically, instead of directly employing the precursor information of reference materials, we propose implicitly extracting it with various attention layers, which enables the model to learn novel synthesis recipes more effectively. Moreover, during retrieval, we consider the thermodynamic relationship between target material and precursors, which is essential domain expertise in identifying the most probable precursor set among various options. Extensive experiments demonstrate the superiority of Retrieval-Retro in retrosynthesis planning, especially in discovering novel synthesis recipes, which is crucial for materials discovery. The source code for Retrieval-Retro is available at https://github.com/HeewoongNoh/Retrieval-Retro.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/96429"} +{"video_file": "7sdkLVuYCU_39026062.mp4", "openreview_id": "7sdkLVuYCU", "slideslive_id": 39026062, "venue": "nips2024", "title": "QTIP: Quantization with Trellises and Incoherence Processing", "status": "Spotlight", "keywords": "quantization;llms;trellises;fast inference;post training quantization;trellis coded quantization;model compression;computed codes", "tldr": "We present the first tractable ultra-high dimensional quantizer for LLM PTQ that supports fast inference, enabling state-of-the-art quantization quality and inference speed.", "abstract": "Post-training quantization (PTQ) reduces the memory footprint of LLMs by quantizing weights to low-precision datatypes. Since LLM inference is usually memory-bound, PTQ methods can improve inference throughput. Recent state-of-the-art PTQ approaches use vector quantization (VQ) to quantize multiple weights at once, which improves information utilization through better shaping. However, VQ requires a codebook with size exponential in the dimension. This limits current VQ-based PTQ works to low VQ dimensions ($\\le 8$) that in turn limit quantization quality. Here, we introduce QTIP, which instead uses trellis coded quantization (TCQ) to achieve ultra-high-dimensional quantization. TCQ uses a stateful decoder that separates the codebook size from the bitrate and effective dimension. 
QTIP introduces a spectrum of lookup-only to computed lookup-free trellis codes designed for a hardware-efficient \"bitshift\" trellis structure; these codes achieve state-of-the-art results in both quantization quality and inference speed.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/96418"} +{"video_file": "7t9eDEY2GT_39027987.mp4", "openreview_id": "7t9eDEY2GT", "slideslive_id": 39027987, "venue": "nips2024", "title": "Flipping-based Policy for Chance-Constrained Markov Decision Processes", "status": "Poster", "keywords": "Reinforcement learning;Chance constraints;Stochastic policy", "tldr": "We present a flipping-based policy for safe reinforcement learning and provide theoretical foundations for its optimality and practical implementation.", "abstract": "Safe reinforcement learning (RL) is a promising approach for many real-world decision-making problems where ensuring safety is a critical necessity. In safe RL research, while expected cumulative safety constraints (ECSCs) are typically the first choices, chance constraints are often more pragmatic for incorporating safety under uncertainties. This paper proposes a \\textit{flipping-based policy} for Chance-Constrained Markov Decision Processes (CCMDPs). The flipping-based policy selects the next action by tossing a potentially distorted coin between two action candidates. The probability of the flip and the two action candidates vary depending on the state. We establish a Bellman equation for CCMDPs and further prove the existence of a flipping-based policy within the optimal solution sets. Since solving the problem with joint chance constraints is challenging in practice, we then prove that joint chance constraints can be approximated into Expected Cumulative Safety Constraints (ECSCs) and that there exists a flipping-based policy in the optimal solution sets for constrained MDPs with ECSCs. As a specific instance of practical implementations, we present a framework for adapting constrained policy optimization to train a flipping-based policy. This framework can be applied to other safe RL algorithms. We demonstrate that the flipping-based policy can improve the performance of the existing safe RL algorithms under the same limits of safety constraints on Safety Gym benchmarks.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96415"} +{"video_file": "7tRtH0AoBl_39028803.mp4", "openreview_id": "7tRtH0AoBl", "slideslive_id": 39028803, "venue": "nips2024", "title": "Randomized Exploration for Reinforcement Learning with Multinomial Logistic Function Approximation", "status": "Poster", "keywords": "Reinforcement learning;Function approximation;Multinomial logistic regression;Regret analysis", "tldr": "We propose randomized exploration model-based RL algorithms with multinomial logistic function approximation.", "abstract": "We study reinforcement learning with multinomial logistic (MNL) function approximation where the underlying transition probability kernel of the Markov decision processes (MDPs) is parametrized by an unknown transition core with features of state and action. For the finite horizon episodic setting with inhomogeneous state transitions, we propose provably efficient algorithms with randomized exploration having frequentist regret guarantees. 
For our first algorithm, RRL-MNL, we adapt optimistic sampling to ensure the optimism of the estimated value function with sufficient frequency and establish that RRL-MNL is both statistically and computationally efficient, achieving a $\\tilde{O}(\\kappa^{-1} d^{3/2} H^{3/2} \\sqrt{T})$ frequentist regret bound with constant-time computational cost per episode. Here, $d$ is the dimension of the transition core, $H$ is the horizon length, $T$ is the total number of steps, and $\\kappa$ is a problem-dependent constant. Despite the simplicity and practicality of RRL-MNL, its regret bound scales with $\\kappa^{-1}$, which is potentially large in the worst case. To improve the dependence on $\\kappa^{-1}$, we propose ORRL-MNL, which estimates the value function using local gradient information of the MNL transition model. We show that its frequentist regret bound is $\\tilde{O}(d^{3/2} H^{3/2} \\sqrt{T} + \\kappa^{-1} d^2 H^2)$. To the best of our knowledge, these are the first randomized RL algorithms for the MNL transition model that achieve both computational and statistical efficiency. Numerical experiments demonstrate the superior performance of the proposed algorithms.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96414"} +{"video_file": "7txPaUpUnc_39028864.mp4", "openreview_id": "7txPaUpUnc", "slideslive_id": 39028864, "venue": "nips2024", "title": "Identifying Functionally Important Features with End-to-End Sparse Dictionary Learning", "status": "Poster", "keywords": "Mechanistic Interpretability;Interpretability;Explainability;Transparency;Sparse Coding;Causal mediation analysis;High dimensional data analysis", "tldr": "We introduce end-to-end sparse autoencoders, which aim to learn functionally relevant features from network activations. They are a Pareto improvement over standard sparse autoencoders.", "abstract": "Identifying the features learned by neural networks is a core challenge in mechanistic interpretability. Sparse autoencoders (SAEs), which learn a sparse, overcomplete dictionary that reconstructs a network's internal activations, have been used to identify these features. However, SAEs may learn more about the structure of the dataset than the computational structure of the network. There is therefore only indirect reason to believe that the directions found in these dictionaries are functionally important to the network. We propose end-to-end (e2e) sparse dictionary learning, a method for training SAEs that ensures the features learned are functionally important by minimizing the KL divergence between the output distributions of the original model and the model with SAE activations inserted. Compared to standard SAEs, e2e SAEs offer a Pareto improvement: They explain more network performance, require fewer total features, and require fewer simultaneously active features per datapoint, all with no cost to interpretability. We explore geometric and qualitative differences between e2e SAE features and standard SAE features. E2e dictionary learning brings us closer to methods that can explain network behavior concisely and accurately. 
We release our library for training e2e SAEs and reproducing our analysis at https://github.com/ApolloResearch/e2e_sae.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/96413"} +{"video_file": "7uqVfZW6Mo_39027049.mp4", "openreview_id": "7uqVfZW6Mo", "slideslive_id": 39027049, "venue": "nips2024", "title": "Not All Diffusion Model Activations Have Been Evaluated as Discriminative Features", "status": "Spotlight", "keywords": "Diffusion Models;Representation Learning;Model Property Study", "tldr": "We evaluate and exploit previously-ignored activations from diffusion backbones based on our new discoveries of their properties, thus resulting in better features for discrimination.", "abstract": "Diffusion models are initially designed for image generation. Recent research shows that the internal signals within their backbones, named activations, can also serve as dense features for various discriminative tasks such as semantic segmentation. Given numerous activations, selecting a small yet effective subset poses a fundamental problem. To this end, the early study of this field performs a large-scale quantitative comparison of the discriminative ability of the activations. However, we find that many potential activations have not been evaluated, such as the queries and keys used to compute attention scores. Moreover, recent advancements in diffusion architectures bring many new activations, such as those within embedded ViT modules. Both combined, activation selection remains unresolved but overlooked. To tackle this issue, this paper takes a further step with a much broader range of activations evaluated. Considering the significant increase in activations, a full-scale quantitative comparison is no longer operational. Instead, we seek to understand the properties of these activations, such that the activations that are clearly inferior can be filtered out in advance via simple qualitative evaluation. After careful analysis, we discover three properties universal among diffusion models, enabling this study to go beyond specific models. On top of this, we present effective feature selection solutions for several popular diffusion models. Finally, the experiments across multiple discriminative tasks validate the superiority of our method over the SOTA competitors. Our code is available at https://github.com/Darkbblue/generic-diffusion-feature.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96411"} +{"video_file": "7v0UyO0B6q_39027050.mp4", "openreview_id": "7v0UyO0B6q", "slideslive_id": 39027050, "venue": "nips2024", "title": "Online Posterior Sampling with a Diffusion Prior", "status": "Poster", "keywords": "posterior sampling;diffusion models;online learning;contextual bandits", "tldr": "We propose Laplace posterior sampling approximations for linear models and GLMs with a diffusion model prior, and apply them to contextual bandits.", "abstract": "Posterior sampling in contextual bandits with a Gaussian prior can be implemented exactly or approximately using the Laplace approximation. The Gaussian prior is computationally efficient but it cannot describe complex distributions. In this work, we propose approximate posterior sampling algorithms for contextual bandits with a diffusion model prior. The key idea is to sample from a chain of approximate conditional posteriors, one for each stage of the reverse diffusion process, which are obtained by the Laplace approximation. 
Our approximations are motivated by posterior sampling with a Gaussian prior, and inherit its simplicity and efficiency. They are asymptotically consistent and perform well empirically on a variety of contextual bandit problems.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/96410"} +{"video_file": "7zzOcyT0hd_39024451.mp4", "openreview_id": "7zzOcyT0hd", "slideslive_id": 39024451, "venue": "nips2024", "title": "Sub-optimal Experts mitigate Ambiguity in Inverse Reinforcement Learning", "status": "Poster", "keywords": "Inverse Reinforcement Learning;Sub-optimal Experts;Sample Complexity", "tldr": "We study how the presence of multiple and sub-optimal experts can mitigate the ambiguity that affects Inverse Reinforcement Learning.", "abstract": "Inverse Reinforcement Learning (IRL) deals with the problem of deducing a reward function that explains the behavior of an expert agent who is assumed to act optimally in an underlying unknown task. Recent works have studied the IRL problem from the perspective of recovering the feasible reward set, i.e., the class of reward functions that are compatible with a unique optimal expert. However, in several problems of interest it is possible to observe the behavior of multiple experts with different degrees of optimality (e.g., racing drivers whose skills range from amateurs to professionals). For this reason, in this work, we focus on the reconstruction of the feasible reward set when, in addition to demonstrations from the optimal expert, we observe the behavior of multiple sub-optimal experts. Given this problem, we first study the theoretical properties showing that the presence of multiple sub-optimal experts, in addition to the optimal one, can significantly shrink the set of compatible rewards, ultimately mitigating the inherent ambiguity of IRL. Furthermore, we study the statistical complexity of estimating the feasible reward set with a generative model and analyze a uniform sampling algorithm that turns out to be minimax optimal whenever the sub-optimal experts' performance level is sufficiently close to that of the optimal expert.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96405"} +{"video_file": "82Ndsr4OS6_39027557.mp4", "openreview_id": "82Ndsr4OS6", "slideslive_id": 39027557, "venue": "nips2024", "title": "Adversarially Trained Weighted Actor-Critic for Safe Offline Reinforcement Learning", "status": "Poster", "keywords": "RL;safe-RL;Offline RL", "tldr": "We propose a novel algorithm for Safe Offline RL under functional approximation; our algorithm achieves robust safe policy improvement (first such result) using only single policy concentrability assumption.", "abstract": "We propose WSAC (Weighted Safe Actor-Critic), a novel algorithm for Safe Offline Reinforcement Learning (RL) under functional approximation, which can robustly optimize policies to improve upon an arbitrary reference policy with limited data coverage. WSAC is designed as a two-player Stackelberg game to optimize a refined objective function. The actor optimizes the policy against two adversarially trained value critics with small importance-weighted Bellman errors, which focus on scenarios where the actor's performance is inferior to the reference policy. 
In theory, we demonstrate that when the actor employs a no-regret optimization oracle, WSAC achieves a number of guarantees: (i) For the first time in the safe offline RL setting, we establish that WSAC can produce a policy that outperforms \\textbf{any} reference policy while maintaining the same level of safety, which is critical to designing a safe algorithm for offline RL. (ii) WSAC achieves the optimal statistical convergence rate of $1/\\sqrt{N}$ to the reference policy, where $N$ is the size of the offline dataset. (iii) We theoretically show that WSAC guarantees a safe policy improvement across a broad range of hyperparameters that control the degree of pessimism, indicating its practical robustness. Additionally, we offer a practical version of WSAC and compare it with existing state-of-the-art safe offline RL algorithms in several continuous control environments. WSAC outperforms all baselines across a range of tasks, supporting the theoretical results.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96400"} +{"video_file": "848vuK2cKp_39025785.mp4", "openreview_id": "848vuK2cKp", "slideslive_id": 39025785, "venue": "nips2024", "title": "Offline Oracle-Efficient Learning for Contextual MDPs via Layerwise Exploration-Exploitation Tradeoff", "status": "Poster", "keywords": "reinforcement learning;contextual MDP;offline density estimation;computational efficiency", "tldr": "This paper presents a statistical and computational reduction from CMDPs to offline density estimation with little overhead.", "abstract": "Motivated by the recent discovery of a statistical and computational reduction from contextual bandits to offline regression \\citep{simchi2020bypassing}, we address the general (stochastic) Contextual Markov Decision Process (CMDP) problem with horizon $H$ (also known as CMDP with $H$ layers). In this paper, we introduce a reduction from CMDPs to offline density estimation under the realizability assumption, i.e., a model class $M$ containing the true underlying CMDP is provided in advance. We develop an efficient, statistically near-optimal algorithm requiring only $O(H \\log T)$ calls to an offline density estimation algorithm (or oracle) across all $T$ rounds. This number can be further reduced to $O(H \\log\\log T)$ if $T$ is known in advance. Our results mark the first efficient and near-optimal reduction from CMDPs to offline density estimation without imposing any structural assumptions on the model class. A notable feature of our algorithm is the design of a layerwise exploration-exploitation tradeoff tailored to address the layerwise structure of CMDPs. 
Additionally, our algorithm is versatile and applicable to pure exploration tasks in reward-free reinforcement learning.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96396"} +{"video_file": "85tu7K06i3_39028596.mp4", "openreview_id": "85tu7K06i3", "slideslive_id": 39028596, "venue": "nips2024", "title": "Looks Too Good To Be True: An Information-Theoretic Analysis of Hallucinations in Generative Restoration Models", "status": "Poster", "keywords": "uncertainty;hallucinations;perception;tradeoff;distortion;restoration tasks;inverse problems.", "tldr": "We uncover an inherent limitation of generative image restoration: higher perceptual quality comes at the expense of greater uncertainty.", "abstract": "The pursuit of high perceptual quality in image restoration has driven the development of revolutionary generative models, capable of producing results often visually indistinguishable from real data. However, as their perceptual quality continues to improve, these models also exhibit a growing tendency to generate hallucinations \u2013 realistic-looking details that do not exist in the ground truth images. Hallucinations in these models create uncertainty about their reliability, raising major concerns about their practical application. This paper investigates this phenomenon through the lens of information theory, revealing a fundamental tradeoff between uncertainty and perception. We rigorously analyze the relationship between these two factors, proving that the global minimal uncertainty in generative models grows in tandem with perception. In particular, we define the inherent uncertainty of the restoration problem and show that attaining perfect perceptual quality entails at least twice this uncertainty. Additionally, we establish a relation between distortion, uncertainty and perception, through which we prove the aforementioned uncertainty-perception tradeoff induces the well-known perception-distortion tradeoff. We demonstrate our theoretical findings through experiments with super-resolution and inpainting algorithms. This work uncovers fundamental limitations of generative models in achieving both high perceptual quality and reliable predictions for image restoration. Thus, we aim to raise awareness among practitioners about this inherent tradeoff, empowering them to make informed decisions and potentially prioritize safety over perceptual performance.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/96394"} +{"video_file": "87AXdbkRyd_39025642.mp4", "openreview_id": "87AXdbkRyd", "slideslive_id": 39025642, "venue": "nips2024", "title": "Self-supervised Transformation Learning for Equivariant Representations", "status": "Poster", "keywords": "Equivariant Learning;Transformation Representation;Self-supervised Transformation Learning", "tldr": "Self-Supervised Transformation Learning (STL) uses transformation representations from image pairs instead of labels to learn equivariant transformations, enhancing performance efficiently.", "abstract": "Unsupervised representation learning has significantly advanced various machine learning tasks. In the computer vision domain, state-of-the-art approaches utilize transformations like random crop and color jitter to achieve invariant representations, embedding semantically the same inputs despite transformations. However, this can degrade performance in tasks requiring precise features, such as localization or flower classification. 
To address this, recent research incorporates equivariant representation learning, which captures transformation-sensitive information. However, current methods depend on transformation labels and thus struggle with interdependency and complex transformations. We propose Self-supervised Transformation Learning (STL), replacing transformation labels with transformation representations derived from image pairs. The proposed method ensures transformation representation is image-invariant and learns corresponding equivariant transformations, enhancing performance without increased batch complexity. We demonstrate the approach\u2019s effectiveness across diverse classification and detection tasks, outperforming existing methods in 7 out of 11 benchmarks and excelling in detection. By integrating complex transformations like AugMix, unusable by prior equivariant methods, this approach enhances performance across tasks, underscoring its adaptability and resilience. Additionally, its compatibility with various base models highlights its flexibility and broad applicability. The code is available at https://github.com/jaemyung-u/stl.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96393"} +{"video_file": "88TzdGyPT6_39026613.mp4", "openreview_id": "88TzdGyPT6", "slideslive_id": 39026613, "venue": "nips2024", "title": "Benign overfitting in leaky ReLU networks with moderate input dimension", "status": "Spotlight", "keywords": "benign overfitting;leaky relu;optimization;generalization;hinge loss;margin;overparameterization", "tldr": "We investigate the conditions under which leaky ReLU networks generalize well beyond the regime of near-orthogonal training data.", "abstract": "The problem of benign overfitting asks whether it is possible for a model to perfectly fit noisy training data and still generalize well. We study benign overfitting in two-layer leaky ReLU networks trained with the hinge loss on a binary classification task. We consider input data which can be decomposed into the sum of a common signal and a random noise component, which lie on subspaces orthogonal to one another. We characterize conditions on the signal to noise ratio (SNR) of the model parameters giving rise to benign versus non-benign, or harmful, overfitting: in particular, if the SNR is high then benign overfitting occurs, conversely if the SNR is low then harmful overfitting occurs. We attribute both benign and non-benign overfitting to an approximate margin maximization property and show that leaky ReLU networks trained on hinge loss with gradient descent (GD) satisfy this property. In contrast to prior work we do not require the training data to be nearly orthogonal. 
Notably, for input dimension $d$ and training sample size $n$, while results in prior work require $d = \\Omega(n^2 \\log n)$, here we require only $d = \\Omega(n)$.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96392"} +{"video_file": "8APPypS0yN_39027713.mp4", "openreview_id": "8APPypS0yN", "slideslive_id": 39027713, "venue": "nips2024", "title": "On the Expressivity and Sample Complexity of Node-Individualized Graph Neural Networks", "status": "Poster", "keywords": "Graph neural networks;graph learning;Weisfeiler-Leman;VC dimension", "tldr": "A theoretical analysis of the sample complexity of GNNs with node individualization schemes.", "abstract": "Graph neural networks (GNNs) employing message passing for graph classification are inherently limited by the expressive power of the Weisfeiler-Leman (WL) test for graph isomorphism. Node individualization schemes, which assign unique identifiers to nodes (e.g., by adding random noise to features), are a common approach for achieving universal expressiveness. However, the ability of GNNs endowed with individualization schemes to generalize beyond the training data is still an open question. To address this question, this paper presents a theoretical analysis of the sample complexity of such GNNs from a statistical learning perspective, employing Vapnik\u2013Chervonenkis (VC) dimension and covering number bounds. We demonstrate that node individualization schemes that are permutation-equivariant result in lower sample complexity, and design novel individualization schemes that exploit these results. As an application of this analysis, we also develop a novel architecture that can perform substructure identification (i.e., subgraph isomorphism) while having a lower VC dimension compared to competing methods. Finally, our theoretical findings are validated experimentally on both synthetic and real-world datasets.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/96388"} +{"video_file": "8CguPoe3TP_39024795.mp4", "openreview_id": "8CguPoe3TP", "slideslive_id": 39024795, "venue": "nips2024", "title": "Bayesian Nonparametrics Meets Data-Driven Distributionally Robust Optimization", "status": "Poster", "keywords": "Bayesian Nonparametrics;Distributionally Robust Optimization;Dirichlet Process;Decision Theory;Ambiguity Aversion;Machine Learning", "tldr": "We introduce and study a novel distributionally robust optimization procedure, combining insights from Bayesian nonparametric (Dirichlet process) theory and a popular decision-theoretic model of smooth ambiguity-averse preferences.", "abstract": "Training machine learning and statistical models often involves optimizing a data-driven risk criterion. The risk is usually computed with respect to the empirical data distribution, but this may result in poor and unstable out-of-sample performance due to distributional uncertainty. In the spirit of distributionally robust optimization, we propose a novel robust criterion by combining insights from Bayesian nonparametric (i.e., Dirichlet process) theory and a recent decision-theoretic model of smooth ambiguity-averse preferences. First, we highlight novel connections with standard regularized empirical risk minimization techniques, among which Ridge and LASSO regressions. Then, we theoretically demonstrate the existence of favorable finite-sample and asymptotic statistical guarantees on the performance of the robust optimization procedure.
For practical implementation, we propose and study tractable approximations of the criterion based on well-known Dirichlet process representations. We also show that the smoothness of the criterion naturally leads to standard gradient-based numerical optimization. Finally, we provide insights into the workings of our method by applying it to a variety of tasks based on simulated and real datasets.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96385"} +{"video_file": "8Dkz60yGfj_39026972.mp4", "openreview_id": "8Dkz60yGfj", "slideslive_id": 39026972, "venue": "nips2024", "title": "Adjust Pearson's $r$ to Measure Arbitrary Monotone Dependence", "status": "Poster", "keywords": "Pearson's r;correlation coefficient;rearrangement inequality;nonlinear monotone dependence", "tldr": "The most used Pearson's r, typically regarded as a measure only for linear dependence, has been enhanced to accurately measure nonlinear monotone dependence, with the aid of an inequality tighter than Cauchy-Schwartz Inequality.", "abstract": "Pearson's $r$, the most widely-used correlation coefficient, is traditionally regarded as exclusively capturing linear dependence, leading to its discouragement in contexts involving nonlinear relationships. However, recent research challenges this notion, suggesting that Pearson's $r$ should not be ruled out a priori for measuring nonlinear monotone relationships. Pearson's $r$ is essentially a scaled covariance, rooted in the renowned Cauchy-Schwarz Inequality. Our findings reveal that different scaling bounds yield coefficients with different capture ranges, and interestingly, tighter bounds actually expand these ranges. We derive a tighter inequality than Cauchy-Schwarz Inequality, leverage it to refine Pearson's $r$, and propose a new correlation coefficient, i.e., rearrangement correlation. This coefficient is able to capture arbitrary monotone relationships, both linear and nonlinear ones. It reverts to Pearson's $r$ in linear scenarios. Simulation experiments and real-life investigations show that the rearrangement correlation is more accurate in measuring nonlinear monotone dependence than the three classical correlation coefficients, and other recently proposed dependence measures.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/96384"} +{"video_file": "8Dy42ThoNe_39026234.mp4", "openreview_id": "8Dy42ThoNe", "slideslive_id": 39026234, "venue": "nips2024", "title": "Large Language Model Unlearning", "status": "Poster", "keywords": "Large Language Model; LLM Alignment; Machine Unlearning; AI Privacy; AI Security", "tldr": "We study how to perform unlearning, i.e. forgetting undesirable (mis)behaviors, on large language models (LLMs).", "abstract": "We study how to perform unlearning, i.e. forgetting undesirable (mis)behaviors, on large language models (LLMs). We show at least three scenarios of aligning LLMs with human preferences can benefit from unlearning: (1) removing harmful responses, (2) erasing copyright-protected content as requested, and (3) reducing hallucinations. Unlearning, as an alignment technique, has three advantages. (1) It only requires negative (e.g. harmful) examples, which are much easier and cheaper to collect (e.g. via red teaming or user reporting) than positive (e.g. helpful and often human-written) examples required in the standard alignment process. (2) It is computationally efficient. 
(3) It is especially effective when we know which training samples cause the misbehavior. To the best of our knowledge, our work is among the first to explore LLM unlearning. We are also among the first to formulate the settings, goals, and evaluations in LLM unlearning. Despite only having negative samples, our ablation study shows that unlearning can still achieve better alignment performance than RLHF with just 2% of its computational time.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/96383"} +{"video_file": "8Fxqn1tZM1_39027780.mp4", "openreview_id": "8Fxqn1tZM1", "slideslive_id": 39027780, "venue": "nips2024", "title": "Scale Equivariant Graph Metanetworks", "status": "Oral", "keywords": "graph neural networks;weight space networks;implicit neural representations;symmetries", "tldr": "We introduce a graph metanetwork framework that allows scaling and permutation equivariant neural network processing.", "abstract": "This paper pertains to an emerging machine learning paradigm: learning higher- order functions, i.e. functions whose inputs are functions themselves, particularly when these inputs are Neural Networks (NNs). With the growing interest in architectures that process NNs, a recurring design principle has permeated the field: adhering to the permutation symmetries arising from the connectionist structure of NNs. However, are these the sole symmetries present in NN parameterizations? Zooming into most practical activation functions (e.g. sine, ReLU, tanh) answers this question negatively and gives rise to intriguing new symmetries, which we collectively refer to as scaling symmetries, that is, non-zero scalar multiplications and divisions of weights and biases. In this work, we propose Scale Equivariant Graph MetaNetworks - ScaleGMNs, a framework that adapts the Graph Metanetwork (message-passing) paradigm by incorporating scaling symmetries and thus rendering neuron and edge representations equivariant to valid scalings. We introduce novel building blocks, of independent technical interest, that allow for equivariance or invariance with respect to individual scalar multipliers or their product and use them in all components of ScaleGMN. Furthermore, we prove that, under certain expressivity conditions, ScaleGMN can simulate the forward and backward pass of any input feedforward neural network. Experimental results demonstrate that our method advances the state-of-the-art performance for several datasets and activation functions, highlighting the power of scaling symmetries as an inductive bias for NN processing. The source code is publicly available at https://github.com/jkalogero/scalegmn.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/96382"} +{"video_file": "8HwI6UavYc_39025583.mp4", "openreview_id": "8HwI6UavYc", "slideslive_id": 39025583, "venue": "nips2024", "title": "ReplaceAnything3D: Text-Guided Object Replacement in 3D Scenes with Compositional Scene Representations", "status": "Poster", "keywords": "3D inpainting;Text-to-3D;Diffusion;Score-based Distillation;3D scenes", "tldr": "Replace objects in 3D scenes", "abstract": "We introduce ReplaceAnything3D model RAM3D, a novel method for 3D object replacement in 3D scenes based on users' text description. 
Given multi-view images of a scene, a text prompt describing the object to replace, and another describing the new object, our Erase-and-Replace approach can effectively swap objects in 3D scenes with newly generated content while maintaining 3D consistency across multiple viewpoints. We demonstrate the versatility of RAM3D by applying it to various realistic 3D scene types, showcasing results of modified objects that blend in seamlessly with the scene without impacting its overall integrity.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96380"} +{"video_file": "8PWvdaRQAu_39025563.mp4", "openreview_id": "8PWvdaRQAu", "slideslive_id": 39025563, "venue": "nips2024", "title": "Contrasting with Symile: Simple Model-Agnostic Representation Learning for Unlimited Modalities", "status": "Poster", "keywords": "multimodal;contrastive learning;representation learning;total correlation", "tldr": "Symile is a simple contrastive learning approach that captures higher-order information between any number of modalities and provides a flexible, architecture-agnostic objective for learning modality-specific representations.", "abstract": "Contrastive learning methods, such as CLIP, leverage naturally paired data\u2014for example, images and their corresponding text captions\u2014to learn general representations that transfer efficiently to downstream tasks. While such approaches are generally applied to two modalities, domains such as robotics, healthcare, and video need to support many types of data at once. We show that the pairwise application of CLIP fails to capture joint information between modalities, thereby limiting the quality of the learned representations. To address this issue, we present Symile, a simple contrastive learning approach that captures higher-order information between any number of modalities. Symile provides a flexible, architecture-agnostic objective for learning modality-specific representations. To develop Symile's objective, we derive a lower bound on total correlation, and show that Symile representations for any set of modalities form a sufficient statistic for predicting the remaining modalities. Symile outperforms pairwise CLIP, even with modalities missing in the data, on cross-modal classification and retrieval across several experiments including on an original multilingual dataset of 33M image, text and audio samples and a clinical dataset of chest X-rays, electrocardiograms, and laboratory measurements. All datasets and code used in this work are publicly available at https://github.com/rajesh-lab/symile.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/96372"} +{"video_file": "8UqyWNsnyA_39026293.mp4", "openreview_id": "8UqyWNsnyA", "slideslive_id": 39026293, "venue": "nips2024", "title": "An Autoencoder-Like Nonnegative Matrix Co-Factorization for Improved Student Cognitive Modeling", "status": "Poster", "keywords": "Student Cognitive Modeling;Matrix Co-Factorization;Autoencoder", "tldr": "This paper presents an autoencoder-like nonnegative matrix co-factorization model of student cognition for better-predicting student exercise performance and estimating his or her knowledge proficiency in a subject.", "abstract": "Student cognitive modeling (SCM) is a fundamental task in intelligent education, with applications ranging from personalized learning to educational resource allocation. 
By exploiting students' response logs, SCM aims to predict their exercise performance as well as estimate knowledge proficiency in a subject. Data mining approaches such as matrix factorization can obtain high accuracy in predicting student performance on exercises, but the knowledge proficiency is unknown or poorly estimated. The situation is further exacerbated if only sparse interactions exist between exercises and students (or knowledge concepts). To solve this dilemma, we root monotonicity (a fundamental psychometric theory on educational assessments) in a co-factorization framework and present an autoencoder-like nonnegative matrix co-factorization (AE-NMCF), which improves the accuracy of estimating the student's knowledge proficiency via an encoder-decoder learning pipeline. The resulting estimation problem is nonconvex with nonnegative constraints. We introduce a projected gradient method based on block coordinate descent with Lipschitz constants and guarantee the method's theoretical convergence. Experiments on several real-world data sets demonstrate the efficacy of our approach in terms of both performance prediction accuracy and knowledge estimation ability, when compared with existing student cognitive models.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/96371"} +{"video_file": "8Uyfr5TcNR_39027538.mp4", "openreview_id": "8Uyfr5TcNR", "slideslive_id": 39027538, "venue": "nips2024", "title": "Robust Reinforcement Learning with General Utility", "status": "Poster", "keywords": "robust reinforcement Learning;general utility;minimax optimization", "tldr": "We propose a new robust reinforcement learning framework with general utility, and design algorithms with provable convergence rates to stationary point or global optimum.", "abstract": "Reinforcement Learning (RL) problem with general utility is a powerful decision making framework that covers standard RL with cumulative cost, exploration problems, and demonstration learning. Existing works on RL with general utility do not consider the robustness under environmental perturbation, which is important to adapt RL system in the real-world environment that differs from the training environment. To train a robust policy, we propose a robust RL framework with general utility, which subsumes many existing RL frameworks including RL, robust RL, RL with general utility, constrained RL, robust constrained RL, pure exploration, robust entropy regularized RL, etc. Then we focus on popular convex utility functions, with which our proposed learning framework is a challenging nonconvex-nonconcave minimax optimization problem, and design a two-phase stochastic policy gradient type algorithm and obtain its sample complexity result for gradient convergence. 
Furthermore, for convex utility on a widely used polyhedral ambiguity set, we design an algorithm and obtain its convergence rate to a global optimal solution.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96370"} +{"video_file": "8W5ADJOKcv_39024678.mp4", "openreview_id": "8W5ADJOKcv", "slideslive_id": 39024678, "venue": "nips2024", "title": "Neuc-MDS: Non-Euclidean Multidimensional Scaling Through Bilinear Forms", "status": "Poster", "keywords": "multidimensional scaling;dimension reduction;non-Euclidean geometry", "tldr": "We introduce Neuc-MDS, an extension of classical multidimensional scaling to the non-Euclidean non-metrical setting.", "abstract": "We introduce \\textbf{N}on-\\textbf{Euc}lidean-\\textbf{MDS} (Neuc-MDS), which extends Multidimensional Scaling (MDS) to generate outputs that can be non-Euclidean and non-metric. The main idea is to generalize the inner product to other symmetric bilinear forms to utilize the negative eigenvalues of dissimiliarity Gram matrices. Neuc-MDS efficiently optimizes the choice of (both positive and negative) eigenvalues of the dissimilarity Gram matrix to reduce STRESS, the sum of squared pairwise error. We provide an in-depth error analysis and proofs of the optimality in minimizing lower bounds of STRESS. We demonstrate Neuc-MDS's ability to address limitations of classical MDS raised by prior research, and test it on various synthetic and real-world datasets in comparison with both linear and non-linear dimension reduction methods.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96368"} +{"video_file": "8ZLL6mu2qC_39024886.mp4", "openreview_id": "8ZLL6mu2qC", "slideslive_id": 39024886, "venue": "nips2024", "title": "Optimal and Approximate Adaptive Stochastic Quantization", "status": "Poster", "keywords": "adaptive quantization;quantization;compression;algorithms;dynamic programming", "tldr": "Optimal and near-optimal methods for unbiasedly quantizing large vectors on the fly while minimizing the mean squared error for the particular input", "abstract": "Quantization is a fundamental optimization for many machine learning (ML) use cases, including compressing gradients, model weights and activations, and datasets. The most accurate form of quantization is adaptive, where the error is minimized with respect to a given input rather than optimizing for the worst case. However, optimal adaptive quantization methods are considered infeasible in terms of both their runtime and memory requirements.\nWe revisit the Adaptive Stochastic Quantization (ASQ) problem and present algorithms that find optimal solutions with asymptotically improved time and space complexities. Our experiments indicate that our algorithms may open the door to using ASQ more extensively in a variety of ML applications. 
We also present an even faster approximation algorithm for quantizing large inputs on the fly.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96366"} +{"video_file": "8aAaYEwNR4_39028878.mp4", "openreview_id": "8aAaYEwNR4", "slideslive_id": 39028878, "venue": "nips2024", "title": "EAI: Emotional Decision-Making of LLMs in Strategic Games and Ethical Dilemmas", "status": "Poster", "keywords": "LLM;Ethics;Emotions;Game Theory", "tldr": "Integration of emotions into LLMs in Ethics and Game Theory settings reveal significant and consistent effects on decision-making.", "abstract": "One of the urgent tasks of artificial intelligence is to assess the safety and alignment of large language models (LLMs) with human behavior. Conventional verification only in pure natural language processing benchmarks can be insufficient. Since emotions often influence human decisions, this paper examines LLM alignment in complex strategic and ethical environments, providing an in-depth analysis of the drawbacks of our psychology and the emotional impact on decision-making in humans and LLMs. We introduce the novel EAI framework for integrating emotion modeling into LLMs to examine the emotional impact on ethics and LLM-based decision-making in various strategic games, including bargaining and repeated games. Our experimental study with various LLMs demonstrated that emotions can significantly alter the ethical decision-making landscape of LLMs, highlighting the need for robust mechanisms to ensure consistent ethical standards. Our game-theoretic analysis revealed that LLMs are susceptible to emotional biases influenced by model size, alignment strategies, and primary pretraining language. Notably, these biases often diverge from typical human emotional responses, occasionally leading to unexpected drops in cooperation rates, even under positive emotional influence. Such behavior complicates the alignment of multiagent systems, emphasizing the need for benchmarks that can rigorously evaluate the degree of emotional alignment. Our framework provides a foundational basis for developing such benchmarks.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96364"} +{"video_file": "8i6px5W1Rf_39025069.mp4", "openreview_id": "8i6px5W1Rf", "slideslive_id": 39025069, "venue": "nips2024", "title": "Evaluating alignment between humans and neural network representations in image-based learning tasks", "status": "Poster", "keywords": "human alignment;neural network representations;generalization;function learning;decision-making", "tldr": "We compare humans and various pretrained neural networks in image-based learning tasks and identify factors that make neural networks perform more human-like.", "abstract": "Humans represent scenes and objects in rich feature spaces, carrying information that allows us to generalise about category memberships and abstract functions with few examples. What determines whether a neural network model generalises like a human? We tested how well the representations of $86$ pretrained neural network models mapped to human learning trajectories across two tasks where humans had to learn continuous relationships and categories of natural images. In these tasks, both human participants and neural networks successfully identified the relevant stimulus features within a few trials, demonstrating effective generalisation. 
We found that while training dataset size was a core determinant of alignment with human choices, contrastive training with multi-modal data (text and imagery) was a common feature of currently publicly available models that predicted human generalisation. Intrinsic dimensionality of representations had different effects on alignment for different model types. Lastly, we tested three sets of human-aligned representations and found no consistent improvements in predictive accuracy compared to the baselines. In conclusion, pretrained neural networks can serve to extract representations for cognitive models, as they appear to capture some fundamental aspects of cognition that are transferable across tasks. Both our paradigms and modelling approach offer a novel way to quantify alignment between neural networks and humans and extend cognitive science into more naturalistic domains.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/96361"} +{"video_file": "8koaqRdRYH_39025569.mp4", "openreview_id": "8koaqRdRYH", "slideslive_id": 39025569, "venue": "nips2024", "title": "Improving Neural Network Surface Processing with Principal Curvatures", "status": "Poster", "keywords": "geometric deep learning;geometry processing;shape analysis;discrete differential geometry", "tldr": "Neural networks architectures designed to process surfaces can be improved by representing input surfaces via principal curvature", "abstract": "The modern study and use of surfaces is a research topic grounded in centuries of mathematical and empirical inquiry. From a mathematical point of view, curvature is an invariant that characterises the intrinsic geometry and the extrinsic shape of a surface. Yet, in modern applications the focus has shifted away from finding expressive representations of surfaces, and towards the design of efficient neural network architectures to process them. The literature suggests a tendency to either overlook the representation of the processed surface, or use overcomplicated representations whose ability to capture the essential features of a surface is opaque. We propose using curvature as the input of neural network architectures for surface processing, and explore this proposition through experiments making use of the shape operator. Our results show that using curvature as input leads to significant a increase in performance on segmentation and classification tasks, while allowing far less computational overhead than current methods.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96354"} +{"video_file": "8moTQjfqAV_39026669.mp4", "openreview_id": "8moTQjfqAV", "slideslive_id": 39026669, "venue": "nips2024", "title": "Temporal-Difference Learning Using Distributed Error Signals", "status": "Poster", "keywords": "neuroscience;reinforcement learning;temporal-difference learning", "tldr": "We propose a new deep Q-learning algorithm that learns to solve RL tasks without sequential propagation of errors, which provides a potential explanation to credit assignment in biological reward-based learning.", "abstract": "A computational problem in biological reward-based learning is how credit assignment is performed in the nucleus accumbens (NAc). Much research suggests that NAc dopamine encodes temporal-difference (TD) errors for learning value predictions. 
However, dopamine is synchronously distributed in regionally homogeneous concentrations, which does not support explicit credit assignment (like used by backpropagation). It is unclear whether distributed errors alone are sufficient for synapses to make coordinated updates to learn complex, nonlinear reward-based learning tasks. We design a new deep Q-learning algorithm, Artificial Dopamine, to computationally demonstrate that synchronously distributed, per-layer TD errors may be sufficient to learn surprisingly complex RL tasks. We empirically evaluate our algorithm on MinAtar, the DeepMind Control Suite, and classic control tasks, and show it often achieves comparable performance to deep RL algorithms that use backpropagation.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/96351"} +{"video_file": "8oSY3rA9jY_39028328.mp4", "openreview_id": "8oSY3rA9jY", "slideslive_id": 39028328, "venue": "nips2024", "title": "Finding Transformer Circuits With Edge Pruning", "status": "Spotlight", "keywords": "interpretability;circuits;pruning;optimization", "tldr": "We propose a circuit finding method that prunes connections between model components, and show it performs and scales well.", "abstract": "The path to interpreting a language model often proceeds via analysis of circuits---sparse computational subgraphs of the model that capture specific aspects of its behavior. Recent work has automated the task of discovering circuits. Yet, these methods have practical limitations, as they either rely on inefficient search algorithms or inaccurate approximations. In this paper, we frame circuit discovery as an optimization problem and propose Edge Pruning as an effective and scalable solution. Edge Pruning leverages gradient-based pruning techniques, but instead of removing neurons or components, prunes the edges between components. Our method finds circuits in GPT-2 that use less than half the number of edges than circuits found by previous methods while being equally faithful to the full model predictions on standard circuit-finding tasks. Edge Pruning is efficient on tasks involving up to 100,000 examples, outperforming previous methods in speed and producing substantially better circuits. It also perfectly recovers the ground-truth circuits in two models compiled with Tracr. Thanks to its efficiency, we scale Edge Pruning to CodeLlama-13B, a model over 100x the size of GPT-2. We use this setting for a case study, where we compare the mechanisms behind instruction prompting and in-context learning. We find two circuits with more than 99.96% sparsity that match the performance of the full model. Further analysis reveals that the mechanisms in the two settings overlap substantially. 
This shows that Edge Pruning is a practical and scalable tool for interpretability, which can shed light on behaviors that only emerge in large models.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/96350"} +{"video_file": "8ohsbxw7q8_39025973.mp4", "openreview_id": "8ohsbxw7q8", "slideslive_id": 39025973, "venue": "nips2024", "title": "Graph Diffusion Policy Optimization", "status": "Poster", "keywords": "Graph Generation;Diffusion Models;Reinforcement Learning", "tldr": "This paper introduces graph diffusion policy optimization (GDPO), a novel approach to optimize graph diffusion models for arbitrary (e.g., non-differentiable) objectives using reinforcement learning.", "abstract": "Recent research has made significant progress in optimizing diffusion models for downstream objectives, which is an important pursuit in fields such as graph generation for drug design. However, directly applying these models to graph presents challenges, resulting in suboptimal performance. This paper introduces graph diffusion policy optimization (GDPO), a novel approach to optimize graph diffusion models for arbitrary (e.g., non-differentiable) objectives using reinforcement learning. GDPO is based on an eager policy gradient tailored for graph diffusion models, developed through meticulous analysis and promising improved performance. Experimental results show that GDPO achieves state-of-the-art performance in various graph generation tasks with complex and diverse objectives. Code is available at https://github.com/sail-sg/GDPO.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96349"} +{"video_file": "8on9dIUh5v_39025410.mp4", "openreview_id": "8on9dIUh5v", "slideslive_id": 39025410, "venue": "nips2024", "title": "Provable Benefit of Cutout and CutMix for Feature Learning", "status": "Spotlight", "keywords": "Cutout;CutMix;feature learning;theory", "tldr": "We investigate the benefit of Cutout and CutMix for feature learning", "abstract": "Patch-level data augmentation techniques such as Cutout and CutMix have demonstrated significant efficacy in enhancing the performance of vision tasks. However, a comprehensive theoretical understanding of these methods remains elusive. In this paper, we study two-layer neural networks trained using three distinct methods: vanilla training without augmentation, Cutout training, and CutMix training. Our analysis focuses on a feature-noise data model, which consists of several label-dependent features of varying rarity and label-independent noises of differing strengths. Our theorems demonstrate that Cutout training can learn low-frequency features that vanilla training cannot, while CutMix training can learn even rarer features that Cutout cannot capture. From this, we establish that CutMix yields the highest test accuracy among the three. 
Our novel analysis reveals that CutMix training makes the network learn all features and noise vectors \"evenly\" regardless of the rarity and strength, which provides an interesting insight into understanding patch-level augmentation.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96348"} +{"video_file": "8puv3c9CPg_39028709.mp4", "openreview_id": "8puv3c9CPg", "slideslive_id": 39028709, "venue": "nips2024", "title": "Beyond the Doors of Perception: Vision Transformers Represent Relations Between Objects", "status": "Poster", "keywords": "visual reasoning;mechanistic interpretability;transformers;cognitive science", "tldr": "We use methods from mechanistic interpretability to investigate how Vision Transformers perform an abstract visual reasoning task.", "abstract": "Though vision transformers (ViTs) have achieved state-of-the-art performance in a variety of settings, they exhibit surprising failures when performing tasks involving visual relations. This begs the question: how do ViTs attempt to perform tasks that require computing visual relations between objects? Prior efforts to interpret ViTs tend to focus on characterizing relevant low-level visual features. In contrast, we adopt methods from mechanistic interpretability to study the higher-level visual algorithms that ViTs use to perform abstract visual reasoning. We present a case study of a fundamental, yet surprisingly difficult, relational reasoning task: judging whether two visual entities are the same or different. We find that pretrained ViTs fine-tuned on this task often exhibit two qualitatively different stages of processing despite having no obvious inductive biases to do so: 1) a perceptual stage wherein local object features are extracted and stored in a disentangled representation, and 2) a relational stage wherein object representations are compared. In the second stage, we find evidence that ViTs can learn to represent somewhat abstract visual relations, a capability that has long been considered out of reach for artificial neural networks. Finally, we demonstrate that failures at either stage can prevent a model from learning a generalizable solution to our fairly simple tasks. By understanding ViTs in terms of discrete processing stages, one can more precisely diagnose and rectify shortcomings of existing and future models.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/96346"} +{"video_file": "8qEkjSEdls_39025792.mp4", "openreview_id": "8qEkjSEdls", "slideslive_id": 39025792, "venue": "nips2024", "title": "Off-policy estimation with adaptively collected data: the power of online learning", "status": "Poster", "keywords": "Causal inference;function approximation;learning theory;off-policy estimation;online learning;reinforcement learning", "tldr": "We provide non-asymptotic guarantees of the AIPW estimator using the framework of online learning for estimation of the treatment effect. We then give a lower bound to demonstrate the instance-optimality of the proposed estimator.", "abstract": "We consider estimation of a linear functional of the treatment effect from adaptively collected data. This problem finds a variety of applications including off-policy evaluation in contextual bandits, and estimation of the average treatment effect in causal inference. 
While a certain class of augmented inverse propensity weighting (AIPW) estimators enjoys desirable asymptotic properties including the semi-parametric efficiency, much less is known about their non-asymptotic theory with adaptively collected data. To fill in the gap, we first present generic upper bounds on the mean-squared error of the class of AIPW estimators that crucially depends on a sequentially weighted error between the treatment effect and its estimates. Motivated by this, we propose a general reduction scheme that allows one to produce a sequence of estimates for the treatment effect via online learning to minimize the sequentially weighted estimation error. To illustrate this, we provide three concrete instantiations in (1) the tabular case; (2) the case of linear function approximation; and (3) the case of general function approximation for the outcome model. We then provide a local minimax lower bound to show the instance-dependent optimality of the AIPW estimator using no-regret online learning algorithms.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/96345"} +{"video_file": "8ugOlbjJpp_39028271.mp4", "openreview_id": "8ugOlbjJpp", "slideslive_id": 39028271, "venue": "nips2024", "title": "Private Algorithms for Stochastic Saddle Points and Variational Inequalities: Beyond Euclidean Geometry", "status": "Poster", "keywords": "Differential Privacy;Stochastic Saddle Point Problem;Stochastic Variational Inequality;Strong Gap;Stochastic Minimax Optimization;Algorithmic Stability", "tldr": "In this work, we conduct a systematic study of stochastic saddle point problems (SSP) and stochastic variational inequalities (SVI) under the constraint of \n(\n\u03f5\n,\n\u03b4\n)\n-differential privacy (DP) in both Euclidean and non-Euclidean setups.", "abstract": "In this work, we conduct a systematic study of stochastic saddle point problems (SSP) and stochastic variational inequalities (SVI) under the constraint of\n(\n\u03f5\n,\n\u03b4\n)\n-differential privacy (DP) in both Euclidean and non-Euclidean setups. We first consider Lipschitz convex-concave SSPs in the\n\u2113\np\n/\n\u2113\nq\nsetup,\np\n,\nq\n\u2208\n[\n1\n,\n2\n]\n. That is, we consider the case where the primal problem has an\n\u2113\np\n-setup (i.e., the primal parameter is constrained to an\n\u2113\np\nbounded domain and the loss is\n\u2113\np\n-Lipschitz with respect to the primal parameter) and the dual problem has an\n\u2113\nq\nsetup. Here, we obtain a bound of\nO\n~\n(\n1\nn\n+\nd\nn\n\u03f5\n)\non the strong SP-gap, where\nn\nis the number of samples and\nd\nis the dimension. This rate is nearly optimal for any\np\n,\nq\n\u2208\n[\n1\n,\n2\n]\n. Without additional assumptions, such as smoothness or linearity requirements, prior work under DP has only obtained this rate when\np\n=\nq\n=\n2\n(i.e., only in the Euclidean setup). Further, existing algorithms have each only been shown to work for specific settings of\np\nand\nq\nand under certain assumptions on the loss and the feasible set, whereas we provide a general algorithm for DP SSPs whenever\np\n,\nq\n\u2208\n[\n1\n,\n2\n]\n. Our result is obtained via a novel analysis of the recursive regularization algorithm. In particular, we develop new tools for analyzing generalization, which may be of independent interest. Next, we turn our attention towards SVIs with a monotone, bounded and Lipschitz operator and consider\n\u2113\np\n-setups,\np\n\u2208\n[\n1\n,\n2\n]\n. 
Here, we provide the first analysis which obtains a bound on the strong VI-gap of\nO\n~\n(\n1\nn\n+\nd\nn\n\u03f5\n)\n. For\np\n\u2212\n1\n=\n\u03a9\n(\n1\n)\n, this rate is near optimal due to existing lower bounds. To obtain this result, we develop a modified version of recursive regularization. Our analysis builds on the techniques we develop for SSPs as well as employing additional novel components which handle difficulties arising from adapting the recursive regularization framework to SVIs.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96341"} +{"video_file": "8wvH0RZPsG_39026431.mp4", "openreview_id": "8wvH0RZPsG", "slideslive_id": 39026431, "venue": "nips2024", "title": "Conformalized Multiple Testing after Data-dependent Selection", "status": "Poster", "keywords": "multiple testing;conformal p-value;conformal inference;selective inference;distribution-free", "tldr": "We propose to construct the Selective Conformal p-value, which takes into account the selection effects to guarantee the false discovery rate in the predictive setting", "abstract": "The task of distinguishing individuals of interest from a vast pool of candidates using predictive models has garnered significant attention in recent years. This task can be framed as a conformalized multiple testing procedure, which aims at quantifying prediction uncertainty by controlling the false discovery rate (FDR) via conformal inference. In this paper, we tackle the challenge of conformalized multiple testing after data-dependent selection procedures. To guarantee the construction of valid test statistics that accurately capture the distorted distribution resulting from the selection process, we leverage a holdout labeled set to closely emulate the selective distribution. Our approach involves adaptively picking labeled data to create a calibration set based on the stability of the selection rule. This strategy ensures that the calibration data and the selected test unit are exchangeable, allowing us to develop valid conformal p-values. Implementing with the famous Benjamini-Hochberg (BH) procedure, it effectively controls the FDR over the selected subset. To handle the randomness of the selected subset and the dependence among the constructed p-values, we establish a unified theoretical framework. This framework extends the application of conformalized multiple testing to complex selective settings. Furthermore, we conduct numerical studies to showcase the effectiveness and validity of our procedures across various scenarios.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/96339"} +{"video_file": "8x48XFLvyd_39028001.mp4", "openreview_id": "8x48XFLvyd", "slideslive_id": 39028001, "venue": "nips2024", "title": "Globally Convergent Variational Inference", "status": "Poster", "keywords": "forward KL divergence; neural posterior estimation; neural tangent kernel; convex optimization", "tldr": "Utilizing convexity of the forward KL divergence, we establish a global convergence result for fitting an amortized variational posterior by analyzing neural tangent kernel (NTK) dynamics in the large-width setting.", "abstract": "In variational inference (VI), an approximation of the posterior distribution is selected from a family of distributions through numerical optimization. With the most common variational objective function, known as the evidence lower bound (ELBO), only convergence to a local optimum can be guaranteed. 
In this work, we instead establish the global convergence of a particular VI method. This VI method, which may be considered an instance of neural posterior estimation (NPE), minimizes an expectation of the inclusive (forward) KL divergence to fit a variational distribution that is parameterized by a neural network. Our convergence result relies on the neural tangent kernel (NTK) to characterize the gradient dynamics that arise from considering the variational objective in function space. In the asymptotic regime of a fixed, positive-definite neural tangent kernel, we establish conditions under which the variational objective admits a unique solution in a reproducing kernel Hilbert space (RKHS). Then, we show that the gradient descent dynamics in function space converge to this unique function. In ablation studies and practical problems, we demonstrate that our results explain the behavior of NPE in non-asymptotic finite-neuron settings, and show that NPE outperforms ELBO-based optimization, which often converges to shallow local optima.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/96338"} +{"video_file": "96gXvFYWSE_39025707.mp4", "openreview_id": "96gXvFYWSE", "slideslive_id": 39025707, "venue": "nips2024", "title": "Pearls from Pebbles: Improved Confidence Functions for Auto-labeling", "status": "Poster", "keywords": "Auto-labeling;Confidence Calibration;Failure Prediction;Selective Classification", "tldr": "We introduce Colander, a novel post-hoc method addressing overconfidence in threshold-based auto-labeling (TBAL). Colander achieves up to 60% better coverage than baselines.", "abstract": "Auto-labeling is an important family of techniques that produce labeled training sets with minimum manual annotation. A prominent variant, threshold-based auto-labeling (TBAL), works by finding thresholds on a model's confidence scores above which it can accurately automatically label unlabeled data. However, many models are known to produce overconfident scores, leading to poor TBAL performance. While a natural idea is to apply off-the-shelf calibration methods to alleviate the overconfidence issue, we show that such methods fall short. Rather than experimenting with ad-hoc choices of confidence functions, we propose a framework for studying the optimal TBAL confidence function. We develop a tractable version of the framework to obtain Colander (Confidence functions for Efficient and Reliable Auto-labeling), a new post-hoc method specifically designed to maximize performance in TBAL systems. We perform an extensive empirical evaluation of Colander and compare it against methods designed for calibration. 
Colander achieves up to 60% improvement on coverage over the baselines while maintaining error level below 5% and using the same amount of labeled data.", "primary_area": "active_learning", "site": "https://neurips.cc/virtual/2024/poster/96327"} +{"video_file": "99rOAM7Jfm_39026389.mp4", "openreview_id": "99rOAM7Jfm", "slideslive_id": 39026389, "venue": "nips2024", "title": "Noise-Aware Differentially Private Regression via Meta-Learning", "status": "Poster", "keywords": "differential privacy;meta-learning;neural processes;Gaussian processes;sim-to-real;probabilistic regression", "tldr": "We train a neural network that can output a differentially private probabilistic regression model from a data set in one forward pass.", "abstract": "Many high-stakes applications require machine learning models that protect user privacy and provide well-calibrated, accurate predictions. While Differential Privacy (DP) is the gold standard for protecting user privacy, standard DP mechanisms typically significantly impair performance. One approach to mitigating this issue is pre-training models on simulated data before DP learning on the private data. In this work we go a step further, using simulated data to train a meta-learning model that combines the Convolutional Conditional Neural Process (ConvCNP) with an improved functional DP mechanism of Hall et al. (2013), yielding the DPConvCNP. DPConvCNP learns from simulated data how to map private data to a DP predictive model in one forward pass, and then provides accurate, well-calibrated predictions. We compare DPConvCNP with a DP Gaussian Process (GP) baseline with carefully tuned hyperparameters. The DPConvCNP outperforms the GP baseline, especially on non-Gaussian data, yet is much faster at test time and requires less tuning.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96324"} +{"video_file": "9B0iOkn3UP_39028199.mp4", "openreview_id": "9B0iOkn3UP", "slideslive_id": 39028199, "venue": "nips2024", "title": "Computational Aspects of Bayesian Persuasion under Approximate Best Response", "status": "Poster", "keywords": "Bayesian persuasion;computational complexity;robustness;approximate best response.", "tldr": "We give algorithms and hardness results for Bayesian persuasion under approximate best response.", "abstract": "We study Bayesian persuasion under approximate best response, where the receiver may choose any action that is not too much suboptimal, given their posterior belief upon receiving the signal. We focus on the computational aspects of the problem, aiming to design algorithms that efficiently compute (almost) optimal strategies for the sender. Despite the absence of the revelation principle --- which has been one of the most powerful tools in Bayesian persuasion --- we design polynomial-time exact algorithms for the problem when either the state space or the action space is small, as well as a quasi-polynomial-time approximation scheme (QPTAS) for the general problem. On the negative side, we show there is no polynomial-time exact algorithm for the general problem unless\nP\n=\nNP\n. 
Our results build on several new algorithmic ideas, which might be useful in other principal-agent problems where robustness is desired.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/96322"} +{"video_file": "9GhSOp1LYH_39026174.mp4", "openreview_id": "9GhSOp1LYH", "slideslive_id": 39026174, "venue": "nips2024", "title": "Leveraging Hallucinations to Reduce Manual Prompt Dependency in Promptable Segmentation", "status": "Poster", "keywords": "Camouflaged Object Detection; Transfer Learning; Test-time Domain Adaptation; Manual-free Promptable Segmentation; Unsupervised Learning", "tldr": "We propose using hallucinations as prior knowledge to extract and validate task-related information, which helps generate instance-specific prompts for reducing reliance on manual prompts in promptable segmentation.", "abstract": "Promptable segmentation typically requires instance-specific manual prompts to guide the segmentation of each desired object. To minimize such a need, task-generic promptable segmentation has been introduced, which employs a single task-generic prompt to segment various images of different objects in the same task. Current methods use Multimodal Large Language Models (MLLMs) to reason detailed instance-specific prompts from a task-generic prompt for improving segmentation accuracy. The effectiveness of this segmentation heavily depends on the precision of these derived prompts. However, MLLMs often suffer hallucinations during reasoning, resulting in inaccurate prompting. While existing methods focus on eliminating hallucinations to improve a model, we argue that MLLM hallucinations can reveal valuable contextual insights when leveraged correctly, as they represent pre-trained large-scale knowledge beyond individual images. In this paper, we first utilize hallucinations to mine task-related information from images and verify its accuracy to enhance precision of the generated prompts. Specifically, we introduce an iterative \\textbf{Pro}mpt-\\textbf{Ma}sk \\textbf{C}ycle generation framework (ProMaC) with a prompt generator and a mask generator. The prompt generator uses a multi-scale chain of thought prompting, initially leveraging hallucinations to extract extended contextual prompts on a test image. These hallucinations are then minimized to formulate precise instance-specific prompts, directing the mask generator to produce masks that are consistent with task semantics by mask semantic alignment. Iteratively the generated masks induce the prompt generator to focus more on task-relevant image areas and reduce irrelevant hallucinations, resulting jointly in better prompts and masks. Experiments on 5 benchmarks demonstrate the effectiveness of ProMaC. Code is in https://lwpyh.github.io/ProMaC/.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96318"} +{"video_file": "9JFSJitKC0_39026490.mp4", "openreview_id": "9JFSJitKC0", "slideslive_id": 39026490, "venue": "nips2024", "title": "Spectral-Risk Safe Reinforcement Learning with Convergence Guarantees", "status": "Poster", "keywords": "Safe Reinforcement Learning;Risk Constraint;Spectral Risk Measure", "tldr": "We propose a safe RL algorithm with spectral risk constraints, which shows convergence to an optimum in tabular settings.", "abstract": "The field of risk-constrained reinforcement learning (RCRL) has been developed to effectively reduce the likelihood of worst-case scenarios by explicitly handling risk-measure-based constraints. 
However, the nonlinearity of risk measures makes it challenging to achieve convergence and optimality. To overcome the difficulties posed by the nonlinearity, we propose a spectral risk measure-constrained RL algorithm, spectral-risk-constrained policy optimization (SRCPO), a bilevel optimization approach that utilizes the duality of spectral risk measures. In the bilevel optimization structure, the outer problem involves optimizing dual variables derived from the risk measures, while the inner problem involves finding an optimal policy given these dual variables. The proposed method, to the best of our knowledge, is the first to guarantee convergence to an optimum in the tabular setting. Furthermore, the proposed method has been evaluated on continuous control tasks and showed the best performance among other RCRL algorithms satisfying the constraints. Our code is available at https://github.com/rllab-snu/Spectral-Risk-Constrained-RL.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96317"} +{"video_file": "9Jmt1eER9P_39027473.mp4", "openreview_id": "9Jmt1eER9P", "slideslive_id": 39027473, "venue": "nips2024", "title": "Optimization Algorithm Design via Electric Circuits", "status": "Spotlight", "keywords": "Convex optimization;Distributed optimization;Decentralized optimization;ADMM;Alternating direction method of multipliers;PG-EXTRA;Performance estimation problem;Continuous-time analysis;first-order optimization;proximal methods", "tldr": "We design optimization algorithms using electric RLC circuits.", "abstract": "We present a novel methodology for convex optimization algorithm design using ideas from electric RLC circuits. Given an optimization problem, the first stage of the methodology is to design an appropriate electric circuit whose continuous-time dynamics converge to the solution of the optimization problem at hand. Then, the second stage is an automated, computer-assisted discretization of the continuous-time dynamics, yielding a provably convergent discrete-time algorithm. Our methodology recovers many classical (distributed) optimization algorithms and enables users to quickly design and explore a wide range of new algorithms with convergence guarantees.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96316"} +{"video_file": "9O2sVnEHor_39028114.mp4", "openreview_id": "9O2sVnEHor", "slideslive_id": 39028114, "venue": "nips2024", "title": "Weisfeiler and Leman Go Loopy: A New Hierarchy for Graph Representational Learning", "status": "Oral", "keywords": "Graph Neural Networks;Weisfeiler-Leman (WL) Test;Homomorphism Counting;Theory and Expressivity in GNNs;Cactus Graphs", "tldr": "We introduce GNNs that can count cycles and homomorphisms of cactus graphs, surpassing the limitations of existing GNNs while being scalable on real-world graphs.", "abstract": "We introduce $r$-loopy Weisfeiler-Leman ($r$-$\\ell$WL), a novel hierarchy of graph isomorphism tests and a corresponding GNN framework, $r$-$\\ell$MPNN, that can count cycles up to length $r+2$. Most notably, we show that $r$-$\\ell$WL can count homomorphisms of cactus graphs. This extends 1-WL, which can only count homomorphisms of trees and, in fact, is incomparable to $k$-WL for any fixed $k$.
We empirically validate the expressive and counting power of $r$-$\\ell$MPNN on several synthetic datasets and demonstrate the scalability and strong performance on various real-world datasets, particularly on sparse graphs.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/96314"} +{"video_file": "9OHXQybMZB_39027855.mp4", "openreview_id": "9OHXQybMZB", "slideslive_id": 39027855, "venue": "nips2024", "title": "Aligning Model Properties via Conformal Risk Control", "status": "Poster", "keywords": "Alignment;Conformal Prediction;Conformal Risk Control;Property Testing", "tldr": "Proposes a novel approach for post-training model alignment using conformal risk control, defining alignment using ideas from property testing", "abstract": "AI model alignment is crucial due to inadvertent biases in training data and the underspecified machine learning pipeline, where models with excellent test metrics may not meet end-user requirements. While post-training alignment via human feedback shows promise, these methods are often limited to generative AI settings where humans can interpret and provide feedback on model outputs. In traditional non-generative settings with numerical or categorical outputs, detecting misalignment through single-sample outputs remains challenging, and enforcing alignment during training requires repeating costly training processes. In this paper we consider an alternative strategy. We propose interpreting model alignment through property testing, defining an aligned model $f$ as one belonging to a subset $P$ of functions that exhibit specific desired behaviors. We focus on post-processing a pre-trained model $f$ to better align with $P$ using conformal risk control. Specifically, we develop a general procedure for converting queries for testing a given property $P$ to a collection of loss functions suitable for use in a conformal risk control algorithm. We prove a probabilistic guarantee that the resulting conformal interval around $f$ contains a function approximately satisfying $P$. We exhibit applications of our methodology on a collection of supervised learning datasets for (shape-constrained) properties such as monotonicity and concavity. The general procedure is flexible and can be applied to a wide range of desired properties. Finally, we prove that pre-trained models will always require alignment techniques even as model sizes or training data increase, as long as the training data contains even small biases.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/96313"} +{"video_file": "9SghPrjYU1_39025402.mp4", "openreview_id": "9SghPrjYU1", "slideslive_id": 39025402, "venue": "nips2024", "title": "Minimax Optimal and Computationally Efficient Algorithms for Distributionally Robust Offline Reinforcement Learning", "status": "Poster", "keywords": "offline reinforcement learning;distributionally robust Markov decision processes;function approximation", "tldr": "This paper studies instance dependent upper and lower bounds under the setting of offline linear DRMDPs.", "abstract": "Distributionally robust offline reinforcement learning (RL), which seeks robust policy training against environment perturbation by modeling dynamics uncertainty, calls for function approximations when facing large state-action spaces.
However, the consideration of dynamics uncertainty introduces essential nonlinearity and computational burden, posing unique challenges for analyzing and practically employing function approximation. Focusing on a basic setting where the nominal model and perturbed models are linearly parameterized, we propose minimax optimal and computationally efficient algorithms realizing function approximation and initiate the study on instance-dependent suboptimality analysis in the context of robust offline RL. Our results uncover that function approximation in robust offline RL is essentially distinct from and probably harder than that in standard offline RL. Our algorithms and theoretical results crucially depend on a novel function approximation mechanism incorporating variance information, a new procedure of suboptimality and estimation uncertainty decomposition, a quantification of the robust value function shrinkage, and a meticulously designed family of hard instances, which might be of independent interest.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96310"} +{"video_file": "9SpWvX9ykp_39027172.mp4", "openreview_id": "9SpWvX9ykp", "slideslive_id": 39027172, "venue": "nips2024", "title": "Generating Code World Models with Large Language Models Guided by Monte Carlo Tree Search", "status": "Poster", "keywords": "Large Language Models;code generation;MCTS;model-based reinforcement learning", "tldr": "We propose to model RL environments with code written by an LLM, propose a method to improve code generation for this task and show how to plan with code world models.", "abstract": "In this work we consider Code World Models, world models generated by a Large Language Model (LLM) in the form of Python code for model-based Reinforcement Learning (RL). Calling code instead of LLMs for planning has potential to be more precise, reliable, interpretable, and extremely efficient. However, writing appropriate Code World Models requires the ability to understand complex instructions, to generate exact code with non-trivial logic and to self-debug a long program with feedback from unit tests and environment trajectories. To address these challenges, we propose Generate, Improve and Fix with Monte Carlo Tree Search (GIF-MCTS), a new code generation strategy for LLMs. To test our approach in an offline RL setting, we introduce the Code World Models Benchmark (CWMB), a suite of program synthesis and planning tasks comprised of 18 diverse RL environments paired with corresponding textual descriptions and curated trajectories. 
GIF-MCTS surpasses all baselines on the CWMB and two other benchmarks, and we show that the Code World Models synthesized with it can be successfully used for planning, resulting in model-based RL agents with greatly improved sample efficiency and inference speed.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96309"} +{"video_file": "9U0nLnNMJ7_39027612.mp4", "openreview_id": "9U0nLnNMJ7", "slideslive_id": 39027612, "venue": "nips2024", "title": "Compact Language Models via Pruning and Knowledge Distillation", "status": "Poster", "keywords": "llm;pruning;distillation;compression", "tldr": "A novel structured pruning and retraining approach for LLMs that produces strong models using a small fraction of original training data", "abstract": "Large language models (LLMs) targeting different deployment scales and sizes are currently produced by training each variant from scratch; this is extremely compute-intensive. In this paper, we investigate if pruning an existing LLM and then re-training it with a fraction <3% of the original training data can be a suitable alternative to repeated, full retraining. To this end, we develop a set of practical and effective compression best practices for LLMs that combine depth, width, attention and MLP pruning with knowledge distillation-based retraining; we arrive at these best practices through a detailed empirical exploration of pruning strategies for each axis, methods to combine axes, distillation strategies, and search techniques for arriving at optimal compressed architectures. We use this guide to compress the Nemotron-4 family of LLMs by a factor of 2-4x, and compare their performance to similarly-sized models on a variety of language modeling tasks. On these tasks, we perform better than Nemotron-3 8B and LLaMa2 7B using up to 40x fewer training tokens}, on par with Mistral 7B and Gemma 7B using up to 85x fewer tokens and slightly worse than LLaMa3 8B using up to 159x fewer tokens. Our models also compare favorably to state-of-the-art compression techniques from the literature.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96308"} +{"video_file": "9VbGjXLzig_39024898.mp4", "openreview_id": "9VbGjXLzig", "slideslive_id": 39024898, "venue": "nips2024", "title": "No \"Zero-Shot\" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance", "status": "Poster", "keywords": "Multimodal Datasets;Long-tailed Concept Distribution;CLIP Models;Diffusion Models;Data-Centric ML", "tldr": "We show that multimodal models require exponentially more data on a concept to linearly improve their performance on tasks pertaining to that concept, highlighting extreme sample inefficiency.", "abstract": "Web-crawled pretraining datasets underlie the impressive \"zero-shot\" evaluation performance of multimodal models, such as CLIP for classification and Stable-Diffusion for image generation. However, it is unclear how meaningful the notion of \"zero-shot\" generalization is for such multimodal models, as it is not known to what extent their pretraining datasets encompass the downstream concepts targeted for during \"zero-shot\" evaluation. 
In this work, we ask: How is the performance of multimodal models on downstream concepts influenced by the frequency of these concepts in their pretraining datasets?\nWe comprehensively investigate this question across 34 models and 5 standard pretraining datasets (CC-3M, CC-12M, YFCC-15M, LAION-400M, LAION-Aesthetics), generating over 300GB of data artifacts. We consistently find that, far from exhibiting \"zero-shot\" generalization, multimodal models require exponentially more data to achieve linear improvements in downstream \"zero-shot\" performance, following a sample inefficient log-linear scaling trend. This trend persists even when controlling for sample-level similarity between pretraining and downstream datasets, and testing on purely synthetic data distributions. Furthermore, upon benchmarking models on long-tailed data sampled based on our analysis, we demonstrate that multimodal models across the board perform poorly. We contribute this long-tail test set as the Let it Wag! benchmark to further research in this direction. Taken together, our study reveals an exponential need for training data which implies that the key to \"zero-shot\" generalization capabilities under large-scale training data and compute paradigms remains to be found.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96307"} +{"video_file": "9XDYEEBRV6_39027905.mp4", "openreview_id": "9XDYEEBRV6", "slideslive_id": 39027905, "venue": "nips2024", "title": "Coded Computing for Resilient Distributed Computing: A Learning-Theoretic Framework", "status": "Poster", "keywords": "Coded Computing;Distributed Computing;Non-Parametric Regression;Smoothing Spline;Kernel Ridge Regression", "tldr": "We propose a new framework for coded computing, based on learning theory, to handle general computation and we provide theoretical justification for its benefits as well as experimental evaluations. It strictly outperforms the state of the art.", "abstract": "Coded computing has emerged as a promising framework for tackling significant challenges in large-scale distributed computing, including the presence of slow, faulty, or compromised servers. In this approach, each worker node processes a combination of the data, rather than the raw data itself. The final result then is decoded from the collective outputs of the worker nodes. However, there is a significant gap between current coded computing approaches and the broader landscape of general distributed computing, particularly when it comes to machine learning workloads. To bridge this gap, we propose a novel foundation for coded computing, integrating the principles of learning theory, and developing a framework that seamlessly adapts with machine learning applications. In this framework, the objective is to find the encoder and decoder functions that minimize the loss function, defined as the mean squared error between the estimated and true values. Facilitating the search for the optimum decoding and functions, we show that the loss function can be upper-bounded by the summation of two terms: the generalization error of the decoding function and the training error of the encoding function. Focusing on the second-order Sobolev space, we then derive the optimal encoder and decoder. 
We show that in the proposed solution, the mean squared error of the estimation decays with the rate of O(S^3 N^{-3}) and O(S^{8/5} N^{-3/5}) in noiseless and noisy computation settings, respectively, where N is the number of worker nodes with at most S slow servers (stragglers). Finally, we evaluate the proposed scheme on inference tasks for various machine learning models and demonstrate that the proposed framework outperforms the state-of-the-art in terms of accuracy and rate of convergence.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/96305"}
+{"video_file": "9Y8zUO11EQ_39024460.mp4", "openreview_id": "9Y8zUO11EQ", "slideslive_id": 39024460, "venue": "nips2024", "title": "SWT-Bench: Testing and Validating Real-World Bug-Fixes with Code Agents", "status": "Poster", "keywords": "language model;test generation;code agent", "tldr": "LLM based Code Agents are suitable for generating software tests in large and complex code bases, introducing a new paradigm for test generation and additional metrics for Code Agent performance.", "abstract": "Rigorous software testing is crucial for developing and maintaining high-quality code, making automated test generation a promising avenue for both improving software quality and boosting the effectiveness of code generation methods. However, while code generation with Large Language Models (LLMs) is an extraordinarily active research area, test generation remains relatively unexplored. We address this gap and investigate the capability of LLM-based Code Agents to formalize user issues into test cases. To this end, we propose a novel benchmark based on popular GitHub repositories, containing real-world issues, ground-truth bug-fixes, and golden tests. We find that LLMs generally perform surprisingly well at generating relevant test cases, with Code Agents designed for code repair exceeding the performance of systems designed specifically for test generation. Further, as test generation is a similar but more structured task than code generation, it allows for a more fine-grained analysis using issue reproduction rate and coverage changes, providing a dual metric for analyzing systems designed for code repair. Finally, we find that generated tests are an effective filter for proposed code fixes, doubling the precision of SWE-Agent. We release all data and code at https://github.com/logic-star-ai/SWT-Bench.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96304"}
+{"video_file": "9f5tOXKoMC_39027809.mp4", "openreview_id": "9f5tOXKoMC", "slideslive_id": 39027809, "venue": "nips2024", "title": "A Bayesian Approach to Data Point Selection", "status": "Poster", "keywords": "Bayesian;LLM;Data selection;Data unbalancing;Data denoising;Domain adaptation", "tldr": "BADS (Bayesian Data Point Selection)", "abstract": "Data point selection (DPS) is becoming a critical topic in deep learning due to the ease of acquiring uncurated training data compared to the difficulty of obtaining curated or processed data. Existing approaches to DPS are predominantly based on a bi-level optimisation (BLO) formulation, which is demanding in terms of memory and computation, and exhibits some theoretical defects regarding minibatches. Thus, we propose a novel Bayesian approach to DPS. 
We view the DPS problem as posterior inference in a novel Bayesian model where the posterior distributions of the instance-wise weights and the main neural network parameters are inferred under a reasonable prior and likelihood model. We employ stochastic gradient Langevin MCMC sampling to learn the main network and instance-wise weights jointly, ensuring convergence even with minibatches. Our update equation is comparable to the widely used SGD and much more efficient than existing BLO-based methods. Through controlled experiments in both the vision and language domains, we present the proof-of-concept. Additionally, we demonstrate that our method scales effectively to large language models and facilitates automated per-task optimization for instruction fine-tuning datasets.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/96300"} +{"video_file": "9sP4oejtjB_39028582.mp4", "openreview_id": "9sP4oejtjB", "slideslive_id": 39028582, "venue": "nips2024", "title": "Disentangling the Roles of Distinct Cell Classes with Cell-Type Dynamical Systems", "status": "Spotlight", "keywords": "neuroscience;neural dynamics;animal decision making", "tldr": "We introduce the cell-type dynamical systems (CTDS) model, which extends latent linear dynamical systems to incorporate cell-type information when modeling neural activity.", "abstract": "Latent dynamical systems have been widely used to characterize the dynamics of neural population activity in the brain. However, these models typically ignore the fact that the brain contains multiple cell types. This limits their ability to capture the functional roles of distinct cell classes, and to predict the effects of cell-specific perturbations on neural activity or behavior. To overcome these limitations, we introduce the `\"cell-type dynamical systems\" (CTDS) model. This model extends latent linear dynamical systems to contain distinct latent variables for each cell class, with biologically inspired constraints on both dynamics and emissions. To illustrate our approach, we consider neural recordings with distinct excitatory (E) and inhibitory (I) populations.\nThe CTDS model defines separate latents for both cell types, and constrains the dynamics so that E (I) latents have a strictly positive (negative) effects on other latents. We applied CTDS to recordings from rat frontal orienting fields (FOF) and anterior dorsal striatum (ADS) during an auditory decision-making task. The model achieved higher accuracy than a standard linear dynamical system (LDS), and revealed that the animal's choice can be decoded from both E and I latents and thus is not restricted to a single cell-class. We also performed in-silico optogenetic perturbation experiments in the FOF and ADS, and found that CTDS was able to replicate the experimentally observed effects of different perturbations on behavior, whereas a standard LDS model---which does not differentiate between cell types---did not. Crucially, our model allowed us to understand the effects of these perturbations by revealing the dynamics of different cell-specific latents. Finally, CTDS can also be used to identify cell types for neurons whose class labels are unknown in electrophysiological recordings. 
These results illustrate the power of the CTDS model to provide more accurate and more biologically interpretable descriptions of neural population dynamics and their relationship to behavior.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/96294"} +{"video_file": "9uolDxbYLm_39025646.mp4", "openreview_id": "9uolDxbYLm", "slideslive_id": 39025646, "venue": "nips2024", "title": "Model Reconstruction Using Counterfactual Explanations: A Perspective From Polytope Theory", "status": "Poster", "keywords": "model extraction;counterfactual explanations;decision boundary shift;query complexity", "tldr": "We propose novel performance guarantees and strategies for leveraging counterfactual explanations in model reconstruction.", "abstract": "Counterfactual explanations provide ways of achieving a favorable model outcome with minimum input perturbation. However, counterfactual explanations can also be leveraged to reconstruct the model by strategically training a surrogate model to give similar predictions as the original (target) model. In this work, we analyze how model reconstruction using counterfactuals can be improved by further leveraging the fact that the counterfactuals also lie quite close to the decision boundary. Our main contribution is to derive novel theoretical relationships between the error in model reconstruction and the number of counterfactual queries required using polytope theory. Our theoretical analysis leads us to propose a strategy for model reconstruction that we call Counterfactual Clamping Attack (CCA) which trains a surrogate model using a unique loss function that treats counterfactuals differently than ordinary instances. Our approach also alleviates the related problem of decision boundary shift that arises in existing model reconstruction approaches when counterfactuals are treated as ordinary instances. Experimental results demonstrate that our strategy improves fidelity between the target and surrogate model predictions on several datasets.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/96291"} +{"video_file": "9utMGIbHBt_39024429.mp4", "openreview_id": "9utMGIbHBt", "slideslive_id": 39024429, "venue": "nips2024", "title": "UDPM: Upsampling Diffusion Probabilistic Models", "status": "Poster", "keywords": "diffusion models;generative models", "tldr": "We propose an efficient diffusion model that is based on adding noise + upsampling and a novel training strategy, which leads to state-of-the-art generation results on CIFAR-10 and other datasets", "abstract": "Denoising Diffusion Probabilistic Models (DDPM) have recently gained significant attention. DDPMs compose a Markovian process that begins in the data domain and gradually adds noise until reaching pure white noise. DDPMs generate high-quality samples from complex data distributions by defining an inverse process and training a deep neural network to learn this mapping. However, these models are inefficient because they require many diffusion steps to produce aesthetically pleasing samples. Additionally, unlike generative adversarial networks (GANs), the latent space of diffusion models is less interpretable. In this work, we propose to generalize the denoising diffusion process into an Upsampling Diffusion Probabilistic Model (UDPM). In the forward process, we reduce the latent variable dimension through downsampling, followed by the traditional noise perturbation. 
As a result, the reverse process gradually denoises and upsamples the latent variable to produce a sample from the data distribution. We formalize the Markovian diffusion processes of UDPM and demonstrate its generation capabilities on the popular FFHQ, AFHQv2, and CIFAR10 datasets. UDPM generates images with as few as three network evaluations, whose overall computational cost is less than a single DDPM or EDM step while achieving an FID score of 6.86. This surpasses current state-of-the-art efficient diffusion models that use a single denoising step for sampling. Additionally, UDPM offers an interpretable and interpolable latent space, which gives it an advantage over traditional DDPMs. Our code is available online: \\url{https://github.com/shadyabh/UDPM/}", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96290"} +{"video_file": "9vcqleAHPl_39026746.mp4", "openreview_id": "9vcqleAHPl", "slideslive_id": 39026746, "venue": "nips2024", "title": "FAST: A Dual-tier Few-Shot Learning Paradigm for Whole Slide Image Classification", "status": "Poster", "keywords": "Whole Slide Image Classification;Few-shot Learning;Vision-Language Model Adaption;Multimodal Large Model", "tldr": "We propose a novel and efficient dual-tier few-shot learning paradigm for WSI classification. This paradigm can effectively reduce the fine-grained annotation cost of WSI while fully utilizing limited WSI data.", "abstract": "The expensive fine-grained annotation and data scarcity have become the primary obstacles for the widespread adoption of deep learning-based Whole Slide Images (WSI) classification algorithms in clinical practice. Unlike few-shot learning methods in natural images that can leverage the labels of each image, existing few-shot WSI classification methods only utilize a small number of fine-grained labels or weakly supervised slide labels for training in order to avoid expensive fine-grained annotation. They lack sufficient mining of available WSIs, severely limiting WSI classification performance. To address the above issues, we propose a novel and efficient dual-tier few-shot learning paradigm for WSI classification, named FAST. FAST consists of a dual-level annotation strategy and a dual-branch classification framework. Firstly, to avoid expensive fine-grained annotation, we collect a very small number of WSIs at the slide level, and annotate an extremely small number of patches. Then, to fully mining the available WSIs, we use all the patches and available patch labels to build a cache branch, which utilizes the labeled patches to learn the labels of unlabeled patches and through knowledge retrieval for patch classification. In addition to the cache branch, we also construct a prior branch that includes learnable prompt vectors, using the text encoder of visual-language models for patch classification. Finally, we integrate the results from both branches to achieve WSI classification. Extensive experiments on binary and multi-class datasets demonstrate that our proposed method significantly surpasses existing few-shot classification methods and approaches the accuracy of fully supervised methods with only 0.22% annotation costs. 
All codes and models will be publicly available on https://github.com/fukexue/FAST.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96289"} +{"video_file": "9zQl27mqWE_39027409.mp4", "openreview_id": "9zQl27mqWE", "slideslive_id": 39027409, "venue": "nips2024", "title": "Mixed Dynamics In Linear Networks: Unifying the Lazy and Active Regimes", "status": "Poster", "keywords": "Linear Networks;Lazy Regime;Active Regime;Training Dynamics;Phase Diagram", "tldr": "We derive a simple formula for training dynamics of linear networks that not only unifies the lazy and balanced dynamics, but reveals the existence of mixed dynamics.", "abstract": "The training dynamics of linear networks are well studied in two distinct setups: the lazy regime and balanced/active regime, depending on the initialization and width of the network. We provide a surprisingly simple unifying formula for the evolution of the learned matrix that contains as special cases both lazy and balanced regimes but also a mixed regime in between the two. In the mixed regime, a part of the network is lazy while the other is balanced. More precisely the network is lazy along singular values that are below a certain threshold and balanced along those that are above the same threshold. At initialization, all singular values are lazy, allowing for the network to align itself with the task, so that later in time, when some of the singular value cross the threshold and become active they will converge rapidly (convergence in the balanced regime is notoriously difficult in the absence of alignment). The mixed regime is the `best of both worlds': it converges from any random initialization (in contrast to balanced dynamics which require special initialization), and has a low rank bias (absent in the lazy dynamics). This allows us to prove an almost complete phase diagram of training behavior as a function of the variance at initialization and the width, for a MSE training task.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96286"} +{"video_file": "A3hxp0EeNW_39025390.mp4", "openreview_id": "A3hxp0EeNW", "slideslive_id": 39025390, "venue": "nips2024", "title": "Generative Modelling of Structurally Constrained Graphs", "status": "Poster", "keywords": "Graph Generative Models;Constrained Diffusion", "tldr": "We propose a constrained discrete diffusion framework for generating graphs that adhere to specific structural properties. This framework achieves state-of-the-art performance on both synthetic and digital pathology datasets.", "abstract": "Graph diffusion models have emerged as state-of-the-art techniques in graph generation; yet, integrating domain knowledge into these models remains challenging. Domain knowledge is particularly important in real-world scenarios, where invalid generated graphs hinder deployment in practical applications. Unconstrained and conditioned graph diffusion models fail to guarantee such domain-specific structural properties. We present ConStruct, a novel framework that enables graph diffusion models to incorporate hard constraints on specific properties, such as planarity or acyclicity. Our approach ensures that the sampled graphs remain within the domain of graphs that satisfy the specified property throughout the entire trajectory in both the forward and reverse processes. This is achieved by introducing an edge-absorbing noise model and a new projector operator. 
ConStruct demonstrates versatility across several structural and edge-deletion invariant constraints and achieves state-of-the-art performance for both synthetic benchmarks and attributed real-world datasets. For example, by incorporating planarity constraints in digital pathology graph datasets, the proposed method outperforms existing baselines, improving data validity by up to 71.1 percentage points.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96282"} +{"video_file": "A969ouPqEs_39027758.mp4", "openreview_id": "A969ouPqEs", "slideslive_id": 39027758, "venue": "nips2024", "title": "DiffLight: A Partial Rewards Conditioned Diffusion Model for Traffic Signal Control with Missing Data", "status": "Spotlight", "keywords": "Traffic signal control;Reinforcement learning;Diffusion model;Spatial-temporal data", "tldr": "We propose DiffLight, a conditional diffusion framework, unifying traffic data imputation and decision-making for TSC with missing data.", "abstract": "The application of reinforcement learning in traffic signal control (TSC) has been extensively researched and yielded notable achievements. However, most existing works for TSC assume that traffic data from all surrounding intersections is fully and continuously available through sensors. In real-world applications, this assumption often fails due to sensor malfunctions or data loss, making TSC with missing data a critical challenge. To meet the needs of practical applications, we introduce DiffLight, a novel conditional diffusion model for TSC under data-missing scenarios in the offline setting. Specifically, we integrate two essential sub-tasks, i.e., traffic data imputation and decision-making, by leveraging a Partial Rewards Conditioned Diffusion (PRCD) model to prevent missing rewards from interfering with the learning process. Meanwhile, to effectively capture the spatial-temporal dependencies among intersections, we design a Spatial-Temporal transFormer (STFormer) architecture. In addition, we propose a Diffusion Communication Mechanism (DCM) to promote better communication and control performance under data-missing scenarios. Extensive experiments on five datasets with various data-missing scenarios demonstrate that DiffLight is an effective controller to address TSC with missing data. The code of DiffLight is released at https://github.com/lokol5579/DiffLight-release.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/96278"} +{"video_file": "AB6XpMzvqH_39028828.mp4", "openreview_id": "AB6XpMzvqH", "slideslive_id": 39028828, "venue": "nips2024", "title": "Many-Shot In-Context Learning", "status": "Spotlight", "keywords": "large language models;in-context learning;long-context models", "tldr": "We investigate the many-shot in-context learning regime -- prompting large language models with hundreds or thousands of examples -- for a wide range of tasks.", "abstract": "Large language models (LLMs) excel at few-shot in-context learning (ICL) -- learning from a few examples provided in context at inference, without any weight updates. Newly expanded context windows allow us to investigate ICL with hundreds or thousands of examples \u2013 the many-shot regime. Going from few-shot to many-shot, we observe significant performance gains across a wide variety of generative and discriminative tasks. While promising, many-shot ICL can be bottlenecked by the available amount of human-generated outputs. 
To mitigate this limitation, we explore two new settings: (1) \"Reinforced ICL\" that uses model-generated chain-of-thought rationales in place of human rationales, and (2) \"Unsupervised ICL\" where we remove rationales from the prompt altogether, and prompts the model only with domain-specific inputs. We find that both Reinforced and Unsupervised ICL can be quite effective in the many-shot regime, particularly on complex reasoning tasks. We demonstrate that, unlike few-shot learning, many-shot learning is effective at overriding pretraining biases, can learn high-dimensional functions with numerical inputs, and performs comparably to supervised fine-tuning. Finally, we reveal the limitations of next-token prediction loss as an indicator of downstream ICL performance.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96277"} +{"video_file": "ACCqGLviig_39026827.mp4", "openreview_id": "ACCqGLviig", "slideslive_id": 39026827, "venue": "nips2024", "title": "Vector Quantization Prompting for Continual Learning", "status": "Poster", "keywords": "continual learning;incremental learning;life-long learning;image classification;deep learning", "tldr": "This study focuses on one critical deficiency inherent in current prompt-based continual learning methods, i.e., the end-to-end optimization of prompt selection with task loss while keeping its discrete nature as task knowledge representation.", "abstract": "Continual learning requires to overcome catastrophic forgetting when training a single model on a sequence of tasks. Recent top-performing approaches are prompt-based methods that utilize a set of learnable parameters (i.e., prompts) to encode task knowledge, from which appropriate ones are selected to guide the fixed pre-trained model in generating features tailored to a certain task. However, existing methods rely on predicting prompt identities for prompt selection, where the identity prediction process cannot be optimized with task loss. This limitation leads to sub-optimal prompt selection and inadequate adaptation of pre-trained features for a specific task. Previous efforts have tried to address this by directly generating prompts from input queries instead of selecting from a set of candidates. However, these prompts are continuous, which lack sufficient abstraction for task knowledge representation, making them less effective for continual learning. To address these challenges, we propose VQ-Prompt, a prompt-based continual learning method that incorporates Vector Quantization (VQ) into end-to-end training of a set of discrete prompts. In this way, VQ-Prompt can optimize the prompt selection process with task loss and meanwhile achieve effective abstraction of task knowledge for continual learning. 
Extensive experiments show that VQ-Prompt outperforms state-of-the-art continual learning methods across a variety of benchmarks under the challenging class-incremental setting.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96275"} +{"video_file": "ACIDDnTbSJ_39025491.mp4", "openreview_id": "ACIDDnTbSJ", "slideslive_id": 39025491, "venue": "nips2024", "title": "Feint Behaviors and Strategies: Formalization, Implementation and Evaluation", "status": "Poster", "keywords": "Feint Behaviors;Multi-Player Games;Multi-Agent Reinforcement Learning", "tldr": "The first comprehensive formalization of Feint behaviors at action-level and strategy-level, and provide concrete implementation and quantitative evaluation in Multi-Player games.", "abstract": "Feint behaviors refer to a set of deceptive behaviors in a nuanced manner, which enable players to obtain temporal and spatial advantages over opponents in competitive games. Such behaviors are crucial tactics in most competitive multi-player games (e.g., boxing, fencing, basketball, motor racing, etc.). However, existing literature does not provide a comprehensive (and/or concrete) formalization for Feint behaviors, and their implications on game strategies. In this work, we introduce the first comprehensive formalization of Feint behaviors at both action-level and strategy-level, and provide concrete implementation and quantitative evaluation of them in multi-player games. The key idea of our work is to (1) allow automatic generation of Feint behaviors via Palindrome-directed templates, combine them into meaningful behavior sequences via a Dual-Behavior Model; (2) concertize the implications from our formalization of Feint on game strategies, in terms of temporal, spatial, and their collective impacts respectively; and (3) provide a unified implementation scheme of Feint behaviors in existing MARL frameworks. The experimental results show that our design of Feint behaviors can (1) greatly improve the game reward gains; (2) significantly improve the diversity of Multi-Player Games; and (3) only incur negligible overheads in terms of time consumption.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96274"} +{"video_file": "ADJASE9uQ2_39024396.mp4", "openreview_id": "ADJASE9uQ2", "slideslive_id": 39024396, "venue": "nips2024", "title": "2DQuant: Low-bit Post-Training Quantization for Image Super-Resolution", "status": "Poster", "keywords": "Quantization;Image super resolution;Low bit;Post-training quantization.", "tldr": "A dual-stage low-bit post-training quantization algorithm for image super-resolution", "abstract": "Low-bit quantization has become widespread for compressing image super-resolution (SR) models for edge deployment, which allows advanced SR models to enjoy compact low-bit parameters and efficient integer/bitwise constructions for storage compression and inference acceleration, respectively. However, it is notorious that low-bit quantization degrades the accuracy of SR models compared to their full-precision (FP) counterparts. Despite several efforts to alleviate the degradation, the transformer-based SR model still suffers severe degradation due to its distinctive activation distribution. In this work, we present a dual-stage low-bit post-training quantization (PTQ) method for image super-resolution, namely 2DQuant, which achieves efficient and accurate SR under low-bit quantization. 
The proposed method first investigates the weight and activation and finds that the distribution is characterized by coexisting symmetry and asymmetry, long tails. Specifically, we propose Distribution-Oriented Bound Initialization (DOBI), using different searching strategies to search a coarse bound for quantizers. To obtain refined quantizer parameters, we further propose Distillation Quantization Calibration (DQC), which employs a distillation approach to make the quantized model learn from its FP counterpart. Through extensive experiments on different bits and scaling factors, the performance of DOBI can reach the state-of-the-art (SOTA) while after stage two, our method surpasses existing PTQ in both metrics and visual effects. 2DQuant gains an increase in PSNR as high as 4.52dB on Set5 (x2) compared with SOTA when quantized to 2-bit and enjoys a 3.60x compression ratio and 5.08x speedup ratio. The code and models are available at https://github.com/Kai-Liu001/2DQuant.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/96273"}
+{"video_file": "AFnSMlye5K_39025078.mp4", "openreview_id": "AFnSMlye5K", "slideslive_id": 39025078, "venue": "nips2024", "title": "Disentangling Interpretable Factors with Supervised Independent Subspace Principal Component Analysis", "status": "Poster", "keywords": "Principal Component Analysis (PCA);Disentanglement Representation Learning;Computational Biology", "tldr": "A multi-subspace extension of PCA to disentangle interpretable subspaces of variations with supervision.", "abstract": "The success of machine learning models relies heavily on effectively representing high-dimensional data. However, ensuring data representations capture human-understandable concepts remains difficult, often requiring the incorporation of prior knowledge and decomposition of data into multiple subspaces. Traditional linear methods fall short in modeling more than one space, while more expressive deep learning approaches lack interpretability. Here, we introduce Supervised Independent Subspace Principal Component Analysis (sisPCA), a PCA extension designed for multi-subspace learning. Leveraging the Hilbert-Schmidt Independence Criterion (HSIC), sisPCA incorporates supervision and simultaneously ensures subspace disentanglement. We demonstrate sisPCA's connections with autoencoders and regularized linear regression and showcase its ability to identify and separate hidden data structures through extensive applications, including breast cancer diagnosis from image features, learning aging-associated DNA methylation changes, and single-cell analysis of malaria infection. 
Our results reveal distinct functional pathways associated with malaria colonization, underscoring the essentiality of explainable representation in high-dimensional data analysis.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/96270"} +{"video_file": "AH1mFs3c7o_39026446.mp4", "openreview_id": "AH1mFs3c7o", "slideslive_id": 39026446, "venue": "nips2024", "title": "InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint", "status": "Poster", "keywords": "human motion generation;human interaction generation;diffusion model;controllable generation", "tldr": "the first method to generate human interactions of arbitrary number of humans in a zero-shot manner with only single-person training data", "abstract": "Text-conditioned motion synthesis has made remarkable progress with the emergence of diffusion models. However, the majority of these motion diffusion models are primarily designed for a single character and overlook multi-human interactions. In our approach, we strive to explore this problem by synthesizing human motion with interactions for a group of characters of any size in a zero-shot manner. The key aspect of our approach is the adaptation of human-wise interactions as pairs of human joints that can be either in contact or separated by a desired distance. In contrast to existing methods that necessitate training motion generation models on multi-human motion datasets with a fixed number of characters, our approach inherently possesses the flexibility to model human interactions involving an arbitrary number of individuals, thereby transcending the limitations imposed by the training data. We introduce a novel controllable motion generation method, InterControl, to encourage the synthesized motions maintaining the desired distance between joint pairs. It consists of a motion controller and an inverse kinematics guidance module that realistically and accurately aligns the joints of synthesized characters to the desired location. Furthermore, we demonstrate that the distance between joint pairs for human-wise interactions can be generated using an off-the-shelf Large Language Model (LLM). Experimental results highlight the capability of our framework to generate interactions with multiple human characters and its potential to work with off-the-shelf physics-based character simulators. Code is available at https://github.com/zhenzhiwang/intercontrol.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96269"} +{"video_file": "AH5KwUSsln_39026739.mp4", "openreview_id": "AH5KwUSsln", "slideslive_id": 39026739, "venue": "nips2024", "title": "Credal Learning Theory", "status": "Poster", "keywords": "Statistical learning;imprecise probabilities;credal sets;epistemic and aleatory uncertainties", "tldr": "We develop Credal Learning Theory, which allows to derive tighter bounds wrt SLT by leveraging a finite sample of training sets", "abstract": "Statistical learning theory is the foundation of machine learning, providing theoretical bounds for the risk of models learned from a (single) training set, assumed to issue from an unknown probability distribution. In actual deployment, however, the data distribution may (and often does) vary, causing domain adaptation/generalization issues. In this paper we lay the foundations for a `credal' theory of learning, using convex sets of probabilities (credal sets) to model the variability in the data-generating distribution. 
Such credal sets, we argue, may be inferred from a finite sample of training sets. Bounds are derived for the case of finite hypotheses spaces (both assuming realizability or not), as well as infinite model spaces, which directly generalize classical results.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96268"}
+{"video_file": "AKBTFQhCjm_39026181.mp4", "openreview_id": "AKBTFQhCjm", "slideslive_id": 39026181, "venue": "nips2024", "title": "DEFT: Efficient Fine-tuning of Diffusion Models by Learning the Generalised $h$-transform", "status": "Poster", "keywords": "Inverse problems;Generative Modelling;Diffusion Models;Conditional Generative Modelling;Diffusion model guidance", "tldr": "Novel and highly efficient fine-tuning algorithm for diffusion models motivated by theoretical formulations of conditional sampling in SDEs", "abstract": "Generative modelling paradigms based on denoising diffusion processes have emerged as a leading candidate for conditional sampling in inverse problems. In many real-world applications, we often have access to large, expensively trained unconditional diffusion models, which we aim to exploit for improving conditional sampling. Most recent approaches are motivated heuristically and lack a unifying framework, obscuring connections between them. Further, they often suffer from issues such as being very sensitive to hyperparameters, being expensive to train or needing access to weights hidden behind a closed API. In this work, we unify conditional training and sampling using the mathematically well-understood Doob's h-transform. This new perspective allows us to unify many existing methods under a common umbrella. Under this framework, we propose DEFT (Doob's h-transform Efficient FineTuning), a new approach for conditional generation that simply fine-tunes a very small network to quickly learn the conditional h-transform, while keeping the larger unconditional network unchanged. DEFT is much faster than existing baselines while achieving state-of-the-art performance across a variety of linear and non-linear benchmarks. On image reconstruction tasks, we achieve speedups of up to 1.6\u00d7, while having the best perceptual quality on natural images and reconstruction performance on medical images. Further, we also provide initial experiments on protein motif scaffolding and outperform reconstruction guidance methods.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96267"}
+{"video_file": "ALISPmDPCq_39027605.mp4", "openreview_id": "ALISPmDPCq", "slideslive_id": 39027605, "venue": "nips2024", "title": "ConStat: Performance-Based Contamination Detection in Large Language Models", "status": "Poster", "keywords": "large language models;model evaluation;contamination detection", "tldr": "We present a statistical method that aims to detect contamination in language models as artificially inflated and non-generalizing benchmark performance.", "abstract": "Public benchmarks play an essential role in the evaluation of large language models. However, data contamination can lead to inflated performance, rendering them unreliable for model comparison. It is therefore crucial to detect contamination and estimate its impact on measured performance. Unfortunately, existing detection methods can be easily evaded and fail to quantify contamination. 
To overcome these limitations, we propose a novel definition of contamination as artificially inflated and non-generalizing benchmark performance instead of the inclusion of benchmark samples in the training data. This perspective enables us to detect any model with inflated performance, i.e., performance that does not generalize to rephrased samples, synthetic samples from the same distribution, or different benchmarks for the same task. Based on this insight, we develop ConStat, a statistical method that reliably detects and quantifies contamination by comparing performance between a primary and reference benchmark relative to a set of reference models. We demonstrate the effectiveness of ConStat in an extensive evaluation of diverse model architectures, benchmarks, and contamination scenarios and find high levels of contamination in multiple popular models including Mistral, Llama, Yi, and the top-3 Open LLM Leaderboard models.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96266"} +{"video_file": "ALU676zGFE_39028453.mp4", "openreview_id": "ALU676zGFE", "slideslive_id": 39028453, "venue": "nips2024", "title": "MTGS: A Novel Framework for Multi-Person Temporal Gaze Following and Social Gaze Prediction", "status": "Poster", "keywords": "gaze following;social gaze prediction;multi-task learning", "tldr": "We propose a new architecture and dataset for jointly modelling gaze following and social gaze prediction", "abstract": "Gaze following and social gaze prediction are fundamental tasks providing insights into human communication behaviors, intent, and social interactions. Most previous approaches addressed these tasks separately, either by designing highly specialized social gaze models that do not generalize to other social gaze tasks or by considering social gaze inference as an ad-hoc post-processing of the gaze following task. Furthermore, the vast majority of gaze following approaches have proposed models that can handle only one person at a time and are static, therefore failing to take advantage of social interactions and temporal dynamics. In this paper, we address these limitations and introduce a novel framework to jointly predict the gaze target and social gaze label for all people in the scene. It comprises (i) a temporal, transformer-based architecture that, in addition to frame tokens, handles person-specific tokens capturing the gaze information related to each individual; (ii) a new dataset, VSGaze, built from multiple gaze following and social gaze datasets by extending and validating head detections and tracks, and unifying annotation types. We demonstrate that our model can address and benefit from training on all tasks jointly, achieving state-of-the-art results for multi-person gaze following and social gaze prediction. 
Our annotations and code will be made publicly available.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96265"} +{"video_file": "ARAxPPIAhq_39027155.mp4", "openreview_id": "ARAxPPIAhq", "slideslive_id": 39027155, "venue": "nips2024", "title": "xLSTM: Extended Long Short-Term Memory", "status": "Spotlight", "keywords": "LSTM;LLM;Language modeling;NLP;Memory", "tldr": "We extend the LSTM architecture with exponential gating and new memory structures and show that this new xLSTM performs favorably on large-scale language modeling tasks.", "abstract": "In the 1990s, the constant error carousel and gating were introduced as the central ideas of the Long Short-Term Memory (LSTM). Since then, LSTMs have stood the test of time and contributed to numerous deep learning success stories, in particular they constituted the first Large Language Models (LLMs). However, the advent of the Transformer technology with parallelizable self-attention at its core marked the dawn of a new era, outpacing LSTMs at scale. We now raise a simple question: How far do we get in language modeling when scaling LSTMs to billions of parameters, leveraging the latest techniques from modern LLMs, but mitigating known limitations of LSTMs? Firstly, we introduce exponential gating with appropriate normalization and stabilization techniques. Secondly, we modify the LSTM memory structure, obtaining: (i) sLSTM with a scalar memory, a scalar update, and new memory mixing, (ii) mLSTM that is fully parallelizable with a matrix memory and a covariance update rule. Integrating these LSTM extensions into residual block backbones yields xLSTM blocks that are then residually stacked into xLSTM architectures. Exponential gating and modified memory structures boost xLSTM capabilities to perform favorably when compared to state-of-the-art Transformers and State Space Models, both in performance and scaling.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/96260"} +{"video_file": "ARV1gJSOzV_39028508.mp4", "openreview_id": "ARV1gJSOzV", "slideslive_id": 39028508, "venue": "nips2024", "title": "Persistent Homology for High-dimensional Data Based on Spectral Methods", "status": "Poster", "keywords": "persistent homology;spectral methods-sequencing;topology;topological data analysis;curse of dimensionality;effective resistance;diffusion distance;single-cell RNA", "tldr": "Traditional persistent homology and its standard extensions fail on high-dimensional data, but persistent homology with spectral distances works well.", "abstract": "Persistent homology is a popular computational tool for analyzing the topology of point clouds, such as the presence of loops or voids. However, many real-world datasets with low intrinsic dimensionality reside in an ambient space of much higher dimensionality. We show that in this case traditional persistent homology becomes very sensitive to noise and fails to detect the correct topology. The same holds true for existing refinements of persistent homology. As a remedy, we find that spectral distances on the k-nearest-neighbor graph of the data, such as diffusion distance and effective resistance, allow to detect the correct topology even in the presence of high-dimensional noise. Moreover, we derive a novel closed-form formula for effective resistance, and describe its relation to diffusion distances. 
Finally, we apply these methods to high-dimensional single-cell RNA-sequencing data and show that spectral distances allow robust detection of cell cycle loops.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/96258"}
+{"video_file": "AUg9D2VjcF_39025776.mp4", "openreview_id": "AUg9D2VjcF", "slideslive_id": 39025776, "venue": "nips2024", "title": "One Sample Fits All: Approximating All Probabilistic Values Simultaneously and Efficiently", "status": "Poster", "keywords": "Beta Shapley values;weighted Banzhaf values;approximation;datamodels", "tldr": "We explore the possibilities of approximating all probabilistic values simultaneously and efficiently, examples of probabilistic values include Beta Shapley values and weighted Banzhaf values.", "abstract": "The concept of probabilistic values, such as Beta Shapley values and weighted Banzhaf values, has gained recent attention in applications like feature attribution and data valuation. However, exact computation of these values is often exponentially expensive, necessitating approximation techniques. Prior research has shown that the choice of probabilistic values significantly impacts downstream performance, with no universally superior option. Consequently, one may have to approximate multiple candidates and select the best-performing one. Although there have been many efforts to develop efficient estimators, none are intended to approximate all probabilistic values both simultaneously and efficiently. In this work, we embark on the first exploration of achieving this goal. Adhering to the principle of maximum sample reuse and avoiding amplifying factors, we propose a one-sample-fits-all framework parameterized by a sampling vector to approximate intermediate terms that can be converted to any probabilistic value. Leveraging the concept of (\u03f5,\u03b4)-approximation, we theoretically identify a key formula that effectively determines the convergence rate of our framework. By optimizing the sampling vector using this formula, we obtain i) a one-for-all estimator that achieves the currently best time complexity for all probabilistic values on average, and ii) a faster generic estimator with the sampling vector optimally tuned for each probabilistic value. Particularly, our one-for-all estimator achieves the fastest convergence rate on Beta Shapley values, including the well-known Shapley value, both theoretically and empirically. Finally, we establish a connection between probabilistic values and the least square regression used in (regularized) datamodels, showing that our one-for-all estimator can solve a family of datamodels simultaneously. Our code is available at https://github.com/watml/one-for-all.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/96253"}
+{"video_file": "AVrGtVrx10_39025027.mp4", "openreview_id": "AVrGtVrx10", "slideslive_id": 39025027, "venue": "nips2024", "title": "Probabilistic Conformal Distillation for Enhancing Missing Modality Robustness", "status": "Poster", "keywords": "Robust Learning;Multimodal Learning;Missing Modality", "tldr": "We propose a PCD method to handle the missing modality problem, which transfers privileged information of modality-complete representation by considering the indeterminacy in the mapping from incompleteness to completeness.", "abstract": "Multimodal models trained on modality-complete data are plagued with severe performance degradation when encountering modality-missing data. 
Prevalent cross-modal knowledge distillation-based methods precisely align the representation of modality-missing data and that of its modality-complete counterpart to enhance robustness. However, due to the irreparable information asymmetry, this determinate alignment is too stringent, easily inducing modality-missing features to capture spurious factors erroneously. In this paper, a novel multimodal Probabilistic Conformal Distillation (PCD) method is proposed, which considers the inherent indeterminacy in this alignment. Given a modality-missing input, our goal is to learn the unknown Probability Density Function (PDF) of the mapped variables in the modality-complete space, rather than relying on the brute-force point alignment. Specifically, PCD models the modality-missing feature as a probabilistic distribution, enabling it to satisfy two characteristics of the PDF. One is the extremes of probabilities of modality-complete feature points on the PDF, and the other is the geometric consistency between the modeled distributions and the peak points of different PDFs. Extensive experiments on a range of benchmark datasets demonstrate the superiority of PCD over state-of-the-art methods. Code is available at: https://github.com/mxchen-mc/PCD.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96251"} +{"video_file": "AWFryOJaGi_39027156.mp4", "openreview_id": "AWFryOJaGi", "slideslive_id": 39027156, "venue": "nips2024", "title": "Lambda: Learning Matchable Prior For Entity Alignment with Unlabeled Dangling Cases", "status": "Poster", "keywords": "Knowledge Graph;Entity Alignment;Positive-Unlabeled Learning;Dangling Cases", "tldr": "entity alignment with unlabeled dangling entity pineer work using novel PU learning algorithm", "abstract": "We investigate the entity alignment (EA) problem with unlabeled dangling cases, meaning that partial entities have no counterparts in the other knowledge graph (KG), yet these entities are unlabeled. The problem arises when the source and target graphs are of different scales, and it is much cheaper to label the matchable pairs than the dangling entities. To address this challenge, we propose the framework \\textit{Lambda} for dangling detection and entity alignment. Lambda features a GNN-based encoder called KEESA with a spectral contrastive learning loss for EA and a positive-unlabeled learning algorithm called iPULE for dangling detection. Our dangling detection module offers theoretical guarantees of unbiasedness, uniform deviation bounds, and convergence. 
Experimental results demonstrate that each component contributes to overall performance that is superior to baselines, even when baselines additionally exploit 30% of dangling entities labeled for training.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/96250"}
+{"video_file": "AYDBFxNon4_39026380.mp4", "openreview_id": "AYDBFxNon4", "slideslive_id": 39026380, "venue": "nips2024", "title": "Linking In-context Learning in Transformers to Human Episodic Memory", "status": "Poster", "keywords": "in-context learning;Transformer;induction head;episodic memory;mechanistic interpretability", "tldr": "This paper explores a striking similarity between the in-context learning mechanisms of Transformer models and human episodic memory, revealing parallel computational processes in artificial and biological intelligence systems.", "abstract": "Understanding connections between artificial and biological intelligent systems can reveal fundamental principles of general intelligence. While many artificial intelligence models have a neuroscience counterpart, such connections are largely missing in Transformer models and the self-attention mechanism. Here, we examine the relationship between interacting attention heads and human episodic memory. We focus on induction heads, which contribute to in-context learning in Transformer-based large language models (LLMs). We demonstrate that induction heads are behaviorally, functionally, and mechanistically similar to the contextual maintenance and retrieval (CMR) model of human episodic memory. Our analyses of LLMs pre-trained on extensive text data show that CMR-like heads often emerge in the intermediate and late layers, qualitatively mirroring human memory biases. The ablation of CMR-like heads suggests their causal role in in-context learning. Our findings uncover a parallel between the computational mechanisms of LLMs and human memory, offering valuable insights into both research fields.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/96248"}
+{"video_file": "AYq6GxxrrY_39028378.mp4", "openreview_id": "AYq6GxxrrY", "slideslive_id": 39028378, "venue": "nips2024", "title": "Transferable Boltzmann Generators", "status": "Poster", "keywords": "Boltzmann Generators;Normalizing Flows;Sampling Problem;Flow Matching;Molecular Dynamics", "tldr": "We introduce transferable Boltzmann Generators that allow efficient sampling on unseen small peptide systems", "abstract": "The generation of equilibrium samples of molecular systems has been a long-standing problem in statistical physics. Boltzmann Generators are a generative machine learning method that addresses this issue by learning a transformation via a normalizing flow from a simple prior distribution to the target Boltzmann distribution of interest. Recently, flow matching has been employed to train Boltzmann Generators for small molecular systems in Cartesian coordinates. We extend this work and propose a first framework for Boltzmann Generators that are transferable across chemical space, such that they predict zero-shot Boltzmann distributions for test molecules without being retrained for these systems. These transferable Boltzmann Generators allow approximate sampling from the target distribution of unseen systems, as well as efficient reweighting to the target Boltzmann distribution.
The transferability of the proposed framework is evaluated on dipeptides, where we show that it generalizes efficiently to unseen systems. Furthermore, we demonstrate that our proposed architecture enhances the efficiency of Boltzmann Generators trained on single molecular systems.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/96246"} +{"video_file": "AbTpJl7vN6_39027552.mp4", "openreview_id": "AbTpJl7vN6", "slideslive_id": 39027552, "venue": "nips2024", "title": "Flexible task abstractions emerge in linear networks with fast and bounded units", "status": "Spotlight", "keywords": "Deep linear networks;Learning dynamics;Cognitive Science;Cognitive control;Task representations", "tldr": "We train neural networks in changing environments and show that task abstractions emerge in parameters trained with fast learning rate and heavily regularized. The task abstractions can then support cognitive flexibility.", "abstract": "Animals survive in dynamic environments changing at arbitrary timescales, but such data distribution shifts are a challenge to neural networks. To adapt to change, neural systems may change a large number of parameters, which is a slow process involving forgetting past information. In contrast, animals leverage distribution changes to segment their stream of experience into tasks and associate them with internal task abstracts. Animals can then respond flexibly by selecting the appropriate task abstraction. However, how such flexible task abstractions may arise in neural systems remains unknown. Here, we analyze a linear gated network where the weights and gates are jointly optimized via gradient descent, but with neuron-like constraints on the gates including a faster timescale, non-negativity, and bounded activity. We observe that the weights self-organize into modules specialized for tasks or sub-tasks encountered, while the gates layer forms unique representations that switch the appropriate weight modules (task abstractions). We analytically reduce the learning dynamics to an effective eigenspace, revealing a virtuous cycle: fast adapting gates drive weight specialization by protecting previous knowledge, while weight specialization in turn increases the update rate of the gating layer. Task switching in the gating layer accelerates as a function of curriculum block size and task training, mirroring key findings in cognitive neuroscience. We show that the discovered task abstractions support generalization through both task and subtask composition, and we extend our findings to a non-linear network switching between two tasks. 
Overall, our work offers a theory of cognitive flexibility in animals as arising from joint gradient descent on synaptic and neural gating in a neural network architecture.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/96245"}
+{"video_file": "Ai76ATrb2y_39028879.mp4", "openreview_id": "Ai76ATrb2y", "slideslive_id": 39028879, "venue": "nips2024", "title": "Auditing Privacy Mechanisms via Label Inference Attacks", "status": "Spotlight", "keywords": "label inference;label reconstruction;differential privacy;learning from label proportions", "tldr": "We propose auditing tools for privacy mechanisms via label reconstruction advantage measures, allowing to place a variety of proposed label privatization schemes\u2014some differentially private, some not\u2014on the same footing.", "abstract": "We propose reconstruction advantage measures to audit label privatization mechanisms. A reconstruction advantage measure quantifies the increase in an attacker's ability to infer the true label of an unlabeled example when provided with a private version of the labels in a dataset (e.g., aggregate of labels from different users or noisy labels output by randomized response), compared to an attacker that only observes the feature vectors, but may have prior knowledge of the correlation between features and labels. We consider two such auditing measures: one additive, and one multiplicative. These cover previous approaches taken in the literature on empirical auditing and differential privacy. These measures allow us to place a variety of proposed privatization schemes---some differentially private, some not---on the same footing. We analyze these measures theoretically under a distributional model which, we claim, encapsulates reasonable adversarial settings. We also quantify their behavior empirically on real and simulated prediction tasks. Across a range of experimental settings, we find that differentially private schemes dominate or match the privacy-utility tradeoff of more heuristic approaches.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96236"}
+{"video_file": "Aj8RKCGwjE_39026295.mp4", "openreview_id": "Aj8RKCGwjE", "slideslive_id": 39026295, "venue": "nips2024", "title": "AROMA: Preserving Spatial Structure for Latent PDE Modeling with Local Neural Fields", "status": "Poster", "keywords": "PDE;Neural Operator;Neural Fields;Transformer;Diffusion", "tldr": "We introduce AROMA, a versatile framework for enhanced PDE modeling using local neural fields, which achieves stable, efficient processing of diverse spatial data and superior performance in simulating 1D and 2D equations.", "abstract": "We present AROMA (Attentive Reduced Order Model with Attention), a framework designed to enhance the modeling of partial differential equations (PDEs) using local neural fields. Our flexible encoder-decoder architecture can obtain smooth latent representations of spatial physical fields from a variety of data types, including irregular-grid inputs and point clouds. This versatility eliminates the need for patching and allows efficient processing of diverse geometries. The sequential nature of our latent representation can be interpreted spatially and permits the use of a conditional transformer for modeling the temporal dynamics of PDEs. By employing a diffusion-based formulation, we achieve greater stability and enable longer rollouts compared to conventional MSE training.
AROMA's superior performance in simulating 1D and 2D equations underscores the efficacy of our approach in capturing complex dynamical behaviors.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/96233"}
+{"video_file": "Ao0FiZqrXa_39027579.mp4", "openreview_id": "Ao0FiZqrXa", "slideslive_id": 39027579, "venue": "nips2024", "title": "Simple and Fast Distillation of Diffusion Models", "status": "Poster", "keywords": "Diffusion models;fast distillation;fast sampling", "tldr": "A simple and fast distillation of diffusion models that accelerates the fine-tuning up to 1000 times while performing high-quality image generation.", "abstract": "Diffusion-based generative models have demonstrated their powerful performance across various tasks, but this comes at a cost of the slow sampling speed. To achieve both efficient and high-quality synthesis, various distillation-based accelerated sampling methods have been developed recently. However, they generally require time-consuming fine tuning with elaborate designs to achieve satisfactory performance in a specific number of function evaluation (NFE), making them difficult to employ in practice. To address this issue, we propose Simple and Fast Distillation (SFD) of diffusion models, which simplifies the paradigm used in existing methods and largely shortens their fine-tuning time up to $1000\times$. We begin with a vanilla distillation-based sampling method and boost its performance to state of the art by identifying and addressing several small yet vital factors affecting the synthesis efficiency and quality. Our method can also achieve sampling with variable NFEs using a single distilled model. Extensive experiments demonstrate that SFD strikes a good balance between the sample quality and fine-tuning costs in few-step image generation task. For example, SFD achieves 4.53 FID (NFE=2) on CIFAR-10 with only 0.64 hours of fine-tuning on a single NVIDIA A100 GPU.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96231"}
+{"video_file": "Apq6corvfZ_39027216.mp4", "openreview_id": "Apq6corvfZ", "slideslive_id": 39027216, "venue": "nips2024", "title": "Instance-Optimal Private Density Estimation in the Wasserstein Distance", "status": "Poster", "keywords": "Differential Privacy;Density Estimation;Instance Optimality;Wasserstein Distance", "tldr": "We give instance optimal (differentially private) algorithms for density estimation in Wasserstein distance.", "abstract": "Estimating the density of a distribution from samples is a fundamental problem in statistics. In many practical settings, the Wasserstein distance is an appropriate error metric for density estimation. For example, when estimating population densities in a geographic region, a small Wasserstein distance means that the estimate is able to capture roughly where the population mass is. In this work we study differentially private density estimation in the Wasserstein distance. We design and analyze instance-optimal algorithms for this problem that can adapt to easy instances.
For distributions $P$ over $\mathbb{R}$, we consider a strong notion of instance-optimality: an algorithm that uniformly achieves the instance-optimal estimation rate is competitive with an algorithm that is told that the distribution is either $P$ or $Q_P$ for some distribution $Q_P$ whose probability density function (pdf) is within a factor of 2 of the pdf of $P$.
For distributions over $\mathbb{R}^2$, we use a slightly different notion of instance optimality. We say that an algorithm is instance-optimal if it is competitive with an algorithm that is given a constant multiplicative approximation of the density of the distribution. We characterize the instance-optimal estimation rates in both these settings and show that they are uniformly achievable (up to polylogarithmic factors). Our approach for $\mathbb{R}^2$ extends to arbitrary metric spaces as it goes via hierarchically separated trees. As a special case our results lead to instance-optimal learning in TV distance for discrete distributions.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96229"}
+{"video_file": "AvWB40qXZh_39027857.mp4", "openreview_id": "AvWB40qXZh", "slideslive_id": 39027857, "venue": "nips2024", "title": "NeuMA: Neural Material Adaptor for Visual Grounding of Intrinsic Dynamics", "status": "Poster", "keywords": "Intuitive Physics;Differentiable Renderer;Neural Simulation", "tldr": "We propose a residual adaptation paradigm for the visual grounding of intrinsic dynamics.", "abstract": "While humans effortlessly discern intrinsic dynamics and adapt to new scenarios, modern AI systems often struggle. Current methods for visual grounding of dynamics either use pure neural-network-based simulators (black box), which may violate physical laws, or traditional physical simulators (white box), which rely on expert-defined equations that may not fully capture actual dynamics. We propose the Neural Material Adaptor (NeuMA), which integrates existing physical laws with learned corrections, facilitating accurate learning of actual dynamics while maintaining the generalizability and interpretability of physical priors. Additionally, we propose Particle-GS, a particle-driven 3D Gaussian Splatting variant that bridges simulation and observed images, allowing back-propagate image gradients to optimize the simulator. Comprehensive experiments on various dynamics in terms of grounded particle accuracy, dynamic rendering quality, and generalization ability demonstrate that NeuMA can accurately capture intrinsic dynamics. Project Page: https://xjay18.github.io/projects/neuma.html.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/96224"}
+{"video_file": "B0OWOkMwhz_39027530.mp4", "openreview_id": "B0OWOkMwhz", "slideslive_id": 39027530, "venue": "nips2024", "title": "MVSplat360: Feed-Forward 360 Scene Synthesis from Sparse Views", "status": "Poster", "keywords": "novel view synthesis;feed-forward 3DGS;3D gaussians splatting;latent video diffusion model", "tldr": "MVSplat360 is a feed-forward approach for 360\u00b0 novel view synthesis of diverse real-world scenes, using only sparse observations.", "abstract": "We introduce MVSplat360, a feed-forward approach for 360\u00b0 novel view synthesis (NVS) of diverse real-world scenes, using only sparse observations. This setting is inherently ill-posed due to minimal overlap among input views and insufficient visual information provided, making it challenging for conventional methods to achieve high-quality results. Our MVSplat360 addresses this by effectively combining geometry-aware 3D reconstruction with temporally consistent video generation.
Specifically, it refactors a feed-forward 3D Gaussian Splatting (3DGS) model to render features directly into the latent space of a pre-trained Stable Video Diffusion (SVD) model, where these features then act as pose and visual cues to guide the denoising process and produce photorealistic 3D-consistent views. Our model is end-to-end trainable and supports rendering arbitrary views with as few as 5 sparse input views. To evaluate MVSplat360's performance, we introduce a new benchmark using the challenging DL3DV-10K dataset, where MVSplat360 achieves superior visual quality compared to state-of-the-art methods on wide-sweeping or even 360\u00b0 NVS tasks. Experiments on the existing benchmark RealEstate10K also confirm the effectiveness of our model. Readers are highly recommended to view the video results at donydchen.github.io/mvsplat360.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96223"}
+{"video_file": "B1FOes6cyq_39028830.mp4", "openreview_id": "B1FOes6cyq", "slideslive_id": 39028830, "venue": "nips2024", "title": "Learning from Teaching Regularization: Generalizable Correlations Should be Easy to Imitate", "status": "Poster", "keywords": "Generalization;Regularization;Learning from Teaching", "tldr": "We propose Learning from Teaching (LoT), a novel regularization technique for deep neural networks to enhance generalization.", "abstract": "Generalization remains a central challenge in machine learning. In this work, we propose Learning from Teaching (LoT), a novel regularization technique for deep neural networks to enhance generalization. Inspired by the human ability to capture concise and abstract patterns, we hypothesize that generalizable correlations are expected to be easier to imitate. LoT operationalizes this concept to improve the generalization of the main model with auxiliary student learners. The student learners are trained by the main model and, in turn, provide feedback to help the main model capture more generalizable and imitable correlations. Our experimental results across several domains, including Computer Vision, Natural Language Processing, and methodologies like Reinforcement Learning, demonstrate that the introduction of LoT brings significant benefits compared to training models on the original dataset. The results suggest the effectiveness and efficiency of LoT in identifying generalizable information at the right scales while discarding spurious data correlations, thus making LoT a valuable addition to current machine learning. Code is available at https://github.com/jincan333/LoT.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/96222"}
+{"video_file": "B1Iq1EOiVU_39025732.mp4", "openreview_id": "B1Iq1EOiVU", "slideslive_id": 39025732, "venue": "nips2024", "title": "DeformableTST: Transformer for Time Series Forecasting without Over-reliance on Patching", "status": "Poster", "keywords": "Time series forecasting;Transformer;Deep learning", "tldr": "We propose DeformableTST, a Transformer-based model less reliant on patching, to broaden the applicability of Transformer-based models in time series forecasting tasks and achieves SOTA performance in a wider range of time series forecasting tasks.", "abstract": "With the proposal of patching technique in time series forecasting, Transformer-based models have achieved compelling performance and gained great interest from the time series community.
But at the same time, we observe a new problem that the recent Transformer-based models are overly reliant on patching to achieve ideal performance, which limits their applicability to some forecasting tasks unsuitable for patching. In this paper, we intend to handle this emerging issue. Through diving into the relationship between patching and full attention (the core mechanism in Transformer-based models), we further find out the reason behind this issue is that full attention relies overly on the guidance of patching to focus on the important time points and learn non-trivial temporal representation. Based on this finding, we propose DeformableTST as an effective solution to this emerging issue. Specifically, we propose deformable attention, a sparse attention mechanism that can better focus on the important time points by itself, to get rid of the need of patching. And we also adopt a hierarchical structure to alleviate the efficiency issue caused by the removal of patching. Experimentally, our DeformableTST achieves the consistent state-of-the-art performance in a broader range of time series tasks, especially achieving promising performance in forecasting tasks unsuitable for patching, therefore successfully reducing the reliance on patching and broadening the applicability of Transformer-based models. Code is available at this repository: https://github.com/luodhhh/DeformableTST.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/96221"}
+{"video_file": "B29BlRe26Z_39026632.mp4", "openreview_id": "B29BlRe26Z", "slideslive_id": 39026632, "venue": "nips2024", "title": "SLowcalSGD : Slow Query Points Improve Local-SGD for Stochastic Convex Optimization", "status": "Poster", "keywords": "Stochastic Convex Optimization", "tldr": "The first parallel training method that provably benefits over Minibatch-SGD in Convex heterogeneous training scenarios.", "abstract": "We consider distributed learning scenarios where $M$ machines interact with a parameter server along several communication rounds in order to minimize a joint objective function. Focusing on the heterogeneous case, where different machines may draw samples from different data-distributions, we design the first local update method that provably benefits over the two most prominent distributed baselines: namely Minibatch-SGD and Local-SGD. Key to our approach is a slow querying technique that we customize to the distributed setting, which in turn enables a better mitigation of the bias caused by local updates.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96219"}
+{"video_file": "B2cTLakrhV_39028901.mp4", "openreview_id": "B2cTLakrhV", "slideslive_id": 39028901, "venue": "nips2024", "title": "Differentiable Structure Learning with Partial Orders", "status": "Poster", "keywords": "Causal discovery;Continuous optimization;Differentiable Structure Learning;Partial Orders", "tldr": "This paper extends the continuous optimization of structure learning to integrate structural partial orders, an important form of prior information in real-world causal research.", "abstract": "Differentiable structure learning is a novel line of causal discovery research that transforms the combinatorial optimization of structural models into a continuous optimization problem. However, the field has lacked feasible methods to integrate partial order constraints, a critical prior information typically used in real-world scenarios, into the differentiable structure learning framework.
The main difficulty lies in adapting these constraints, typically suited for the space of total orderings, to the continuous optimization context of structure learning in the graph space. To bridge this gap, this paper formalizes a set of equivalent constraints that map partial orders onto graph spaces and introduces a plug-and-play module for their efficient application. This module preserves the equivalent effect of partial order constraints in the graph space, backed by theoretical validations of correctness and completeness. It significantly enhances the quality of recovered structures while maintaining good efficiency, which learns better structures using 90% fewer samples than the data-based method on a real-world dataset. This result, together with a comprehensive evaluation on synthetic cases, demonstrates our method's ability to effectively improve differentiable structure learning with partial orders.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/96218"}
+{"video_file": "B74mb0tEY6_39027521.mp4", "openreview_id": "B74mb0tEY6", "slideslive_id": 39027521, "venue": "nips2024", "title": "Optimizing the coalition gain in Online Auctions with Greedy Structured Bandits", "status": "Poster", "keywords": "unimodal bandits;multi-arm bandit;auctions", "tldr": "We prove that a specific structured bandit problem motivated by online auctions can be tackled with locally greedy algorithms.", "abstract": "Motivated by online display advertising, this work considers repeated second-price auctions, where agents sample their value from an unknown distribution with cumulative distribution function $F$. In each auction $t$, a decision-maker bound by limited observations selects $n_t$ agents from a coalition of $N$ to compete for a prize with $p$ other agents, aiming to maximize the cumulative reward of the coalition across all auctions. The problem is framed as an $N$-armed structured bandit, each number of players sent being an arm $n$, with expected reward $r(n)$ fully characterized by $F$ and $p+n$. We present two algorithms, Local-Greedy (LG) and Greedy-Grid (GG), both achieving constant problem-dependent regret. This relies on three key ingredients: 1. an estimator of $r(n)$ from feedback collected from any arm $k$, 2. concentration bounds of these estimates for $k$ within an estimation neighborhood of $n$ and 3. the unimodality property of $r$ under standard assumptions on $F$. Additionally, GG exhibits problem-independent guarantees on top of best problem-dependent guarantees. However, by avoiding reliance on confidence intervals, LG practically outperforms GG, as well as standard unimodal bandit algorithms such as OSUB or multi-armed bandit algorithms.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/96213"}
+{"video_file": "B9FPPdNmyk_39024461.mp4", "openreview_id": "B9FPPdNmyk", "slideslive_id": 39024461, "venue": "nips2024", "title": "The Best of Both Worlds: On the Dilemma of Out-of-distribution Detection", "status": "Poster", "keywords": "Out-of-Distribution Detection; Uncertainty estimation;", "tldr": "This paper provides both analysis and solution to overcome the dilemma between OOD detection and generalization.", "abstract": "Out-of-distribution (OOD) detection is essential for model trustworthiness which aims to sensitively identify semantic OOD samples and robustly generalize for covariate-shifted OOD samples.
However, we discover that the superior OOD detection performance of state-of-the-art methods is achieved by secretly sacrificing the OOD generalization ability. The classification accuracy frequently collapses catastrophically when even slight noise is encountered. Such a phenomenon violates the motivation of trustworthiness and significantly limits the model's deployment in the real world. What is the hidden reason behind such a limitation? In this work, we theoretically demystify the \"\textit{sensitive-robust}\" dilemma that lies in previous OOD detection methods. Consequently, a theory-inspired algorithm is induced to overcome such a dilemma. By decoupling the uncertainty learning objective from a Bayesian perspective, the conflict between OOD detection and OOD generalization is naturally harmonized and a dual-optimized performance could be expected. Empirical studies show that our method achieves superior performance on commonly used benchmarks. To our best knowledge, this work is the first principled OOD detection method that achieves state-of-the-art OOD detection performance without sacrificing OOD generalization ability. Our code is available at https://github.com/QingyangZhang/DUL.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/96211"}
+{"video_file": "B9qg3wo75g_39025125.mp4", "openreview_id": "B9qg3wo75g", "slideslive_id": 39025125, "venue": "nips2024", "title": "Generative Fractional Diffusion Models", "status": "Poster", "keywords": "diffusion models;fractional brownian motion;fractional noise;generative modeling", "tldr": "We generalize the continuous time framework for score-based generative models from an underlying Brownian motion to a Markov-approximate fractional Brownian motion.", "abstract": "We introduce the first continuous-time score-based generative model that leverages fractional diffusion processes for its underlying dynamics. Although diffusion models have excelled at capturing data distributions, they still suffer from various limitations such as slow convergence, mode-collapse on imbalanced data, and lack of diversity. These issues are partially linked to the use of light-tailed Brownian motion (BM) with independent increments. In this paper, we replace BM with an approximation of its non-Markovian counterpart, fractional Brownian motion (fBM), characterized by correlated increments and Hurst index $H \in (0, 1)$, where $H = 0.5$ recovers the classical BM. To ensure tractable inference and learning, we employ a recently popularized Markov approximation of fBM (MA-fBM) and derive its reverse-time model, resulting in generative fractional diffusion models (GFDM). We characterize the forward dynamics using a continuous reparameterization trick and propose augmented score matching to efficiently learn the score function, which is partly known in closed form, at minimal added cost. The ability to drive our diffusion model via MA-fBM offers flexibility and control. $H \le 0.5$ enters the regime of rough paths whereas $H > 0.5$ regularizes diffusion paths and invokes long-term memory. The Markov approximation allows added control by varying the number of Markov processes linearly combined to approximate fBM.
Our evaluations on real image datasets demonstrate that GFDM achieves greater pixel-wise diversity and enhanced image quality, as indicated by a lower FID, offering a promising alternative to traditional diffusion models", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96210"} +{"video_file": "BAfKBkr8IP_39025382.mp4", "openreview_id": "BAfKBkr8IP", "slideslive_id": 39025382, "venue": "nips2024", "title": "Rethinking Fourier Transform from A Basis Functions Perspective for Long-term Time Series Forecasting", "status": "Poster", "keywords": "Time series forecasting;Deep learning;Fourier transform;Frequency domain", "tldr": "We propose a Fourier basis mapping model, leveraging the basis functions to provide more implicit frequency features while preserving the temporal characteristic.", "abstract": "The interaction between Fourier transform and deep learning opens new avenues for long-term time series forecasting (LTSF). We propose a new perspective to reconsider the Fourier transform from a basis functions perspective. Specifically, the real and imaginary parts of the frequency components can be viewed as the coefficients of cosine and sine basis functions at tiered frequency levels, respectively. We argue existing Fourier-based methods do not involve basis functions thus fail to interpret frequency coefficients precisely and consider the time-frequency relationship sufficiently, leading to inconsistent starting cycles and inconsistent series length issues. Accordingly, a novel Fourier basis mapping (FBM) method addresses these issues by mixing time and frequency domain features through Fourier basis expansion. Differing from existing approaches, FBM (i) embeds the discrete Fourier transform with basis functions, and then (ii) can enable plug-and-play in various types of neural networks for better performance. FBM extracts explicit frequency features while preserving temporal characteristics, enabling the mapping network to capture the time-frequency relationships. By incorporating our unique time-frequency features, the FBM variants can enhance any type of networks like linear, multilayer-perceptron-based, transformer-based, and Fourier-based networks, achieving state-of-the-art LTSF results on diverse real-world datasets with just one or three fully connected layers. The code is available at: https://github.com/runze1223/Fourier-Basis-Mapping.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/96209"} +{"video_file": "BAjjINf0Oh_39025493.mp4", "openreview_id": "BAjjINf0Oh", "slideslive_id": 39025493, "venue": "nips2024", "title": "Oracle-Efficient Differentially Private Learning with Public Data", "status": "Poster", "keywords": "Oracle Efficiency;Differential Privacy;PAC learning", "tldr": "We provide oracle-efficient algorithms capable of differentially private PAC learning any learnable function classes in the presence of public, unlabelled data.", "abstract": "Due to statistical lower bounds on the learnability of many function classes under privacy constraints, there has been recent interest in leveraging public data to improve the performance of private learning algorithms. In this model, algorithms must always guarantee differential privacy with respect to the private samples while also ensuring learning guarantees when the private data distribution is sufficiently close to that of the public data. 
Previous work has demonstrated that when sufficient public, unlabelled data is available, private learning can be made statistically tractable, but the resulting algorithms have all been computationally inefficient. In this work, we present the first computationally efficient algorithms to provably leverage public data to learn privately whenever a function class is learnable non-privately, where our notion of computational efficiency is with respect to the number of calls to an optimization oracle for the function class. In addition to this general result, we provide specialized algorithms with improved sample complexities in the special cases when the function class is convex or when the task is binary classification.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96208"}
+{"video_file": "BAmAFraxvf_39027175.mp4", "openreview_id": "BAmAFraxvf", "slideslive_id": 39027175, "venue": "nips2024", "title": "Toward Semantic Gaze Target Detection", "status": "Poster", "keywords": "gaze following;dataset;deep learning;computer vision", "tldr": "We extend the gaze following task, which is currently focused on localization alone, to also incorporate the class label of the gaze target. We propose new benchmarks and design a new architecture that outperforms existing methods.", "abstract": "From the onset of infanthood, humans naturally develop the ability to closely observe and interpret the visual gaze of others. This skill, known as gaze following, holds significance in developmental theory as it enables us to grasp another person\u2019s mental state, emotions, intentions, and more. In computer vision, gaze following is defined as the prediction of the pixel coordinates where a person in the image is focusing their attention. Existing methods in this research area have predominantly centered on pinpointing the gaze target by predicting a gaze heatmap or gaze point. However, a notable drawback of this approach is its limited practical value in gaze applications, as mere localization may not fully capture our primary interest \u2014 understanding the underlying semantics, such as the nature of the gaze target, rather than just its 2D pixel location. To address this gap, we extend the gaze following task, and introduce a novel architecture that simultaneously predicts the localization and semantic label of the gaze target. We devise a pseudo-annotation pipeline for the GazeFollow dataset, propose a new benchmark, develop an experimental protocol and design a suitable baseline for comparison. Our method sets a new state-of-the-art on the main GazeFollow benchmark for localization and achieves competitive results in the recognition task on both datasets compared to the baseline, with 40% fewer parameters.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96207"}
+{"video_file": "BCA9NMZkLS_39025966.mp4", "openreview_id": "BCA9NMZkLS", "slideslive_id": 39025966, "venue": "nips2024", "title": "BERTs are Generative In-Context Learners", "status": "Poster", "keywords": "in-context learning;masked language modeling;bert;language modeling;evaluation;inference", "tldr": "This paper explores the in-context learning capabilities of masked language models, challenging the common view that such abilities are only present in causal language models.", "abstract": "While in-context learning is commonly associated with causal language models, such as GPT, we demonstrate that this capability also 'emerges' in masked language models.
Through an embarrassingly simple inference technique, we enable an existing masked model, DeBERTa, to perform generative tasks without additional training or architectural changes. Our evaluation reveals that the masked and causal language models behave very differently, as they clearly outperform each other on different categories of tasks. These complementary strengths suggest that the field's focus on causal models for in-context learning may be limiting \u2013 both architectures can develop these capabilities, but with distinct advantages; pointing toward promising hybrid approaches that combine the strengths of both objectives.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96206"} +{"video_file": "BEiqNQZIky_39026093.mp4", "openreview_id": "BEiqNQZIky", "slideslive_id": 39026093, "venue": "nips2024", "title": "Efficiently Learning Significant Fourier Feature Pairs for Statistical Independence Testing", "status": "Poster", "keywords": "independence test;learnable Fourier feature", "tldr": "A novel framework for statistical independence tests that enable effective learning significant Fourier feature pairs to maximize test power", "abstract": "We propose a novel method to efficiently learn significant Fourier feature pairs for maximizing the power of Hilbert-Schmidt Independence Criterion~(HSIC) based independence tests. We first reinterpret HSIC in the frequency domain, which reveals its limited discriminative power due to the inability to adapt to specific frequency-domain features under the current inflexible configuration. To remedy this shortcoming, we introduce a module of learnable Fourier features, thereby developing a new criterion. We then derive a finite sample estimate of the test power by modeling the behavior of the criterion, thus formulating an optimization objective for significant Fourier feature pairs learning. We show that this optimization objective can be computed in linear time (with respect to the sample size $n$), which ensures fast independence tests. We also prove the convergence property of the optimization objective and establish the consistency of the independence tests. Extensive empirical evaluation on both synthetic and real datasets validates our method's superiority in effectiveness and efficiency, particularly in handling high-dimensional data and dealing with large-scale scenarios.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/96204"} +{"video_file": "BFWdIPPLgZ_39027048.mp4", "openreview_id": "BFWdIPPLgZ", "slideslive_id": 39027048, "venue": "nips2024", "title": "A Phase Transition between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention", "status": "Spotlight", "keywords": "replica method;statistical physics;phase transition;high-dimensional limit;attention layer", "tldr": "We provide a tight asymptotic analysis of the learning of an attention layer, and evidence a phase transition from a positional to a semantic attention mechanism with sample complexity.", "abstract": "Many empirical studies have provided evidence for the emergence of algorithmic mechanisms (abilities) in the learning of language models, that lead to qualitative improvements of the model capabilities. Yet, a theoretical characterization of how such mechanisms emerge remains elusive. In this paper, we take a step in this direction by providing a tight theoretical analysis of the emergence of semantic attention in a solvable model of dot-product attention. 
More precisely, we consider a non-linear self-attention layer with trainable tied and low-rank query and key matrices. In the asymptotic limit of high-dimensional data and a comparably large number of training samples we provide a tight closed-form characterization of the global minimum of the non-convex empirical loss landscape. We show that this minimum corresponds to either a positional attention mechanism (with tokens attending to each other based on their respective positions) or a semantic attention mechanism (with tokens attending to each other based on their meaning), and evidence an emergent phase transition from the former to the latter with increasing sample complexity. Finally, we compare the dot-product attention layer to a linear positional baseline, and show that it outperforms the latter using the semantic mechanism provided it has access to sufficient data.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96203"} +{"video_file": "BGOGknwHbi_39028092.mp4", "openreview_id": "BGOGknwHbi", "slideslive_id": 39028092, "venue": "nips2024", "title": "Self-Guiding Exploration for Combinatorial Problems", "status": "Poster", "keywords": "combinatorial problems;combinatorial optimization;LLM prompting strategies;LLM thought exploration", "tldr": "We introduce Self-Guiding Exploration (SGE) for LLMs, boosting combinatorial problem-solving by 27.84% and increasing reasoning task accuracy by 2.46%.", "abstract": "Large Language Models (LLMs) have become pivotal in addressing reasoning tasks across diverse domains, including arithmetic, commonsense, and symbolic reasoning. They utilize prompting techniques such as Exploration-of-Thought, Decomposition, and Refinement to effectively navigate and solve intricate tasks. Despite these advancements, the application of LLMs to Combinatorial Problems (CPs), known for their NP-hardness and critical roles in logistics and resource management remains underexplored. To address this gap, we introduce a novel prompting strategy: Self-Guiding Exploration (SGE), designed to enhance the performance of solving CPs. SGE operates autonomously, generating multiple thought trajectories for each CP task. It then breaks these trajectories down into actionable subtasks, executes them sequentially, and refines the results to ensure optimal outcomes. We present our research as the first to apply LLMs to a broad range of CPs and demonstrate that SGE outperforms existing prompting strategies by over 27.84% in CP optimization performance. Additionally, SGE achieves a 2.46% higher accuracy over the best existing results in other reasoning tasks (arithmetic, commonsense, and symbolic).", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/96202"} +{"video_file": "BJndYScO6o_39024922.mp4", "openreview_id": "BJndYScO6o", "slideslive_id": 39024922, "venue": "nips2024", "title": "Model-based Diffusion for Trajectory Optimization", "status": "Poster", "keywords": "Diffusion;Trajectory Optimization;Motion Planning;Robotics;Sampling-based Control", "tldr": "Model-Based Diffusion (MBD) solves trajectory optimization by using model information for score computation.", "abstract": "Recent advances in diffusion models have demonstrated their strong capabilities in generating high-fidelity samples from complex distributions through an iterative refinement process. 
Despite the empirical success of diffusion models in motion planning and control, the model-free nature of these methods does not leverage readily available model information and limits their generalization to new scenarios beyond the training data (e.g., new robots with different dynamics). In this work, we introduce Model-Based Diffusion (MBD), an optimization approach using the diffusion process to solve trajectory optimization (TO) problems without data. The key idea is to explicitly compute the score function by leveraging the model information in TO problems, which is why we refer to our approach as model-based diffusion. Moreover, although MBD does not require external data, it can be naturally integrated with data of diverse qualities to steer the diffusion process. We also reveal that MBD has interesting connections to sampling-based optimization. Empirical evaluations show that MBD outperforms state-of-the-art reinforcement learning and sampling-based TO methods in challenging contact-rich tasks. Additionally, MBD\u2019s ability to integrate with data enhances its versatility and practical applicability, even with imperfect and infeasible data (e.g., partial-state demonstrations for high-dimensional humanoids), beyond the scope of standard diffusion models. Videos and codes are available in the supplementary materials.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96200"} +{"video_file": "BJrBaLoDRJ_39025873.mp4", "openreview_id": "BJrBaLoDRJ", "slideslive_id": 39025873, "venue": "nips2024", "title": "A robust inlier identification algorithm for point cloud registration via $\\mathbf{\\ell_0}$-minimization", "status": "Poster", "keywords": "Point cloud registration;Inlier identification;Optimization", "tldr": "We propose an effective $\\ell_0$-norm based inlier identification algorithm for robust point cloud registration.", "abstract": "Correspondences in point cloud registration are prone to outliers, significantly reducing registration accuracy and highlighting the need for precise inlier identification. In this paper, we propose a robust inlier identification algorithm for point cloud registration by reformulating the conventional registration problem as an alignment error $\\ell_0$-minimization problem. The $\\ell_0$-minimization problem is formulated for each local set, where those local sets are built on a compatibility graph of input correspondences. To resolve the $\\ell_0$-minimization, we develop a novel two-stage decoupling strategy, which first decouples the alignment error into a rotation fitting error and a translation fitting error. Second, null-space matrices are employed to decouple inlier identification from the estimation of rotation and translation respectively, thereby applying Bayesian theory to $\\ell_0$-minimization problems and solving for fitting errors. Correspondences with the smallest errors are identified as inliers to generate a transformation hypothesis for each local set. The best hypothesis is selected to perform registration. We demonstrate that the proposed inlier identification algorithm is robust under high outlier ratios and noise through experiments. 
Extensive results on the KITTI, 3DMatch, and 3DLoMatch datasets demonstrate that our method achieves state-of-the-art performance compared to both traditional and learning-based methods in various indoor and outdoor scenes.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96199"} +{"video_file": "BJv1t4XNJW_39028824.mp4", "openreview_id": "BJv1t4XNJW", "slideslive_id": 39028824, "venue": "nips2024", "title": "Slot State Space Models", "status": "Poster", "keywords": "State-Space Models;Object-Centric Learning;Video Understanding Models;Spatial-Temporal Reasoning", "tldr": "We propose SlotSSM which incorporates independent mechanisms into State Space Models to preserve or encourage separation of information int object-centric learning and visual reasoning.", "abstract": "Recent State Space Models (SSMs) such as S4, S5, and Mamba have shown remarkable computational benefits in long-range temporal dependency modeling. However, in many sequence modeling problems, the underlying process is inherently modular and it is of interest to have inductive biases that mimic this modular structure. In this paper, we introduce SlotSSMs, a novel framework for incorporating independent mechanisms into SSMs to preserve or encourage separation of information. Unlike conventional SSMs that maintain a monolithic state vector, SlotSSMs maintains the state as a collection of multiple vectors called slots. Crucially, the state transitions are performed independently per slot with sparse interactions across slots implemented via the bottleneck of self-attention. In experiments, we evaluate our model in object-centric learning, 3D visual reasoning, and long-context video understanding tasks, which involve modeling multiple objects and their long-range temporal dependencies. We find that our proposed design offers substantial performance gains over existing sequence modeling methods. Project page is available at \\url{https://slotssms.github.io/}", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/96198"} +{"video_file": "BOhnXyIPWW_39025990.mp4", "openreview_id": "BOhnXyIPWW", "slideslive_id": 39025990, "venue": "nips2024", "title": "Locally Private and Robust Multi-Armed Bandits", "status": "Poster", "keywords": "Local Differential Privacy;Robustness;Huber Corruption;Multi-Armed Bandits", "tldr": "We show an interesting interplay between local differential privacy and Huber contamination in MABs.", "abstract": "We study the interplay between local differential privacy (LDP) and robustness to Huber corruption and possibly heavy-tailed rewards in the context of multi-armed bandits (MABs). We consider two different practical settings: LDP-then-Corruption (LTC) where each user's locally private response might be further corrupted during the data collection process, and Corruption-then-LDP (CTL) where each user's raw data may be corrupted such that the LDP mechanism will only be applied to the corrupted data. To start with, we present the first tight characterization of the mean estimation error in high probability under both LTC and CTL settings. Leveraging this new result, we then present an almost tight characterization (up to log factor) of the minimax regret in online MABs and sub-optimality in offline MABs under both LTC and CTL settings, respectively. Our theoretical results in both settings are also corroborated by a set of systematic simulations. 
One key message in this paper is that LTC is a more difficult setting that leads to a worse performance guarantee compared to the CTL setting (in the minimax sense). Our sharp understanding of LTC and CTL also naturally allows us to give the first tight performance bounds for the most practical setting where corruption could happen both before and after the LDP mechanism. As an important by-product, we also give the first correct and tight regret bound for locally private and heavy-tailed online MABs, i.e., without Huber corruption, by identifying a fundamental flaw in the state-of-the-art.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/96196"} +{"video_file": "BQh1SGvROG_39025987.mp4", "openreview_id": "BQh1SGvROG", "slideslive_id": 39025987, "venue": "nips2024", "title": "AdanCA: Neural Cellular Automata As Adaptors For More Robust Vision Transformer", "status": "Poster", "keywords": "Neural Cellular Automata;Vision Transformer;Adversarial Robustness;Out-of-distribution generalization", "tldr": "Neural Cellular Automata can be inserted between the middle layers of Vision Transformers (ViTs) to improve ViTs' robustness on image classification.", "abstract": "Vision Transformers (ViTs) demonstrate remarkable performance in image classification through visual-token interaction learning, particularly when equipped with local information via region attention or convolutions. Although such architectures improve the feature aggregation from different granularities, they often fail to contribute to the robustness of the networks. Neural Cellular Automata (NCA) enables the modeling of global visual-token representations through local interactions, with its training strategies and architecture design conferring strong generalization ability and robustness against noisy input. In this paper, we propose Adaptor Neural Cellular Automata (AdaNCA) for Vision Transformers that uses NCA as plug-and-play adaptors between ViT layers, thus enhancing ViT's performance and robustness against adversarial samples as well as out-of-distribution inputs. To overcome the large computational overhead of standard NCAs, we propose Dynamic Interaction for more efficient interaction learning. Using our analysis of AdaNCA placement and robustness improvement, we also develop an algorithm for identifying the most effective insertion points for AdaNCA. With less than a 3% increase in parameters, AdaNCA contributes to more than 10% absolute improvement in accuracy under adversarial attacks on the ImageNet1K benchmark. Moreover, we demonstrate with extensive evaluations across eight robustness benchmarks and four ViT architectures that AdaNCA, as a plug-and-play module, consistently improves the robustness of ViTs.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/96193"} +{"video_file": "BRW0MKJ7Rr_39027570.mp4", "openreview_id": "BRW0MKJ7Rr", "slideslive_id": 39027570, "venue": "nips2024", "title": "Action Gaps and Advantages in Continuous-Time Distributional Reinforcement Learning", "status": "Poster", "keywords": "distributional reinforcement learning;reinforcement learning;continuous time;advantage updating;stochastic differential equations", "tldr": "We establish theory governing the difficulty of policy optimization in high-decision-frequency distributional RL.", "abstract": "When decisions are made at high frequency, traditional reinforcement learning (RL) methods struggle to accurately estimate action values. 
In turn, their performance is inconsistent and often poor. Whether the performance of distributional RL (DRL) agents suffers similarly, however, is unknown. In this work, we establish that DRL agents are sensitive to the decision frequency. We prove that action-conditioned return distributions collapse to their underlying policy's return distribution as the decision frequency increases. We quantify the rate of collapse of these return distributions and exhibit that their statistics collapse at different rates. Moreover, we define distributional perspectives on action gaps and advantages. In particular, we introduce the superiority as a probabilistic generalization of the advantage---the core object of approaches to mitigating performance issues in high-frequency value-based RL. In addition, we build a superiority-based DRL algorithm. Through simulations in an option-trading domain, we validate that proper modeling of the superiority distribution produces improved controllers at high decision frequencies.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96191"} +{"video_file": "BRZYhVHvSg_39026282.mp4", "openreview_id": "BRZYhVHvSg", "slideslive_id": 39026282, "venue": "nips2024", "title": "Multi-Group Proportional Representation in Retrieval", "status": "Poster", "keywords": "Fairness;Proportional Representation;Multi-Group Fairness", "tldr": "We introduce a novel metric for ensuring multi-group proportional representation over sets of images. We apply this metric to retrieval and propose an algorithm that maximizes similarity under a multi-group proportional representation constraint.", "abstract": "Image search and retrieval tasks can perpetuate harmful stereotypes, erase cultural identities, and amplify social disparities. Current approaches to mitigate these representational harms balance the number of retrieved items across population groups defined by a small number of (often binary) attributes. However, most existing methods overlook intersectional groups determined by combinations of group attributes, such as gender, race, and ethnicity. We introduce Multi-Group Proportional Representation (MPR), a novel metric that measures representation across intersectional groups. We develop practical methods for estimating MPR, provide theoretical guarantees, and propose optimization algorithms to ensure MPR in retrieval. We demonstrate that existing methods optimizing for equal and proportional representation metrics may fail to promote MPR. Crucially, our work shows that optimizing MPR yields more proportional representation across multiple intersectional groups specified by a rich function class, often with minimal compromise in retrieval accuracy. 
Code is provided at https://github.com/alex-oesterling/multigroup-proportional-representation.", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/96190"}
{"video_file": "BZLdXBjB8O_39027408.mp4", "openreview_id": "BZLdXBjB8O", "slideslive_id": 39027408, "venue": "nips2024", "title": "CausalDiff: Causality-Inspired Disentanglement via Diffusion Model for Adversarial Defense", "status": "Poster", "keywords": "Adversarial Defense;Diffusion Model;Causal", "tldr": "We propose a causal diffusion model (CausalDiff) that adapts diffusion models for conditional data generation and disentangles the two types of causal factors for adversarial defense on the image classification task.", "abstract": "Despite ongoing efforts to defend neural classifiers from adversarial attacks, they remain vulnerable, especially to unseen attacks. In contrast, humans are difficult to be cheated by subtle manipulations, since we make judgments only based on essential factors. Inspired by this observation, we attempt to model label generation with essential label-causative factors and incorporate label-non-causative factors to assist data generation. For an adversarial example, we aim to discriminate the perturbations as non-causative factors and make predictions only based on the label-causative factors. Concretely, we propose a causal diffusion model (CausalDiff) that adapts diffusion models for conditional data generation and disentangles the two types of causal factors by learning towards a novel causal information bottleneck objective. Empirically, CausalDiff has significantly outperformed state-of-the-art defense methods on various unseen attacks, achieving an average robustness of 86.39% (+4.01%) on CIFAR-10, 56.25% (+3.13%) on CIFAR-100, and 82.62% (+4.93%) on GTSRB (German Traffic Sign Recognition Benchmark). The code is available at https://github.com/CAS-AISafetyBasicResearchGroup/CausalDiff.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/96186"}
{"video_file": "BgZcuEsYU8_39026039.mp4", "openreview_id": "BgZcuEsYU8", "slideslive_id": 39026039, "venue": "nips2024", "title": "Causal Inference in the Closed-Loop: Marginal Structural Models for Sequential Excursion Effects", "status": "Poster", "keywords": "marginal structural models;optogenetics;excursion effects;neuroscience;dynamic treatment regimes;micro-randomized trials;sequentially randomized experiments", "tldr": "We propose a non-parametric causal inference framework for closed-loop optogenetics behavioral experiments to enable excursion effect estimation for treatment sequences greater than length one in the presence of positivity violations.", "abstract": "Optogenetics is widely used to study the effects of neural circuit manipulation on behavior. However, the paucity of causal inference methodological work on this topic has resulted in analysis conventions that discard information, and constrain the scientific questions that can be posed. To fill this gap, we introduce a nonparametric causal inference framework for analyzing \"closed-loop\" designs, which use dynamic policies that assign treatment based on covariates. In this setting, standard methods can introduce bias and occlude causal effects. Building on the sequentially randomized experiments literature in causal inference, our approach extends history-restricted marginal structural models for dynamic regimes.
In practice, our framework can identify a wide range of causal effects of optogenetics on trial-by-trial behavior, such as fast/slow-acting, dose-response, additive/antagonistic, and floor/ceiling. Importantly, it does so without requiring negative controls, and can estimate how causal effect magnitudes evolve across time points. From another view, our work extends \"excursion effect\" methods---popular in the mobile health literature---to enable estimation of causal contrasts for treatment sequences greater than length one, in the presence of positivity violations. We derive rigorous statistical guarantees, enabling hypothesis testing of these causal effects. We demonstrate our approach on data from a recent study of dopaminergic activity on learning, and show how our method reveals relevant effects obscured in standard analyses.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/96183"}
{"video_file": "Bh0LLUp8OA_39025217.mp4", "openreview_id": "Bh0LLUp8OA", "slideslive_id": 39025217, "venue": "nips2024", "title": "Contracting with a Learning Agent", "status": "Poster", "keywords": "Contract Theory;Learning;No-Regret Learning;Mean-Based Learners", "tldr": "Optimal contract design for principals interacting with no-regret learning agents", "abstract": "Real-life contractual relations typically involve repeated interactions between the principal and agent, where, despite theoretical appeal, players rarely use complex dynamic strategies and instead manage uncertainty through learning algorithms.\nIn this paper, we initiate the study of repeated contracts with learning agents, focusing on those achieving no-regret outcomes. For the canonical setting where the agent\u2019s actions result in success or failure, we present a simple, optimal solution for the principal: Initially provide a linear contract with scalar $\\alpha > 0$, then switch to a zero-scalar contract. This shift causes the agent to \u201cfree-fall\u201d through their action space, yielding non-zero rewards for the principal at zero cost. Interestingly, despite the apparent exploitation, there are instances where our dynamic contract can make \\emph{both} players better off compared to the best static contract.\nWe then broaden the scope of our results to general linearly-scaled contracts, and, finally, to the best of our knowledge, we provide the first analysis of optimization against learning agents with uncertainty about the time horizon.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/96182"}
{"video_file": "BiikUm6pLu_39028521.mp4", "openreview_id": "BiikUm6pLu", "slideslive_id": 39028521, "venue": "nips2024", "title": "Truncated Variance Reduced Value Iteration", "status": "Poster", "keywords": "Markov Decision Processes (MDP);discounted MDP;value iteration;variance reduction", "tldr": "We provide faster randomized algorithms for computing an \u03b5-optimal policy in a discounted Markov decision process and take a step towards closing the sample-complexity gap between model-based and model-free methods.", "abstract": "We provide faster randomized algorithms for computing an $\\epsilon$-optimal policy in a discounted Markov decision process with $A_{\\mathrm{tot}}$-state-action pairs, bounded rewards, and discount factor $\\gamma$.
We provide an $\\tilde{O}(A_{\\mathrm{tot}}[(1-\\gamma)^{-3}\\epsilon^{-2}+(1-\\gamma)^{-2}])$-time algorithm in the sampling setting, where the probability transition matrix is unknown but accessible through a generative model which can be queried in $\\tilde{O}(1)$-time, and an $\\tilde{O}(s+(1-\\gamma)^{-2})$-time algorithm in the offline setting where the probability transition matrix is known and $s$-sparse. These results improve upon the prior state-of-the-art which either ran in $\\tilde{O}(A_{\\mathrm{tot}}[(1-\\gamma)^{-3}\\epsilon^{-2}+(1-\\gamma)^{-3}])$ time [Sidford, Wang, Wu, Ye 2018] in the sampling setting, $\\tilde{O}(s+A_{\\mathrm{tot}}(1-\\gamma)^{-3})$ time [Sidford, Wang, Wu, Yang, Ye 2018] in the offline setting, or time at least quadratic in the number of states using interior point methods for linear programming. We achieve our results by building upon prior stochastic variance-reduced value iteration methods [Sidford, Wang, Wu, Yang, Ye 2018]. We provide a variant that carefully truncates the progress of its iterates to improve the variance of new variance-reduced sampling procedures that we introduce to implement the steps. Our method is essentially model-free and can be implemented in $\\tilde{O}(A_{\\mathrm{tot}})$-space when given generative model access. Consequently, our results take a step in closing the sample-complexity gap between model-free and model-based methods.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96181"}
{"video_file": "Bj2CpB9Dey_39028844.mp4", "openreview_id": "Bj2CpB9Dey", "slideslive_id": 39028844, "venue": "nips2024", "title": "Tangent Space Causal Inference: Leveraging Vector Fields for Causal Discovery in Dynamical Systems", "status": "Poster", "keywords": "causal discovery;convergent cross mapping;manifolds;dynamical systems;differential geometry", "tldr": "We improve convergent cross mapping by explicitly considering vector fields instead of individual predictions.", "abstract": "Causal discovery with time series data remains a challenging yet increasingly important task across many scientific domains. Convergent cross mapping (CCM) and related methods have been proposed to study time series that are generated by dynamical systems, where traditional approaches like Granger causality are unreliable. However, CCM often yields inaccurate results depending upon the quality of the data. We propose the Tangent Space Causal Inference (TSCI) method for detecting causalities in dynamical systems. TSCI works by considering vector fields as explicit representations of the systems' dynamics and checks for the degree of synchronization between the learned vector fields. The TSCI approach is model-agnostic and can be used as a drop-in replacement for CCM and its generalizations. We first present a basic version of the TSCI algorithm, which is shown to be more effective than the basic CCM algorithm with very little additional computation. We additionally present augmented versions of TSCI that leverage the expressive power of latent variable models and deep learning.
We validate our theory on standard systems, and we demonstrate improved causal inference performance across a number of benchmark tasks.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/96180"}
{"video_file": "BmwcbNYkuH_39025543.mp4", "openreview_id": "BmwcbNYkuH", "slideslive_id": 39025543, "venue": "nips2024", "title": "Are nuclear masks all you need for improved out-of-domain generalisation? A closer look at cancer classification in histopathology", "status": "Poster", "keywords": "Deep learning;domain generalization;histopathology;computational pathology;digital pathology;computer vision;single domain generalization", "tldr": "Focusing on shape and organisation of nuclei (domain invariant features) leads to improved single domain generalisation and shows that nuclei have sufficient information to detect cancer.", "abstract": "Domain generalisation in computational histopathology is challenging because the images are substantially affected by differences among hospitals due to factors like fixation and staining of tissue and imaging equipment. We hypothesise that focusing on nuclei can improve the out-of-domain (OOD) generalisation in cancer detection. We propose a simple approach to improve OOD generalisation for cancer detection by focusing on nuclear morphology and organisation, as these are domain-invariant features critical in cancer detection. Our approach integrates original images with nuclear segmentation masks during training, encouraging the model to prioritise nuclei and their spatial arrangement. Going beyond mere data augmentation, we introduce a regularisation technique that aligns the representations of masks and original images. We show, using multiple datasets, that our method improves OOD generalisation and also leads to increased robustness to image corruptions and adversarial attacks. The source code is available at https://github.com/undercutspiky/SFL/", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96177"}
{"video_file": "BptJGaPn9C_39027861.mp4", "openreview_id": "BptJGaPn9C", "slideslive_id": 39027861, "venue": "nips2024", "title": "QWO: Speeding Up Permutation-Based Causal Discovery in LiGAMs", "status": "Poster", "keywords": "causal discovery;permutation-based;linear gaussian acyclic model;DAG learning", "tldr": "We propose a method that enhances the time complexity of permutation-based causal discovery approaches in linear gaussian acyclic models.", "abstract": "Causal discovery is essential for understanding relationships among variables of interest in many scientific domains. In this paper, we focus on permutation-based methods for learning causal graphs in Linear Gaussian Acyclic Models (LiGAMs), where the permutation encodes a causal ordering of the variables. Existing methods in this setting are not scalable due to their high computational complexity. These methods are comprised of two main components: (i) constructing a specific DAG, $G_{\\pi}$, for a given permutation $\\pi$, which represents the best structure that can be learned from the available data while adhering to $\\pi$, and (ii) searching over the space of permutations (i.e., causal orders) to minimize the number of edges in $G_{\\pi}$. We introduce QWO, a novel approach that significantly enhances the efficiency of computing $G_{\\pi}$ for a given permutation $\\pi$.
QWO has a speed-up of $O(n^2)$ ($n$ is the number of variables) compared to the state-of-the-art BIC-based method, making it highly scalable. We show that our method is theoretically sound and can be integrated into existing search strategies such as GRASP and hill-climbing-based methods to improve their performance.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/96175"}
{"video_file": "BrPZMOQiSN_39024819.mp4", "openreview_id": "BrPZMOQiSN", "slideslive_id": 39024819, "venue": "nips2024", "title": "SequentialAttention++ for Block Sparsification: Differentiable Pruning Meets Combinatorial Optimization", "status": "Poster", "keywords": "pruning;sparsification;sparse optimization;neural network", "tldr": "Combining differentiable pruning and combinatorial optimization for block-sparse pruning of DNNs", "abstract": "Neural network pruning is a key technique towards engineering large yet scalable, interpretable, and generalizable models. Prior work on the subject has developed largely along two orthogonal directions: (1) differentiable pruning for efficiently and accurately scoring the importance of parameters, and (2) combinatorial optimization for efficiently searching over the space of sparse models. We unite the two approaches, both theoretically and empirically, to produce a coherent framework for structured neural network pruning in which differentiable pruning guides combinatorial optimization algorithms to select the most important sparse set of parameters. Theoretically, we show how many existing differentiable pruning techniques can be understood as nonconvex regularization for group sparse optimization, and prove that for a wide class of nonconvex regularizers, the global optimum is unique, group-sparse, and provably yields an approximate solution to a sparse convex optimization problem. The resulting algorithm that we propose, SequentialAttention++, advances the state of the art in large-scale neural network block-wise pruning tasks on the ImageNet and Criteo datasets.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/96174"}
{"video_file": "BrvLTxEx08_39027024.mp4", "openreview_id": "BrvLTxEx08", "slideslive_id": 39027024, "venue": "nips2024", "title": "Learning Equilibria in Adversarial Team Markov Games: A Nonconvex-Hidden-Concave Min-Max Optimization Problem", "status": "Poster", "keywords": "MARL;Convex RL;Nash Equilibrium;Non-smooth Optimization;Minimax Optimization;Hidden Convexity;Nonconvex-nonconcave;Markov Games;Stochastic Games;Learning in Games", "tldr": "We develop a policy gradient method to learn a Nash equilibrium in a MARL setting which encompasses both cooperation and competition", "abstract": "We study the problem of learning a Nash equilibrium (NE) in Markov games which is a cornerstone in multi-agent reinforcement learning (MARL). In particular, we focus on infinite-horizon adversarial team Markov games (ATMGs) in which agents that share a common reward function compete against a single opponent, the adversary. These games unify two-player zero-sum Markov games and Markov potential games, resulting in a setting that encompasses both collaboration and competition. Kalogiannis et al. (2023) provided an efficient equilibrium computation algorithm for ATMGs which presumes knowledge of the reward and transition functions and has no sample complexity guarantees.
We contribute a learning algorithm that utilizes MARL policy gradient methods with iteration and sample complexity that is polynomial in the approximation error $\\epsilon$ and the natural parameters of the ATMG, resolving the main caveats of the solution by (Kalogiannis et al., 2023). It is worth noting that previously, the existence of learning algorithms for NE was known for Markov two-player zero-sum and potential games but not for ATMGs.\nSeen through the lens of min-max optimization, computing a NE in these games constitutes a nonconvex--nonconcave saddle-point problem. Min-max optimization has received extensive study. Nevertheless, the case of nonconvex--nonconcave landscapes remains elusive: in full generality, finding saddle-points is computationally intractable (Daskalakis et al., 2021). We circumvent the aforementioned intractability by developing techniques that exploit the hidden structure of the objective function via a nonconvex--concave reformulation. However, this introduces a challenge of a feasibility set with coupled constraints. We tackle these challenges by establishing novel techniques for optimizing weakly-smooth nonconvex functions, extending the framework of (Devolder et al., 2014).", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/96173"}
{"video_file": "C0EhyoPpTN_39025028.mp4", "openreview_id": "C0EhyoPpTN", "slideslive_id": 39025028, "venue": "nips2024", "title": "Inferring stochastic low-rank recurrent neural networks from neural data", "status": "Poster", "keywords": "Low-rank RNNs;dynamical systems;variational inference;sequential monte carlo;neural data", "tldr": "We fit low-rank RNNs to neural data using variational SMC, and obtain models that are both generative and have tractable low-dimensional dynamics.", "abstract": "A central aim in computational neuroscience is to relate the activity of large populations of neurons to an underlying dynamical system. Models of these neural dynamics should ideally be both interpretable and fit the observed data well. Low-rank recurrent neural networks (RNNs) exhibit such interpretability by having tractable dynamics. However, it is unclear how to best fit low-rank RNNs to data consisting of noisy observations of an underlying stochastic system. Here, we propose to fit stochastic low-rank RNNs with variational sequential Monte Carlo methods. We validate our method on several datasets consisting of both continuous and spiking neural data, where we obtain lower dimensional latent dynamics than current state of the art methods. Additionally, for low-rank models with piecewise linear nonlinearities, we show how to efficiently identify all fixed points in polynomial rather than exponential cost in the number of units, making analysis of the inferred dynamics tractable for large RNNs.
Our method both elucidates the dynamical systems underlying experimental recordings and provides a generative model whose trajectories match observed variability.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/96170"} +{"video_file": "C1d3VVfdVG_39026383.mp4", "openreview_id": "C1d3VVfdVG", "slideslive_id": 39026383, "venue": "nips2024", "title": "Unchosen Experts Can Contribute Too: Unleashing MoE Models\u2019 Power by Self-Contrast", "status": "Poster", "keywords": "Mixture-of-Experts;Self-Contrast;Text Generation", "tldr": "Enhancing Mixture-of-Experts models by utilizing unchosen experts in a self-contrast manner.", "abstract": "Mixture-of-Experts (MoE) has emerged as a prominent architecture for scaling model size while maintaining computational efficiency. In MoE, each token in the input sequence activates a different subset of experts determined by a routing mechanism. However, the unchosen experts in MoE models do not contribute to the output, potentially leading to underutilization of the model's capacity. In this work, we first conduct exploratory studies to demonstrate that increasing the number of activated experts does not necessarily improve and can even degrade the output quality. Then, we show that output distributions from an MoE model using different routing strategies substantially differ, indicating that different experts do not always act synergistically. Motivated by these findings, we propose Self-Contrast Mixture-of-Experts (SCMoE), a training-free strategy that utilizes unchosen experts in a self-contrast manner during inference. In SCMoE, the next-token probabilities are determined by contrasting the outputs from strong and weak activation using the same MoE model. Our method is conceptually simple and computationally lightweight, as it incurs minimal latency compared to greedy decoding. Experiments on several benchmarks (GSM8K, StrategyQA, MBPP and HumanEval) demonstrate that SCMoE can consistently enhance Mixtral 8x7B\u2019s reasoning capability across various domains. For example, it improves the accuracy on GSM8K from 61.79 to 66.94. Moreover, combining SCMoE with self-consistency yields additional gains, increasing major@20 accuracy from 75.59 to 78.31.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96169"} +{"video_file": "C1hiRbzEH9_39028355.mp4", "openreview_id": "C1hiRbzEH9", "slideslive_id": 39028355, "venue": "nips2024", "title": "Out-Of-Distribution Detection with Diversification (Provably)", "status": "Poster", "keywords": "OOD detection", "tldr": "Our theory and experiments demonstrate that training with diverse auxiliary outliers enhances OOD detection performance.", "abstract": "Out-of-distribution (OOD) detection is crucial for ensuring reliable deployment of machine learning models. Recent advancements focus on utilizing easily accessible auxiliary outliers (e.g., data from the web or other datasets) in training. However, we experimentally reveal that these methods still struggle to generalize their detection capabilities to unknown OOD data, due to the limited diversity of the auxiliary outliers collected. Therefore, we thoroughly examine this problem from the generalization perspective and demonstrate that a more diverse set of auxiliary outliers is essential for enhancing the detection capabilities. However, in practice, it is difficult and costly to collect sufficiently diverse auxiliary outlier data. 
Therefore, we propose a simple yet practical approach with a theoretical guarantee, termed Diversity-induced Mixup for OOD detection (diverseMix), which enhances the diversity of auxiliary outlier set for training in an efficient way. Extensive experiments show that diverseMix achieves superior performance on commonly used and recent challenging large-scale benchmarks, which further confirm the importance of the diversity of auxiliary outliers.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/96168"} +{"video_file": "C3ZHiij9QE_39026032.mp4", "openreview_id": "C3ZHiij9QE", "slideslive_id": 39026032, "venue": "nips2024", "title": "VLMimic: Vision Language Models are Visual Imitation Learner for Fine-grained Actions", "status": "Poster", "keywords": "Multimodal language models;Vision language models;Robotic manipulation;Code generation;Visual imitation learning", "tldr": "VLMimic is a novel visual imitation learning paradigm that leverages VLMs to directly learn skills with fine-grained action levels, from a limited number of human videos, outperforming baselines in both simulated and real-world experiments.", "abstract": "Visual imitation learning (VIL) provides an efficient and intuitive strategy for robotic systems to acquire novel skills. Recent advancements in Vision Language Models (VLMs) have demonstrated remarkable performance in vision and language reasoning capabilities for VIL tasks. Despite the progress, current VIL methods naively employ VLMs to learn high-level plans from human videos, relying on pre-defined motion primitives for executing physical interactions, which remains a major bottleneck. In this work, we present VLMimic, a novel paradigm that harnesses VLMs to directly learn even fine-grained action levels, only given a limited number of human videos. Specifically, VLMimic first grounds object-centric movements from human videos, and learns skills using hierarchical constraint representations, facilitating the derivation of skills with fine-grained action levels from limited human videos. These skills are refined and updated through an iterative comparison strategy, enabling efficient adaptation to unseen environments. Our extensive experiments exhibit that our VLMimic, using only 5 human videos, yields significant improvements of over 27% and 21% in RLBench and real-world manipulation tasks, and surpasses baselines by more than 37% in long-horizon tasks. Code and videos are available on our anonymous homepage.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/96165"} +{"video_file": "C3tEX45hJX_39027737.mp4", "openreview_id": "C3tEX45hJX", "slideslive_id": 39027737, "venue": "nips2024", "title": "Diffusion Spectral Representation for Reinforcement Learning", "status": "Poster", "keywords": "Diffusion Models;Reinforcement Learning;Representation Learning", "tldr": "We propose an algorithm that harnesses the flexibility of diffusion models for representation learning to achieve efficient policy optimization, while avoiding the time-consuming sampling process typically associated with diffusion models.", "abstract": "Diffusion-based models have achieved notable empirical successes in reinforcement learning (RL) due to their expressiveness in modeling complex distributions. 
Despite existing methods being promising, the key challenge of extending existing methods for broader real-world applications lies in the computational cost at inference time, i.e., sampling from a diffusion model is considerably slow as it often requires tens to hundreds of iterations to generate even one sample. To circumvent this issue, we propose to leverage the flexibility of diffusion models for RL from a representation learning perspective. In particular, by exploiting the connection between diffusion models and energy-based models, we develop Diffusion Spectral Representation (Diff-SR), a coherent algorithm framework that enables extracting sufficient representations for value functions in Markov decision processes (MDP) and partially observable Markov decision processes (POMDP). We further demonstrate how Diff-SR facilitates efficient policy optimization and practical algorithms while explicitly bypassing the difficulty and inference cost of sampling from the diffusion model. Finally, we provide comprehensive empirical studies to verify the benefits of Diff-SR in delivering robust and advantageous performance across various benchmarks with both fully and partially observable settings.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96164"} +{"video_file": "C4SInFLvuB_39027260.mp4", "openreview_id": "C4SInFLvuB", "slideslive_id": 39027260, "venue": "nips2024", "title": "Reshuffling Resampling Splits Can Improve Generalization of Hyperparameter Optimization", "status": "Poster", "keywords": "Hyperparameter Optimization;Generalization Performance;Cross-Validation;Resampling;Validation Splits;Model Selection;Automated Machine Learning", "tldr": "We propose to reshuffle resampling splits during hyperparameter optimization to improve generalization performance, demonstrating its effectiveness through theoretical analysis, simulations and benchmark experiments.", "abstract": "Hyperparameter optimization is crucial for obtaining peak performance of machine learning models. The standard protocol evaluates various hyperparameter configurations using a resampling estimate of the generalization error to guide optimization and select a final hyperparameter configuration. Without much evidence, paired resampling splits, i.e., either a fixed train-validation split or a fixed cross-validation scheme, are often recommended. We show that, surprisingly, reshuffling the splits for every configuration often improves the final model's generalization performance on unseen data. Our theoretical analysis explains how reshuffling affects the asymptotic behavior of the validation loss surface and provides a bound on the expected regret in the limiting regime. This bound connects the potential benefits of reshuffling to the signal and noise characteristics of the underlying optimization problem. We confirm our theoretical results in a controlled simulation study and demonstrate the practical usefulness of reshuffling in a large-scale, realistic hyperparameter optimization experiment. 
While reshuffling leads to test performances that are competitive with using fixed splits, it drastically improves results for a single train-validation holdout protocol and can often make holdout become competitive with standard CV while being computationally cheaper.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/96162"} +{"video_file": "C4zmR2kyP8_39026377.mp4", "openreview_id": "C4zmR2kyP8", "slideslive_id": 39026377, "venue": "nips2024", "title": "Stabilizing Zero-Shot Prediction: A Novel Antidote to Forgetting in Continual Vision-Language Tasks", "status": "Poster", "keywords": "Vision-language Learning; Continual Learning;", "tldr": "We thoroughly investigate the link between model stability in zero-shot predictions and anti-forgetting capabilities, and propose a novel replay-free method with the EMA-LoRA architecture to enhance continual learning.", "abstract": "Continual learning (CL) empowers pre-trained vision-language (VL) models to efficiently adapt to a sequence of downstream tasks. However, these models often encounter challenges in retaining previously acquired skills due to parameter shifts and limited access to historical data. In response, recent efforts focus on devising specific frameworks and various replay strategies, striving for a typical learning-forgetting trade-off. Surprisingly, both our empirical research and theoretical analysis demonstrate that the stability of the model in consecutive zero-shot predictions serves as a reliable indicator of its anti-forgetting capabilities for previously learned tasks. Motivated by these insights, we develop a novel replay-free CL method named ZAF (Zero-shot Antidote to Forgetting), which preserves acquired knowledge through a zero-shot stability regularization applied to wild data in a plug-and-play manner. To enhance efficiency in adapting to new tasks and seamlessly access historical models, we introduce a parameter-efficient EMA-LoRA neural architecture based on the Exponential Moving Average (EMA). ZAF utilizes new data for low-rank adaptation (LoRA), complemented by a zero-shot antidote on wild data, effectively decoupling learning from forgetting. Our extensive experiments demonstrate ZAF's superior performance and robustness in pre-trained models across various continual VL concept learning tasks, achieving leads of up to 3.70%, 4.82%, and 4.38%, along with at least a 10x acceleration in training speed on three benchmarks, respectively. Additionally, our zero-shot antidote significantly reduces forgetting in existing models by at least 6.37%. Our code is available at https://github.com/Zi-Jian-Gao/Stabilizing-Zero-Shot-Prediction-ZAF.", "primary_area": "online_learning", "site": "https://neurips.cc/virtual/2024/poster/96161"} +{"video_file": "CEnoUjEqNx_39028204.mp4", "openreview_id": "CEnoUjEqNx", "slideslive_id": 39028204, "venue": "nips2024", "title": "Convergence of No-Swap-Regret Dynamics in Self-Play", "status": "Poster", "keywords": "online learning;game theory;dynamics", "tldr": "We study convergence properties of no-swap-regret dynamics in zero-sum-games.", "abstract": "In this paper, we investigate the question of whether no-swap-regret dynamics have stronger convergence properties in repeated games than regular no-external-regret dynamics. 
We prove that in almost all symmetric zero-sum games under symmetric initializations of the agents, no-swap-regret dynamics in self-play are guaranteed to converge in a strong ``frequent-iterate'' sense to the Nash equilibrium: in all but a vanishing fraction of the rounds, the players must play a strategy profile close to a symmetric Nash equilibrium. Remarkably, relaxing any of these three constraints, i.e. by allowing either i) asymmetric initial conditions, or ii) an asymmetric game or iii) no-external regret dynamics suffices to destroy this result and lead to complex non-equilibrating or even chaotic behavior.\nIn a dual type of result, we show that the power of no-swap-regret dynamics comes at a cost of imposing a time-asymmetry on its inputs. While no-external-regret dynamics can be completely determined by the cumulative reward vector received by each player, we show there does not exist any general no-swap-regret dynamics defined on the same state space. In fact, we prove that any no-swap-regret learning algorithm must play a time-asymmetric function over the set of previously observed rewards, ruling out any dynamics based on a symmetric function of the current set of rewards.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/96155"} +{"video_file": "CIHdlhfrOo_39024658.mp4", "openreview_id": "CIHdlhfrOo", "slideslive_id": 39024658, "venue": "nips2024", "title": "Self-Supervised Adversarial Training via Diverse Augmented Queries and Self-Supervised Double Perturbation", "status": "Poster", "keywords": "Self-supervised Learning;Adversarial Training", "tldr": "This paper provides a method to enhance self-supervised adversarial training", "abstract": "Recently, there have been some works studying self-supervised adversarial training, a learning paradigm that learns robust features without labels. While those works have narrowed the performance gap between self-supervised adversarial training (SAT) and supervised adversarial training (supervised AT), a well-established formulation of SAT and its connections with supervised AT are under-explored. Based on a simple SAT benchmark, we find that SAT still faces the problem of large robust generalization gap and degradation on natural samples. We hypothesize this is due to the lack of data complexity and model regularization and propose a method named as DAQ-SDP (Diverse Augmented Queries Self-supervised Double Perturbation). We first challenge the previous conclusion that complex data augmentations degrade robustness in SAT by using diversely augmented samples as queries to guide adversarial training. Inspired by previous works in supervised AT, we then incorporate a self-supervised double perturbation scheme to self-supervised learning (SSL), which promotes robustness transferable to downstream classification. Our work can be seamlessly combined with models pretrained by different SSL frameworks without revising the learning objectives and helps to bridge the gap between SAT and AT. Our method also improves both robust and natural accuracies across different SSL frameworks. 
Our code is available at https://github.com/rzzhang222/DAQ-SDP.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96153"} +{"video_file": "CL9k2PaUQb_39027949.mp4", "openreview_id": "CL9k2PaUQb", "slideslive_id": 39027949, "venue": "nips2024", "title": "The Surprising Effectiveness of SP Voting with Partial Preferences", "status": "Poster", "keywords": "Surprisingly Popular Algorithm;Preference Aggregation;Partial Rankings", "tldr": "We extend surprisingly popular algorithm to partial preferences, and evaluate our approach through crowdsourcing.", "abstract": "We consider the problem of recovering the ground truth ordering (ranking, top-$k$, or others) over a large number of alternatives. The wisdom of crowd is a heuristic approach based on Condorcet's Jury theorem to address this problem through collective opinions. This approach fails to recover the ground truth when the majority of the crowd is misinformed. The \\emph{surprisingly popular} (SP) algorithm~\\citep{prelec2017solution} is an alternative approach that is able to recover the ground truth even when experts are in minority. The SP algorithm requires the voters to predict other voters' report in the form of a full probability distribution over all rankings of alternatives. However, when the number of alternatives, $m$, is large, eliciting the prediction report or even the vote over $m$ alternatives might be too costly. In this paper, we design a scalable alternative of the SP algorithm which only requires eliciting partial preferences from the voters, and propose new variants of the SP algorithm. In particular, we propose two versions---\\emph{Aggregated-SP} and \\emph{Partial-SP}---that ask voters to report vote and prediction on a subset of size $k$ ($\\ll m$) in terms of top alternative, partial rank, or an approval set. Through a large-scale crowdsourcing experiment on MTurk, we show that both of our approaches outperform conventional preference aggregation algorithms for the recovery of ground truth rankings, when measured in terms of Kendall-Tau distance and Spearman's $\\rho$. We further analyze the collected data and demonstrate that voters' behavior in the experiment, including the minority of the experts, and the SP phenomenon, can be correctly simulated by a concentric mixtures of Mallows model. Finally, we provide theoretical bounds on the sample complexity of SP algorithms with partial rankings to demonstrate the theoretical guarantees of the proposed methods.", "primary_area": "machine_learning_for_social_sciences", "site": "https://neurips.cc/virtual/2024/poster/96149"} +{"video_file": "CTIFk7b9jU_39027596.mp4", "openreview_id": "CTIFk7b9jU", "slideslive_id": 39027596, "venue": "nips2024", "title": "Bidirectional Recurrence for Cardiac Motion Tracking with Gaussian Process Latent Coding", "status": "Poster", "keywords": "Medical Image Analysis;Cardiac Motion Tracking", "tldr": "Use bidirectional Recurrent Manner Equip with Gaussian Process for Cardiac Motion Tracking", "abstract": "Quantitative analysis of cardiac motion is crucial for assessing cardiac function. This analysis typically uses imaging modalities such as MRI and Echocardiograms that capture detailed image sequences throughout the heartbeat cycle. Previous methods predominantly focused on the analysis of image pairs lacking consideration of the motion dynamics and spatial variability. Consequently, these methods often overlook the long-term relationships and regional motion characteristic of cardiac. 
To overcome these limitations, we introduce the GPTrack, a novel unsupervised framework crafted to fully explore the temporal and spatial dynamics of cardiac motion. The GPTrack enhances motion tracking by employing the sequential Gaussian Process in the latent space and encoding statistics by spatial information at each time stamp, which robustly promotes temporal consistency and spatial variability of cardiac dynamics. Also, we innovatively aggregate sequential information in a bidirectional recursive manner, mimicking the behavior of diffeomorphic registration to better capture consistent long-term relationships of motions across cardiac regions such as the ventricles and atria. Our GPTrack significantly improves the precision of motion tracking in both 3D and 4D medical images while maintaining computational efficiency. The code is available at: https://github.com/xmed-lab/GPTrack.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/96144"} +{"video_file": "CTvxvAcSJN_39024440.mp4", "openreview_id": "CTvxvAcSJN", "slideslive_id": 39024440, "venue": "nips2024", "title": "SceneCraft: Layout-Guided 3D Scene Generation", "status": "Poster", "keywords": "3D Scene Generation; 3D Content Generation", "tldr": "We propose a novel layout-guided 3D scene generation pipeline to generate complicated indoor scenes adhering to user specifications.", "abstract": "The creation of complex 3D scenes tailored to user specifications has been a tedious and challenging task with traditional 3D modeling tools. Although some pioneering methods have achieved automatic text-to-3D generation, they are generally limited to small-scale scenes with restricted control over the shape and texture. We introduce SceneCraft, a novel method for generating detailed indoor scenes that adhere to textual descriptions and spatial layout preferences provided by users. Central to our method is a rendering-based technique, which converts 3D semantic layouts into multi-view 2D proxy maps. Furthermore, we design a semantic and depth conditioned diffusion model to generate multi-view images, which are used to learn a neural radiance field (NeRF) as the final scene representation. Without the constraints of panorama image generation, we surpass previous methods in supporting complicated indoor space generation beyond a single room, even as complicated as a whole multi-bedroom apartment with irregular shapes and layouts. Through experimental analysis, we demonstrate that our method significantly outperforms existing approaches in complex indoor scene generation with diverse textures, consistent geometry, and realistic visual quality.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96143"} +{"video_file": "Cb3kcwYBgw_39027912.mp4", "openreview_id": "Cb3kcwYBgw", "slideslive_id": 39027912, "venue": "nips2024", "title": "Spatio-Spectral Graph Neural Networks", "status": "Poster", "keywords": "Graph Neural Networks;long-range interactions", "tldr": "We propose Spatio-Spectral Graph Neural Networks (S^2GNNs) that have a global receptive field, vanquish over-squashing and deliver strong empirical results on small and large graphs.", "abstract": "Spatial Message Passing Graph Neural Networks (MPGNNs) are widely used for learning on graph-structured data. 
However, key limitations of \u2113-step MPGNNs are that their \"receptive field\" is typically limited to the \u2113-hop neighborhood of a node and that information exchange between distant nodes is limited by over-squashing. Motivated by these limitations, we propose Spatio-Spectral Graph Neural Networks (S\u00b2GNNs) \u2013 a new modeling paradigm for Graph Neural Networks (GNNs) that synergistically combines spatially and spectrally parametrized graph filters. Parameterizing filters partially in the frequency domain enables global yet efficient information propagation. We show that S\u00b2GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs. Further, rethinking graph convolutions at a fundamental level unlocks new design spaces. For example, S\u00b2GNNs allow for free positional encodings that make them strictly more expressive than the 1-Weisfeiler-Leman (WL) test. Moreover, to obtain general-purpose S\u00b2GNNs, we propose spectrally parametrized filters for directed graphs. S\u00b2GNNs outperform spatial MPGNNs, graph transformers, and graph rewirings, e.g., on the peptide long-range benchmark tasks, and are competitive with state-of-the-art sequence modeling. On a 40 GB GPU, S\u00b2GNNs scale to millions of nodes.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/96137"} +{"video_file": "CbHz30KeA4_39024971.mp4", "openreview_id": "CbHz30KeA4", "slideslive_id": 39024971, "venue": "nips2024", "title": "Taming \"data-hungry\" reinforcement learning? Stability in continuous state-action spaces", "status": "Poster", "keywords": "reinforcement learning;continuous control;stability analysis", "tldr": "We introduce an RL framework for continuous state-action spaces with faster convergence rates than previous ones. Key to this are stability conditions of the Bellman operator and occupation measures that are prevalent in continuous domain MDPs.", "abstract": "We introduce a novel framework for analyzing reinforcement learning (RL) in continuous state-action spaces, and use it to prove fast rates of convergence in both off-line and on-line settings. Our analysis highlights two key stability properties, relating to how changes in value functions and/or policies affect the Bellman operator and occupation measures. We argue that these properties are satisfied in many continuous state-action Markov decision processes. Our analysis also offers fresh perspectives on the roles of pessimism and optimism in off-line and on-line RL.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96136"} +{"video_file": "Cc0ckJlJF2_39025272.mp4", "openreview_id": "Cc0ckJlJF2", "slideslive_id": 39025272, "venue": "nips2024", "title": "Reward Machines for Deep RL in Noisy and Uncertain Environments", "status": "Poster", "keywords": "Reward Machines;LTL;Linear Temporal Logic;Automata;RL;Reinforcement Learning;Formal Language", "tldr": "We investigate the use of Reward Machines in deep RL under an uncertain interpretation of the domain-specific vocabulary.", "abstract": "Reward Machines provide an automaton-inspired structure for specifying instructions, safety constraints, and other temporally extended reward-worthy behaviour. By exposing the underlying structure of a reward function, they enable the decomposition of an RL task, leading to impressive gains in sample efficiency. 
Although Reward Machines and similar formal specifications have a rich history of application towards sequential decision-making problems, prior frameworks have traditionally ignored ambiguity and uncertainty when interpreting the domain-specific vocabulary forming the building blocks of the reward function. Such uncertainty critically arises in many real-world settings due to factors like partial observability or noisy sensors. In this work, we explore the use of Reward Machines for Deep RL in noisy and uncertain environments. We characterize this problem as a POMDP and propose a suite of RL algorithms that exploit task structure under uncertain interpretation of the domain-specific vocabulary. Through theory and experiments, we expose pitfalls in naive approaches to this problem while simultaneously demonstrating how task structure can be successfully leveraged under noisy interpretations of the vocabulary.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96134"} +{"video_file": "CcNw4mVIxo_39028320.mp4", "openreview_id": "CcNw4mVIxo", "slideslive_id": 39028320, "venue": "nips2024", "title": "Spiking Neural Network as Adaptive Event Stream Slicer", "status": "Poster", "keywords": "Event-based Camera;Spiking Neural Network;Object Tracking;Image Recognition", "tldr": "A novel-designed event processing framework capable of splitting events stream in an adaptive manner.", "abstract": "Event-based cameras are attracting significant interest as they provide rich edge information, high dynamic range, and high temporal resolution. Many state-of-the-art event-based algorithms rely on splitting the events into fixed groups, resulting in the omission of crucial temporal information, particularly when dealing with diverse motion scenarios (e.g., high/low speed). In this work, we propose SpikeSlicer, a novel-designed event processing framework capable of splitting events stream adaptively. SpikeSlicer utilizes a low-energy spiking neural network (SNN) to trigger event slicing. To guide the SNN to fire spikes at optimal time steps, we propose the Spiking Position-aware Loss (SPA-Loss) to modulate the neuron's state. Additionally, we develop a Feedback-Update training strategy that refines the slicing decisions using feedback from the downstream artificial neural network (ANN). Extensive experiments demonstrate that our method yields significant performance improvements in event-based object tracking and recognition. 
Notably, SpikeSlicer provides a brand-new SNN-ANN cooperation paradigm, where the SNN acts as an efficient, low-energy data processor to assist the ANN in improving downstream performance, injecting new perspectives and potential avenues of exploration.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/96133"} +{"video_file": "CeOwahuQic_39025179.mp4", "openreview_id": "CeOwahuQic", "slideslive_id": 39025179, "venue": "nips2024", "title": "Can Large Language Model Agents Simulate Human Trust Behavior?", "status": "Poster", "keywords": "LLM Agent;Human Simulation;Behavioral Alignment;Trust Games", "tldr": "We discover that LLM agents generally exhibit trust behavior in Trust Games and GPT-4 agents manifest high behavioral alignment with humans in terms of trust behavior, indicating the potential to simulate human trust behavior with LLM agents.", "abstract": "Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in social science and role-playing applications. However, one fundamental question remains: can LLM agents really simulate human behavior? In this paper, we focus on one critical and elemental behavior in human interactions, trust, and investigate whether LLM agents can simulate human trust behavior. We first find that LLM agents generally exhibit trust behavior, referred to as agent trust, under the framework of Trust Games, which are widely recognized in behavioral economics. Then, we discover that GPT-4 agents manifest high behavioral alignment with humans in terms of trust behavior, indicating the feasibility of simulating human trust behavior with LLM agents. In addition, we probe the biases of agent trust and differences in agent trust towards other LLM agents and humans. We also explore the intrinsic properties of agent trust under conditions including external manipulations and advanced reasoning strategies. Our study provides new insights into the behaviors of LLM agents and the fundamental analogy between LLMs and humans beyond value alignment. We further illustrate broader implications of our discoveries for applications where trust is paramount.", "primary_area": "machine_learning_for_social_sciences", "site": "https://neurips.cc/virtual/2024/poster/96131"} +{"video_file": "CehOqpvOxG_39028837.mp4", "openreview_id": "CehOqpvOxG", "slideslive_id": 39028837, "venue": "nips2024", "title": "Fair Kernel K-Means: from Single Kernel to Multiple Kernel", "status": "Poster", "keywords": "kernel k-means;multiple kernel k-means;fair clustering", "tldr": "This paper proposes a new fairness regularization term and plug it into kernel k-means framework, leading to novel fair kernel k-means and fair multiple kernel k-means.", "abstract": "Kernel k-means has been widely studied in machine learning. However, existing kernel k-means methods often ignore the \\textit{fairness} issue, which may cause discrimination. To address this issue, in this paper, we propose a novel Fair Kernel K-Means (FKKM) framework. In this framework, we first propose a new fairness regularization term that can lead to a fair partition of data. The carefully designed fairness regularization term has a similar form to the kernel k-means which can be seamlessly integrated into the kernel k-means framework. Then, we extend this method to the multiple kernel setting, leading to a Fair Multiple Kernel K-Means (FMKKM) method. 
We also provide some theoretical analysis of the generalization error bound, and based on this bound we give a strategy to set the hyper-parameter, which makes the proposed methods easy to use. At last, we conduct extensive experiments on both the single kernel and multiple kernel settings to compare the proposed methods with state-of-the-art methods to demonstrate their effectiveness.", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/96130"} +{"video_file": "CgGjT8EG8A_39028125.mp4", "openreview_id": "CgGjT8EG8A", "slideslive_id": 39028125, "venue": "nips2024", "title": "Universal Exact Compression of Differentially Private Mechanisms", "status": "Poster", "keywords": "Differential Privacy;Channel Simulation;Federated Learning;Communication;Poisson Process", "tldr": "We provide the first mechanism that compresses and simulates any randomizer while preserving local differential privacy, achieving near-optimal compression sizes, and ensuring no distortion is introduced to the reproduced distribution.", "abstract": "To reduce the communication cost of differential privacy mechanisms, we introduce a novel construction, called Poisson private representation (PPR), designed to compress and simulate any local randomizer while ensuring local differential privacy. Unlike previous simulation-based local differential privacy mechanisms, PPR exactly preserves the joint distribution of the data and the output of the original local randomizer. Hence, the PPR-compressed privacy mechanism retains all desirable statistical properties of the original privacy mechanism such as unbiasedness and Gaussianity. Moreover, PPR achieves a compression size within a logarithmic gap from the theoretical lower bound. Using the PPR, we give a new order-wise trade-off between communication, accuracy, central and local differential privacy for distributed mean estimation. Experiment results on distributed mean estimation show that PPR consistently gives a better trade-off between communication, accuracy and central differential privacy compared to the coordinate subsampled Gaussian mechanism, while also providing local differential privacy.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96129"} +{"video_file": "Ci7II4CPwm_39024637.mp4", "openreview_id": "Ci7II4CPwm", "slideslive_id": 39024637, "venue": "nips2024", "title": "Fast Proxy Experiment Design for Causal Effect Identification", "status": "Poster", "keywords": "Causal Inference;Identifiability;Experiment design", "tldr": "We present novel, highly efficient algorithms for designing minimum-cost proxy experiments (interventions) to identify causal effects, significantly outperforming the s.o.t.a.", "abstract": "Identifying causal effects is a key problem of interest across many disciplines. The two long-standing approaches to estimate causal effects are observational and experimental (randomized) studies. Observational studies can suffer from unmeasured confounding, which may render the causal effects unidentifiable. On the other hand, direct experiments on the target variable may be too costly or even infeasible to conduct. A middle ground between these two approaches is to estimate the causal effect of interest through proxy experiments, which are conducted on variables with a lower cost to intervene on compared to the main target. 
In an earlier work, we studied this setting and demonstrated that the problem of designing the optimal (minimum-cost) experiment for causal effect identification is NP-complete and provided a naive algorithm that may require solving exponentially many NP-hard problems as a sub-routine in the worst case. In this work, we provide a few reformulations of the problem that allow for designing significantly more efficient algorithms to solve it as witnessed by our extensive simulations. Additionally, we study the closely-related problem of designing experiments that enable us to identify a given effect through valid adjustments sets.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/96127"} +{"video_file": "CluvZBfrjj_39026076.mp4", "openreview_id": "CluvZBfrjj", "slideslive_id": 39026076, "venue": "nips2024", "title": "From Instance Training to Instruction Learning: Task Adapters Generation from Instructions", "status": "Poster", "keywords": "Hypernetwork;Generalization;Instruction Learning", "tldr": "The paper introduces TAGI, a novel approach that enables large language models to learn from instructions rather than extensive data instances, significantly enhancing their adaptability and efficiency in real-world tasks.", "abstract": "Large language models (LLMs) have acquired the ability to solve general tasks by utilizing instruction finetuning (IFT). However, IFT still relies heavily on instance training of extensive task data, which greatly limits the adaptability of LLMs to real-world scenarios where labeled task instances are scarce and broader task generalization becomes paramount. Contrary to LLMs, humans acquire skills and complete tasks not merely through repeated practice but also by understanding and following instructional guidelines. This paper is dedicated to simulating human learning to address the shortcomings of instance training, focusing on instruction learning to enhance cross-task generalization. Within this context, we introduce Task Adapters Generation from Instructions (TAGI), which automatically constructs the task-specific model in a parameter generation manner based on the given task instructions without retraining for unseen tasks. Specifically, we utilize knowledge distillation to enhance the consistency between TAGI developed through Learning with Instruction and task-specific models developed through Training with Instance, by aligning the labels, output logits, and adapter parameters between them. TAGI is endowed with cross-task generalization capabilities through a two-stage training process that includes hypernetwork pretraining and finetuning. We evaluate TAGI on the Super-Natural Instructions and P3 datasets. The experimental results demonstrate that TAGI can match or even outperform traditional meta-trained models and other hypernetwork models, while significantly reducing computational requirements. 
Our code will be available at https://github.com/Xnhyacinth/TAGI.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/96123"} +{"video_file": "CovjSQmNOD_39027982.mp4", "openreview_id": "CovjSQmNOD", "slideslive_id": 39027982, "venue": "nips2024", "title": "ODGS: 3D Scene Reconstruction from Omnidirectional Images with 3D Gaussian Splattings", "status": "Poster", "keywords": "3D scene reconstruction;3D Gaussian splatting;Omnidirectional images", "tldr": "We develop a new method for 3D scene reconstruction from omnidirectional images via 3D Gaussian Splattings.", "abstract": "Omnidirectional (or 360-degree) images are increasingly being used for 3D applications since they allow the rendering of an entire scene with a single image. Existing works based on neural radiance fields demonstrate successful 3D reconstruction quality on egocentric videos, yet they suffer from long training and rendering times. Recently, 3D Gaussian splatting has gained attention for its fast optimization and real-time rendering. However, directly using a perspective rasterizer to omnidirectional images results in severe distortion due to the different optical properties between the two image domains. In this work, we present ODGS, a novel rasterization pipeline for omnidirectional images with geometric interpretation. For each Gaussian, we define a tangent plane that touches the unit sphere and is perpendicular to the ray headed toward the Gaussian center. We then leverage a perspective camera rasterizer to project the Gaussian onto the corresponding tangent plane. The projected Gaussians are transformed and combined into the omnidirectional image, finalizing the omnidirectional rasterization process. This interpretation reveals the implicit assumptions within the proposed pipeline, which we verify through mathematical proofs. The entire rasterization process is parallelized using CUDA, achieving optimization and rendering speeds 100 times faster than NeRF-based methods. Our comprehensive experiments highlight the superiority of ODGS by delivering the best reconstruction and perceptual quality across various datasets. Additionally, results on roaming datasets demonstrate that ODGS effectively restores fine details, even when reconstructing large 3D scenes. The source code is available on our project page (https://github.com/esw0116/ODGS).", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96122"} +{"video_file": "Cp7HD618bd_39024921.mp4", "openreview_id": "Cp7HD618bd", "slideslive_id": 39024921, "venue": "nips2024", "title": "A Metalearned Neural Circuit for Nonparametric Bayesian Inference", "status": "Poster", "keywords": "Nonparametric Bayes;metalearning;amortized inference", "tldr": "We introduce a metalearned neural network that captures the inductive bias of nonparametric Bayesian models.", "abstract": "Most applications of machine learning to classification assume a closed set of balanced classes. This is at odds with the real world, where class occurrence statistics often follow a long-tailed power-law distribution and it is unlikely that all classes are seen in a single sample. Nonparametric Bayesian models naturally capture this phenomenon, but have significant practical barriers to widespread adoption, namely implementation complexity and computational inefficiency. To address this, we present a method for extracting the inductive bias from a nonparametric Bayesian model and transferring it to an artificial neural network. 
By simulating data with a nonparametric Bayesian prior, we can metalearn a sequence model that performs inference over an unlimited set of classes. After training, this \"neural circuit\" has distilled the corresponding inductive bias and can successfully perform sequential inference over an open set of classes. Our experimental results show that the metalearned neural circuit achieves comparable or better performance than particle filter-based methods for inference in these models while being faster and simpler to use than methods that explicitly incorporate Bayesian nonparametric inference.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/96121"} +{"video_file": "Cr2jEHJB9q_39024382.mp4", "openreview_id": "Cr2jEHJB9q", "slideslive_id": 39024382, "venue": "nips2024", "title": "Scaling Law for Time Series Forecasting", "status": "Poster", "keywords": "Time series forecasting;Scaling law;Theory", "tldr": "Our research proposes a novel theory for scaling laws in time series forecasting, addressing anomalies observed in previous studies and emphasizing dataset size, model complexity, and forecast horizon in deep learning methodologies.", "abstract": "Scaling law that rewards large datasets, complex models and enhanced data granularity has been observed in various fields of deep learning. Yet, studies on time series forecasting have cast doubt on scaling behaviors of deep learning methods for time series forecasting: while more training data improves performance, more capable models do not always outperform less capable models, and longer input horizon may hurt performance for some models. We propose a theory for scaling law for time series forecasting that can explain these seemingly abnormal behaviors. We take into account the impact of dataset size and model complexity, as well as time series data granularity, particularly focusing on the look-back horizon, an aspect that has been unexplored in previous theories. Furthermore, we empirically evaluate various models using a diverse set of time series forecasting datasets, which (1) verifies the validity of scaling law on dataset size and model complexity within the realm of time series forecasting, and (2) validates our theoretical framework, particularly regarding the influence of look back horizon. We hope our findings may inspire new models targeting time series forecasting datasets of limited size, as well as large foundational datasets and models for time series forecasting in future works.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/96119"} +{"video_file": "CrADAX7h23_39028109.mp4", "openreview_id": "CrADAX7h23", "slideslive_id": 39028109, "venue": "nips2024", "title": "DAGER: Exact Gradient Inversion for Large Language Models", "status": "Poster", "keywords": "Federated Learning;Exact Gradient Inversion;Gradient Leakage;Privacy;Language Model;LLM;Attack", "tldr": "We introduce the first exact gradient leakage attack for batches of sequences on LLMs and show that it works on significantly larger token sequences and batch sizes while being faster and more precise than prior work.", "abstract": "Federated learning works by aggregating locally computed gradients from multiple clients, thus enabling collaborative training without sharing private client data. However, prior work has shown that the data can actually be recovered by the server using so-called gradient inversion attacks. 
While these attacks perform well when applied on images, they are limited in the text domain and only permit approximate reconstruction of small batches and short input sequences. In this work, we propose DAGER, the first algorithm to recover whole batches of input text exactly. DAGER leverages the low-rank structure of self-attention layer gradients and the discrete nature of token embeddings to efficiently check if a given token sequence is part of the client data. We use this check to exactly recover full batches in the honest-but-curious setting without any prior on the data for both encoder and decoder-based architectures using exhaustive heuristic search and a greedy approach, respectively. We provide an efficient GPU implementation of DAGER and show experimentally that it recovers full batches of size up to 128 on large language models (LLMs), beating prior attacks in speed (20x at same batch size), scalability (10x larger batches), and reconstruction quality (ROUGE-1/2 > 0.99).", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96118"} +{"video_file": "Cw7Agrr8GJ_39024574.mp4", "openreview_id": "Cw7Agrr8GJ", "slideslive_id": 39024574, "venue": "nips2024", "title": "Large Language Models-guided Dynamic Adaptation for Temporal Knowledge Graph Reasoning", "status": "Poster", "keywords": "Large Language Model;Temporal Knowledge Graph;Knowledge Graph Reasoning", "tldr": "We propose a Dynamic Adaptation method to guide LLMs for rule-based TKGR tasks.", "abstract": "Temporal Knowledge Graph Reasoning (TKGR) is the process of utilizing temporal information to capture complex relations within a Temporal Knowledge Graph (TKG) to infer new knowledge. Conventional methods in TKGR typically depend on deep learning algorithms or temporal logical rules. However, deep learning-based TKGRs often lack interpretability, whereas rule-based TKGRs struggle to effectively learn temporal rules that capture temporal patterns. Recently, Large Language Models (LLMs) have demonstrated extensive knowledge and remarkable proficiency in temporal reasoning. Consequently, the employment of LLMs for Temporal Knowledge Graph Reasoning (TKGR) has sparked increasing interest among researchers. Nonetheless, LLMs are known to function as black boxes, making it challenging to comprehend their reasoning process. Additionally, due to the resource-intensive nature of fine-tuning, promptly updating LLMs to integrate evolving knowledge within TKGs for reasoning is impractical. To address these challenges, in this paper, we propose a Large Language Models-guided Dynamic Adaptation (LLM-DA) method for reasoning on TKGs. Specifically, LLM-DA harnesses the capabilities of LLMs to analyze historical data and extract temporal logical rules. These rules unveil temporal patterns and facilitate interpretable reasoning. To account for the evolving nature of TKGs, a dynamic adaptation strategy is proposed to update the LLM-generated rules with the latest events. This ensures that the extracted rules always incorporate the most recent knowledge and better generalize to the predictions on future events. 
Experimental results show that without the need of fine-tuning, LLM-DA significantly improves the accuracy of reasoning over several common datasets, providing a robust framework for TKGR tasks.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96116"} +{"video_file": "CwNevJONgq_39028269.mp4", "openreview_id": "CwNevJONgq", "slideslive_id": 39028269, "venue": "nips2024", "title": "Simplifying Latent Dynamics with Softly State-Invariant World Models", "status": "Poster", "keywords": "World model;latent dynamics;reinforcement learning;compression", "tldr": "We introduce the Parsimonious Latent Space Model (PLSM), a world model that regularizes the latent dynamics to make the effect of the agent's actions more predictable.", "abstract": "To solve control problems via model-based reasoning or planning, an agent needs to know how its actions affect the state of the world. The actions an agent has at its disposal often change the state of the environment in systematic ways. However, existing techniques for world modelling do not guarantee that the effect of actions are represented in such systematic ways. We introduce the Parsimonious Latent Space Model (PLSM), a world model that regularizes the latent dynamics to make the effect of the agent's actions more predictable. Our approach minimizes the mutual information between latent states and the change that an action produces in the agent's latent state, in turn minimizing the dependence the state has on the dynamics. This makes the world model softly state-invariant. We combine PLSM with different model classes used for i) future latent state prediction, ii) planning, and iii) model-free reinforcement learning. We find that our regularization improves accuracy, generalization, and performance in downstream tasks, highlighting the importance of systematic treatment of actions in world models.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96114"} +{"video_file": "CyzZeND3LB_39028252.mp4", "openreview_id": "CyzZeND3LB", "slideslive_id": 39028252, "venue": "nips2024", "title": "PAC-Bayes-Chernoff bounds for unbounded losses", "status": "Poster", "keywords": "Statistical learning theory;PAC-Bayes;Chernoff bounds;regularization", "tldr": "We introduce a new PAC-Bayes bound that allows working with richer assumptions and illustrate its potential by generalizing previous bounds, obtaining novel ones for several regularization techniques, and minimizing them to get new posteriors", "abstract": "We introduce a new PAC-Bayes oracle bound for unbounded losses that extends Cram\u00e9r-Chernoff bounds to the PAC-Bayesian setting. The proof technique relies on controlling the tails of certain random variables involving the Cram\u00e9r transform of the loss. Our approach naturally leverages properties of Cram\u00e9r-Chernoff bounds, such as exact optimization of the free parameter in many PAC-Bayes bounds. We highlight several applications of the main theorem. Firstly, we show that our bound recovers and generalizes previous results. Additionally, our approach allows working with richer assumptions that result in more informative and potentially tighter bounds. In this direction, we provide a general bound under a new model-dependent assumption from which we obtain bounds based on parameter norms and log-Sobolev inequalities. 
Notably, many of these bounds can be minimized to obtain distributions beyond the Gibbs posterior and provide novel theoretical coverage to existing regularization techniques.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96111"} +{"video_file": "CzPtBzgfae_39027658.mp4", "openreview_id": "CzPtBzgfae", "slideslive_id": 39027658, "venue": "nips2024", "title": "Don't Compress Gradients in Random Reshuffling: Compress Gradient Differences", "status": "Poster", "keywords": "Random reshuffling;communication compression;distributed optimization;Federated Learning", "tldr": "In this paper we introduce an improved method that utilizes communication compression with variance reduction and sampling without replacement.", "abstract": "Gradient compression is a popular technique for improving communication complexity of stochastic first-order methods in distributed training of machine learning models. However, the existing works consider only with-replacement sampling of stochastic gradients. In contrast, it is well-known in practice and recently confirmed in theory that stochastic methods based on without-replacement sampling, e.g., Random Reshuffling (RR) method, perform better than ones that sample the gradients with-replacement. In this work, we close this gap in the literature and provide the first analysis of methods with gradient compression and without-replacement sampling. We first develop a distributed variant of random reshuffling with gradient compression (Q-RR), and show how to reduce the variance coming from gradient quantization through the use of control iterates. Next, to have a better fit to Federated Learning applications, we incorporate local computation and propose a variant of Q-RR called Q-NASTYA. Q-NASTYA uses local gradient steps and different local and global stepsizes. Next, we show how to reduce compression variance in this setting as well. Finally, we prove the convergence results for the proposed methods and outline several settings in which they improve upon existing algorithms.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96110"} +{"video_file": "D4QgSWxiOb_39025647.mp4", "openreview_id": "D4QgSWxiOb", "slideslive_id": 39025647, "venue": "nips2024", "title": "Grokking of Implicit Reasoning in Transformers: A Mechanistic Journey to the Edge of Generalization", "status": "Poster", "keywords": "Reasoning;Grokking;Systematic Generalization;Mechanistic Interpretability;Transformer", "tldr": "We discover and analyze Transformer's grokking phenomenon on the task of implicit reasoning.", "abstract": "We study whether transformers can learn to implicitly reason over parametric knowledge, a skill that even the most capable language models struggle with. Focusing on two representative reasoning types, composition and comparison, we consistently find that transformers can learn implicit reasoning, but only through grokking, i.e., extended training far beyond overfitting. The levels of generalization also vary across reasoning types: when faced with out-of-distribution examples, transformers fail to systematically generalize for composition but succeed for comparison. 
We delve into the model's internals throughout training, conducting analytical experiments that reveal: 1) the mechanism behind grokking, such as the formation of the generalizing circuit and its relation to the relative efficiency of generalizing and memorizing circuits, and 2) the connection between systematicity and the configuration of the generalizing circuit. Our findings guide data and training setup to better induce implicit reasoning and suggest potential improvements to the transformer architecture, such as encouraging cross-layer knowledge sharing. Furthermore, we demonstrate that for a challenging reasoning task with a large search space, GPT-4-Turbo and Gemini-1.5-Pro based on non-parametric memory fail badly regardless of prompting styles or retrieval augmentation, while a fully grokked transformer can achieve near-perfect accuracy, showcasing the power of parametric memory for complex reasoning.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/96105"} +{"video_file": "D4yRz3s7UL_39025723.mp4", "openreview_id": "D4yRz3s7UL", "slideslive_id": 39025723, "venue": "nips2024", "title": "DeSparsify: Adversarial Attack Against Token Sparsification Mechanisms", "status": "Spotlight", "keywords": "Adversarial Attack;Vision Transformers;Token Sparsification", "tldr": "An adversarial attack that targets the availability of efficient vision transformers", "abstract": "Vision transformers have shown remarkable advancements in the computer vision domain, demonstrating state-of-the-art performance in diverse tasks (e.g., image classification, object detection). However, their high computational requirements grow quadratically with the number of tokens used. Token sparsification mechanisms have been proposed to address this issue. These mechanisms employ an input-dependent strategy, in which uninformative tokens are discarded from the computation pipeline, improving the model\u2019s efficiency. However, their dynamism and average-case assumption makes them vulnerable to a new threat vector \u2013 carefully crafted adversarial examples capable of fooling the sparsification mechanism, resulting in worst-case performance. In this paper, we present DeSparsify, an attack targeting the availability of vision transformers that use token sparsification mechanisms. The attack aims to exhaust the operating system\u2019s resources, while maintaining its stealthiness. Our evaluation demonstrates the attack\u2019s effectiveness on three token sparsification mechanisms and examines the attack\u2019s transferability between them and its effect on the GPU resources. To mitigate the impact of the attack, we propose various countermeasures.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/96104"} +{"video_file": "D6MQrw9HFu_39025718.mp4", "openreview_id": "D6MQrw9HFu", "slideslive_id": 39025718, "venue": "nips2024", "title": "FOOGD: Federated Collaboration for Both Out-of-distribution Generalization and Detection", "status": "Poster", "keywords": "Federated learning; Out-of-distribution;", "tldr": "We devise a FL framework that adapts to wild data, which coexists with non-IID in-distribution (IN) data, covariate-shift (IN-C) data, and semantic-shift (OUT) data.", "abstract": "Federated learning (FL) is a promising machine learning paradigm that collaborates with client models to capture global knowledge. 
However, deploying FL models in real-world scenarios remains unreliable due to the coexistence of in-distribution data and unexpected out-of-distribution (OOD) data, such as covariate-shift and semantic-shift data. Current FL research typically addresses either covariate-shift data through OOD generalization or semantic-shift data via OOD detection, overlooking the simultaneous occurrence of various OOD shifts. In this work, we propose FOOGD, a method that estimates the probability density of each client and obtains a reliable global distribution as guidance for the subsequent FL process. Firstly, SM3D in FOOGD estimates a score model for arbitrary distributions without prior constraints, and detects semantic-shift data powerfully. Then SAG in FOOGD provides invariant yet diverse knowledge for both local covariate-shift generalization and client performance generalization. In empirical validations, FOOGD enjoys three main advantages: (1) reliably estimating non-normalized decentralized distributions, (2) detecting semantic shift data via score values, and (3) generalizing to covariate-shift data by regularizing the feature extractor. The project is open-sourced at https://github.com/XeniaLLL/FOOGD-main.git.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96103"} +{"video_file": "DAO2BFzMfy_39028145.mp4", "openreview_id": "DAO2BFzMfy", "slideslive_id": 39028145, "venue": "nips2024", "title": "Interpreting the Weight Space of Customized Diffusion Models", "status": "Poster", "keywords": "Weight Space;Model Editing;Diffusion Models;Latent Space;Personalization", "tldr": "Using a dataset of fine-tuned diffusion models, we define a subspace in diffusion model weight space that enables controllable creation of new diffusion models.", "abstract": "We investigate the space of weights spanned by a large collection of customized diffusion models. We populate this space by creating a dataset of over 60,000 models, each of which is a base model fine-tuned to insert a different person's visual identity. We model the underlying manifold of these weights as a subspace, which we term weights2weights. We demonstrate three immediate applications of this space that result in new diffusion models -- sampling, editing, and inversion. First, sampling a set of weights from this space results in a new model encoding a novel identity. Next, we find linear directions in this space corresponding to semantic edits of the identity (e.g., adding a beard), resulting in a new model with the original identity edited. Finally, we show that inverting a single image into this space encodes a realistic identity into a model, even if the input image is out of distribution (e.g., a painting). We further find that these linear properties of the diffusion model weight space extend to other visual concepts. 
Our results indicate that the weight space of fine-tuned diffusion models can behave as an interpretable meta-latent space producing new models.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96100"} +{"video_file": "DG2f1rVEM5_39025850.mp4", "openreview_id": "DG2f1rVEM5", "slideslive_id": 39025850, "venue": "nips2024", "title": "GaussianCube: A Structured and Explicit Radiance Representation for 3D Generative Modeling", "status": "Poster", "keywords": "3D Generative Modeling; Gaussian Splatting; Optimal Transport", "tldr": "We present a structured and explicit representation for 3D generative modeling by structuring Gaussian Splatting using Optimal Transport, achieving state-of-the-art generation quality.", "abstract": "We introduce a radiance representation that is both structured and fully explicit and thus greatly facilitates 3D generative modeling. Existing radiance representations either require an implicit feature decoder, which significantly degrades the modeling power of the representation, or are spatially unstructured, making them difficult to integrate with mainstream 3D diffusion methods. We derive GaussianCube by first using a novel densification-constrained Gaussian fitting algorithm, which yields high-accuracy fitting using a fixed number of free Gaussians, and then rearranging these Gaussians into a predefined voxel grid via Optimal Transport. Since GaussianCube is a structured grid representation, it allows us to use standard 3D U-Net as our backbone in diffusion modeling without elaborate designs. More importantly, the high-accuracy fitting of the Gaussians allows us to achieve a high-quality representation with orders of magnitude fewer parameters than previous structured representations for comparable quality, ranging from one to two orders of magnitude. The compactness of GaussianCube greatly eases the difficulty of 3D generative modeling. Extensive experiments conducted on unconditional and class-conditioned object generation, digital avatar creation, and text-to-3D synthesis all show that our model achieves state-of-the-art generation results both qualitatively and quantitatively, underscoring the potential of GaussianCube as a highly accurate and versatile radiance representation for 3D generative modeling.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/96097"} +{"video_file": "DKSI3bULiZ_39024890.mp4", "openreview_id": "DKSI3bULiZ", "slideslive_id": 39024890, "venue": "nips2024", "title": "Multiple Physics Pretraining for Spatiotemporal Surrogate Models", "status": "Poster", "keywords": "transfer learning;physics;pretraining;finetuning;surrogate models;spatiotemporal", "tldr": "We develop approaches to enable autoregressive pretraining on multiple physical systems and show it can improve transfer performance across domain gaps.", "abstract": "We introduce multiple physics pretraining (MPP), an autoregressive task-agnostic pretraining approach for physical surrogate modeling of spatiotemporal systems with transformers. In MPP, rather than training one model on a specific physical system, we train a backbone model to predict the dynamics of multiple heterogeneous physical systems simultaneously in order to learn features that are broadly useful across systems and facilitate transfer. In order to learn effectively in this setting, we introduce a shared embedding and normalization strategy that projects the fields of multiple systems into a shared embedding space. 
We validate the efficacy of our approach on both pretraining and downstream tasks over a broad fluid mechanics-oriented benchmark. We show that a single MPP-pretrained transformer is able to match or outperform task-specific baselines on all pretraining sub-tasks without the need for finetuning. For downstream tasks, we demonstrate that finetuning MPP-trained models results in more accurate predictions across multiple time-steps on systems with previously unseen physical components or higher dimensional systems compared to training from scratch or finetuning pretrained video foundation models. We open-source our code and model weights trained at multiple scales for reproducibility.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/96095"} +{"video_file": "DLNOBJa7TM_39027297.mp4", "openreview_id": "DLNOBJa7TM", "slideslive_id": 39027297, "venue": "nips2024", "title": "Efficient Federated Learning against Heterogeneous and Non-stationary Client Unavailability", "status": "Poster", "keywords": "distributed learning;non-convex optimization;federated learning;fault-tolerance", "tldr": "We propose FedAWE, an efficient algorithm for handling heterogeneous and non-stationary client unavailability in federated learning, achieving linear speedup and outperforming state-of-the-art methods in experiments over diversified dynamics.", "abstract": "Addressing intermittent client availability is critical for the real-world deployment of federated learning algorithms. Most prior work either overlooks the potential non-stationarity in the dynamics of client unavailability or requires substantial memory/computation overhead. We study federated learning in the presence of heterogeneous and non-stationary client availability, which may occur when the deployment environments are uncertain, or the clients are mobile. The impacts of heterogeneity and non-stationarity on client unavailability can be significant, as we illustrate using FedAvg, the most widely adopted federated learning algorithm. We propose FedAWE, which includes novel algorithmic structures that (i) compensate for missed computations due to unavailability with only $O(1)$ additional memory and computation with respect to standard FedAvg, and (ii) evenly diffuse local updates within the federated learning system through implicit gossiping, despite being agnostic to non-stationary dynamics. We show that FedAWE converges to a stationary point of even non-convex objectives while achieving the desired linear speedup property. We corroborate our analysis with numerical experiments over diversified client unavailability dynamics on real-world data sets.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96094"} +{"video_file": "DNGfCVBOnU_39028211.mp4", "openreview_id": "DNGfCVBOnU", "slideslive_id": 39028211, "venue": "nips2024", "title": "Pretraining with Random Noise for Fast and Robust Learning without Weight Transport", "status": "Poster", "keywords": "Random noise training;Network pretraining;Pre-regularization;Feedback alignment;Error backpropagation;Weight transport problem;Biologically-Plausible Algorithm", "tldr": "Pretraining with random noise using a feedback alignment algorithm allows fast learning and robust generalization without weight transport.", "abstract": "The brain prepares for learning even before interacting with the environment, by refining and optimizing its structures through spontaneous neural activity that resembles random noise. 
However, the mechanism of such a process has yet to be understood, and it is unclear whether this process can benefit the algorithm of machine learning. Here, we study this issue using a neural network with a feedback alignment algorithm, demonstrating that pretraining neural networks with random noise increases the learning efficiency as well as generalization abilities without weight transport. First, we found that random noise training modifies forward weights to match backward synaptic feedback, which is necessary for teaching errors by feedback alignment. As a result, a network with pre-aligned weights learns notably faster and reaches higher accuracy than a network without random noise training, even comparable to the backpropagation algorithm. We also found that the effective dimensionality of weights decreases in a network pretrained with random noise. This pre-regularization allows the network to learn simple solutions of a low rank, reducing the generalization error during subsequent training. This also enables the network to robustly generalize a novel, out-of-distribution dataset. Lastly, we confirmed that random noise pretraining reduces the amount of meta-loss, enhancing the network ability to adapt to various tasks. Overall, our results suggest that random noise training with feedback alignment offers a straightforward yet effective method of pretraining that facilitates quick and reliable learning without weight transport.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/96093"} +{"video_file": "DQD0DNRjxk_39028692.mp4", "openreview_id": "DQD0DNRjxk", "slideslive_id": 39028692, "venue": "nips2024", "title": "GVKF: Gaussian Voxel Kernel Functions for Highly Efficient Surface Reconstruction in Open Scenes", "status": "Poster", "keywords": "3dgs;Mesh;Sdf;Nerf;Surface Reconstruction", "tldr": "High efficient surface reconstuction based on gaussian splatting", "abstract": "In this paper we present a novel method for efficient and effective 3D surface reconstruction in open scenes. Existing Neural Radiance Fields (NeRF) based works typically require extensive training and rendering time due to the adopted implicit representations. In contrast, 3D Gaussian splatting (3DGS) uses an explicit and discrete representation, hence the reconstructed surface is built by the huge number of Gaussian primitives, which leads to excessive memory consumption and rough surface details in sparse Gaussian areas. To address these issues, we propose Gaussian Voxel Kernel Functions (GVKF), which establish a continuous scene representation based on discrete 3DGS through kernel regression. The GVKF integrates fast 3DGS rasterization and highly effective scene implicit representations, achieving high-fidelity open scene surface reconstruction. 
Experiments on challenging scene datasets demonstrate the efficiency and effectiveness of our proposed GVKF, featuring high reconstruction quality, real-time rendering speed, and significant savings in storage and training memory consumption.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96090"} +{"video_file": "DT7n4F2bbP_39027020.mp4", "openreview_id": "DT7n4F2bbP", "slideslive_id": 39027020, "venue": "nips2024", "title": "Tensor-Based Synchronization and the Low-Rankness of the Block Trifocal Tensor", "status": "Poster", "keywords": "synchronization;tensor decomposition;structure from motion;multilinear rank;multiview geometry;trifocal tensor;higher-order scene information", "tldr": "This paper introduces the block trifocal tensor, establishes a low multilinear rank, and introduces a global synchronization framework for trifocal tensors.", "abstract": "The block tensor of trifocal tensors provides crucial geometric information on the three-view geometry of a scene. The underlying synchronization problem seeks to recover camera poses (locations and orientations up to a global transformation) from the block trifocal tensor. We establish an explicit Tucker factorization of this tensor, revealing a low multilinear rank of (6, 4, 4) independent of the number of cameras under appropriate scaling conditions. We prove that this rank constraint provides sufficient information for camera recovery in the noiseless case. The constraint motivates a synchronization algorithm based on the higher-order singular value decomposition of the block trifocal tensor. Experimental comparisons with state-of-the-art global synchronization methods on real datasets demonstrate the potential of this algorithm for significantly improving location estimation accuracy. Overall, this work suggests that higher-order interactions in synchronization problems can be exploited to improve performance, beyond the usual pairwise-based approaches.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96088"} +{"video_file": "DUHX779C5q_39026790.mp4", "openreview_id": "DUHX779C5q", "slideslive_id": 39026790, "venue": "nips2024", "title": "Language Grounded Multi-agent Reinforcement Learning with Human-interpretable Communication", "status": "Poster", "keywords": "Multi-Agent Reinforcement Learning;Emergent Communication;Ad-hoc Teamwork;Large Language Models", "tldr": "We propose a novel computational pipeline to ground MARL communication in human language using embodied LLM agents, enabling interpretable and generalizable communication in ad-hoc multi-agent teamwork.", "abstract": "Multi-Agent Reinforcement Learning (MARL) methods have shown promise in enabling agents to learn a shared communication protocol from scratch and accomplish challenging team tasks. However, the learned language is usually not interpretable to humans or other agents not co-trained together, limiting its applicability in ad-hoc teamwork scenarios. In this work, we propose a novel computational pipeline that aligns the communication space between MARL agents with an embedding space of human natural language by grounding agent communications on synthetic data generated by embodied Large Language Models (LLMs) in interactive teamwork scenarios. Our results demonstrate that introducing language grounding not only maintains task performance but also accelerates the emergence of communication. 
Furthermore, the learned communication protocols exhibit zero-shot generalization capabilities in ad-hoc teamwork scenarios with unseen teammates and novel task states. This work presents a significant step toward enabling effective communication and collaboration between artificial agents and humans in real-world teamwork settings.", "primary_area": "human-AI_interaction", "site": "https://neurips.cc/virtual/2024/poster/96086"} +{"video_file": "DV15UbHCY1_39028694.mp4", "openreview_id": "DV15UbHCY1", "slideslive_id": 39028694, "venue": "nips2024", "title": "Are Language Models Actually Useful for Time Series Forecasting?", "status": "Spotlight", "keywords": "Time Series;Language Models;Time Series Forecasting", "tldr": "LLM in Time Series Forecasting Task", "abstract": "Large language models (LLMs) are being applied to time series forecasting. But are language models actually useful for time series? In a series of ablation studies on three recent and popular LLM-based time series forecasting methods, we find that removing the LLM component or replacing it with a basic attention layer does not degrade forecasting performance---in most cases, the results even improve! We also find that despite their significant computational cost, pretrained LLMs do no better than models trained from scratch, do not represent the sequential dependencies in time series, and do not assist in few-shot settings. Additionally, we explore time series encoders and find that patching and attention structures perform similarly to LLM-based forecasters. All resources needed to reproduce our work are available: https://github.com/BennyTMT/LLMsForTimeSeries.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/96085"} +{"video_file": "DX5GUwMFFb_39028077.mp4", "openreview_id": "DX5GUwMFFb", "slideslive_id": 39028077, "venue": "nips2024", "title": "Deep Policy Gradient Methods Without Batch Updates, Target Networks, or Replay Buffers", "status": "Poster", "keywords": "Reinforcement Learning;Robotics;Deep Learning;Incremental Learning;Real-time Learning", "tldr": "We introduce Action Value Gradient (AVG), a novel incremental policy gradient method for real-time learning on robots with limited onboard computation, eliminating the need for large replay buffers, target networks or batch updates.", "abstract": "Modern deep policy gradient methods achieve effective performance on simulated robotic tasks, but they all require large replay buffers or expensive batch updates, or both, making them incompatible for real systems with resource-limited computers. We show that these methods fail catastrophically when limited to small replay buffers or during incremental learning, where updates only use the most recent sample without batch updates or a replay buffer. We propose a novel incremental deep policy gradient method --- Action Value Gradient (AVG) and a set of normalization and scaling techniques to address the challenges of instability in incremental learning. On robotic simulation benchmarks, we show that AVG is the only incremental method that learns effectively, often achieving final performance comparable to batch policy gradient methods. 
This advancement enabled us to show for the first time effective deep reinforcement learning with real robots using only incremental updates, employing a robotic manipulator and a mobile robot.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96084"} +{"video_file": "DdKdr4kqxh_39025863.mp4", "openreview_id": "DdKdr4kqxh", "slideslive_id": 39025863, "venue": "nips2024", "title": "Identifying Spatio-Temporal Drivers of Extreme Events", "status": "Poster", "keywords": "anomaly detection;weakly supervised learning;Earth science;climate science;remote sensing;deep learning", "tldr": "We present a deep learning model designed to leverage climate data to identify the drivers of extreme event impacts.", "abstract": "The spatio-temporal relations of impacts of extreme events and their drivers in climate data are not fully understood and there is a need for machine learning approaches to identify such spatio-temporal relations from data. The task, however, is very challenging since there are time delays between extremes and their drivers, and the spatial response of such drivers is inhomogeneous. In this work, we propose a first approach and benchmarks to tackle this challenge. Our approach is trained end-to-end to predict spatio-temporal extremes and spatio-temporal drivers in the physical input variables jointly. By forcing the network to predict extremes from spatio-temporal binary masks of identified drivers, the network successfully identifies drivers that are correlated with extremes. We evaluate our approach on three newly created synthetic benchmarks, where two of them are based on remote sensing or reanalysis climate data, and on two real-world reanalysis datasets. The source code and datasets are publicly available at the project page https://hakamshams.github.io/IDE.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/96083"} +{"video_file": "Dokew2u49m_39025939.mp4", "openreview_id": "Dokew2u49m", "slideslive_id": 39025939, "venue": "nips2024", "title": "Make Continual Learning Stronger via C-Flat", "status": "Poster", "keywords": "Continual Learning;Incremental Learning", "tldr": "This paper proposes a Continual Flatness (C-Flat) method featuring a flatter loss landscape tailored for CL. C-Flat could be easily called with only one line of code and is plug-and-play to any CL methods.", "abstract": "How to balance the learning \u2019sensitivity-stability\u2019 upon new task training and memory preservation is critical in CL to resolve catastrophic forgetting. Improving model generalization ability within each learning phase is one solution to help CL learning overcome the gap in the joint knowledge space. Zeroth-order loss landscape sharpness-aware minimization is a strong training regime improving model generalization in transfer learning compared with optimizers like SGD. It has also been introduced into CL to improve memory representation or learning efficiency. However, zeroth-order sharpness alone could favor sharper over flatter minima in certain scenarios, leading to a rather sensitive minimum rather than a global optimum. To further enhance learning stability, we propose a Continual Flatness (C-Flat) method featuring a flatter loss landscape tailored for CL. C-Flat could be easily called with only one line of code and is plug-and-play to any CL methods. 
A general framework of C-Flat applied to all CL categories and a thorough comparison with loss minima optimizer and flat minima based CL approaches is presented in this paper, showing that our method can boost CL performance in almost all cases. Code is available at https://github.com/WanNaa/C-Flat.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/96074"} +{"video_file": "DpByqSbdhI_39027209.mp4", "openreview_id": "DpByqSbdhI", "slideslive_id": 39027209, "venue": "nips2024", "title": "UniMTS: Unified Pre-training for Motion Time Series", "status": "Poster", "keywords": "motion time series classification;pre-training;contrastive learning;physics-based simulation;human activity recognition", "tldr": "We present the first unified pre-training procedure for motion time series that can generalize to diverse device locations, orientations, and activity types.", "abstract": "Motion time series collected from low-power, always-on mobile and wearable devices such as smartphones and smartwatches offer significant insights into human behavioral patterns, with wide applications in healthcare, automation, IoT, and AR/XR. However, given security and privacy concerns, building large-scale motion time series datasets remains difficult, hindering the development of pre-trained models for human activity analysis. Typically, existing models are trained and tested on the same dataset, leading to poor generalizability across variations in device location, device mounting orientation, and human activity type. In this paper, we introduce UniMTS, the first unified pre-training procedure for motion time series that generalizes across diverse device latent factors and activities. Specifically, we employ a contrastive learning framework that aligns motion time series with text descriptions enriched by large language models. This helps the model learn the semantics of time series to generalize across activities. Given the absence of large-scale motion time series data, we derive and synthesize time series from existing motion skeleton data with all-joint coverage. We use spatio-temporal graph networks to capture the relationships across joints for generalization across different device locations. We further design rotation-invariant augmentation to make the model agnostic to changes in device mounting orientations. Our model shows exceptional generalizability across 18 motion time series classification benchmark datasets, outperforming the best baselines by 340% in the zero-shot setting, 16.3% in the few-shot setting, and 9.2% in the full-shot setting.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/96073"} +{"video_file": "DpP5F3UfKw_39025698.mp4", "openreview_id": "DpP5F3UfKw", "slideslive_id": 39025698, "venue": "nips2024", "title": "Divergences between Language Models and Human Brains", "status": "Poster", "keywords": "Natural Language Processing;NLP;Brain Imaging;Neuroimaging;Magnetoencephalography;MEG;Neuroscience;Cognitive Science;Interpretability;Deep Learning", "tldr": "Language models differ from human brains in social/emotional intelligence and physical commonsense. Fine-tuning language models on these domains improves their alignment with human understanding.", "abstract": "Do machines and humans process language in similar ways? Recent research has hinted at the affirmative, showing that human neural activity can be effectively predicted using the internal representations of language models (LMs). 
Although such results are thought to reflect shared computational principles between LMs and human brains, there are also clear differences in how LMs and humans represent and use language. In this work, we systematically explore the divergences between human and machine language processing by examining the differences between LM representations and human brain responses to language as measured by Magnetoencephalography (MEG) across two datasets in which subjects read and listened to narrative stories. Using an LLM-based data-driven approach, we identify two domains that LMs do not capture well: social/emotional intelligence and physical commonsense. We validate these findings with human behavioral experiments and hypothesize that the gap is due to insufficient representations of social/emotional and physical knowledge in LMs. Our results show that fine-tuning LMs on these domains can improve their alignment with human brain responses.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/96072"} +{"video_file": "DqiggGDOmA_39024764.mp4", "openreview_id": "DqiggGDOmA", "slideslive_id": 39024764, "venue": "nips2024", "title": "EASI: Evolutionary Adversarial Simulator Identification for Sim-to-Real Transfer", "status": "Poster", "keywords": "Evolutionary adversarial simulator identification;reinforcement learning;sim-to-real transfer", "tldr": "We introduce a novel approach of Evolutionary Adversarial Simulator Identification (EASI) by combining Generative Adversarial Network (GAN) and Evolutionary Strategy (ES) to address sim-to-real challenges.", "abstract": "Reinforcement Learning (RL) controllers have demonstrated remarkable performance in complex robot control tasks. However, the presence of reality gap often leads to poor performance when deploying policies trained in simulation directly onto real robots. Previous sim-to-real algorithms like Domain Randomization (DR) requires domain-specific expertise and suffers from issues such as reduced control performance and high training costs. In this work, we introduce Evolutionary Adversarial Simulator Identification (EASI), a novel approach that combines Generative Adversarial Network (GAN) and Evolutionary Strategy (ES) to address sim-to-real challenges. Specifically, we consider the problem of sim-to-real as a search problem, where ES acts as a generator in adversarial competition with a neural network discriminator, aiming to find physical parameter distributions that make the state transitions between simulation and reality as similar as possible. The discriminator serves as the fitness function, guiding the evolution of the physical parameter distributions. EASI features simplicity, low cost, and high fidelity, enabling the construction of a more realistic simulator with minimal requirements for real-world data, thus aiding in transferring simulated-trained policies to the real world. 
We demonstrate the performance of EASI in both sim-to-sim and sim-to-real tasks, showing superior performance compared to existing sim-to-real algorithms.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/96071"} +{"video_file": "DztaBt4wP5_39028504.mp4", "openreview_id": "DztaBt4wP5", "slideslive_id": 39028504, "venue": "nips2024", "title": "Membership Inference on Text-to-Image Diffusion Models via Conditional Likelihood Discrepancy", "status": "Poster", "keywords": "Diffusion models;Membership inference;Conditional Likelihood;Text-to-Image Synthesis", "tldr": "We propose a membership inference method on text-to-image diffusion models via condition likelihood discrepancy, outperforming previous works on diverse datasets, with superior resistance against early stopping and data augmentation.", "abstract": "Text-to-image diffusion models have achieved tremendous success in the field of controllable image generation, while also coming along with issues of privacy leakage and data copyrights. Membership inference arises in these contexts as a potential auditing method for detecting unauthorized data usage. While some efforts have been made on diffusion models, they are not applicable to text-to-image diffusion models due to the high computation overhead and enhanced generalization capabilities. In this paper, we first identify a conditional overfitting phenomenon in text-to-image diffusion models, indicating that these models tend to overfit the conditional distribution of images given the corresponding text rather than the marginal distribution of images only. Based on this observation, we derive an analytical indicator, namely Conditional Likelihood Discrepancy (CLiD), to perform membership inference, which reduces the stochasticity in estimating memorization of individual samples. Experimental results demonstrate that our method significantly outperforms previous methods across various data distributions and dataset scales. Additionally, our method shows superior resistance to overfitting mitigation strategies, such as early stopping and data augmentation.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96064"} +{"video_file": "E1nBLrEaJo_39027743.mp4", "openreview_id": "E1nBLrEaJo", "slideslive_id": 39027743, "venue": "nips2024", "title": "On the Benefits of Public Representations for Private Transfer Learning under Distribution Shift", "status": "Poster", "keywords": "pretraining;representations;privacy;distribution shift", "tldr": "We show that contrary to prior concerns, public features can be extremely helpful in private transfer learning even when the transfer task is significantly out of distribution. We propose a theoretical model to support our empirical results.", "abstract": "Public pretraining is a promising approach to improve differentially private model training. However, recent work has noted that many positive research results studying this paradigm only consider in-distribution tasks, and may not apply to settings where there is distribution shift between the pretraining and finetuning data---a scenario that is likely when finetuning private tasks due to the sensitive nature of the data. 
In this work, we show empirically across three tasks that even in settings with large distribution shift, where both zero-shot performance from public data and training from scratch with private data give unusably weak results, public features can in fact improve private training accuracy by up to 67% over private training from scratch. We provide a theoretical explanation for this phenomenon, showing that if the public and private data share a low-dimensional representation, public representations can improve the sample complexity of private training even if it is \emph{impossible} to learn the private task from the public data alone. Altogether, our results provide evidence that public data can indeed make private training practical in realistic settings of extreme distribution shift.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96063"} +{"video_file": "E2BYPreuU8_39026842.mp4", "openreview_id": "E2BYPreuU8", "slideslive_id": 39026842, "venue": "nips2024", "title": "On Mesa-Optimization in Autoregressively Trained Transformers: Emergence and Capability", "status": "Poster", "keywords": "Mesa-Optimization;In-context learning;Autoregressive pretraining;Non-convex optimization;Learning theory;Transformers", "tldr": "Towards understanding the mechanisms underlying the in-context learning from autoregressive pretraining by rigorously verifying the mesa-optimization hypothesis.", "abstract": "Autoregressively trained transformers have brought a profound revolution to the world, especially with their in-context learning (ICL) ability to address downstream tasks. Recently, several studies suggest that transformers learn a mesa-optimizer during autoregressive (AR) pretraining to implement ICL. Namely, the forward pass of the trained transformer is equivalent to optimizing an inner objective function in-context. However, whether the practical non-convex training dynamics will converge to the ideal mesa-optimizer is still unclear. Towards filling this gap, we investigate the non-convex dynamics of a one-layer linear causal self-attention model autoregressively trained by gradient flow, where the sequences are generated by an AR process x_{t+1} = W x_t. First, under a certain condition of data distribution, we prove that an autoregressively trained transformer learns W by implementing one step of gradient descent to minimize an ordinary least squares (OLS) problem in-context. It then applies the learned \hat{W} for next-token prediction, thereby verifying the mesa-optimization hypothesis. Next, under the same data conditions, we explore the capability limitations of the obtained mesa-optimizer. We show that a stronger assumption related to the moments of data is the sufficient and necessary condition that the learned mesa-optimizer recovers the distribution. Besides, we conduct exploratory analyses beyond the first data condition and prove that generally, the trained transformer will not perform vanilla gradient descent for the OLS problem. 
Finally, our simulation results verify the theoretical results.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96062"} +{"video_file": "E3ZMsqdO0D_39025794.mp4", "openreview_id": "E3ZMsqdO0D", "slideslive_id": 39025794, "venue": "nips2024", "title": "Zero-Shot Event-Intensity Asymmetric Stereo via Visual Prompting from Image Domain", "status": "Poster", "keywords": "Event cameras;stereo matching;asymetric stereo;visual prompting;disparity filtering", "tldr": "We propose a zero-shot event-intensity asymmetric stereo method that adapts large-scale image domain models by using physical inspired visual prompting and a monocular cue-guided disparity refinement technique.", "abstract": "Event-intensity asymmetric stereo systems have emerged as a promising approach for robust 3D perception in dynamic and challenging environments by integrating event cameras with frame-based sensors in different views. However, existing methods often suffer from overfitting and poor generalization due to limited dataset sizes and lack of scene diversity in the event domain. To address these issues, we propose a zero-shot framework that utilizes monocular depth estimation and stereo matching models pretrained on diverse image datasets. Our approach introduces a visual prompting technique to align the representations of frames and events, allowing the use of off-the-shelf stereo models without additional training. Furthermore, we introduce a monocular cue-guided disparity refinement module to improve robustness across static and dynamic regions by incorporating monocular depth information from foundation models. Extensive experiments on real-world datasets demonstrate the superior zero-shot evaluation performance and enhanced generalization ability of our method compared to existing approaches.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96057"} +{"video_file": "E6ZodZu0HQ_39026740.mp4", "openreview_id": "E6ZodZu0HQ", "slideslive_id": 39026740, "venue": "nips2024", "title": "PuLID: Pure and Lightning ID Customization via Contrastive Alignment", "status": "Poster", "keywords": "diffusion;controllable image generation;image customization", "tldr": "We introduce PuLID, a tuning-free ID customization approach. PuLID maintains high ID fidelity while effectively reducing interference with the original model's behavior", "abstract": "We propose Pure and Lightning ID customization (PuLID), a novel tuning-free ID customization method for text-to-image generation. By incorporating a Lightning T2I branch with a standard diffusion one, PuLID introduces both contrastive alignment loss and accurate ID loss, minimizing disruption to the original model and ensuring high ID fidelity. Experiments show that PuLID achieves superior performance in both ID fidelity and editability. Another attractive property of PuLID is that the image elements (\\eg, background, lighting, composition, and style) before and after the ID insertion are kept as consistent as possible. 
Codes and models are available at https://github.com/ToTheBeginning/PuLID", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96055"} +{"video_file": "E7en5DyO2G_39026701.mp4", "openreview_id": "E7en5DyO2G", "slideslive_id": 39026701, "venue": "nips2024", "title": "Bayesian Online Natural Gradient (BONG)", "status": "Poster", "keywords": "online learning;Bayesian neural networks;variational inference;natural gradient descent", "tldr": "We improve on online variational Bayes using natural gradient descent on expected log-likelihood.", "abstract": "We propose a novel approach to sequential Bayesian inference based on variational Bayes (VB). The key insight is that, in the online setting, we do not need to add the KL term to regularize to the prior (which comes from the posterior at the previous timestep); instead we can optimize just the expected log-likelihood, performing a single step of natural gradient descent starting at the prior predictive. We prove this method recovers exact Bayesian inference if the model is conjugate. We also show how to compute an efficient deterministic approximation to the VB objective, as well as our simplified objective, when the variational distribution is Gaussian or a sub-family, including the case of a diagonal plus low-rank precision matrix. We show empirically that our method outperforms other online VB methods in the non-conjugate setting, such as online learning for neural networks, especially when controlling for computational costs.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/96054"} +{"video_file": "E7fZOoiEKl_39025415.mp4", "openreview_id": "E7fZOoiEKl", "slideslive_id": 39025415, "venue": "nips2024", "title": "FuseFL: One-Shot Federated Learning through the Lens of Causality with Progressive Model Fusion", "status": "Spotlight", "keywords": "Federated Learning;communication efficiency;causality", "tldr": "This work identifies the cause of low performance of one-shot FL, and proposes FuseFL to progressively train and fuses DNN model following a bottom-up manner, reducing communication costs to an extremely low degree.", "abstract": "One-shot Federated Learning (OFL) significantly reduces communication costs in FL by aggregating trained models only once. However, the performance of advanced OFL methods is far behind the normal FL. In this work, we provide a causal view to find that this performance drop of OFL methods comes from the isolation problem, which means that local isolatedly trained models in OFL may easily fit to spurious correlations due to the data heterogeneity. From the causal perspective, we observe that the spurious fitting can be alleviated by augmenting intermediate features from other clients. Built upon our observation, we propose a novel learning approach to endow OFL with superb performance and low communication and storage costs, termed as FuseFL. Specifically, FuseFL decomposes neural networks into several blocks, and progressively trains and fuses each block following a bottom-up manner for feature augmentation, introducing no additional communication costs. Comprehensive experiments demonstrate that FuseFL outperforms existing OFL and ensemble FL by a significant margin. We conduct comprehensive experiments to show that FuseFL supports high scalability of clients, heterogeneous model training, and low memory costs. 
Our work is the first attempt using causality to analyze and alleviate data heterogeneity of OFL.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/96053"} +{"video_file": "E8wDxddIqU_39028735.mp4", "openreview_id": "E8wDxddIqU", "slideslive_id": 39028735, "venue": "nips2024", "title": "Distributionally Robust Performative Prediction", "status": "Poster", "keywords": "performative prediction;distributionally robust learning;misspecification;distribution shift", "tldr": "This work designs a robust learning framework to better approximate the true performative optimum in the presence of distribution map misspecification.", "abstract": "Performative prediction aims to model scenarios where predictive outcomes subsequently influence the very systems they target. The pursuit of a performative optimum (PO)\u2014minimizing performative risk\u2014is generally reliant on modeling of the distribution map, which characterizes how a deployed ML model alters the data distribution. Unfortunately, inevitable misspecification of the distribution map can lead to a poor approximation of the true PO. To address this issue, we introduce a novel framework of distributionally robust performative prediction and study a new solution concept termed as distributionally robust performative optimum (DRPO). We show provable guarantees for DRPO as a robust approximation to the true PO when the nominal distribution map is different from the actual one. Moreover, distributionally robust performative prediction can be reformulated as an augmented performative prediction problem, enabling efficient optimization. The experimental results demonstrate that DRPO offers potential advantages over traditional PO approach when the distribution map is misspecified at either micro- or macro-level.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/96051"} +{"video_file": "EAbNopo3os_39028718.mp4", "openreview_id": "EAbNopo3os", "slideslive_id": 39028718, "venue": "nips2024", "title": "A Theory of Optimistically Universal Online Learnability for General Concept Classes", "status": "Poster", "keywords": "Online Learning;Statistical Learning;Consistency", "tldr": "We provide a full characterization of the concept classes that are optimistically universally online learnable with {0, 1} labels.", "abstract": "We provide a full characterization of the concept classes that are optimistically universally online learnable with {0, 1} labels. The notion of optimistically universal online learning was defined in [Hanneke, 2021] in order to understand learnability under minimal assumptions. In this paper, following the philosophy behind that work, we investigate two questions, namely, for every concept class: (1) What are the minimal assumptions on the data process admitting online learnability? (2) Is there a learning algorithm which succeeds under every data process satisfying the minimal assumptions? Such an algorithm is said to be optimistically universal for the given concept class. We resolve both of these questions for all concept classes, and moreover, as part of our solution we design general learning algorithms for each case. 
Finally, we extend these algorithms and results to the agnostic case, showing an equivalence between the minimal assumptions on the data process for learnability in the agnostic and realizable cases, for every concept class, as well as the equivalence of optimistically universal learnability.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96050"}
{"video_file": "EC9Hfi9V3k_39026900.mp4", "openreview_id": "EC9Hfi9V3k", "slideslive_id": 39026900, "venue": "nips2024", "title": "Efficient Streaming Algorithms for Graphlet Sampling", "status": "Poster", "keywords": "graphlet sampling;streaming;approximation algorithms", "tldr": "We have developed an efficient algorithm that enables random and uniform sampling of graphlets from large input graphs in the semi-streaming setting with limited memory.", "abstract": "Given a graph $G$ and a positive integer $k$, the Graphlet Sampling problem asks to sample a connected induced $k$-vertex subgraph of $G$ uniformly at random. Graphlet sampling enhances machine learning applications by transforming graph structures into feature vectors for tasks such as graph classification and subgraph identification, boosting neural network performance, and supporting clustered federated learning by capturing local structures and relationships. A recent work has shown that the problem admits an algorithm that preprocesses $G$ in time $O(nk^2 \\log k + m)$, and draws one sample in expected time $k^{O(k)} \\log n$, where $n = |V(G)|$ and $m = |E(G)|$. Such an algorithm relies on the assumption that the input graph fits into main memory and it does not seem to be straightforward to adapt it to very large graphs. We consider Graphlet Sampling in the semi-streaming setting, where we have a memory of $M = \\Omega(n \\log n)$ words, and $G$ can be only read through sequential passes over the edge list. We develop a semi-streaming algorithm that preprocesses $G$ in $p = O(\\log n)$ passes and samples $\\Theta(M k^{-O(k)})$ independent uniform $k$-graphlets in $O(k)$ passes. For constant $k$, both phases run in time $O((n+m) \\log n)$. We also show that the tradeoff between memory and number of passes of our algorithms is near-optimal. Our extensive evaluation on very large graphs shows the effectiveness of our algorithms.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/96049"}
{"video_file": "EHXyeImux0_39026771.mp4", "openreview_id": "EHXyeImux0", "slideslive_id": 39026771, "venue": "nips2024", "title": "Data Mixture Inference Attack: BPE Tokenizers Reveal Training Data Compositions", "status": "Poster", "keywords": "tokenizers;distribution inference;security", "tldr": "We infer the training data mixtures of tokenizers from their merge lists.", "abstract": "The pretraining data of today's strongest language models remains opaque, even when their parameters are open-sourced. In particular, little is known about the proportions of different domains, languages, or code represented in the data. While a long line of membership inference attacks aim to identify training examples on an instance level, they do not extend easily to global statistics about the corpus. In this work, we tackle a task which we call data mixture inference, which aims to uncover the distributional make-up of the pretraining data.
We introduce a novel attack based on a previously overlooked source of information \u2014 byte-pair encoding (BPE) tokenizers, used by the vast majority of modern language models. Our key insight is that the ordered vocabulary learned by a BPE tokenizer naturally reveals information about the token frequencies in its training data: the first token is the most common byte pair, the second is the most common pair after merging the first token, and so on. Given a tokenizer's merge list along with data samples for each category of interest (e.g., different natural languages), we formulate a linear program that solves for the relative proportion of each category in the tokenizer's training set. Importantly, to the extent to which tokenizer training data is representative of the pretraining data, we indirectly learn about the pretraining data. In controlled experiments, we show that our attack can recover mixture ratios with high precision for tokenizers trained on known mixtures of natural languages, programming languages, and data sources. We then apply our approach to off-the-shelf tokenizers released alongside recent LMs. We confirm much publicly disclosed information about these models, and also make several new inferences: GPT-4o is much more multilingual than its predecessors, training on 10x more non-English data than GPT-3.5, Llama 3 and Claude are trained on predominantly code, and many recent models are trained on 7-16% books. We hope our work sheds light on current design practices for pretraining data, and inspires continued research into data mixture inference for LMs.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96046"} +{"video_file": "EJZfcKXdiT_39025697.mp4", "openreview_id": "EJZfcKXdiT", "slideslive_id": 39025697, "venue": "nips2024", "title": "Event-3DGS: Event-based 3D Reconstruction Using 3D Gaussian Splatting", "status": "Poster", "keywords": "Event Camera;Event-based vision;3D reconstruction;3D gaussian spallating", "tldr": "We introduce a novel reconstruction algorithm achieving high-quality scene reconstruction from Event data under low-light, high-speed conditions.", "abstract": "Event cameras, offering high temporal resolution and high dynamic range, have brought a new perspective to addressing 3D reconstruction challenges in fast-motion and low-light scenarios. Most methods use the Neural Radiance Field (NeRF) for event-based photorealistic 3D reconstruction. However, these NeRF methods suffer from time-consuming training and inference, as well as limited scene-editing capabilities of implicit representations. To address these problems, we propose Event-3DGS, the first event-based reconstruction using 3D Gaussian splatting (3DGS) for synthesizing novel views freely from event streams. Technically, we first propose an event-based 3DGS framework that directly processes event data and reconstructs 3D scenes by simultaneously optimizing scenario and sensor parameters. Then, we present a high-pass filter-based photovoltage estimation module, which effectively reduces noise in event data to improve the robustness of our method in real-world scenarios. Finally, we design an event-based 3D reconstruction loss to optimize the parameters of our method for better reconstruction quality. The results show that our method outperforms state-of-the-art methods in terms of reconstruction quality on both simulated and real-world datasets. 
We also verify that our method can perform robust 3D reconstruction even in real-world scenarios with extreme noise, fast motion, and low-light conditions. Our code is available in https://github.com/lanpokn/Event-3DGS.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96044"}
{"video_file": "EK1tyHcb3W_39025833.mp4", "openreview_id": "EK1tyHcb3W", "slideslive_id": 39025833, "venue": "nips2024", "title": "Sample Complexity of Posted Pricing for a Single Item", "status": "Spotlight", "keywords": "sample complexity;revenue;welfare;pricing;online;prophet inequality", "tldr": "We obtain tight bounds on the sample complexity of posted pricing for a single item, for both independent and correlated distributions on the buyers' values.", "abstract": "Selling a single item to $n$ self-interested bidders is a fundamental problem in economics, where the two objectives typically considered are welfare maximization and revenue maximization. Since the optimal auctions are often impractical and do not work for sequential bidders, posted pricing auctions, where fixed prices are set for the item for different bidders, have emerged as a practical and effective alternative. This paper investigates how many samples are needed from bidders' value distributions to find near-optimal posted prices, considering both independent and correlated bidder distributions, and welfare versus revenue maximization. We obtain matching upper and lower bounds (up to logarithmic terms) on the sample complexity for all these settings.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/96043"}
{"video_file": "EKdk4vxKO4_39028491.mp4", "openreview_id": "EKdk4vxKO4", "slideslive_id": 39028491, "venue": "nips2024", "title": "MDAgents: An Adaptive Collaboration of LLMs for Medical Decision-Making", "status": "Oral", "keywords": "Medical Decision Making;Multi-Agent Collaboration", "tldr": "MDAgents, a framework that adapts the collaboration of LLMs for complex medical decision-making, improving performance on major medical benchmarks", "abstract": "Foundation models are becoming valuable tools in medicine. Yet despite their promise, the best way to leverage Large Language Models (LLMs) in complex medical tasks remains an open question. We introduce a novel multi-agent framework, named Medical Decision-making Agents (MDAgents) that helps to address this gap by automatically assigning a collaboration structure to a team of LLMs. The assigned solo or group collaboration structure is tailored to the medical task at hand, a simple emulation inspired by the way real-world medical decision-making processes are adapted to tasks of different complexities. We evaluate our framework and baseline methods using state-of-the-art LLMs across a suite of real-world medical knowledge and clinical diagnosis benchmarks, including a comparison of LLMs\u2019 medical complexity classification against human physicians. MDAgents achieved the best performance in seven out of ten benchmarks on tasks requiring an understanding of medical knowledge and multi-modal reasoning, showing a significant improvement of up to 4.2% ($p$ < 0.05) compared to previous methods' best performances. Ablation studies reveal that MDAgents effectively determines medical complexity to optimize for efficiency and accuracy across diverse medical tasks. Notably, the combination of moderator review and external medical knowledge in group collaboration resulted in an average accuracy improvement of 11.8%.
Our code can be found at https://github.com/mitmedialab/MDAgents.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/96041"} +{"video_file": "ELnxXc8pik_39027014.mp4", "openreview_id": "ELnxXc8pik", "slideslive_id": 39027014, "venue": "nips2024", "title": "Hierarchy-Agnostic Unsupervised Segmentation: Parsing Semantic Image Structure", "status": "Poster", "keywords": "Unsupervised Hierarchical Segmentation;Spectral Clustering;Self-Supervised Feature Extraction;Semantic Region Tree", "tldr": "Our method creates a hierarchy-agnostic semantic region tree from image pixels, offering nuanced segmentation without predefined hierarchies, scaling effectively across datasets.", "abstract": "Unsupervised semantic segmentation aims to discover groupings within images, capturing objects' view-invariance without external supervision. Moreover, this task is inherently ambiguous due to the varying levels of semantic granularity. Existing methods often bypass this ambiguity using dataset-specific priors. In our research, we address this ambiguity head-on and provide a universal tool for pixel-level semantic parsing of images guided by the latent representations encoded in self-supervised models. We introduce a novel algebraic approach that recursively decomposes an image into nested subgraphs, dynamically estimating their count and ensuring clear separation. The innovative approach identifies scene-specific primitives and constructs a hierarchy-agnostic tree of semantic regions from the image pixels. The model captures fine and coarse semantic details, producing a nuanced and unbiased segmentation. We present a new metric for estimating the quality of the semantic segmentation of discovered elements on different levels of the hierarchy. The metric validates the intrinsic nature of the compositional relations among parts, objects, and scenes in a hierarchy-agnostic domain. Our results prove the power of this methodology, uncovering semantic regions without prior definitions and scaling effectively across various datasets. This robust framework for unsupervised image segmentation proves more accurate semantic hierarchical relationships between scene elements than traditional algorithms. The experiments underscore its potential for broad applicability in image analysis tasks, showcasing its ability to deliver a detailed and unbiased segmentation that surpasses existing unsupervised methods.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96040"} +{"video_file": "EMkrwJY2de_39024538.mp4", "openreview_id": "EMkrwJY2de", "slideslive_id": 39024538, "venue": "nips2024", "title": "Spectral Graph Pruning Against Over-Squashing and Over-Smoothing", "status": "Poster", "keywords": "graph neural networks;rewiring;spectral gap optimization;over-smoothing;over-squashing;lottery tickets", "tldr": "By deleting edges of a graph that maximize the spectral gap, we jointly address over-smoothing and over-squashing in GNNs.", "abstract": "Message Passing Graph Neural Networks are known to suffer from two problems that are sometimes believed to be diametrically opposed: over-squashing and over-smoothing. The former results from topological bottlenecks that hamper the information flow from distant nodes and are mitigated by spectral gap maximization, primarily, by means of edge additions. However, such additions often promote over-smoothing that renders nodes of different classes less distinguishable. 
Inspired by the Braess phenomenon, we argue that deleting edges can address over-squashing and over-smoothing simultaneously. This insight explains how edge deletions can improve generalization, thus connecting spectral gap optimization to a seemingly disconnected objective of reducing computational resources by pruning graphs for lottery tickets. To this end, we propose a computationally effective spectral gap optimization framework to add or delete edges and demonstrate its effectiveness on the long range graph benchmark and on larger heterophilous datasets.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/96038"} +{"video_file": "EMstukR5J4_39027169.mp4", "openreview_id": "EMstukR5J4", "slideslive_id": 39027169, "venue": "nips2024", "title": "FM-Delta: Lossless Compression for Storing Massive Fine-tuned Foundation Models", "status": "Poster", "keywords": "model compression;lossless compression;cloud storage", "tldr": "To mitigate cloud model storage overhead, we propose a novel lossless compression scheme FM-Delta to compress massive fine-tuned models stored in cloud, significantly saving cloud storage costs.", "abstract": "Pre-trained foundation models, particularly large language models, have achieved remarkable success and led to massive fine-tuned variants. These models are commonly fine-tuned locally and then uploaded by users to cloud platforms such as HuggingFace for secure storage. However, the huge model number and their billion-level parameters impose heavy storage overhead for cloud with limited resources. Our empirical and theoretical analysis reveals that most fine-tuned models in cloud have a small difference (delta) from their pre-trained models. To this end, we propose a novel lossless compression scheme FM-Delta specifically for storing massive fine-tuned models in cloud. FM-Delta maps fine-tuned and pre-trained model parameters into integers with the same bits, and entropy codes their integer delta. In this way, cloud only needs to store one uncompressed pre-trained model and other compressed fine-tuned models. Extensive experiments have demonstrated that FM-Delta efficiently reduces cloud storage consumption for massive fine-tuned models by an average of around 50% with only negligible additional time in most end-to-end cases. For example, on up to 10 fine-tuned models in the GPT-NeoX-20B family, FM-Delta reduces the original storage requirement from 423GB to 205GB, significantly saving cloud storage costs.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/96037"} +{"video_file": "ENLsNDfys0_39028207.mp4", "openreview_id": "ENLsNDfys0", "slideslive_id": 39028207, "venue": "nips2024", "title": "Novel Object Synthesis via Adaptive Text-Image Harmony", "status": "Poster", "keywords": "Text-to-image Generation; Diffusion Model; Object Editing; Combination", "tldr": "ATIH", "abstract": "In this paper, we study an object synthesis task that combines an object text with an object image to create a new object image. However, most diffusion models struggle with this task, \\textit{i.e.}, often generating an object that predominantly reflects either the text or the image due to an imbalance between their inputs. To address this issue, we propose a simple yet effective method called Adaptive Text-Image Harmony (ATIH) to generate novel and surprising objects. 
First, we introduce a scale factor and an injection step to balance text and image features in cross-attention and to preserve image information in self-attention during the text-image inversion diffusion process, respectively. Second, to better integrate object text and image, we design a balanced loss function with a noise parameter, ensuring both optimal editability and fidelity of the object image. Third, to adaptively adjust these parameters, we present a novel similarity score function that not only maximizes the similarities between the generated object image and the input text/image but also balances these similarities to harmonize text and image integration.
Extensive experiments demonstrate the effectiveness of our approach, showcasing remarkable object creations such as colobus-glass jar. https://xzr52.github.io/ATIH/", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96036"}
{"video_file": "ENlubvb262_39027418.mp4", "openreview_id": "ENlubvb262", "slideslive_id": 39027418, "venue": "nips2024", "title": "Learning Noisy Halfspaces with a Margin: Massart is No Harder than Random", "status": "Spotlight", "keywords": "pac learning;learning halfspaces;massart noise;sgd;robust learning", "tldr": "We give a simple efficient algorithm for learning halfspaces with Massart noise.", "abstract": "We study the problem of PAC learning $\\gamma$-margin halfspaces with Massart noise. We propose a simple proper learning algorithm, the Perspectron, that has sample complexity $\\widetilde{O}((\\epsilon\\gamma)^{-2})$ and achieves classification error at most $\\eta + \\epsilon$ where $\\eta$ is the Massart noise rate. Prior works (DGT19, CKMY20) came with worse sample complexity guarantees (in both $\\epsilon$ and $\\gamma$) or could only handle random classification noise (DDKWZ23,KITBMV23)--- a much milder noise assumption. We also show that our results extend to the more challenging setting of learning generalized linear models with a known link function under Massart noise, achieving a similar sample complexity to the halfspace case. This significantly improves upon the prior state-of-the-art in this setting due to CKMY20, who introduced this model.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96035"}
{"video_file": "EQHQzRJy75_39026939.mp4", "openreview_id": "EQHQzRJy75", "slideslive_id": 39026939, "venue": "nips2024", "title": "STONE: A Submodular Optimization Framework for Active 3D Object Detection", "status": "Poster", "keywords": "Active learning;3D object detection", "tldr": "Unified active 3D object detection framework based on submodular optimization.", "abstract": "3D object detection is fundamentally important for various emerging applications, including autonomous driving and robotics. A key requirement for training an accurate 3D object detector is the availability of a large amount of LiDAR-based point cloud data. Unfortunately, labeling point cloud data is extremely challenging, as accurate 3D bounding boxes and semantic labels are required for each potential object. This paper proposes a unified active 3D object detection framework, for greatly reducing the labeling cost of training 3D object detectors. Our framework is based on a novel formulation of submodular optimization, specifically tailored to the problem of active 3D object detection.
In particular, we address two fundamental challenges associated with active 3D object detection: data imbalance and the need to cover the distribution of the data, including LiDAR-based point cloud data of varying difficulty levels. Extensive experiments demonstrate that our method achieves state-of-the-art performance with high computational efficiency compared to existing active learning methods. The code is available at https://github.com/RuiyuM/STONE", "primary_area": "active_learning", "site": "https://neurips.cc/virtual/2024/poster/96033"}
{"video_file": "ES0Gj1KVUk_39027177.mp4", "openreview_id": "ES0Gj1KVUk", "slideslive_id": 39027177, "venue": "nips2024", "title": "Data subsampling for Poisson regression with pth-root-link", "status": "Poster", "keywords": "Poisson regression;subsampling;coresets;Lambert function", "tldr": "We show novel results on data subsampling for approximating Poisson regression via coresets, as well as their limitations.", "abstract": "We develop and analyze data subsampling techniques for Poisson regression, the standard model for count data $y \\in \\mathbb{N}$. In particular, we consider the Poisson generalized linear model with ID- and square root-link functions. We consider the method of \\emph{coresets}, which are small weighted subsets that approximate the loss function of Poisson regression up to a factor of $1 \\pm \\varepsilon$. We show $\\Omega(n)$ lower bounds against coresets for Poisson regression that continue to hold against arbitrary data reduction techniques up to logarithmic factors. By introducing a novel complexity parameter and a domain shifting approach, we show that sublinear coresets with $1 \\pm \\varepsilon$ approximation guarantee exist when the complexity parameter is small. In particular, the dependence on the number of input points can be reduced to polylogarithmic. We show that the dependence on other input parameters can also be bounded sublinearly, though not always logarithmically. In particular, we show that the square root-link admits an $O(\\log(y_{\\max}))$ dependence, where $y_{\\max}$ denotes the largest count presented in the data, while the ID-link requires a $\\Theta(y_{\\max}/\\log(y_{\\max}))$ dependence. As an auxiliary result for proving the tightness of the bound with respect to $y_{\\max}$ in the case of the ID-link, we show an improved bound on the principal branch of the Lambert $W_0$ function, which may be of independent interest. We further show the limitations of our analysis when $p$th degree root-link functions for $p \\geq 3$ are considered, which indicate that other analytical or computational methods would be required if such a generalization is even possible.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/96031"}
{"video_file": "EVw8Jh5Et9_39027885.mp4", "openreview_id": "EVw8Jh5Et9", "slideslive_id": 39027885, "venue": "nips2024", "title": "Dual Defense: Enhancing Privacy and Mitigating Poisoning Attacks in Federated Learning", "status": "Poster", "keywords": "federated learning;privacy preservation;secure aggregation;model poisoning attack", "tldr": "This paper presents a dual defense approach for enhancing privacy and mitigating poisoning attacks at once in federated learning", "abstract": "Federated learning (FL) is inherently susceptible to privacy breaches and poisoning attacks.
To tackle these challenges, researchers have separately devised secure aggregation mechanisms to protect data privacy and robust aggregation methods that withstand poisoning attacks. However, simultaneously addressing both concerns is challenging; secure aggregation facilitates poisoning attacks as most anomaly detection techniques require access to unencrypted local model updates, which are obscured by secure aggregation. The few recent efforts that simultaneously tackle both challenges often depend on the impractical assumption of non-colluding two-server setups that disrupt FL's topology, or on three-party computation, which introduces scalability issues, complicating deployment and application. To overcome this dilemma, this paper introduces a Dual Defense Federated learning (DDFed) framework. DDFed simultaneously boosts privacy protection and mitigates poisoning attacks, without introducing new participant roles or disrupting the existing FL topology. DDFed initially leverages cutting-edge fully homomorphic encryption (FHE) to securely aggregate model updates, without the impractical requirement for non-colluding two-server setups, and ensures strong privacy protection. Additionally, we propose a unique two-phase anomaly detection mechanism for encrypted model updates, featuring secure similarity computation and feedback-driven collaborative selection, with additional measures to prevent potential privacy breaches from Byzantine clients incorporated into the detection process. We conducted extensive experiments on various model poisoning attacks and FL scenarios, including both cross-device and cross-silo FL. Experiments on publicly available datasets demonstrate that DDFed successfully protects model privacy and effectively defends against model poisoning threats.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/96030"}
{"video_file": "EXuv4tVNa3_39026513.mp4", "openreview_id": "EXuv4tVNa3", "slideslive_id": 39026513, "venue": "nips2024", "title": "Enhancing Feature Diversity Boosts Channel-Adaptive Vision Transformers", "status": "Poster", "keywords": "vision transformer;representation learning;multi-channel imaging", "tldr": "We enhance Vision Transformers for multi-channel imaging by improving diverse representations.", "abstract": "Multi-Channel Imaging (MCI) contains an array of challenges for encoding useful feature representations not present in traditional images. For example, images from two different satellites may both contain RGB channels, but the remaining channels can be different for each imaging source. Thus, MCI models must support a variety of channel configurations at test time. Recent work has extended traditional visual encoders for MCI, such as Vision Transformers (ViT), by supplementing pixel information with an encoding representing the channel configuration. However, these methods treat each channel equally, i.e., they do not consider the unique properties of each channel type, which can result in needless and potentially harmful redundancies in the learned features. For example, if RGB channels are always present, the other channels can focus on extracting information that cannot be captured by the RGB channels. To this end, we propose DiChaViT, which aims to enhance the diversity in the learned features of MCI-ViT models. This is achieved through a novel channel sampling strategy that encourages the selection of more distinct channel sets for training.
Additionally, we employ regularization and initialization techniques to increase the likelihood that new information is learned from each channel. Many of our improvements are architecture agnostic and can be incorporated into new architectures as they are developed. Experiments on both satellite and cell microscopy datasets, CHAMMI, JUMP-CP, and So2Sat, report DiChaViT yields a 1.5 - 5.0% gain over the state-of-the-art. Our code is publicly available at https://github.com/chaudatascience/diverse_channel_vit.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96027"} +{"video_file": "EY2agT920S_39027794.mp4", "openreview_id": "EY2agT920S", "slideslive_id": 39027794, "venue": "nips2024", "title": "Rethinking the Power of Timestamps for Robust Time Series Forecasting: A Global-Local Fusion Perspective", "status": "Poster", "keywords": "Time Series Forecasting", "tldr": "We propose a plugin to enhance the robust prediction capability of time series forecasting backbones in the real world.", "abstract": "Time series forecasting has played a pivotal role across various industries, including finance, transportation, energy, healthcare, and climate. Due to the abundant seasonal information they contain, timestamps possess the potential to offer robust global guidance for forecasting techniques. However, existing works primarily focus on local observations, with timestamps being treated merely as an optional supplement that remains underutilized. When data gathered from the real world is polluted, the absence of global information will damage the robust prediction capability of these algorithms. To address these problems, we propose a novel framework named GLAFF. Within this framework, the timestamps are modeled individually to capture the global dependencies. Working as a plugin, GLAFF adaptively adjusts the combined weights for global and local information, enabling seamless collaboration with any time series forecasting backbone. Extensive experiments conducted on nine real-world datasets demonstrate that GLAFF significantly enhances the average performance of widely used mainstream forecasting models by 12.5%, surpassing the previous state-of-the-art method by 5.5%.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/96026"} +{"video_file": "EdXW71LvKE_39026217.mp4", "openreview_id": "EdXW71LvKE", "slideslive_id": 39026217, "venue": "nips2024", "title": "CRT-Fusion: Camera, Radar, Temporal Fusion Using Motion Information for 3D Object Detection", "status": "Poster", "keywords": "3D Object Detection;Sensor Fusion;Temporal Fusion;Radar;Camera", "tldr": "CRT-Fusion is a novel framework that significantly improves the accuracy and robustness of 3D object detection by effectively integrating radar-camera information and temporal cues, explicitly considering the motion of dynamic objects.", "abstract": "Accurate and robust 3D object detection is a critical component in autonomous vehicles and robotics. While recent radar-camera fusion methods have made significant progress by fusing information in the bird's-eye view (BEV) representation, they often struggle to effectively capture the motion of dynamic objects, leading to limited performance in real-world scenarios. In this paper, we introduce CRT-Fusion, a novel framework that integrates temporal information into radar-camera fusion to address this challenge. 
Our approach comprises three key modules: Multi-View Fusion (MVF), Motion Feature Estimator (MFE), and Motion Guided Temporal Fusion (MGTF). The MVF module fuses radar and image features within both the camera view and bird's-eye view, thereby generating a more precise unified BEV representation. The MFE module conducts two simultaneous tasks: estimation of pixel-wise velocity information and BEV segmentation. Based on the velocity and the occupancy score map obtained from the MFE module, the MGTF module aligns and fuses feature maps across multiple timestamps in a recurrent manner. By considering the motion of dynamic objects, CRT-Fusion can produce robust BEV feature maps, thereby improving detection accuracy and robustness. Extensive evaluations on the challenging nuScenes dataset demonstrate that CRT-Fusion achieves state-of-the-art performance for radar-camera-based 3D object detection. Our approach outperforms the previous best method in terms of NDS by +1.7%, while also surpassing the leading approach in mAP by +1.4%. These significant improvements in both metrics showcase the effectiveness of our proposed fusion strategy in enhancing the reliability and accuracy of 3D object detection.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/96022"} +{"video_file": "EeXcOYf3Lg_39024669.mp4", "openreview_id": "EeXcOYf3Lg", "slideslive_id": 39024669, "venue": "nips2024", "title": "SHMT: Self-supervised Hierarchical Makeup Transfer via Latent Diffusion Models", "status": "Poster", "keywords": "Makeup transfer;self-supervised learning;diffusion models", "tldr": "We propose a self-supervised hierarchical makeup transfer method that is flexible for both simple and complex makeup styles.", "abstract": "This paper studies the challenging task of makeup transfer, which aims to apply diverse makeup styles precisely and naturally to a given facial image. Due to the absence of paired data, current methods typically synthesize sub-optimal pseudo ground truths to guide the model training, resulting in low makeup fidelity. Additionally, different makeup styles generally have varying effects on the person face, but existing methods struggle to deal with this diversity. To address these issues, we propose a novel Self-supervised Hierarchical Makeup Transfer (SHMT) method via latent diffusion models. Following a \"decoupling-and-reconstruction\" paradigm, SHMT works in a self-supervised manner, freeing itself from the misguidance of imprecise pseudo-paired data. Furthermore, to accommodate a variety of makeup styles, hierarchical texture details are decomposed via a Laplacian pyramid and selectively introduced to the content representation. Finally, we design a novel Iterative Dual Alignment (IDA) module that dynamically adjusts the injection condition of the diffusion model, allowing the alignment errors caused by the domain gap between content and makeup representations to be corrected. Extensive quantitative and qualitative analyses demonstrate the effectiveness of our method. 
Our code is available at https://github.com/Snowfallingplum/SHMT.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96021"}
{"video_file": "EehS4erXWB_39025035.mp4", "openreview_id": "EehS4erXWB", "slideslive_id": 39025035, "venue": "nips2024", "title": "SE(3)-bi-equivariant Transformers for Point Cloud Assembly", "status": "Poster", "keywords": "equivariant neural networks;SE(3)-bi-equivariant transformer;point cloud assembly", "tldr": "A SE(3)-bi-equivariant and correspondence-free method for point cloud assembly.", "abstract": "Given a pair of point clouds, the goal of assembly is to recover a rigid transformation that aligns one point cloud to the other. This task is challenging because the point clouds may be non-overlapped, and they may have arbitrary initial positions. To address these difficulties, we propose a method, called SE(3)-bi-equivariant transformer (BITR), based on the SE(3)-bi-equivariance prior of the task: it guarantees that when the inputs are rigidly perturbed, the output will transform accordingly. Due to its equivariance property, BITR can not only handle non-overlapped PCs, but also guarantee robustness against initial positions. Specifically, BITR first extracts features of the inputs using a novel SE(3)×SE(3)-transformer, and then projects the learned feature to group SE(3) as the output. Moreover, we theoretically show that swap and scale equivariances can be incorporated into BITR, thus it further guarantees stable performance under scaling and swapping the inputs. We experimentally show the effectiveness of BITR in practical tasks.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96020"}
{"video_file": "EfpZNpkrm2_39025442.mp4", "openreview_id": "EfpZNpkrm2", "slideslive_id": 39025442, "venue": "nips2024", "title": "QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation", "status": "Poster", "keywords": "LLM;Language Model;PEFT;Finetuning;High Rank", "tldr": "We propose Quantum-informed Tensor Adaptation (QuanTA), a novel, easy-to-implement, high-rank fine-tuning method with no inference overhead for large-scale pre-trained language models.", "abstract": "We propose Quantum-informed Tensor Adaptation (QuanTA), a novel, easy-to-implement, fine-tuning method with no inference overhead for large-scale pre-trained language models. By leveraging quantum-inspired methods derived from quantum circuit structures, QuanTA enables efficient high-rank fine-tuning, surpassing the limitations of Low-Rank Adaptation (LoRA)---low-rank approximation may fail for complicated downstream tasks. Our approach is theoretically supported by the universality theorem and the rank representation theorem to achieve efficient high-rank adaptations. Experiments demonstrate that QuanTA significantly enhances commonsense reasoning, arithmetic reasoning, and scalability compared to traditional methods.
Furthermore, QuanTA shows superior performance with fewer trainable parameters compared to other approaches and can be designed to integrate with existing fine-tuning algorithms for further improvement, providing a scalable and efficient solution for fine-tuning large language models and advancing state-of-the-art in natural language processing.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/96019"} +{"video_file": "Ehsd856Ltb_39028192.mp4", "openreview_id": "Ehsd856Ltb", "slideslive_id": 39028192, "venue": "nips2024", "title": "Revisiting K-mer Profile for Effective and Scalable Genome Representation Learning", "status": "Spotlight", "keywords": "metagenomic binning;genome representation learning;dna sequences;genome analysis", "tldr": "We propose a lightweight and scalable model for performing metagenomic binning at the genome read level, relying only on the k-mer compositions of the DNA fragments.", "abstract": "Obtaining effective representations of DNA sequences is crucial for genome analysis. Metagenomic binning, for instance, relies on genome representations to cluster complex mixtures of DNA fragments from biological samples with the aim of determining their microbial compositions. In this paper, we revisit k-mer-based representations of genomes and provide a theoretical analysis of their use in representation learning. Based on the analysis, we propose a lightweight and scalable model for performing metagenomic binning at the genome read level, relying only on the k-mer compositions of the DNA fragments. We compare the model to recent genome foundation models and demonstrate that while the models are comparable in performance, the proposed model is significantly more effective in terms of scalability, a crucial aspect for performing metagenomic binning of real-world data sets.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/96018"} +{"video_file": "EjKNSErSMJ_39025856.mp4", "openreview_id": "EjKNSErSMJ", "slideslive_id": 39025856, "venue": "nips2024", "title": "Last-Iterate Convergence for Generalized Frank-Wolfe in Monotone Variational Inequalities", "status": "Poster", "keywords": "Monotone variational inequalities;generalized Frank-Wolfe method;last-iterate convergence;smoothed fictitious-play", "tldr": "This paper establishes last-iterate convergence rate for a generalized Frank-Wolfe algorithm for solving monotone variational inequality problems.", "abstract": "We study the convergence behavior of a generalized Frank-Wolfe algorithm in constrained (stochastic) monotone variational inequality (MVI) problems. In recent years, there have been numerous efforts to design algorithms for solving constrained MVI problems due to their connections with optimization, machine learning, and equilibrium computation in games. Most work in this domain has focused on extensions of simultaneous gradient play, with particular emphasis on understanding the convergence properties of extragradient and optimistic gradient methods. In contrast, we examine the performance of an algorithm from another well-known class of optimization algorithms: Frank-Wolfe. We show that a generalized variant of this algorithm achieves a fast $\\mathcal{O}(T^{-1/2})$ last-iterate convergence rate in constrained MVI problems. 
By drawing connections between our generalized Frank-Wolfe algorithm and the well-known smoothed fictitious play (FP) from game theory, we also derive a finite-sample convergence rate for smoothed FP in zero-sum matrix games. Furthermore, we demonstrate that a stochastic variant of the generalized Frank-Wolfe algorithm for MVI problems also converges in a last-iterate sense, albeit at a slower $\\mathcal{O}(T^{-1/6})$ convergence rate.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/96016"} +{"video_file": "Ejg4d4FVrs_39027340.mp4", "openreview_id": "Ejg4d4FVrs", "slideslive_id": 39027340, "venue": "nips2024", "title": "Elliptical Attention", "status": "Poster", "keywords": "attention;non-parametric kernel regression;robustness;representation collapse", "tldr": "We show that using a Mahalanobis metric within the attention mechanism reduces representation collapse, and improves robustness to contamination.", "abstract": "Pairwise dot-product self-attention is key to the success of transformers that achieve state-of-the-art performance across a variety of applications in language and vision. This dot-product self-attention computes attention weights among the input tokens using Euclidean distance, which makes the model prone to representation collapse and vulnerable to contaminated samples. In this paper, we propose using a Mahalanobis distance metric for computing the attention weights to stretch the underlying feature space in directions of high contextual relevance. In particular, we define a hyper-ellipsoidal neighborhood around each query to increase the attention weights of the tokens lying in the contextually important directions. We term this novel class of attention Elliptical Attention. Our Elliptical Attention provides two benefits: 1) reducing representation collapse and 2) enhancing the model's robustness as the Elliptical Attention pays more attention to contextually relevant information, rather than focusing on some small subset of informative features. We empirically demonstrate the advantages of Elliptical Attention over the baseline dot-product attention and state-of-the-art attention methods on various practical tasks, including object classification, image segmentation, and language modeling across different data modalities.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/96015"} +{"video_file": "Eok6HbcSRI_39024882.mp4", "openreview_id": "Eok6HbcSRI", "slideslive_id": 39024882, "venue": "nips2024", "title": "Fast Tree-Field Integrators: From Low Displacement Rank to Topological Transformers", "status": "Poster", "keywords": "Tree Metrics;Low Displacement Rank;Field Integrators;Topological Transformers;Graph Theory;Efficient Algorithms on Graphs", "tldr": "We present a new class of algorithms for the efficient integration of tensor fields defined on weighted trees, with several applications in ML, ranging from mesh modeling to training Topological Transformers.", "abstract": "We present a new class of fast polylog-linear algorithms based on the theory of structured matrices (in particular low displacement rank) for integrating tensor fields defined on weighted trees. Several applications of the resulting fast tree-field integrators (FTFIs) are presented, including: (a) approximation of graph metrics with tree metrics, (b) graph classification, (c) modeling on meshes, and finally (d) Topological Transformers (TTs) (Choromanski et al., 2022) for images. 
For Topological Transformers, we propose new relative position encoding (RPE) masking mechanisms with as few as three extra learnable parameters per Transformer layer, leading to 1.0-1.5%+ accuracy gains. Importantly, most of FTFIs are exact methods, thus numerically equivalent to their brute-force counterparts. When applied to graphs with thousands of nodes, those exact algorithms provide 5.7-13x speedups. We also provide an extensive theoretical analysis of our methods.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/96014"} +{"video_file": "EpusiLXfNd_39025947.mp4", "openreview_id": "EpusiLXfNd", "slideslive_id": 39025947, "venue": "nips2024", "title": "3D Structure Prediction of Atomic Systems with Flow-based Direct Preference Optimization", "status": "Poster", "keywords": "Flow Matching;Direct Preference Optimization;Geometric Graph Neural Networks;Structure Prediction", "tldr": "We propose FlowDPO, a novel framework to enhance Flow Matching models via Direct Preference Optimization on 3D structure prediction tasks.", "abstract": "Predicting high-fidelity 3D structures of atomic systems is a fundamental yet challenging problem in scientific domains. While recent work demonstrates the advantage of generative models in this realm, the exploration of different probability paths are still insufficient, and hallucinations during sampling are persistently occurring. To address these pitfalls, we introduce FlowDPO, a novel framework that explores various probability paths with flow matching models and further suppresses hallucinations using Direct Preference Optimization (DPO) for structure generation. Our approach begins with a pre-trained flow matching model to generate multiple candidate structures for each training sample. These structures are then evaluated and ranked based on their distance to the ground truth, resulting in an automatic preference dataset. Using this dataset, we apply DPO to optimize the original model, improving its performance in generating structures closely aligned with the desired reference distribution. As confirmed by our theoretical analysis, such paradigm and objective function are compatible with arbitrary Gaussian paths, exhibiting favorable universality. Extensive experimental results on antibodies and crystals demonstrate substantial benefits of our FlowDPO, highlighting its potential to advance the field of 3D structure prediction with generative models.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/96013"} +{"video_file": "Eu80DGuOcs_39027995.mp4", "openreview_id": "Eu80DGuOcs", "slideslive_id": 39027995, "venue": "nips2024", "title": "Understanding and Improving Training-free Loss-based Diffusion Guidance", "status": "Poster", "keywords": "Training-free guidance;universal guidance;motion diffusion", "tldr": "This paper examines the mechanisms and limitations of training-free guidance for diffusion models, proposing methods to address these challenges effectively.", "abstract": "Adding additional guidance to pretrained diffusion models has become an increasingly popular research area, with extensive applications in computer vision, reinforcement learning, and AI for science. Recently, several studies have proposed training-free loss-based guidance by using off-the-shelf networks pretrained on clean images. 
This approach enables zero-shot conditional generation for universal control formats, which appears to offer a free lunch in diffusion guidance. In this paper, we aim to develop a deeper understanding of training-free guidance, as well as overcome its limitations. We offer a theoretical analysis that supports training-free guidance from the perspective of optimization, distinguishing it from classifier-based (or classifier-free) guidance. To elucidate their drawbacks, we theoretically demonstrate that training-free guidance is more susceptible to misaligned gradients and exhibits slower convergence rates compared to classifier guidance. We then introduce a collection of techniques designed to overcome the limitations, accompanied by theoretical rationale and empirical evidence. Our experiments in image and motion generation confirm the efficacy of these techniques.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/96010"} +{"video_file": "EwWpAPzcay_39024423.mp4", "openreview_id": "EwWpAPzcay", "slideslive_id": 39024423, "venue": "nips2024", "title": "Effective Rank Analysis and Regularization for Enhanced 3D Gaussian Splatting", "status": "Poster", "keywords": "3D reconstruction;3D Gaussian Splatting;NeRF;Surface reconstruction;3DGS regularization", "tldr": "Effective Rank Analysis and Regularization for Enhanced 3D Gaussian Splatting", "abstract": "3D reconstruction from multi-view images is one of the fundamental challenges in computer vision and graphics. Recently, 3D Gaussian Splatting (3DGS) has emerged as a promising technique capable of real-time rendering with high-quality 3D reconstruction. This method utilizes 3D Gaussian representation and tile-based splatting techniques, bypassing the expensive neural field querying. Despite its potential, 3DGS encounters challenges such as needle-like artifacts, suboptimal geometries, and inaccurate normals caused by the Gaussians converging into anisotropic shapes with one dominant variance. We propose using the effective rank analysis to examine the shape statistics of 3D Gaussian primitives, and identify the Gaussians indeed converge into needle-like shapes with the effective rank 1. To address this, we introduce the effective rank as a regularization, which constrains the structure of the Gaussians. Our new regularization method enhances normal and geometry reconstruction while reducing needle-like artifacts. The approach can be integrated as an add-on module to other 3DGS variants, improving their quality without compromising visual fidelity. The project page is available at https://junhahyung.github.io/erankgs.github.io/.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96009"} +{"video_file": "ExeIyx6U0Z_39027163.mp4", "openreview_id": "ExeIyx6U0Z", "slideslive_id": 39027163, "venue": "nips2024", "title": "LLaNA: Large Language and NeRF Assistant", "status": "Poster", "keywords": "LLM;NeRF;VQA", "tldr": "We propose LLaNA, che first Multimodal Large Language Model (MLLM) able to perform NeRF-language tasks, such as NeRF captioning and NeRF QA.", "abstract": "Multimodal Large Language Models (MLLMs) have demonstrated an excellent understanding of images and 3D data. However, both modalities have shortcomings in holistically capturing the appearance and geometry of objects. 
Meanwhile, Neural Radiance Fields (NeRFs), which encode information within the weights of a simple Multi-Layer Perceptron (MLP), have emerged as an increasingly widespread modality that simultaneously encodes the geometry and photorealistic appearance of objects. This paper investigates the feasibility and effectiveness of ingesting NeRF into MLLM. We create LLaNA, the first general-purpose NeRF-language assistant capable of performing new tasks such as NeRF captioning and Q&A. Notably, our method directly processes the weights of the NeRF\u2019s MLP to extract information about the represented objects without the need to render images or materialize 3D data structures. Moreover, we build a dataset of NeRFs with text annotations for various NeRF-language tasks with no human intervention. Based on this dataset, we develop a benchmark to evaluate the NeRF understanding capability of our method. Results show that processing NeRF weights performs favourably against extracting 2D or 3D representations from NeRFs.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96007"}
{"video_file": "F6L23TNlFW_39028224.mp4", "openreview_id": "F6L23TNlFW", "slideslive_id": 39028224, "venue": "nips2024", "title": "Predicting Label Distribution from Ternary Labels", "status": "Poster", "keywords": "label distribution;label polysemy;multi-label;ternary label", "tldr": "Our paper proposes to predict label distribution from ternary labels and demonstrates the effectiveness theoretically and methodologically.", "abstract": "Label distribution learning is a powerful learning paradigm to deal with label polysemy and has been widely applied in many practical tasks. A significant obstacle to the effective utilization of label distribution is the substantial expenses of accurate quantifying the label distributions. To tackle this challenge, label enhancement methods automatically infer label distributions from more easily accessible multi-label data based on binary annotations. However, the binary annotation of multi-label data requires experts to accurately assess whether each label can describe the instance, which may diminish the annotating efficiency and heighten the risk of erroneous annotation since the relationship between the label and the instance is unclear in many practical scenarios. Therefore, we propose to predict label distribution from ternary labels, allowing experts to annotate labels in a three-way annotation scheme. They can annotate the label as \"$0$\" indicating \"uncertain relevant\" if it is difficult to definitively determine whether the label can describe the instance, in addition to the binary annotation of \"$1$\" indicating \"definitely relevant\" and \"$-1$\" indicating \"definitely irrelevant\". Both the theoretical and methodological studies are conducted for the proposed learning paradigm. In the theoretical part, we conduct a quantitative comparison of approximation error between ternary and binary labels to elucidate the superiority of ternary labels over binary labels. In the methodological part, we propose a Categorical distribution with monotonicity and orderliness to model the mapping from label description degrees to ternary labels, which can serve as a loss function or as a probability distribution, allowing most existing label enhancement methods to be adapted to our task. 
Finally, we experimentally demonstrate the effectiveness of our proposal.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/96005"} +{"video_file": "F738WY1Xm4_39027853.mp4", "openreview_id": "F738WY1Xm4", "slideslive_id": 39027853, "venue": "nips2024", "title": "Deep linear networks for regression are implicitly regularized towards flat minima", "status": "Poster", "keywords": "deep learning theory;sharpness;non-convex optimization;implicit regularization;gradient flow", "tldr": "Gradient flow implicitly regularizes deep linear networks towards flat minima, both for a small-scale and a residual initialization.", "abstract": "The largest eigenvalue of the Hessian, or sharpness, of neural networks is a key quantity to understand their optimization dynamics. In this paper, we study the sharpness of deep linear networks for univariate regression. Minimizers can have arbitrarily large sharpness, but not an arbitrarily small one. Indeed, we show a lower bound on the sharpness of minimizers, which grows linearly with depth. We then study the properties of the minimizer found by gradient flow, which is the limit of gradient descent with vanishing learning rate. We show an implicit regularization towards flat minima: the sharpness of the minimizer is no more than a constant times the lower bound. The constant depends on the condition number of the data covariance matrix, but not on width or depth. This result is proven both for a small-scale initialization and a residual initialization. Results of independent interest are shown in both cases. For small-scale initialization, we show that the learned weight matrices are approximately rank-one and that their singular vectors align. For residual initialization, convergence of the gradient flow for a Gaussian initialization of the residual network is proven. Numerical experiments illustrate our results and connect them to gradient descent with non-vanishing learning rate.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/96004"} +{"video_file": "F8DWffLkYG_39027250.mp4", "openreview_id": "F8DWffLkYG", "slideslive_id": 39027250, "venue": "nips2024", "title": "Designing Cell-Type-Specific Promoter Sequences Using Conservative Model-Based Optimization", "status": "Poster", "keywords": "ML applications;computational genomics;computational biology;model-based optimization", "tldr": "We propose a workflow to design cell-type-specific promoters while accounting for various practical considerations and demonstrate its efficacy in a difficult setting.", "abstract": "Gene therapies have the potential to treat disease by delivering therapeutic genetic cargo to disease-associated cells. One limitation to their widespread use is the lack of short regulatory sequences, or promoters, that differentially induce the expression of delivered genetic cargo in target cells, minimizing side effects in other cell types. Such cell-type-specific promoters are difficult to discover using existing methods, requiring either manual curation or access to large datasets of promoter-driven expression from both targeted and untargeted cells. Model-based optimization (MBO) has emerged as an effective method to design biological sequences in an automated manner, and has recently been used in promoter design methods. 
However, these methods have only been tested using large training datasets that are expensive to collect, and focus on designing promoters for markedly different cell types, overlooking the complexities associated with designing promoters for closely related cell types that share similar regulatory features. Therefore, we introduce a comprehensive framework for utilizing MBO to design promoters in a data-efficient manner, with an emphasis on discovering promoters for similar cell types. We use conservative objective models (COMs) for MBO and highlight practical considerations such as best practices for improving sequence diversity, getting estimates of model uncertainty, and choosing the optimal set of sequences for experimental validation. Using three leukemia cell lines (Jurkat, K562, and THP1), we show that our approach discovers many novel cell-type-specific promoters after experimentally validating the designed sequences. For K562 cells, in particular, we discover a promoter that has 75.85% higher cell-type-specificity than the best promoter from the initial dataset used to train our models. Our code and data will be available at https://github.com/young-geng/promoter_design.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/96002"} +{"video_file": "F8aSOovlEP_39027468.mp4", "openreview_id": "F8aSOovlEP", "slideslive_id": 39027468, "venue": "nips2024", "title": "MECD: Unlocking Multi-Event Causal Discovery in Video Reasoning", "status": "Spotlight", "keywords": "Video understanding; Video reasoning; Causal discovery; Causal inference", "tldr": "A multi-event causal discovery task for video reasoning.", "abstract": "Video causal reasoning aims to achieve a high-level understanding of video content from a causal perspective. However, current video reasoning tasks are limited in scope, primarily executed in a question-answering paradigm and focusing on short videos containing only a single event and simple causal relationships, lacking comprehensive and structured causality analysis for videos with multiple events. To fill this gap, we introduce a new task and dataset, Multi-Event Causal Discovery (MECD). It aims to uncover the causal relationships between events distributed chronologically across long videos. Given visual segments and textual descriptions of events, MECD requires identifying the causal associations between these events to derive a comprehensive, structured event-level video causal diagram explaining why and how the final result event occurred. To address MECD, we devise a novel framework inspired by the Granger Causality method, using an efficient mask-based event prediction model to perform an Event Granger Test, which estimates causality by comparing the predicted result event when premise events are masked versus unmasked. Furthermore, we integrate causal inference techniques such as front-door adjustment and counterfactual inference to address challenges in MECD like causality confounding and illusory causality. 
Experiments validate the effectiveness of our framework in providing causal relationships in multi-event videos, outperforming GPT-4o and VideoLLaVA by 5.7% and 4.1%, respectively.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/96001"} +{"video_file": "F9NDzHQtOl_39027113.mp4", "openreview_id": "F9NDzHQtOl", "slideslive_id": 39027113, "venue": "nips2024", "title": "Accelerating Diffusion Models with Parallel Sampling: Inference at Sub-Linear Time Complexity", "status": "Spotlight", "keywords": "diffusion model;parallel sampling;stochastic differential equations;probability flow ode", "tldr": "We propose new parallel inference algorithms for diffusion models using parallel sampling and rigorously prove that our algorithms enjoy sub-linear inference cost w.r.t. data dimension for both SDE and probability flow ODE implementations.", "abstract": "Diffusion models have become a leading method for generative modeling of both image and scientific data. As these models are costly to train and \\emph{evaluate}, reducing the inference cost for diffusion models remains a major goal. Inspired by the recent empirical success in accelerating diffusion models via the parallel sampling technique~\\cite{shih2024parallel}, we propose to divide the sampling process into $\\mathcal{O}(1)$ blocks with parallelizable Picard iterations within each block. Rigorous theoretical analysis reveals that our algorithm achieves $\\widetilde{\\mathcal{O}}(\\mathrm{poly} \\log d)$ overall time complexity, marking \\emph{the first implementation with provable sub-linear complexity w.r.t. the data dimension $d$}. Our analysis is based on a generalized version of Girsanov's theorem and is compatible with both the SDE and probability flow ODE implementations. Our results shed light on the potential of fast and efficient sampling of high-dimensional data on fast-evolving modern large-memory GPU clusters.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95999"} +{"video_file": "FAuFpGeLmx_39026542.mp4", "openreview_id": "FAuFpGeLmx", "slideslive_id": 39026542, "venue": "nips2024", "title": "Segmenting Watermarked Texts From Language Models", "status": "Poster", "keywords": "Large language models;Randomization test;Segmentation;Watermark", "tldr": "We develop statistical methods for detecting and identifying watermarked sub-strings generated from language models.", "abstract": "Watermarking is a technique that involves embedding nearly unnoticeable statistical signals within generated content to help trace its source. This work focuses on a scenario where an untrusted third-party user sends prompts to a trusted language model (LLM) provider, who then generates a text from their LLM with a watermark. This setup makes it possible for a detector to later identify the source of the text if the user publishes it. The user can modify the generated text by substitutions, insertions, or deletions. Our objective is to develop a statistical method to detect if a published text is LLM-generated from the perspective of a detector. We further propose a methodology to segment the published text into watermarked and non-watermarked sub-strings. The proposed approach is built upon randomization tests and change point detection techniques. We demonstrate that our method ensures Type I and Type II error control and can accurately identify watermarked sub-strings by finding the corresponding change point locations. 
To validate our technique, we apply it to texts generated by several language models with prompts extracted from Google's C4 dataset and obtain encouraging numerical results. We release all code publicly at https://github.com/doccstat/llm-watermark-cpd.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95996"} +{"video_file": "FBMsBdH0yz_39025521.mp4", "openreview_id": "FBMsBdH0yz", "slideslive_id": 39025521, "venue": "nips2024", "title": "Masked Hard-Attention Transformers Recognize Exactly the Star-Free Languages", "status": "Poster", "keywords": "Transformers;Formal Language Theory;Logic;Automata;Expressivity", "tldr": "We prove masked hard-attention transformers recognize exactly the star-free regular languages.", "abstract": "The expressive power of transformers over inputs of unbounded size can be studied through their ability to recognize classes of formal languages. In this paper, we establish exact characterizations of transformers with hard attention (in which all attention is focused on exactly one position) and attention masking (in which each position only attends to positions on one side). With strict masking (each position cannot attend to itself) and without position embeddings, these transformers are expressively equivalent to linear temporal logic (LTL), which defines exactly the star-free languages. A key technique is the use of Boolean RASP as a convenient intermediate language between transformers and LTL. We then take numerous results known for LTL and apply them to transformers, showing how position embeddings, strict masking, and depth all increase expressive power.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95994"} +{"video_file": "FFW6rPz48Z_39027315.mp4", "openreview_id": "FFW6rPz48Z", "slideslive_id": 39027315, "venue": "nips2024", "title": "Analysing Multi-Task Regression via Random Matrix Theory with Application to Time Series Forecasting", "status": "Spotlight", "keywords": "Random Matrix Theory ; Optimization ; Regularization ; Multi-task regression ; Multi-task learning ; Multivariate Time Series Forecasting", "tldr": "This paper introduces a new multi-task regression framework using random matrix theory for improved performance estimation and presents a regularization-based optimization with empirical methods, enhancing univariate models.", "abstract": "In this paper, we introduce a novel theoretical framework for multi-task regression, applying random matrix theory to provide precise performance estimations, under high-dimensional, non-Gaussian data distributions. We formulate a multi-task optimization problem as a regularization technique to enable single-task models to leverage multi-task learning information. We derive a closed-form solution for multi-task optimization in the context of linear models. Our analysis provides valuable insights by linking the multi-task learning performance to various model statistics such as raw data covariances, signal-generating hyperplanes, noise levels, as well as the size and number of datasets. We finally propose a consistent estimation of training and testing errors, thereby offering a robust foundation for hyperparameter optimization in multi-task regression scenarios. 
Experimental validations on both synthetic and real-world datasets in regression and multivariate time series forecasting demonstrate improvements on univariate models, incorporating our method into the training loss and thus leveraging multivariate information.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95988"} +{"video_file": "FGTDe6EA0B_39026720.mp4", "openreview_id": "FGTDe6EA0B", "slideslive_id": 39026720, "venue": "nips2024", "title": "Language Generation in the Limit", "status": "Spotlight", "keywords": "language generation;large language models;enumeration", "tldr": "In a theoretical model, we show that language generation is always possible in the limit, in contrast to classical impossibility results for language identification in the limit.", "abstract": "Although current large language models are complex, the most basic specifications of the underlying language generation problem itself are simple to state: given a finite set of training samples from an unknown language, produce valid new strings from the language that don't already appear in the training data. Here we ask what we can conclude about language generation using only this specification, without further assumptions. In particular, suppose that an adversary enumerates the strings of an unknown target language L that is known only to come from one of a possibly infinite list of candidates. A computational agent is trying to learn to generate from this language; we say that the agent generates from $L$ in the limit if after some finite point in the enumeration of $L$, the agent is able to produce new elements that come exclusively from $L$ and that have not yet been presented by the adversary. Our main result is that there is an agent that is able to generate in the limit for every countable list of candidate languages. This contrasts dramatically with negative results due to Gold and Angluin in a well-studied model of language learning where the goal is to identify an unknown language from samples; the difference between these results suggests that identifying a language is a fundamentally different problem than generating from it.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95986"} +{"video_file": "FIs87Iro9j_39026954.mp4", "openreview_id": "FIs87Iro9j", "slideslive_id": 39026954, "venue": "nips2024", "title": "ProxyFusion: Face Feature Aggregation Through Sparse Experts", "status": "Poster", "keywords": "Feature Fusion;Face Recognition;Pooling", "tldr": "Efficient face feature fusion using sparse experts for robust recognition in challenging environments.", "abstract": "Face feature fusion is indispensable for robust face recognition, particularly in scenarios involving long-range, low-resolution media (unconstrained environments) where not all frames or features are equally informative. Existing methods often rely on large intermediate feature maps or face metadata information, making them incompatible with legacy biometric template databases that store pre-computed features. Additionally, real-time inference and generalization to large probe sets remains challenging. To address these limitations, we introduce a linear time O(N) proxy based sparse expert selection and pooling approach for context driven feature-set attention. 
Our approach is order invariant on the feature-set, generalizes to large sets, is compatible with legacy template stores, and utilizes significantly less parameters making it suitable real-time inference and edge use-cases. Through qualitative experiments, we demonstrate that ProxyFusion learns discriminative information for importance weighting of face features without relying on intermediate features. Quantitative evaluations on challenging low-resolution face verification datasets such as IARPA BTS3.1 and DroneSURF show the superiority of ProxyFusion in unconstrained long-range face recognition setting. Our code and pretrained models are available at: https://github.com/bhavinjawade/ProxyFusion", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95985"} +{"video_file": "FJlrSZBMCD_39025671.mp4", "openreview_id": "FJlrSZBMCD", "slideslive_id": 39025671, "venue": "nips2024", "title": "Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Models", "status": "Poster", "keywords": "distillation;mamba;sub-quadratic;ssm;state-space-models", "tldr": "We show that we can leverage pre-trained Transformers to create highly performant state space models using very little training data", "abstract": "Transformer architectures have become a dominant paradigm for domains like language modeling but suffer in many inference settings due to their quadratic-time self-attention. Recently proposed subquadratic architectures, such as Mamba, have shown promise, but have been pretrained with substantially less computational resources than the strongest Transformer models. In this work, we present a method that is able to distill a pretrained Transformer architecture into alternative architectures such as state space models (SSMs). The key idea to our approach is that we can view both Transformers and SSMs as applying different forms of mixing matrices over the token sequences. We can thus progressively distill the Transformer architecture by matching different degrees of granularity in the SSM: first matching the mixing matrices themselves, then the hidden units at each block, and finally the end-to-end predictions. Our method, called MOHAWK, is able to distill a Mamba-2 variant based on the Phi-1.5 architecture (Phi-Mamba) using only 3B tokens. Despite using less than 1% of the training data typically used to train models from scratch, Phi-Mamba boasts substantially stronger performance compared to all past open-source non-Transformer models. 
MOHAWK allows models like SSMs to leverage computational resources invested in training Transformer-based architectures, highlighting a new avenue for building such models.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95984"} +{"video_file": "FLNnlfBGMo_39027766.mp4", "openreview_id": "FLNnlfBGMo", "slideslive_id": 39027766, "venue": "nips2024", "title": "Efficient Prompt Optimization Through the Lens of Best Arm Identification", "status": "Poster", "keywords": "Prompt Optimization; Best-arm Identification; Limited Budget; Large Language Models", "tldr": "This work introduces a framework that harnesses the power of fixed-budget best arm identification to efficiently perform prompt optimization for large language models (LLMs), which achieves outstanding performance across tasks and targeted LLMs.", "abstract": "The remarkable instruction-following capability of large language models (LLMs) has sparked a growing interest in automatically finding good prompts, i.e., prompt optimization. Most existing works follow the scheme of selecting from a pre-generated pool of candidate prompts. However, these designs mainly focus on the generation strategy, while limited attention has been paid to the selection method. Especially, the cost incurred during the selection (e.g., accessing LLM and evaluating the responses) is rarely explicitly considered. To overcome this limitation, this work provides a principled framework, TRIPLE, to efficiently perform prompt selection under an explicit budget constraint. TRIPLE is built on a novel connection established between prompt optimization and fixed-budget best arm identification (BAI-FB) in multi-armed bandits (MAB); thus, it is capable of leveraging the rich toolbox from BAI-FB systematically and also incorporating unique characteristics of prompt optimization. Extensive experiments on multiple well-adopted tasks using various LLMs demonstrate the remarkable performance improvement of TRIPLE over baselines while satisfying the limited budget constraints. As an extension, variants of TRIPLE are proposed to efficiently select examples for few-shot prompts, also achieving superior empirical performance.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/95983"} +{"video_file": "FNOBf6JM7r_39028827.mp4", "openreview_id": "FNOBf6JM7r", "slideslive_id": 39028827, "venue": "nips2024", "title": "Stabilizing Linear Passive-Aggressive Online Learning with Weighted Reservoir Sampling", "status": "Poster", "keywords": "online learning;passive aggressive;weighted reservoir sampling;stability", "tldr": "We combine weighted reservoir sampling with passive-aggressive online algorithms to dramatically mitigate test accuracy fluctuations.", "abstract": "Online learning methods, like the seminal Passive-Aggressive (PA) classifier, are still highly effective for high-dimensional streaming data, out-of-core processing, and other throughput-sensitive applications. Many such algorithms rely on fast adaptation to individual errors as a key to their convergence. While such algorithms enjoy low theoretical regret, in real-world deployment they can be sensitive to individual outliers that cause the algorithm to over-correct. When such outliers occur at the end of the data stream, this can cause the final solution to have unexpectedly low accuracy. 
We design a weighted reservoir sampling (WRS) approach to obtain a stable ensemble model from the sequence of solutions without requiring additional passes over the data, hold-out sets, or a growing amount of memory. Our key insight is that good solutions tend to be error-free for more iterations than bad solutions, and thus, the number of passive rounds provides an estimate of a solution's relative quality. Our reservoir thus contains $K$ previous intermediate weight vectors with high survival times. We demonstrate our WRS approach on the Passive-Aggressive Classifier (PAC) and First-Order Sparse Online Learning (FSOL), where our method consistently and significantly outperforms the unmodified approach. We show that the risk of the ensemble classifier is bounded with respect to the regret of the underlying online learning method.", "primary_area": "online_learning", "site": "https://neurips.cc/virtual/2024/poster/95981"} +{"video_file": "FNtsZLwkGr_39027454.mp4", "openreview_id": "FNtsZLwkGr", "slideslive_id": 39027454, "venue": "nips2024", "title": "Pruning neural network models for gene regulatory dynamics using data and domain knowledge", "status": "Poster", "keywords": "neural network pruning;sparsification;domain knowledge;gene regulation", "tldr": "We leverage domain knowledge to inform neural network pruning, thereby obtaining interpretable models that align with known biology", "abstract": "The practical utility of machine learning models in the sciences often hinges on their interpretability. It is common to assess a model's merit for scientific discovery, and thus novel insights, by how well it aligns with already available domain knowledge - a dimension that is currently largely disregarded in the comparison of neural network models. While pruning can simplify deep neural network architectures and excels in identifying sparse models, as we show in the context of gene regulatory network inference, state-of-the-art techniques struggle with biologically meaningful structure learning. To address this issue, we propose DASH, a generalizable framework that guides network pruning by using domain-specific structural information in model fitting and leads to sparser, better interpretable models that are more robust to noise. Using both synthetic data with ground truth information, as well as real-world gene expression data, we show that DASH, using knowledge about gene interaction partners within the putative regulatory network, outperforms general pruning methods by a large margin and yields deeper insights into the biological systems being studied.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/95980"} +{"video_file": "FTpKGuxEfy_39028674.mp4", "openreview_id": "FTpKGuxEfy", "slideslive_id": 39028674, "venue": "nips2024", "title": "Vision Foundation Model Enables Generalizable Object Pose Estimation", "status": "Poster", "keywords": "Object Pose Estimation;Vision Foundation Model", "tldr": "The paper presents a new framework for generalizable object pose estimation.", "abstract": "Object pose estimation plays a crucial role in robotic manipulation, however, its practical applicability still suffers from limited generalizability. This paper addresses the challenge of generalizable object pose estimation, particularly focusing on category-level object pose estimation for unseen object categories. Current methods either require impractical instance-level training or are confined to predefined categories, limiting their applicability. 
We propose VFM-6D, a novel framework that explores harnessing existing vision and language models, to elaborate object pose estimation into two stages: category-level object viewpoint estimation and object coordinate map estimation. Based on the two-stage framework, we introduce a 2D-to-3D feature lifting module and a shape-matching module, both of which leverage pre-trained vision foundation models to improve object representation and matching accuracy. VFM-6D is trained on cost-effective synthetic data and exhibits superior generalization capabilities. It can be applied to both instance-level unseen object pose estimation and category-level object pose estimation for novel categories. Evaluations on benchmark datasets demonstrate the effectiveness and versatility of VFM-6D in various real-world scenarios.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95972"}
{"video_file": "FVgCwcwpJw_39026806.mp4", "openreview_id": "FVgCwcwpJw", "slideslive_id": 39026806, "venue": "nips2024", "title": "Policy Improvement using Language Feedback Models", "status": "Poster", "keywords": "instruction following;language feedback;language grounding;learning feedback model;imitation learning", "tldr": "We train small and efficient language feedback models to identify productive behaviour in grounded instruction following environments, then imitate this behaviour to improve policy performance.", "abstract": "We introduce Language Feedback Models (LFMs) that identify desirable behaviour --- actions that help achieve tasks specified in the instruction - for imitation learning in instruction following. To train LFMs, we obtain feedback from Large Language Models (LLMs) on visual trajectories verbalized to language descriptions. First, by using LFMs to identify desirable behaviour to imitate, we improve in task-completion rate over strong behavioural cloning baselines on three distinct language grounding environments (Touchdown, ScienceWorld, and ALFWorld). Second, LFMs outperform using LLMs as experts to directly predict actions, when controlling for the number of LLM output tokens. Third, LFMs generalize to unseen environments, improving task-completion rate by 3.5-12.0% through one round of adaptation. Finally, LFMs can be modified to provide human-interpretable feedback without performance loss, allowing human verification of desirable behaviour for imitation learning.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95969"}
{"video_file": "FXJDcriMYH_39028101.mp4", "openreview_id": "FXJDcriMYH", "slideslive_id": 39028101, "venue": "nips2024", "title": "Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training", "status": "Spotlight", "keywords": "Efficient LLM pre-training;Model growth", "tldr": "We systematically study model growth techniques for efficient LLM pre-training, finding a depth-wise stacking operator with great acceleration and scalability, and propose clear guidelines for its use.", "abstract": "LLMs are computationally expensive to pre-train due to their large scale. Model growth emerges as a promising approach by leveraging smaller models to accelerate the training of larger ones. However, the viability of these model growth methods in efficient LLM pre-training remains underexplored. This work identifies three critical $\\underline{O}$bstacles: ($O$1) lack of comprehensive evaluation, ($O$2) untested viability for scaling, and ($O$3) lack of empirical guidelines. 
To tackle $O$1, we summarize existing approaches into four atomic growth operators and systematically evaluate them in a standardized LLM pre-training setting. Our findings reveal that a depthwise stacking operator, called $G_{\\text{stack}}$, exhibits remarkable acceleration in training, leading to decreased loss and improved overall performance on eight standard NLP benchmarks compared to strong baselines. Motivated by these promising results, we conduct extensive experiments to delve deeper into $G_{\\text{stack}}$ to address $O$2 and $O$3. For $O$2 (untested scalability), our study shows that $G_{\\text{stack}}$ is scalable and consistently performs well, with experiments up to 7B LLMs after growth and pre-training LLMs with 750B tokens. For example, compared to a conventionally trained 7B model using 300B tokens, our $G_{\\text{stack}}$ model converges to the same loss with 194B tokens, resulting in a 54.6% speedup. We further address $O$3 (lack of empirical guidelines) by formalizing guidelines to determine growth timing and growth factor for $G_{\\text{stack}}$, making it practical in general LLM pre-training. We also provide in-depth discussions and comprehensive ablation studies of $G_{\\text{stack}}$. Our code and pre-trained model are available at https://llm-stacking.github.io/.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95968"}
{"video_file": "FY6vPtITtE_39025555.mp4", "openreview_id": "FY6vPtITtE", "slideslive_id": 39025555, "venue": "nips2024", "title": "The Challenges of the Nonlinear Regime for Physics-Informed Neural Networks", "status": "Poster", "keywords": "Physics-Informed Neural Networks;Neural Tangent Kernel;Nonlinear PDEs;Second-order optimization", "tldr": "A Neural Tangent Kernel analysis of the training dynamics of Physics-Informed Neural Networks for nonlinear PDEs.", "abstract": "The Neural Tangent Kernel (NTK) viewpoint is widely employed to analyze the training dynamics of overparameterized Physics-Informed Neural Networks (PINNs). However, unlike the case of linear Partial Differential Equations (PDEs), we show how the NTK perspective falls short in the nonlinear scenario. Specifically, we establish that the NTK yields a random matrix at initialization that is not constant during training, contrary to conventional belief. Another significant difference from the linear regime is that, even in the idealistic infinite-width limit, the Hessian does not vanish and hence it cannot be disregarded during training. This motivates the adoption of second-order optimization methods. We explore the convergence guarantees of such methods in both linear and nonlinear cases, addressing challenges such as spectral bias and slow convergence. 
Every theoretical result is supported by numerical examples with both linear and nonlinear PDEs, and we highlight the benefits of second-order methods in benchmark test cases.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/95966"} +{"video_file": "FaNhyXY6Y1_39024939.mp4", "openreview_id": "FaNhyXY6Y1", "slideslive_id": 39024939, "venue": "nips2024", "title": "Artemis: Towards Referential Understanding in Complex Videos", "status": "Poster", "keywords": "Video Referring;Multimodal;RoI Selection Mechanism", "tldr": "This paper proposes a challenging setting for video-based referring and establishes an effective MLLM named Artemis.", "abstract": "Videos carry rich visual information including object description, action, interaction, etc., but the existing multimodal large language models (MLLMs) fell short in referential understanding scenarios such as video-based referring. In this paper, we present Artemis, an MLLM that pushes video-based referential understanding to a finer level. Given a video, Artemis receives a natural-language question with a bounding box in any video frame and describes the referred target in the entire video. The key to achieving this goal lies in extracting compact, target-specific video features, where we set a solid baseline by tracking and selecting spatiotemporal features from the video. We train Artemis on the newly established ViderRef45K dataset with 45K video-QA pairs and design a computationally efficient, three-stage training procedure. Results are promising both quantitatively and qualitatively. Additionally, we show that Artemis can be integrated with video grounding and text summarization tools to understand more complex scenarios. Code and data are available at https://github.com/NeurIPS24Artemis/Artemis.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95960"} +{"video_file": "Ffb30OVVCa_39027889.mp4", "openreview_id": "Ffb30OVVCa", "slideslive_id": 39027889, "venue": "nips2024", "title": "Is One GPU Enough? Pushing Image Generation at Higher-Resolutions with Foundation Models.", "status": "Poster", "keywords": "diffusion model;generative AI;image generation;foundation models;higher resolution", "tldr": "Pushing Image Generation at Higher-Resolutions with Foundation Models using a single GPU", "abstract": "In this work, we introduce Pixelsmith, a zero-shot text-to-image generative framework to sample images at higher resolutions with a single GPU. We are the first to show that it is possible to scale the output of a pre-trained diffusion model by a factor of 1000, opening the road to gigapixel image generation at no extra cost. Our cascading method uses the image generated at the lowest resolution as baseline to sample at higher resolutions. For the guidance, we introduce the Slider, a mechanism that fuses the overall structure contained in the first-generated image with enhanced fine details. At each inference step, we denoise patches rather than the entire latent space, minimizing memory demands so that a single GPU can handle the process, regardless of the image's resolution. 
Our experimental results show that this method not only achieves higher quality and diversity compared to existing techniques but also reduces sampling time and ablation artifacts.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95952"}
{"video_file": "FisyQfoJCm_39024505.mp4", "openreview_id": "FisyQfoJCm", "slideslive_id": 39024505, "venue": "nips2024", "title": "MoGenTS: Motion Generation based on Spatial-Temporal Joint Modeling", "status": "Poster", "keywords": "Motion Generation;Spation-Temporal;Joint", "tldr": "Human Motion Generation based on Spatial-Temporal Joint Modeling", "abstract": "Motion generation from discrete quantization offers many advantages over continuous regression, but at the cost of inevitable approximation errors. Previous methods usually quantize the entire body pose into one code, which not only faces the difficulty in encoding all joints within one vector but also loses the spatial relationship between different joints. Differently, in this work we quantize each individual joint into one vector, which i) simplifies the quantization process as the complexity associated with a single joint is markedly lower than that of the entire pose; ii) maintains a spatial-temporal structure that preserves both the spatial relationships among joints and the temporal movement patterns; iii) yields a 2D token map, which enables the application of various 2D operations widely used in 2D images. Grounded in the 2D motion quantization, we build a spatial-temporal modeling framework, where 2D joint VQVAE, temporal-spatial 2D masking technique, and spatial-temporal 2D attention are proposed to take advantage of spatial-temporal signals among the 2D tokens. Extensive experiments demonstrate that our method significantly outperforms previous methods across different datasets, with a 26.6% decrease of FID on HumanML3D and a 29.9% decrease on KIT-ML.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95951"}
{"video_file": "FlcdW7NPRY_39026967.mp4", "openreview_id": "FlcdW7NPRY", "slideslive_id": 39026967, "venue": "nips2024", "title": "Approaching Human-Level Forecasting with Language Models", "status": "Poster", "keywords": "langauge models;forecasting;information retrieval;retrieval augmentation", "tldr": "We present the first ML system that can forecast at near human levels.", "abstract": "Forecasting future events is important for policy and decision making. In this work, we study whether language models (LMs) can forecast at the level of competitive human forecasters. Towards this goal, we develop a retrieval-augmented LM system designed to automatically search for relevant information, generate forecasts, and aggregate predictions. To facilitate our study, we collect a large dataset of questions from competitive forecasting platforms. Under a test set published after the knowledge cut-offs of our LMs, we evaluate the end-to-end performance of our system against the aggregates of human forecasts. On average, the system nears the crowd aggregate of competitive forecasters and, in a certain relaxed setting, surpasses it. 
Our work suggests that using LMs to forecasts the future could provide accurate predictions at scale and help to inform institutional decision making.", "primary_area": "machine_learning_for_social_sciences", "site": "https://neurips.cc/virtual/2024/poster/95949"}
{"video_file": "FmNoFIImZG_39026671.mp4", "openreview_id": "FmNoFIImZG", "slideslive_id": 39026671, "venue": "nips2024", "title": "TabEBM: A Tabular Data Augmentation Method with Distinct Class-Specific Energy-Based Models", "status": "Poster", "keywords": "tabular data;data augmentation;synthetic data generation;energy based model", "tldr": "We introduce a new data augmentation method for tabular data, which trains class-specific generators.", "abstract": "Data collection is often difficult in critical fields such as medicine, physics, and chemistry, yielding typically only small tabular datasets. However, classification methods tend to struggle with these small datasets, leading to poor predictive performance. Increasing the training set with additional synthetic data, similar to data augmentation in images, is commonly believed to improve downstream tabular classification performance. However, current tabular generative methods that learn either the joint distribution $p(x,y)$ or the class-conditional distribution $p(x \\mid y)$ often overfit on small datasets, resulting in poor-quality synthetic data, usually worsening classification performance compared to using real data alone. To solve these challenges, we introduce TabEBM, a novel class-conditional generative method using Energy-Based Models (EBMs). Unlike existing tabular methods that use a shared model to approximate all class-conditional densities, our key innovation is to create distinct EBM generative models for each class, each modelling its class-specific data distribution individually. This approach creates robust energy landscapes, even in ambiguous class distributions. Our experiments show that TabEBM generates synthetic data with higher quality and better statistical fidelity than existing methods. When used for data augmentation, our synthetic data consistently leads to improved classification performance across diverse datasets of various sizes, especially small ones. Code is available at https://github.com/andreimargeloiu/TabEBM.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/95948"}
{"video_file": "Fp3JVz5XE7_39025791.mp4", "openreview_id": "Fp3JVz5XE7", "slideslive_id": 39025791, "venue": "nips2024", "title": "Federated Black-Box Adaptation for Semantic Segmentation", "status": "Poster", "keywords": "Federated Learning;Blackbox Learning;Split Networks;Segmentation", "tldr": "Federated semantic segmentation without transfer of gradient or model weights, enabling more privacy preserving networks", "abstract": "Federated Learning (FL) is a form of distributed learning that allows multiple institutions or clients to collaboratively learn a global model to solve a task. This allows the model to utilize the information from every institute while preserving data privacy. However, recent studies show that the promise of protecting the privacy of data is not upheld by existing methods and that it is possible to recreate the training data from the different institutions. This is done by utilizing gradients transferred between the clients and the global server during training or by knowing the model architecture at the client end. 
In this paper, we propose a federated learning framework for semantic segmentation without knowing the model architecture nor transferring gradients between the client and the server, thus enabling better privacy preservation. We propose \\textit{BlackFed} - a black-box adaptation of neural networks that utilizes zero order optimization (ZOO) to update the client model weights and first order optimization (FOO) to update the server weights. We evaluate our approach on several computer vision and medical imaging datasets to demonstrate its effectiveness. To the best of our knowledge, this work is one of the first works in employing federated learning for segmentation, devoid of gradients or model information exchange. Code: https://github.com/JayParanjape/blackfed/tree/master", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95946"} +{"video_file": "FqWyzyErVT_39028790.mp4", "openreview_id": "FqWyzyErVT", "slideslive_id": 39028790, "venue": "nips2024", "title": "Federated Transformer: Multi-Party Vertical Federated Learning on Practical Fuzzily Linked Data", "status": "Poster", "keywords": "vertical federated learning;federated learning;transformer;record linkage;entity alignment;differential privacy;fuzzy alignment", "tldr": "We introduce the Federated Transformer (FeT), a novel framework that supports multi-party Vertical Federated Learning (VFL) with fuzzy identifiers, which surpasses existing models in performance and privacy.", "abstract": "Federated Learning (FL) is an evolving paradigm that enables multiple parties to collaboratively train models without sharing raw data. Among its variants, Vertical Federated Learning (VFL) is particularly relevant in real-world, cross-organizational collaborations, where distinct features of a shared instance group are contributed by different parties. In these scenarios, parties are often linked using fuzzy identifiers, leading to a common practice termed as multi-party fuzzy VFL. Existing models generally address either multi-party VFL or fuzzy VFL between two parties. Extending these models to practical multi-party fuzzy VFL typically results in significant performance degradation and increased costs for maintaining privacy. To overcome these limitations, we introduce the Federated Transformer (FeT), a novel framework that supports multi-party VFL with fuzzy identifiers. FeT innovatively encodes these identifiers into data representations and employs a transformer architecture distributed across different parties, incorporating three new techniques to enhance performance. Furthermore, we have developed a multi-party privacy framework for VFL that integrates differential privacy with secure multi-party computation, effectively protecting local representations while minimizing associated utility costs. Our experiments demonstrate that the FeT surpasses the baseline models by up to 46% in terms of accuracy when scaled to 50 parties. 
Additionally, in two-party fuzzy VFL settings, FeT also shows improved performance and privacy over cutting-edge VFL models.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95945"} +{"video_file": "FsA0OSsdzJ_39027930.mp4", "openreview_id": "FsA0OSsdzJ", "slideslive_id": 39027930, "venue": "nips2024", "title": "Structured Learning of Compositional Sequential Interventions", "status": "Poster", "keywords": "Causality;sequential data", "tldr": "We describe a way of identifying and learning the effect of combinations of interventions in a sequential setup, in the regime of sparse data where limited combinations are jointly observed.", "abstract": "We consider sequential treatment regimes where each unit is exposed to combinations of interventions over time. When interventions are described by qualitative labels, such as \"close schools for a month due to a pandemic\" or \"promote this podcast to this user during this week\", it is unclear which appropriate structural assumptions allow us to generalize behavioral predictions to previously unseen combinations of interventions. Standard black-box approaches mapping sequences of categorical variables to outputs are applicable, but they rely on poorly understood assumptions on how reliable generalization can be obtained, and may underperform under sparse sequences, temporal variability, and large action spaces. To approach that, we pose an explicit model for composition, that is, how the effect of sequential interventions can be isolated into modules, clarifying which data conditions allow for the identification of their combined effect at different units and time steps. We show the identification properties of our compositional model, inspired by advances in causal matrix factorization methods. Our focus is on predictive models for novel compositions of interventions instead of matrix completion tasks and causal effect estimation. We compare our approach to flexible but generic black-box models to illustrate how structure aids prediction in sparse data conditions.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/95943"} +{"video_file": "FsdB3I9Y24_39025888.mp4", "openreview_id": "FsdB3I9Y24", "slideslive_id": 39025888, "venue": "nips2024", "title": "Constrained Synthesis with Projected Diffusion Models", "status": "Poster", "keywords": "Constraint satisfaction;Generative diffusion models;physics-informed models", "tldr": "We propose an alteration of the sampling step in diffusion models to generate outputs that satisfy desired constraints and physical principles", "abstract": "This paper introduces an approach to endow generative diffusion processes the ability to satisfy and certify compliance with constraints and physical principles. The proposed method recast the traditional sampling process of generative diffusion models as a constrained optimization problem, steering the generated data distribution to remain within a specified region to ensure adherence to the given constraints. 
These capabilities are validated on applications featuring both convex and challenging, non-convex, constraints as well as ordinary differential equations, in domains spanning from synthesizing new materials with precise morphometric properties, generating physics-informed motion, optimizing paths in planning scenarios, and human motion synthesis.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95942"} +{"video_file": "FuTfZK7PK3_39027531.mp4", "openreview_id": "FuTfZK7PK3", "slideslive_id": 39027531, "venue": "nips2024", "title": "The Power of Extrapolation in Federated Learning", "status": "Poster", "keywords": "Federated Learning;Optimization", "tldr": "We extend the FedProx algorithm to its extrapolated counterpart with better theoretical guarantees.", "abstract": "We propose and study several server-extrapolation strategies for enhancing the theoretical and empirical convergence properties of the popular federated learning optimizer FedProx [Li et al., 2020]. While it has long been known that some form of extrapolation can help in the practice of FL, only a handful of works provide any theoretical guarantees. The phenomenon seems elusive, and our current theoretical understanding remains severely incomplete. In our work, we focus on smooth convex or strongly convex problems in the interpolation regime. In particular, we propose Extrapolated FedProx (FedExProx), and study three extrapolation strategies: a constant strategy (depending on various smoothness parameters and the number of participating devices), and two smoothness-adaptive strategies; one based on the notion of gradient diversity (FedExProx-GraDS), and the other one based on the stochastic Polyak stepsize (FedExProx-StoPS). Our theory is corroborated with carefully constructed numerical experiments.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95940"} +{"video_file": "FwxOHl0BEl_39025191.mp4", "openreview_id": "FwxOHl0BEl", "slideslive_id": 39025191, "venue": "nips2024", "title": "Neural Network Reparametrization for Accelerated Optimization in Molecular Simulations", "status": "Poster", "keywords": "coarse-graining;molecular dynamics;protein-folding;repametrization;hessian;graph neural networks", "tldr": "We introduces a neural net reparametrization technique that significantly enhances the efficiency and accuracy of energy minimization in molecular simulations", "abstract": "We propose a novel approach to molecular simulations using neural network reparametrization, which offers a flexible alternative to traditional coarse-graining methods. Unlike conventional techniques that strictly reduce degrees of freedom, the complexity of the system can be adjusted in our model, sometimes increasing it to simplify the optimization process. Our approach also maintains continuous access to fine-grained modes and eliminates the need for force-matching, enhancing both the efficiency and accuracy of energy minimization. Importantly, our framework allows for the use of potentially arbitrary neural networks (e.g., Graph Neural Networks (GNN)) to perform the reparametrization, incorporating CG modes as needed. In fact, our experiments using very weak molecular forces (Lennard-Jones potential) the GNN-based model is the sole model to find the correct configuration. Similarly, in protein-folding scenarios, our GNN-based CG method consistently outperforms traditional optimization methods. 
It not only recovers the target structures more accurately but also achieves faster convergence to the deepest energy states. This work demonstrates significant advancements in molecular simulations by optimizing energy minimization and convergence speeds, offering a new, efficient framework for simulating complex molecular systems.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/95938"}
{"video_file": "G0LfcMiRkc_39026568.mp4", "openreview_id": "G0LfcMiRkc", "slideslive_id": 39026568, "venue": "nips2024", "title": "Linguistic Collapse: Neural Collapse in (Large) Language Models", "status": "Poster", "keywords": "neural collapse;uniformity;large language models;LLM;GPT;language modeling;geometry;unconstrained features;generative model;transformer;attention;causal;autoregressive", "tldr": "Analysis of the development of neural collapse in (large) language models and its relationship with generalization.", "abstract": "Neural collapse ($\\mathcal{NC}$) is a phenomenon observed in classification tasks where top-layer representations collapse into their class means, which become equinorm, equiangular and aligned with the classifiers. These behaviors -- associated with generalization and robustness -- would manifest under specific conditions: models are trained towards zero loss, with noise-free labels belonging to balanced classes, which do not outnumber the model's hidden dimension. Recent studies have explored $\\mathcal{NC}$ in the absence of one or more of these conditions to extend and capitalize on the associated benefits of ideal geometries. Language modeling presents a curious frontier, as \\textit{training by token prediction} constitutes a classification task where none of the conditions exist: the vocabulary is imbalanced and exceeds the embedding dimension; different tokens might correspond to similar contextual embeddings; and large language models (LLMs) in particular are typically only trained for a few epochs. This paper empirically investigates the impact of scaling the architectures and training of causal language models (CLMs) on their progression towards $\\mathcal{NC}$. We find that $\\mathcal{NC}$ properties that develop with scale (and regularization) are linked to generalization. Moreover, there is evidence of some relationship between $\\mathcal{NC}$ and generalization independent of scale. Our work thereby underscores the generality of $\\mathcal{NC}$ as it extends to the novel and more challenging setting of language modeling. Downstream, we seek to inspire further research on the phenomenon to deepen our understanding of LLMs -- and neural networks at large -- and improve existing architectures based on $\\mathcal{NC}$-related properties. Our code is hosted on GitHub: https://github.com/rhubarbwu/linguistic-collapse.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/95936"}
{"video_file": "G0v0TxX01N_39024885.mp4", "openreview_id": "G0v0TxX01N", "slideslive_id": 39024885, "venue": "nips2024", "title": "Diffusion of Thought: Chain-of-Thought Reasoning in Diffusion Language Models", "status": "Poster", "keywords": "text diffusion model;mathematical reasoning", "tldr": "We propose Diffusion of Thought (DoT), an inherent chain-of-thought method tailored for diffusion models.", "abstract": "Recently, diffusion models have garnered significant interest in the field of text processing due to their many potential advantages compared to conventional autoregressive models. 
In this work, we propose Diffusion-of-Thought (DoT), a novel approach that integrates diffusion models with Chain-of-Thought, a well-established technique for improving the reasoning ability of autoregressive language models. In contrast to autoregressive language models that make decisions in a left-to-right, token-by-token manner, DoT allows reasoning steps to diffuse over time through a diffusion language model and offers greater flexibility in trading-off computation for reasoning performance. Our experimental results demonstrate the effectiveness of DoT in multi-digit multiplication, boolean logic, and grade school math problems. In addition to that, DoT showcases promising self-correction abilities and benefits from existing reasoning-enhancing techniques like self-consistency decoding. Our findings contribute to the understanding and development of reasoning with diffusion language models.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95935"} +{"video_file": "G0yxFmP87g_39025522.mp4", "openreview_id": "G0yxFmP87g", "slideslive_id": 39025522, "venue": "nips2024", "title": "AmoebaLLM: Constructing Any-Shape Large Language Models for Efficient and Instant Deployment", "status": "Poster", "keywords": "Efficient Large Language Models;Model Compression", "tldr": "We propose AmoebaLLM, a novel framework designed to enable the instant derivation of LLM subnets of arbitrary shapes, which achieve the accuracy-efficiency frontier and can be extracted immediately after a one-time fine-tuning.", "abstract": "Motivated by the transformative capabilities of large language models (LLMs) across various natural language tasks, there has been a growing demand to deploy these models effectively across diverse real-world applications and platforms. However, the challenge of efficiently deploying LLMs has become increasingly pronounced due to the varying application-specific performance requirements and the rapid evolution of computational platforms, which feature diverse resource constraints and deployment flows. These varying requirements necessitate LLMs that can adapt their structures (depth and width) for optimal efficiency across different platforms and application specifications. To address this critical gap, we propose AmoebaLLM, a novel framework designed to enable the instant derivation of LLM subnets of arbitrary shapes, which achieve the accuracy-efficiency frontier and can be extracted immediately after a one-time fine-tuning. In this way, AmoebaLLM significantly facilitates rapid deployment tailored to various platforms and applications. Specifically, AmoebaLLM integrates three innovative components: (1) a knowledge-preserving subnet selection strategy that features a dynamic-programming approach for depth shrinking and an importance-driven method for width shrinking; (2) a shape-aware mixture of LoRAs to mitigate gradient conflicts among subnets during fine-tuning; and (3) an in-place distillation scheme with loss-magnitude balancing as the fine-tuning objective. 
Extensive experiments validate that AmoebaLLM not only sets new standards in LLM adaptability but also successfully delivers subnets that achieve state-of-the-art trade-offs between accuracy and efficiency.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95934"} +{"video_file": "G24fOpC3JE_39024583.mp4", "openreview_id": "G24fOpC3JE", "slideslive_id": 39024583, "venue": "nips2024", "title": "Continuous Temporal Domain Generalization", "status": "Poster", "keywords": "Domain Generalization;Temporal Domain Generalization;Continuous Dynamics;Koopman Operator;Concept Drift;Neural ODEs", "tldr": "This work defines Continuous Temporal Domain Generalization. By leveraging a Koopman operator-driven continuous system, the paper effectively captures the essence of temporal generalization through the synchronization of model and data dynamics.", "abstract": "Temporal Domain Generalization (TDG) addresses the challenge of training predictive models under temporally varying data distributions. Traditional TDG approaches typically focus on domain data collected at fixed, discrete time intervals, which limits their capability to capture the inherent dynamics within continuous-evolving and irregularly-observed temporal domains. To overcome this, this work formalizes the concept of Continuous Temporal Domain Generalization (CTDG), where domain data are derived from continuous times and are collected at arbitrary times. CTDG tackles critical challenges including: 1) Characterizing the continuous dynamics of both data and models, 2) Learning complex high-dimensional nonlinear dynamics, and 3) Optimizing and controlling the generalization across continuous temporal domains. To address them, we propose a Koopman operator-driven continuous temporal domain generalization (Koodos) framework. We formulate the problem within a continuous dynamic system and leverage the Koopman theory to learn the underlying dynamics; the framework is further enhanced with a comprehensive optimization strategy equipped with analysis and control driven by prior knowledge of the dynamics patterns. Extensive experiments demonstrate the effectiveness and efficiency of our approach. The code can be found at: https://github.com/Zekun-Cai/Koodos.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/95933"} +{"video_file": "G4vFNmraxj_39028064.mp4", "openreview_id": "G4vFNmraxj", "slideslive_id": 39028064, "venue": "nips2024", "title": "Hybrid Generative AI for De Novo Design of Co-Crystals with Enhanced Tabletability", "status": "Poster", "keywords": "Co-crystals;Tabletability;Generative Design;Evolutionary Optimization", "tldr": "We introduce GEMCODE, a new automated screening pipeline leveraging hybridization of generative AI and evolutionary optimization for de novo co-crystal design with enhanced tabletability properties.", "abstract": "Co-crystallization is an accessible way to control physicochemical characteristics of organic crystals, which finds many biomedical applications. In this work, we present Generative Method for Co-crystal Design (GEMCODE), a novel pipeline for automated co-crystal screening based on the hybridization of deep generative models and evolutionary optimization for broader exploration of the target chemical space. GEMCODE enables fast de novo co-crystal design with target tabletability profiles, which is crucial for the development of pharmaceuticals. 
With a series of experimental studies highlighting validation and discovery cases, we show that GEMCODE is effective even under realistic computational constraints. Furthermore, we explore the potential of language models in generating co-crystals. Finally, we present numerous previously unknown co-crystals predicted by GEMCODE and discuss its potential in accelerating drug development.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/95931"}
{"video_file": "G5lMFOtFHa_39027936.mp4", "openreview_id": "G5lMFOtFHa", "slideslive_id": 39027936, "venue": "nips2024", "title": "Where Do Large Learning Rates Lead Us?", "status": "Poster", "keywords": "neural network training;large learning rate;generalization;loss landscape;feature learning", "tldr": "We show that only a narrow range of large LRs is beneficial for generalization and analyze it from the loss landscape and feature learning perspectives.", "abstract": "It is generally accepted that starting neural networks training with large learning rates (LRs) improves generalization. Following a line of research devoted to understanding this effect, we conduct an empirical study in a controlled setting focusing on two questions: 1) how large an initial LR is required for obtaining optimal quality, and 2) what are the key differences between models trained with different LRs? We discover that only a narrow range of initial LRs slightly above the convergence threshold lead to optimal results after fine-tuning with a small LR or weight averaging. By studying the local geometry of reached minima, we observe that using LRs from this optimal range allows for the optimization to locate a basin that only contains high-quality minima. Additionally, we show that these initial LRs result in a sparse set of learned features, with a clear focus on those most relevant for the task. In contrast, starting training with too small LRs leads to unstable minima and attempts to learn all features simultaneously, resulting in poor generalization. Conversely, using initial LRs that are too large fails to detect a basin with good solutions and extract meaningful patterns from the data.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/95929"}
{"video_file": "G7QS68ICPJ_39028089.mp4", "openreview_id": "G7QS68ICPJ", "slideslive_id": 39028089, "venue": "nips2024", "title": "Nimbus: Secure and Efficient Two-Party Inference for Transformers", "status": "Poster", "keywords": "Secure inference;Transformer;Multi-party computation;homomorphic encryption", "tldr": "We propose a 2-party computation framework to accelerate the secure inference of the Transformer model by optimizing the protocols of both linear and non-linear layers.", "abstract": "Transformer models have gained significant attention due to their power in machine learning tasks. Their extensive deployment has raised concerns about the potential leakage of sensitive information during inference. However, when applied to Transformers, existing approaches based on secure two-party computation (2PC) bring about twofold efficiency limitations: (1) resource-intensive matrix multiplications in linear layers, and (2) complex non-linear activation functions like GELU and Softmax. This work presents a new two-party inference framework Nimbus for Transformer models. 
Specifically, we propose a new 2PC paradigm to securely compute matrix multiplications based on an outer-product insight, which achieves 2.9\u00d7\u223c12.5\u00d7 performance improvements compared to the state-of-the-art (SOTA) protocol. Furthermore, through a new observation of utilizing the input distribution, we propose an approach of low-degree polynomial approximation for GELU and Softmax, which improves the performance of the SOTA polynomial approximation by 2.9\u00d7\u223c4.0\u00d7, where the average accuracy loss of our approach is 0.08% compared to the non-2PC inference without privacy. Compared with the SOTA two-party inference, Nimbus improves the end-to-end performance of BERT-base inference by 2.7\u00d7\u223c4.7\u00d7 across different network settings.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/95926"}
{"video_file": "G8aS48B9bm_39028777.mp4", "openreview_id": "G8aS48B9bm", "slideslive_id": 39028777, "venue": "nips2024", "title": "Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just Clip Gradient Differences", "status": "Poster", "keywords": "Byzantine robustness;distributed optimization;communication compression;non-convex optimization", "tldr": "In this work we introduce a novel approach to handling Byzantine workers in the partial participation regime, when they can form a majority.", "abstract": "Distributed learning has emerged as a leading paradigm for training large machine learning models. However, in real-world scenarios, participants may be unreliable or malicious, posing a significant threat to the integrity and accuracy of the trained models. Byzantine fault tolerance mechanisms have been proposed to address these issues, but they often assume full participation from all clients, which is not always practical due to the unavailability of some clients or communication constraints. In our work, we propose the first distributed method with client sampling and provable tolerance to Byzantine workers. The key idea behind the developed method is the use of gradient clipping to control stochastic gradient differences in recursive variance reduction. This allows us to bound the potential harm caused by Byzantine workers, even during iterations when all sampled clients are Byzantine. Furthermore, we incorporate communication compression into the method to enhance communication efficiency. Under general assumptions, we prove convergence rates for the proposed method that match the existing state-of-the-art (SOTA) theoretical results. 
We also propose a heuristic on how to adjust any Byzantine-robust method to a partial participation scenario via clipping.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95924"}
{"video_file": "G99BSV9pt5_39026878.mp4", "openreview_id": "G99BSV9pt5", "slideslive_id": 39026878, "venue": "nips2024", "title": "Relational Concept Bottleneck Models", "status": "Poster", "keywords": "Concept Bottleneck Models;Neuro-symbolic Models;Message Passing;Logic-based explanations", "tldr": "This work presents R-CBMs, a family of concept bottleneck models designed for relational tasks, which match the generalization performance of existing relational black-boxes, while supporting the generation of quantified concept-based explanations.", "abstract": "The design of interpretable deep learning models working in relational domains poses an open challenge: interpretable deep learning methods, such as Concept Bottleneck Models (CBMs), are not designed to solve relational problems, while relational deep learning models, such as Graph Neural Networks (GNNs), are not as interpretable as CBMs. To overcome these limitations, we propose Relational Concept Bottleneck Models (R-CBMs), a family of relational deep learning methods providing interpretable task predictions. As special cases, we show that R-CBMs are capable of both representing standard CBMs and message passing GNNs. To evaluate the effectiveness and versatility of these models, we designed a class of experimental problems, ranging from image classification to link prediction in knowledge graphs. In particular we show that R-CBMs (i) match generalization performance of existing relational black-boxes, (ii) support the generation of quantified concept-based explanations, (iii) effectively respond to test-time interventions, and (iv) withstand demanding settings including out-of-distribution scenarios, limited training data regimes, and scarce concept supervisions.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/95923"}
{"video_file": "G9OJUgKo4B_39025264.mp4", "openreview_id": "G9OJUgKo4B", "slideslive_id": 39025264, "venue": "nips2024", "title": "Knowledge Composition using Task Vectors with Learned Anisotropic Scaling", "status": "Poster", "keywords": "task vectors;task arithmetic;transfer learning;few-shot learning;test-time adaptation;parameter-efficient fine-tuning", "tldr": "We present a learning algorithm where knowledge from different domains can be combined or transferred using task vectors with learned anisotropic scaling.", "abstract": "Pre-trained models produce strong generic representations that can be adapted via fine-tuning on specialised datasets. The learned weight difference relative to the pre-trained model, known as a task vector, characterises the direction and stride of fine-tuning that enables the model to capture these specialised representations. The significance of task vectors is such that simple arithmetic operations on them can be used to combine diverse representations from different domains. This paper builds on these properties of task vectors and aims to answer (1) whether components of task vectors, particularly parameter blocks, exhibit similar characteristics, and (2) how such blocks can be used to enhance knowledge composition and transfer. To this end, we introduce aTLAS, an algorithm that linearly combines parameter blocks with different learned coefficients, resulting in anisotropic scaling at the task vector level. 
We show that such linear combinations explicitly exploit the low intrinsic dimensionality of pre-trained models, with only a few coefficients being the learnable parameters. Furthermore, composition of parameter blocks enables modular learning that effectively leverages the already learned representations, thereby reducing the dependency on large amounts of data. We demonstrate the effectiveness of our method in task arithmetic, few-shot recognition and test-time adaptation, with supervised or unsupervised objectives. In particular, we show that (1) learned anisotropic scaling allows task vectors to be more disentangled, causing less interference in composition; (2) task vector composition excels with scarce or no labelled data and is less prone to domain shift, thus leading to better generalisability; (3) mixing the most informative parameter blocks across different task vectors prior to training can reduce the memory footprint and improve the flexibility of knowledge transfer. Moreover, we show the potential of aTLAS as a parameter-efficient fine-tuning method, particularly with less data, and demonstrate that it can be easily scaled up for higher performance.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95922"}
{"video_file": "GCmmy4At6i_39027132.mp4", "openreview_id": "GCmmy4At6i", "slideslive_id": 39027132, "venue": "nips2024", "title": "Lightweight Frequency Masker for Cross-Domain Few-Shot Semantic Segmentation", "status": "Poster", "keywords": "cross-domain few-shot segmentation;frequency component;feature disentanglement;channel attention", "tldr": "We find simply filtering frequency components significantly improves CD-FSS performance. We delve into this for an interpretation, and further propose a lightweight frequency masker, importing only 2.5% parameters but improving performance by over 11% on average.", "abstract": "Cross-domain few-shot segmentation (CD-FSS) is proposed to first pre-train the model on a large-scale source-domain dataset, and then transfer the model to data-scarce target-domain datasets for pixel-level segmentation. The significant domain gap between the source and target datasets leads to a sharp decline in the performance of existing few-shot segmentation (FSS) methods in cross-domain scenarios. In this work, we discover an intriguing phenomenon: simply filtering different frequency components for target domains can lead to a significant performance improvement, sometimes even as high as 14% mIoU. Then, we delve into this phenomenon for an interpretation, and find such improvements stem from the reduced inter-channel correlation in feature maps, which benefits CD-FSS with enhanced robustness against domain gaps and larger activated regions for segmentation. Based on this, we propose a lightweight frequency masker, which further reduces channel correlations by an Amplitude-Phase Masker (APM) module and an Adaptive Channel Phase Attention (ACPA) module. 
Notably, APM introduces only 0.01% additional parameters but improves the average performance by over 10%, and ACPA imports only 2.5% parameters but further improves the performance by over 1.5%, which significantly surpasses the state-of-the-art CD-FSS methods.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95919"} +{"video_file": "GDNZajKrML_39027682.mp4", "openreview_id": "GDNZajKrML", "slideslive_id": 39027682, "venue": "nips2024", "title": "GL-NeRF: Gauss-Laguerre Quadrature Enables Training-Free NeRF Acceleration", "status": "Poster", "keywords": "Neural Radiance Fields;Real-Time NeRF;Gauss-Laguerre Quadrature;Neural Rendering", "tldr": "In this work, we propose GL-NeRF, a new perspective of computing volume rendering with the Gauss-Laguerre quadrature.", "abstract": "Volume rendering in neural radiance fields is inherently time-consuming due to the large number of MLP calls on the points sampled per ray. Previous works would address this issue by introducing new neural networks or data structures. In this work, we propose GL-NeRF, a new perspective of computing volume rendering with the Gauss-Laguerre quadrature. GL-NeRF significantly reduces the number of MLP calls needed for volume rendering, introducing no additional data structures or neural networks. The simple formulation makes adopting GL-NeRF in any NeRF model possible. In the paper, we first justify the use of the Gauss-Laguerre quadrature and then demonstrate this plug-and-play attribute by implementing it in two different NeRF models. We show that with a minimal drop in performance, GL-NeRF can significantly reduce the number of MLP calls, showing the potential to speed up any NeRF model. Code can be found in project page https://silongyong.github.io/GL-NeRF_project_page/.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95918"} +{"video_file": "GJMYvWzjE1_39024555.mp4", "openreview_id": "GJMYvWzjE1", "slideslive_id": 39024555, "venue": "nips2024", "title": "Language Models as Hierarchy Encoders", "status": "Poster", "keywords": "Language Models;Transformer Encoders;Hierarchy Encoders;Hyperbolic Embedding", "tldr": "This work introduces a novel approach to re-train transformer encoder-based language models as explicit hierarchy encoders, leveraging the expansive nature of hyperbolic geometry.", "abstract": "Interpreting hierarchical structures latent in language is a key limitation of current language models (LMs). While previous research has implicitly leveraged these hierarchies to enhance LMs, approaches for their explicit encoding are yet to be explored. To address this, we introduce a novel approach to re-train transformer encoder-based LMs as Hierarchy Transformer encoders (HiTs), harnessing the expansive nature of hyperbolic space. Our method situates the output embedding space of pre-trained LMs within a Poincar\u00e9 ball with a curvature that adapts to the embedding dimension, followed by re-training on hyperbolic clustering and centripetal losses. These losses are designed to effectively cluster related entities (input as texts) and organise them hierarchically. We evaluate HiTs against pre-trained LMs, standard fine-tuned LMs, and several hyperbolic embedding baselines, focusing on their capabilities in simulating transitive inference, predicting subsumptions, and transferring knowledge across hierarchies. 
The results demonstrate that HiTs consistently outperform all baselines in these tasks, underscoring the effectiveness and transferability of our re-trained hierarchy encoders.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95913"} +{"video_file": "GLUIuli3Sm_39024487.mp4", "openreview_id": "GLUIuli3Sm", "slideslive_id": 39024487, "venue": "nips2024", "title": "On the Convergence of Loss and Uncertainty-based Active Learning Algorithms", "status": "Poster", "keywords": "active learning;uncertainty sampling;loss-based sampling", "tldr": "We consider the convergence rates of loss and uncertainty-based active learning algorithms under various assumptions", "abstract": "We investigate the convergence rates and data sample sizes required for training a machine learning model using a stochastic gradient descent (SGD) algorithm, where data points are sampled based on either their loss value or uncertainty value. These training methods are particularly relevant for active learning and data subset selection problems. For SGD with a constant step size update, we present convergence results for linear classifiers and linearly separable datasets using squared hinge loss and similar training loss functions. Additionally, we extend our analysis to more general classifiers and datasets, considering a wide range of loss-based sampling strategies and smooth convex training loss functions. We propose a novel algorithm called Adaptive-Weight Sampling (AWS) that utilizes SGD with an adaptive step size that achieves stochastic Polyak's step size in expectation. We establish convergence rate results for AWS for smooth convex training loss functions. Our numerical experiments demonstrate the efficiency of AWS on various datasets by using either exact or estimated loss values.", "primary_area": "active_learning", "site": "https://neurips.cc/virtual/2024/poster/95912"} +{"video_file": "GN2qbxZlni_39025507.mp4", "openreview_id": "GN2qbxZlni", "slideslive_id": 39025507, "venue": "nips2024", "title": "MR-Ben: A Meta-Reasoning Benchmark for Evaluating System-2 Thinking in LLMs", "status": "Poster", "keywords": "Large Language Models;Reasoning;System-2;Slow Thinking;Resource and Evaluation;Analysis and Interpretability", "tldr": "MR-Ben is a comprehensive process-oriented evaluation benchmark comprising 5,975 questions that cover various subjects, ranging from natural science to coding, challenging LLMs to score the reasoning steps of candidate solutions.", "abstract": "Large language models (LLMs) have shown increasing capability in problem-solving and decision-making, largely based on the step-by-step chain-of-thought reasoning processes. However, evaluating these reasoning abilities has become increasingly challenging. Existing outcome-based benchmarks are beginning to saturate, becoming less effective in tracking meaningful progress. To address this, we present a process-based benchmark MR-Ben that demands a meta-reasoning skill, where LMs are asked to locate and analyse potential errors in automatically generated reasoning steps. Our meta-reasoning paradigm is especially suited for system-2 slow thinking, mirroring the human cognitive process of carefully examining assumptions, conditions, calculations, and logic to identify mistakes. MR-Ben comprises 5,975 questions curated by human experts across a wide range of subjects, including physics, chemistry, logic, coding, and more. 
Through our designed metrics for assessing meta-reasoning on this benchmark, we identify interesting limitations and weaknesses of current LLMs (open-source and closed-source models). For example, with models like the o1 series from OpenAI demonstrating strong performance by effectively scrutinizing the solution space, many other state-of-the-art models fall significantly behind on MR-Ben, exposing potential shortcomings in their training strategies and inference methodologies.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/95909"} +{"video_file": "GNhrGRCerd_39028083.mp4", "openreview_id": "GNhrGRCerd", "slideslive_id": 39028083, "venue": "nips2024", "title": "Trap-MID: Trapdoor-based Defense against Model Inversion Attacks", "status": "Poster", "keywords": "model inversion attacks;privacy;defense;trapdoor;backdoor", "tldr": "We propose a trapdoor-based defense to preserve privacy by misleading Model Inversion attacks.", "abstract": "Model Inversion (MI) attacks pose a significant threat to the privacy of Deep Neural Networks by recovering training data distribution from well-trained models. While existing defenses often rely on regularization techniques to reduce information leakage, they remain vulnerable to recent attacks. In this paper, we propose the Trapdoor-based Model Inversion Defense (Trap-MID) to mislead MI attacks. A trapdoor is integrated into the model to predict a specific label when the input is injected with the corresponding trigger. Consequently, this trapdoor information serves as the \"shortcut\" for MI attacks, leading them to extract trapdoor triggers rather than private data. We provide theoretical insights into the impacts of trapdoor's effectiveness and naturalness on deceiving MI attacks. In addition, empirical experiments demonstrate the state-of-the-art defense performance of Trap-MID against various MI attacks without the requirements for extra data or large computational overhead. Our source code is publicly available at https://github.com/ntuaislab/Trap-MID.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/95907"} +{"video_file": "GOgKhunkfw_39026436.mp4", "openreview_id": "GOgKhunkfw", "slideslive_id": 39026436, "venue": "nips2024", "title": "Simulation-Free Training of Neural ODEs on Paired Data", "status": "Poster", "keywords": "Neural ODE;simulation-free training;flow matching", "tldr": "We propose simulation-free training method for Neural ODEs by adopting flow matching objective with learnable embeddings.", "abstract": "In this work, we investigate a method for simulation-free training of Neural Ordinary Differential Equations (NODEs) for learning deterministic mappings between paired data. Despite the analogy of NODEs as continuous-depth residual networks, their application in typical supervised learning tasks has not been popular, mainly due to the large number of function evaluations required by ODE solvers and numerical instability in gradient estimation. To alleviate this problem, we employ the flow matching framework for simulation-free training of NODEs, which directly regresses the parameterized dynamics function to a predefined target velocity field. Contrary to generative tasks, however, we show that applying flow matching directly between paired data can often lead to an ill-defined flow that breaks the coupling of the data pairs (e.g., due to crossing trajectories). 
We propose a simple extension that applies flow matching in the embedding space of data pairs, where the embeddings are learned jointly with the dynamic function to ensure the validity of the flow which is also easier to learn. We demonstrate the effectiveness of our method on both regression and classification tasks, where our method outperforms existing NODEs with a significantly lower number of function evaluations. The code is available at https://github.com/seminkim/simulation-free-node.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95906"} +{"video_file": "GQNvvQquO0_39025096.mp4", "openreview_id": "GQNvvQquO0", "slideslive_id": 39025096, "venue": "nips2024", "title": "Differentially Private Set Representations", "status": "Poster", "keywords": "Differential Privacy;Data Structure", "tldr": "This paper studies the problem of differentially private mechanism for representing sparse sets.", "abstract": "We study the problem of differentially private (DP) mechanisms for representing sets of size $k$ from a large universe. Our first construction creates $(\\epsilon,\\delta)$-DP representations with error probability of $1/(e^\\epsilon + 1)$ using space at most $1.05 k \\epsilon \\cdot \\log(e)$ bits where the time to construct a representation is $O(k \\log(1/\\delta))$ while decoding time is $O(\\log(1/\\delta))$. We also present a second algorithm for pure $\\epsilon$-DP representations with the same error using space at most $k \\epsilon \\cdot \\log(e)$ bits, but requiring large decoding times. Our algorithms match the lower bounds on privacy-utility trade-offs (including constants but ignoring $\\delta$ factors) and we also present a new space lower bound matching our constructions up to small constant factors. To obtain our results, we design a new approach embedding sets into random linear systems deviating from most prior approaches that inject noise into non-private solutions.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/95905"} +{"video_file": "GRmQjLzaPM_39025586.mp4", "openreview_id": "GRmQjLzaPM", "slideslive_id": 39025586, "venue": "nips2024", "title": "BehaviorGPT: Smart Agent Simulation for Autonomous Driving with Next-Patch Prediction", "status": "Poster", "keywords": "Multi-Agent Systems;Transformers;Generative Models;Autonomous Driving", "tldr": "A fully autoregressive Transformer with next-patch prediction mechanism for multi-agent behavior simulation in autonomous driving", "abstract": "Simulating realistic behaviors of traffic agents is pivotal for efficiently validating the safety of autonomous driving systems. Existing data-driven simulators primarily use an encoder-decoder architecture to encode the historical trajectories before decoding the future. However, the heterogeneity between encoders and decoders complicates the models, and the manual separation of historical and future trajectories leads to low data utilization. Given these limitations, we propose BehaviorGPT, a homogeneous and fully autoregressive Transformer designed to simulate the sequential behavior of multiple agents. Crucially, our approach discards the traditional separation between \"history\" and \"future\" by modeling each time step as the \"current\" one for motion generation, leading to a simpler, more parameter- and data-efficient agent simulator. 
We further introduce the Next-Patch Prediction Paradigm (NP3) to mitigate the negative effects of autoregressive modeling, in which models are trained to reason at the patch level of trajectories and capture long-range spatial-temporal interactions. Despite having merely 3M model parameters, BehaviorGPT won first place in the 2024 Waymo Open Sim Agents Challenge with a realism score of 0.7473 and a minADE score of 1.4147, demonstrating its exceptional performance in traffic agent simulation.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/95903"} +{"video_file": "GVgRbz8MvG_39027448.mp4", "openreview_id": "GVgRbz8MvG", "slideslive_id": 39027448, "venue": "nips2024", "title": "Nonparametric Evaluation of Noisy ICA Solutions", "status": "Poster", "keywords": "ICA;Independent Component Analysis;Kurtosis;Characteristic Function;Cumulant Generating Function;Blind Signal Separation;Convergence", "tldr": "We propose a non-parametric score to evaluate algorithms for noisy ICA and consequently propose a meta-algorithm which utilizes the score to choose the best algorithm for any given data distribution.", "abstract": "Independent Component Analysis (ICA) was introduced in the 1980's as a model for Blind Source Separation (BSS), which refers to the process of recovering the sources underlying a mixture of signals, with little knowledge about the source signals or the mixing process. While there are many sophisticated algorithms for estimation, different methods have different shortcomings. In this paper, we develop a nonparametric score to adaptively pick the right algorithm for ICA with arbitrary Gaussian noise. The novelty of this score stems from the fact that it just assumes a finite second moment of the data and uses the characteristic function to evaluate the quality of the estimated mixing matrix without any knowledge of the parameters of the noise distribution. In addition, we propose some new contrast functions and algorithms that enjoy the same fast computability as existing algorithms like FASTICA and JADE but work in domains where the former may fail. While these also may have weaknesses, our proposed diagnostic, as shown by our simulations, can remedy them. Finally, we propose a theoretical framework to analyze the local and global convergence properties of our algorithms.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95900"} +{"video_file": "GYd5AfZaor_39026941.mp4", "openreview_id": "GYd5AfZaor", "slideslive_id": 39026941, "venue": "nips2024", "title": "Sample Selection via Contrastive Fragmentation for Noisy Label Regression", "status": "Poster", "keywords": "Noisy Labels;Regression", "tldr": "To address the problem of regression with noisy labels, we propose the Contrastive Fragmentation framework to select clean samples by Mixture of Neighboring Fragments and curate four benchmark datasets along with a novel metric, Error Residual Ratio.", "abstract": "As with many other problems, real-world regression is plagued by the presence of noisy labels, an inevitable issue that demands our attention. Fortunately, much real-world data often exhibits an intrinsic property of continuously ordered correlations between labels and features, where data points with similar labels are also represented with closely related features. In response, we propose a novel approach named ConFrag, where we collectively model the regression data by transforming them into disjoint yet contrasting fragmentation pairs. 
This enables the training of more distinctive representations, enhancing the ability to select clean samples. Our ConFrag framework leverages a mixture of neighboring fragments to discern noisy labels through neighborhood agreement among expert feature extractors. We extensively perform experiments on four newly curated benchmark datasets of diverse domains, including age prediction, price prediction, and music production year estimation. We also introduce a metric called Error Residual Ratio (ERR) to better account for varying degrees of label noise. Our approach consistently outperforms fourteen state-of-the-art baselines, being robust against symmetric and random Gaussian label noise.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95898"} +{"video_file": "Gb0mXhn5h3_39026869.mp4", "openreview_id": "Gb0mXhn5h3", "slideslive_id": 39026869, "venue": "nips2024", "title": "MiSO: Optimizing brain stimulation to create neural activity states", "status": "Poster", "keywords": "closed-loop optimization;microstimulation;neural population activity;dimensionality reduction;latent variable models;reinforcement learning", "tldr": "MiSO (MicroStimulation Optimization): a closed-loop stimulation framework to drive neural population activity toward specified states by optimizing over a large stimulation parameter space.", "abstract": "Brain stimulation has the potential to create desired neural population activity states. However, it is challenging to search the large space of stimulation parameters, for example, selecting which subset of electrodes to be used for stimulation. In this scenario, creating a model that maps the configuration of stimulation parameters to the brain\u2019s response can be beneficial. Training such an expansive model usually requires more stimulation-response samples than can be collected in a given experimental session. Furthermore, changes in the properties of the recorded activity over time can make it challenging to merge stimulation-response samples across sessions. To address these challenges, we propose MiSO (MicroStimulation Optimization), a closed-loop stimulation framework to drive neural population activity toward specified states by optimizing over a large stimulation parameter space. MiSO consists of three key components: 1) a neural activity alignment method to merge stimulation-response samples across sessions, 2) a statistical model trained on the merged samples to predict the brain's response to untested stimulation parameter configurations, and 3) an online optimization algorithm to adaptively update the stimulation parameter configuration based on the model's predictions. In this study, we implemented MiSO with a factor analysis (FA) based alignment method, a convolutional neural network (CNN), and an epsilon greedy optimization algorithm. We tested MiSO in closed-loop experiments using electrical microstimulation in the prefrontal cortex of a non-human primate. Guided by the CNN predictions, MiSO successfully searched amongst thousands of stimulation parameter configurations to drive the neural population activity toward specified states. 
More broadly, MiSO increases the clinical viability of neuromodulation technologies by enabling the use of many-fold larger stimulation parameter spaces.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/95894"} +{"video_file": "GbqzN9HiUC_39025112.mp4", "openreview_id": "GbqzN9HiUC", "slideslive_id": 39025112, "venue": "nips2024", "title": "Latent Learning Progress Drives Autonomous Goal Selection in Human Reinforcement Learning", "status": "Poster", "keywords": "goals;reinforcement learning;cognitive science;computational modeling;autotelic agents;curriculum development", "tldr": "A newly defined, \"latent\" form of learning progress provides a valuable signal for goal selection in human reinforcement learning", "abstract": "Humans are autotelic agents who learn by setting and pursuing their own goals. However, the precise mechanisms guiding human goal selection remain unclear. Learning progress, typically measured as the observed change in performance, can provide a valuable signal for goal selection in both humans and artificial agents. We hypothesize that human choices of goals may also be driven by latent learning progress, which humans can estimate through knowledge of their actions and the environment \u2013 even without experiencing immediate changes in performance. To test this hypothesis, we designed a hierarchical reinforcement learning task in which human participants (N = 175) repeatedly chose their own goals and learned goal-conditioned policies. Our behavioral and computational modeling results confirm the influence of latent learning progress on goal selection and uncover inter-individual differences, partially mediated by recognition of the task's hierarchical structure. By investigating the role of latent learning progress in human goal selection, we pave the way for more effective and personalized learning experiences as well as the advancement of more human-like autotelic machines.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/95893"} +{"video_file": "GkHXBasQwm_39026536.mp4", "openreview_id": "GkHXBasQwm", "slideslive_id": 39026536, "venue": "nips2024", "title": "HOI-Swap: Swapping Objects in Videos with Hand-Object Interaction Awareness", "status": "Poster", "keywords": "video editing;hand-object interaction", "tldr": "We present HOI-Swap, a diffusion-based video editing framework, that seamlessly swaps the in-contact object in videos given a reference object image.", "abstract": "We study the problem of precisely swapping objects in videos, with a focus on those interacted with by hands, given one user-provided reference object image. Despite the great advancements that diffusion models have made in video editing recently, these models often fall short in handling the intricacies of hand-object interactions (HOI), failing to produce realistic edits---especially when object swapping results in object shape or functionality changes. To bridge this gap, we present HOI-Swap, a novel diffusion-based video editing framework trained in a self-supervised manner. Designed in two stages, the first stage focuses on object swapping in a single frame with HOI awareness; the model learns to adjust the interaction patterns, such as the hand grasp, based on changes in the object's properties. 
The second stage extends the single-frame edit across the entire sequence; we achieve controllable motion alignment with the original video by: (1) warping a new sequence from the stage-I edited frame based on sampled motion points and (2) conditioning video generation on the warped sequence. Comprehensive qualitative and quantitative evaluations demonstrate that HOI-Swap significantly outperforms existing methods, delivering high-quality video edits with realistic HOIs.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95885"} +{"video_file": "GkJbXpd3wM_39027451.mp4", "openreview_id": "GkJbXpd3wM", "slideslive_id": 39027451, "venue": "nips2024", "title": "Active Set Ordering", "status": "Poster", "keywords": "active learning;Bayesian optimization;top-k set;contour line", "tldr": "This paper addresses the active set ordering problem, discovering set rankings through costly black-box evaluations.", "abstract": "In this paper, we formalize the active set ordering problem, which involves actively discovering a set of inputs based on their orderings determined by expensive evaluations of a blackbox function. We then propose the mean prediction (MP) algorithm and theoretically analyze it in terms of the regret of predicted pairwise orderings between inputs. Notably, as a special case of this framework, we can cast Bayesian optimization as an active set ordering problem by recognizing that maximizers can be identified solely by comparison rather than by precisely estimating the function evaluations. As a result, we are able to construct the popular Gaussian process upper confidence bound (GP-UCB) algorithm through the lens of ordering with several nuanced insights. We empirically validate the performance of our proposed solution using various synthetic functions and real-world datasets.", "primary_area": "active_learning", "site": "https://neurips.cc/virtual/2024/poster/95884"} +{"video_file": "GkzrVxs9LS_39025884.mp4", "openreview_id": "GkzrVxs9LS", "slideslive_id": 39025884, "venue": "nips2024", "title": "Learning Low-Rank Feature for Thorax Disease Classification", "status": "Poster", "keywords": "Low-Rank Feature Learning;Low Frequency Property;Thorax Disease Classification", "tldr": "We propose a novel Low-Rank Feature Learning (LRFL) method for thorax disease classification, which learns low-rank features of a neural network so as to effectively reduce the adverse effect of background or non-disease areas.", "abstract": "Deep neural networks, including Convolutional Neural Networks (CNNs) and Visual Transformers (ViT), have achieved stunning success in the medical image domain. We study thorax disease classification in this paper. Effective extraction of features for the disease areas is crucial for disease classification on radiographic images. While various neural architectures and training techniques, such as self-supervised learning with contrastive/restorative learning, have been employed for disease classification on radiographic images, there are no principled methods that can effectively reduce the adverse effect of noise and background or non-disease areas on the radiographic images for disease classification. To address this challenge, we propose a novel Low-Rank Feature Learning (LRFL) method in this paper, which is universally applicable to the training of all neural networks. 
The LRFL method is both empirically motivated by a Low Frequency Property (LFP) and theoretically motivated by our sharp generalization bound for neural networks with low-rank features. LFP not only widely exists in deep neural networks for generic machine learning but also exists in all the thorax medical datasets studied in this paper. In the empirical study, using a neural network such as a ViT or a CNN pre-trained on unlabeled chest X-rays by Masked Autoencoders (MAE), our novel LRFL method is applied on the pre-trained neural network and demonstrates better classification results in terms of both multi-class area under the receiver operating curve (mAUC) and classification accuracy than the current state-of-the-art. The code of LRFL is available at https://github.com/Statistical-Deep-Learning/LRFL.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/95883"}
{"video_file": "GlD9Juva5V_39027202.mp4", "openreview_id": "GlD9Juva5V", "slideslive_id": 39027202, "venue": "nips2024", "title": "SongCreator: Lyrics-based Universal Song Generation", "status": "Poster", "keywords": "Song generation;Song editing;Music generation;Language Model;Diffusion Model", "tldr": "This paper presents SongCreator, a song generation system that achieves competitive performances on eight tasks, particularly in lyrics-to-song and lyrics-to-vocals.", "abstract": "Music is an integral part of human culture, embodying human intelligence and creativity, of which songs compose an essential part. While various aspects of song generation have been explored by previous works, such as singing voice, vocal composition and instrumental arrangement, etc., generating songs with both vocals and accompaniment given lyrics remains a significant challenge, hindering the application of music generation models in the real world. In this light, we propose SongCreator, a song-generation system designed to tackle this challenge. The model features two novel designs: a meticulously designed dual-sequence language model (DSLM) to capture the information of vocals and accompaniment for song generation, and a series of attention mask strategies for DSLM, which allows our model to understand, generate and edit songs, making it suitable for various song-related generation tasks by utilizing specific attention masks. Extensive experiments demonstrate the effectiveness of SongCreator by achieving state-of-the-art or competitive performances on all eight tasks. Notably, it surpasses previous works by a large margin in lyrics-to-song and lyrics-to-vocals. Additionally, it is able to independently control the acoustic conditions of the vocals and accompaniment in the generated song through different audio prompts, exhibiting its potential applicability. Our samples are available at https://thuhcsi.github.io/SongCreator/.", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/95882"}
{"video_file": "GlXUxNI6TN_39026563.mp4", "openreview_id": "GlXUxNI6TN", "slideslive_id": 39026563, "venue": "nips2024", "title": "Abductive Reasoning in Logical Credal Networks", "status": "Poster", "keywords": "probabilistic logic;imprecise probabilities;MAP inference;search;message passing", "tldr": "The paper presents new algorithms for MAP and Marginal MAP inference in Logical Credal Networks.", "abstract": "Logical Credal Networks or LCNs were recently introduced as a powerful probabilistic logic framework for representing and reasoning with imprecise knowledge. 
Unlike many existing formalisms, LCNs have the ability to represent cycles and allow specifying marginal and conditional probability bounds on logic formulae which may be important in many realistic scenarios. Previous work on LCNs has focused exclusively on marginal inference, namely computing posterior lower and upper probability bounds on a query formula. In this paper, we explore abductive reasoning tasks such as solving MAP and Marginal MAP queries in LCNs given some evidence. We first formally define the MAP and Marginal MAP tasks for LCNs and subsequently show how to solve these tasks exactly using search-based approaches. We then propose several approximate schemes that allow us to scale MAP and Marginal MAP inference to larger problem instances. An extensive empirical evaluation demonstrates the effectiveness of our algorithms on both random LCN instances as well as LCNs derived from more realistic use-cases.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95881"} +{"video_file": "Glt37xoU7e_39027840.mp4", "openreview_id": "Glt37xoU7e", "slideslive_id": 39027840, "venue": "nips2024", "title": "Omnigrasp: Grasping Diverse Objects with Simulated Humanoids", "status": "Poster", "keywords": "Physics Simulation;Humanoid Control;Dexterous Manipulation", "tldr": "Full body and dexterous humanoid grasping and object manipulation.", "abstract": "We present a method for controlling a simulated humanoid to grasp an object and move it to follow an object's trajectory. Due to the challenges in controlling a humanoid with dexterous hands, prior methods often use a disembodied hand and only consider vertical lifts or short trajectories. This limited scope hampers their applicability for object manipulation required for animation and simulation. To close this gap, we learn a controller that can pick up a large number (>1200) of objects and carry them to follow randomly generated trajectories. Our key insight is to leverage a humanoid motion representation that provides human-like motor skills and significantly speeds up training. Using only simplistic reward, state, and object representations, our method shows favorable scalability on diverse objects and trajectories. For training, we do not need a dataset of paired full-body motion and object trajectories. At test time, we only require the object mesh and desired trajectories for grasping and transporting. To demonstrate the capabilities of our method, we show state-of-the-art success rates in following object trajectories and generalizing to unseen objects. Code and models will be released.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/95880"} +{"video_file": "GnaFrZRHPf_39028028.mp4", "openreview_id": "GnaFrZRHPf", "slideslive_id": 39028028, "venue": "nips2024", "title": "Adaptive Preference Scaling for Reinforcement Learning with Human Feedback", "status": "Poster", "keywords": "Reinforcement Learning from Human Feedback;Large Language Models;Alignment", "tldr": "We propose a new adaptive preference loss for RLHF.", "abstract": "Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values by learning rewards from human preference data. Due to various reasons, however, such data typically takes the form of rankings over pairs of trajectory segments, which fails to capture the varying strengths of preferences across different pairs. 
In this paper, we propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO), designed to address this uncertainty in preference strength. By incorporating an adaptive scaling parameter into the loss for each pair, our method increases the flexibility of the reward function. Specifically, it assigns small scaling parameters to pairs with ambiguous preferences, leading to more comparable rewards, and large scaling parameters to those with clear preferences for more distinct rewards. Computationally, our proposed loss function is strictly convex and univariate with respect to each scaling parameter, enabling its efficient optimization through a simple second-order algorithm. Our method is versatile and can be readily adapted to various preference optimization frameworks, including direct preference optimization (DPO). Our experiments with robotic control and natural language generation with large language models (LLMs) show that our method not only improves policy performance but also aligns reward function selection more closely with policy optimization, simplifying the hyperparameter tuning process.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95876"} +{"video_file": "GqefKjw1OR_39026397.mp4", "openreview_id": "GqefKjw1OR", "slideslive_id": 39026397, "venue": "nips2024", "title": "Sparse Bayesian Generative Modeling for Compressive Sensing", "status": "Poster", "keywords": "Compressive sensing;variational inference;sparse bayesian learning;variational autoencoder;Gaussian mixture model;generative model", "tldr": "In this work, a new type of sparsity inducing generative prior for compressive sensing is introduced.", "abstract": "This work addresses the fundamental linear inverse problem in compressive sensing (CS) by introducing a new type of regularizing generative prior. Our proposed method utilizes ideas from classical dictionary-based CS and, in particular, sparse Bayesian learning (SBL), to integrate a strong regularization towards sparse solutions. At the same time, by leveraging the notion of conditional Gaussianity, it also incorporates the adaptability from generative models to training data. However, unlike most state-of-the-art generative models, it is able to learn from a few compressed and noisy data samples and requires no optimization algorithm for solving the inverse problem. Additionally, similar to Dirichlet prior networks, our model parameterizes a conjugate prior enabling its application for uncertainty quantification. We support our approach theoretically through the concept of variational inference and validate it empirically using different types of compressible signals.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95874"} +{"video_file": "GrMczQGTlA_39027109.mp4", "openreview_id": "GrMczQGTlA", "slideslive_id": 39027109, "venue": "nips2024", "title": "Humanoid Locomotion as Next Token Prediction", "status": "Spotlight", "keywords": "Real-World Humanoid Control;Next Token Prediction", "tldr": "We cast real-world humanoid control as a next token prediction problem.", "abstract": "We cast real-world humanoid control as a next token prediction problem, akin to predicting the next word in language. Our model is a causal transformer trained via autoregressive prediction of sensorimotor sequences. 
To account for the multi-modal nature of the data, we perform prediction in a modality-aligned way, and for each input token predict the next token from the same modality. This general formulation enables us to leverage data with missing modalities, such as videos without actions. We train our model on a dataset of sequences from a prior neural network policy, a model-based controller, motion capture, and YouTube videos of humans. We show that our model enables a real humanoid robot to walk in San Francisco zero-shot. Our model can transfer to the real world even when trained on only 27 hours of walking data, and can generalize to commands not seen during training. These findings suggest a promising path toward learning challenging real-world control tasks by generative modeling of sensorimotor sequences.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/95871"}
{"video_file": "Grd7yzFm5V_39027189.mp4", "openreview_id": "Grd7yzFm5V", "slideslive_id": 39027189, "venue": "nips2024", "title": "Bayesian Domain Adaptation with Gaussian Mixture Domain-Indexing", "status": "Poster", "keywords": "domain adaptation;dynamic Gaussian mixture model;structural variational inference", "tldr": "A Bayesian Domain Adaptation with Gaussian Mixture Domain-Indexing algorithm aims to generate domain indexes with stronger interpretability.", "abstract": "Recent methods have been proposed to improve the performance of domain adaptation by inferring the domain index under an adversarial variational Bayesian framework, where the domain index is unavailable. However, existing methods typically assume that the global domain indices are sampled from a vanilla Gaussian prior, overlooking the inherent structures among different domains. To address this challenge, we propose a Bayesian Domain Adaptation with Gaussian Mixture Domain-Indexing (GMDI) algorithm. GMDI employs a Gaussian Mixture Model for domain indices, with the number of component distributions in the 'domain-themes' space adaptively determined by a Chinese Restaurant Process. By dynamically adjusting the mixtures at the domain indices level, GMDI significantly improves domain adaptation performance. Our theoretical analysis demonstrates that GMDI achieves a more stringent evidence lower bound, closer to the log-likelihood. For classification, GMDI outperforms all approaches, and surpasses the state-of-the-art method, VDI, by up to 3.4%, reaching 99.3%. For regression, GMDI reduces MSE by up to 21% (from 3.160 to 2.493), achieving the lowest errors among all methods.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95870"}
{"video_file": "GruuYVTGXV_39025071.mp4", "openreview_id": "GruuYVTGXV", "slideslive_id": 39025071, "venue": "nips2024", "title": "Dual Critic Reinforcement Learning under Partial Observability", "status": "Poster", "keywords": "Reinforcement Learning;Partial Observability;POMDP", "tldr": "This paper introduces DCRL, a framework for efficiency improvement and variance reduction under partial observability.", "abstract": "Partial observability in environments poses significant challenges that impede the formation of effective policies in reinforcement learning. Prior research has shown that borrowing the complete state information can enhance sample efficiency. This strategy, however, frequently encounters unstable learning with high variance in practical applications due to the over-reliance on complete information. 
This paper introduces DCRL, a Dual Critic Reinforcement Learning framework designed to adaptively harness full-state information during training to reduce variance for optimized online performance. In particular, DCRL incorporates two distinct critics: an oracle critic with access to complete state information and a standard critic functioning within the partially observable context. It innovates a synergistic strategy to meld the strengths of the oracle critic for efficiency improvement and the standard critic for variance reduction, featuring a novel mechanism for seamless transition and weighting between them. We theoretically prove that DCRL mitigates the learning variance while maintaining unbiasedness. Extensive experimental analyses across the Box2D and Box3D environments have verified DCRL's superior performance. The source code is available in the supplementary.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95869"} +{"video_file": "GuY0zB2xVU_39027722.mp4", "openreview_id": "GuY0zB2xVU", "slideslive_id": 39027722, "venue": "nips2024", "title": "Boosting Generalization in Parametric PDE Neural Solvers through Adaptive Conditioning", "status": "Poster", "keywords": "Deep Learning;Parametric PDEs;Meta-Learning;physics-aware", "tldr": "We propose a $1^st$ order low-rank meta-learning model for generalizing Neural PDE dynamics solvers to unseen dynamics. Our approach is general, it can incorporate knowledge from known physics if available.", "abstract": "Solving parametric partial differential equations (PDEs) presents significant challenges for data-driven methods due to the sensitivity of spatio-temporal dynamics to variations in PDE parameters. Machine learning approaches often struggle to capture this variability. To address this, data-driven approaches learn parametric PDEs by sampling a very large variety of trajectories with varying PDE parameters. We first show that incorporating conditioning mechanisms for learning parametric PDEs is essential and that among them, \\textit{adaptive conditioning}, allows stronger generalization. As existing adaptive conditioning methods do not scale well with respect to the number of parameters to adapt in the neural solver, we propose GEPS, a simple adaptation mechanism to boost GEneralization in Pde Solvers via a first-order optimization and low-rank rapid adaptation of a small set of context parameters. We demonstrate the versatility of our approach for both fully data-driven and for physics-aware neural solvers. Validation performed on a whole range of spatio-temporal forecasting problems demonstrates excellent performance for generalizing to unseen conditions including initial conditions, PDE coefficients, forcing terms and solution domain. Project page: https://geps-project.github.io", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/95866"} +{"video_file": "GvQU54uA7u_39028739.mp4", "openreview_id": "GvQU54uA7u", "slideslive_id": 39028739, "venue": "nips2024", "title": "Preference-based Pure Exploration", "status": "Poster", "keywords": "Pure exploration;multi-armed bandits;vector-valued rewards;preferences", "tldr": "A new algorithm for vectorial optimization under bandit feedback", "abstract": "We study the preference-based pure exploration problem for bandits with vector-valued rewards and a set of preferences imposed over them. 
Specifically, we aim to identify the most preferred policy over a set of arms according to the preferences induced on the reward vectors by an ordering cone $C$. First, to quantify the impact of preferences, we derive a novel lower bound on the sample complexity for identifying the most preferred arm with confidence level $1-\\delta$. Our lower bound shows how the geometry of the preferences and reward vectors changes the hardness of this problem. We further explicate this geometry for Gaussian distributions of rewards, and provide a convex reformulation of the lower bound solvable with linear programming. Then, we leverage this convex reformulation of the lower bound to design the Track and Stop with Preferences (TSwP) algorithm that identifies the most preferred policy. Finally, we derive a new concentration result for vector-valued rewards, and show that TSwP achieves a matching sample complexity upper bound.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/95864"} +{"video_file": "GxwnQ8sxkL_39024687.mp4", "openreview_id": "GxwnQ8sxkL", "slideslive_id": 39024687, "venue": "nips2024", "title": "Learning from Snapshots of Discrete and Continuous Data Streams", "status": "Poster", "keywords": "Learning Theory; Online Learning; Continuous Processes", "tldr": "This paper builds a theoretical framework for non-adaptive and adaptive algorithms to predict sets of functions from continuous data streams, showing how selective querying supports accurate learning, even with limited observability.", "abstract": "Imagine a smart camera trap selectively clicking pictures to understand animal movement patterns within a particular habitat. These \"snapshots\", or pieces of data captured from a data stream at adaptively chosen times, provide a glimpse of different animal movements unfolding through time. Learning a continuous-time process through snapshots, such as smart camera traps, is a central theme governing a wide array of online learning situations. In this paper, we adopt a learning-theoretic perspective in understanding the fundamental nature of learning different classes of functions from both discrete data streams and continuous data streams. In our first framework, the update-and-deploy setting, a learning algorithm discretely queries from a process to update a predictor designed to make predictions given as input the data stream. We construct a uniform sampling algorithm that can learn with bounded error any concept class with finite Littlestone dimension. Our second framework, known as the blind-prediction setting, consists of a learning algorithm generating predictions independently of observing the process, only engaging with the process when it chooses to make queries. Interestingly, we show a stark contrast in learnability where non-trivial concept classes are unlearnable. However, we show that adaptive learning algorithms are necessary to learn sets of time-dependent and data-dependent functions, called pattern classes, in either framework.
Finally, we develop a theory of pattern classes under discrete data streams for the blind-prediction setting.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95862"} +{"video_file": "H3at5y8VFW_39027933.mp4", "openreview_id": "H3at5y8VFW", "slideslive_id": 39027933, "venue": "nips2024", "title": "Self-Retrieval: End-to-End Information Retrieval with One Large Language Model", "status": "Poster", "keywords": "Large Language Model;Information Retrieval;Retrieval Augmented Generation", "tldr": "In this paper, we propose Self-Retrieval, an end-to-end LLM-driven information retrieval architecture that unifies indexing, retrieval, and reranking in a single LLM.", "abstract": "The rise of large language models (LLMs) has significantly transformed both the construction and application of information retrieval (IR) systems. However, current interactions between IR systems and LLMs remain limited, with LLMs merely serving as part of components within IR systems, and IR systems being constructed independently of LLMs. This separated architecture restricts knowledge sharing and deep collaboration between them. In this paper, we introduce Self-Retrieval, a novel end-to-end LLM-driven information retrieval architecture. Self-Retrieval unifies all essential IR functions within a single LLM, leveraging the inherent capabilities of LLMs throughout the IR process. Specifically, Self-Retrieval internalizes the retrieval corpus through self-supervised learning, transforms the retrieval process into sequential passage generation, and performs relevance assessment for reranking. Experimental results demonstrate that Self-Retrieval not only outperforms existing retrieval approaches by a significant margin, but also substantially enhances the performance of LLM-driven downstream applications like retrieval-augmented generation.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95858"} +{"video_file": "H5z0XqEX57_39028457.mp4", "openreview_id": "H5z0XqEX57", "slideslive_id": 39028457, "venue": "nips2024", "title": "Physics-informed Neural Networks for Functional Differential Equations: Cylindrical Approximation and Its Convergence Guarantees", "status": "Poster", "keywords": "Physics-informed neural network;Functional differential equation;Functional derivative", "tldr": "We propose the first learning scheme to solve functional differential equations, built on the physics-informed neural network.", "abstract": "We propose the first learning scheme for functional differential equations (FDEs). FDEs play a fundamental role in physics, mathematics, and optimal control. However, the numerical analysis of FDEs has faced challenges due to its unrealistic computational costs and has been a long standing problem over decades. Thus, numerical approximations of FDEs have been developed, but they often oversimplify the solutions. To tackle these two issues, we propose a hybrid approach combining physics-informed neural networks (PINNs) with the cylindrical approximation. The cylindrical approximation expands functions and functional derivatives with an orthonormal basis and transforms FDEs into high-dimensional PDEs. To validate the reliability of the cylindrical approximation for FDE applications, we prove the convergence theorems of approximated functional derivatives and solutions. Then, the derived high-dimensional PDEs are numerically solved with PINNs. 
Through the capabilities of PINNs, our approach can handle a broader class of functional derivatives more efficiently than conventional discretization-based methods, improving the scalability of the cylindrical approximation. As a proof of concept, we conduct experiments on two FDEs and demonstrate that our model can successfully achieve typical $L^1$ relative error orders of PINNs $\\sim 10^{-3}$. Overall, our work provides a strong backbone for physicists, mathematicians, and machine learning experts to analyze previously challenging FDEs, thereby democratizing their numerical analysis, which has received limited attention.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/95857"} +{"video_file": "H7SaaqfCUi_39028181.mp4", "openreview_id": "H7SaaqfCUi", "slideslive_id": 39028181, "venue": "nips2024", "title": "Learning the Infinitesimal Generator of Stochastic Diffusion Processes", "status": "Poster", "keywords": "Stochastic Diffusion Processes;Infinitesimal Generator;RKHS;non-asymptotic learning bounds", "tldr": "An energy-based formulation for learning infinitesimal generator of stochastic diffusions with theoretical learning bounds.", "abstract": "We address data-driven learning of the infinitesimal generator of stochastic diffusion processes, essential for understanding numerical simulations of natural and physical systems. The unbounded nature of the generator poses significant challenges, rendering conventional analysis techniques for Hilbert-Schmidt operators ineffective. To overcome this, we introduce a novel framework based on the energy functional for these stochastic processes. Our approach integrates physical priors through an energy-based risk metric in both full and partial knowledge settings. We evaluate the statistical performance of a reduced-rank estimator in reproducing kernel Hilbert spaces (RKHS) in the partial knowledge setting. Notably, our approach provides learning bounds independent of the state space dimension and ensures non-spurious spectral estimation. Additionally, we elucidate how the distortion between the intrinsic energy-induced metric of the stochastic diffusion and the RKHS metric used for generator estimation impacts the spectral learning bounds.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95855"} +{"video_file": "H7qVZ0Zu8E_39026885.mp4", "openreview_id": "H7qVZ0Zu8E", "slideslive_id": 39026885, "venue": "nips2024", "title": "Achieving Linear Convergence with Parameter-Free Algorithms in Decentralized Optimization", "status": "Poster", "keywords": "decentralized optimization;linear convergence;parameter free", "tldr": "Parameter Free Decentralized Optimization", "abstract": "This paper addresses the minimization of the sum of strongly convex, smooth functions over a network of agents without a centralized server. Existing decentralized algorithms require knowledge of functions and network parameters, such as the Lipschitz constant of the global gradient and/or network connectivity, for hyperparameter tuning. Agents usually cannot access this information, leading to conservative selections and slow convergence or divergence. This paper introduces a decentralized algorithm that eliminates the need for specific parameter tuning. Our approach employs an operator splitting technique with a novel variable metric, enabling a local backtracking line-search to adaptively select the stepsize without global information or extensive communications.
This results in favorable convergence guarantees and dependence on optimization and network parameters compared to existing nonadaptive methods. Notably, our method is the first adaptive decentralized algorithm that achieves linear convergence for strongly convex, smooth objectives. Preliminary numerical experiments support our theoretical findings, demonstrating superior performance in convergence speed and scalability.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95853"} +{"video_file": "HAcaANQNMK_39026811.mp4", "openreview_id": "HAcaANQNMK", "slideslive_id": 39026811, "venue": "nips2024", "title": "ESPACE: Dimensionality Reduction of Activations for Model Compression", "status": "Poster", "keywords": "Activation tensor decomposition;model compression;matrix factorization", "tldr": "We propose ESPACE, an LLM compression technique based on dimensionality reduction of activations which enables retraining LLMs with no loss in expressivity; while at inference, weight decomposition is obtained as a byproduct of matrix associativity.", "abstract": "We propose ESPACE, an LLM compression technique based on dimensionality reduction of activations. Unlike prior works on weight-centric tensor decomposition, ESPACE projects activations onto a pre-calibrated set of principal components. The activation-centrality of the approach enables retraining LLMs with no loss of expressivity; while at inference, weight decomposition is obtained as a byproduct of matrix multiplication associativity. Theoretical results on the construction of projection matrices with optimal computational accuracy are provided. Experimentally, we find ESPACE enables 50% compression of GPT3, Llama2, and Nemotron4 models with small accuracy degradation, as low as a 0.18 perplexity increase on GPT3-22B. At lower compression rates of 20% to 40%, ESPACE drives GPT3 models to outperforming their baseline, by up to a 0.38 decrease in perplexity for GPT3-8B. ESPACE also reduces GEMM execution time and prefill inference latency on existing hardware. Comparison with related works on compressing Llama2-7B via matrix factorization shows that ESPACE is a first step in advancing the state-of-the-art in tensor decomposition compression of LLMs.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/95852"} +{"video_file": "HCTikT7LS4_39025167.mp4", "openreview_id": "HCTikT7LS4", "slideslive_id": 39025167, "venue": "nips2024", "title": "Enhancing Robustness in Deep Reinforcement Learning: A Lyapunov Exponent Approach", "status": "Poster", "keywords": "Reinforcement Learning;Robust Reinforcement Learning;Stable Reinforcement Learning;Lyapunov Exponents;Chaos Theory", "tldr": "We show reinforcement learning controllers produce chaotic states and reward trajectories in continuous control environments. To mitigate this we propose a regularisation method which minimises chaos and improves the robustness of Dreamer V3.", "abstract": "Deep reinforcement learning agents achieve state-of-the-art performance in a wide range of simulated control tasks. However, successful applications to real-world problems remain limited. One reason for this dichotomy is because the learnt policies are not robust to observation noise or adversarial attacks. In this paper, we investigate the robustness of deep RL policies to a single small state perturbation in deterministic continuous control tasks. 
We demonstrate that RL policies can be deterministically chaotic, as small perturbations to the system state have a large impact on subsequent state and reward trajectories. This unstable non-linear behaviour has two consequences: first, inaccuracies in sensor readings, or adversarial attacks, can cause significant performance degradation; second, even policies that show robust performance in terms of rewards may have unpredictable behaviour in practice. These two facets of chaos in RL policies drastically restrict the application of deep RL to real-world problems. To address this issue, we propose an improvement on the successful Dreamer V3 architecture, implementing Maximal Lyapunov Exponent regularisation. This new approach reduces the chaotic state dynamics, rendering the learnt policies more resilient to sensor noise or adversarial attacks and thereby improving the suitability of deep reinforcement learning for real-world applications.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95847"} +{"video_file": "HDVsiUHQ1w_39027385.mp4", "openreview_id": "HDVsiUHQ1w", "slideslive_id": 39027385, "venue": "nips2024", "title": "SCOREQ: Speech Quality Assessment with Contrastive Regression", "status": "Poster", "keywords": "Perceptual measures of audio quality;objective and subjective quality assessment;domain mismatch;contrastive learning;regression", "tldr": "We propose a new loss function based on contrastive regression to address the domain mismatch in speech quality metrics", "abstract": "In this paper, we present SCOREQ, a novel approach for speech quality prediction. SCOREQ is a triplet loss function for contrastive regression that addresses the domain generalisation shortcoming exhibited by state of the art no-reference speech quality metrics. In the paper we: (i) illustrate the problem of L2 loss training failing at capturing the continuous nature of the mean opinion score (MOS) labels; (ii) demonstrate the lack of generalisation through a benchmarking evaluation across several speech domains; (iii) outline our approach and explore the impact of the architectural design decisions through incremental evaluation; (iv) evaluate the final model against state of the art models for a wide variety of data and domains. The results show that the lack of generalisation observed in state of the art speech quality metrics is addressed by SCOREQ. We conclude that using a triplet loss function for contrastive regression improves generalisation for speech quality prediction models but also has potential utility across a wide range of applications using regression-based predictive models.", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/95846"} +{"video_file": "HFS800reZK_39026197.mp4", "openreview_id": "HFS800reZK", "slideslive_id": 39026197, "venue": "nips2024", "title": "Learning Representations for Hierarchies with Minimal Support", "status": "Poster", "keywords": "graph embeddings;representation learning", "tldr": "We provide the provably minimal set of entries from the adjacency matrix necessary to train representations for transitively-closed DAGs, provided our energy function has transitivity bias.", "abstract": "When training node embedding models to represent large directed graphs (digraphs), it is impossible to observe all entries of the adjacency matrix during training. As a consequence most methods employ sampling. 
For very large digraphs, however, this means many (most) entries may be unobserved during training. In general, observing every entry would be necessary to uniquely identify a graph, however if we know the graph has a certain property some entries can be omitted - for example, only half the entries would be required for a symmetric graph. In this work, we develop a novel framework to identify a subset of entries required to uniquely distinguish a graph among all transitively-closed DAGs. We give an explicit algorithm to compute the provably minimal set of entries, and demonstrate empirically that one can train node embedding models with greater efficiency and performance, provided the energy function has an appropriate inductive bias. We achieve robust performance on synthetic hierarchies and a larger real-world taxonomy, observing improved convergence rates in a resource-constrained setting while reducing the set of training examples by as much as 99%.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95845"} +{"video_file": "HGNTcy4eEp_39028247.mp4", "openreview_id": "HGNTcy4eEp", "slideslive_id": 39028247, "venue": "nips2024", "title": "Learning Group Actions on Latent Representations", "status": "Poster", "keywords": "group action;representation learning;image rendering", "tldr": "A model to learn group action on latent factors, e.g. manipulating objects on 2D images.", "abstract": "In this work, we introduce a new approach to model group actions in autoencoders. Diverging from prior research in this domain, we propose to learn the group actions on the latent space rather than strictly on the data space. This adaptation enhances the versatility of our model, enabling it to learn a broader range of scenarios prevalent in the real world, where groups can act on latent factors. Our method allows a wide flexibility in the encoder and decoder architectures and does not require group-specific layers. In addition, we show that our model theoretically serves as a superset of methods that learn group actions on the data space. We test our approach on five image datasets with diverse groups acting on them and demonstrate superior performance to recently proposed methods for modeling group actions.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95844"} +{"video_file": "HQgHCVZiHw_39024823.mp4", "openreview_id": "HQgHCVZiHw", "slideslive_id": 39024823, "venue": "nips2024", "title": "Is Score Matching Suitable for Estimating Point Processes?", "status": "Poster", "keywords": "point processes;score matching;parameter estimation", "tldr": "This study highlights the incompleteness of previously proposed score matching estimators for point processes. In addressing this issue, we introduce a novel score matching estimator for point processes.", "abstract": "Score matching estimators for point processes have gained widespread attention in recent years because they do not require the calculation of intensity integrals, thereby effectively addressing the computational challenges in maximum likelihood estimation (MLE). Some existing works have proposed score matching estimators for point processes. However, this work demonstrates that the incompleteness of the estimators proposed in those works renders them applicable only to specific problems, and they fail for more general point processes. To address this issue, this work introduces the weighted score matching estimator to point processes. 
Theoretically, we prove the consistency of the estimator we propose. Experimental results indicate that our estimator accurately estimates model parameters on synthetic data and yields results consistent with MLE on real data. In contrast, existing score matching estimators fail to perform effectively. Codes are publicly available at \\url{https://github.com/KenCao2007/WSM_TPP}.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95838"} +{"video_file": "HRnSVflpgt_39025549.mp4", "openreview_id": "HRnSVflpgt", "slideslive_id": 39025549, "venue": "nips2024", "title": "Schur Nets: exploiting local structure for equivariance in higher order graph neural networks", "status": "Poster", "keywords": "graph neural networks;equivariance;spectral graph theory;higher order message passing", "tldr": "We show how to build higher order GNNs that are equivariant to the automorphism groups of subgraphs without actually having to find the automorphism groups.", "abstract": "Recent works have shown that extending the message passing paradigm to subgraphs communicating with other subgraphs, especially via higher order messages, can boost the expressivity of graph neural networks. In such architectures, to faithfully account for local structure such as cycles, the local operations must be equivariant to the automorphism group of the local environment. However, enumerating the automorphism groups of all subgraphs of interest and finding appropriate equivariant operations for each one of them separately is generally not feasible. In this paper we propose a solution to this problem based on spectral graph theory that bypasses having to determine the automorphism group entirely and constructs a basis for equivariant operations directly from the graph Laplacian. We show that this approach can boost the performance of GNNs on some standard benchmarks.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/95836"} +{"video_file": "HSJOt2hyDf_39025235.mp4", "openreview_id": "HSJOt2hyDf", "slideslive_id": 39025235, "venue": "nips2024", "title": "Initializing Services in Interactive ML Systems for Diverse Users", "status": "Poster", "keywords": "Algorithm design for multi-service ML systems;initialization;clustering;approximation ratio;preference learning", "tldr": "We study the problem of initializing multiple services for a provider catering to a user base with diverse preferences, developing algorithms with approximation ratio guarantees on average and worst case losses.", "abstract": "This paper investigates ML systems serving a group of users, with multiple models/services, each aimed at specializing to a sub-group of users. We consider settings where upon deploying a set of services, users choose the one minimizing their personal losses and the learner iteratively learns by interacting with diverse users. Prior research shows that the outcomes of learning dynamics, which comprise both the services' adjustments and users' service selections, hinge significantly on the initial conditions. 
However, finding good initial conditions faces two main challenges: (i) \\emph{Bandit feedback:} Typically, data on user preferences are not available before deploying services and observing user behavior; (ii) \\emph{Suboptimal local solutions:} The total loss landscape (i.e., the sum of loss functions across all users and services) is not convex and gradient-based algorithms can get stuck in poor local minima.\nWe address these challenges with a randomized algorithm to adaptively select a minimal set of users for data collection in order to initialize a set of services. Under mild assumptions on the loss functions, we prove that our initialization leads to a total loss within a factor of the \\textit{globally optimal total loss, with complete user preference data}, and this factor scales logarithmically in the number of services. This result is a generalization of the well-known $k$-means++ guarantee to a broad problem class which is also of independent interest. The theory is complemented by experiments on real as well as semi-synthetic datasets.", "primary_area": "active_learning", "site": "https://neurips.cc/virtual/2024/poster/95834"} +{"video_file": "HShs7q1Njh_39025308.mp4", "openreview_id": "HShs7q1Njh", "slideslive_id": 39025308, "venue": "nips2024", "title": "LLM Processes: Numerical Predictive Distributions Conditioned on Natural Language", "status": "Poster", "keywords": "Large Language Models;Probabilistic Regression;In-context Learning", "tldr": "We use LLMs to build a regression model that can process numerical data and make probabilistic predictions at arbitrary locations, guided by natural language text which describes a user's prior knowledge.", "abstract": "Machine learning practitioners often face significant challenges in formally integrating their prior knowledge and beliefs into predictive models, limiting the potential for nuanced and context-aware analyses. Moreover, the expertise needed to integrate this prior knowledge into probabilistic modeling typically limits the application of these models to specialists. Our goal is to build a regression model that can process numerical data and make probabilistic predictions at arbitrary locations, guided by natural language text which describes a user's prior knowledge. Large Language Models (LLMs) provide a useful starting point for designing such a tool since they 1) provide an interface where users can incorporate expert insights in natural language and 2) provide an opportunity for leveraging latent problem-relevant knowledge encoded in LLMs that users may not have themselves. We start by exploring strategies for eliciting explicit, coherent numerical predictive distributions from LLMs. We examine these joint predictive distributions, which we call LLM Processes, over arbitrarily-many quantities in settings such as forecasting, multi-dimensional regression, black-box optimization, and image modeling. We investigate the practical details of prompting to elicit coherent predictive distributions, and demonstrate their effectiveness at regression. Finally, we demonstrate the ability to usefully incorporate text into numerical predictions, improving predictive performance and giving quantitative structure that reflects qualitative descriptions.
This lets us begin to explore the rich, grounded hypothesis space that LLMs implicitly encode.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95832"} +{"video_file": "HTLJptF7qM_39026335.mp4", "openreview_id": "HTLJptF7qM", "slideslive_id": 39026335, "venue": "nips2024", "title": "Noisy Label Learning with Instance-Dependent Outliers: Identifiability via Crowd Wisdom", "status": "Spotlight", "keywords": "Noisy label;instance-dependent label noise;sample selection;end-to-end learning;identifiability;crowdsourcing", "tldr": "proposed an end-to-end noisy label learning system that provably identifies the target system (as if no label noise exists) in the presence of instance-dependent outliers.", "abstract": "The generation of label noise is often modeled as a process involving a probability transition matrix (also interpreted as the annotator confusion matrix) imposed onto the label distribution. Under this model, learning the ``ground-truth classifier''---i.e., the classifier that can be learned if no noise was present---and the confusion matrix boils down to a model identification problem. Prior works along this line demonstrated appealing empirical performance, yet identifiability of the model was mostly established by assuming an instance-invariant confusion matrix. Having an (occasionally) instance-dependent confusion matrix across data samples is apparently more realistic, but inevitably introduces outliers to the model. Our interest lies in confusion matrix-based noisy label learning with such outliers taken into consideration. We begin by pointing out that under the model of interest, using labels produced by only one annotator is fundamentally insufficient to detect the outliers or identify the ground-truth classifier. Then, we prove that by employing a crowdsourcing strategy involving multiple annotators, a carefully designed loss function can establish the desired model identifiability under reasonable conditions. Our development builds upon a link between the noisy label model and a column-corrupted matrix factorization model---based on which we show that crowdsourced annotations distinguish nominal data and instance-dependent outliers using a low-dimensional subspace. Experiments show that our learning scheme substantially improves outlier detection and the classifier's testing accuracy.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95831"} +{"video_file": "HUxtJcQpDS_39025032.mp4", "openreview_id": "HUxtJcQpDS", "slideslive_id": 39025032, "venue": "nips2024", "title": "HEALNet: Multimodal Fusion for Heterogeneous Biomedical Data", "status": "Poster", "keywords": "multimodal;fusion;computational pathology;multiomic analyses", "tldr": "Flexible method for multimodal fusion on different biomedical data structures", "abstract": "Technological advances in medical data collection, such as high-throughput genomic sequencing and digital high-resolution histopathology, have contributed to the rising requirement for multimodal biomedical modelling, specifically for image, tabular and graph data. Most multimodal deep learning approaches use modality-specific architectures that are often trained separately and cannot capture the crucial cross-modal information that motivates the integration of different data sources.
This paper presents the Hybrid Early-fusion Attention Learning Network (HEALNet) \u2013 a flexible multimodal fusion architecture, which: a) preserves modality-specific structural information, b) captures the cross-modal interactions and structural information in a shared latent space, c) can effectively handle missing modalities during training and inference, and d) enables intuitive model inspection by learning on the raw data input instead of opaque embeddings. We conduct multimodal survival analysis on Whole Slide Images and Multi-omic data on four cancer datasets from The Cancer Genome Atlas (TCGA). HEALNet achieves state-of-the-art performance compared to other end-to-end trained fusion models, substantially improving over unimodal and multimodal baselines whilst being robust in scenarios with missing modalities. The code is available at https://github.com/konst-int-i/healnet.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/95829"} +{"video_file": "HXdAfK488A_39025317.mp4", "openreview_id": "HXdAfK488A", "slideslive_id": 39025317, "venue": "nips2024", "title": "Doing Experiments and Revising Rules with Natural Language and Probabilistic Reasoning", "status": "Poster", "keywords": "induction;LLM;active learning", "tldr": "We give a model of how to infer hidden natural language rules by doing experiments using probabilistic reasoning with language models.", "abstract": "We give a model of how to infer natural language rules by doing experiments. The model integrates Large Language Models (LLMs) with Monte Carlo algorithms for probabilistic inference, interleaving online belief updates with experiment design under information-theoretic criteria. We conduct a human-model comparison on a Zendo-style task, finding that a critical ingredient for modeling the human data is to assume that humans also consider fuzzy, probabilistic rules, in addition to assuming that humans perform approximately-Bayesian belief updates. We also compare with recent algorithms for using LLMs to generate and revise hypotheses, finding that our online inference method yields higher accuracy at recovering the true underlying rule, and provides better support for designing optimal experiments.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/95827"} +{"video_file": "HYa3eu8scG_39025003.mp4", "openreview_id": "HYa3eu8scG", "slideslive_id": 39025003, "venue": "nips2024", "title": "Training for Stable Explanation for Free", "status": "Poster", "keywords": "accountable machine learning;stability;transparency;interpretability", "tldr": "We propose a framework for stable and transparent model without expensive training.", "abstract": "To foster trust in machine learning models, explanations must be faithful and stable for consistent insights. Existing relevant works rely on the $\\ell_p$ distance for stability assessment, which diverges from human perception. Besides, existing adversarial training (AT) associated with intensive computations may lead to an arms race. To address these challenges, we introduce a novel metric to assess the stability of top-$k$ salient features. We introduce R2ET which trains for stable explanation by efficient and effective regularizer, and analyze R2ET by multi-objective optimization to prove numerical and statistical stability of explanations. Moreover, theoretical connections between R2ET and certified robustness justify R2ET's stability in all attacks.
Extensive experiments across various data modalities and model architectures show that R2ET achieves superior stability against stealthy attacks, and generalizes effectively across different explanation methods. The code can be found at https://github.com/ccha005/R2ET.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/95826"} +{"video_file": "HbIBqn3grD_39025160.mp4", "openreview_id": "HbIBqn3grD", "slideslive_id": 39025160, "venue": "nips2024", "title": "Structured flexibility in recurrent neural networks via neuromodulation", "status": "Poster", "keywords": "recurrent neural networks;neuromodulation;low-rank recurrent neural networks;timing;biological computation", "tldr": "We add a neuromodulation-inspired signal to a low-rank RNN and show that it enhances performance and generalization on neuroscience and machine learning tasks.", "abstract": "A core aim in theoretical and systems neuroscience is to develop models which help us better understand biological intelligence. Such models range broadly in both complexity and biological plausibility. One widely-adopted example is task-optimized recurrent neural networks (RNNs), which have been used to generate hypotheses about how the brain\u2019s neural dynamics may organize to accomplish tasks. However, task-optimized RNNs typically have a fixed weight matrix representing the synaptic connectivity between neurons. From decades of neuroscience research, we know that synaptic weights are constantly changing, controlled in part by chemicals such as neuromodulators. In this work we explore the computational implications of synaptic gain scaling, a form of neuromodulation, using task-optimized low-rank RNNs. In our neuromodulated RNN (NM-RNN) model, a neuromodulatory subnetwork outputs a low-dimensional neuromodulatory signal that dynamically scales the low-rank recurrent weights of an output-generating RNN. In empirical experiments, we find that the structured flexibility in the NM-RNN allows it to both train and generalize with a higher degree of accuracy than low-rank RNNs on a set of canonical tasks. Additionally, via theoretical analyses we show how neuromodulatory gain scaling endows networks with gating mechanisms commonly found in artificial RNNs. We end by analyzing the low-rank dynamics of trained NM-RNNs, to show how task computations are distributed.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/95823"} +{"video_file": "HcqnhqoXS3_39027551.mp4", "openreview_id": "HcqnhqoXS3", "slideslive_id": 39027551, "venue": "nips2024", "title": "Decomposed Prompt Decision Transformer for Efficient Unseen Task Generalization", "status": "Poster", "keywords": "Offline Reinforcement Learning;Prompt Tuning", "tldr": "A novel offline reinforcement learning method for efficient generalization of unseen tasks", "abstract": "Multi-task offline reinforcement learning aims to develop a unified policy for diverse tasks without requiring real-time interaction with the environment. Recent work explores sequence modeling, leveraging the scalability of the transformer architecture as a foundation for multi-task learning. Given the variations in task content and complexity, formulating policies becomes a challenging endeavor, requiring careful parameter sharing and adept management of conflicting gradients to extract rich cross-task knowledge from multiple tasks and transfer it to unseen tasks. 
In this paper, we propose the Decomposed Prompt Decision Transformer (DPDT) that adopts a two-stage paradigm to efficiently learn prompts for unseen tasks in a parameter-efficient manner. We incorporate parameters from pre-trained language models (PLMs) to initialize DPDT, thereby providing rich prior knowledge encoded in language models. During the decomposed prompt tuning phase, we learn both cross-task and task-specific prompts on training tasks to achieve prompt decomposition. In the test time adaptation phase, the cross-task prompt, serving as a good initialization, is further optimized on unseen tasks through test time adaptation, enhancing the model's performance on these tasks. Empirical evaluation on a series of Meta-RL benchmarks demonstrates the superiority of our approach. The project is available at https://github.com/ruthless-man/DPDT.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95819"} +{"video_file": "HeJ1cBAgiV_39028514.mp4", "openreview_id": "HeJ1cBAgiV", "slideslive_id": 39028514, "venue": "nips2024", "title": "SCAFFLSA: Taming Heterogeneity in Federated Linear Stochastic Approximation and TD Learning", "status": "Poster", "keywords": "Stochastic Approximation;Reinforcement learning;Federated Learning;Machine Learning", "tldr": "We perform a non-asymptotic analysis of federated LSA, study the impact of heterogeneity, and propose a method that mitigates this impact by using control variates while preserving the linear speed-up. We apply the results to federated TD.", "abstract": "In this paper, we analyze the sample and communication complexity of the federated linear stochastic approximation (FedLSA) algorithm. We explicitly quantify the effects of local training with agent heterogeneity. We show that the communication complexity of FedLSA scales polynomially with the inverse of the desired accuracy \u03f5. To overcome this, we propose SCAFFLSA, a new variant of FedLSA that uses control variates to correct for client drift, and establish its sample and communication complexities. We show that for statistically heterogeneous agents, its communication complexity scales logarithmically with the desired accuracy, similar to Scaffnew. An important finding is that, compared to the existing results for Scaffnew, the sample complexity scales with the inverse of the number of agents, a property referred to as linear speed-up. Achieving this linear speed-up requires completely new theoretical arguments. We apply the proposed method to federated temporal difference learning with linear function approximation and analyze the corresponding complexity improvements.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95816"} +{"video_file": "Hew2JSDycr_39026180.mp4", "openreview_id": "Hew2JSDycr", "slideslive_id": 39026180, "venue": "nips2024", "title": "BiScope: AI-generated Text Detection by Checking Memorization of Preceding Tokens", "status": "Poster", "keywords": "Large Language Models;AI-text Detection;Paraphrase;Trustworthy AI", "tldr": "We propose BiScope, leveraging a novel bi-directional cross-entropy calculation method to detect AI-generated texts.", "abstract": "Detecting text generated by Large Language Models (LLMs) is a pressing need in order to identify and prevent misuse of these powerful models in a wide range of applications, which have highly undesirable consequences such as misinformation and academic dishonesty.
Given a piece of subject text, many existing detection methods work by measuring the difficulty of LLM predicting the next token in the text from their prefix. In this paper, we make a critical observation that how well the current token\u2019s output logits memorizes the closely preceding input tokens also provides strong evidence. Therefore, we propose a novel bi-directional calculation method that measures the cross-entropy losses between an output logits and the ground-truth token (forward) and between the output logits and the immediately preceding input token (backward). A classifier is trained to make the final prediction based on the statistics of these losses. We evaluate our system, named BISCOPE, on texts generated by five latest commercial LLMs across five heterogeneous datasets, including both natural language and code. BISCOPE demonstrates superior detection accuracy and robustness compared to six existing baseline methods, exceeding the state-of-the-art non-commercial methods\u2019 detection accuracy by over 0.30 F1 score, achieving over 0.95 detection F1 score on average. It also outperforms the best commercial tool GPTZero that is based on a commercial LLM trained with an enormous volume of data. Code is available at https://github.com/MarkGHX/BiScope.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95814"} +{"video_file": "HfQF8LoLhs_39028011.mp4", "openreview_id": "HfQF8LoLhs", "slideslive_id": 39028011, "venue": "nips2024", "title": "Asymptotics of Alpha-Divergence Variational Inference Algorithms with Exponential Families", "status": "Poster", "keywords": "Variational inference;stochastic algorithms;asymptotic analysis;alpha divergence;exponential models", "tldr": "We provide asymptotic results for algorithms optimizing the alpha-divergence criterion in the context of Variational Inference, using an exponential variational family.", "abstract": "Recent works in Variational Inference have examined alternative criteria to the commonly used exclusive Kullback-Leibler divergence. Encouraging empirical results have been obtained with the family of alpha-divergences, but few works have focused on the asymptotic properties of the proposed algorithms, especially as the number of iterations goes to infinity. In this paper, we study a procedure that ensures a monotonic decrease in the alpha-divergence. We provide sufficient conditions to guarantee its convergence to a local minimizer of the alpha-divergence at a geometric rate when the variational family belongs to the class of exponential models. The sample-based version of this ideal procedure involves biased gradient estimators, thus hindering any theoretical study. We propose an alternative unbiased algorithm, we prove its almost sure convergence to a local minimizer of the alpha-divergence, and a law of the iterated logarithm. 
Our results are exemplified with toy and real-data experiments.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95813"} +{"video_file": "HfpV6u0kbX_39027110.mp4", "openreview_id": "HfpV6u0kbX", "slideslive_id": 39027110, "venue": "nips2024", "title": "Efficient Multi-task LLM Quantization and Serving for Multiple LoRA Adapters", "status": "Poster", "keywords": "Multi-LoRA serving system; LLM serving; LoRA; Post Training Quantization; Multi-task Scheduling", "tldr": "We proposed LoRA-Inlaid, a resource-efficient and high-performance system for multi-task LLM quantization and serving.", "abstract": "With the remarkable achievements of large language models (LLMs), the demand for fine-tuning and deploying LLMs in various downstream tasks has garnered widespread interest. Parameter-efficient fine-tuning techniques represented by LoRA and model quantization techniques represented by GPTQ and AWQ are of paramount significance. However, although these techniques have been widely adopted in single-task scenarios, research is scarce in multi-task scenarios. To be specific, we find that mainstream quantization methods would prevent the base LLM from being shared among tasks, so current LLM serving systems are infeasible to integrate LLM quantization with multiple LoRA adapters to achieve memory-efficient multi-task serving. Moreover, existing LLM serving systems lack support for dynamic task addition and overlook the workload differences among tasks, leading to inefficiencies in multi-task scenarios.\nThis work proposes LoRA-Inlaid, an efficient multi-task LLM serving system. On the one hand, LoRA-Inlaid designs a flexible and efficient multi-task quantization algorithm (MLGPTQ) that facilitates the sharing of a single quantized model for multiple LoRA adapters, which significantly reduces the memory consumption for model deployment. Meanwhile, it supports adding LoRA adapters for new tasks on the fly, without sacrificing the stability of online services. On the other hand, LoRA-Inlaid develops a novel multi-task scheduling algorithm guided by output length prediction and grouping among different tasks, which effectively shrinks the memory consumption and avoids frequent switching of LoRA adapters. Empirical results verify that LoRA-Inlaid outperforms existing state-of-the-art LLM serving systems by up to 1.58 times in terms of throughput, 1.76 times in terms of average latency, 2 times in terms of job completion time, and 10 times in terms of SLO Attainment, while maintaining the same level of model quality.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/95811"} +{"video_file": "HkC4OYee3Q_39027782.mp4", "openreview_id": "HkC4OYee3Q", "slideslive_id": 39027782, "venue": "nips2024", "title": "SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents", "status": "Poster", "keywords": "Reinforcement Learning;Backdoor Attacks;Adversarial Machine Learning;Security;Poisoning Attacks;Reinforcement Learning Theory", "tldr": "Theoretically sound backdoor attacks against reinforcement learning with provable guarantees.", "abstract": "Reinforcement learning (RL) is an actively growing field that is seeing increased usage in real-world, safety-critical applications -- making it paramount to ensure the robustness of RL algorithms against adversarial attacks. In this work we explore a particularly stealthy form of training-time attacks against RL -- backdoor poisoning. 
Here the adversary intercepts the training of an RL agent with the goal of reliably inducing a particular action when the agent observes a pre-determined trigger at inference time. We uncover theoretical limitations of prior work by proving their inability to generalize across domains and MDPs. Motivated by this, we formulate a novel poisoning attack framework which interlinks the adversary's objectives with those of finding an optimal policy -- guaranteeing attack success in the limit. Using insights from our theoretical analysis we develop \"SleeperNets\" as a universal backdoor attack which exploits a newly proposed threat model and leverages dynamic reward poisoning techniques. We evaluate our attack in 6 environments spanning multiple domains and demonstrate significant improvements in attack success over existing methods, while preserving benign episodic return.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95806"} +{"video_file": "Hlcek7AYgP_39026999.mp4", "openreview_id": "Hlcek7AYgP", "slideslive_id": 39026999, "venue": "nips2024", "title": "Neural Embeddings Rank: Aligning 3D latent dynamics with movements", "status": "Poster", "keywords": "Dimensionality reduction;Latent dynamics;Brain-machine interfaces;Neural decoding;Contrastive learning", "tldr": "We reduce high-dimensional neural dynamics to only three dimensions and decode movements using just linear and logistic regression", "abstract": "Aligning neural dynamics with movements is a fundamental goal in neuroscience and brain-machine interfaces. However, there is still a lack of dimensionality reduction methods that can effectively align low-dimensional latent dynamics with movements. To address this gap, we propose Neural Embeddings Rank (NER), a technique that embeds neural dynamics into a 3D latent space and contrasts the embeddings based on movement ranks. NER learns to regress continuous representations of neural dynamics (i.e., embeddings) on continuous movements. We apply NER and six other dimensionality reduction techniques to neurons in the primary motor cortex (M1), dorsal premotor cortex (PMd), and primary somatosensory cortex (S1) as monkeys perform reaching tasks. Only NER aligns latent dynamics with both hand position and direction, visualizable in 3D. NER reveals consistent latent dynamics in M1 and PMd across sixteen sessions over a year. Using a linear regression decoder, NER explains 86% and 97% of the variance in velocity and position, respectively. Linear models trained on data from one session successfully decode velocity, position, and direction in held-out test data from different dates and cortical areas (64%, 88%, and 90%). NER also reveals distinct latent dynamics in S1 during consistent movements and in M1 during curved reaching tasks. The code is available at https://github.com/NeuroscienceAI/NER.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/95804"} +{"video_file": "HmCmxbCpp2_39028600.mp4", "openreview_id": "HmCmxbCpp2", "slideslive_id": 39028600, "venue": "nips2024", "title": "SG-Nav: Online 3D Scene Graph Prompting for LLM-based Zero-shot Object Navigation", "status": "Poster", "keywords": "navigation;scene graph;large language model", "tldr": "We propose a scene graph representation to prompt LLM for zero-shot object navigation, which achieves state-of-the-art performance while being explainable.", "abstract": "In this paper, we propose a new framework for zero-shot object navigation. 
Existing zero-shot object navigation methods prompt the LLM with the text of spatially close objects, which lacks enough scene context for in-depth reasoning. To better preserve the information of the environment and fully exploit the reasoning ability of the LLM, we propose to represent the observed scene with a 3D scene graph. The scene graph encodes the relationships between objects, groups and rooms with an LLM-friendly structure, for which we design a hierarchical chain-of-thought prompt to help the LLM reason about the goal location according to scene context by traversing the nodes and edges. Moreover, benefiting from the scene graph representation, we further design a re-perception mechanism to empower the object navigation framework with the ability to correct perception errors. We conduct extensive experiments on MP3D, HM3D and RoboTHOR environments, where SG-Nav surpasses previous state-of-the-art zero-shot methods by more than \\textbf{10%} SR on all benchmarks, while the decision process is explainable. To the best of our knowledge, SG-Nav is the first zero-shot method that achieves even higher performance than supervised object navigation methods on the challenging MP3D benchmark. Code of this project will be released in the final version.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/95803"} +{"video_file": "HmMSBhMAw4_39026274.mp4", "openreview_id": "HmMSBhMAw4", "slideslive_id": 39026274, "venue": "nips2024", "title": "Periodic agent-state based Q-learning for POMDPs", "status": "Poster", "keywords": "POMDPs;RL;Q-learning;non-stationary policies;non-Markovian environments", "tldr": "Periodicity helps in agent-state based RL in POMDPs because agent-states are not Markov.", "abstract": "The standard approach for Partially Observable Markov Decision Processes (POMDPs) is to convert them to a fully observed belief-state MDP. However, the belief state depends on the system model and is therefore not viable in reinforcement learning (RL) settings. A widely used alternative is to use an agent state, which is a model-free, recursively updateable function of the observation history. Examples include frame stacking and recurrent neural networks. Since the agent state is model-free, it is used to adapt standard RL algorithms to POMDPs. However, standard RL algorithms like Q-learning learn a stationary policy. Our main thesis that we illustrate via examples is that because the agent state does not satisfy the Markov property, non-stationary agent-state based policies can outperform stationary ones. To leverage this feature, we propose PASQL (periodic agent-state based Q-learning), which is a variant of agent-state-based Q-learning that learns periodic policies. By combining ideas from periodic Markov chains and stochastic approximation, we rigorously establish that PASQL converges to a cyclic limit and characterize the approximation error of the converged periodic policy.
Finally, we present a numerical experiment to highlight the salient features of PASQL and demonstrate the benefit of learning periodic policies over stationary policies.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95802"} +{"video_file": "HpN4xeDJQF_39025359.mp4", "openreview_id": "HpN4xeDJQF", "slideslive_id": 39025359, "venue": "nips2024", "title": "Beyond Single Stationary Policies: Meta-Task Players as Naturally Superior Collaborators", "status": "Poster", "keywords": "human-AI collaboration;Bayesian policy reuse;reinforcement learning", "tldr": "An effective human-AI collaboration framework that adaptively selects optimal policies in non-stationary tasks, with guarantees for rapid convergence and superior performance in dynamic environments.", "abstract": "In human-AI collaborative tasks, the distribution of human behavior, influenced by mental models, is non-stationary, manifesting in various levels of initiative and different collaborative strategies. A significant challenge in human-AI collaboration is determining how to collaborate effectively with humans exhibiting non-stationary dynamics. Current collaborative agents involve initially running self-play (SP) multiple times to build a policy pool, followed by training the final adaptive policy against this pool. These agents themselves are a single policy network, which is $\\textbf{insufficient for handling non-stationary human dynamics}$. We discern that despite the inherent diversity in human behaviors, the $\\textbf{underlying meta-tasks within specific collaborative contexts tend to be strikingly similar}$. Accordingly, we propose $\\textbf{C}$ollaborative $\\textbf{B}$ayesian $\\textbf{P}$olicy $\\textbf{R}$euse ($\\textbf{CBPR}$), a novel Bayesian-based framework that $\\textbf{adaptively selects optimal collaborative policies matching the current meta-task from multiple policy networks}$ instead of just selecting actions relying on a single policy network. We provide theoretical guarantees for CBPR's rapid convergence to the optimal policy once human partners alter their policies. This framework shifts from directly modeling human behavior to identifying various meta-tasks that support human decision-making and training meta-task playing (MTP) agents tailored to enhance collaboration. Our method undergoes rigorous testing in a well-recognized collaborative cooking simulator, $\\textit{Overcooked}$. Both empirical results and user studies demonstrate CBPR's superior competitiveness compared to existing baselines.", "primary_area": "human-AI_interaction", "site": "https://neurips.cc/virtual/2024/poster/95801"} +{"video_file": "HtlfNbyfOn_39025898.mp4", "openreview_id": "HtlfNbyfOn", "slideslive_id": 39025898, "venue": "nips2024", "title": "bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction", "status": "Poster", "keywords": "Denoising; Self-supervised learning; Quanta imaging; Photon counting", "tldr": "We show that high quality video can be created from extremely sparse 1-bit image sequence, the typical raw data of an emerging image sensor, using self-supervised learning.", "abstract": "Quanta image sensors, such as SPAD arrays, are an emerging sensor technology, producing 1-bit arrays representing photon detection events over exposures as short as a few nanoseconds. In practice, raw data are post-processed using heavy spatiotemporal binning to create more useful and interpretable images at the cost of degrading spatiotemporal resolution. 
In this work, we propose bit2bit, a new method for reconstructing high-quality image stacks at the original spatiotemporal resolution from sparse binary quanta image data. Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data by predicting the photon arrival location probability distribution. However, due to the binary nature of the data, we show that the assumption of a Poisson distribution is inadequate. Instead, we model the process with a Bernoulli lattice process from the truncated Poisson. This leads to the proposal of a novel self-supervised solution based on a masked loss function. We evaluate our method using both simulated and real data. On simulated data from a conventional video, we achieve 34.35 mean PSNR with extremely photon-sparse binary input (<0.06 photons per pixel per frame). We also present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions. The scenes cover strong/weak ambient light, strong motion, ultra-fast events, etc., which will be made available to the community, on which we demonstrate the promise of our approach. Both reconstruction quality and throughput substantially surpass the state-of-the-art methods (e.g., Quanta Burst Photography (QBP)). Our approach significantly enhances the visualization and usability of the data, enabling the application of existing analysis techniques.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95800"} +{"video_file": "HwO1mNluoL_39024747.mp4", "openreview_id": "HwO1mNluoL", "slideslive_id": 39024747, "venue": "nips2024", "title": "Mitigating Biases in Blackbox Feature Extractors for Image Classification Tasks", "status": "Poster", "keywords": "bias;fairness;spurious correlation", "tldr": "Analyses the propagation of biases in presence of blackbox feature extractors and suggests a simple migitation strategy for image classification", "abstract": "In image classification, it is common to utilize a pretrained model to extract meaningful features of the input images, and then to train a classifier on top of it to make predictions for any downstream task. Trained on enormous amounts of data, these models have been shown to contain harmful biases which can hurt their performance when adapted for a downstream classification task. Further, very often they may be blackbox, either due to scale, or because of unavailability of model weights or architecture. Thus, during a downstream task, we cannot debias such models by updating the weights of the feature encoder, as only the classifier can be finetuned. In this regard, we investigate the suitability of some existing debiasing techniques and thereby motivate the need for more focused research towards this problem setting. Furthermore, we propose a simple method consisting of a clustering-based adaptive margin loss with a blackbox feature encoder, with no knowledge of the bias attribute. 
Our experiments demonstrate the effectiveness of our method across multiple benchmarks.", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/95798"} +{"video_file": "HxGdbAmYYr_39027352.mp4", "openreview_id": "HxGdbAmYYr", "slideslive_id": 39027352, "venue": "nips2024", "title": "Mixture of Adversarial LoRAs: Boosting Robust Generalization in Meta-Tuning", "status": "Poster", "keywords": "meta-tuning;few-shot learning", "tldr": "A new adversarial meta-tuning method for boosting the performance of pre-trained models across domains in few-shot image classification", "abstract": "This paper introduces AMT, an \\textbf{A}dversarial \\textbf{M}eta-\\textbf{T}uning methodology, to boost the robust generalization of pre-trained models in the out-of-domain (OOD) few-shot learning. To address the challenge of transferring knowledge from source domains to unseen target domains, we construct the robust LoRAPool by meta-tuning LoRAs with dual perturbations applied to not only the inputs but also singular values and vectors of the weight matrices at various robustness levels. On top of that, we introduce a simple yet effective test-time merging mechanism to dynamically merge discriminative LoRAs for test-time task customization. Extensive evaluations demonstrate that AMT yields significant improvements, up to 12.92% in clean generalization and up to 49.72% in adversarial generalization, over previous state-of-the-art methods across a diverse range of OOD few-shot image classification tasks on three benchmarks, confirming the effectiveness of our approach to boost the robust generalization of pre-trained models. Our code is available at \\href{https://github.com/xyang583/AMT}{https://github.com/xyang583/AMT}.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95797"} +{"video_file": "HzANl2unCB_39025294.mp4", "openreview_id": "HzANl2unCB", "slideslive_id": 39025294, "venue": "nips2024", "title": "ChatTracker: Enhancing Visual Tracking Performance via Chatting with Multimodal Large Language Model", "status": "Poster", "keywords": "Single object tracking;Visual object tracking;Vision-Language trackers;Multimodal learning", "tldr": "We utilize Multimodal Large Language Model to enhance visual object tracking performance", "abstract": "Visual object tracking aims to locate a targeted object in a video sequence based on an initial bounding box. Recently, Vision-Language~(VL) trackers have proposed to utilize additional natural language descriptions to enhance versatility in various applications. However, VL trackers are still inferior to State-of-The-Art (SoTA) visual trackers in terms of tracking performance. We found that this inferiority primarily results from their heavy reliance on manual textual annotations, which include the frequent provision of ambiguous language descriptions. In this paper, we propose ChatTracker to leverage the wealth of world knowledge in the Multimodal Large Language Model (MLLM) to generate high-quality language descriptions and enhance tracking performance. To this end, we propose a novel reflection-based prompt optimization module to iteratively refine the ambiguous and inaccurate descriptions of the target with tracking feedback. To further utilize semantic information produced by MLLM, a simple yet effective VL tracking framework is proposed and can be easily integrated as a plug-and-play module to boost the performance of both VL and visual trackers. 
Experimental results show that our proposed ChatTracker achieves a performance comparable to existing methods.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95794"} +{"video_file": "I3IuclVLFZ_39027259.mp4", "openreview_id": "I3IuclVLFZ", "slideslive_id": 39027259, "venue": "nips2024", "title": "FedLPA: One-shot Federated Learning with Layer-Wise Posterior Aggregation", "status": "Poster", "keywords": "One-shot Federated Learning", "tldr": "We propose FedLPA to significantly improve the performance via Layer-Wise Posterior Aggregation in one-shot federated learning.", "abstract": "Efficiently aggregating trained neural networks from local clients into a global model on a server is a widely researched topic in federated learning. Recently, motivated by diminishing privacy concerns, mitigating potential attacks, and reducing communication overhead, one-shot federated learning (i.e., limiting client-server communication into a single round) has gained popularity among researchers. However, the one-shot aggregation performances are sensitively affected by the non-identical training data distribution, which exhibits high statistical heterogeneity in some real-world scenarios. To address this issue, we propose a novel one-shot aggregation method with layer-wise posterior aggregation, named FedLPA. FedLPA aggregates local models to obtain a more accurate global model without requiring extra auxiliary datasets or exposing any private label information, e.g., label distributions. To effectively capture the statistics maintained in the biased local datasets in the practical non-IID scenario, we efficiently infer the posteriors of each layer in each local model using layer-wise Laplace approximation and aggregate them to train the global parameters. Extensive experimental results demonstrate that FedLPA significantly improves learning performance over state-of-the-art methods across several metrics.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95791"} +{"video_file": "I6tBNcJE2F_39026769.mp4", "openreview_id": "I6tBNcJE2F", "slideslive_id": 39026769, "venue": "nips2024", "title": "Real-world Image Dehazing with Coherence-based Pseudo Labeling and Cooperative Unfolding Network", "status": "Spotlight", "keywords": "Semi-supervised Learning;Real-world image dehazing", "tldr": "The cooperative unfolding network (CORUN) and the first plug-in-play iterative mean-teacher framework (Colabator) for real-world image dehazing.", "abstract": "Real-world Image Dehazing (RID) aims to alleviate haze-induced degradation in real-world settings. This task remains challenging due to the complexities in accurately modeling real haze distributions and the scarcity of paired real-world data. To address these challenges, we first introduce a cooperative unfolding network that jointly models atmospheric scattering and image scenes, effectively integrating physical knowledge into deep networks to restore haze-contaminated details. Additionally, we propose the first RID-oriented iterative mean-teacher framework, termed the Coherence-based Label Generator, to generate high-quality pseudo labels for network training. Specifically, we provide an optimal label pool to store the best pseudo-labels during network training, leveraging both global and local coherence to select high-quality candidates and assign weights to prioritize haze-free regions. 
We verify the effectiveness of our method, with experiments demonstrating that it achieves state-of-the-art performance on RID tasks. Code will be available at https://github.com/cnyvfang/CORUN-Colabator.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95790"} +{"video_file": "I90ypQpLgL_39028111.mp4", "openreview_id": "I90ypQpLgL", "slideslive_id": 39028111, "venue": "nips2024", "title": "Fair Online Bilateral Trade", "status": "Poster", "keywords": "Regret minimization;online learning;two-sided markets;fairness", "tldr": "We prove tight regret bounds for a fair version of the online bilateral trade problem", "abstract": "In online bilateral trade, a platform posts prices to incoming pairs of buyers and sellers that have private valuations for a certain good. If the price is lower than the buyers' valuation and higher than the sellers' valuation, then a trade takes place. Previous work focused on the platform perspective, with the goal of setting prices maximizing the gain from trade (the sum of sellers' and buyers' utilities). Gain from trade is, however, potentially unfair to traders, as they may receive highly uneven shares of the total utility. In this work we enforce fairness by rewarding the platform with the fair gain from trade, defined as the minimum between sellers' and buyers' utilities. After showing that any no-regret learning algorithm designed to maximize the sum of the utilities may fail badly with fair gain from trade, we present our main contribution: a complete characterization of the regret regimes for fair gain from trade when, after each interaction, the platform only learns whether each trader accepted the current price. Specifically, we prove the following regret bounds: $\\Theta(\\ln T)$ in the deterministic setting, $\\Omega(T)$ in the stochastic setting, and $\\widetilde{\\Theta}(T^{2/3})$ in the stochastic setting when sellers' and buyers' valuations are independent of each other. We conclude by providing tight regret bounds when, after each interaction, the platform is allowed to observe the true traders' valuations.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/95787"}
Initially, multiple labels per query are needed, but the number of expert labels required asymptotically converges to zero, saving both expert effort and computation time. (2) No-harm guarantee with data-driven trust level adjustment: our adaptive trust level ensures that the convergence rate will not be worse than the one without using advice, even if the advice from experts is adversarial. Unlike existing methods that employ a user-defined function that hand-tunes the trust level adjustment, our approach enables data-driven adjustments. Real-world applications empirically demonstrate that our method not only outperforms existing baselines, but also maintains robustness despite varying labelling accuracy, in tasks of battery design with human experts.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95782"} +{"video_file": "IEyXWuXAQT_39027958.mp4", "openreview_id": "IEyXWuXAQT", "slideslive_id": 39027958, "venue": "nips2024", "title": "Learning via Surrogate PAC-Bayes", "status": "Poster", "keywords": "PAC-Bayes;Generalisation;Optimisation;Learning Theory", "tldr": "We introduce a novel approach for learning via surrogate training objectives inherited from PAC-Bayes generalisation bounds, offering computational efficiency and theoretical support, with applications to meta-learning.", "abstract": "PAC-Bayes learning is a comprehensive setting for (i) studying the generalisation ability of learning algorithms and (ii) deriving new learning algorithms by optimising a generalisation bound. However, optimising generalisation bounds might not always be viable for tractable or computational reasons, or both. For example, iteratively querying the empirical risk might prove computationally expensive. In response, we introduce a novel principled strategy for building an iterative learning algorithm via the optimisation of a sequence of surrogate training objectives, inherited from PAC-Bayes generalisation bounds. The key argument is to replace the empirical risk (seen as a function of hypotheses) in the generalisation bound by its projection onto a constructible low dimensional functional space: these projections can be queried much more efficiently than the initial risk. On top of providing that generic recipe for learning via surrogate PAC-Bayes bounds, we (i) contribute theoretical results establishing that iteratively optimising our surrogates implies the optimisation of the original generalisation bounds, (ii) instantiate this strategy to the framework of meta-learning, introducing a meta-objective offering a closed form expression for meta-gradient, (iii) illustrate our approach with numerical experiments inspired by an industrial biochemical problem.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95781"} +{"video_file": "IGCaTQ4n1R_39028081.mp4", "openreview_id": "IGCaTQ4n1R", "slideslive_id": 39028081, "venue": "nips2024", "title": "OpenDlign: Open-World Point Cloud Understanding with Depth-Aligned Images", "status": "Poster", "keywords": "Open-World;3D Representation Learning;3D Shape Understanding", "tldr": "OpenDlign: Open-World Point Cloud Understanding with Depth-Aligned Images", "abstract": "Recent open-world 3D representation learning methods using Vision-Language Models (VLMs) to align 3D point clouds with image-text information have shown superior 3D zero-shot performance. However, CAD-rendered images for this alignment often lack realism and texture variation, compromising alignment robustness. 
Moreover, the volume discrepancy between 3D and 2D pretraining datasets highlights the need for effective strategies to transfer the representational abilities of VLMs to 3D learning. In this paper, we present OpenDlign, a novel open-world 3D model using depth-aligned images generated from a diffusion model for robust multimodal alignment. These images exhibit greater texture diversity than CAD renderings due to the stochastic nature of the diffusion model. By refining the depth map projection pipeline and designing depth-specific prompts, OpenDlign leverages rich knowledge in pre-trained VLM for 3D representation learning with streamlined fine-tuning. Our experiments show that OpenDlign achieves high zero-shot and few-shot performance on diverse 3D tasks, despite only fine-tuning 6 million parameters on a limited ShapeNet dataset. In zero-shot classification, OpenDlign surpasses previous models by 8.0% on ModelNet40 and 16.4% on OmniObject3D. Additionally, using depth-aligned images for multimodal alignment consistently enhances the performance of other state-of-the-art models.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95778"} +{"video_file": "IGhpUd496D_39027423.mp4", "openreview_id": "IGhpUd496D", "slideslive_id": 39027423, "venue": "nips2024", "title": "Provable Editing of Deep Neural Networks using Parametric Linear Relaxation", "status": "Poster", "keywords": "Provable editing;provable repair;provable training;trustworthiness;linear programming;local robustness;verification", "tldr": "Efficient technique for provably editing the DNN parameters such that the DNN satisfies a given property for all inputs in a given polytope.", "abstract": "Ensuring that a DNN satisfies a desired property is critical when deploying DNNs in safety-critical applications. There are efficient methods that can verify whether a DNN satisfies a property, as seen in the annual DNN verification competition (VNN-COMP). However, the problem of provably editing a DNN to satisfy a property remains challenging. We present PREPARED, the first efficient technique for provable editing of DNNs. Given a DNN $\\mathcal{N}$ with parameters $\\theta$, input polytope $P$, and output polytope $Q$, PREPARED finds new parameters $\\theta'$ such that $\\forall \\mathrm{x} \\in P . \\mathcal{N}(\\mathrm{x}; \\theta') \\in Q$ while minimizing the changes $\\lVert{\\theta' - \\theta}\\rVert$. Given a DNN and a property it violates from the VNN-COMP benchmarks, PREPARED is able to provably edit the DNN to satisfy this property within 45 seconds. PREPARED is efficient because it relaxes the NP-hard provable editing problem to solving a linear program. The key contribution is the novel notion of Parametric Linear Relaxation, which enables PREPARED to construct tight output bounds of the DNN that are parameterized by the new parameters $\\theta'$. 
We demonstrate that PREPARED is more efficient and effective compared to prior DNN editing approaches i) using the VNN-COMP benchmarks, ii) by editing CIFAR10 and TinyImageNet image-recognition DNNs, and BERT sentiment-classification DNNs for local robustness, and iii) by training a DNN to model a geodynamics process and satisfy physics constraints.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/95777"} +{"video_file": "IHjoPnNZb9_39024906.mp4", "openreview_id": "IHjoPnNZb9", "slideslive_id": 39024906, "venue": "nips2024", "title": "Rethinking Decoders for Transformer-based Semantic Segmentation: A Compression Perspective", "status": "Poster", "keywords": "Semantic Segmentation;Transformer;Decoder;Coding Rate;Principal Components", "tldr": "We derive a white-box decoder for Transformer-based semantic segmentation from the perspective of maximizing coding rate and computing principle components.", "abstract": "State-of-the-art methods for Transformer-based semantic segmentation typically adopt Transformer decoders that are used to extract additional embeddings from image embeddings via cross-attention, refine either or both types of embeddings via self-attention, and project image embeddings onto the additional embeddings via dot-product. Despite their remarkable success, these empirical designs still lack theoretical justifications or interpretations, thus hindering potentially principled improvements. In this paper, we argue that there are fundamental connections between semantic segmentation and compression, especially between the Transformer decoders and Principal Component Analysis (PCA). From such a perspective, we derive a white-box, fully attentional DEcoder for PrIncipled semantiC segemenTation (DEPICT), with the interpretations as follows: 1) the self-attention operator refines image embeddings to construct an ideal principal subspace that aligns with the supervision and retains most information; 2) the cross-attention operator seeks to find a low-rank approximation of the refined image embeddings, which is expected to be a set of orthonormal bases of the principal subspace and corresponds to the predefined classes; 3) the dot-product operation yields compact representation for image embeddings as segmentation masks. Experiments conducted on dataset ADE20K find that DEPICT consistently outperforms its black-box counterpart, Segmenter, and it is light weight and more robust.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/95774"} +{"video_file": "IIoH8bf5BA_39024400.mp4", "openreview_id": "IIoH8bf5BA", "slideslive_id": 39024400, "venue": "nips2024", "title": "Piecewise deterministic generative models", "status": "Poster", "keywords": "generative modelling;piecewise deterministic Markov processes;time reversals", "tldr": "We introduce a novel class of generative models based on piecewise deterministic Markov processes, a family of non-diffusive stochastic processes consisting of deterministic motion and random jumps at random times.", "abstract": "We introduce a novel class of generative models based on piecewise deterministic Markov processes (PDMPs), a family of non-diffusive stochastic processes consisting of deterministic motion and random jumps at random times. Similarly to diffusions, such Markov processes admit time reversals that turn out to be PDMPs as well. 
We apply this observation to three PDMPs considered in the literature: the Zig-Zag process, Bouncy Particle Sampler, and Randomised Hamiltonian Monte Carlo. For these three particular instances, we show that the jump rates and kernels of the corresponding time reversals admit explicit expressions depending on some conditional densities of the PDMP under consideration before and after a jump. Based on these results, we propose efficient training procedures to learn these characteristics and consider methods to approximately simulate the reverse process. Finally, we provide bounds in the total variation distance between the data distribution and the resulting distribution of our model in the case where the base distribution is the standard $d$-dimensional Gaussian distribution. Promising numerical simulations support further investigations into this class of models.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95773"} +{"video_file": "IM4LtYRWdE_39026799.mp4", "openreview_id": "IM4LtYRWdE", "slideslive_id": 39026799, "venue": "nips2024", "title": "Inflationary Flows: Calibrated Bayesian Inference with Diffusion-Based Models", "status": "Poster", "keywords": "Deep Generative Models;Diffusion-Based Models;Probability Flow ODE;Inference;Bayesian Inference;Calibrated Inference;Compression;Dimension Reduction", "tldr": "In this work we introduce Inflationary Flows, a class of highly expressive generative models that allows us to map complex data distributions to uniquely defined and lower-dimensional latent spaces while also affording principled Bayesian inference.", "abstract": "Beyond estimating parameters of interest from data, one of the key goals of statistical inference is to properly quantify uncertainty in these estimates. In Bayesian inference, this uncertainty is provided by the posterior distribution, the computation of which typically involves an intractable high-dimensional integral. Among available approximation methods, sampling-based approaches come with strong theoretical guarantees but scale poorly to large problems, while variational approaches scale well but offer few theoretical guarantees. In particular, variational methods are known to produce overconfident estimates of posterior uncertainty and are typically non-identifiable, with many latent variable configurations generating equivalent predictions. Here, we address these challenges by showing how diffusion-based models (DBMs), which have recently produced state-of-the-art performance in generative modeling tasks, can be repurposed for performing calibrated, identifiable Bayesian inference. By exploiting a previously established connection between the stochastic and probability flow ordinary differential equations (pfODEs) underlying DBMs, we derive a class of models, \\emph{inflationary flows,} that uniquely and deterministically map high-dimensional data to a lower-dimensional Gaussian distribution via ODE integration. This map is both invertible and neighborhood-preserving, with controllable numerical error, with the result that uncertainties in the data are correctly propagated to the latent space. We demonstrate how such maps can be learned via standard DBM training using a novel noise schedule and are effective at both preserving and reducing intrinsic data dimensionality. 
The result is a class of highly expressive generative models, uniquely defined on a low-dimensional latent space, that afford principled Bayesian inference.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95772"} +{"video_file": "IMlDpZmLnL_39028848.mp4", "openreview_id": "IMlDpZmLnL", "slideslive_id": 39028848, "venue": "nips2024", "title": "A Comprehensive Analysis on the Learning Curve in Kernel Ridge Regression", "status": "Poster", "keywords": "kernel ridge regression;generalization", "tldr": "We validate the Gaussian Equivalent Property rigorously, and provide novel bounds for kernel ridge regression.", "abstract": "This paper conducts a comprehensive study of the learning curves of kernel ridge regression (KRR) under minimal assumptions. Our contributions are three-fold: 1) we analyze the role of key properties of the kernel, such as its spectral eigen-decay, the characteristics of the eigenfunctions, and the smoothness of the kernel; 2) we demonstrate the validity of the Gaussian Equivalent Property (GEP), which states that the generalization performance of KRR remains the same when the whitened features are replaced by standard Gaussian vectors, thereby shedding light on the success of previous analyzes under the Gaussian Design Assumption; 3) we derive novel bounds that improve over existing bounds across a broad range of setting such as (in)dependent feature vectors and various combinations of eigen-decay rates in the over/underparameterized regimes.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95771"} +{"video_file": "IOKLUxB05h_39028712.mp4", "openreview_id": "IOKLUxB05h", "slideslive_id": 39028712, "venue": "nips2024", "title": "Combining Observational Data and Language for Species Range Estimation", "status": "Poster", "keywords": "Species range estimation;zero-shot learning;few-shot learning;implicit networks", "tldr": "TL;DR: We developed a method for creating species range maps by integrating citizen science data with Wikipedia text, enabling accurate zero-shot and few-shot range estimation.", "abstract": "Species range maps (SRMs) are essential tools for research and policy-making in ecology, conservation, and environmental management. However, traditional SRMs rely on the availability of environmental covariates and high-quality observational data, both of which can be challenging to obtain due to geographic inaccessibility and resource constraints. We propose a novel approach combining millions of citizen science species observations with textual descriptions from Wikipedia, covering habitat preferences and range descriptions for tens of thousands of species. Our framework maps location, species, and text descriptions into a common space, facilitating the learning of rich spatial covariates at a global scale and enabling zero-shot range estimation from textual descriptions. Evaluated on held-out species, our zero-shot SRMs significantly outperform baselines and match the performance of SRMs obtained using tens of observations. Our approach also acts as a strong prior when combined with observational data, resulting in more accurate range estimation with less data. 
We present extensive quantitative and qualitative analyses of the learned representations in the context of range estimation and other spatial tasks, demonstrating the effectiveness of our approach.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/95769"} +{"video_file": "IbIB8SBKFV_39028402.mp4", "openreview_id": "IbIB8SBKFV", "slideslive_id": 39028402, "venue": "nips2024", "title": "Improving Alignment and Robustness with Circuit Breakers", "status": "Poster", "keywords": "alignment;adversarial robustness;adversarial attacks;harmfulness;security;reliability;ML safety;AI safety", "tldr": "We propose a novel approach that \"short-circuits\" harmful outputs in AI systems by directly controlling the responsible representations, offering robust protection against harmful actions and adversarial attacks without compromising utility.", "abstract": "AI systems can take harmful actions and are highly vulnerable to adversarial attacks. We present an approach, inspired by recent advances in representation engineering, that interrupts the models as they respond with harmful outputs with \"circuit breakers.\" Existing techniques aimed at improving alignment, such as refusal training, are often bypassed. Techniques such as adversarial training try to plug these holes by countering specific attacks. As an alternative to refusal training and adversarial training, circuit-breaking directly controls the representations that are responsible for harmful outputs in the first place. Our technique can be applied to both text-only and multimodal language models to prevent the generation of harmful outputs without sacrificing utility -- even in the presence of powerful unseen attacks. Notably, while adversarial robustness in standalone image recognition remains an open challenge, circuit breakers allow the larger multimodal system to reliably withstand image \"hijacks\" that aim to produce harmful content. Finally, we extend our approach to AI agents, demonstrating considerable reductions in the rate of harmful actions when they are under attack. Our approach represents a significant step forward in the development of reliable safeguards to harmful behavior and adversarial attacks.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/95761"} +{"video_file": "IlIDNMvwmX_39024556.mp4", "openreview_id": "IlIDNMvwmX", "slideslive_id": 39024556, "venue": "nips2024", "title": "LM-HT SNN: Enhancing the Performance of SNN to ANN Counterpart through Learnable Multi-hierarchical Threshold Model", "status": "Poster", "keywords": "Spiking Neural Networks;Learnable Multi-hierarchical Threshold Model;STBP Training", "tldr": "We propose an advanced LM-HT model, which can enhance the performance of SNNs to the level of ANNs and further establish a bridge between the vanilla STBP and quantized ANNs training.", "abstract": "Compared to traditional Artificial Neural Network (ANN), Spiking Neural Network (SNN) has garnered widespread academic interest for its intrinsic ability to transmit information in a more energy-efficient manner. However, despite previous efforts to optimize the learning algorithm of SNNs through various methods, SNNs still lag behind ANNs in terms of performance. The recently proposed multi-threshold model provides more possibilities for further enhancing the learning capability of SNNs. 
In this paper, we rigorously analyze the relationship among the multi-threshold model, vanilla spiking model and quantized ANNs from a mathematical perspective, then propose a novel LM-HT model, which is an equidistant multi-threshold model that can dynamically regulate the global input current and membrane potential leakage on the time dimension. The LM-HT model can also be transformed into a vanilla single threshold model through reparameterization, thereby achieving more flexible hardware deployment. In addition, we note that the LM-HT model can seamlessly integrate with ANN-SNN Conversion framework under special initialization. This novel hybrid learning framework can effectively improve the relatively poor performance of converted SNNs under low time latency. Extensive experimental results have demonstrated that our model can outperform previous state-of-the-art works on various types of datasets, which promote SNNs to achieve a brand-new level of performance comparable to quantized ANNs. Code is available at https://github.com/hzc1208/LMHT_SNN.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/95754"} +{"video_file": "Io1qKqCVIK_39024729.mp4", "openreview_id": "Io1qKqCVIK", "slideslive_id": 39024729, "venue": "nips2024", "title": "DMesh: A Differentiable Mesh Representation", "status": "Poster", "keywords": "Differentiable Mesh;3D Reconstruction", "tldr": "We present a differentiable representation, DMesh, for general 3D triangular meshes. It considers both the geometry and connectivity information of a mesh in a differentiable manner.", "abstract": "We present a differentiable representation, DMesh, for general 3D triangular meshes. DMesh considers both the geometry and connectivity information of a mesh. In our design, we first get a set of convex tetrahedra that compactly tessellates the domain based on Weighted Delaunay Triangulation (WDT), and select triangular faces on the tetrahedra to define the final mesh. We formulate probability of faces to exist on the actual surface in a differentiable manner based on the WDT. This enables DMesh to represent meshes of various topology in a differentiable way, and allows us to reconstruct the mesh under various observations, such as point clouds and multi-view images using gradient-based optimization. We publicize the source code and supplementary material at our project page (https://sonsang.github.io/dmesh-project).", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95753"} +{"video_file": "IoRT7EhFap_39027293.mp4", "openreview_id": "IoRT7EhFap", "slideslive_id": 39027293, "venue": "nips2024", "title": "Addressing Spectral Bias of Deep Neural Networks by Multi-Grade Deep Learning", "status": "Poster", "keywords": "deep neural network;spectral bias;multi-grade deep learning", "tldr": "This paper addresses the spectral bias issue of deep neural networks by multi-grade deep learning.", "abstract": "Deep neural networks (DNNs) have showcased their remarkable precision in approximating smooth functions. However, they suffer from the {\\it spectral bias}, wherein DNNs typically exhibit a tendency to prioritize the learning of lower-frequency components of a function, struggling to effectively capture its high-frequency features. This paper is to address this issue. Notice that a function having only low frequency components may be well-represented by a shallow neural network (SNN), a network having only a few layers. 
By observing that compositions of low-frequency functions can effectively approximate a high-frequency function, we propose to learn a function containing high-frequency components by composing several SNNs, each of which learns certain low-frequency information from the given data. We implement the proposed idea by exploiting the multi-grade deep learning (MGDL) model, a recently introduced model that trains a DNN incrementally, grade by grade, with each grade learning, from the residue of the previous grade, only an SNN (with trainable parameters) composed with the SNNs (with fixed parameters) trained in the preceding grades as features. We apply MGDL to synthetic, manifold, colored images, and MNIST datasets, all characterized by the presence of high-frequency features. Our study reveals that MGDL excels at representing functions containing high-frequency information. Specifically, the neural networks learned in each grade adeptly capture some low-frequency information, allowing their compositions with the SNNs learned in the previous grades to effectively represent the high-frequency features. Our experimental results underscore the efficacy of MGDL in addressing the spectral bias inherent in DNNs. By leveraging MGDL, we offer insights into overcoming the spectral bias limitation of DNNs, thereby enhancing the performance and applicability of deep learning models in tasks requiring the representation of high-frequency information. This study confirms that the proposed method offers a promising solution to address the spectral bias of DNNs. The code is available on GitHub: \\href{https://github.com/Ronglong-Fang/AddressingSpectralBiasviaMGDL}{\\texttt{Addressing Spectral Bias via MGDL}}.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95752"} +{"video_file": "Ioabr42B44_39025038.mp4", "openreview_id": "Ioabr42B44", "slideslive_id": 39025038, "venue": "nips2024", "title": "Dense Connector for MLLMs", "status": "Poster", "keywords": "Multimodal Large Language Models;Vision-Language Model", "tldr": "We introduce the Dense Connector - a simple, effective, and plug-and-play vision-language connector that significantly enhances existing MLLMs by leveraging multi-layer visual features, with minimal additional computational overhead.", "abstract": "Do we fully leverage the potential of the visual encoder in Multimodal Large Language Models (MLLMs)? The recent outstanding performance of MLLMs in multimodal understanding has garnered broad attention from both academia and industry. In the current MLLM rat race, the focus seems to be predominantly on the linguistic side. We witness the rise of larger and higher-quality instruction datasets, as well as the involvement of larger-sized LLMs. Yet, scant attention has been directed towards the visual signals utilized by MLLMs, often assumed to be the final high-level features extracted by a frozen visual encoder. In this paper, we introduce the Dense Connector - a simple, effective, and plug-and-play vision-language connector that significantly enhances existing MLLMs by leveraging multi-layer visual features, with minimal additional computational overhead. Building on this, we also propose the Efficient Dense Connector, which achieves performance comparable to LLaVA-v1.5 with only 25% of the visual tokens. Furthermore, our model, trained solely on images, showcases remarkable zero-shot capabilities in video understanding as well. 
Experimental results across various vision encoders, image resolutions, training dataset scales, varying sizes of LLMs (2.7B\u219270B), and diverse architectures of MLLMs (e.g., LLaVA-v1.5, LLaVA-NeXT and Mini-Gemini) validate the versatility and scalability of our approach, achieving state-of-the-art performance across 19 image and video benchmarks. We hope that this work will provide valuable experience and serve as a basic module for future MLLM development. Code is available at https://github.com/HJYao00/DenseConnector.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95751"} +{"video_file": "Iq2IAWozNr_39026332.mp4", "openreview_id": "Iq2IAWozNr", "slideslive_id": 39026332, "venue": "nips2024", "title": "Smoke and Mirrors in Causal Downstream Tasks", "status": "Poster", "keywords": "AI for Science;Randomized Controlled Trial;Representation Learning", "tldr": "Representation Learning for Treatment Effect Estimation on real world Randomized Controlled Trials.", "abstract": "Machine Learning and AI have the potential to transform data-driven scientific discovery, enabling accurate predictions for several scientific phenomena. As many scientific questions are inherently causal, this paper looks at the causal inference task of treatment effect estimation, where the outcome of interest is recorded in high-dimensional observations in a Randomized Controlled Trial (RCT). Despite being the simplest possible causal setting and a perfect fit for deep learning, we theoretically find that many common choices in the literature may lead to biased estimates. To test the practical impact of these considerations, we recorded ISTAnt, the first real-world benchmark for causal inference downstream tasks on high-dimensional observations as an RCT studying how garden ants (Lasius neglectus) respond to microparticles applied onto their colony members by hygienic grooming. Comparing 6 480 models fine-tuned from state-of-the-art visual backbones, we find that the sampling and modeling choices significantly affect the accuracy of the causal estimate, and that classification accuracy is not a proxy thereof. We further validated the analysis, repeating it on a synthetically generated visual data set controlling the causal model. Our results suggest that future benchmarks should carefully consider real downstream scientific questions, especially causal ones. Further, we highlight guidelines for representation learning methods to help answer causal questions in the sciences.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/95749"} +{"video_file": "ItzD2Cnu9y_39025970.mp4", "openreview_id": "ItzD2Cnu9y", "slideslive_id": 39025970, "venue": "nips2024", "title": "Randomized Sparse Matrix Compression for Large-Scale Constrained Optimization in Cancer Radiotherapy", "status": "Poster", "keywords": "Sparsification;Cancer Radiotherapy;Optimization;Randomization;Sketching", "tldr": "This paper presents a novel algorithm for generating a randomized sparse sketch of a large matrix, with novel application in solving the large-scale optimization problems arising in cancer radiotherapy.", "abstract": "Radiation therapy, treating over half of all cancer patients, involves using specialized machines to direct high-energy beams at tumors, aiming to damage cancer cells while minimizing harm to nearby healthy tissues. 
Customizing the shape and intensity of radiation beams for each patient leads to large-scale constrained optimization problems that must be solved within a tight clinical time frame. At the core of these challenges is a large matrix that is commonly sparsified for computational efficiency by neglecting small elements. Such a crude approximation can degrade the quality of treatment, potentially causing unnecessary radiation exposure to healthy tissues—this may lead to significant radiation-induced side effects—or delivering inadequate radiation to the tumor, which is crucial for effective tumor treatment. In this work, we demonstrate, for the first time, that randomized sketch tools can effectively sparsify this matrix without sacrificing treatment quality. We also develop a novel randomized sketch method with desirable theoretical guarantees that outperforms existing techniques in practical application. Beyond developing a novel randomized sketch method, this work emphasizes the potential of harnessing scientific computing tools, crucial in today's big data analysis, to tackle computationally intensive challenges in healthcare. The application of these tools could have a profound impact on the lives of numerous cancer patients. Code and sample data are available at https://github.com/PortPy-Project/CompressRTP", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/95748"} +{"video_file": "IwNTiNPxFt_39025793.mp4", "openreview_id": "IwNTiNPxFt", "slideslive_id": 39025793, "venue": "nips2024", "title": "Stable-Pose: Leveraging Transformers for Pose-Guided Text-to-Image Generation", "status": "Poster", "keywords": "Pose-guided text-to-image (T2I);Diffusion models;Stable Diffusion;vision Transformers", "tldr": "We present Stable-Pose, a novel adapter model that incorporates a coarse-to-fine attention masking strategy into a vision Transformer (ViT) for gaining accurate pose guidance in T2I diffusion models.", "abstract": "Controllable text-to-image (T2I) diffusion models have shown impressive performance in generating high-quality visual content through the incorporation of various conditions. Current methods, however, exhibit limited performance when guided by skeleton human poses, especially in complex pose conditions such as side or rear perspectives of human figures. To address this issue, we present Stable-Pose, a novel adapter model that introduces a coarse-to-fine attention masking strategy into a vision Transformer (ViT) to gain accurate pose guidance for T2I models. Stable-Pose is designed to adeptly handle pose conditions within pre-trained Stable Diffusion, providing a refined and efficient way of aligning pose representation during image synthesis. We leverage the query-key self-attention mechanism of ViTs to explore the interconnections among different anatomical parts in human pose skeletons. Masked pose images are used to smoothly refine the attention maps based on target pose-related features in a hierarchical manner, transitioning from coarse to fine levels. Additionally, our loss function is formulated to allocate increased emphasis to the pose region, thereby augmenting the model's precision in capturing intricate pose details. We assessed the performance of Stable-Pose across five public datasets under a wide range of indoor and outdoor human pose scenarios. Stable-Pose achieved an AP score of 57.1 in the LAION-Human dataset, marking around 13% improvement over the established technique ControlNet. 
The project link and code are available at https://github.com/ai-med/StablePose.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95747"} +{"video_file": "IxEhb4NCvy_39027107.mp4", "openreview_id": "IxEhb4NCvy", "slideslive_id": 39027107, "venue": "nips2024", "title": "SSDM: Scalable Speech Dysfluency Modeling", "status": "Poster", "keywords": "Speech Dysfluency;Disfluency;Stutter;Alignment;Articulatory;Scaling", "tldr": "A speech processing framework that supports language learning, speech therapy and disorder screening.", "abstract": "Speech dysfluency modeling is the core module for spoken language learning and speech therapy. However, there are three challenges. First, current state-of-the-art solutions \\cite{lian2023unconstrained-udm, lian-anumanchipalli-2024-towards-hudm} suffer from poor scalability. Second, there is a lack of a large-scale dysfluency corpus. Third, there is no effective learning framework. In this paper, we propose \\textit{SSDM: Scalable Speech Dysfluency Modeling}, which (1) adopts articulatory gestures as scalable forced alignment; (2) introduces a connectionist subsequence aligner (CSA) to achieve dysfluency alignment; (3) introduces a large-scale simulated dysfluency corpus called Libri-Dys; and (4) develops an end-to-end system by leveraging the power of large language models (LLMs). We expect SSDM to serve as a standard in the area of dysfluency modeling. Demo is available at \\url{https://berkeley-speech-group.github.io/SSDM/}.", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/95746"} +{"video_file": "IxRf7Q3s5e_39028072.mp4", "openreview_id": "IxRf7Q3s5e", "slideslive_id": 39028072, "venue": "nips2024", "title": "NeuralSolver: Learning Algorithms For Consistent and Efficient Extrapolation Across General Tasks", "status": "Poster", "keywords": "Deep learning;algorithm synthesis;recurrent networks;algorithmic reasoning;sequential decision making;extrapolation", "tldr": "A recurrent network that learns scalable algorithms that extrapolate on larger general tasks without fine-tuning.", "abstract": "We contribute NeuralSolver, a novel recurrent solver that can efficiently and consistently extrapolate, i.e., learn algorithms from smaller problems (in terms of observation size) and execute those algorithms in large problems. Contrary to previous recurrent solvers, NeuralSolver can be naturally applied in both same-size problems, where the input and output sizes are the same, and in different-size problems, where the size of the input and output differ. To allow for this versatility, we design NeuralSolver with three main components: a recurrent module, that iteratively processes input information at different scales, a processing module, responsible for aggregating the previously processed information, and a curriculum-based training scheme, that improves the extrapolation performance of the method. 
To evaluate our method, we introduce a set of novel different-size tasks and show that NeuralSolver consistently outperforms the prior state-of-the-art recurrent solvers in extrapolating to larger problems, considering smaller training problems and requiring fewer parameters than other approaches.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95745"} +{"video_file": "J3w0AXtEhp_39027117.mp4", "openreview_id": "J3w0AXtEhp", "slideslive_id": 39027117, "venue": "nips2024", "title": "Uniform Last-Iterate Guarantee for Bandits and Reinforcement Learning", "status": "Poster", "keywords": "uniform last-iterate;bandits;reinforcement learning", "tldr": "We propose a new metric, called uniform last-iterate and show it is achievable for multi-armed bandits and linear bandits, and tabular Markov decision processes", "abstract": "Existing metrics for reinforcement learning (RL) such as regret, PAC bounds, or uniform-PAC (Dann et al., 2017), typically evaluate the cumulative performance, while allowing the play of an arbitrarily bad policy at any finite time t. Such a behavior can be highly detrimental in high-stakes applications. This paper introduces a stronger metric, uniform last-iterate (ULI) guarantee, capturing both cumulative and instantaneous performance of RL algorithms. Specifically, ULI characterizes the instantaneous performance since it ensures that the per-round suboptimality of the played policy is bounded by a function, monotonically decreasing w.r.t. (large) round t, preventing revisits to bad policies when sufficient samples are available. We demonstrate that a near-optimal ULI guarantee directly implies near-optimal cumulative performance across aforementioned metrics, but not the other way around. To examine the achievability of ULI, we first provide two positive results for bandit problems with finite arms, showing that some elimination-based algorithms and high-probability adversarial algorithms with stronger analysis or additional designs, can attain near-optimal ULI guarantees. We also provide a negative result, indicating that optimistic algorithms cannot achieve a near-optimal ULI guarantee. Furthermore, we propose an efficient algorithm for linear bandits with infinitely many arms, which achieves the ULI guarantee, given access to an optimization oracle. Finally, we propose an algorithm that achieves a near-optimal ULI guarantee for the online reinforcement learning setting.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95739"} +{"video_file": "J6zHcScAo0_39026728.mp4", "openreview_id": "J6zHcScAo0", "slideslive_id": 39026728, "venue": "nips2024", "title": "Transcoders find interpretable LLM feature circuits", "status": "Poster", "keywords": "mechanistic interpretability;transcoders;sparse autoencoders;circuit analysis", "tldr": "Modifying sparse autoencoders to approximate MLP behavior enables clean, fine-grained, interpretable circuit analysis for transformers", "abstract": "A key goal in mechanistic interpretability is circuit analysis: finding sparse subgraphs of models corresponding to specific behaviors or capabilities. However, MLP sublayers make fine-grained circuit analysis on transformer-based language models difficult. In particular, interpretable features—such as those found by sparse autoencoders (SAEs)—are typically linear combinations of extremely many neurons, each with its own nonlinearity to account for. 
Circuit analysis in this setting thus either yields intractably large circuits or fails to disentangle local and global behavior. To address this we explore transcoders, which seek to faithfully approximate a densely activating MLP layer with a wider, sparsely-activating MLP layer. We introduce a novel method for using transcoders to perform weights-based circuit analysis through MLP sublayers. The resulting circuits neatly factorize into input-dependent and input-invariant terms. We then successfully train transcoders on language models with 120M, 410M, and 1.4B parameters, and find them to perform at least on par with SAEs in terms of sparsity, faithfulness, and human-interpretability. Finally, we apply transcoders to reverse-engineer unknown circuits in the model, and we obtain novel insights regarding the \"greater-than circuit\" in GPT2-small. Our results suggest that transcoders can prove effective in decomposing model computations involving MLPs into interpretable circuits. Code is available at https://github.com/jacobdunefsky/transcoder_circuits/", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/95736"} +{"video_file": "JAhNsZ9dvG_39027731.mp4", "openreview_id": "JAhNsZ9dvG", "slideslive_id": 39027731, "venue": "nips2024", "title": "SpecExec: Massively Parallel Speculative Decoding For Interactive LLM Inference on Consumer Devices", "status": "Poster", "keywords": "speculative decoding;offloading;large language models;inference", "tldr": "We propose a speculative decoding algorithm that scales with large token budgets and allows running 50B+ models on a single GPU up to 15x faster than standard offloading", "abstract": "As large language models gain widespread adoption, running them efficiently becomes a crucial task. Recent works on LLM inference use speculative decoding to achieve extreme speedups. However, most of these works implicitly design their algorithms for high-end datacenter hardware. In this work, we ask the opposite question: how fast can we run LLMs on consumer machines? Consumer GPUs can no longer fit the largest available models and must offload them to RAM or SSD. With parameter offloading, hundreds or thousands of tokens can be processed in batches within the same time as just one token, making it a natural fit for speculative decoding. We propose SpecExec (Speculative Execution), a simple parallel decoding method that can generate up to 20 tokens per target model iteration for popular LLM families. SpecExec takes the most probable continuations from the draft model to build a \"cache\" tree for the target model, which then gets validated in a single pass. Using SpecExec, we demonstrate inference of 50B+ parameter LLMs on consumer GPUs with RAM offloading at 4--6 tokens per second with 4-bit quantization or 2--3 tokens per second with 16-bit weights. 
Our code is available at https://github.com/yandex-research/specexec.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95733"} +{"video_file": "JCyBN5syv3_39026871.mp4", "openreview_id": "JCyBN5syv3", "slideslive_id": 39026871, "venue": "nips2024", "title": "SimGen: Simulator-conditioned Driving Scene Generation", "status": "Poster", "keywords": "Autonomous Driving;Generative Models;Simulators", "tldr": "This paper introduces a simulator-conditioned diffusion model, which follows the layout guidance from the simulator and cues of the rich text prompts to generate images of diverse driving scenes.", "abstract": "Controllable synthetic data generation can substantially lower the annotation cost of training data. Prior works use diffusion models to generate driving images conditioned on the 3D object layout. However, those models are trained on small-scale datasets like nuScenes, which lack appearance and layout diversity. Moreover, overfitting often happens, where the trained models can only generate images based on the layout data from the validation set of the same dataset. In this work, we introduce a simulator-conditioned scene generation framework called SimGen that can learn to generate diverse driving scenes by mixing data from the simulator and the real world. It uses a novel cascade diffusion pipeline to address challenging sim-to-real gaps and multi-condition conflicts. A driving video dataset DIVA is collected to enhance the generative diversity of SimGen, which contains over 147.5 hours of real-world driving videos from 73 locations worldwide and simulated driving data from the MetaDrive simulator. SimGen achieves superior generation quality and diversity while preserving controllability based on the text prompt and the layout pulled from a simulator. We further demonstrate the improvements brought by SimGen for synthetic data augmentation on the BEV detection and segmentation task and showcase its capability in safety-critical data generation.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95730"} +{"video_file": "JEKXTLjEIq_39026307.mp4", "openreview_id": "JEKXTLjEIq", "slideslive_id": 39026307, "venue": "nips2024", "title": "Binary Search with Distributional Predictions", "status": "Poster", "keywords": "learning-augmented algorithms;algorithms with predictions;distribution predictions", "tldr": "This paper extends the algorithms with predictions model to the case where predictions are distributions for the problem of searching in a sorted array.", "abstract": "Algorithms with (machine-learned) predictions is a powerful framework for combining traditional worst-case algorithms with modern machine learning. However, the vast majority of work in this space assumes that the prediction itself is non-probabilistic, even if it is generated by some stochastic process (such as a machine learning system). This is a poor fit for modern ML, particularly modern neural networks, which naturally generate a distribution. We initiate the study of algorithms with distributional predictions, where the prediction itself is a distribution. We focus on one of the simplest yet fundamental settings: binary search (or searching a sorted array).\nThis setting has one of the simplest algorithms with a point prediction, but what happens if the prediction is a distribution? 
We show that this is a richer setting: there are simple distributions where using the classical prediction-based algorithm with any single prediction does poorly.\nMotivated by this, as our main result, we give an algorithm with query complexity O(H(p) + log \u03b7), where H(p) is the entropy of the true distribution p and \u03b7 is the earth mover's distance between p and the predicted distribution p\u0302. This also yields the first distributionally-robust algorithm for the classical problem of computing an optimal binary search tree given a distribution over target keys. We complement this with a lower bound showing that this query complexity is essentially optimal (up to constants), and experiments validating the practical usefulness of our algorithm.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95727"} +{"video_file": "JInTfcxH3Q_39028339.mp4", "openreview_id": "JInTfcxH3Q", "slideslive_id": 39028339, "venue": "nips2024", "title": "PowerPM: Foundation Model for Power Systems", "status": "Poster", "keywords": "time series pre-training;power systems;foundation model", "tldr": "PowerPM is a foundation model for power systems.", "abstract": "The proliferation of abundant electricity time series (ETS) data presents numerous opportunities for various applications within power systems, including demand-side management, grid stability, and consumer behavior analysis. Deep learning models have advanced ETS modeling by effectively capturing sequence dependence. However, learning a generic representation of ETS data for various applications is challenging due to the inherently complex hierarchical structure of ETS data. Moreover, ETS data exhibits intricate temporal dependencies and is susceptible to the influence of exogenous variables. Furthermore, different instances exhibit diverse electricity consumption behavior. In this paper, we propose a foundation model PowerPM for ETS data, providing a large-scale, off-the-shelf model for power systems. PowerPM consists of a temporal encoder and a hierarchical encoder. The temporal encoder captures temporal dependencies within ETS data, taking into account exogenous variables. The hierarchical encoder models correlations between different levels of hierarchy. Furthermore, PowerPM leverages a novel self-supervised pre-training framework consisting of masked ETS modeling and dual-view contrastive learning. This framework enables PowerPM to capture temporal dependencies within ETS windows and be aware of discrepancies across ETS windows, providing two different perspectives to learn generic representations. Our experiments span five real-world scenario datasets, including both private and public data. Through pre-training on massive ETS data, PowerPM achieves SOTA performance on diverse downstream tasks within the private dataset. Notably, when transferred to public datasets, PowerPM retains its edge, showcasing its remarkable generalization ability across various tasks and domains. 
Moreover, ablation studies and few-shot experiments further substantiate the effectiveness of our model.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/95723"} +{"video_file": "JJGfCvjpTV_39028574.mp4", "openreview_id": "JJGfCvjpTV", "slideslive_id": 39028574, "venue": "nips2024", "title": "Hamiltonian Score Matching and Generative Flows", "status": "Poster", "keywords": "Hamiltonian dynamics;score matching;generative models;diffusion models;flow matching;Hamiltonian Monte Carlo", "tldr": "Novel score matching and flow-based generative models are introduced by learning velocity predictors of Hamiltonian dynamics.", "abstract": "Classical Hamiltonian mechanics has been widely used in machine learning in the form of Hamiltonian Monte Carlo for applications with predetermined force fields. In this paper, we explore the potential of deliberately designing force fields for Hamiltonian systems, introducing Hamiltonian velocity predictors (HVPs) as a core tool for constructing energy-based and generative models. We present two innovations: Hamiltonian Score Matching (HSM), which utilizes score functions to augment data by simulating Hamiltonian trajectories, and Hamiltonian Generative Flows (HGFs), a novel generative model that encompasses diffusion models and OT-flow matching as HGFs with zero force fields. We showcase the extended design space of force fields by introducing Oscillation HGFs, a generative model inspired by harmonic oscillators. Our experiments demonstrate that HSM and HGFs rival leading score-matching and generative modeling techniques. Overall, our work systematically elucidates the synergy between Hamiltonian dynamics, force fields, and generative models, thereby opening new avenues for applications of machine learning in physical sciences and dynamical systems.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95722"} +{"video_file": "JK728xy8G7_39028002.mp4", "openreview_id": "JK728xy8G7", "slideslive_id": 39028002, "venue": "nips2024", "title": "Smoothed Energy Guidance: Guiding Diffusion Models with Reduced Energy Curvature of Attention", "status": "Poster", "keywords": "Diffusion Models;Diffusion Guidance", "tldr": "This paper proposes Smoothed Energy Guidance (SEG), a novel training- and label-free diffusion guidance method that leverages the energy-based perspective of the self-attention mechanism to enhance image generation.", "abstract": "Conditional diffusion models have shown remarkable success in visual content generation, producing high-quality samples across various domains, largely due to classifier-free guidance (CFG). Recent attempts to extend guidance to unconditional models have relied on heuristic techniques, resulting in suboptimal generation quality and unintended effects. In this work, we propose Smoothed Energy Guidance (SEG), a novel training- and condition-free approach that leverages the energy-based perspective of the self-attention mechanism to enhance image generation. By defining the energy of self-attention, we introduce a method to reduce the curvature of the energy landscape of attention and use the output as the unconditional prediction. Practically, we control the curvature of the energy landscape by adjusting the Gaussian kernel parameter while keeping the guidance scale parameter fixed. 
Additionally, we present a query blurring method that is equivalent to blurring the entire attention weights without incurring quadratic complexity in the number of tokens. In our experiments, SEG achieves a Pareto improvement in both quality and the reduction of side effects. The code is available at https://github.com/SusungHong/SEG-SDXL.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95721"} +{"video_file": "JKEIYQUSUc_39026592.mp4", "openreview_id": "JKEIYQUSUc", "slideslive_id": 39026592, "venue": "nips2024", "title": "SpatialRGPT: Grounded Spatial Reasoning in Vision-Language Models", "status": "Poster", "keywords": "Vision-Language Models;Spatial Reasoning", "tldr": "A powerful region-level VLM adept at 3D spatial reasoning.", "abstract": "Vision Language Models (VLMs) have demonstrated remarkable performance in 2D vision and language tasks. However, their ability to reason about spatial arrangements remains limited. In this work, we introduce Spatial Region GPT (SpatialRGPT) to enhance VLMs\u2019 spatial perception and reasoning capabilities. SpatialRGPT advances VLMs\u2019 spatial understanding through two key innovations: (i) a data curation pipeline that enables effective learning of regional representation from 3D scene graphs, and (ii) a flexible ``plugin'' module for integrating depth information into the visual encoder of existing VLMs. During inference, when provided with user-specified region proposals, SpatialRGPT can accurately perceive their relative directions and distances. Additionally, we propose SpatialRGBT-Bench, a benchmark with ground-truth 3D annotations encompassing indoor, outdoor, and simulated environments, for evaluating 3D spatial cognition in Vision-Language Models (VLMs). Our results demonstrate that SpatialRGPT significantly enhances performance in spatial reasoning tasks, both with and without local region prompts. The model also exhibits strong generalization capabilities, effectively reasoning about complex spatial relations and functioning as a region-aware dense reward annotator for robotic tasks. Code, dataset, and benchmark are released at https://www.anjiecheng.me/SpatialRGPT.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95720"} +{"video_file": "JL2eMCfDW8_39026968.mp4", "openreview_id": "JL2eMCfDW8", "slideslive_id": 39026968, "venue": "nips2024", "title": "Federated Learning over Connected Modes", "status": "Poster", "keywords": "Federated Learning;Linear Mode Connectivity", "tldr": "Floco addresses the challenges of statistical heterogeneity in cross-silo federated learning by leveraging linear mode connectivity.", "abstract": "Statistical heterogeneity in federated learning poses two major challenges: slow global training due to conflicting gradient signals, and the need of personalization for local distributions. In this work, we tackle both challenges by leveraging recent advances in \\emph{linear mode connectivity} --- identifying a linearly connected low-loss region in the parameter space of neural networks, which we call solution simplex. We propose federated learning over connected modes (\\textsc{Floco}), where clients are assigned local subregions in this simplex based on their gradient signals, and together learn the shared global solution simplex. 
This allows personalization of the client models to fit their local distributions within the degrees of freedom in the solution simplex and homogenizes the update signals for the global simplex training. Our experiments show that \\textsc{Floco} accelerates the global training process, and significantly improves the local accuracy with minimal computational overhead in cross-silo federated learning settings.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95719"} +{"video_file": "JM0IQSliol_39026814.mp4", "openreview_id": "JM0IQSliol", "slideslive_id": 39026814, "venue": "nips2024", "title": "Shape analysis for time series", "status": "Poster", "keywords": "Machine learning for sciences;Machine learning for healthcare;Representation learning for time series;Shape analysis;LDDMM;Kernel methods", "tldr": "This paper introduces an unsupervised representation learning algorithm for time series tailored to biomedical inter-individual studies using tools from shape analysis.", "abstract": "Analyzing inter-individual variability of physiological functions is particularly appealing in medical and biological contexts to describe or quantify health conditions. Such analysis can be done by comparing individuals to a reference one with time series as biomedical data. This paper introduces an unsupervised representation learning (URL) algorithm for time series tailored to inter-individual studies. The idea is to represent time series as deformations of a reference time series. The deformations are diffeomorphisms parameterized and learned by our method called TS-LDDMM. Once the deformations and the reference time series are learned, the vector representations of individual time series are given by the parametrization of their corresponding deformation. At the crossroads between URL for time series and shape analysis, the proposed algorithm handles irregularly sampled multivariate time series of variable lengths and provides shape-based representations of temporal data. In this work, we establish a representation theorem for the graph of a time series and derive its consequences on the LDDMM framework. We showcase the advantages of our representation compared to existing methods using synthetic data and real-world examples motivated by biomedical applications.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/95718"} +{"video_file": "JNl6h3U3oW_39027741.mp4", "openreview_id": "JNl6h3U3oW", "slideslive_id": 39027741, "venue": "nips2024", "title": "ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization", "status": "Poster", "keywords": "Large Language Models (LLMs); Efficient LLMs; Multiplication-less networks; Hardware acceleration", "tldr": "We propose accelerating pretrained LLMs through a \"post-training\" shift and add reparameterization, towards efficient multiplication-less LLMs, dubbed ShiftAddLLM.", "abstract": "Large language models (LLMs) have shown impressive performance on language tasks but face challenges when deployed on resource-constrained devices due to their extensive parameters and reliance on dense multiplications, resulting in high memory demands and latency bottlenecks. Shift-and-add reparameterization offers a promising solution by replacing costly multiplications with hardware-friendly primitives in both the attention and multi-layer perceptron (MLP) layers of an LLM. 
However, current reparameterization techniques require training from scratch or full parameter fine-tuning to restore accuracy, which is resource-intensive for LLMs. To address this, we propose accelerating pretrained LLMs through post-training shift-and-add reparameterization, creating efficient multiplication-free models, dubbed ShiftAddLLM. Specifically, we quantize each weight matrix into binary matrices paired with group-wise scaling factors. The associated multiplications are reparameterized into (1) shifts between activations and scaling factors and (2) queries and adds according to the binary matrices. To reduce accuracy loss, we present a multi-objective optimization method to minimize both weight and output activation reparameterization errors. Additionally, based on varying sensitivity across layers to reparameterization, we develop an automated bit allocation strategy to further reduce memory usage and latency. Experiments on five LLM families and eight tasks consistently validate the effectiveness of ShiftAddLLM, achieving average perplexity reductions of 5.6 and 22.7 points at comparable or lower latency compared to the most competitive quantized LLMs at 3- and 2-bit precision, respectively, and more than 80% memory and energy reductions over the original LLMs. Codes and models are available at https://github.com/GATECH-EIC/ShiftAddLLM.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95715"} +{"video_file": "JXKbf1d4ib_39027392.mp4", "openreview_id": "JXKbf1d4ib", "slideslive_id": 39027392, "venue": "nips2024", "title": "Near-Minimax-Optimal Distributional Reinforcement Learning with a Generative Model", "status": "Poster", "keywords": "Reinforcement learning;Distributional reinforcement learning;dynamic programming;TD learning;sample complexity;theory", "tldr": "Roughly speaking, we show that distributional reinforcement learning is of comparable statistical difficulty to learning a value function, in the generative model setting.", "abstract": "We propose a new algorithm for model-based distributional reinforcement learning (RL), and prove that it is minimax-optimal for approximating return distributions in the generative model regime (up to logarithmic factors), the first result of this kind for any distributional RL algorithm. Our analysis also provides new theoretical perspectives on categorical approaches to distributional RL, as well as introducing a new distributional Bellman equation, the stochastic categorical CDF Bellman equation, which we expect to be of independent interest. Finally, we provide an experimental study comparing a variety of model-based distributional RL algorithms, with several key takeaways for practitioners.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95709"} +{"video_file": "JZHFRLoqDq_39027761.mp4", "openreview_id": "JZHFRLoqDq", "slideslive_id": 39027761, "venue": "nips2024", "title": "Energy-Guided Continuous Entropic Barycenter Estimation for General Costs", "status": "Spotlight", "keywords": "energy-based model;generative model;optimal transport;entropic optimal transport barycenters;general optimal transport cost", "tldr": "We propose a new energy-based method to compute entropic optimal transport barycenters with general cost functions.", "abstract": "Optimal transport (OT) barycenters are a mathematically grounded way of averaging probability distributions while capturing their geometric properties. 
In short, the barycenter task is to take the average of a collection of probability distributions w.r.t. given OT discrepancies. We propose a novel algorithm for approximating the continuous Entropic OT (EOT) barycenter for arbitrary OT cost functions. Our approach is built upon the dual reformulation of the EOT problem based on weak OT, which has recently gained the attention of the ML community. Beyond its novelty, our method enjoys several advantageous properties: (i) we establish quality bounds for the recovered solution; (ii) this approach seamlessly interconnects with the Energy-Based Models (EBMs) learning procedure enabling the use of well-tuned algorithms for the problem of interest; (iii) it provides an intuitive optimization scheme avoiding min-max, reinforce and other intricate technical tricks. For validation, we consider several low-dimensional scenarios and image-space setups, including non-Euclidean cost functions. Furthermore, we investigate the practical task of learning the barycenter on an image manifold generated by a pretrained generative model, opening up new directions for real-world applications. Our code is available at https://github.com/justkolesov/EnergyGuidedBarycenters.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95708"} +{"video_file": "JiQXsLvDls_39027089.mp4", "openreview_id": "JiQXsLvDls", "slideslive_id": 39027089, "venue": "nips2024", "title": "Mutual Information Estimation via Normalizing Flows", "status": "Poster", "keywords": "Normalizing flows;information theory;mutual information", "tldr": "Normalizing flows are used to allow for explicit mutual information estimation via closed-form expressions", "abstract": "We propose a novel approach to the problem of mutual information (MI) estimation via introducing a family of estimators based on normalizing flows. The estimator maps original data to the target distribution, for which MI is easier to estimate. We additionally explore the target distributions with known closed-form expressions for MI. Theoretical guarantees are provided to demonstrate that our approach yields MI estimates for the original data. Experiments with high-dimensional data are conducted to highlight the practical advantages of the proposed method.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95704"} +{"video_file": "JiRGxrqHh0_39026006.mp4", "openreview_id": "JiRGxrqHh0", "slideslive_id": 39026006, "venue": "nips2024", "title": "FACT or Fiction: Can Truthful Mechanisms Eliminate Federated Free Riding?", "status": "Poster", "keywords": "Federated Learning;Truthfulness;Free-Riding", "tldr": "We propose a FL mechanism that ensures agents do not free ride even when they are untruthful.", "abstract": "Standard federated learning (FL) approaches are vulnerable to the free-rider dilemma: participating agents can contribute little to nothing yet receive a well-trained aggregated model. While prior mechanisms attempt to solve the free-rider dilemma, none have addressed the issue of truthfulness. In practice, adversarial agents can provide false information to the server in order to cheat its way out of contributing to federated training. In an effort to make free-riding-averse federated mechanisms truthful, and consequently less prone to breaking down in practice, we propose FACT. 
FACT is the first federated mechanism that: (1) eliminates federated free riding by using a penalty system, (2) ensures agents provide truthful information by creating a competitive environment, and (3) encourages agent participation by offering better performance than training alone. Empirically, FACT avoids free-riding when agents are untruthful, and reduces agent loss by over 4x.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/95703"} +{"video_file": "Jkt42QYyEH_39025839.mp4", "openreview_id": "Jkt42QYyEH", "slideslive_id": 39025839, "venue": "nips2024", "title": "LiveScene: Language Embedding Interactive Radiance Fields for Physical Scene Control and Rendering", "status": "Poster", "keywords": "Interactive Scene Reconstruction; Controllable NeRF; Language Embedding; Dataset;", "tldr": "the first scene-level language-embedded interactive radiance field; interaction datasets;", "abstract": "This paper scales object-level reconstruction to complex scenes, advancing interactive scene reconstruction. We introduce two datasets, OmniSim and InterReal, featuring 28 scenes with multiple interactive objects. To tackle the challenge of inaccurate interactive motion recovery in complex scenes, we propose LiveScene, a scene-level language-embedded interactive radiance field that efficiently reconstructs and controls multiple objects. By decomposing the interactive scene into local deformable fields, LiveScene enables separate reconstruction of individual object motions, reducing memory consumption. Additionally, our interaction-aware language embedding localizes individual interactive objects, allowing for arbitrary control using natural language. Our approach demonstrates significant superiority in novel view synthesis, interactive scene control, and language grounding performance through extensive experiments. Project page: https://livescenes.github.io.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95700"} +{"video_file": "JlWn80mTJi_39025610.mp4", "openreview_id": "JlWn80mTJi", "slideslive_id": 39025610, "venue": "nips2024", "title": "The Implicit Bias of Gradient Descent on Separable Multiclass Data", "status": "Poster", "keywords": "gradient descent;multiclass classification;hard-margin SVM;implicit bias", "tldr": "We prove implicit bias of gradient descent for linearly separable multiclass problems.", "abstract": "Implicit bias describes the phenomenon where optimization-based training algorithms, without explicit regularization, show a preference for simple estimators even when more complex estimators have equal objective values. Multiple works have developed the theory of implicit bias for binary classification under the assumption that the loss satisfies an exponential tail property. However, there is a noticeable gap in analysis for multiclass classification, with only a handful of results which themselves are restricted to the cross-entropy loss. In this work, we employ the framework of Permutation Equivariant and Relative Margin-based (PERM) losses [Wang and Scott, 2024] to introduce a multiclass extension of the exponential tail property. This class of losses includes not only cross-entropy but also other losses. Using this framework, we extend the implicit bias result of Soudry et al. [2018] to multiclass classification. 
Furthermore, our proof techniques closely mirror those of the binary case, thus illustrating the power of the PERM framework for bridging the binary-multiclass gap.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95699"} +{"video_file": "JpqEzPTuv6_39025832.mp4", "openreview_id": "JpqEzPTuv6", "slideslive_id": 39025832, "venue": "nips2024", "title": "What Makes Partial-Label Learning Algorithms Effective?", "status": "Poster", "keywords": "partial-label learning;algorithm design principles", "tldr": "We summarize the success of PLL so far into some minimal algorithm design principles.", "abstract": "A partial label (PL) specifies a set of candidate labels for an instance and partial-label learning (PLL) trains multi-class classifiers with PLs. Recently, many methods that incorporate techniques from other domains have shown strong potential. The expectation that stronger techniques would enhance performance has resulted in prominent PLL methods becoming not only highly complicated but also quite different from one another, making it challenging to choose the best direction for future algorithm design. While it is exciting to see higher performance, this leaves open a fundamental question: what makes a PLL method effective? We present a comprehensive empirical analysis of this question and summarize the success of PLL so far into some minimal algorithm design principles. Our findings reveal that high accuracy on benchmark-simulated datasets with PLs can misleadingly amplify the perceived effectiveness of some general techniques, which may improve representation learning but have limited impact on addressing the inherent challenges of PLs. We further identify the common behavior among successful PLL methods as a progressive transition from uniform to one-hot pseudo-labels, highlighting the critical role of mini-batch PL purification in achieving top performance. Based on our findings, we introduce a minimal working algorithm that is surprisingly simple yet effective, and propose an improved strategy to implement the design principles, suggesting a promising direction for improvements in PLL.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95697"} +{"video_file": "JrIPBXWiS8_39024629.mp4", "openreview_id": "JrIPBXWiS8", "slideslive_id": 39024629, "venue": "nips2024", "title": "Resfusion: Denoising Diffusion Probabilistic Models for Image Restoration Based on Prior Residual Noise", "status": "Poster", "keywords": "Diffusion based models; Image restoration; Shadow removal; Low-light enhancement; Deraining", "tldr": "We propose Resfusion, a general framework that incorporates the residual term into the diffusion forward process, starting the reverse process directly from the noisy degraded images.", "abstract": "Recently, research on denoising diffusion models has expanded its application to the field of image restoration. Traditional diffusion-based image restoration methods utilize degraded images as conditional input to effectively guide the reverse generation process, without modifying the original denoising diffusion process. However, since the degraded images already include low-frequency information, starting from Gaussian white noise will result in increased sampling steps. We propose Resfusion, a general framework that incorporates the residual term into the diffusion forward process, starting the reverse process directly from the noisy degraded images. The form of our inference process is consistent with the DDPM. 
We introduce a weighted residual noise, named resnoise, as the prediction target and explicitly provide the quantitative relationship between the residual term and the noise term in resnoise. By leveraging a smooth equivalence transformation, Resfusion determines the optimal acceleration step and maintains the integrity of existing noise schedules, unifying the training and inference processes. The experimental results demonstrate that Resfusion exhibits competitive performance on the ISTD, LOL, and Raindrop datasets with only five sampling steps. Furthermore, Resfusion can be easily applied to image generation and demonstrates strong versatility. Our code and model are available at https://github.com/nkicsl/Resfusion.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95696"} +{"video_file": "JzcIKnnOpJ_39025132.mp4", "openreview_id": "JzcIKnnOpJ", "slideslive_id": 39025132, "venue": "nips2024", "title": "Rejection via Learning Density Ratios", "status": "Poster", "keywords": "Rejection;Distributional Robust Optimization;Variational Inference;Density Ratio", "tldr": "We provide an alternative perspective for classification with rejection by learning density ratios.", "abstract": "Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions. The predominant approach is to alter the supervised learning pipeline by augmenting typical loss functions, letting model rejection incur a lower loss than an incorrect prediction. Instead, we propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance. This can be formalized via the optimization of a loss's risk with a \u03d5-divergence regularization term. Through this idealized distribution, a rejection decision can be made by utilizing the density ratio between this distribution and the data distribution. We focus on the setting where our \u03d5-divergences are specified by the family of \u03b1-divergences. Our framework is tested empirically over clean and noisy datasets.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95685"} +{"video_file": "Jzog9gvOf6_39027201.mp4", "openreview_id": "Jzog9gvOf6", "slideslive_id": 39027201, "venue": "nips2024", "title": "Progressive Exploration-Conformal Learning for Sparsely Annotated Object Detection in Aerial Images", "status": "Poster", "keywords": "Aerial object detection; Sparse annotation; Conformal exploratory learning", "tldr": "We address the sparsely annotated aerial object detection task with a Progressive Exploration-Conformal Learning (PECL) framework.", "abstract": "The ability to detect aerial objects with limited annotation is pivotal to the development of real-world aerial intelligence systems. In this work, we focus on a demanding but practical task: sparsely annotated object detection (SAOD) in aerial images, which encompasses a wider variety of aerial scenes with the same number of annotated objects. Although most existing SAOD methods rely on fixed thresholding to filter pseudo-labels for enhancing detector performance, adapting to aerial objects proves challenging due to the imbalanced probabilities/confidences associated with predicted aerial objects. 
To address this problem, we propose a novel Progressive Exploration-Conformal Learning (PECL) framework for the SAOD task, which can adaptively perform the selection of high-quality pseudo-labels in aerial images. Specifically, the pseudo-label exploration can be formulated as a decision-making paradigm by adopting a conformal pseudo-label explorer and a multi-clue selection evaluator. The conformal pseudo-label explorer learns an adaptive policy by maximizing the cumulative reward, which can decide how to select these high-quality candidates by leveraging their essential characteristics and inter-instance contextual information. The multi-clue selection evaluator is designed to evaluate the explorer-guided pseudo-label selections by providing instructive feedback for policy optimization. Finally, the explored pseudo-labels can be adopted to guide the optimization of the aerial object detector in a closed-loop progressive fashion. Comprehensive evaluations on two public datasets demonstrate the superiority of our PECL when compared with other state-of-the-art methods in the sparsely annotated aerial object detection task.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95684"} +{"video_file": "K3h2kZFz8h_39026110.mp4", "openreview_id": "K3h2kZFz8h", "slideslive_id": 39026110, "venue": "nips2024", "title": "An Analytical Study of Utility Functions in Multi-Objective Reinforcement Learning", "status": "Poster", "keywords": "reinforcement learning;multi-objective decision making; multi-objective reinforcement learning;learning theory;markov decision processes", "tldr": "In Multi-Objective Reinforcement Learning, we offer a characterisation of the types of preferences that can be expressed as utility functions, and the utility functions for which an associated optimal policy exists.", "abstract": "Multi-objective reinforcement learning (MORL) is an excellent framework for multi-objective sequential decision-making. MORL employs a utility function to aggregate multiple objectives into one that expresses a user's preferences. However, MORL still misses two crucial theoretical analyses of the properties of utility functions: (1) a characterisation of the utility functions for which an associated optimal policy exists, and (2) a characterisation of the types of preferences that can be expressed as utility functions. As a result, we formally characterise the families of preferences and utility functions that MORL should focus on: those for which an optimal policy is guaranteed to exist. We expect our theoretical results to promote the development of novel MORL algorithms that exploit our theoretical findings.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95682"} +{"video_file": "K5PA3SK2jB_39025162.mp4", "openreview_id": "K5PA3SK2jB", "slideslive_id": 39025162, "venue": "nips2024", "title": "ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Field", "status": "Poster", "keywords": "NeRF;Reconstruction;Stochastic Process;Sparse View;Novel View Synthesis;Uncertainty Estimation", "tldr": "We model provenance -- the locations where each point is likely visible -- of a NeRF using a stochastic field.", "abstract": "Neural radiance fields (NeRFs) have gained popularity with multiple works showing promising results across various applications. 
However, to the best of our knowledge, existing works do not explicitly model the distribution of training camera poses, or consequently the triangulation quality, a key factor affecting reconstruction quality dating back to classical vision literature. We close this gap with ProvNeRF, an approach that models the provenance for each point -- i.e., the locations where it is likely visible -- of NeRFs as a stochastic field. We achieve this by extending implicit maximum likelihood estimation (IMLE) to functional space with an optimizable objective. We show that modeling per-point provenance during the NeRF optimization enriches the model with information on triangulation leading to improvements in novel view synthesis and uncertainty estimation under the challenging sparse, unconstrained view setting against competitive baselines. The code will be available at https://github.com/georgeNakayama/ProvNeRF.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95680"} +{"video_file": "KAAUvi4kpb_39027996.mp4", "openreview_id": "KAAUvi4kpb", "slideslive_id": 39027996, "venue": "nips2024", "title": "BrainBits: How Much of the Brain are Generative Reconstruction Methods Using?", "status": "Poster", "keywords": "diffusion models;fMRI;computational neuroscience;generative AI", "tldr": "Image stimuli can be reconstructed from the brain without actually using a lot of the brain signal", "abstract": "When evaluating stimuli reconstruction results it is tempting to assume that higher fidelity text and image generation is due to an improved understanding of the brain or more powerful signal extraction from neural recordings. However, in practice, new reconstruction methods could improve performance for at least three other reasons: learning more about the distribution of stimuli, becoming better at reconstructing text or images in general, or exploiting weaknesses in current image and/or text evaluation metrics. Here we disentangle how much of the reconstruction is due to these other factors vs. productively using the neural recordings. We introduce BrainBits, a method that uses a bottleneck to quantify the amount of signal extracted from neural recordings that is actually necessary to reproduce a method's reconstruction fidelity. We find that it takes surprisingly little information from the brain to produce reconstructions with high fidelity. In these cases, it is clear that the priors of the methods' generative models are so powerful that the outputs they produce extrapolate far beyond the neural signal they decode. Given that reconstructing stimuli can be improved independently by either improving signal extraction from the brain or by building more powerful generative models, improving the latter may fool us into thinking we are improving the former. 
We propose that methods should report a method-specific random baseline, a reconstruction ceiling, and a curve of performance as a function of bottleneck size, with the ultimate goal of using more of the neural recordings.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/95678"} +{"video_file": "KEe4IUp20I_39025023.mp4", "openreview_id": "KEe4IUp20I", "slideslive_id": 39025023, "venue": "nips2024", "title": "SpaceByte: Towards Deleting Tokenization from Large Language Modeling", "status": "Poster", "keywords": "byte level language model;model architecture;tokenization;efficient pretraining", "tldr": "SpaceByte is a byte-level autoregressive language model that roughly matches the performance of tokenized Transformers in compute-controlled experiments.", "abstract": "Tokenization is widely used in large language models because it significantly improves performance. However, tokenization imposes several disadvantages, such as performance biases, increased adversarial vulnerability, decreased character-level modeling performance, and increased modeling complexity. To address these disadvantages without sacrificing performance, we propose SpaceByte, a novel byte-level decoder architecture that closes the performance gap between byte-level and subword autoregressive language modeling. SpaceByte consists of a byte-level Transformer model, but with extra larger transformer blocks inserted in the middle of the layers. We find that performance is significantly improved by applying these larger blocks only after certain bytes, such as space characters, which typically denote word boundaries. Our experiments show that for a fixed training and inference compute budget, SpaceByte outperforms other byte-level architectures and roughly matches the performance of tokenized Transformer architectures.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95677"} +{"video_file": "KHX0dKXdqH_39026959.mp4", "openreview_id": "KHX0dKXdqH", "slideslive_id": 39026959, "venue": "nips2024", "title": "Causal Imitation for Markov Decision Processes: a Partial Identification Approach", "status": "Poster", "keywords": "Causal Inference;Imitation Learning", "tldr": "This paper presents novel causal imitation learning algorithms that adapt to confounded expert demonstrations in MDPs using partial identification techniques, demonstrating their effectiveness theoretically and empirically across various scenarios.", "abstract": "Imitation learning enables an agent to learn from expert demonstrations when the performance measure is unknown and the reward signal is not specified. Standard imitation methods do not generally apply when the learner and the expert's sensory capabilities mismatch and demonstrations are contaminated with unobserved confounding bias. To address these challenges, recent advancements in causal imitation learning have been pursued. However, these methods often require access to underlying causal structures that might not always be available, posing practical challenges. In this paper, we investigate robust imitation learning within the framework of canonical Markov Decision Processes (MDPs) using partial identification, allowing the agent to achieve expert performance even when the system dynamics are not uniquely determined from the confounded expert demonstrations. 
Specifically, first, we theoretically demonstrate that when unobserved confounders (UCs) exist in an MDP, the learner is generally unable to imitate expert performance. We then explore imitation learning in partially identifiable settings --- either the transition distribution or the reward function is non-identifiable from the available data and knowledge. Augmenting the celebrated GAIL method (Ho & Ermon, 2016), our analysis leads to two novel causal imitation algorithms that can obtain effective policies guaranteed to achieve expert performance.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/95675"} +{"video_file": "KHcB1drMRX_39024969.mp4", "openreview_id": "KHcB1drMRX", "slideslive_id": 39024969, "venue": "nips2024", "title": "Accelerating Pre-training of Multimodal LLMs via Chain-of-Sight", "status": "Poster", "keywords": "Chain-of-Sight;MLLMs;pre-training efficiency;3.7x speedup", "tldr": "Through multi-scale visual prompts and post-pretrain token scaling, Chain-of-Sight achieves a 3.7x pre-training speedup without sacrificing performance.", "abstract": "This paper introduces Chain-of-Sight, a vision-language bridge module that accelerates the pre-training of Multimodal Large Language Models (MLLMs). Our approach employs a sequence of visual resamplers that capture visual details at various spatial scales. This architecture not only leverages global and local visual contexts effectively, but also facilitates the flexible extension of visual tokens through a compound token scaling strategy, allowing up to a 16x increase in the token count post pre-training. Consequently, Chain-of-Sight requires significantly fewer visual tokens in the pre-training phase compared to the fine-tuning phase. This intentional reduction of visual tokens during pre-training notably accelerates the pre-training process, cutting down the wall-clock training time by \u223c73%. Empirical results on a series of vision-language benchmarks reveal that the pre-training acceleration through Chain-of-Sight is achieved without sacrificing performance, matching or surpassing the standard pipeline of utilizing all visual tokens throughout the entire training process. Further scaling up the number of visual tokens for pre-training leads to stronger performance, competitive with existing approaches on a series of benchmarks.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95674"} +{"video_file": "KI5TANE02e_39028637.mp4", "openreview_id": "KI5TANE02e", "slideslive_id": 39028637, "venue": "nips2024", "title": "Score-based generative models are provably robust: an uncertainty quantification perspective", "status": "Poster", "keywords": "Score-based generative modeling;uncertainty quantification;Hamilton-Jacobi equations;generalization", "tldr": "Using techniques from analysis of partial differential equations and Hamilton-Jacobi equations, we show score-based generative models are robust, and also yield generalization bounds.", "abstract": "Through an uncertainty quantification (UQ) perspective, we show that score-based generative models (SGMs) are provably robust to the multiple sources of error in practical implementation. Our primary tool is the Wasserstein uncertainty propagation (WUP) theorem, a model-form UQ bound that describes how the L2 error from learning the score function propagates to a Wasserstein-1 (d_1) ball around the true data distribution under the evolution of the Fokker-Planck equation. 
We show how errors due to (a) finite sample approximation, (b) early stopping, (c) score-matching objective choice, (d) score function parametrization expressiveness, and (e) reference distribution choice impact the quality of the generative model in terms of a d_1 bound of computable quantities. The WUP theorem relies on Bernstein estimates for Hamilton-Jacobi-Bellman partial differential equations (PDE) and the regularizing properties of diffusion processes. Specifically, PDE regularity theory shows that stochasticity is the key mechanism ensuring SGM algorithms are provably robust. The WUP theorem applies to integral probability metrics beyond d_1, such as the total variation distance and the maximum mean discrepancy. Sample complexity and generalization bounds in d_1 follow directly from the WUP theorem. Our approach requires minimal assumptions, is agnostic to the manifold hypothesis, and avoids absolute continuity assumptions for the target distribution. Additionally, our results clarify the trade-offs among multiple error sources in SGMs.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95673"} +{"video_file": "KKrj1vCQaG_39027963.mp4", "openreview_id": "KKrj1vCQaG", "slideslive_id": 39027963, "venue": "nips2024", "title": "RectifID: Personalizing Rectified Flow with Anchored Classifier Guidance", "status": "Poster", "keywords": "Personalized Image Generation;Rectified Flow;Classifier Guidance", "tldr": "A training-free approach to personalizing rectified flow with classifier guidance", "abstract": "Customizing diffusion models to generate identity-preserving images from user-provided reference images is an intriguing new problem. The prevalent approaches typically require training on extensive domain-specific images to achieve identity preservation, which lacks flexibility across different use cases. To address this issue, we exploit classifier guidance, a training-free technique that steers diffusion models using an existing classifier, for personalized image generation. Our study shows that, based on a recent rectified flow framework, the major limitation of vanilla classifier guidance in requiring a special classifier can be resolved with a simple fixed-point solution, allowing flexible personalization with off-the-shelf image discriminators. Moreover, its solving procedure proves to be stable when anchored to a reference flow trajectory, with a convergence guarantee. The derived method is implemented on rectified flow with different off-the-shelf image discriminators, delivering advantageous personalization results for human faces, live subjects, and certain objects. 
Code is available at https://github.com/feifeiobama/RectifID.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95671"} +{"video_file": "KLL70pTQ17_39027275.mp4", "openreview_id": "KLL70pTQ17", "slideslive_id": 39027275, "venue": "nips2024", "title": "Oracle-Efficient Reinforcement Learning for Max Value Ensembles", "status": "Poster", "keywords": "Reinforcement Learning Theory;Ensembling;Max-Following;Learning Theory", "tldr": "We provide an efficient algorithm to learn an approximate max-following policy using K constituent policies in large state spaces.", "abstract": "Reinforcement learning (RL) in large or infinite state spaces is notoriously challenging, both theoretically (where worst-case sample and computational complexities must scale with state space cardinality) and experimentally (where function approximation and policy gradient techniques often scale poorly and suffer from instability and high variance). One line of research attempting to address these difficulties makes the natural assumption that we are given a collection of base or constituent policies (possibly heuristic) upon which we would like to improve in a scalable manner. In this work we aim to compete with the max-following policy, which at each state follows the action of whichever constituent policy has the highest value. The max-following policy is always at least as good as the best constituent policy, and may be considerably better. Our main result is an efficient algorithm that learns to compete with the max-following policy, given only access to the constituent policies (but not their value functions). In contrast to prior work in similar settings, our theoretical results require only the minimal assumption of an ERM oracle for value function approximation for the constituent policies (and not the global optimal policy or the max-following policy itself) on samplable distributions. We illustrate our algorithm's experimental effectiveness and behavior on several robotic simulation testbeds.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95670"} +{"video_file": "KNrwaFEi1u_39028673.mp4", "openreview_id": "KNrwaFEi1u", "slideslive_id": 39028673, "venue": "nips2024", "title": "Multi-Object Hallucination in Vision Language Models", "status": "Poster", "keywords": "Large Vision Language Models;Object Hallucination;Visual Prompting", "tldr": "We study the problem of multi-object hallucination and introduce Recognition-based Object Probing Evaluation (ROPE), an automated evaluation protocol.", "abstract": "Large vision language models (LVLMs) often suffer from object hallucination, producing objects not present in the given images. While current benchmarks for object hallucination primarily concentrate on the presence of a single object class rather than individual entities, this work systematically investigates multi-object hallucination, examining how models misperceive (e.g., invent nonexistent objects or become distracted) when tasked with focusing on multiple objects simultaneously. We introduce Recognition-based Object Probing Evaluation (ROPE), an automated evaluation protocol that considers the distribution of object classes within a single image during testing and uses visual referring prompts to eliminate ambiguity. 
With comprehensive empirical studies and analysis of potential factors leading to multi-object hallucination, we found that (1) LVLMs suffer more hallucinations when focusing on multiple objects compared to a single object. (2) The tested object class distribution affects hallucination behaviors, indicating that LVLMs may follow shortcuts and spurious correlations. (3) Hallucinatory behaviors are influenced by data-specific factors, salience and frequency, and model intrinsic behaviors. We hope to enable LVLMs to recognize and reason about multiple objects that often occur in realistic visual scenes, provide insights, and quantify our progress towards mitigating the issues.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95666"} +{"video_file": "KVAx5tys2p_39026577.mp4", "openreview_id": "KVAx5tys2p", "slideslive_id": 39026577, "venue": "nips2024", "title": "TopoFR: A Closer Look at Topology Alignment on Face Recognition", "status": "Poster", "keywords": "Face Recognition;Structure Alignment;Face Perception and Understanding", "tldr": "Investigate structure alignment on face recognition.", "abstract": "The field of face recognition (FR) has undergone significant advancements with the rise of deep learning. Recently, the success of unsupervised learning and graph neural networks has demonstrated the effectiveness of data structure information. Considering that the FR task can leverage large-scale training data, which intrinsically contains significant structure information, we aim to investigate how to encode such critical structure information into the latent space. As revealed from our observations, directly aligning the structure information between the input and latent spaces inevitably suffers from an overfitting problem, leading to a structure collapse phenomenon in the latent space. To address this problem, we propose TopoFR, a novel FR model that leverages a topological structure alignment strategy called PTSA and a hard sample mining strategy named SDE. Concretely, PTSA uses persistent homology to align the topological structures of the input and latent spaces, effectively preserving the structure information and improving the generalization performance of FR model. To mitigate the impact of hard samples on the latent space structure, SDE accurately identifies hard samples by automatically computing structure damage score (SDS) for each sample, and directs the model to prioritize optimizing these samples. Experimental results on popular face benchmarks demonstrate the superiority of our TopoFR over the state-of-the-art methods. Code and models are available at: https://github.com/modelscope/facechain/tree/main/face_module/TopoFR.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95660"} +{"video_file": "KXUijdMFdG_39025113.mp4", "openreview_id": "KXUijdMFdG", "slideslive_id": 39025113, "venue": "nips2024", "title": "Deep Homomorphism Networks", "status": "Poster", "keywords": "graph homomorphism;subgraph counting;graph neural network expressivity", "tldr": "We study parameterized multi-layer graph homomorphism networks in which homomorphism mappings act as convolutional kernels. The proposed multi-layer graph homomorphism networks can be understood using rooted graph product homomorphisms.", "abstract": "Many real-world graphs are large and have some characteristic subgraph patterns, such as triangles in social networks, cliques in web graphs, and cycles in molecular networks. 
Detecting such subgraph patterns is important in many applications; therefore, establishing graph neural networks (GNNs) that can detect such patterns and run fast on large graphs is demanding. In this study, we propose a new GNN layer, named \\emph{graph homomorphism layer}. It enumerates local subgraph patterns that match the predefined set of patterns P\u2219, applies non-linear transformations to node features, and aggregates them along with the patterns. By stacking these layers, we obtain a deep GNN model called \\emph{deep homomorphism network (DHN)}. The expressive power of the DHN is completely characterised by the set of patterns generated from P\u2219 by graph-theoretic operations; hence, it serves as a useful theoretical tool to analyse the expressive power of many GNN models. Furthermore, the model runs in the same time complexity as the graph homomorphisms, which is fast in many real-word graphs. Thus, it serves as a practical and lightweight model that solves difficult problems using domain knowledge.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/95659"}
{"video_file": "KY07A73F3Y_39027046.mp4", "openreview_id": "KY07A73F3Y", "slideslive_id": 39027046, "venue": "nips2024", "title": "Pre-trained Text-to-Image Diffusion Models Are Versatile Representation Learners for Control", "status": "Spotlight", "keywords": "Embodied AI;Representation Learning for Control;Diffusion Models;Foundation Models", "tldr": "We investigate representations from pre-trained text-to-image diffusion models for control tasks and showcase competitive performance across a wide range of tasks.", "abstract": "Embodied AI agents require a fine-grained understanding of the physical world mediated through visual and language inputs. Such capabilities are difficult to learn solely from task-specific data. This has led to the emergence of pre-trained vision-language models as a tool for transferring representations learned from internet-scale data to downstream tasks and new domains. However, commonly used contrastively trained representations such as in CLIP have been shown to fail at enabling embodied agents to gain a sufficiently fine-grained scene understanding\u2014a capability vital for control. To address this shortcoming, we consider representations from pre-trained text-to-image diffusion models, which are explicitly optimized to generate images from text prompts and as such, contain text-conditioned representations that reflect highly fine-grained visuo-spatial information. Using pre-trained text-to-image diffusion models, we construct Stable Control Representations which allow learning downstream control policies that generalize to complex, open-ended environments. We show that policies learned using Stable Control Representations are competitive with state-of-the-art representation learning approaches across a broad range of simulated control settings, encompassing challenging manipulation and navigation tasks.
Most notably, we show that Stable Control Representations enable learning policies that exhibit state-of-the-art performance on OVMM, a difficult open-vocabulary navigation benchmark.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95658"} +{"video_file": "KYHVBsEHuC_39024701.mp4", "openreview_id": "KYHVBsEHuC", "slideslive_id": 39024701, "venue": "nips2024", "title": "DiffuPac: Contextual Mimicry in Adversarial Packets Generation via Diffusion Model", "status": "Poster", "keywords": "Network Intrusion Detection System;Adversarial Machine Learning;Cybersecurity;Adversarial Sample Generation", "tldr": "DiffuPac, a first-of-its-kind model that utilized pre-trained BERT with diffusion model to generate adversarial packets", "abstract": "In domains of cybersecurity, recent advancements in Machine Learning (ML) and Deep Learning (DL) have significantly enhanced Network Intrusion Detection Systems (NIDS), improving the effectiveness of cybersecurity operations. However, attackers have also leveraged ML/DL to develop sophisticated models that generate adversarial packets capable of evading NIDS detection. Consequently, defenders must study and analyze these models to prepare for the evasion attacks that exploit NIDS detection mechanisms. Unfortunately, conventional generation models often rely on unrealistic assumptions about attackers' knowledge of NIDS components, making them impractical for real-world scenarios. To address this issue, we present DiffuPac, a first-of-its-kind generation model designed to generate adversarial packets that evade detection without relying on specific NIDS components. DiffuPac integrates a pre-trained Bidirectional Encoder Representations from Transformers (BERT) with diffusion model, which, through its capability for conditional denoising and classifier-free guidance, effectively addresses the real-world constraint of limited attacker knowledge. By concatenating malicious packets with contextually relevant normal packets and applying targeted noising only to the malicious packets, DiffuPac seamlessly blends adversarial packets into genuine network traffic. Through evaluations on real-world datasets, we demonstrate that DiffuPac achieves strong evasion capabilities against sophisticated NIDS, outperforming conventional methods by an average of 6.69 percentage points, while preserving the functionality and practicality of the generated adversarial packets.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95657"} +{"video_file": "KYHma7hzjr_39027914.mp4", "openreview_id": "KYHma7hzjr", "slideslive_id": 39027914, "venue": "nips2024", "title": "Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable?", "status": "Poster", "keywords": "interpretability;explainability;concepts;concept bottleneck models;model interventions;healthcare", "tldr": "We introduce concept-based interventions for black-box models, formalise the model's intervenability as a measure of intervention effectiveness, and propose a fine-tuning procedure to improve intervenability.", "abstract": "Recently, interpretable machine learning has re-explored concept bottleneck models (CBM). An advantage of this model class is the user's ability to intervene on predicted concept values, affecting the downstream output. 
In this work, we introduce a method to perform such concept-based interventions on pretrained neural networks, which are not interpretable by design, only given a small validation set with concept labels. Furthermore, we formalise the notion of intervenability as a measure of the effectiveness of concept-based interventions and leverage this definition to fine-tune black boxes. Empirically, we explore the intervenability of black-box classifiers on synthetic tabular and natural image benchmarks. We focus on backbone architectures of varying complexity, from simple, fully connected neural nets to Stable Diffusion. We demonstrate that the proposed fine-tuning improves intervention effectiveness and often yields better-calibrated predictions. To showcase the practical utility of our techniques, we apply them to deep chest X-ray classifiers and show that fine-tuned black boxes are more intervenable than CBMs. Lastly, we establish that our methods are still effective under vision-language-model-based concept annotations, alleviating the need for a human-annotated validation set.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/95656"} +{"video_file": "Kcsj9FGnKR_39026844.mp4", "openreview_id": "Kcsj9FGnKR", "slideslive_id": 39026844, "venue": "nips2024", "title": "DiffuLT: Diffusion for Long-tail Recognition Without External Knowledge", "status": "Poster", "keywords": "Long-tail learning; long-tail classification;diffusion model", "tldr": "We introduce a pipeline for long-tail recognition that uses a diffusion model trained solely on the long-tailed dataset to generate a balanced proxy.", "abstract": "This paper introduces a novel pipeline for long-tail (LT) recognition that diverges from conventional strategies. Instead, it leverages the long-tailed dataset itself to generate a balanced proxy dataset without utilizing external data or model. We deploy a diffusion model trained from scratch on only the long-tailed dataset to create this proxy and verify the effectiveness of the data produced. Our analysis identifies approximately-in-distribution (AID) samples, which slightly deviate from the real data distribution and incorporate a blend of class information, as the crucial samples for enhancing the generative model's performance in long-tail classification. We promote the generation of AID samples during the training of a generative model by utilizing a feature extractor to guide the process and filter out detrimental samples during generation. Our approach, termed Diffusion model for Long-Tail recognition (DiffuLT), represents a pioneer application of generative models in long-tail recognition. DiffuLT achieves state-of-the-art results on CIFAR10-LT, CIFAR100-LT, and ImageNet-LT, surpassing leading competitors by significant margins. Comprehensive ablations enhance the interpretability of our pipeline. 
Notably, the entire generative process is conducted without relying on external data or pre-trained model weights, which leads to its generalizability to real-world long-tailed scenarios.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95651"} +{"video_file": "Ke3MSP8Nr6_39026732.mp4", "openreview_id": "Ke3MSP8Nr6", "slideslive_id": 39026732, "venue": "nips2024", "title": "Information-theoretic Limits of Online Classification with Noisy Labels", "status": "Poster", "keywords": "Online classification;noisy label;pairwise testing;Hellinger divergence;Le Cam-Birge testing", "tldr": "We provide nearly matching lower and upper bounds for online classification with noisy labels across a wide range of hypothesis classes and noise mechanisms, using the Hellinger gap of the induced noisy label distributions.", "abstract": "We study online classification with general hypothesis classes where the true labels are determined by some function within the class, but are corrupted by unknown stochastic noise, and the features are generated adversarially. Predictions are made using observed noisy labels and noiseless features, while the performance is measured via minimax risk when comparing against true labels. The noisy mechanism is modeled via a general noisy kernel that specifies, for any individual data point, a set of distributions from which the actual noisy label distribution is chosen. We show that minimax risk is tightly characterized (up to a logarithmic factor of the hypothesis class size) by the Hellinger gap of the noisy label distributions induced by the kernel, independent of other properties such as the means and variances of the noise. Our main technique is based on a novel reduction to an online comparison scheme of two hypotheses, along with a new conditional version of Le Cam-Birg\u00e9 testing suitable for online settings. Our work provides the first comprehensive characterization of noisy online classification with guarantees that apply to the ground truth while addressing general noisy observations.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95650"} +{"video_file": "KhwOuB0fs9_39027549.mp4", "openreview_id": "KhwOuB0fs9", "slideslive_id": 39027549, "venue": "nips2024", "title": "EffiLearner: Enhancing Efficiency of Generated Code via Self-Optimization", "status": "Poster", "keywords": "Code Generation;Efficiency", "tldr": "In this paper, we propose SOOP to improve the efficiency of LLMs generated code.", "abstract": "Large language models (LLMs) have shown remarkable progress in code generation, but their generated code often suffers from inefficiency, resulting in longer execution times and higher memory consumption. To address this issue, we propose EffiLearner, a self-optimization framework that utilizes execution overhead profiles to improve the efficiency of LLM-generated code. EffiLearner first generates code using an LLM, then executes it locally to capture execution time and memory usage profiles. These profiles are fed back to the LLM, which then revises the code to reduce overhead. To evaluate the effectiveness of EffiLearner, we conduct extensive experiments on EffiBench and two commonly used code generation benchmarks with 16 open-source and 6 closed-source models. Our evaluation results demonstrate that through iterative self-optimization, EffiLearner significantly enhances the efficiency of LLM-generated code. 
For example, the execution time (ET) of StarCoder2-15B for the EffiBench decreases from 0.93 (s) to 0.12 (s) which reduces 87.1% execution time requirement compared with the initial code. The total memory usage (TMU) of StarCoder2-15B also decreases from 22.02 (Mbs) to 2.03 (Mbs), which decreases 90.8% total memory consumption during the execution process.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95648"} +{"video_file": "KjNEzWRIqn_39026327.mp4", "openreview_id": "KjNEzWRIqn", "slideslive_id": 39026327, "venue": "nips2024", "title": "Synatra: Turning Indirect Knowledge into Direct Demonstrations for Digital Agents at Scale", "status": "Poster", "keywords": "AI agents;sythetic data;web navigation", "tldr": "we introduce a data synthesize approach for computer agents and achieve strong performance", "abstract": "LLMs can now act as autonomous agents that interact with digital environments and complete specific objectives (e.g., arranging an online meeting). However, accuracy is still far from satisfactory, partly due to a lack of large-scale, direct demonstrations for digital tasks. Obtaining supervised data from humans is costly, and automatic data collection through exploration or reinforcement learning relies on complex environmental and content setup, resulting in datasets that lack comprehensive coverage of various scenarios. On the other hand, there is abundant knowledge that may indirectly assist task completion, such as online tutorials that were created for human consumption. In this work, we present Synatra, an approach that effectively transforms this indirect knowledge into direct supervision at scale. We define different types of indirect knowledge, and carefully study the available sources to obtain it, methods to encode the structure of direct demonstrations, and finally methods to transform indirect knowledge into direct demonstrations. We use 100k such synthetically-created demonstrations to finetune a 7B CodeLlama, and demonstrate that the resulting agent surpasses all comparably sized models on three web-based task benchmarks Mind2Web, MiniWoB++ and WebArena, as well as surpassing GPT-3.5 on WebArena and Mind2Web. In addition, while synthetic demonstrations prove to be only 3% the cost of human demonstrations (at $0.031 each), we show that the synthetic demonstrations can be more effective than an identical number of human demonstrations collected from limited domains.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95647"} +{"video_file": "KqbLzSIXkm_39025060.mp4", "openreview_id": "KqbLzSIXkm", "slideslive_id": 39025060, "venue": "nips2024", "title": "DiMSUM: Diffusion Mamba - A Scalable and Unified Spatial-Frequency Method for Image Generation", "status": "Poster", "keywords": "Diffusion models;Mamba;Wavelet transformation", "tldr": "A new Mamba architecture for image diffusion models", "abstract": "We introduce a novel state-space architecture for diffusion models, effectively harnessing spatial and frequency information to enhance the inductive bias towards local features in input images for image generation tasks. While state-space networks, including Mamba, a revolutionary advancement in recurrent neural networks, typically scan input sequences from left to right, they face difficulties in designing effective scanning strategies, especially in the processing of image data. 
Our method demonstrates that integrating wavelet transformation into Mamba enhances the local structure awareness of visual inputs and better captures long-range relations of frequencies by disentangling them into wavelet subbands, representing both low- and high-frequency components. These wavelet-based outputs are then processed and seamlessly fused with the original Mamba outputs through a cross-attention fusion layer, combining both spatial and frequency information to optimize the order awareness of state-space models which is essential for the details and overall quality of image generation. Besides, we introduce a globally-shared transformer to supercharge the performance of Mamba, harnessing its exceptional power to capture global relationships. Through extensive experiments on standard benchmarks, our method demonstrates superior results compared to DiT and DIFFUSSM, achieving faster training convergence and delivering high-quality outputs. The codes and pretrained models are released at https://github.com/VinAIResearch/DiMSUM.git.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95642"} +{"video_file": "KsLX5pFpOs_39027662.mp4", "openreview_id": "KsLX5pFpOs", "slideslive_id": 39027662, "venue": "nips2024", "title": "Proportional Fairness in Clustering: A Social Choice Perspective", "status": "Poster", "keywords": "clustering;fair clustering;proportional clustering;social choice;fairness;multiwinner voting", "tldr": "We show that several previously unrelated fairness notions from clustering are related to each other and to notions from social choice.", "abstract": "We study the proportional clustering problem of Chen et al. (ICML'19) and relate it to the area of multiwinner voting in computational social choice. We show that any clustering satisfying a weak proportionality notion of Brill and Peters (EC'23) simultaneously obtains the best known approximations to the proportional fairness notion of Chen et al., but also to individual fairness (Jung et al., FORC'20) and the ``core'' (Li et al., ICML'21). In fact, we show that any approximation to proportional fairness is also an approximation to individual fairness and vice versa. Finally, we also study stronger notions of proportional representation, in which deviations do not only happen to single, but multiple candidate centers, and show that stronger proportionality notions of Brill and Peters imply approximations to these stronger guarantees.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/95639"} +{"video_file": "Ktx95ZuRjP_39024579.mp4", "openreview_id": "Ktx95ZuRjP", "slideslive_id": 39024579, "venue": "nips2024", "title": "Learning to Handle Complex Constraints for Vehicle Routing Problems", "status": "Poster", "keywords": "vehicle routing problem;learning to optimize;constraint handling", "tldr": "We propose generic and effective Proactive Infeasibility Prevention (PIP) frameworks to advance the capabilities of neural methods towards more complex VRPs.", "abstract": "Vehicle Routing Problems (VRPs) can model many real-world scenarios and often involve complex constraints. While recent neural methods excel in constructing solutions based on feasibility masking, they struggle with handling complex constraints, especially when obtaining the masking itself is NP-hard. In this paper, we propose a novel Proactive Infeasibility Prevention (PIP) framework to advance the capabilities of neural methods towards more complex VRPs. 
Our PIP integrates the Lagrangian multiplier as a basis to enhance constraint awareness and introduces preventative infeasibility masking to proactively steer the solution construction process. Moreover, we present PIP-D, which employs an auxiliary decoder and two adaptive strategies to learn and predict these tailored masks, potentially enhancing performance while significantly reducing computational costs during training. To verify our PIP designs, we conduct extensive experiments on the highly challenging Traveling Salesman Problem with Time Window (TSPTW), and TSP with Draft Limit (TSPDL) variants under different constraint hardness levels. Notably, our PIP is generic to boost many neural methods, and exhibits both a significant reduction in infeasible rate and a substantial improvement in solution quality.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95638"}
{"video_file": "Kx8I0rP7w2_39025049.mp4", "openreview_id": "Kx8I0rP7w2", "slideslive_id": 39025049, "venue": "nips2024", "title": "Why the Metric Backbone Preserves Community Structure", "status": "Poster", "keywords": "community detection;graph sparsification;stochastic block model", "tldr": "The metric backbone of a weighted graph is the union of all-pairs shortest path; despite its tendency to delete intra-community edges, the metric backbone preserves the community structure and is an efficient graph sparsifier.", "abstract": "The metric backbone of a weighted graph is the union of all-pairs shortest paths. It is obtained by removing all edges (u,v) that are not the shortest path between u and v. In networks with well-separated communities, the metric backbone tends to preserve many inter-community edges, because these edges serve as bridges connecting two communities, but tends to delete many intra-community edges because the communities are dense. This suggests that the metric backbone would dilute or destroy the community structure of the network. However, this is not borne out by prior empirical work, which instead showed that the metric backbone of real networks preserves the community structure of the original network well. In this work, we analyze the metric backbone of a broad class of weighted random graphs with communities, and we formally prove the robustness of the community structure with respect to the deletion of all the edges that are not in the metric backbone. An empirical comparison of several graph sparsification techniques confirms our theoretical finding and shows that the metric backbone is an efficient sparsifier in the presence of communities.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95632"}
{"video_file": "KyNO0n1bJ9_39025462.mp4", "openreview_id": "KyNO0n1bJ9", "slideslive_id": 39025462, "venue": "nips2024", "title": "The Minimax Rate of HSIC Estimation for Translation-Invariant Kernels", "status": "Poster", "keywords": "kernel method;Hilbert-Schmidt independence criterion;minimax rate;translation-invariant kernels", "tldr": "We prove that the minimax optimal rate of HSIC estimation on R^d with continuous bounded translation-invariant characteristic kernels is O(n^{\u22121/2}).", "abstract": "Kernel techniques are among the most influential approaches in data science and statistics. Under mild conditions, the reproducing kernel Hilbert space associated to a kernel is capable of encoding the independence of $M\\ge2$ random variables.
Probably the most widespread independence measure relying on kernels is the so-called Hilbert-Schmidt independence criterion (HSIC; also referred to as distance covariance in the statistics literature). Despite various existing HSIC estimators designed since its introduction close to two decades ago, the fundamental question of the rate at which HSIC can be estimated is still open. In this work, we prove that the minimax optimal rate of HSIC estimation on $\\mathbb{R}^d$ for Borel measures containing the Gaussians with continuous bounded translation-invariant characteristic kernels is $\\mathcal{O}\\left(n^{-1/2}\\right)$. Specifically, our result implies the optimality in the minimax sense of many of the most-frequently used estimators (including the U-statistic, the V-statistic, and the Nystr\u00f6m-based one) on $\\mathbb{R}^d$.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95630"} +{"video_file": "KyVBzkConO_39028301.mp4", "openreview_id": "KyVBzkConO", "slideslive_id": 39028301, "venue": "nips2024", "title": "Injecting Undetectable Backdoors in Obfuscated Neural Networks and Language Models", "status": "Poster", "keywords": "backdoors;white-box undetectable;obfuscation;theory", "tldr": "We develop a strategy to backdoor any neural network while ensuring that even if a model\u2019s weights and parameters are accessible, the backdoor cannot be efficiently detected.", "abstract": "As ML models become increasingly complex and integral to high-stakes domains such as finance and healthcare, they also become more susceptible to sophisticated adversarial attacks. We investigate the threat posed by undetectable backdoors, as defined in Goldwasser et al. [2022], in models developed by insidious external expert firms. When such backdoors exist, they allow the designer of the model to sell information on how to slightly perturb their input to change the outcome of the model. We develop a general strategy to plant backdoors to obfuscated neural networks, that satisfy the security properties of the celebrated notion of indistinguishability obfuscation. Applying obfuscation before releasing neural networks is a strategy that is well motivated to protect sensitive information of the external expert firm. Our method to plant backdoors ensures that even if the weights and architecture of the obfuscated model are accessible, the existence of the backdoor is still undetectable. Finally, we introduce the notion of undetectable backdoors to language models and extend our neural network backdoor attacks to such models based on the existence of steganographic functions.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/95629"} +{"video_file": "Kzno1r3Xef_39027600.mp4", "openreview_id": "Kzno1r3Xef", "slideslive_id": 39027600, "venue": "nips2024", "title": "A Structure-Aware Framework for Learning Device Placements on Computation Graphs", "status": "Poster", "keywords": "device placement;heterogeneous computing;computation graphs;graph pooling", "tldr": "Add:", "abstract": "Computation graphs are Directed Acyclic Graphs (DAGs) where the nodes correspond to mathematical operations and are used widely as abstractions in optimizations of neural networks. The device placement problem aims to identify optimal allocations of those nodes to a set of (potentially heterogeneous) devices. Existing approaches rely on two types of architectures known as grouper-placer and encoder-placer, respectively. 
In this work, we bridge the gap between encoder-placer and grouper-placer techniques and propose a novel framework for the task of device placement, relying on smaller computation graphs extracted from the OpenVINO toolkit. The framework consists of five steps, including graph coarsening, node representation learning and policy optimization. It facilitates end-to-end training and takes into account the DAG nature of the computation graphs. We also propose a model variant, inspired by graph parsing networks and complex network analysis, enabling graph representation learning and jointed, personalized graph partitioning, using an unspecified number of groups. To train the entire framework, we use reinforcement learning using the execution time of the placement as a reward. We demonstrate the flexibility and effectiveness of our approach through multiple experiments with three benchmark models, namely Inception-V3, ResNet, and BERT. The robustness of the proposed framework is also highlighted through an ablation study. The suggested placements improve the inference speed for the benchmark models by up to 58.2 over CPU execution and by up to 60.24 compared to other commonly used baselines.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/95628"}
{"video_file": "L1mMK39Z7P_39025327.mp4", "openreview_id": "L1mMK39Z7P", "slideslive_id": 39025327, "venue": "nips2024", "title": "ACES: Generating a Diversity of Challenging Programming Puzzles with Autotelic Generative Models", "status": "Spotlight", "keywords": "diversity search;code generation;quality-diversity;open-endedness;generative models;evolutionary algorithms;code models", "tldr": "We introduce a new open-ended algorithm to automate the generation of diverse and challenging programming puzzles to evaluate LLM-based problem solvers.", "abstract": "The ability to invent novel and interesting problems is a remarkable feature of human intelligence that drives innovation, art, and science. We propose a method that aims to automate this process by harnessing the power of state-of-the-art generative models to produce a diversity of challenging yet solvable problems, here in the context of Python programming puzzles. Inspired by the intrinsically motivated literature, Autotelic CodE Search (ACES) jointly optimizes for the diversity and difficulty of generated problems. We represent problems in a space of LLM-generated semantic descriptors describing the programming skills required to solve them (e.g. string manipulation, dynamic programming, etc.) and measure their difficulty empirically as a linearly decreasing function of the success rate of \\textit{Llama-3-70B}, a state-of-the-art LLM problem solver. ACES iteratively prompts a large language model to generate difficult problems achieving a diversity of target semantic descriptors (goal-directed exploration) using previously generated problems as in-context examples.
ACES generates problems that are more diverse and more challenging than problems produced by baseline methods and three times more challenging than problems found in existing Python programming benchmarks on average across 11 state-of-the-art code LLMs.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95626"} +{"video_file": "L3RYBqzRmF_39026584.mp4", "openreview_id": "L3RYBqzRmF", "slideslive_id": 39026584, "venue": "nips2024", "title": "Counter-Current Learning: A Biologically Plausible Dual Network Approach for Deep Learning", "status": "Poster", "keywords": "biologically plausible algorithm;backward locking problem;biological inspired algorithm;target propagation", "tldr": "We propose counter-current learning, a biologically inspired dual network architecture that facilitates local learning and addresses weight transport, non-local credit assignment, and backward locking issues in backpropagation.", "abstract": "Despite its widespread use in neural networks, error backpropagation has faced criticism for its lack of biological plausibility, suffering from issues such as the backward locking problem and the weight transport problem. These limitations have motivated researchers to explore more biologically plausible learning algorithms that could potentially shed light on how biological neural systems adapt and learn. Inspired by the counter-current exchange mechanisms observed in biological systems, we propose counter-current learning (CCL), a biologically plausible framework for credit assignment in deep learning. This framework employs a feedforward network to process input data and a feedback network to process targets, with each network enhancing the other through anti-parallel signal propagation. By leveraging the more informative signals from the bottom layer of the feedback network to guide the updates of the top layer of the feedforward network and vice versa, CCL enables the simultaneous transformation of source inputs to target outputs and the dynamic mutual influence of these transformations. Experimental results on MNIST, FashionMNIST, CIFAR10, CIFAR100, and STL-10 datasets using multi-layer perceptrons and convolutional neural networks demonstrate that CCL achieves comparable performance to other biological plausible algorithms while offering a more biologically realistic learning mechanism. Furthermore, we showcase the applicability of our approach to an autoencoder task, underscoring its potential for unsupervised representation learning. Our work presents a promising direction for biologically inspired and plausible learning algorithms, offering insights into the mechanisms of learning and adaptation in neural networks.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95624"} +{"video_file": "L6ICzOxAfi_39025700.mp4", "openreview_id": "L6ICzOxAfi", "slideslive_id": 39025700, "venue": "nips2024", "title": "LoCo: Learning 3D Location-Consistent Image Features with a Memory-Efficient Ranking Loss", "status": "Poster", "keywords": "Feature Learning;Self-Supervised Learning;3D Vision", "tldr": "A loss function for training feature extractors to extract location-consistent features in a self-supervised manner.", "abstract": "Image feature extractors are rendered substantially more useful if different views of the same 3D location yield similar features while still being distinct from other locations. 
A feature extractor that achieves this goal even under significant viewpoint changes must recognise not just semantic categories in a scene, but also understand how different objects relate to each other in three dimensions. Existing work addresses this task by posing it as a patch retrieval problem, training the extracted features to facilitate retrieval of all image patches that project from the same 3D location. However, this approach uses a loss formulation that requires substantial memory and computation resources, limiting its applicability for large-scale training. We present a method for memory-efficient learning of location-consistent features that reformulates and approximates the smooth average precision objective. This novel loss function enables improvements in memory efficiency by three orders of magnitude, mitigating a key bottleneck of previous methods and allowing much larger models to be trained with the same computational resources. We showcase the improved location consistency of our trained feature extractor directly on a multi-view consistency task, as well as the downstream task of scene-stable panoptic segmentation, significantly outperforming previous state-of-the-art.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95621"}
{"video_file": "L86glqNCUj_39028714.mp4", "openreview_id": "L86glqNCUj", "slideslive_id": 39028714, "venue": "nips2024", "title": "Symmetries in Overparametrized Neural Networks: A Mean Field View", "status": "Spotlight", "keywords": "Overparametrized Neural Networks;Mean Field Limit of Neural Networks;Symmetries in Neural Networks;Wasserstein Gradient Flow;Data Augmentation;Feature Averaging;Equivariant Architechtures.", "tldr": "Mean Field analysis of overparametrized shallow models under symmetric data and/or symmetry-leveraging techniques.", "abstract": "We develop a Mean-Field (MF) view of the learning dynamics of overparametrized Artificial Neural Networks (NN) under distributional symmetries of the data w.r.t. the action of a general compact group G. We consider for this a class of generalized shallow NNs given by an ensemble of N multi-layer units, jointly trained using stochastic gradient descent (SGD) and possibly symmetry-leveraging (SL) techniques, such as Data Augmentation (DA), Feature Averaging (FA) or Equivariant Architectures (EA). We introduce the notions of weakly and strongly invariant laws (WI and SI) on the parameter space of each single unit, corresponding, respectively, to G-invariant distributions, and to distributions supported on parameters fixed by the group action (which encode EA). This allows us to define symmetric models compatible with taking N \u2192 \u221e and give an interpretation of the asymptotic dynamics of DA, FA and EA in terms of Wasserstein Gradient Flows describing their MF limits. When activations respect the group action, we show that, for symmetric data, DA, FA and freely-trained models obey the exact same MF dynamic, which stays in the space of WI parameter laws and attains therein the population risk's minimizer. We also provide a counterexample to the general attainability of such an optimum over SI laws. Despite this, and quite remarkably, we show that the space of SI laws is also preserved by these MF distributional dynamics even when freely trained. This sharply contrasts the finite-N setting, in which EAs are generally not preserved by unconstrained SGD.
We illustrate the validity of our findings as N gets larger, in a teacher-student experimental setting, training a student NN to learn from a WI, SI or arbitrary teacher model through various SL schemes. We lastly deduce a data-driven heuristic to discover the largest subspace of parameters supporting SI distributions for a problem, that could be used for designing EA with minimal generalization error.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95619"}
{"video_file": "L8Q21Qrjmd_39025122.mp4", "openreview_id": "L8Q21Qrjmd", "slideslive_id": 39025122, "venue": "nips2024", "title": "Pessimistic Backward Policy for GFlowNets", "status": "Poster", "keywords": "Generative flow networks;generative models;reinforcement learning", "tldr": "We propose a pessimistic backward policy for GFlowNets to resolve the under-exploitation problem of backward policy-based flow matching.", "abstract": "This paper studies Generative Flow Networks (GFlowNets), which learn to sample objects proportionally to a given reward function through the trajectory of state transitions. In this work, we observe that GFlowNets tend to under-exploit the high-reward objects due to training on insufficient number of trajectories, which may lead to a large gap between the estimated flow and the (known) reward value. In response to this challenge, we propose a pessimistic backward policy for GFlowNets (PBP-GFN), which maximizes the observed flow to align closely with the true reward for the object. We extensively evaluate PBP-GFN across eight benchmarks, including hyper-grid environment, bag generation, structured set generation, molecular generation, and four RNA sequence generation tasks. In particular, PBP-GFN enhances the discovery of high-reward objects, maintains the diversity of the objects, and consistently outperforms existing methods.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95618"}
{"video_file": "L8h6cozcbn_39027499.mp4", "openreview_id": "L8h6cozcbn", "slideslive_id": 39027499, "venue": "nips2024", "title": "Transformers Learn to Achieve Second-Order Convergence Rates for In-Context Linear Regression", "status": "Poster", "keywords": "transformers;in-context learning;linear regression", "tldr": "Add:", "abstract": "Transformers excel at in-context learning (ICL)---learning from demonstrations without parameter updates---but how they do so remains a mystery. Recent work suggests that Transformers may internally run Gradient Descent (GD), a first-order optimization method, to perform ICL. In this paper, we instead demonstrate that Transformers learn to approximate second-order optimization methods for ICL. For in-context linear regression, Transformers share a similar convergence rate as Iterative Newton's Method, both exponentially faster than GD. Empirically, predictions from successive Transformer layers closely match different iterations of Newton\u2019s Method linearly, with each middle layer roughly computing 3 iterations; thus, Transformers and Newton\u2019s method converge at roughly the same rate. In contrast, Gradient Descent converges exponentially more slowly. We also show that Transformers can learn in-context on ill-conditioned data, a setting where Gradient Descent struggles but Iterative Newton succeeds.
Finally, to corroborate our empirical findings, we prove that Transformers can implement k iterations of Newton's method with k + O(1) layers.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/95617"}
{"video_file": "LDzrQB4X5w_39028476.mp4", "openreview_id": "LDzrQB4X5w", "slideslive_id": 39028476, "venue": "nips2024", "title": "A Best-of-both-worlds Algorithm for Bandits with Delayed Feedback with Robustness to Excessive Delays", "status": "Poster", "keywords": "Best-of-both-worlds;delayed bandit feedback", "tldr": "We propose a new best-of-both-worlds algorithm for bandits with variably delayed feedback that is robust to excessive delays", "abstract": "We propose a new best-of-both-worlds algorithm for bandits with variably delayed feedback. In contrast to prior work, which required prior knowledge of the maximal delay $d_{\\max}$ and had a linear dependence of the regret on it, our algorithm can tolerate arbitrary excessive delays up to order $T$ (where $T$ is the time horizon). The algorithm is based on three technical innovations, which may all be of independent interest: (1) We introduce the first implicit exploration scheme that works in best-of-both-worlds setting. (2) We introduce the first control of distribution drift that does not rely on boundedness of delays. The control is based on the implicit exploration scheme and adaptive skipping of observations with excessive delays. (3) We introduce a procedure relating standard regret with drifted regret that does not rely on boundedness of delays. At the conceptual level, we demonstrate that complexity of best-of-both-worlds bandits with delayed feedback is characterized by the amount of information missing at the time of decision making (measured by the number of outstanding observations) rather than the time that the information is missing (measured by the delays).", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/95613"}
{"video_file": "LEed5Is4oi_39024996.mp4", "openreview_id": "LEed5Is4oi", "slideslive_id": 39024996, "venue": "nips2024", "title": "Robot Policy Learning with Temporal Optimal Transport Reward", "status": "Poster", "keywords": "Reinforcement Learning;Imitation Learning;Optimal Transport", "tldr": "Temporal optimal transport reward for policy learning", "abstract": "Reward specification is one of the most tricky problems in Reinforcement Learning, which usually requires tedious hand engineering in practice. One promising approach to tackle this challenge is to adopt existing expert video demonstrations for policy learning. Some recent work investigates how to learn robot policies from only a single/few expert video demonstrations. For example, reward labeling via Optimal Transport (OT) has been shown to be an effective strategy to generate a proxy reward by measuring the alignment between the robot trajectory and the expert demonstrations. However, previous work mostly overlooks that the OT reward is invariant to temporal order information, which could bring extra noise to the reward signal. To address this issue, in this paper, we introduce the Temporal Optimal Transport (TemporalOT) reward to incorporate temporal order information for learning a more accurate OT-based proxy reward. Extensive experiments on the Meta-world benchmark tasks validate the efficacy of the proposed method.
Our code is available at: https://github.com/fuyw/TemporalOT.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95612"} +{"video_file": "LGXeIx75sc_39028290.mp4", "openreview_id": "LGXeIx75sc", "slideslive_id": 39028290, "venue": "nips2024", "title": "Where's Waldo: Diffusion Features For Personalized Segmentation and Retrieval", "status": "Poster", "keywords": "Tex-to-image diffusion model;instance retrieval", "tldr": "We present a new approach for personalized tasks such as segmentation or retrieval without training using text-to-image diffusion models", "abstract": "Personalized retrieval and segmentation aim to locate specific instances within a dataset based on an input image and a short description of the reference instance. While supervised methods are effective, they require extensive labeled data for training. Recently, self-supervised foundation models have been introduced to these tasks showing comparable results to supervised methods. However, a significant flaw in these models is evident: they struggle to locate a desired instance when other instances within the same class are presented. In this paper, we explore text-to-image diffusion models for these tasks. Specifically, we propose a novel approach called PDM for Personalized Diffusion Features Matching, that leverages intermediate features of pre-trained text-to-image models for personalization tasks without any additional training. PDM demonstrates superior performance on popular retrieval and segmentation benchmarks, outperforming even supervised methods. We also highlight notable shortcomings in current instance and segmentation datasets and propose new benchmarks for these tasks.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95609"} +{"video_file": "LGus3wXPxc_39027489.mp4", "openreview_id": "LGus3wXPxc", "slideslive_id": 39027489, "venue": "nips2024", "title": "Seeing Beyond the Crop: Using Language Priors for Out-of-Bounding Box Keypoint Prediction", "status": "Poster", "keywords": "2D pose estimation;out-of-image keypoint prediction;multimodal pose estimation;CLIP", "tldr": "We re-conceptualize pose estimation as an out-of-image keypoint prediction task to robustly predict extension keypoints from human pose.", "abstract": "Accurate estimation of human pose and the pose of interacting objects, like a hockey stick, is crucial for action recognition and performance analysis, particularly in sports. Existing methods capture the object along with the human in the bounding boxes, assuming all keypoints are visible within the bounding box. This necessitates larger bounding boxes to capture the object, introducing unnecessary visual features and hindering performance in real-world cluttered environments. We propose a simple image and text-based multimodal solution TokenCLIPose that addresses this limitation. Our approach focuses solely on human keypoints within the bounding box, treating objects as unseen. TokenCLIPose leverages the rich semantic representations endowed by language for inducing keypoint-specific context, even for occluded keypoints. We evaluate the performance of TokenCLIPose on a real-world Ice-Hockey dataset, and demonstrate its generalizability through zero-shot transfer to a smaller Lacrosse dataset. Additionally, we showcase its flexibility on CrowdPose, a popular occlusion benchmark with keypoints within the bounding box. 
Our method significantly improves over state-of-the-art approaches on all three datasets, with gains of 4.36%, 2.35%, and 3.8%, respectively.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95608"}
{"video_file": "LJCQH6U0pl_39028171.mp4", "openreview_id": "LJCQH6U0pl", "slideslive_id": 39028171, "venue": "nips2024", "title": "Towards Principled Graph Transformers", "status": "Poster", "keywords": "graph transformers;expressivity;Weisfeiler and Leman;Weisfeiler and Lehman", "tldr": "We study graph transformers that are both theoretically grounded in the Weisfeiler-Leman hierarchy as well as perform comparative with state-of-the-art on graph learning benchmarks.", "abstract": "The expressive power of graph learning architectures based on the k-dimensional Weisfeiler-Leman (k-WL) hierarchy is well understood. However, such architectures often fail to deliver solid predictive performance on real-world tasks, limiting their practical impact. In contrast, global attention-based models such as graph transformers demonstrate strong performance in practice, but comparing their expressive power with the k-WL hierarchy remains challenging, particularly since these architectures rely on positional or structural encodings for their expressivity and predictive performance. To address this, we show that the recently proposed Edge Transformer, a global attention model operating on node pairs instead of nodes, has 3-WL expressive power when provided with the right tokenization. Empirically, we demonstrate that the Edge Transformer surpasses other theoretically aligned architectures regarding predictive performance while not relying on positional or structural encodings.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/95605"}
{"video_file": "LJNqVIKSCr_39027399.mp4", "openreview_id": "LJNqVIKSCr", "slideslive_id": 39027399, "venue": "nips2024", "title": "Double-Ended Synthesis Planning with Goal-Constrained Bidirectional Search", "status": "Spotlight", "keywords": "Retrosynthesis;synthesis planning;chemistry;bidirectional search", "tldr": "We propose a neural-guided bidirectional search algorithm for a new starting material-constrained formulation of synthesis planning", "abstract": "Computer-aided synthesis planning (CASP) algorithms have demonstrated expert-level abilities in planning retrosynthetic routes to molecules of low to moderate complexity. However, current search methods assume the sufficiency of reaching arbitrary building blocks, failing to address the common real-world constraint where using specific molecules is desired. To this end, we present a formulation of synthesis planning with starting material constraints. Under this formulation, we propose Double-Ended Synthesis Planning (DESP), a novel CASP algorithm under a bidirectional graph search scheme that interleaves expansions from the target and from the goal starting materials to ensure constraint satisfiability. The search algorithm is guided by a goal-conditioned cost network learned offline from a partially observed hypergraph of valid chemical reactions.
We demonstrate the utility of DESP in improving solve rates and reducing the number of search expansions by biasing synthesis planning towards expert goals on multiple new benchmarks. DESP can make use of existing one-step retrosynthesis models, and we anticipate its performance to scale as these one-step model capabilities improve.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/95604"}
{"video_file": "LPbqZszt8Y_39026190.mp4", "openreview_id": "LPbqZszt8Y", "slideslive_id": 39026190, "venue": "nips2024", "title": "MAC Advice for facility location mechanism design", "status": "Poster", "keywords": "Algorithms with Predictions;MAC Predictions;Facility Location;Discrete Optimization", "tldr": "We define a notion of Mostly Approximately Correct predictions, and use them to get better strategyproof mechanisms for facility location.", "abstract": "Algorithms with predictions are gaining traction across various domains, as a way to surpass traditional worst-case bounds through (machine-learned) advice. We study the canonical problem of k-facility location mechanism design, where the n agents are strategic and might misreport their locations. We receive a prediction for each agent's location, and these predictions are crucially allowed to be only \"mostly\" and \"approximately\" correct (MAC for short): a \u03b4-fraction of the predicted locations are allowed to be arbitrarily incorrect, and the remainder of the predictions are required to be correct up to an \u03b5-error. Moreover, we make no assumption on the independence of the errors. Can such \"flawed\" predictions allow us to beat the current best bounds for strategyproof facility location?
We show how natural robustness of the 1-median (also known as the geometric median) of a set of points leads to an algorithm for single-facility location with MAC predictions. We extend our results to a natural \"balanced\" variant of the k-facility case, and show that without balancedness, robustness completely breaks down even for k = 2 facilities on a line. As our main result, for this \"unbalanced\" setting we devise a truthful random mechanism, which outperforms the best known mechanism (with no predictions) by Lu et al. [2010]. En route, we introduce the problem of \"second\" facility location, in which the first facility location is already fixed. Our robustness findings may be of independent interest, as quantitative versions of classic breakdown-point results in robust statistics.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/95596"}
{"video_file": "LQBlSGeOGm_39024684.mp4", "openreview_id": "LQBlSGeOGm", "slideslive_id": 39024684, "venue": "nips2024", "title": "How Molecules Impact Cells: Unlocking Contrastive PhenoMolecular Retrieval", "status": "Poster", "keywords": "Multi-Modality;Contrastive Learning;CLIP;Cell Morphology;Molecules;Molecular Retrieval;Zero-Shot Learning;Cell-Painting", "tldr": "We address the challenge of contrastive phenomic molecular retrieval. We demonstrate pre-trained uni-modal representation methods can be used in a variety of ways to significantly improve zero-shot molecular retrieval rates.", "abstract": "Predicting molecular impact on cellular function is a core challenge in therapeutic design.
Phenomic experiments, designed to capture cellular morphology, utilize microscopy based techniques and demonstrate a high throughput solution for uncovering molecular impact on the cell. In this work, we learn a joint latent space between molecular structures and microscopy phenomic experiments, aligning paired samples with contrastive learning. Specifically, we study the problem of Contrastive PhenoMolecular Retrieval, which consists of zero-shot molecular structure identification conditioned on phenomic experiments. We assess challenges in multi-modal learning of phenomics and molecular modalities such as experimental batch effect, inactive molecule perturbations, and encoding perturbation concentration. We demonstrate improved multi-modal learner retrieval through (1) a uni-modal pre-trained phenomics model, (2) a novel inter sample similarity aware loss, and (3) models conditioned on a representation of molecular concentration. Following this recipe, we propose MolPhenix, a molecular phenomics model. MolPhenix leverages a pre-trained phenomics model to demonstrate significant performance gains across perturbation concentrations, molecular scaffolds, and activity thresholds. In particular, we demonstrate an 8.1 times improvement in zero shot molecular retrieval of active molecules over the previous state-of-the-art, reaching 77.33% in top-1% accuracy. These results open the door for machine learning to be applied in virtual phenomics screening, which can significantly benefit drug discovery applications.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/95593"} +{"video_file": "LR1nnsD7H0_39027688.mp4", "openreview_id": "LR1nnsD7H0", "slideslive_id": 39027688, "venue": "nips2024", "title": "Neural decoding from stereotactic EEG: accounting for electrode variability across subjects", "status": "Poster", "keywords": "sEEG;Neural Decoding;Transformers;Multi-Subject Training", "tldr": "Seegnificant: a framework and architecture for multi-subject neural decoding based on sEEG", "abstract": "Deep learning based neural decoding from stereotactic electroencephalography (sEEG) would likely benefit from scaling up both dataset and model size. To achieve this, combining data across multiple subjects is crucial. However, in sEEG cohorts, each subject has a variable number of electrodes placed at distinct locations in their brain, solely based on clinical needs. Such heterogeneity in electrode number/placement poses a significant challenge for data integration, since there is no clear correspondence of the neural activity recorded at distinct sites between individuals. Here we introduce seegnificant: a training framework and architecture that can be used to decode behavior across subjects using sEEG data. We tokenize the neural activity within electrodes using convolutions and extract long-term temporal dependencies between tokens using self-attention in the time dimension. The 3D location of each electrode is then mixed with the tokens, followed by another self-attention in the electrode dimension to extract effective spatiotemporal neural representations. Subject-specific heads are then used for downstream decoding tasks. Using this approach, we construct a multi-subject model trained on the combined data from 21 subjects performing a behavioral task. We demonstrate that our model is able to decode the trial-wise response time of the subjects during the behavioral task solely from neural data. 
We also show that the neural representations learned by pretraining our model across individuals can be transferred in a few-shot manner to new subjects. This work introduces a scalable approach towards sEEG data integration for multi-subject model training, paving the way for cross-subject generalization for sEEG decoding.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/95591"} +{"video_file": "LUIXdWn6Z5_39026228.mp4", "openreview_id": "LUIXdWn6Z5", "slideslive_id": 39026228, "venue": "nips2024", "title": "Risk-sensitive control as inference with R\u00e9nyi divergence", "status": "Poster", "keywords": "risk-sensitive control;optimal control;reinforcement learning;variational inference;R\u00e9nyi divergence", "tldr": "This study formulates risk-sensitive control as variational inference using R\u00e9nyi divergence. Based on the proposed unifying framework, we reveal several equivalence results for control problems and derive risk-sensitive RL algorithms.", "abstract": "This paper introduces the risk-sensitive control as inference (RCaI) that extends CaI by using R\u00e9nyi divergence variational inference. RCaI is shown to be equivalent to log-probability regularized risk-sensitive control, which is an extension of the maximum entropy (MaxEnt) control. We also prove that the risk-sensitive optimal policy can be obtained by solving a soft Bellman equation, which reveals several equivalences between RCaI, MaxEnt control, the optimal posterior for CaI, and linearly-solvable control. Moreover, based on RCaI, we derive the risk-sensitive reinforcement learning (RL) methods: the policy gradient and the soft actor-critic. As the risk-sensitivity parameter vanishes, we recover the risk-neutral CaI and RL, which means that RCaI is a unifying framework. Furthermore, we give another risk-sensitive generalization of the MaxEnt control using R\u00e9nyi entropy regularization. We show that in both of our extensions, the optimal policies have the same structure even though the derivations are very different.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95589"} +{"video_file": "LX1lwP90kt_39025860.mp4", "openreview_id": "LX1lwP90kt", "slideslive_id": 39025860, "venue": "nips2024", "title": "Modeling Latent Neural Dynamics with Gaussian Process Switching Linear Dynamical Systems", "status": "Poster", "keywords": "gaussian process;switching;slds;neural;neuroscience;dynamics;probabilistic;time series", "tldr": "Gaussian Process Switching Linear Dynamical Systems maintain the locally linear interpretability of rSLDS models while naturally capturing uncertainty in dynamics.", "abstract": "Understanding how the collective activity of neural populations relates to computation and ultimately behavior is a key goal in neuroscience. To this end, statistical methods which describe high-dimensional neural time series in terms of low-dimensional latent dynamics have played a fundamental role in characterizing neural systems. Yet, what constitutes a successful method involves two opposing criteria: (1) methods should be expressive enough to capture complex nonlinear dynamics, and (2) they should maintain a notion of interpretability often only warranted by simpler linear models. In this paper, we develop an approach that balances these two objectives: the Gaussian Process Switching Linear Dynamical System (gpSLDS). 
Our method builds on previous work modeling the latent state evolution via a stochastic differential equation whose nonlinear dynamics are described by a Gaussian process (GP-SDEs). We propose a novel kernel function which enforces smoothly interpolated locally linear dynamics, and therefore expresses flexible -- yet interpretable -- dynamics akin to those of recurrent switching linear dynamical systems (rSLDS). Our approach resolves key limitations of the rSLDS such as artifactual oscillations in dynamics near discrete state boundaries, while also providing posterior uncertainty estimates of the dynamics. To fit our models, we leverage a modified learning objective which improves the estimation accuracy of kernel hyperparameters compared to previous GP-SDE fitting approaches. We apply our method to synthetic data and data recorded in two neuroscience experiments and demonstrate favorable performance in comparison to the rSLDS.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95587"} +{"video_file": "LXz1xIEBkF_39028115.mp4", "openreview_id": "LXz1xIEBkF", "slideslive_id": 39028115, "venue": "nips2024", "title": "STL: Still Tricky Logic (for System Validation, Even When Showing Your Work)", "status": "Poster", "keywords": "Explainability;Formal Methods;Human Experiments;Robotics", "tldr": "Formal methods are not particularly \"explainable\" for system validation even when applying best practices from education research", "abstract": "As learned control policies become increasingly common in autonomous systems, there is increasing need to ensure that they are interpretable and can be checked by human stakeholders. Formal specifications have been proposed as ways to produce human-interpretable policies for autonomous systems that can still be learned from examples. Previous work showed that despite claims of interpretability, humans are unable to use formal specifications presented in a variety of ways to validate even simple robot behaviors. This work uses active learning, a standard pedagogical method, to attempt to improve humans' ability to validate policies in signal temporal logic (STL). Results show that overall validation accuracy is not high, at 65%\n\u00b1\n15% (mean\n\u00b1\nstandard deviation), and that the three conditions of no active learning, active learning, and active learning with feedback do not significantly differ from each other. Our results suggest that the utility of formal specifications for human interpretability is still unsupported but point to other avenues of development which may enable improvements in system validation.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/95586"} +{"video_file": "LYivxMp5es_39024919.mp4", "openreview_id": "LYivxMp5es", "slideslive_id": 39024919, "venue": "nips2024", "title": "Towards Effective Planning Strategies for Dynamic Opinion Networks", "status": "Poster", "keywords": "Opinion networks;Dynamic Planning;Misinformation Spread;Network dynamics.", "tldr": "Containing misinformation spread through learning-based approaches using Graph Neural Networks", "abstract": "In this study, we investigate the under-explored intervention planning aimed at disseminating accurate information within dynamic opinion networks by leveraging learning strategies. 
Intervention planning involves identifying key nodes (search) and exerting control (e.g., disseminating accurate/official information through the nodes) to mitigate the influence of misinformation. However, as the network size increases, the problem becomes computationally intractable. To address this, we first introduce a ranking algorithm to identify key nodes for disseminating accurate information, which facilitates the training of neural network (NN) classifiers that provide generalized solutions for the search and planning problems. Second, we mitigate the complexity of label generation\u2014which becomes challenging as the network grows\u2014by developing a reinforcement learning (RL)-based centralized dynamic planning framework. We analyze these NN-based planners for opinion networks governed by two dynamic propagation models. Each model incorporates both binary and continuous opinion and trust representations. Our experimental results demonstrate that the ranking algorithm-based classifiers provide plans that enhance infection rate control, especially with increased action budgets for small networks. Further, we observe that the reward strategies focusing on key metrics, such as the number of susceptible nodes and infection rates, outperform those prioritizing faster blocking strategies. Additionally, our findings reveal that graph convolutional network (GCN)-based planners facilitate scalable centralized plans that achieve lower infection rates (higher control) across various network configurations (e.g., Watts-Strogatz topology, varying action budgets, varying initial infected nodes, and varying degree of infected nodes).", "primary_area": "machine_learning_for_social_sciences", "site": "https://neurips.cc/virtual/2024/poster/95585"} +{"video_file": "LYx4w3CAgy_39024441.mp4", "openreview_id": "LYx4w3CAgy", "slideslive_id": 39024441, "venue": "nips2024", "title": "LLM-Check: Investigating Detection of Hallucinations in Large Language Models", "status": "Poster", "keywords": "Large Language Models;Hallucinations in Language Models;Hallucination Detection;Eigen-analysis of LM Embeddings", "tldr": "In this work, we study hallucination detection in Large Language Models by analyzing their internal hidden states, attention maps and output prediction probabilities.", "abstract": "While Large Language Models (LLMs) have become immensely popular due to their outstanding performance on a broad range of tasks, these models are prone to producing hallucinations\u2014 outputs that are fallacious or fabricated yet often appear plausible or tenable at a glance. In this paper, we conduct a comprehensive investigation into the nature of hallucinations within LLMs and furthermore explore effective techniques for detecting such inaccuracies in various real-world settings. Prior approaches to detect hallucinations in LLM outputs, such as consistency checks or retrieval-based methods, typically assume access to multiple model responses or large databases. These techniques, however, tend to be computationally expensive in practice, thereby limiting their applicability to real-time analysis. In contrast, in this work, we seek to identify hallucinations within a single response in both white-box and black-box settings by analyzing the internal hidden states, attention maps, and output prediction probabilities of an auxiliary LLM. 
In addition, we also study hallucination detection in scenarios where ground-truth references are also available, such as in the setting of Retrieval-Augmented Generation (RAG). We demonstrate that the proposed detection methods are extremely compute-efficient, with speedups of up to 45x and 450x over other baselines, while achieving significant improvements in detection performance over diverse datasets.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95584"} +{"video_file": "Lbuxdzg1pd_39026822.mp4", "openreview_id": "Lbuxdzg1pd", "slideslive_id": 39026822, "venue": "nips2024", "title": "The Secretary Problem with Predicted Additive Gap", "status": "Poster", "keywords": "Secretary Problem;Competitive Analysis;Online Algorithms;Predictions;Robustness;Consistency", "tldr": "The paper studies the secretary problem with a weak piece of information: a single additive gap between weights. Given this, we derive improved guarantees for the secretary problem, beating previously tight bounds.", "abstract": "The secretary problem is one of the fundamental problems in online decision making; a tight competitive ratio for this problem of\n1\n/\ne\n\u2248\n0.368\nhas been known since the 1960s. Much more recently, the study of algorithms with predictions was introduced: The algorithm is equipped with a (possibly erroneous) additional piece of information upfront which can be used to improve the algorithm's performance. Complementing previous work on secretary problems with prior knowledge, we tackle the following question:\nWhat is the weakest piece of information that allows us to break the\n1\n/\ne\nbarrier?\nTo this end, we introduce the secretary problem with predicted additive gap. As in the classical problem, weights are fixed by an adversary and elements appear in random order. In contrast to previous variants of predictions, our algorithm only has access to a much weaker piece of information: an additive gap\nc\n. This gap is the difference between the highest and\nk\n-th highest weight in the sequence. Unlike previous pieces of advice, knowing an exact additive gap does not make the problem trivial.\nOur contribution is twofold. First, we show that for any index\nk\nand any gap\nc\n, we can obtain a competitive ratio of\n0.4\nwhen knowing the exact gap (even if we do not know\nk\n), hence beating the prevalent bound for the classical problem by a constant. Second, a slightly modified version of our algorithm allows to prove standard robustness-consistency properties as well as improved guarantees when knowing a range for the error of the prediction.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95582"} +{"video_file": "Lc8gemv97Y_39027304.mp4", "openreview_id": "Lc8gemv97Y", "slideslive_id": 39027304, "venue": "nips2024", "title": "Dealing with Synthetic Data Contamination in Online Continual Learning", "status": "Poster", "keywords": "Online Continual Learning;Image Generation;Replay-based method;Entropy Selection", "tldr": "Investigating the dataset contamination caused by synthetic data in Online Continual Learning, and proposing a method to alleviate the performance degradation with entropy selection and real-synthetic similarity maximization.", "abstract": "Image generation has shown remarkable results in generating high-fidelity realistic images, in particular with the advancement of diffusion-based models. 
However, the prevalence of AI-generated images may have side effects for the machine learning community that are not clearly identified. Meanwhile, the success of deep learning in computer vision is driven by the massive dataset collected on the Internet. The extensive quantity of synthetic data being added to the Internet would become an obstacle for future researchers to collect \"clean\" datasets without AI-generated content. Prior research has shown that using datasets contaminated by synthetic images may result in performance degradation when used for training. In this paper, we investigate the potential impact of contaminated datasets on Online Continual Learning (CL) research. We experimentally show that contaminated datasets might hinder the training of existing online CL methods. Also, we propose Entropy Selection with Real-synthetic similarity Maximization (ESRM), a method to alleviate the performance deterioration caused by synthetic images when training online CL models. Experiments show that our method can significantly alleviate performance deterioration, especially when the contamination is severe. For reproducibility, the source code of our work is available at https://github.com/maorong-wang/ESRM.", "primary_area": "online_learning", "site": "https://neurips.cc/virtual/2024/poster/95581"} +{"video_file": "LezAEImfoc_39028318.mp4", "openreview_id": "LezAEImfoc", "slideslive_id": 39028318, "venue": "nips2024", "title": "Beyond Accuracy: Tracking more like Human via Visual Search", "status": "Poster", "keywords": "visual object tracking;central-peripheral dichotomy;human behaviour", "tldr": "Inspired by central-peripheral dichotomy, we developed a tracker that emulates human visual search abilities, validated by its high accuracy and error consistency.", "abstract": "Human visual search ability enables efficient and accurate tracking of an arbitrary moving target, which is a significant research interest in cognitive neuroscience. The recently proposed Central-Peripheral Dichotomy (CPD) theory sheds light on how humans effectively process visual information and track moving targets in complex environments. However, existing visual object tracking algorithms still fall short of matching human performance in maintaining tracking over time, particularly in complex scenarios requiring robust visual search skills. These scenarios often involve Spatio-Temporal Discontinuities (i.e., STDChallenge), prevalent in long-term tracking and global instance tracking. To address this issue, we conduct research from a human-like modeling perspective: (1) Inspired by the CPD, we pro- pose a new tracker named CPDTrack to achieve human-like visual search ability. The central vision of CPDTrack leverages the spatio-temporal continuity of videos to introduce priors and enhance localization precision, while the peripheral vision improves global awareness and detects object movements. (2) To further evaluate and analyze STDChallenge, we create the STDChallenge Benchmark. Besides, by incorporating human subjects, we establish a human baseline, creating a high- quality environment specifically designed to assess trackers\u2019 visual search abilities in videos across STDChallenge. (3) Our extensive experiments demonstrate that the proposed CPDTrack not only achieves state-of-the-art (SOTA) performance in this challenge but also narrows the behavioral differences with humans. Additionally, CPDTrack exhibits strong generalizability across various challenging benchmarks. 
In summary, our research underscores the importance of human-like modeling and offers strategic insights for advancing intelligent visual target tracking. Code and models are available at https://github.com/ZhangDailing8/CPDTrack.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95579"} +{"video_file": "Li9YTHoItP_39027057.mp4", "openreview_id": "Li9YTHoItP", "slideslive_id": 39027057, "venue": "nips2024", "title": "Perception of Knowledge Boundary for Large Language Models through Semi-open-ended Question Answering", "status": "Poster", "keywords": "large language model;knowledge boundary;question answering", "tldr": "We explore the knowledge boundary of LLMs by investigating their responses to semi-open-ended questions, using an auxiliary model to uncover ambiguous answers and highlighting LLMs\u2019 challenges in recognizing their knowledge limits.", "abstract": "Large Language Models (LLMs) are widely used for knowledge-seeking purposes yet suffer from hallucinations. The knowledge boundary of an LLM limits its factual understanding, beyond which it may begin to hallucinate. Investigating the perception of LLMs' knowledge boundary is crucial for detecting hallucinations and LLMs' reliable generation. Current studies perceive LLMs' knowledge boundary on questions with concrete answers (close-ended questions) while paying limited attention to semi-open-ended questions that correspond to many potential answers. Some researchers achieve it by judging whether the question is answerable or not. However, this paradigm is not so suitable for semi-open-ended questions, which are usually ``partially answerable questions'' containing both answerable answers and ambiguous (unanswerable) answers. Ambiguous answers are essential for knowledge-seeking, but it may go beyond the knowledge boundary of LLMs. In this paper, we perceive the LLMs' knowledge boundary with semi-open-ended questions by discovering more ambiguous answers. First, we apply an LLM-based approach to construct semi-open-ended questions and obtain answers from a target LLM. Unfortunately, the output probabilities of mainstream black-box LLMs are inaccessible to sample more low-probability ambiguous answers. Therefore, we apply an open-sourced auxiliary model to explore ambiguous answers for the target LLM. We calculate the nearest semantic representation for existing answers to estimate their probabilities, with which we reduce the generation probability of high-probability existing answers to achieve a more effective generation. Finally, we compare the results from the RAG-based evaluation and LLM self-evaluation to categorize four types of ambiguous answers that are beyond the knowledge boundary of the target LLM. Following our method, we construct a dataset to perceive the knowledge boundary for GPT-4. We find that GPT-4 performs poorly on semi-open-ended questions and is often unaware of its knowledge boundary. 
Besides, our auxiliary model, LLaMA-2-13B, is effective in discovering many ambiguous answers, including correct answers neglected by GPT-4 and delusive wrong answers GPT-4 struggles to identify.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95575"} +{"video_file": "LmjLRHVCMG_39025526.mp4", "openreview_id": "LmjLRHVCMG", "slideslive_id": 39025526, "venue": "nips2024", "title": "An Improved Empirical Fisher Approximation for Natural Gradient Descent", "status": "Poster", "keywords": "Empirical Fisher;Natural Gradient Descent;Second-order Optimisation;Deep Learning", "tldr": "An improved Empirical Fisher is proposed to resolve the limitations of Empirical Fisher.", "abstract": "Approximate Natural Gradient Descent (NGD) methods are an important family of optimisers for deep learning models, which use approximate Fisher information matrices to pre-condition gradients during training. The empirical Fisher (EF) method approximates the Fisher information matrix empirically by reusing the per-sample gradients collected during back-propagation. Despite its ease of implementation, the EF approximation has its theoretical and practical limitations. This paper investigates the inversely-scaled projection issue of EF, which is shown to be a major cause of its poor empirical approximation quality. An improved empirical Fisher (iEF) method is proposed to address this issue, which is motivated as a generalised NGD method from a loss reduction perspective, meanwhile retaining the practical convenience of EF. The exact iEF and EF methods are experimentally evaluated using practical deep learning setups, including widely-used setups for parameter-efficient fine-tuning of pre-trained models (T5-base with LoRA and Prompt-Tuning on GLUE tasks, and ViT with LoRA for CIFAR100). Optimisation experiments show that applying exact iEF directly as an optimiser provides strong convergence and generalisation. It achieves the best test performance and the lowest training loss for the majority of the tasks, even when compared to well-tuned AdamW/Adafactor baselines. Additionally, under a novel empirical evaluation framework, the proposed iEF method shows consistently better approximation quality to exact Natural Gradient updates than both the EF and the more expensive sampled Fisher methods, meanwhile demonstrating the superior property of being robust to the choice of damping across tasks and training stages. Improving existing approximate NGD optimisers with iEF is expected to lead to better convergence and robustness. Furthermore, the iEF method also serves as a better approximation method to the Fisher information matrix itself, which enables the improvement of a variety of Fisher-based methods, not limited to the scope of optimisation.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95572"} +{"video_file": "Ln8ogihZ2S_39027412.mp4", "openreview_id": "Ln8ogihZ2S", "slideslive_id": 39027412, "venue": "nips2024", "title": "eXponential FAmily Dynamical Systems (XFADS): Large-scale nonlinear Gaussian state-space modeling", "status": "Poster", "keywords": "variational inference;nonlinear state-space model;dynamical system", "tldr": "We introduce a structured variational approximation and inference algorithm for efficient Bayesian inference in nonlinear state-space models.", "abstract": "State-space graphical models and the variational autoencoder framework provide a principled apparatus for learning dynamical systems from data. 
State-of-the-art probabilistic approaches are often able to scale to large problems at the cost of flexibility of the variational posterior or expressivity of the dynamics model. However, those consolidations can be detrimental if the ultimate goal is to learn a generative model capable of explaining the spatiotemporal structure of the data and making accurate forecasts. We introduce a low-rank structured variational autoencoding framework for nonlinear Gaussian state-space graphical models capable of capturing dense covariance structures that are important for learning dynamical systems with predictive capabilities. Our inference algorithm exploits the covariance structures that arise naturally from sample based approximate Gaussian message passing and low-rank amortized posterior updates -- effectively performing approximate variational smoothing with time complexity scaling linearly in the state dimensionality. In comparisons with other deep state-space model architectures our approach consistently demonstrates the ability to learn a more predictive generative model. Furthermore, when applied to neural physiological recordings, our approach is able to learn a dynamical system capable of forecasting population spiking and behavioral correlates from a small portion of single trials.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95571"} +{"video_file": "LpXV29Ggl3_39025477.mp4", "openreview_id": "LpXV29Ggl3", "slideslive_id": 39025477, "venue": "nips2024", "title": "Exploratory Retrieval-Augmented Planning For Continual Embodied Instruction Following", "status": "Poster", "keywords": "Continual instruction;Embodied planning;Retrieval augmented planning;Integrated task planning", "tldr": "We propose an exploratory retrieval augmented planning framework that utilizes environmental context memory to address continual instruction tasks in non-stationary embodied environments.", "abstract": "This study presents an Exploratory Retrieval-Augmented Planning (ExRAP) framework, designed to tackle continual instruction following tasks of embodied agents in dynamic, non-stationary environments. The framework enhances Large Language Models' (LLMs) embodied reasoning capabilities by efficiently exploring the physical environment and establishing the environmental context memory, thereby effectively grounding the task planning process in time-varying environment contexts. In ExRAP, given multiple continual instruction following tasks, each instruction is decomposed into queries on the environmental context memory and task executions conditioned on the query results. To efficiently handle these multiple tasks that are performed continuously and simultaneously, we implement an exploration-integrated task planning scheme by incorporating the information-based exploration into the LLM-based planning process. Combined with memory-augmented query evaluation, this integrated scheme not only allows for a better balance between the validity of the environmental context memory and the load of environment exploration, but also improves overall task performance. Furthermore, we devise a temporal consistency refinement scheme for query evaluation to address the inherent decay of knowledge in the memory. 
Through experiments with VirtualHome, ALFRED, and CARLA, our approach demonstrates robustness against a variety of embodied instruction following scenarios involving different instruction scales and types, and non-stationarity degrees, and it consistently outperforms other state-of-the-art LLM-based task planning approaches in terms of both goal success rate and execution efficiency.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95569"} +{"video_file": "LpvSHL9lcK_39025830.mp4", "openreview_id": "LpvSHL9lcK", "slideslive_id": 39025830, "venue": "nips2024", "title": "Probabilistic Graph Rewiring via Virtual Nodes", "status": "Poster", "keywords": "probabilistic;graph;rewiring;virtual;nodes;long-range", "tldr": "Our approach enhances message-passing in graphs with probabilistic rewiring and virtual nodes, addressing under-reaching and over-squashing, and outperforming traditional MPNNs and graph transformers in both expressiveness and efficiency.", "abstract": "Message-passing graph neural networks (MPNNs) have emerged as a powerful paradigm for graph-based machine learning. Despite their effectiveness, MPNNs face challenges such as under-reaching and over-squashing, where limited receptive fields and structural bottlenecks hinder information flow in the graph. While graph transformers hold promise in addressing these issues, their scalability is limited due to quadratic complexity regarding the number of nodes, rendering them impractical for larger graphs. Here, we propose implicitly rewired message-passing neural networks (IPR-MPNNs), a novel approach that integrates implicit probabilistic graph rewiring into MPNNs. By introducing a small number of virtual nodes, i.e., adding additional nodes to a given graph and connecting them to existing nodes, in a differentiable, end-to-end manner, IPR-MPNNs enable long-distance message propagation, circumventing quadratic complexity. Theoretically, we demonstrate that IPR-MPNNs surpass the expressiveness of traditional MPNNs. Empirically, we validate our approach by showcasing its ability to mitigate under-reaching and over-squashing effects, achieving state-of-the-art performance across multiple graph datasets. Notably, IPR-MPNNs outperform graph transformers while maintaining significantly faster computational efficiency.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/95568"} +{"video_file": "LqdcdqIeVD_39028799.mp4", "openreview_id": "LqdcdqIeVD", "slideslive_id": 39028799, "venue": "nips2024", "title": "Spherical Frustum Sparse Convolution Network for LiDAR Point Cloud Semantic Segmentation", "status": "Poster", "keywords": "LiDAR Point Cloud Semantic Segmentation;2D Projection;Quantized Information Loss", "tldr": "We propose spherical frustum structure to avoid quantized information loss in conventional 2D spherical projection for LiDAR point cloud semantic segmentation.", "abstract": "LiDAR point cloud semantic segmentation enables the robots to obtain fine-grained semantic information of the surrounding environment. Recently, many works project the point cloud onto the 2D image and adopt the 2D Convolutional Neural Networks (CNNs) or vision transformer for LiDAR point cloud semantic segmentation. 
However, since more than one point can be projected onto the same 2D position but only one point can be preserved, the previous 2D projection-based segmentation methods suffer from inevitable quantized information loss, which results in incomplete geometric structure, especially for small objects. To avoid quantized information loss, in this paper, we propose a novel spherical frustum structure, which preserves all points projected onto the same 2D position. Additionally, a hash-based representation is proposed for memory-efficient spherical frustum storage. Based on the spherical frustum structure, the Spherical Frustum sparse Convolution (SFC) and Frustum Farthest Point Sampling (F2PS) are proposed to convolve and sample the points stored in spherical frustums respectively. Finally, we present the Spherical Frustum sparse Convolution Network (SFCNet) to adopt 2D CNNs for LiDAR point cloud semantic segmentation without quantized information loss. Extensive experiments on the SemanticKITTI and nuScenes datasets demonstrate that our SFCNet outperforms previous 2D projection-based semantic segmentation methods based on conventional spherical projection and shows better performance on small object segmentation by preserving complete geometric structure. Codes will be available at https://github.com/IRMVLab/SFCNet.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95567"} +{"video_file": "Lt6wO0oZ8k_39024393.mp4", "openreview_id": "Lt6wO0oZ8k", "slideslive_id": 39024393, "venue": "nips2024", "title": "Opponent Modeling based on Subgoal Inference", "status": "Poster", "keywords": "multi-agent;deep reinforment learning", "tldr": "We introduce a novel method for opponent modeling based on subgoal inference.", "abstract": "When an agent is in a multi-agent environment, it may face previously unseen opponents, and it is a challenge to cooperate with other agents to accomplish the task together or to maximize its own rewards. Most opponent modeling methods deal with the non-stationarity caused by unknown opponent policies via predicting the opponent\u2019s actions. However, focusing on the opponent\u2019s action is shortsighted, which also constrains the adaptability to unknown opponents in complex tasks. In this paper, we propose opponent modeling based on subgoal inference, which infers the opponent\u2019s subgoals through historical trajectories. As subgoals are likely to be shared by different opponent policies, predicting subgoals can yield better generalization to unknown opponents. Additionally, we design two subgoal selection modes for cooperative games and general-sum games respectively. Empirically, we show that our method achieves more effective adaptation than existing methods in a variety of tasks.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95566"} +{"video_file": "LuCLf4BJsr_39026276.mp4", "openreview_id": "LuCLf4BJsr", "slideslive_id": 39026276, "venue": "nips2024", "title": "Chain of Agents: Large Language Models Collaborating on Long-Context Tasks", "status": "Poster", "keywords": "Large Language Models;Long Context Tasks;Multi-agent Collaboration;LLM Agents", "tldr": "We propose Chain-of-Agents leveraging LLMs collaboration to solve long context tasks and outperform RAG and long context LLMs.", "abstract": "Addressing the challenge of effectively processing long contexts has become a critical issue for Large Language Models (LLMs). 
Two common strategies have emerged: 1) reducing the input length, such as retrieving relevant chunks by Retrieval-Augmented Generation (RAG), and 2) expanding the context window limit of LLMs. However, both strategies have drawbacks: input reduction has no guarantee of covering the part with needed information, while window extension struggles with focusing on the pertinent information for solving the task. To mitigate these limitations, we propose Chain-of-Agents (CoA), a novel framework that harnesses multi-agent collaboration through natural language to enable information aggregation and context reasoning across various LLMs over long-context tasks. CoA consists of multiple worker agents who sequentially communicate to handle different segmented portions of the text, followed by a manager agent who synthesizes these contributions into a coherent final output. CoA processes the entire input by interleaving reading and reasoning, and it mitigates long context focus issues by assigning each agent a short context. We perform a comprehensive evaluation of CoA on a wide range of long-context tasks in question answering, summarization, and code completion, demonstrating significant improvements by up to 10% over strong baselines of RAG, Full-Context, and multi-agent LLMs.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95563"} +{"video_file": "LxxIiInmuF_39026922.mp4", "openreview_id": "LxxIiInmuF", "slideslive_id": 39026922, "venue": "nips2024", "title": "Paths to Equilibrium in Games", "status": "Spotlight", "keywords": "game theory;multi-agent reinforcement learning;strategic dynamics", "tldr": "We study a path connectivity structure of games that is relevant to strategic dynamics and iterative learning algorithms in multi-agent systems. We prove that paths to equilibrium exist in all normal-form games.", "abstract": "In multi-agent reinforcement learning (MARL) and game theory, agents repeatedly interact and revise their strategies as new data arrives, producing a sequence of strategy profiles. This paper studies sequences of strategies satisfying a pairwise constraint inspired by policy updating in reinforcement learning, where an agent who is best responding in one period does not switch its strategy in the next period. This constraint merely requires that optimizing agents do not switch strategies, but does not constrain the non-optimizing agents in any way, and thus allows for exploration. Sequences with this property are called satisficing paths, and arise naturally in many MARL algorithms. A fundamental question about strategic dynamics is such: for a given game and initial strategy profile, is it always possible to construct a satisficing path that terminates at an equilibrium? The resolution of this question has implications about the capabilities or limitations of a class of MARL algorithms. We answer this question in the affirmative for normal-form games. 
Our analysis reveals a counterintuitive insight that suboptimal, and perhaps even reward deteriorating, strategic updates are key to driving play to equilibrium along a satisficing path.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/95556"} +{"video_file": "LyAFfdx8YF_39027341.mp4", "openreview_id": "LyAFfdx8YF", "slideslive_id": 39027341, "venue": "nips2024", "title": "PEAC: Unsupervised Pre-training for Cross-Embodiment Reinforcement Learning", "status": "Poster", "keywords": "cross-embodiment reinforcement learning;unsupervised reinforcement learning;cross-embodiment exploration;cross-embodiment skill discovery", "tldr": "We propose an unsupervised pre-training method named Pre-trained Embodiment-Aware Control (PEAC) for handling Cross-Embodiment Reinforcement Learning", "abstract": "Designing generalizable agents capable of adapting to diverse embodiments has achieved significant attention in Reinforcement Learning (RL), which is critical for deploying RL agents in various real-world applications. Previous Cross-Embodiment RL approaches have focused on transferring knowledge across embodiments within specific tasks. These methods often result in knowledge tightly coupled with those tasks and fail to adequately capture the distinct characteristics of different embodiments. To address this limitation, we introduce the notion of Cross-Embodiment Unsupervised RL (CEURL), which leverages unsupervised learning to enable agents to acquire embodiment-aware and task-agnostic knowledge through online interactions within reward-free environments. We formulate CEURL as a novel Controlled Embodiment Markov Decision Process (CE-MDP) and systematically analyze CEURL's pre-training objectives under CE-MDP. Based on these analyses, we develop a novel algorithm Pre-trained Embodiment-Aware Control (PEAC) for handling CEURL, incorporating an intrinsic reward function specifically designed for cross-embodiment pre-training. PEAC not only provides an intuitive optimization strategy for cross-embodiment pre-training but also can integrate flexibly with existing unsupervised RL methods, facilitating cross-embodiment exploration and skill discovery. Extensive experiments in both simulated (e.g., DMC and Robosuite) and real-world environments (e.g., legged locomotion) demonstrate that PEAC significantly improves adaptation performance and cross-embodiment generalization, demonstrating its effectiveness in overcoming the unique challenges of CEURL. The project page and code are in https://yingchengyang.github.io/ceurl.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95555"} +{"video_file": "Lzl8qJYXv5_39028608.mp4", "openreview_id": "Lzl8qJYXv5", "slideslive_id": 39028608, "venue": "nips2024", "title": "Estimating the Hallucination Rate of Generative AI", "status": "Poster", "keywords": "Uncertainty Quantification;Large Language Models;Conditional Generative Models;Hallucination Prediction", "tldr": "A method for estimating the hallucination rate of generative AI for in-context learning", "abstract": "This paper presents a method for estimating the hallucination rate for in-context learning (ICL) with generative AI. In ICL, a conditional generative model (CGM) is prompted with a dataset and a prediction question and asked to generate a response. 
One interpretation of ICL assumes that the CGM computes the posterior predictive of an unknown Bayesian model, which implicitly defines a joint distribution over observable datasets and latent mechanisms. This joint distribution factorizes into two components: the model prior over mechanisms and the model likelihood of datasets given a mechanism. With this perspective, we define a \\textit{hallucination} as a generated response to the prediction question with low model likelihood given the mechanism. We develop a new method that takes an ICL problem and estimates the probability that a CGM will generate a hallucination. Our method only requires generating prediction questions and responses from the CGM and evaluating its response log probability. We empirically evaluate our method using large language models for synthetic regression and natural language ICL tasks.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95553"} +{"video_file": "M1PRU0x1Iz_39027923.mp4", "openreview_id": "M1PRU0x1Iz", "slideslive_id": 39027923, "venue": "nips2024", "title": "FedAvP: Augment Local Data via Shared Policy in Federated Learning", "status": "Poster", "keywords": "federated learning;data augmentation", "tldr": "we introduce a federated data augmentation algorithm that shares the augmentation policies.", "abstract": "Federated Learning (FL) allows multiple clients to collaboratively train models without directly sharing their private data. While various data augmentation techniques have been actively studied in the FL environment, most of these methods share input-level or feature-level data information over communication, posing potential privacy leakage. In response to this challenge, we introduce a federated data augmentation algorithm named FedAvP that shares only the augmentation policies, not the data-related information. For data security and efficient policy search, we interpret the policy loss as a meta update loss in standard FL algorithms and utilize the first-order gradient information to further enhance privacy and reduce communication costs. Moreover, we propose a meta-learning method to search for adaptive personalized policies tailored to heterogeneous clients. Our approach outperforms existing best performing augmentation policy search methods and federated data augmentation methods, in the benchmarks for heterogeneous FL.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95551"} +{"video_file": "M2QREVHK1V_39025034.mp4", "openreview_id": "M2QREVHK1V", "slideslive_id": 39025034, "venue": "nips2024", "title": "Perceptual Fairness in Image Restoration", "status": "Poster", "keywords": "fairness;bias;inverse problems;image restoration;image processing;machine learning;computer vision;responsible AI;super-resolution;deblurring;denoising", "tldr": "We propose a new definition of fairness for image restoration algorithms, draw its connection to previous ones, study its theoretical properties, and demonstrate its practical utility.", "abstract": "Fairness in image restoration tasks is the desire to treat different sub-groups of images equally well. Existing definitions of fairness in image restoration are highly restrictive. They consider a reconstruction to be a correct outcome for a group (e.g., women) only if it falls within the group's set of ground truth images (e.g., natural images of women); otherwise, it is considered entirely incorrect. 
Consequently, such definitions are prone to controversy, as errors in image restoration can manifest in various ways. In this work we offer an alternative approach towards fairness in image restoration, by considering the Group Perceptual Index (GPI), which we define as the statistical distance between the distribution of the group's ground truth images and the distribution of their reconstructions. We assess the fairness of an algorithm by comparing the GPI of different groups, and say that it achieves perfect Perceptual Fairness (PF) if the GPIs of all groups are identical. We motivate and theoretically study our new notion of fairness, draw its connection to previous ones, and demonstrate its utility on state-of-the-art face image restoration algorithms.", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/95549"} +{"video_file": "M2UzLRoqic_39027663.mp4", "openreview_id": "M2UzLRoqic", "slideslive_id": 39027663, "venue": "nips2024", "title": "Reducing Transformer Key-Value Cache Size with Cross-Layer Attention", "status": "Poster", "keywords": "transformers;attention;KV cache;LLMs", "tldr": "By sharing key and value activations between adjacent layers, we can reduce the key-value cache memory footprint of multi-query attention transformers by 2x with negligible impact on accuracy.", "abstract": "Key-value (KV) caching plays an essential role in accelerating decoding for transformer-based autoregressive large language models (LLMs). However, the amount of memory required to store the KV cache can become prohibitive at long sequence lengths and large batch sizes. Since the invention of the transformer, two of the most effective interventions discovered for reducing the size of the KV cache have been Multi-Query Attention (MQA) and its generalization, Grouped-Query Attention (GQA). MQA and GQA both modify the design of the attention block so that multiple query heads can share a single key/value head, reducing the number of distinct key/value heads by a large factor while only minimally degrading accuracy. In this paper, we show that it is possible to take Multi-Query Attention a step further by also sharing key and value heads between adjacent layers, yielding a new attention design we call Cross-Layer Attention (CLA). With CLA, we find that it is possible to reduce the size of the KV cache by another\n2\n\u00d7\nwhile maintaining nearly the same accuracy as unmodified MQA. In experiments training 1B- and 3B-parameter models from scratch, we demonstrate that CLA provides a Pareto improvement over the memory/accuracy tradeoffs which are possible with traditional MQA, potentially enabling future models to operate at longer sequence lengths and larger batch sizes than would otherwise be possible.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95548"} +{"video_file": "M3BIsgGQNb_39027400.mp4", "openreview_id": "M3BIsgGQNb", "slideslive_id": 39027400, "venue": "nips2024", "title": "Meta 3D AssetGen: Text-to-Mesh Generation with High-Quality Geometry, Texture, and PBR Materials", "status": "Poster", "keywords": "text to 3d;3d generative models;sparse view reconstruction;3d shape generation", "tldr": "A fast text to 3D generative model with high quality geometry, textures which supports PBR decomposition of materials", "abstract": "We present Meta 3D AssetGen (AssetGen), a significant advancement in text-to-3D generation which produces faithful, high-quality meshes with texture and material control. 
Compared to works that bake shading in the 3D object\u2019s appearance, AssetGen outputs physically-based rendering (PBR) materials, supporting realistic relighting. AssetGen generates first several views of the object with separate shaded and albedo appearance channels, and then reconstructs colours, metalness and roughness in 3D, using a deferred shading loss for efficient supervision. It also uses a sign-distance function to represent 3D shape more reliably and introduces a corresponding loss for direct shape supervision. This is implemented using fused kernels for high memory efficiency. After mesh extraction, a texture refinement transformer operating in UV space significantly improves sharpness and details. AssetGen achieves 17% improvement in Chamfer Distance and 40% in LPIPS over the best concurrent work for few-view reconstruction, and a human preference of 72% over the best industry competitors of comparable speed, including those that support PBR. Project page with generated assets: https://assetgen.github.io", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95547"} +{"video_file": "M75dBr10dZ_39027482.mp4", "openreview_id": "M75dBr10dZ", "slideslive_id": 39027482, "venue": "nips2024", "title": "Structured Multi-Track Accompaniment Arrangement via Style Prior Modelling", "status": "Poster", "keywords": "symbolic music generation;style transfer;accompaniment arrangement", "tldr": "A cascaded style prior modelling approach to whole-song multi-track accompaniment arrangement", "abstract": "In the realm of music AI, arranging rich and structured multi-track accompaniments from a simple lead sheet presents significant challenges. Such challenges include maintaining track cohesion, ensuring long-term coherence, and optimizing computational efficiency. In this paper, we introduce a novel system that leverages prior modelling over disentangled style factors to address these challenges. Our method presents a two-stage process: initially, a piano arrangement is derived from the lead sheet by retrieving piano texture styles; subsequently, a multi-track orchestration is generated by infusing orchestral function styles into the piano arrangement. Our key design is the use of vector quantization and a unique multi-stream Transformer to model the long-term flow of the orchestration style, which enables flexible, controllable, and structured music generation. Experiments show that by factorizing the arrangement task into interpretable sub-stages, our approach enhances generative capacity while improving efficiency. Additionally, our system supports a variety of music genres and provides style control at different composition hierarchies. 
We further show that our system achieves superior coherence, structure, and overall arrangement quality compared to existing baselines.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95545"} +{"video_file": "M8dy0ZuSb1_39025009.mp4", "openreview_id": "M8dy0ZuSb1", "slideslive_id": 39025009, "venue": "nips2024", "title": "Improving robustness to corruptions with multiplicative weight perturbations", "status": "Spotlight", "keywords": "covariate shift;corruption robustness;generalization;regularization;training method;deep learning", "tldr": "We show that simply perturbing weights during training with random multiplicative noises can improve robustness of neural networks to a wide range of corruptions.", "abstract": "Deep neural networks (DNNs) excel on clean images but struggle with corrupted ones. Incorporating specific corruptions into the data augmentation pipeline can improve robustness to those corruptions but may harm performance on clean images and other types of distortion. In this paper, we introduce an alternative approach that improves the robustness of DNNs to a wide range of corruptions without compromising accuracy on clean images. We first demonstrate that input perturbations can be mimicked by multiplicative perturbations in the weight space. Leveraging this, we propose Data Augmentation via Multiplicative Perturbation (DAMP), a training method that optimizes DNNs under random multiplicative weight perturbations. We also examine the recently proposed Adaptive Sharpness-Aware Minimization (ASAM) and show that it optimizes DNNs under adversarial multiplicative weight perturbations. Experiments on image classification datasets (CIFAR-10/100, TinyImageNet and ImageNet) and neural network architectures (ResNet50, ViT-S/16, ViT-B/16) show that DAMP enhances model generalization performance in the presence of corruptions across different settings. Notably, DAMP is able to train a ViT-S/16 on ImageNet from scratch, reaching the top-1 error of 23.7% which is comparable to ResNet50 without extensive data augmentations.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/95542"} +{"video_file": "MFKfm5scHi_39025999.mp4", "openreview_id": "MFKfm5scHi", "slideslive_id": 39025999, "venue": "nips2024", "title": "Approximately Pareto-optimal Solutions for Bi-Objective k-Clustering", "status": "Poster", "keywords": "multi-criteria clustering;approximation algorithms;Pareto-optimal solutions;k-means;single linkage;k-median;k-center", "tldr": "We develop novel algorithms for approximating the set of Pareto-optimal clusterings for various combinations of two objectives.", "abstract": "As a major unsupervised learning method, clustering has received a lot of attention over multiple decades. The various clustering problems that have been studied intensively include, e.g., the\nk\n-means problem and the\nk\n-center problem. However, in applications, it is common that good clusterings should optimize multiple objectives (e.g., visualizing data on a map by clustering districts into areas that are both geographically compact but also homogeneous with respect to the data). We study combinations of different objectives, for example optimizing\nk\n-center and\nk\n-means simultaneously or optimizing\nk\n-center with respect to two different metrics. Usually these objectives are conflicting and cannot be optimized simultaneously, making it necessary to find trade-offs. 
We develop novel algorithms for computing the set of Pareto-optimal solutions (approximately) for various combinations of two objectives. Our algorithms achieve provable approximation guarantees and we demonstrate in several experiments that the (approximate) Pareto set contains good clusterings that cannot be found by considering one of the objectives separately.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95536"} +{"video_file": "MLgFu6dQYc_39026654.mp4", "openreview_id": "MLgFu6dQYc", "slideslive_id": 39026654, "venue": "nips2024", "title": "How to Boost Any Loss Function", "status": "Poster", "keywords": "boosting;loss functions;zeroth-order optimisation", "tldr": "An algorithm that essentially provably boosts any loss", "abstract": "Boosting is a highly successful ML-born optimization setting in which one is required to computationally efficiently learn arbitrarily good models based on the access to a weak learner oracle, providing classifiers performing at least slightly differently from random guessing. A key difference with gradient-based optimization is that boosting's original model does not requires access to first order information about a loss, yet the decades long history of boosting has quickly evolved it into a first order optimization setting -- sometimes even wrongfully defining it as such. Owing to recent progress extending gradient-based optimization to use only a loss' zeroth ($0^{th}$) order information to learn, this begs the question: what loss functions be efficiently optimized with boosting and what is the information really needed for boosting to meet the original boosting blueprint's requirements ?\nWe provide a constructive formal answer essentially showing that any loss function can be optimized with boosting and thus boosting can achieve a feat not yet known to be possible in the classical $0^{th}$ order setting, since loss functions are not required to be be convex, nor differentiable or Lipschitz -- and in fact not required to be continuous either. Some tools we use are rooted in quantum calculus, the mathematical field -- not to be confounded with quantum computation -- that studies calculus without passing to the limit, and thus without using first order information.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95532"} +{"video_file": "MN4nt01TeO_39025019.mp4", "openreview_id": "MN4nt01TeO", "slideslive_id": 39025019, "venue": "nips2024", "title": "Adaptive Randomized Smoothing: Certified Adversarial Robustness for Multi-Step Defences", "status": "Spotlight", "keywords": "Robustness;Adversarial examples;Adaptive defenses;Certified test-time defenses;Randomized Smoothing", "tldr": "Adaptive Randomized Smoothing soundly and flexibly certifies the predictions of test-time adaptive models against adversarial examples while improving certified and standard accuracies.", "abstract": "We propose Adaptive Randomized Smoothing (ARS) to certify the predictions of our test-time adaptive models against adversarial examples. ARS extends the analysis of randomized smoothing using\nf\n-Differential Privacy to certify the adaptive composition of multiple steps. For the first time, our theory covers the sound adaptive composition of general and high-dimensional functions of noisy inputs. We instantiate ARS on deep image classification to certify predictions against adversarial examples of bounded\nL\n\u221e\nnorm. 
In the\nL\n\u221e\nthreat model, ARS enables flexible adaptation through high-dimensional input-dependent masking. We design adaptivity benchmarks, based on CIFAR-10 and CelebA, and show that ARS improves standard test accuracy by 1 to 15% points. On ImageNet, ARS improves certified test accuracy by up to 1.6% points over standard RS without adaptivity. Our code is available at https://github.com/ubc-systopia/adaptive-randomized-smoothing.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/95529"} +{"video_file": "MNg331t8Tj_39026875.mp4", "openreview_id": "MNg331t8Tj", "slideslive_id": 39026875, "venue": "nips2024", "title": "Advancing Fine-Grained Classification by Structure and Subject Preserving Augmentation", "status": "Poster", "keywords": "Fine-grained Visual Classification;Data Augmentation;Synthetic Data;Diffusion Models;Image Classification", "tldr": "Our paper introduces SaSPA, a novel data augmentation (DA) method for fine-grained visual classification that surpasses existing DA methods by generating synthetic data that is both more diverse and accurately represents fine-grained classes.", "abstract": "Fine-grained visual classification (FGVC) involves classifying closely related subcategories. This task is inherently difficult due to the subtle differences between classes and the high intra-class variance. Moreover, FGVC datasets are typically small and challenging to gather, thus highlighting a significant need for effective data augmentation. Recent advancements in text-to-image diffusion models have introduced new possibilities for data augmentation in image classification. While these models have been used to generate training data for classification tasks, their effectiveness in full-dataset training of FGVC models remains under-explored. Recent techniques that rely on text-to-image generation or Img2Img methods, such as SDEdit, often struggle to generate images that accurately represent the class while modifying them to a degree that significantly increases the dataset's diversity. To address these challenges, we present SaSPA: Structure and Subject Preserving Augmentation. Contrary to recent methods, our method does not use real images as guidance, thereby increasing generation flexibility and promoting greater diversity. To ensure accurate class representation, we employ conditioning mechanisms, specifically by conditioning on image edges and subject representation. We conduct extensive experiments and benchmark SaSPA against both traditional and generative data augmentation techniques. SaSPA consistently outperforms all established baselines across multiple settings, including full dataset training and contextual bias. 
Additionally, our results reveal interesting patterns in using synthetic data for FGVC models; for instance, we find a relationship between the amount of real data used and the optimal proportion of synthetic data.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95527"} +{"video_file": "MOFwt8OeXr_39024575.mp4", "openreview_id": "MOFwt8OeXr", "slideslive_id": 39024575, "venue": "nips2024", "title": "Generalizing Consistency Policy to Visual RL with Prioritized Proximal Experience Regularization", "status": "Poster", "keywords": "Visual Reinforcement Learning;Reinforcement Learning;Consistency Model;Dormant Neuron Phenomenon;Diffusion Model", "tldr": "This paper extends consistency policy to visual RL and achieves new SOTA performance on 21 visual control tasks.", "abstract": "With high-dimensional state spaces, visual reinforcement learning (RL) faces significant challenges in exploitation and exploration, resulting in low sample efficiency and training stability. As a time-efficient diffusion model, although consistency models have been validated in online state-based RL, it is still an open question whether it can be extended to visual RL. In this paper, we investigate the impact of non-stationary distribution and the actor-critic framework on consistency policy in online RL, and find that consistency policy was unstable during the training, especially in visual RL with the high-dimensional state space. To this end, we suggest sample-based entropy regularization to stabilize the policy training, and propose a consistency policy with prioritized proximal experience regularization (CP3ER) to improve sample efficiency. CP3ER achieves new state-of-the-art (SOTA) performance in 21 tasks across DeepMind control suite and Meta-world. To our knowledge, CP3ER is the first method to apply diffusion/consistency models to visual RL and demonstrates the potential of consistency models in visual RL.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95526"} +{"video_file": "MP7j58lbWO_39025660.mp4", "openreview_id": "MP7j58lbWO", "slideslive_id": 39025660, "venue": "nips2024", "title": "Probing Social Bias in Labor Market Text Generation by ChatGPT: A Masked Language Model Approach", "status": "Poster", "keywords": "social bias;LLM;NLP;sociology;labor market", "tldr": "This study examines how ChatGPT-generated job applications may reinforce social biases in the labor market through language patterns with a novel bias evaluation framework.", "abstract": "As generative large language models (LLMs) such as ChatGPT gain widespread adoption in various domains, their potential to propagate and amplify social biases, particularly in high-stakes areas such as the labor market, has become a pressing concern. AI algorithms are not only widely used in the selection of job applicants, individual job seekers may also make use of generative LLMs to help develop their job application materials. Against this backdrop, this research builds on a novel experimental design to examine social biases within ChatGPT-generated job applications in response to real job advertisements. By simulating the process of job application creation, we examine the language patterns and biases that emerge when the model is prompted with diverse job postings. 
Notably, we present a novel bias evaluation framework based on Masked Language Models to quantitatively assess social bias based on validated inventories of social cues/words, enabling a systematic analysis of the language used. Our findings show that the increasing adoption of generative AI, not only by employers but also increasingly by individual job seekers, can reinforce and exacerbate gender and social inequalities in the labor market through the use of biased and gendered language.", "primary_area": "machine_learning_for_social_sciences", "site": "https://neurips.cc/virtual/2024/poster/95525"}
{"video_file": "MQIET1VfoV_39028445.mp4", "openreview_id": "MQIET1VfoV", "slideslive_id": 39028445, "venue": "nips2024", "title": "Boosting Sample Efficiency and Generalization in Multi-agent Reinforcement Learning via Equivariance", "status": "Poster", "keywords": "Equivariant Graph Neural Networks;Reinforcement Learning;Multi-agent Reinforcement Learning;Symmetry;generalization;sample efficiency;MARL", "tldr": "We demonstrate improved sample efficiency and generalization in Multi-Agent Reinforcement Learning (MARL) via using Exploration-enhanced Equivariant Neural Networks instead of traditional function approximators such as MLPs.", "abstract": "Multi-Agent Reinforcement Learning (MARL) struggles with sample inefficiency and poor generalization [1]. These challenges are partially due to a lack of structure or inductive bias in the neural networks typically used in learning the policy. One such form of structure that is commonly observed in multi-agent scenarios is symmetry. The field of Geometric Deep Learning has developed Equivariant Graph Neural Networks (EGNN) that are equivariant (or symmetric) to rotations, translations, and reflections of nodes. Incorporating equivariance has been shown to improve learning efficiency and decrease error [2]. In this paper, we demonstrate that EGNNs improve the sample efficiency and generalization in MARL. However, we also show that a naive application of EGNNs to MARL results in poor early exploration due to a bias in the EGNN structure. To mitigate this bias, we present Exploration-enhanced Equivariant Graph Neural Networks or E2GN2. We compare E2GN2 to other common function approximators using common MARL benchmarks MPE and SMACv2. E2GN2 demonstrates a significant improvement in sample efficiency, greater final reward convergence, and a 2x-5x gain over standard GNNs in our generalization tests. These results pave the way for more reliable and effective solutions in complex multi-agent systems.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95522"}
{"video_file": "MSsQDWUWpd_39027710.mp4", "openreview_id": "MSsQDWUWpd", "slideslive_id": 39027710, "venue": "nips2024", "title": "Analysis of Corrected Graph Convolutions", "status": "Poster", "keywords": "Node classification;partial classification;exact classification;contextual stochastic block model;graph convolution;spectral analysis", "tldr": "Analysis of the corrected graph convolution on the contextual stochastic block model for arbitrary number of convolutions.", "abstract": "Machine learning for node classification on graphs is a prominent area driven by applications such as recommendation systems. State-of-the-art models often use multiple graph convolutions on the data, as empirical evidence suggests they can enhance performance.
However, it has been shown empirically and theoretically that too many graph convolutions can degrade performance significantly, a phenomenon known as oversmoothing. In this paper, we provide a rigorous theoretical analysis, based on the two-class contextual stochastic block model (CSBM), of the performance of vanilla graph convolution from which we remove the principal eigenvector to avoid oversmoothing. We perform a spectral analysis for k rounds of corrected graph convolutions, and we provide results for partial and exact classification. For partial classification, we show that each round of convolution can reduce the misclassification error exponentially up to a saturation level, after which performance does not worsen. We also extend this analysis to the multi-class setting with features distributed according to a Gaussian mixture model. For exact classification, we show that the separability threshold can be improved exponentially up to O(log n / log log n) corrected convolutions.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95519"}
{"video_file": "MTMShU5QaC_39025241.mp4", "openreview_id": "MTMShU5QaC", "slideslive_id": 39025241, "venue": "nips2024", "title": "Aligning Diffusion Models by Optimizing Human Utility", "status": "Poster", "keywords": "text-to-image; diffusion; computer vision;", "tldr": "We extend the utility maximization framework to the setting of diffusion models and use it to align text-to-image diffusion models with human preferences using only per-image binary preference signals, e.g., likes and dislikes.", "abstract": "We present Diffusion-KTO, a novel approach for aligning text-to-image diffusion models by formulating the alignment objective as the maximization of expected human utility. Unlike previous methods, Diffusion-KTO does not require collecting pairwise preference data nor training a complex reward model. Instead, our objective uses per-image binary feedback signals, e.g. likes or dislikes, to align the model with human preferences. After fine-tuning using Diffusion-KTO, text-to-image diffusion models exhibit improved performance compared to existing techniques, including supervised fine-tuning and Diffusion-DPO, both in terms of human judgment and automatic evaluation metrics such as PickScore and ImageReward. Overall, Diffusion-KTO unlocks the potential of leveraging readily available per-image binary preference signals and broadens the applicability of aligning text-to-image diffusion models with human preferences.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95518"}
{"video_file": "MU27zjHBcW_39025514.mp4", "openreview_id": "MU27zjHBcW", "slideslive_id": 39025514, "venue": "nips2024", "title": "DePLM: Denoising Protein Language Models for Property Optimization", "status": "Poster", "keywords": "protein language model;protein engineering;diffusion model;evolutionary information", "tldr": "We introduce Denoising Protein Language Models (DePLM), a novel approach that refines the evolutionary information embodied in PLMs for improved protein optimization", "abstract": "Protein optimization is a fundamental biological task aimed at enhancing the performance of proteins by modifying their sequences. Computational methods primarily rely on evolutionary information (EI) encoded by protein language models (PLMs) to predict the fitness landscape for optimization. However, these methods suffer from a few limitations.
(1) Evolutionary processes involve the simultaneous consideration of multiple functional properties, often overshadowing the specific property of interest. (2) Measurements of these properties tend to be tailored to experimental conditions, leading to reduced generalizability of trained models to novel proteins. To address these limitations, we introduce Denoising Protein Language Models (DePLM), a novel approach that refines the evolutionary information embodied in PLMs for improved protein optimization. Specifically, we conceptualize EI as comprising both property-relevant and irrelevant information, with the latter acting as “noise” for the optimization task at hand. Our approach involves denoising this EI in PLMs through a diffusion process conducted in the rank space of property values, thereby enhancing model generalization and ensuring dataset-agnostic learning. Extensive experimental results have demonstrated that DePLM not only surpasses the state-of-the-art in mutation effect prediction but also exhibits strong generalization capabilities for novel proteins.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/95517"}
{"video_file": "MXOzgjlWDF_39026752.mp4", "openreview_id": "MXOzgjlWDF", "slideslive_id": 39026752, "venue": "nips2024", "title": "Structured Unrestricted-Rank Matrices for Parameter Efficient Finetuning", "status": "Poster", "keywords": "Low Displacement Rank;Structured Matrices;Transformers;Vision Transformers;Fine-tuning", "tldr": "We propose a new class of structured unrestricted-rank matrices, including low displacement rank matrices, for the parameter efficient fine-tuning of Transformers.", "abstract": "Recent efforts to scale Transformer models have demonstrated rapid progress across a wide range of tasks (Wei et al. 2022). However, fine-tuning these models for downstream tasks is quite expensive due to their large parameter counts. Parameter-efficient fine-tuning (PEFT) approaches have emerged as a viable alternative, allowing us to fine-tune models by updating only a small number of parameters. In this work, we propose a general framework for parameter efficient fine-tuning (PEFT), based on structured unrestricted-rank matrices (SURM) which can serve as a drop-in replacement for popular approaches such as Adapters and LoRA. Unlike other methods like LoRA, SURMs give us more flexibility in finding the right balance between compactness and expressiveness. This is achieved by using low displacement rank matrices (LDRMs), which haven't been used in this context before. SURMs remain competitive with baselines, often providing significant quality improvements while using a smaller parameter budget.
SURMs achieve: 5-7% accuracy gains on various image classification tasks while replacing low-rank matrices in LoRA and: up to 12x reduction of the number of parameters in adapters (with virtually no loss in quality) on the GLUE benchmark.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95515"} +{"video_file": "MXY0qsGgeO_39025929.mp4", "openreview_id": "MXY0qsGgeO", "slideslive_id": 39025929, "venue": "nips2024", "title": "ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization", "status": "Poster", "keywords": "Text-to-Image Generation;Diffusion Models;Test-Time Training;Reward Models;Learning From Human Feedback", "tldr": "Optimizing initial noise of one-step diffusion models leads to significantly improved results", "abstract": "Text-to-Image (T2I) models have made significant advancements in recent years, but they still struggle to accurately capture intricate details specified in complex compositional prompts. While fine-tuning T2I models with reward objectives has shown promise, it suffers from \"reward hacking\" and may not generalize well to unseen prompt distributions. In this work, we propose Reward-based Noise Optimization (ReNO), a novel approach that enhances T2I models at inference by optimizing the initial noise based on the signal from one or multiple human preference reward models. Remarkably, solving this optimization problem with gradient ascent for 50 iterations yields impressive results on four different one-step models across two competitive benchmarks, T2I-CompBench and GenEval. Within a computational budget of 20-50 seconds, ReNO-enhanced one-step models consistently surpass the performance of all current open-source Text-to-Image models. Extensive user studies demonstrate that our model is preferred nearly twice as often compared to the popular SDXL model and is on par with the proprietary Stable Diffusion 3 with 8B parameters. Moreover, given the same computational resources, a ReNO-optimized one-step model outperforms widely-used open-source models such as SDXL and PixArt-alpha, highlighting the efficiency and effectiveness of ReNO in enhancing T2I model performance at inference time.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95513"} +{"video_file": "MXzr10iX2d_39025163.mp4", "openreview_id": "MXzr10iX2d", "slideslive_id": 39025163, "venue": "nips2024", "title": "TopoLogic: An Interpretable Pipeline for Lane Topology Reasoning on Driving Scenes", "status": "Poster", "keywords": "Autonomous Driving;Topology Reasoning;Lane Detection;High Definition Map Learning", "tldr": "We proposed an interpretable pipline based on lane geometric distance and semantic similarity that has significantly enhanced the performance of lane topology reasoning on driving scenes.", "abstract": "As an emerging task that integrates perception and reasoning, topology reasoning in autonomous driving scenes has recently garnered widespread attention. However, existing work often emphasizes \"perception over reasoning\": they typically boost reasoning performance by enhancing the perception of lanes and directly adopt vanilla MLPs to learn lane topology from lane query. This paradigm overlooks the geometric features intrinsic to the lanes themselves and are prone to being influenced by inherent endpoint shifts in lane detection. 
To tackle this issue, we propose an interpretable method for lane topology reasoning based on lane geometric distance and lane query similarity, named TopoLogic. This method mitigates the impact of endpoint shifts in geometric space, and introduces explicit similarity calculation in semantic space as a complement. By integrating results from both spaces, our method provides more comprehensive information for lane topology. Ultimately, our approach significantly outperforms the existing state-of-the-art methods on the mainstream benchmark OpenLane-V2 (23.9 vs. 10.9 in TOPll and 44.1 vs. 39.8 in OLS on subset A). Additionally, our proposed geometric distance topology reasoning method can be incorporated into well-trained models without re-training, significantly enhancing the performance of lane topology reasoning. The code is released at https://github.com/Franpin/TopoLogic.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95511"}
{"video_file": "MYI443zCvv_39026763.mp4", "openreview_id": "MYI443zCvv", "slideslive_id": 39026763, "venue": "nips2024", "title": "DEPrune: Depth-wise Separable Convolution Pruning for Maximizing GPU Parallelism", "status": "Poster", "keywords": "Pruning;Depth-wise Separable Convolution;GPU", "tldr": "Hardware-aware Pruning for Fast Depth-wise Separable Convolution", "abstract": "Depth-wise Separable Convolution (DSConv) has a powerful representation even with fewer parameters and computation, leading to its adoption by almost all of the state-of-the-art CNN models. DSConv models are already compact, making it hard to apply pruning, and there are few previous pruning techniques that target depth-wise convolution (DW-conv). In this paper, we present Depth-wise Separable Convolution Pruning (DEPrune), a novel pruning method applied to both point-wise and depth-wise convolutions. DEPrune is optimized by analyzing the computation of DSConv on GPUs. DEPrune employs a fine-grained pruning approach, yet it achieves the structured sparsity typically absent in fine-grained pruning, enabling practical hardware acceleration. Moreover, this method maintains a high pruning ratio without causing any accuracy drop. We additionally present techniques that further enhance DEPrune performance: 1) balanced workload tuning (BWT), and 2) hardware-aware sparsity recalibration (HSR). Experiment results show that DEPrune achieves up to 3.74× practical speedup in DSConv inference on GPUs while maintaining the accuracy of EfficientNet-B0 on ImageNet.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95510"}
{"video_file": "MaDykgj4Ru_39025907.mp4", "openreview_id": "MaDykgj4Ru", "slideslive_id": 39025907, "venue": "nips2024", "title": "BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models", "status": "Poster", "keywords": "Bayesian Neural Network;Finetuning;Large Language Models", "tldr": "We introduce a principled Bayesian framework for improving large language models' generalization and uncertainty estimation.", "abstract": "Large Language Models (LLMs) often suffer from overconfidence during inference, particularly when adapted to downstream domain-specific tasks with limited data. Previous work addresses this issue by employing approximate Bayesian estimation after the LLMs are trained, enabling them to quantify uncertainty. However, such post-training approaches' performance is severely limited by the parameters learned during training.
In this paper, we go beyond post-training Bayesianization and propose Bayesian Low-Rank Adaptation by Backpropagation (BLoB), an algorithm that continuously and jointly adjusts both the mean and covariance of LLM parameters throughout the whole fine-tuning process. Our empirical results verify the effectiveness of BLoB in terms of generalization and uncertainty estimation, when evaluated on both in-distribution and out-of-distribution data.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95507"} +{"video_file": "MbZuh8L0Xg_39025509.mp4", "openreview_id": "MbZuh8L0Xg", "slideslive_id": 39025509, "venue": "nips2024", "title": "DiffPhyCon: A Generative Approach to Control Complex Physical Systems", "status": "Poster", "keywords": "Physical systems control;physical simulation;generative models;prior reweighting", "tldr": "We introduce a novel method for controlling complex physical systems using generative models, by minimizing the learned generative energy function and specified objective", "abstract": "Controlling the evolution of complex physical systems is a fundamental task across science and engineering. Classical techniques suffer from limited applicability or huge computational costs. On the other hand, recent deep learning and reinforcement learning-based approaches often struggle to optimize long-term control sequences under the constraints of system dynamics. In this work, we introduce Diffusion Physical systems Control (DiffPhyCon), a new class of method to address the physical systems control problem. DiffPhyCon excels by simultaneously minimizing both the learned generative energy function and the predefined control objectives across the entire trajectory and control sequence. Thus, it can explore globally and plan near-optimal control sequences. Moreover, we enhance DiffPhyCon with prior reweighting, enabling the discovery of control sequences that significantly deviate from the training distribution. We test our method on three tasks: 1D Burgers' equation, 2D jellyfish movement control, and 2D high-dimensional smoke control, where our generated jellyfish dataset is released as a benchmark for complex physical system control research. Our method outperforms widely applied classical approaches and state-of-the-art deep learning and reinforcement learning methods. Notably, DiffPhyCon unveils an intriguing fast-close-slow-open pattern observed in the jellyfish, aligning with established findings in the field of fluid dynamics. 
The project website, jellyfish dataset, and code can be found at https://github.com/AI4Science-WestlakeU/diffphycon.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/95505"} +{"video_file": "MelYGfpy4x_39027463.mp4", "openreview_id": "MelYGfpy4x", "slideslive_id": 39027463, "venue": "nips2024", "title": "Robust group and simultaneous inferences for high-dimensional single index model", "status": "Poster", "keywords": "FDR control;high-dimensional inference;honest test;outliers;robustness", "tldr": "This paper introduces high-dimensional robust inference procedures by recasting the single index model into a pseudo-linear model with transformed responses.", "abstract": "The high-dimensional single index model (SIM), which assumes that the response is independent of the predictors given a linear combination of predictors, has drawn attention due to its flexibility and interpretability, but its efficiency is adversely affected by outlying observations and heavy-tailed distributions. This paper introduces a robust procedure by recasting the SIM into a pseudo-linear model with transformed responses. It relaxes the distributional conditions on random errors from sub-Gaussian to more general distributions and thus it is robust with substantial efficiency gain for heavy-tailed random errors. Under this paradigm, we provide asymptotically honest group inference procedures based on the idea of orthogonalization, which enjoys the feature that it does not require the zero and nonzero coefficients to be well-separated. Asymptotic null distribution and bootstrap implementation are both established. Moreover, we develop a multiple testing procedure for determining if the individual coefficients are relevant simultaneously, and show that it is able to control the false discovery rate asymptotically. Numerical results indicate that the new procedures can be highly competitive among existing methods, especially for heavy-tailed errors.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95500"} +{"video_file": "Mktgayam7U_39027757.mp4", "openreview_id": "Mktgayam7U", "slideslive_id": 39027757, "venue": "nips2024", "title": "Scalable Kernel Inverse Optimization", "status": "Poster", "keywords": "Optimization;Imitation Learning;Inverse Optimization", "tldr": "Proposing a kernel-based Inverse Optimization model with a scalable algorithm designed for imitation learning tasks.", "abstract": "Inverse Optimization (IO) is a framework for learning the unknown objective function of an expert decision-maker from a past dataset. In this paper, we extend the hypothesis class of IO objective functions to a reproducing kernel Hilbert space (RKHS), thereby enhancing feature representation to an infinite-dimensional space. We demonstrate that a variant of the representer theorem holds for a specific training loss, allowing the reformulation of the problem as a finite-dimensional convex optimization program. To address scalability issues commonly associated with kernel methods, we propose the Sequential Selection Optimization (SSO) algorithm to efficiently train the proposed Kernel Inverse Optimization (KIO) model. 
Finally, we validate the generalization capabilities of the proposed KIO model and the effectiveness of the SSO algorithm through learning-from-demonstration tasks on the MuJoCo benchmark.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95494"}
{"video_file": "MncgmW8b6q_39027928.mp4", "openreview_id": "MncgmW8b6q", "slideslive_id": 39027928, "venue": "nips2024", "title": "Unsupervised Discovery of Formulas for Mathematical Constants", "status": "Poster", "keywords": "AI for Science;Automated Conjecture Generation;Experimental Mathematics;Mathematical Constants;Irrationality Measure;Unsupervised Learning;Formula Generation;Continued Fractions", "tldr": "Unsupervised learning on dynamics-based features discovers novel formulas for mathematical constants.", "abstract": "Ongoing efforts that span over decades show a rise of AI methods for accelerating scientific discovery, yet accelerating discovery in mathematics remains a persistent challenge for AI. Specifically, AI methods were not effective in creation of formulas for mathematical constants because each such formula must be correct for infinite digits of precision, with 'near-true' formulas providing no insight toward the correct ones. Consequently, formula discovery lacks a clear distance metric needed to guide automated discovery in this realm.
In this work, we propose a systematic methodology for categorization, characterization, and pattern identification of such formulas. The key to our methodology is introducing metrics based on the convergence dynamics of the formulas, rather than on the numerical value of the formula. These metrics enable the first automated clustering of mathematical formulas. We demonstrate this methodology on Polynomial Continued Fraction formulas, which are ubiquitous in their intrinsic connections to mathematical constants, and generalize many mathematical functions and structures. We test our methodology on a set of 1,768,900 such formulas, identifying many known formulas for mathematical constants, and discovering previously unknown formulas for π, ln(2), Gauss', and Lemniscate's constants. The uncovered patterns enable a direct generalization of individual formulas to infinite families, unveiling rich mathematical structures. This success paves the way towards a generative model that creates formulas fulfilling specified mathematical properties, accelerating the rate of discovery of useful formulas.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/95491"}
{"video_file": "Mrs9a1XQAp_39024994.mp4", "openreview_id": "Mrs9a1XQAp", "slideslive_id": 39024994, "venue": "nips2024", "title": "Beyond Slow Signs in High-fidelity Model Extraction", "status": "Poster", "keywords": "model extraction;cryptanalytic extraction", "tldr": "significantly improve performance of high fidelity model extraction, importantly removing prior bottlenecks of the attack", "abstract": "Deep neural networks, costly to train and rich in intellectual property value, are increasingly threatened by model extraction attacks that compromise their confidentiality. Previous attacks have succeeded in reverse-engineering model parameters up to a precision of float64 for models trained on random data with at most three hidden layers using cryptanalytical techniques. However, the process was identified to be very time consuming and not feasible for larger and deeper models trained on standard benchmarks.
Our study evaluates the feasibility of parameter extraction methods of Carlini et al. [1] further enhanced by Canales-Martínez et al. [2] for models trained on standard benchmarks. We introduce a unified codebase that integrates previous methods and reveal that computational tools can significantly influence performance. We develop further optimisations to the end-to-end attack and improve the efficiency of extracting weight signs by up to 14.8 times compared to former methods through the identification of easier and harder to extract neurons. Contrary to prior assumptions, we identify extraction of weights, not extraction of weight signs, as the critical bottleneck. With our improvements, a 16,721 parameter model with 2 hidden layers trained on MNIST is extracted within only 98 minutes compared to at least 150 minutes previously. Finally, addressing methodological deficiencies observed in previous studies, we propose new ways of robust benchmarking for future model extraction attacks.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/95487"}
{"video_file": "MtRvzJBsBA_39025074.mp4", "openreview_id": "MtRvzJBsBA", "slideslive_id": 39025074, "venue": "nips2024", "title": "LRM-Zero: Training Large Reconstruction Models with Synthesized Data", "status": "Poster", "keywords": "3D Reconstruction;Transformer;Pre-training;Synthetic Data", "tldr": "We train a Large Reconstruction Model (LRM) on a synthesized 3D dataset and closely match the performance of the state-of-the-art LRM model trained on real 3D data.", "abstract": "We present LRM-Zero, a Large Reconstruction Model (LRM) trained entirely on synthesized 3D data, achieving high-quality sparse-view 3D reconstruction. The core of LRM-Zero is our procedural 3D dataset, Zeroverse, which is automatically synthesized from simple primitive shapes with random texturing and augmentations (e.g., height fields, boolean differences, and wireframes). Unlike previous 3D datasets (e.g., Objaverse) which are often captured or crafted by humans to approximate real 3D data, Zeroverse completely ignores realistic global semantics but is rich in complex geometric and texture details that are locally similar to or even more intricate than real objects. We demonstrate that our LRM-Zero, trained with our fully synthesized Zeroverse, can achieve high visual quality in the reconstruction of real-world objects, competitive with models trained on Objaverse. We also analyze several critical design choices of Zeroverse that contribute to LRM-Zero's capability and training stability. Our work demonstrates that 3D reconstruction, one of the core tasks in 3D vision, can potentially be addressed without the semantics of real-world objects.
The Zeroverse's procedural synthesis code and interactive visualization are available at: https://desaixie.github.io/lrm-zero/.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95485"} +{"video_file": "MuPlJ9fT4b_39026330.mp4", "openreview_id": "MuPlJ9fT4b", "slideslive_id": 39026330, "venue": "nips2024", "title": "Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning", "status": "Poster", "keywords": "scientific machine learning;unsupervised pretraining;neural operators;foundation models", "tldr": "We introduce unsupervised pretraining and in-context learning for Scientific Machine Learning, significantly enhancing data efficiency and generalizability of neural operators.", "abstract": "Recent years have witnessed the promise of coupling machine learning methods and physical domain-specific insights for solving scientific problems based on partial differential equations (PDEs). However, being data-intensive, these methods still require a large amount of PDE data. This reintroduces the need for expensive numerical PDE solutions, partially undermining the original goal of avoiding these expensive simulations. In this work, seeking data efficiency, we design unsupervised pretraining for PDE operator learning. To reduce the need for training data with heavy simulation costs, we mine unlabeled PDE data without simulated solutions, and we pretrain neural operators with physics-inspired reconstruction-based proxy tasks. To improve out-of-distribution performance, we further assist neural operators in flexibly leveraging a similarity-based method that learns in-context examples, without incurring extra training costs or designs. Extensive empirical evaluations on a diverse set of PDEs demonstrate that our method is highly data-efficient, more generalizable, and even outperforms conventional vision-pretrained models. We provide our code at https://github.com/delta-lab-ai/data_efficient_nopt.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/95483"} +{"video_file": "MwFeh4RqvA_39027035.mp4", "openreview_id": "MwFeh4RqvA", "slideslive_id": 39027035, "venue": "nips2024", "title": "Generating compositional scenes via Text-to-image RGBA Instance Generation", "status": "Poster", "keywords": "RGBA generation;scene composition;diffusion models", "tldr": "We present a multi-layer approach for text-to-image diffusion models that improves fine-grained control over object attributes and layout by generating isolated RGBA images and blending them into detailed composite scenes.", "abstract": "Text-to-image diffusion generative models can generate high quality images at the cost of tedious prompt engineering. Controllability can be improved by introducing layout conditioning, however existing methods lack layout editing ability and fine-grained control over object attributes. The concept of multi-layer generation holds great potential to address these limitations, however generating image instances concurrently to scene composition limits control over fine-grained object attributes, relative positioning in 3D space and scene manipulation abilities. In this work, we propose a novel multi-stage generation paradigm that is designed for fine-grained control, flexibility and interactivity. To ensure control over instance attributes, we devise a novel training paradigm to adapt a diffusion model to generate isolated scene components as RGBA images with transparency information. 
To build complex images, we employ these pre-generated instances and introduce a multi-layer composite generation process that smoothly assembles components in realistic scenes. Our experiments show that our RGBA diffusion model is capable of generating diverse and high quality instances with precise control over object attributes. Through multi-layer composition, we demonstrate that our approach allows building and manipulating images from highly complex prompts with fine-grained control over object appearance and location, granting a higher degree of control than competing methods.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95481"}
{"video_file": "Mwj57TcHWX_39027971.mp4", "openreview_id": "Mwj57TcHWX", "slideslive_id": 39027971, "venue": "nips2024", "title": "DiffTORI: Differentiable Trajectory Optimization for Deep Reinforcement and Imitation Learning", "status": "Spotlight", "keywords": "imitation learning;model-based reinforcement learning;differentiable trajectory optimization", "tldr": "This paper introduces DiffTORI, which uses Differentiable Trajectory Optimization as the policy representation to generate actions for deep reinforcement and imitation learning, and outperforms prior state-of-the-art methods in both domains.", "abstract": "This paper introduces DiffTORI, which utilizes Differentiable Trajectory Optimization as the policy representation to generate actions for deep Reinforcement and Imitation learning. Trajectory optimization is a powerful and widely used algorithm in control, parameterized by a cost and a dynamics function. The key to our approach is to leverage the recent progress in differentiable trajectory optimization, which enables computing the gradients of the loss with respect to the parameters of trajectory optimization. As a result, the cost and dynamics functions of trajectory optimization can be learned end-to-end. DiffTORI addresses the “objective mismatch” issue of prior model-based RL algorithms, as the dynamics model in DiffTORI is learned to directly maximize task performance by differentiating the policy gradient loss through the trajectory optimization process. We further benchmark DiffTORI for imitation learning on standard robotic manipulation task suites with high-dimensional sensory observations and compare our method to feedforward policy classes as well as Energy-Based Models (EBM) and Diffusion. Across 15 model based RL tasks and 35 imitation learning tasks with high-dimensional image and point cloud inputs, DiffTORI outperforms prior state-of-the-art methods in both domains.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95479"}
{"video_file": "MwmmBg1VYg_39024560.mp4", "openreview_id": "MwmmBg1VYg", "slideslive_id": 39024560, "venue": "nips2024", "title": "Why are Visually-Grounded Language Models Bad at Image Classification?", "status": "Poster", "keywords": "vision-language models;image classification", "tldr": "We explain why visually-grounded language models are bad at classification and propose a simple data intervention method to fix that.", "abstract": "Image classification is one of the most fundamental capabilities of machine vision intelligence. In this work, we revisit the image classification task using visually-grounded language models (VLMs) such as GPT-4V and LLaVA.
We find that existing proprietary and public VLMs, despite often using CLIP as a vision encoder and having many more parameters, significantly underperform CLIP on standard image classification benchmarks like ImageNet. To understand the reason, we explore several hypotheses concerning the inference algorithms, training objectives, and data processing in VLMs. Our analysis reveals that the primary cause is data-related: critical information for image classification is encoded in the VLM's latent space but can only be effectively decoded with enough training data. Specifically, there is a strong correlation between the frequency of class exposure during VLM training and instruction-tuning and the VLM's performance in those classes; when trained with sufficient data, VLMs can match the accuracy of state-of-the-art classification models. Based on these findings, we enhance a VLM by integrating classification-focused datasets into its training, and demonstrate that the enhanced classification performance of the VLM transfers to its general capabilities, resulting in an improvement of 11.8% on the newly collected ImageWikiQA dataset.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/95478"} +{"video_file": "MxWpCherzD_39024547.mp4", "openreview_id": "MxWpCherzD", "slideslive_id": 39024547, "venue": "nips2024", "title": "Equivariant spatio-hemispherical networks for diffusion MRI deconvolution", "status": "Poster", "keywords": "Geometric Deep Learning;Diffusion MRI;Spherical Networks;Biomedical Image Analysis", "tldr": "We create efficient equivariant networks for volumes where every voxel contains a spherical signal. We developed this method to improve fiber deconvolution in diffusion MRI, the primary method of mapping neuronal fibers in the brain.\"", "abstract": "Each voxel in a diffusion MRI (dMRI) image contains a spherical signal corresponding to the direction and strength of water diffusion in the brain. This paper advances the analysis of such spatio-spherical data by developing convolutional network layers that are equivariant to the $\\mathbf{E(3) \\times SO(3)}$ group and account for the physical symmetries of dMRI including rotations, translations, and reflections of space alongside voxel-wise rotations. Further, neuronal fibers are typically antipodally symmetric, a fact we leverage to construct highly efficient spatio-hemispherical graph convolutions to accelerate the analysis of high-dimensional dMRI data. In the context of sparse spherical fiber deconvolution to recover white matter microstructure, our proposed equivariant network layers yield substantial performance and efficiency gains, leading to better and more practical resolution of crossing neuronal fibers and fiber tractography. 
These gains are experimentally consistent across both simulation and in vivo human datasets.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/95476"} +{"video_file": "MyVyH5Jo1l_39024368.mp4", "openreview_id": "MyVyH5Jo1l", "slideslive_id": 39024368, "venue": "nips2024", "title": "Quantifying the Gain in Weak-to-Strong Generalization", "status": "Poster", "keywords": "Weak-to-Strong Generalization", "tldr": "We show that the improvement in performance achieved by a strong model supervised by a weak model is quantified by the misfit error incurred by the strong model on labels generated by the weak model.", "abstract": "Recent advances in large language models have shown capabilities that are extraordinary and near-superhuman. These models operate with such complexity that reliably evaluating and aligning them proves challenging for humans. This leads to the natural question: can guidance from weak models (like humans) adequately direct the capabilities of strong models? In a recent and somewhat surprising work, Burns et al. (2023) empirically demonstrated that when strong models (like GPT-4) are finetuned using labels generated by weak supervisors (like GPT-2), the strong models outperform their weaker counterparts---a phenomenon they term weak-to-strong generalization.\nIn this work, we present a theoretical framework for understanding weak-to-strong generalization. Specifically, we show that the improvement in performance achieved by strong models over their weaker counterparts is quantified by the misfit error incurred by the strong model on labels generated by the weaker model. Our theory reveals several curious algorithmic insights. For instance, we can predict the amount by which the strong model will improve over the weak model, and also choose among different weak models to train the strong model, based on its misfit error. We validate our theoretical findings through various empirical assessments.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95474"} +{"video_file": "MzTdZhMjeC_39024668.mp4", "openreview_id": "MzTdZhMjeC", "slideslive_id": 39024668, "venue": "nips2024", "title": "MO-DDN: A Coarse-to-Fine Attribute-based Exploration Agent for Multi-Object Demand-driven Navigation", "status": "Poster", "keywords": "Mudolar Object Navigation;Demand-driven Navigation;Attribute Learning", "tldr": "We propose a multi-object demand-driven navigation benchmark and train an coarse-to-fine attribute-based exploration agent to solve this task.", "abstract": "The process of satisfying daily demands is a fundamental aspect of humans' daily lives. With the advancement of embodied AI, robots are increasingly capable of satisfying human demands. Demand-driven navigation (DDN) is a task in which an agent must locate an object to satisfy a specified demand instruction, such as \"I am thirsty.\" The previous study typically assumes that each demand instruction requires only one object to be fulfilled and does not consider individual preferences. However, the realistic human demand may involve multiple objects. In this paper, we introduce the Multi-object Demand-driven Navigation (MO-DDN) benchmark, which addresses these nuanced aspects, including multi-object search and personal preferences, thus making the MO-DDN task more reflective of real-life scenarios compared to DDN. Building upon previous work, we employ the concept of ``attribute'' to tackle this new task. 
However, instead of solely relying on attribute features in an end-to-end manner like DDN, we propose a modular method that involves constructing a coarse-to-fine attribute-based exploration agent (C2FAgent). Our experimental results illustrate that this coarse-to-fine exploration strategy capitalizes on the advantages of attributes at various decision-making levels, resulting in superior performance compared to baseline methods. Code and video can be found at https://sites.google.com/view/moddn.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/95471"} +{"video_file": "N12B6wvA55_39026002.mp4", "openreview_id": "N12B6wvA55", "slideslive_id": 39026002, "venue": "nips2024", "title": "Mirror and Preconditioned Gradient Descent in Wasserstein Space", "status": "Spotlight", "keywords": "wasserstein gradient flows;mirror descent;preconditioned gradient descent", "tldr": "We study mirror descent and preconditioned gradient descent on Wasserstein space.", "abstract": "As the problem of minimizing functionals on the Wasserstein space encompasses many applications in machine learning, different optimization algorithms on\nR\nd\nhave received their counterpart analog on the Wasserstein space. We focus here on lifting two explicit algorithms: mirror descent and preconditioned gradient descent. These algorithms have been introduced to better capture the geometry of the function to minimize and are provably convergent under appropriate (namely relative) smoothness and convexity conditions. Adapting these notions to the Wasserstein space, we prove guarantees of convergence of some Wasserstein-gradient-based discrete-time schemes for new pairings of objective functionals and regularizers. The difficulty here is to carefully select along which curves the functionals should be smooth and convex. We illustrate the advantages of adapting the geometry induced by the regularizer on ill conditioned optimization tasks, and showcase the improvement of choosing different discrepancies and geometries in a computational biology task of aligning single-cells.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95469"} +{"video_file": "N2RaC7LO6k_39026940.mp4", "openreview_id": "N2RaC7LO6k", "slideslive_id": 39026940, "venue": "nips2024", "title": "Geometry of naturalistic object representations in recurrent neural network models of working memory", "status": "Poster", "keywords": "Working memory;geometry;recurrent neural networks", "tldr": "We found that multi-task RNN models of working memory maintain both task-relevant and irrelevant information in orthogonalized subspaces and use rotational dynamics to track past information amidst new inputs.", "abstract": "Working memory is a central cognitive ability crucial for intelligent decision-making. Recent experimental and computational work studying working memory has primarily used categorical (i.e., one-hot) inputs, rather than ecologically-relevant, multidimensional naturalistic ones. Moreover, studies have primarily investigated working memory during single or few number of cognitive tasks. As a result, an understanding of how naturalistic object information is maintained in working memory in neural networks is still lacking. To bridge this gap, we developed sensory-cognitive models, comprising of a convolutional neural network (CNN) coupled with a recurrent neural network (RNN), and trained them on nine distinct N-back tasks using naturalistic stimuli. 
By examining the RNN’s latent space, we found that: 1) Multi-task RNNs represent both task-relevant and irrelevant information simultaneously while performing tasks; 2) While the latent subspaces used to maintain specific object properties in vanilla RNNs are largely shared across tasks, they are highly task-specific in gated RNNs such as GRU and LSTM; 3) Surprisingly, RNNs embed objects in new representational spaces in which individual object features are less orthogonalized relative to the perceptual space; 4) Interestingly, the transformation of WM encodings (i.e., embedding of visual inputs in the RNN latent space) into memory was shared across stimuli, yet the transformations governing the retention of a memory in the face of incoming distractor stimuli were distinct across time. Our findings indicate that goal-driven RNNs employ chronological memory subspaces to track information over short time spans, enabling testable predictions with neural data.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/95467"}
{"video_file": "N4quRxE19p_39026150.mp4", "openreview_id": "N4quRxE19p", "slideslive_id": 39026150, "venue": "nips2024", "title": "AvaTaR: Optimizing LLM Agents for Tool Usage via Contrastive Reasoning", "status": "Poster", "keywords": "LLM agents;Tool utilization;Automatic prompt optimization;Complex retrieval;Question-Answering tasks", "tldr": "We introduce AvaTaR, a novel framework that automates the optimization of LLM agents for enhanced tool utilization and generalization in multi-step problems", "abstract": "Large language model (LLM) agents have demonstrated impressive capabilities in utilizing external tools and knowledge to boost accuracy and reduce hallucinations. However, developing prompting techniques that enable LLM agents to effectively use these tools and knowledge remains a heuristic and labor-intensive task. Here, we introduce AvaTaR, a novel and automated framework that optimizes an LLM agent to effectively leverage provided tools, improving performance on a given task. During optimization, we design a comparator module to iteratively deliver insightful and comprehensive prompts to the LLM agent by contrastively reasoning between positive and negative examples sampled from training data. We demonstrate AvaTaR on four complex multimodal retrieval datasets featuring textual, visual, and relational information, and three general question-answering (QA) datasets. We find AvaTaR consistently outperforms state-of-the-art approaches across all seven tasks, exhibiting strong generalization ability when applied to novel cases and achieving an average relative improvement of 14% on the Hit@1 metric for the retrieval datasets and 13% for the QA datasets.
Code and dataset are available at https://github.com/zou-group/avatar.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95465"}
{"video_file": "N6zJ8DclC2_39025864.mp4", "openreview_id": "N6zJ8DclC2", "slideslive_id": 39025864, "venue": "nips2024", "title": "Natural Counterfactuals With Necessary Backtracking", "status": "Poster", "keywords": "causal model;counterfactual reasoning;counterfactual generation;normalizing flows", "tldr": "We propose a framework of natural counterfactuals and a method for generating counterfactuals that are more feasible with respect to the actual data distribution.", "abstract": "Counterfactual reasoning is pivotal in human cognition and especially important for providing explanations and making decisions. While Judea Pearl's influential approach is theoretically elegant, its generation of a counterfactual scenario often requires too much deviation from the observed scenarios to be feasible, as we show using simple examples. To mitigate this difficulty, we propose a framework of natural counterfactuals and a method for generating counterfactuals that are more feasible with respect to the actual data distribution. Our methodology incorporates a certain amount of backtracking when needed, allowing changes in causally preceding variables to minimize deviations from realistic scenarios. Specifically, we introduce a novel optimization framework that permits but also controls the extent of backtracking with a “naturalness” criterion. Empirical experiments demonstrate the effectiveness of our method. The code is available at https://github.com/GuangyuanHao/natural_counterfactuals.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/95463"}
{"video_file": "NAcHv7vtL2_39028260.mp4", "openreview_id": "NAcHv7vtL2", "slideslive_id": 39028260, "venue": "nips2024", "title": "Scaling laws for learning with real and surrogate data", "status": "Poster", "keywords": "Machine learning;Synthetic data;Surrogate data;Scaling laws;Linear regression", "tldr": "We study the integration of surrogate data from different sources alongside real data in training, via weighted risk minimization and provide theoretical and empirical evidence for a new scaling law that allows to optimize the weighting scheme.", "abstract": "Collecting large quantities of high-quality data can be prohibitively expensive or impractical, and a bottleneck in machine learning. One may instead augment a small set of n data points from the target distribution with data from more accessible sources, e.g. data collected under different circumstances or synthesized by generative models. We refer to such data as `surrogate data'. We study a weighted empirical risk minimization (ERM) approach for integrating surrogate data into training. We analyze mathematically this method under several classical statistical models, and validate our findings empirically on datasets from different domains. Our main findings are: (i) Integrating surrogate data can significantly reduce the test error on the original distribution. Surprisingly, this can happen even when the surrogate data is unrelated to the original ones. We trace back this behavior to the classical Stein's paradox. (ii) In order to reap the benefit of surrogate data, it is crucial to use optimally weighted ERM. (iii) The test error of models trained on mixtures of real and surrogate data is approximately described by a scaling law.
This scaling law can be used to predict the optimal weighting scheme, and to choose the amount of surrogate data to add.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95461"} +{"video_file": "NBq1vmfP4X_39026903.mp4", "openreview_id": "NBq1vmfP4X", "slideslive_id": 39026903, "venue": "nips2024", "title": "The Power of Hard Attention Transformers on Data Sequences: A formal language theoretic perspective", "status": "Poster", "keywords": "Theory;Attention;Circuit complexity;Formal languages;Data Sequences;Expressiveness", "tldr": "We prove results about the expressiveness of Unique Hard Attention Transformers (UHAT) on sequences of data (i.e. tuples of numbers)", "abstract": "Formal language theory has recently been successfully employed to unravel the power of transformer encoders. This setting is primarily applicable in Natural Language Processing (NLP), as a token embedding function (where a bounded number of tokens is admitted) is first applied before feeding the input to the transformer. On certain kinds of data (e.g. time series), we want our transformers to be able to handle arbitrary input sequences of numbers (or tuples thereof) without a priori limiting the values of these numbers. In this paper, we initiate the study of the expressive power of transformer encoders on sequences of data (i.e. tuples of numbers). Our results indicate an increase in expressive power of hard attention transformers over data sequences, in stark contrast to the case of strings. In particular, we prove that Unique Hard Attention Transformers (UHAT) over inputs as data sequences no longer lie within the circuit complexity class AC0 (even without positional encodings), unlike the case of string inputs, but are still within the complexity class TC0 (even with positional encodings). Over strings, UHAT without positional encodings capture only regular languages. In contrast, we show that over data sequences UHAT can capture non-regular properties. Finally, we show that UHAT capture languages definable in an extension of linear temporal logic with unary numeric predicates and arithmetics.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95460"} +{"video_file": "NCX3Kgb1nh_39026316.mp4", "openreview_id": "NCX3Kgb1nh", "slideslive_id": 39026316, "venue": "nips2024", "title": "Multivariate Stochastic Dominance via Optimal Transport and Applications to Models Benchmarking", "status": "Poster", "keywords": "Optimal Transport;Stochastic dominance;hypothesis testing;Central limit theorem;LLM benchmarking", "tldr": "We extend the univariate first order stochastic dominance hypothesis testing to the multivariate case using entropic smooth optimal transport", "abstract": "Stochastic dominance is an important concept in probability theory, econometrics and social choice theory for robustly modeling agents' preferences between random outcomes. While many works have been dedicated to the univariate case, little has been done in the multivariate scenario, wherein an agent has to decide between different multivariate outcomes. By exploiting a characterization of multivariate first stochastic dominance in terms of couplings, we introduce a statistic that assesses multivariate almost stochastic dominance under the framework of Optimal Transport with a smooth cost. Further, we introduce an entropic regularization of this statistic, and establish a central limit theorem (CLT) and consistency of the bootstrap procedure for the empirical statistic. 
Armed with this CLT, we propose a hypothesis testing framework as well as an efficient implementation using the Sinkhorn algorithm. We showcase our method in comparing and benchmarking Large Language Models that are evaluated on multiple metrics. Our multivariate stochastic dominance test allows us to capture the dependencies between the metrics in order to make an informed and statistically significant decision on the relative performance of the models.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95459"} +{"video_file": "NGpMCH5q7Y_39024516.mp4", "openreview_id": "NGpMCH5q7Y", "slideslive_id": 39024516, "venue": "nips2024", "title": "Integrating Suboptimal Human Knowledge with Hierarchical Reinforcement Learning for Large-Scale Multiagent Systems", "status": "Poster", "keywords": "Multi-agent system;Multi-agent reinforcement learning;Transfer learning;Human agent interaction;Scalability", "tldr": "In this study, we introduce a novel hierarchical learning framework for enhancing coordination in large-scale MAS by leveraging suboptimal human knowledge in an end-to-end manner.", "abstract": "Due to the exponential growth of agent interactions and the curse of dimensionality, learning efficient coordination from scratch is inherently challenging in large-scale multi-agent systems. While agents' learning is data-driven, sampling from millions of steps, human learning processes are quite different. Inspired by the concept of Human-on-the-Loop and the daily human hierarchical control, we propose a novel knowledge-guided multi-agent reinforcement learning framework (hhk-MARL), which combines human abstract knowledge with hierarchical reinforcement learning to address the learning difficulties among a large number of agents. In this work, fuzzy logic is applied to represent human suboptimal knowledge, and agents are allowed to freely decide how to leverage the proposed prior knowledge. Additionally, a graph-based group controller is built to enhance agent coordination. The proposed framework is end-to-end and compatible with various existing algorithms. We conduct experiments in challenging domains of the StarCraft Multi-agent Challenge combined with three famous algorithms: IQL, QMIX, and Qatten. The results show that our approach can greatly accelerate the training process and improve the final performance, even based on low-performance human prior knowledge.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95455"} +{"video_file": "NGuGVT7ar2_39028347.mp4", "openreview_id": "NGuGVT7ar2", "slideslive_id": 39028347, "venue": "nips2024", "title": "Enhancing LLM Reasoning via Vision-Augmented Prompting", "status": "Spotlight", "keywords": "Multimodal Large Language Models;Dual-Modality Reasoning", "tldr": "A dual-modality reasoning framework that enhances reasoning capabilities by incorporating self-synthesized visual and spatial information for problem-solving.", "abstract": "Verbal and visual-spatial information processing are two critical subsystems that activate different brain regions and often collaborate together for cognitive reasoning. Despite the rapid advancement of LLM-based reasoning, the mainstream frameworks, such as Chain-of-Thought (CoT) and its variants, primarily focus on the verbal dimension, resulting in limitations in tackling reasoning problems with visual and spatial clues. To bridge the gap, we propose a novel dual-modality reasoning framework called Vision-Augmented Prompting (VAP). 
Upon receiving a textual problem description, VAP automatically synthesizes an image from the visual and spatial clues by utilizing external drawing tools. Subsequently, VAP formulates a chain of thought in both modalities and iteratively refines the synthesized image. Finally, a conclusive reasoning scheme based on self-alignment is proposed for final result generation. Extensive experiments are conducted across four versatile tasks, including solving geometry problems, Sudoku, time series prediction, and travelling salesman problem. The results validated the superiority of VAP over existing LLMs-based reasoning frameworks.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95453"} +{"video_file": "NKPXHzYusG_39028747.mp4", "openreview_id": "NKPXHzYusG", "slideslive_id": 39028747, "venue": "nips2024", "title": "VideoLLM-MoD: Efficient Video-Language Streaming with Mixture-of-Depths Vision Computation", "status": "Poster", "keywords": "Online video understanding; effcient modeling", "tldr": "Efficient Video-Language Streaming with Mixture-of-Depths Vision Computation", "abstract": "A well-known dilemma in large vision-language models (e.g., GPT-4, LLaVA) is that while increasing the number of vision tokens generally enhances visual understanding, it also significantly raises memory and computational costs, especially in long-term, dense video frame streaming scenarios. Although learnable approaches like Q-Former and Perceiver Resampler have been developed to reduce the vision token burden, they overlook the context causally modeled by LLMs (i.e., key-value cache), potentially leading to missed visual cues when addressing user queries. In this paper, we introduce a novel approach to reduce vision compute by leveraging redundant vision tokens ``skipping layers'' rather than decreasing the number of vision tokens. Our method, VideoLLM-MoD, is inspired by mixture-of-depths LLMs and addresses the challenge of numerous vision tokens in long-term or streaming video. Specifically, for certain transformer layer, we learn to skip the computation for a high proportion (e.g., 80%) of vision tokens, passing them directly to the next layer. This approach significantly enhances model efficiency, achieving approximately 42% time and 30% memory savings for the entire training. Moreover, our method reduces the computation in the context and avoid decreasing the vision tokens, thus preserving or even improving performance compared to the vanilla model. We conduct extensive experiments to demonstrate the effectiveness of VideoLLM-MoD, showing its state-of-the-art results on multiple benchmarks, including narration, forecasting, and summarization tasks in COIN, Ego4D, and Ego-Exo4D datasets. The code and checkpoints will be made available at github.com/showlab/VideoLLM-online.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95449"} +{"video_file": "NKzLqRgG45_39025633.mp4", "openreview_id": "NKzLqRgG45", "slideslive_id": 39025633, "venue": "nips2024", "title": "Parameter-Inverted Image Pyramid Networks", "status": "Spotlight", "keywords": "Vision Foundation Models; Object Detection", "tldr": "We propose the Parameter-Inverted Image Pyramid Networks to address the computational challenges of traditional image pyramids.", "abstract": "Image pyramids are commonly used in modern computer vision tasks to obtain multi-scale features for precise understanding of images. 
However, image pyramids process multiple resolutions of images using the same large-scale model, which requires significant computational cost. To overcome this issue, we propose a novel network architecture known as the Parameter-Inverted Image Pyramid Networks (PIIP). Our core idea is to use models with different parameter sizes to process different resolution levels of the image pyramid, thereby balancing computational efficiency and performance. Specifically, the input to PIIP is a set of multi-scale images, where higher resolution images are processed by smaller networks. We further propose a feature interaction mechanism to allow features of different resolutions to complement each other and effectively integrate information from different spatial scales. Extensive experiments demonstrate that the PIIP achieves superior performance in tasks such as object detection, segmentation, and image classification, compared to traditional image pyramid methods and single-branch networks, while reducing computational cost. Notably, when applying our method on a large-scale vision foundation model InternViT-6B, we improve its performance by 1%-2% on detection and segmentation with only 40%-60% of the original computation. These results validate the effectiveness of the PIIP approach and provide a new technical direction for future vision computing tasks.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95447"} +{"video_file": "NLUYZ4ZqNq_39025374.mp4", "openreview_id": "NLUYZ4ZqNq", "slideslive_id": 39025374, "venue": "nips2024", "title": "SaulLM-54B & SaulLM-141B: Scaling Up Domain Adaptation for the Legal Domain", "status": "Poster", "keywords": "NLP; Deep Learning; Law", "tldr": "LLM for Law via domain adaptation", "abstract": "In this paper, we introduce SaulLM-medium and SaulLM-large, two large language models (LLMs) families tailored for the legal sector. These models, which feature architectures of 54 billion and 140 billion parameters, respectively, are based on the Mixtral architecture. The development of SaulLM-54B and SaulLM-140B is guided by large-scale domain adaptation, divided into strategies: (1) the exploitation of continued pretraining involving a legal corpus that includes over 400 billion tokens, (2) the implementation of a specialized legal instruction-following protocol, and (3) the alignment of model outputs with human preferences in legal interpretations. The integration of synthetically generated data in the second and third steps enhances the models' capabilities in interpreting and processing legal texts, effectively reaching state-of-the-art performance and outperforming all previous open-source models on LegalBench Instruct. This research thoroughly explores the trade-offs involved in domain-specific adaptation at this scale, offering insights that may inform future studies on domain adaptation using strong decoder models. Building upon SaulLM-7B, this study refines the approach to produce an LLM better equipped for legal tasks and domains. 
Additionally, we release base, instruct and aligned versions on top of SaulLM-medium and SaulLM-large under the MIT License to facilitate reuse and collaborative research.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95446"} +{"video_file": "NN9U0lEcAn_39028232.mp4", "openreview_id": "NN9U0lEcAn", "slideslive_id": 39028232, "venue": "nips2024", "title": "ActFusion: a Unified Diffusion Model for Action Segmentation and Anticipation", "status": "Poster", "keywords": "temporal action segmentation;long-term action anticipation", "tldr": "a unified diffusion model for temporal action segmentation and long-term action anticipation", "abstract": "Temporal action segmentation and long-term action anticipation are two popular vision tasks for the temporal analysis of actions in videos. Despite apparent relevance and potential complementarity, these two problems have been investigated as separate and distinct tasks. In this work, we tackle these two problems, action segmentation, and action anticipation, jointly using a unified diffusion model dubbed ActFusion. The key idea to unification is to train the model to effectively handle both visible and invisible parts of the sequence in an integrated manner; the visible part is for temporal segmentation, and the invisible part is for future anticipation. To this end, we introduce a new anticipative masking strategy during training in which a late part of the video frames is masked as invisible, and learnable tokens replace these frames to learn to predict the invisible future. Experimental results demonstrate the bi-directional benefits between action segmentation and anticipation. ActFusion achieves the state-of-the-art performance across the standard benchmarks of 50 Salads, Breakfast, and GTEA, outperforming task-specific models in both of the two tasks with a single unified model through joint learning.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95443"} +{"video_file": "NPu7Cdk2f9_39026499.mp4", "openreview_id": "NPu7Cdk2f9", "slideslive_id": 39026499, "venue": "nips2024", "title": "Adaptive Depth Networks with Skippable Sub-Paths", "status": "Poster", "keywords": "adaptive networks;training efficiency;inference efficiency;efficiency;acceleration;inference acceleration;convolutional neural networks;CNN;Vision transformer;vit;transformer", "tldr": "We introduce a novel training method for adaptive depth networks that can provide flexible accuracy-efficiency trade- offs in a single network.", "abstract": "Predictable adaptation of network depths can be an effective way to control inference latency and meet the resource condition of various devices. However, previous adaptive depth networks do not provide general principles and a formal explanation on why and which layers can be skipped, and, hence, their approaches are hard to be generalized and require long and complex training steps. In this paper, we present a practical approach to adaptive depth networks that is applicable to various networks with minimal training effort. In our approach, every hierarchical residual stage is divided into two sub-paths, and they are trained to acquire different properties through a simple self-distillation strategy. While the first sub-path is essential for hierarchical feature learning, the second one is trained to refine the learned features and minimize performance degradation if it is skipped. 
Unlike prior adaptive networks, our approach does not train every target sub-network in an iterative manner. At test time, however, we can connect these sub-paths in a combinatorial manner to select sub-networks of various accuracy-efficiency trade-offs from a single network. We provide a formal rationale for why the proposed training method can reduce overall prediction errors while minimizing the impact of skipping sub-paths. We demonstrate the generality and effectiveness of our approach with convolutional neural networks and transformers.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95440"} +{"video_file": "NQB9myZksw_39025458.mp4", "openreview_id": "NQB9myZksw", "slideslive_id": 39025458, "venue": "nips2024", "title": "Robustly overfitting latents for flexible neural image compression", "status": "Poster", "keywords": "neural image compression;latent optimization", "tldr": "This paper introduces SGA+ a method that works better and is less sensitive to hyperparameter settings", "abstract": "Neural image compression has made a great deal of progress. State-of-the-art models are based on variational autoencoders and are outperforming classical models. Neural compression models learn to encode an image into a quantized latent representation that can be efficiently sent to the decoder, which decodes the quantized latent into a reconstructed image. While these models have proven successful in practice, they lead to sub-optimal results due to imperfect optimization and limitations in the encoder and decoder capacity. Recent work shows how to use stochastic Gumbel annealing (SGA) to refine the latents of pre-trained neural image compression models. We extend this idea by introducing SGA+, which contains three different methods that build upon SGA. We show how our method improves the overall compression performance in terms of the R-D trade-off, compared to its predecessors. Additionally, we show how refinement of the latents with our best-performing method improves the compression performance on both the Tecnick and CLIC dataset. Our method is deployed for a pre-trained hyperprior and for a more flexible model. Further, we give a detailed analysis of our proposed methods and show that they are less sensitive to hyperparameter choices. Finally, we show how each method can be extended to three- instead of two-class rounding.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95439"} +{"video_file": "NT8Z5NjwxF_39026053.mp4", "openreview_id": "NT8Z5NjwxF", "slideslive_id": 39026053, "venue": "nips2024", "title": "Dual-Diffusion for Binocular 3D Human Pose Estimation", "status": "Poster", "keywords": "3D Human Pose Estimation;Binocular Vision;Diffusion Model;Pose Priors", "tldr": "To address the increasing uncertainty of binocular 3D HPE due to the reduction of views compared to multiview setups, we propose a Dual-Diffusion method to simutaneously denoise the 3D and 2D poses.", "abstract": "Binocular 3D human pose estimation (HPE), reconstructing a 3D pose from 2D poses of two views, offers practical advantages by combining multiview geometry with the convenience of a monocular setup. However, compared to a multiview setup, the reduction in the number of cameras increases uncertainty in 3D reconstruction. To address this issue, we leverage the diffusion model, which has shown success in monocular 3D HPE by recovering 3D poses from noisy data with high uncertainty. 
Yet, the uncertainty distribution of initial 3D poses remains unknown. Considering that 3D errors stem from 2D errors within geometric constraints, we recognize that the uncertainties of 3D and 2D are integrated in a binocular configuration, with the initial 2D uncertainty being well-defined. Based on this insight, we propose Dual-Diffusion specifically for Binocular 3D HPE, simultaneously denoising the uncertainties in 2D and 3D, and recovering plausible and accurate results. Additionally, we introduce Z-embedding as an additional condition for denoising and implement baseline-width-related pose normalization to enhance the model flexibility for various baseline settings. This is crucial as 3D error influence factors encompass depth and baseline width. Extensive experiments validate the effectiveness of our Dual-Diffusion in 2D refinement and 3D estimation. The code and models are available at https://github.com/sherrywan/Dual-Diffusion.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95437"} +{"video_file": "NTWXVvIXJM_39026759.mp4", "openreview_id": "NTWXVvIXJM", "slideslive_id": 39026759, "venue": "nips2024", "title": "Meta-Diffu$B$: A Contextualized Sequence-to-Sequence Text Diffusion Model with Meta-Exploration", "status": "Poster", "keywords": "Diffusion Models;Sequence-to-Sequence;Text Generation;Meta-Exploration;Noise Scheduling", "tldr": "We propose a novel scheduler-exploiter framework, Meta-Diffu$B$, to achieve contextualized noise scheduling inspired by Meta-Exploration.", "abstract": "The diffusion model, a new generative modeling paradigm, has achieved significant success in generating images, audio, video, and text. It has been adapted for sequence-to-sequence text generation (Seq2Seq) through DiffuSeq, termed the S2S-Diffusion model. Existing S2S-Diffusion models predominantly rely on fixed or hand-crafted rules to schedule noise during the diffusion and denoising processes. However, these models are limited by non-contextualized noise, which fails to fully consider the characteristics of Seq2Seq tasks. In this paper, we propose the Meta-Diffu$B$ framework\u2014a novel scheduler-exploiter S2S-Diffusion paradigm designed to overcome the limitations of existing S2S-Diffusion models. We employ Meta-Exploration to train an additional scheduler model dedicated to scheduling contextualized noise for each sentence. Our exploiter model, an S2S-Diffusion model, leverages the noise scheduled by our scheduler model for updating and generation. Meta-Diffu$B$ achieves state-of-the-art performance compared to previous S2S-Diffusion models and fine-tuned pre-trained language models (PLMs) across four Seq2Seq benchmark datasets. We further investigate and visualize the impact of Meta-Diffu$B$'s noise scheduling on the generation of sentences with varying difficulties. 
Additionally, our scheduler model can function as a \"plug-and-play\" model to enhance DiffuSeq without the need for fine-tuning during the inference stage.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95436"} +{"video_file": "NU3tE3lIqf_39025245.mp4", "openreview_id": "NU3tE3lIqf", "slideslive_id": 39025245, "venue": "nips2024", "title": "WildGaussians: 3D Gaussian Splatting In the Wild", "status": "Poster", "keywords": "Gaussian Splatting;Novel View Synthesis;3D Scene Reconstruction", "tldr": "Extending Gaussian splatting to handle occlusions and appearance changes when transferring to in-the-wild captures.", "abstract": "While the field of 3D scene reconstruction is dominated by NeRFs due to their photorealistic quality, 3D Gaussian Splatting (3DGS) has recently emerged, offering similar quality with real-time rendering speeds. However, both methods primarily excel with well-controlled 3D scenes, while in-the-wild data - characterized by occlusions, dynamic objects, and varying illumination - remains challenging. NeRFs can adapt to such conditions easily through per-image embedding vectors, but 3DGS struggles due to its explicit representation and lack of shared parameters. To address this, we introduce WildGaussians, a novel approach to handle occlusions and appearance changes with 3DGS. By leveraging robust DINO features and integrating an appearance modeling module within 3DGS, our method achieves state-of-the-art results. We demonstrate that WildGaussians matches the real-time rendering speed of 3DGS while surpassing both 3DGS and NeRF baselines in handling in-the-wild data, all within a simple architectural framework.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95434"} +{"video_file": "NaCXcUKihH_39025158.mp4", "openreview_id": "NaCXcUKihH", "slideslive_id": 39025158, "venue": "nips2024", "title": "Towards a theory of how the structure of language is acquired by deep neural networks", "status": "Poster", "keywords": "Hierarchical Models;Language Models;Learning Theory;Representation Learning;Self-Supervised Learning;Statistical Physics of Learning", "tldr": "Increasing the dataset size helps language models capture the latent hierarchical structure of data by leveraging token correlations.", "abstract": "How much data is required to learn the structure of a language via next-token prediction? We study this question for synthetic datasets generated via a Probabilistic Context-Free Grammar (PCFG)---a hierarchical generative model that captures the tree-like structure of natural languages. We determine token-token correlations analytically in our model and show that they can be used to build a representation of the grammar's hidden variables, the longer the range the deeper the variable. In addition, a finite training set limits the resolution of correlations to an effective range, whose size grows with that of the training set. As a result, a Language Model trained with increasingly many examples can build a deeper representation of the grammar's structure, thus reaching good performance despite the high dimensionality of the problem. We conjecture that the relationship between training set size and effective range of correlations holds beyond our synthetic datasets, and we test it in a collection of lines from Shakespeare's plays. 
In particular, we show that reducing the input size leads to saturation of the test loss decay at a characteristic training set size that can be predicted in our framework.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95429"} +{"video_file": "NadTwTODgC_39028131.mp4", "openreview_id": "NadTwTODgC", "slideslive_id": 39028131, "venue": "nips2024", "title": "Diffusion for World Modeling: Visual Details Matter in Atari", "status": "Spotlight", "keywords": "World models;diffusion models;reinforcement learning;generative models;Atari", "tldr": "We introduce DIAMOND, a diffusion world model, to train sample-efficient RL agents on Atari 100k.", "abstract": "World models constitute a promising approach for training reinforcement learning agents in a safe and sample-efficient manner. Recent world models predominantly operate on sequences of discrete latent variables to model environment dynamics. However, this compression into a compact discrete representation may ignore visual details that are important for reinforcement learning. Concurrently, diffusion models have become a dominant approach for image generation, challenging well-established methods modeling discrete latents. Motivated by this paradigm shift, we introduce DIAMOND (DIffusion As a Model Of eNvironment Dreams), a reinforcement learning agent trained in a diffusion world model. We analyze the key design choices that are required to make diffusion suitable for world modeling, and demonstrate how improved visual details can lead to improved agent performance. DIAMOND achieves a mean human normalized score of 1.46 on the competitive Atari 100k benchmark; a new best for agents trained entirely within a world model. We further demonstrate that DIAMOND's diffusion world model can stand alone as an interactive neural game engine by training on static Counter-Strike: Global Offensive gameplay. To foster future research on diffusion for world modeling, we release our code, agents, videos and playable world models at https://diamond-wm.github.io.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95428"} +{"video_file": "Nb5xlelV0C_39027536.mp4", "openreview_id": "Nb5xlelV0C", "slideslive_id": 39027536, "venue": "nips2024", "title": "AID: Attention Interpolation of Text-to-Image Diffusion", "status": "Poster", "keywords": "diffusion models;training-free;image interpolation;compositional generation", "tldr": "AID (Attention Interpolation of Diffusion) is a method that enables the text-to-image diffusion model to generate interpolation between different conditions with high consistency, smoothness and fidelity", "abstract": "Conditional diffusion models can create unseen images in various settings, aiding image interpolation. Interpolation in latent spaces is well-studied, but interpolation with specific conditions like text or image is less understood. Common approaches interpolate linearly in the conditioning space but tend to result in inconsistent images with poor fidelity. This work introduces a novel training-free technique named \\textbf{Attention Interpolation via Diffusion (AID)}. AID has two key contributions: \\textbf{1)} a fused inner/outer interpolated attention layer to boost image consistency and fidelity; and \\textbf{2)} selection of interpolation coefficients via a beta distribution to increase smoothness. 
Additionally, we present an AID variant called \\textbf{Prompt-guided Attention Interpolation via Diffusion (PAID)}, which \\textbf{3)} treats interpolation as a condition-dependent generative process. Experiments demonstrate that our method achieves greater consistency, smoothness, and efficiency in condition-based interpolation, aligning closely with human preferences. Furthermore, PAID offers substantial benefits for compositional generation, controlled image editing, image morphing and image-controlled generation, all while remaining training-free.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95427"} +{"video_file": "Nf4MHF1pi5_39026525.mp4", "openreview_id": "Nf4MHF1pi5", "slideslive_id": 39026525, "venue": "nips2024", "title": "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents", "status": "Poster", "keywords": "LLM-based Agents;Backdoor Attack", "tldr": "We take the initial step towards investigating backdoor attacks on LLM-based agents, and present a general framework along with 3 concrete forms of agent backdoor attacks.", "abstract": "Driven by the rapid development of Large Language Models (LLMs), LLM-based agents have been developed to handle various real-world applications, including finance, healthcare, and shopping, etc. It is crucial to ensure the reliability and security of LLM-based agents during applications. However, the safety issues of LLM-based agents are currently under-explored. In this work, we take the first step to investigate one of the typical safety threats, backdoor attack, to LLM-based agents. We first formulate a general framework of agent backdoor attacks, then we present a thorough analysis of different forms of agent backdoor attacks. Specifically, compared with traditional backdoor attacks on LLMs that are only able to manipulate the user inputs and model outputs, agent backdoor attacks exhibit more diverse and covert forms: (1) From the perspective of the final attacking outcomes, the agent backdoor attacker can not only choose to manipulate the final output distribution, but also introduce the malicious behavior in an intermediate reasoning step only, while keeping the final output correct. (2) Furthermore, the former category can be divided into two subcategories based on trigger locations, in which the backdoor trigger can either be hidden in the user query or appear in an intermediate observation returned by the external environment. We implement the above variations of agent backdoor attacks on two typical agent tasks including web shopping and tool utilization. Extensive experiments show that LLM-based agents suffer severely from backdoor attacks and such backdoor vulnerability cannot be easily mitigated by current textual backdoor defense algorithms. This indicates an urgent need for further research on the development of targeted defenses against backdoor attacks on LLM-based agents. 
Warning: This paper may contain biased content.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/95425"} +{"video_file": "NgyT80IPUK_39028635.mp4", "openreview_id": "NgyT80IPUK", "slideslive_id": 39028635, "venue": "nips2024", "title": "Matrix Denoising with Doubly Heteroscedastic Noise: Fundamental Limits and Optimal Spectral Methods", "status": "Poster", "keywords": "Matrix denoising;heteroscedasticity;spectral methods;approximate message passing;statistical physics", "tldr": "For the problem of matrix denoising with doubly heteroscedastic noise, we design a spectral estimator and prove that it (i) attains the weak recovery threshold and (ii) is Bayes-optimal in the one-sided heteroscedastic case.", "abstract": "We study the matrix denoising problem of estimating the singular vectors of a rank-1 signal corrupted by noise with both column and row correlations. Existing works are either unable to pinpoint the exact asymptotic estimation error or, when they do so, the resulting approaches (e.g., based on whitening or singular value shrinkage) remain vastly suboptimal. On top of this, most of the literature has focused on the special case of estimating the left singular vector of the signal when the noise only possesses row correlation (one-sided heteroscedasticity). In contrast, our work establishes the information-theoretic and algorithmic limits of matrix denoising with doubly heteroscedastic noise. We characterize the exact asymptotic minimum mean square error, and design a novel spectral estimator with rigorous optimality guarantees: under a technical condition, it attains positive correlation with the signals whenever information-theoretically possible and, for one-sided heteroscedasticity, it also achieves the Bayes-optimal error. Numerical experiments demonstrate the significant advantage of our theoretically principled method with the state of the art. The proofs draw connections with statistical physics and approximate message passing, departing drastically from standard random matrix theory techniques.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95423"} +{"video_file": "NjewXJUDYq_39025005.mp4", "openreview_id": "NjewXJUDYq", "slideslive_id": 39025005, "venue": "nips2024", "title": "Paralinguistics-Aware Speech-Empowered Large Language Models for Natural Conversation", "status": "Poster", "keywords": "Spoken Dialog Modeling;Speech-Text Pretraining;Paralinguistics;Spoken Language Model;LLM;USDM", "tldr": "We directly model spoken dialog using an LLM-based speech-text model, capable of generating spoken dialogs with coherent content and natural prosody. We also present speech-text pretraining that captures the comprehensive cross-modal relationships.", "abstract": "Recent work shows promising results in expanding the capabilities of large language models (LLM) to directly understand and synthesize speech. However, an LLM-based strategy for modeling spoken dialogs remains elusive, calling for further investigation. This paper introduces an extensive speech-text LLM framework, the Unified Spoken Dialog Model (USDM), designed to generate coherent spoken responses with naturally occurring prosodic features relevant to the given input speech without relying on explicit automatic speech recognition (ASR) or text-to-speech (TTS) systems. 
We have verified the inclusion of prosody in speech tokens that predominantly contain semantic information and have used this foundation to construct a prosody-infused speech-text model. Additionally, we propose a generalized speech-text pretraining scheme that enhances the capture of cross-modal semantics. To construct USDM, we fine-tune our speech-text model on spoken dialog data using a multi-step spoken dialog template that stimulates the chain-of-reasoning capabilities exhibited by the underlying LLM. Automatic and human evaluations on the DailyTalk dataset demonstrate that our approach effectively generates natural-sounding spoken responses, surpassing previous and cascaded baselines. Our code and checkpoints are available at https://github.com/naver-ai/usdm.", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/95416"} +{"video_file": "NlpHKNjNNZ_39026254.mp4", "openreview_id": "NlpHKNjNNZ", "slideslive_id": 39026254, "venue": "nips2024", "title": "Just Add $100 More: Augmenting Pseudo-LiDAR Point Cloud for Resolving Class-imbalance Problem", "status": "Poster", "keywords": "Autonomous Driving;Class Imbalance;Data Augmentation", "tldr": "A low-cost yet effective data augmentation framework for alleviating class imbalance in 3D object detection.", "abstract": "Typical LiDAR-based 3D object detection models are trained with real-world data collection, which is often imbalanced over classes. To deal with it, augmentation techniques are commonly used, such as copying ground truth LiDAR points and pasting them into scenes. However, existing methods struggle with the lack of sample diversity for minority classes and the limitation of suitable placement. In this work, we introduce a novel approach that utilizes pseudo LiDAR point clouds generated from low-cost miniatures or real-world videos, which is called Pseudo Ground Truth augmentation (PGT-Aug). PGT-Aug involves three key steps: (i) volumetric 3D instance reconstruction using a 2D-to-3D view synthesis model, (ii) object-level domain alignment with LiDAR intensity simulation, and (iii) a hybrid context-aware placement method from ground and map information. We demonstrate the superiority and generality of our method through performance improvements in extensive experiments conducted on popular benchmarks, i.e., nuScenes, KITTI, and Lyft, especially for the datasets with large domain gaps captured by different LiDAR configurations. The project webpage is https://just-add-100-more.github.io.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95413"} +{"video_file": "NmlnmLYMZ4_39026273.mp4", "openreview_id": "NmlnmLYMZ4", "slideslive_id": 39026273, "venue": "nips2024", "title": "When does perceptual alignment benefit vision representations?", "status": "Poster", "keywords": "representation learning;alignment;perception;transfer learning;computer vision;foundation model", "tldr": "We find that aligning the representations of large vision models to human perceptual judgements improves downstream performance on a variety of tasks.", "abstract": "Humans judge perceptual similarity according to diverse visual attributes, including scene layout, subject location, and camera pose. Existing vision models understand a wide range of semantic abstractions but improperly weigh these attributes and thus make inferences misaligned with human perception. 
While vision representations have previously benefited from human preference alignment in contexts like image generation, the utility of perceptually aligned representations in more general-purpose settings remains unclear. Here, we investigate how aligning vision model representations to human perceptual judgments impacts their usability in standard computer vision tasks. We finetune state-of-the-art models on a dataset of human similarity judgments for synthetic image triplets and evaluate them across diverse computer vision tasks. We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks, including counting, semantic segmentation, depth estimation, instance retrieval, and retrieval-augmented generation. In addition, we find that performance is widely preserved on other tasks, including specialized out-of-distribution domains such as in medical imaging and 3D environment frames. Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can make them better representation learners.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95412"} +{"video_file": "Nmmiyjw7Xg_39027069.mp4", "openreview_id": "Nmmiyjw7Xg", "slideslive_id": 39027069, "venue": "nips2024", "title": "Safe and Sparse Newton Method for Entropic-Regularized Optimal Transport", "status": "Poster", "keywords": "optimal transport;Newton method;sparsified Hessian matrix;global convergence;quadratic convergence rate", "tldr": "A sparsified Newton method to solve entropic-regularized optimal transport, with low per-iteration computational cost, high numerical stability, provable global convergence, and quadratic local convergence rate.", "abstract": "Computational optimal transport (OT) has received massive interests in the machine learning community, and great advances have been gained in the direction of entropic-regularized OT. The Sinkhorn algorithm, as well as its many improved versions, has become the de facto solution to large-scale OT problems. However, most of the existing methods behave like first-order methods, which typically require a large number of iterations to converge. More recently, Newton-type methods using sparsified Hessian matrices have demonstrated promising results on OT computation, but there still remain a lot of unresolved open questions. In this article, we make major new progresses towards this direction: first, we propose a novel Hessian sparsification scheme that promises a strict control of the approximation error; second, based on this sparsification scheme, we develop a safe Newton-type method that is guaranteed to avoid singularity in computing the search directions; third, the developed algorithm has a clear implementation for practical use, avoiding most hyperparameter tuning; and remarkably, we provide rigorous global and local convergence analysis of the proposed algorithm, which is lacking in the prior literature. 
Various numerical experiments are conducted to demonstrate the effectiveness of the proposed algorithm in solving large-scale OT problems.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95411"} +{"video_file": "NnAi0L5H8J_39027354.mp4", "openreview_id": "NnAi0L5H8J", "slideslive_id": 39027354, "venue": "nips2024", "title": "Multi-Instance Partial-Label Learning with Margin Adjustment", "status": "Poster", "keywords": "Machine Learning;Multi-Instance Partial-Label Learning;Multi-Instance Learning;Partial-Label Learning", "tldr": "We propose a MIPL algorithm with adjusted margins to improve generalization performance.", "abstract": "Multi-instance partial-label learning (MIPL) is an emerging learning framework where each training sample is represented as a multi-instance bag associated with a candidate label set. Existing MIPL algorithms often overlook the margins for attention scores and predicted probabilities, leading to suboptimal generalization performance. A critical issue with these algorithms is that the highest prediction probability of the classifier may appear on a non-candidate label. In this paper, we propose an algorithm named MIPLMA, i.e., Multi-Instance Partial-Label learning with Margin Adjustment, which adjusts the margins for attention scores and predicted probabilities. We introduce a margin-aware attention mechanism to dynamically adjust the margins for attention scores and propose a margin distribution loss to constrain the margins between the predicted probabilities on candidate and non-candidate label sets. Experimental results demonstrate the superior performance of MIPLMA over existing MIPL algorithms, as well as other well-established multi-instance learning algorithms and partial-label learning algorithms.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95410"} +{"video_file": "NnoAj91HZX_39025051.mp4", "openreview_id": "NnoAj91HZX", "slideslive_id": 39025051, "venue": "nips2024", "title": "No-Regret M${}^{\\natural}$-Concave Function Maximization: Stochastic Bandit Algorithms and NP-Hardness of Adversarial Full-Information Setting", "status": "Poster", "keywords": "online learning;discrete convex analysis;combinatorial bandit", "tldr": "We present stochastic bandit algorithms and prove NP-hardness of the adversarial setting for online M\u266e-concave function maximization.", "abstract": "M\u266e-concave functions, a.k.a. gross substitute valuation functions, play a fundamental role in many fields, including discrete mathematics and economics. In practice, perfect knowledge of M\u266e-concave functions is often unavailable a priori, and we can optimize them only interactively based on some feedback. Motivated by such situations, we study online M\u266e-concave function maximization problems, which are interactive versions of the problem studied by Murota and Shioura (1999). For the stochastic bandit setting, we present O(T^{-1/2})-simple regret and O(T^{2/3})-regret algorithms under T times access to unbiased noisy value oracles of M\u266e-concave functions. A key to proving these results is the robustness of the greedy algorithm to local errors in M\u266e-concave function maximization, which is one of our main technical results. While we obtain those positive results for the stochastic setting, another main result of our work is an impossibility in the adversarial setting. 
We prove that, even with full-information feedback, no algorithms that run in polynomial time per round can achieve O(T^{1-c}) regret for any constant c > 0 unless P = NP. Our proof is based on a reduction from the matroid intersection problem for three matroids, which would be a novel idea in the context of online learning.", "primary_area": "online_learning", "site": "https://neurips.cc/virtual/2024/poster/95409"} +{"video_file": "Ns0LQokxa5_39026448.mp4", "openreview_id": "Ns0LQokxa5", "slideslive_id": 39026448, "venue": "nips2024", "title": "GaussianCut: Interactive segmentation via graph cut for 3D Gaussian Splatting", "status": "Poster", "keywords": "3D Vision;Segmentation;Graph cut;Gaussian Splatting", "tldr": "Object selection and segmentation using graph cut for scene represented using 3D gaussian splatting", "abstract": "We introduce GaussianCut, a new method for interactive multiview segmentation of scenes represented as 3D Gaussians. Our approach allows for selecting the objects to be segmented by interacting with a single view. It accepts intuitive user input, such as point clicks, coarse scribbles, or text. Using 3D Gaussian Splatting (3DGS) as the underlying scene representation simplifies the extraction of objects of interest which are considered to be a subset of the scene's Gaussians. Our key idea is to represent the scene as a graph and use the graph-cut algorithm to minimize an energy function to effectively partition the Gaussians into foreground and background. To achieve this, we construct a graph based on scene Gaussians and devise a segmentation-aligned energy function on the graph to combine user inputs with scene properties. To obtain an initial coarse segmentation, we leverage 2D image/video segmentation models and further refine these coarse estimates using our graph construction. Our empirical evaluations show the adaptability of GaussianCut across a diverse set of scenes. GaussianCut achieves competitive performance with state-of-the-art approaches for 3D segmentation without requiring any additional segmentation-aware training", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95406"} +{"video_file": "NtNTfRTjE8_39028178.mp4", "openreview_id": "NtNTfRTjE8", "slideslive_id": 39028178, "venue": "nips2024", "title": "Breaking Semantic Artifacts for Generalized AI-generated Image Detection", "status": "Poster", "keywords": "AI Security; Deepfake Detection; AI-generated Image; Diffusion Model;", "tldr": "The paper introduces \"semantic artifacts\" in cross-scene images, which leads to performance drops on generalized detection. A dataset from various generative models and scenes is built and a simple yet effective patch-based method is proposed.", "abstract": "With the continuous evolution of AI-generated images, the generalized detection of them has become a crucial aspect of AI security. Existing detectors have focused on cross-generator generalization, while it remains unexplored whether these detectors can generalize across different image scenes, e.g., images from different datasets with different semantics. In this paper, we reveal that existing detectors suffer from substantial Accuracy drops in such cross-scene generalization. In particular, we attribute their failures to ''semantic artifacts'' in both real and generated images, to which detectors may overfit. 
To break such ''semantic artifacts'', we propose a simple yet effective approach based on conducting an image patch shuffle and then training an end-to-end patch-based classifier. We conduct a comprehensive open-world evaluation on 31 test sets, covering 7 Generative Adversarial Networks, 18 (variants of) Diffusion Models, and another 6 CNN-based generative models. The results demonstrate that our approach outperforms previous approaches by 2.08% (absolute) on average regarding cross-scene detection Accuracy. We also notice the superiority of our approach in open-world generalization, with an average Accuracy improvement of 10.59% (absolute) across all test sets. Our code is available at https://github.com/Zig-HS/FakeImageDetection.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95403"} +{"video_file": "Nycj81Z692_39027232.mp4", "openreview_id": "Nycj81Z692", "slideslive_id": 39027232, "venue": "nips2024", "title": "UrbanKGent: A Unified Large Language Model Agent Framework for Urban Knowledge Graph Construction", "status": "Poster", "keywords": "urban knowledge graph;knowledge graph construction;large language model agent", "tldr": "In this paper proposed UrbanKGent, the first UrbanKG construction agent framework with Large Language Models.", "abstract": "Urban knowledge graph has recently worked as an emerging building block to distill critical knowledge from multi-sourced urban data for diverse urban application scenarios. Despite its promising benefits, urban knowledge graph construction (UrbanKGC) still heavily relies on manual effort, hindering its potential advancement. This paper presents UrbanKGent, a unified large language model agent framework, for urban knowledge graph construction. Specifically, we first construct the knowledgeable instruction set for UrbanKGC tasks (such as relational triplet extraction and knowledge graph completion) via heterogeneity-aware and geospatial-infused instruction generation. Moreover, we propose a tool-augmented iterative trajectory refinement module to enhance and refine the trajectories distilled from GPT-4. Through hybrid instruction fine-tuning with augmented trajectories on Llama 2 and Llama 3 family, we obtain UrbanKGC agent family, consisting of UrbanKGent-7/8/13B version. We perform a comprehensive evaluation on two real-world datasets using both human and GPT-4 self-evaluation. The experimental results demonstrate that UrbanKGent family can not only significantly outperform 31 baselines in UrbanKGC tasks, but also surpass the state-of-the-art LLM, GPT-4, by more than 10% with approximately 20 times lower cost. Compared with the existing benchmark, the UrbanKGent family could help construct an UrbanKG with hundreds of times richer relationships using only one-fifth of the data. 
Our data and code are available at https://github.com/usail-hkust/UrbanKGent.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95400"} +{"video_file": "Nzfg1LXTdS_39027676.mp4", "openreview_id": "Nzfg1LXTdS", "slideslive_id": 39027676, "venue": "nips2024", "title": "How Diffusion Models Learn to Factorize and Compose", "status": "Poster", "keywords": "Representation Learning;Compositional Generalization;Diffusion Models;Generative Models", "tldr": "We carry out experiments on conditional DDPMs using synthetic datasets that demonstrate their ability to learn factorized representations and compositionally generalize.", "abstract": "Diffusion models are capable of generating photo-realistic images that combine elements which do not appear together in natural images, demonstrating their ability to compositionally generalize. Nonetheless, the precise mechanism of compositionality and how it is acquired through training remains elusive. Here, we consider a highly reduced setting to examine whether diffusion models learn semantically meaningful and fully factorized representations of composable features. We performed extensive controlled experiments on conditional DDPMs trained to generate various forms of 2D Gaussian data. We demonstrate that the models learn factorized, semi-continuous manifold representations that are orthogonal in underlying continuous latent features of independent variations but are not aligned for different values of the same feature. With such representations, models demonstrate superior compositionality but have limited ability to interpolate over unseen values of a given feature. Our experimental results further demonstrate that diffusion models can attain compositionality with a small amount of compositional examples, suggesting a novel way to train DDPMs. Finally, we connect manifold formation in diffusion models to percolation theory in physics, thereby offering insights into the sudden onset of factorized representation learning. Our thorough toy experiments thus contribute a deeper understanding of how diffusion models capture compositional structure in data, paving the way for future research aimed at enhancing factorization and compositional generalization in generative models for real-world applications.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95399"} +{"video_file": "O1fp9nVraj_39025726.mp4", "openreview_id": "O1fp9nVraj", "slideslive_id": 39025726, "venue": "nips2024", "title": "On scalable oversight with weak LLMs judging strong LLMs", "status": "Poster", "keywords": "alignment;safety;scalable oversight;debate;LLM", "tldr": "Large-scale study of scalable oversight with LLMs as strong debaters/consultants and weak judges", "abstract": "Scalable oversight protocols aim to enable humans to accurately supervise superhuman AI. In this paper we study debate, where two AI's compete to convince a judge; consultancy, where a single AI tries to convince a judge that asks questions; and compare to a baseline of direct question-answering, where the judge just answers outright without the AI. We use large language models (LLMs) as both AI agents and as stand-ins for human judges, taking the judge models to be weaker than agent models. We benchmark on a diverse range of asymmetries between judges and agents, extending previous work on a single extractive QA task with information asymmetry, to also include mathematics, coding, logic and multimodal reasoning asymmetries. 
We find that debate outperforms consultancy across all tasks when the consultant is randomly assigned to argue for the correct/incorrect answer. Comparing debate to direct question answering, the results depend on the type of task: in extractive QA tasks with information asymmetry debate outperforms direct question answering, but in other tasks without information asymmetry the results are mixed. Previous work assigned debaters/consultants an answer to argue for. When we allow them to instead choose which answer to argue for, we find judges are less frequently convinced by the wrong answer in debate than in consultancy. Further, we find that stronger debater models increase judge accuracy, though more modestly than in previous studies.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/95397"} +{"video_file": "O23XfTnhWR_39024789.mp4", "openreview_id": "O23XfTnhWR", "slideslive_id": 39024789, "venue": "nips2024", "title": "Graphcode: Learning from multiparameter persistent homology using graph neural networks", "status": "Poster", "keywords": "Topological Data Analysis;Multiparameter Persistent Homology;Machine Learning;Geometric Deep Learning;Graph Neural Networks", "tldr": "We introduce a novel method to learn from two-parameter persistent homology using graph neural networks.", "abstract": "We introduce graphcodes, a novel multi-scale summary of the topological properties of a dataset that is based on the well-established theory of persistent homology. Graphcodes handle datasets that are filtered along two real-valued scale parameters. Such multi-parameter topological summaries are usually based on complicated theoretical foundations and difficult to compute; in contrast, graphcodes yield an informative and interpretable summary and can be computed as efficient as one-parameter summaries. Moreover, a graphcode is simply an embedded graph and can therefore be readily integrated in machine learning pipelines using graph neural networks. We describe such a pipeline and demonstrate that graphcodes achieve better classification accuracy than state-of-the-art approaches on various datasets.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/95396"} +{"video_file": "O4RCFjVUBJ_39026290.mp4", "openreview_id": "O4RCFjVUBJ", "slideslive_id": 39026290, "venue": "nips2024", "title": "How to Continually Adapt Text-to-Image Diffusion Models for Flexible Customization?", "status": "Poster", "keywords": "Text-to-Image Diffusion;Continual Learning;Concept Customization", "tldr": "We aim to tackle a new practical concept customization problem named Concept-Incremental Flexible Customization (CIFC), where the main challenges are catastrophic forgetting and concept neglect.", "abstract": "Custom diffusion models (CDMs) have attracted widespread attention due to their astonishing generative ability for personalized concepts. However, most existing CDMs unreasonably assume that personalized concepts are fixed and cannot change over time. Moreover, they heavily suffer from catastrophic forgetting and concept neglect on old personalized concepts when continually learning a series of new concepts. To address these challenges, we propose a novel Concept-Incremental text-to-image Diffusion Model (CIDM), which can resolve catastrophic forgetting and concept neglect to learn new customization tasks in a concept-incremental manner. 
Specifically, to surmount the catastrophic forgetting of old concepts, we develop a concept consolidation loss and an elastic weight aggregation module. They can explore task-specific and task-shared knowledge during training, and aggregate all low-rank weights of old concepts based on their contributions during inference. Moreover, in order to address concept neglect, we devise a context-controllable synthesis strategy that leverages expressive region features and noise estimation to control the contexts of generated images according to user conditions. Experiments validate that our CIDM surpasses existing custom diffusion models. The source codes are available at https://github.com/JiahuaDong/CIFC.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95393"} +{"video_file": "O7IN4nsaIO_39026590.mp4", "openreview_id": "O7IN4nsaIO", "slideslive_id": 39026590, "venue": "nips2024", "title": "Achieving Near-Optimal Convergence for Distributed Minimax Optimization with Adaptive Stepsizes", "status": "Poster", "keywords": "Minimax Optimization;Distributed Learning;Nonconvex Optimization;Convergence Analysis;Stepsize Inconsistency", "tldr": "This paper proposes a distributed adaptive method for solving nonconvex minimax problems and establishes a near-optimal convergence rate by employing adaptive stepsize tracking to eliminate the inconsistency in locally computed adaptive stepsizes.", "abstract": "In this paper, we show that applying adaptive methods directly to distributed minimax problems can result in non-convergence due to inconsistency in locally computed adaptive stepsizes. To address this challenge, we propose D-AdaST, a Distributed Adaptive minimax method with Stepsize Tracking. The key strategy is to employ an adaptive stepsize tracking protocol involving the transmission of two extra (scalar) variables. This protocol ensures the consistency among stepsizes of nodes, eliminating the steady-state error due to the lack of coordination of stepsizes among nodes that commonly exists in vanilla distributed adaptive methods, and thus guarantees exact convergence. For nonconvex-strongly-concave distributed minimax problems, we characterize the specific transient times that ensure time-scale separation of stepsizes and quasi-independence of networks, leading to a near-optimal convergence rate of \u00d5(\u03f5^{-(4+\u03b4)}) for any small \u03b4 > 0, matching that of the centralized counterpart. To our best knowledge, D-AdaST is the first distributed adaptive method achieving near-optimal convergence without knowing any problem-dependent parameters for nonconvex minimax problems. Extensive experiments are conducted to validate our theoretical results.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95391"} +{"video_file": "O9RZAEp34l_39028526.mp4", "openreview_id": "O9RZAEp34l", "slideslive_id": 39028526, "venue": "nips2024", "title": "Abrupt Learning in Transformers: A Case Study on Matrix Completion", "status": "Poster", "keywords": "Science of language models;matrix completion;BERT;phase transition;interpretability", "tldr": "BERT solves low rank matrix completion in an interpretable manner with a sudden drop in the loss.", "abstract": "Recent analysis on the training dynamics of Transformers has unveiled an interesting characteristic: the training loss plateaus for a significant number of training steps, and then suddenly (and sharply) drops to near-optimal values. 
To understand this phenomenon in depth, we formulate the low-rank matrix completion problem as a masked language modeling (MLM) task, and show that it is possible to train a BERT model to solve this task to low error. Furthermore, the loss curve shows a plateau early in training followed by a sudden drop to near-optimal values, despite no changes in the training procedure or hyper-parameters. To gain interpretability insights into this sudden drop, we examine the model's predictions, attention heads, and hidden states before and after this transition. Concretely, we observe that (a) the model transitions from simply copying the masked input to accurately predicting the masked entries; (b) the attention heads transition to interpretable patterns relevant to the task; and (c) the embeddings and hidden states encode information relevant to the problem. We also analyze the training dynamics of individual model components to understand the sudden drop in loss.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/95388"}
{"video_file": "OCcfKzXded_39027452.mp4", "openreview_id": "OCcfKzXded", "slideslive_id": 39027452, "venue": "nips2024", "title": "Mining and Transferring Feature-Geometry Coherence for Unsupervised Point Cloud Registration", "status": "Poster", "keywords": "Unsupervised Point Cloud Registration;Deep Learning;Computer Vision", "tldr": "A novel unsupervised point cloud registration method for large-scale outdoor scenes.", "abstract": "Point cloud registration, a fundamental task in 3D vision, has achieved remarkable success with learning-based methods in outdoor environments. Unsupervised outdoor point cloud registration methods have recently emerged to circumvent the need for costly pose annotations. However, they fail to establish reliable optimization objectives for unsupervised training, either relying on overly strong geometric assumptions, or suffering from poor-quality pseudo-labels due to inadequate integration of low-level geometric and high-level contextual information. We have observed that in the feature space, latent new inlier correspondences tend to cluster around respective positive anchors that summarize features of existing inliers. Motivated by this observation, we propose a novel unsupervised registration method termed INTEGER to incorporate high-level contextual information for reliable pseudo-label mining. Specifically, we propose the Feature-Geometry Coherence Mining module to dynamically adapt the teacher for each mini-batch of data during training and discover reliable pseudo-labels by considering both high-level feature representations and low-level geometric cues. Furthermore, we propose Anchor-Based Contrastive Learning to facilitate contrastive learning with anchors for a robust feature space. Lastly, we introduce a Mixed-Density Student to learn density-invariant features, addressing challenges related to density variation and low overlap in the outdoor scenario. 
Extensive experiments on KITTI and nuScenes datasets demonstrate that our INTEGER achieves competitive performance in terms of accuracy and generalizability.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95385"} +{"video_file": "OFmclNhp0y_39026091.mp4", "openreview_id": "OFmclNhp0y", "slideslive_id": 39026091, "venue": "nips2024", "title": "Deterministic Uncertainty Propagation for Improved Model-Based Offline Reinforcement Learning", "status": "Poster", "keywords": "offline reinforcement learning;offline model-based reinforcement learning;uncertainty propagation;moment matching", "tldr": "Prior model-based offline reinforcement learning suffers from high variance due to sampling-based estimation, but MOMBO addresses this by deterministically propagating uncertainties through the value function, providing novel suboptimality guarantees", "abstract": "Current approaches to model-based offline reinforcement learning often incorporate uncertainty-based reward penalization to address the distributional shift problem. These approaches, commonly known as pessimistic value iteration, use Monte Carlo sampling to estimate the Bellman target to perform temporal difference-based policy evaluation. We find out that the randomness caused by this sampling step significantly delays convergence. We present a theoretical result demonstrating the strong dependency of suboptimality on the number of Monte Carlo samples taken per Bellman target calculation. Our main contribution is a deterministic approximation to the Bellman target that uses progressive moment matching, a method developed originally for deterministic variational inference. The resulting algorithm, which we call Moment Matching Offline Model-Based Policy Optimization (MOMBO), propagates the uncertainty of the next state through a nonlinear Q-network in a deterministic fashion by approximating the distributions of hidden layer activations by a normal distribution. We show that it is possible to provide tighter guarantees for the suboptimality of MOMBO than the existing Monte Carlo sampling approaches. We also observe MOMBO to converge faster than these approaches in a large set of benchmark tasks.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95381"} +{"video_file": "OIsUWQSvkD_39028819.mp4", "openreview_id": "OIsUWQSvkD", "slideslive_id": 39028819, "venue": "nips2024", "title": "Identifying Causal Effects Under Functional Dependencies", "status": "Spotlight", "keywords": "Identifiability;Causal Effects;Functional Dependencies", "tldr": "The paper studies the identifiability of causal effects under functional dependencies.", "abstract": "We study the identification of causal effects, motivated by two improvements to identifiability which can be attained if one knows that some variables in a causal graph are functionally determined by their parents (without needing to know the specific functions). First, an unidentifiable causal effect may become identifiable when certain variables are functional. Second, certain functional variables can be excluded from being observed without affecting the identifiability of a causal effect, which may significantly reduce the number of needed variables in observational data. 
Our results are largely based on an elimination procedure which removes functional variables from a causal graph while preserving key properties in the resulting causal graph, including the identifiability of causal effects.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/95380"} +{"video_file": "OJxua0PAIo_39024410.mp4", "openreview_id": "OJxua0PAIo", "slideslive_id": 39024410, "venue": "nips2024", "title": "Stochastic Extragradient with Flip-Flop Shuffling & Anchoring: Provable Improvements", "status": "Poster", "keywords": "minimax optimization;stochastic optimization;extragradient method;without-replacement sampling", "tldr": "We study the convergence of the stochastic extragradient method with flip-flop shuffling sampling scheme and anchoring, and show provable improvements over other SGDA/SEG variants.", "abstract": "In minimax optimization, the extragradient (EG) method has been extensively studied because it outperforms the gradient descent-ascent method in convex-concave (C-C) problems. Yet, stochastic EG (SEG) has seen limited success in C-C problems, especially for unconstrained cases. Motivated by the recent progress of shuffling-based stochastic methods, we investigate the convergence of shuffling-based SEG in unconstrained finite-sum minimax problems, in search of convergent shuffling-based SEG. Our analysis reveals that both random reshuffling and the recently proposed flip-flop shuffling alone can suffer divergence in C-C problems. However, with an additional simple trick called anchoring, we develop the SEG with flip-flop anchoring (SEG-FFA) method which successfully converges in C-C problems. We also show upper and lower bounds in the strongly-convex-strongly-concave setting, demonstrating that SEG-FFA has a provably faster convergence rate compared to other shuffling-based methods.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95378"} +{"video_file": "OONojmx3wH_39026837.mp4", "openreview_id": "OONojmx3wH", "slideslive_id": 39026837, "venue": "nips2024", "title": "When is Multicalibration Post-Processing Necessary?", "status": "Poster", "keywords": "multicalibration;calibration;fairness", "tldr": "We initiate a comprehensive study of existing multicalibration post-processing algorithms to compare how they fare against ERM and traditional calibration methods.", "abstract": "Calibration is a well-studied property of predictors which guarantees meaningful uncertainty estimates. Multicalibration is a related notion --- originating in algorithmic fairness --- which requires predictors to be simultaneously calibrated over a potentially complex and overlapping collection of protected subpopulations (such as groups defined by ethnicity, race, or income). We conduct the first comprehensive study evaluating the usefulness of multicalibration post-processing across a broad set of tabular, image, and language datasets for models spanning from simple decision trees to 90 million parameter fine-tuned LLMs. Our findings can be summarized as follows: (1) models which are calibrated out of the box tend to be relatively multicalibrated without any additional post-processing; (2) multicalibration can help inherently uncalibrated models and also large vision and language models; and (3) traditional calibration measures may sometimes provide multicalibration implicitly. 
More generally, we also distill many independent observations which may be useful for practical and effective applications of multicalibration post-processing in real-world contexts.", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/95377"} +{"video_file": "OOiRS6fiM7_39028846.mp4", "openreview_id": "OOiRS6fiM7", "slideslive_id": 39028846, "venue": "nips2024", "title": "A Fast Convoluted Story: Scaling Probabilistic Inference for Integer Arithmetics", "status": "Poster", "keywords": "Probability theory;neurosymbolic AI;neuro-symbolic AI;neural-symbolic AI;integer arithmetic;linear integer arithmetic;integer programming;discrete random variables", "tldr": "We propose a differentiable tensorisation of linear arithmetics over integer-valued random variables that extends the horizon of exact probabilistic inference by exploiting the fast Fourier transform.", "abstract": "As illustrated by the success of integer linear programming, linear integer arithmetics is a powerful tool for modelling combinatorial problems. Furthermore, the probabilistic extension of linear programming has been used to formulate problems in neurosymbolic AI. However, two key problems persist that prevent the adoption of neurosymbolic techniques beyond toy problems. First, probabilistic inference is inherently hard, #P-hard to be precise. Second, the discrete nature of integers renders the construction of meaningful gradients challenging, which is problematic for learning. In order to mitigate these issues, we formulate linear arithmetics over integer-valued random variables as tensor manipulations that can be implemented in a straightforward fashion using modern deep learning libraries. At the core of our formulation lies the observation that the addition of two integer-valued random variables can be performed by adapting the fast Fourier transform to probabilities in the log-domain. By relying on tensor operations we obtain a differentiable data structure, which unlocks, virtually for free, gradient-based learning. In our experimental validation we show that tensorising probabilistic integer linear arithmetics and leveraging the fast Fourier transform allows us to push the state of the art by several orders of magnitude in terms of inference and learning times.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95376"} +{"video_file": "OP2D9sIdo4_39025436.mp4", "openreview_id": "OP2D9sIdo4", "slideslive_id": 39025436, "venue": "nips2024", "title": "Suitable is the Best: Task-Oriented Knowledge Fusion in Vulnerability Detection", "status": "Poster", "keywords": "deep learning;Graph Neural Networks;vulnerability detection;static analysis;software security;knowledge fusion", "tldr": "We propose KF-GVD, a knowledge fusion-based GNN model for task-oriented source code vulnerability detection.", "abstract": "Deep learning technologies have demonstrated remarkable performance in vulnerability detection. Existing works primarily adopt a uniform and consistent feature learning pattern across the entire target set. While designed for general-purpose detection tasks, they lack sensitivity towards target code comprising multiple functional modules or diverse vulnerability subtypes. In this paper, we present a knowledge fusion-based vulnerability detection method (KF-GVD) that integrates specific vulnerability knowledge into the Graph Neural Network feature learning process. 
KF-GVD achieves accurate vulnerability detection across different functional modules of the Linux kernel and vulnerability subtypes without compromising general task performance. Extensive experiments demonstrate that KF-GVD outperforms SOTAs on function-level and statement-level vulnerability detection across various target tasks, with an average increase of 40.9% in precision and 26.1% in recall. Notably, KF-GVD discovered 9 undisclosed vulnerabilities when employing on C/C++ open-source projects without ground truth.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/95375"} +{"video_file": "OPrPegYIZo_39026551.mp4", "openreview_id": "OPrPegYIZo", "slideslive_id": 39026551, "venue": "nips2024", "title": "DynaMITE-RL: A Dynamic Model for Improved Temporal Meta-Reinforcement Learning", "status": "Poster", "keywords": "Reinforcement Learning;Bayesian Reinforcement Learning;Meta-Reinforcement Learning", "tldr": "We introduce DynaMITE-RL, a meta-reinforcement learning (meta-RL) approach to approximate task inference in environments where the latent state evolves at varying rates.", "abstract": "We introduce DynaMITE-RL, a meta-reinforcement learning (meta-RL) approach to approximate inference in environments where the latent state evolves at varying rates. We model episode sessions---parts of the episode where the latent state is fixed---and propose three key modifications to existing meta-RL methods: (i) consistency of latent information within sessions, (ii) session masking, and (iii) prior latent conditioning. We demonstrate the importance of these modifications in various domains, ranging from discrete Gridworld environments to continuous-control and simulated robot assistive tasks, illustrating the efficacy of DynaMITE-RL over state-of-the-art baselines in both online and offline RL settings.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95373"} +{"video_file": "OQUg2T4qJB_39025180.mp4", "openreview_id": "OQUg2T4qJB", "slideslive_id": 39025180, "venue": "nips2024", "title": "Ordering-Based Causal Discovery for Linear and Nonlinear Relations", "status": "Poster", "keywords": "causal discovery; directed acyclic graph; additive noise model", "tldr": "This paper propose a causal discovery method that works well in both linear and nonlinear.", "abstract": "Identifying causal relations from purely observational data typically requires additional assumptions on relations and/or noise. Most current methods restrict their analysis to datasets that are assumed to have pure linear or nonlinear relations, which is often not reflective of real-world datasets that contain a combination of both. This paper presents CaPS, an ordering-based causal discovery algorithm that effectively handles linear and nonlinear relations. CaPS introduces a novel identification criterion for topological ordering and incorporates the concept of \"parent score\" during the post-processing optimization stage. These scores quantify the strength of the average causal effect, helping to accelerate the pruning process and correct inaccurate predictions in the pruning step. Experimental results demonstrate that our proposed solutions outperform state-of-the-art baselines on synthetic data with varying ratios of linear and nonlinear relations. The results obtained from real-world data also support the competitiveness of CaPS. 
Code and datasets are available at https://github.com/E2real/CaPS.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/95372"} +{"video_file": "ORQiboaRqY_39026280.mp4", "openreview_id": "ORQiboaRqY", "slideslive_id": 39026280, "venue": "nips2024", "title": "On the Power of Small-size Graph Neural Networks for Linear Programming", "status": "Poster", "keywords": "Learning to optimize;graph neural network;linear programming;gradient descent;efficiency", "tldr": "We show that polylogarithmic-depth constant-width GNNs are sufficient to solve packing and covering LPs.", "abstract": "Graph neural networks (GNNs) have recently emerged as powerful tools for addressing complex optimization problems. It has been theoretically demonstrated that GNNs can universally approximate the solution mapping functions of linear programming (LP) problems. However, these theoretical results typically require GNNs to have large parameter sizes. Conversely, empirical experiments have shown that relatively small GNNs can solve LPs effectively, revealing a significant discrepancy between theoretical predictions and practical observations. In this work, we aim to bridge this gap by providing a theoretical foundation for the effectiveness of small-size GNNs. We prove that polylogarithmic-depth, constant-width GNNs are sufficient to solve packing and covering LPs, two widely used classes of LPs. Our proof leverages the capability of GNNs to simulate a variant of the gradient descent algorithm on a carefully selected potential function. Additionally, we introduce a new GNN architecture, termed GD-Net. Experimental results demonstrate that GD-Net significantly outperforms conventional GNN structures while using fewer parameters.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/95370"} +{"video_file": "OV8YUk151r_39026610.mp4", "openreview_id": "OV8YUk151r", "slideslive_id": 39026610, "venue": "nips2024", "title": "HLM-Cite: Hybrid Language Model Workflow for Text-based Scientific Citation Prediction", "status": "Poster", "keywords": "Citation prediction;large language model;text embedding model", "tldr": "This paper designs a hybrid language model workflow integrating representative and generative LMs to precisely predict which previous papers (candidates) will an ongoing new paper (query) cite from vast candidate set.", "abstract": "Citation networks are critical infrastructures of modern science, serving as intricate webs of past literature and enabling researchers to navigate the knowledge production system. To mine information hiding in the link space of such networks, predicting which previous papers (candidates) will a new paper (query) cite is a critical problem that has long been studied. However, an important gap remains unaddressed: the roles of a paper's citations vary significantly, ranging from foundational knowledge basis to superficial contexts. Distinguishing these roles requires a deeper understanding of the logical relationships among papers, beyond simple edges in citation networks. The emergence of large language models (LLMs) with textual reasoning capabilities offers new possibilities for discerning these relationships, but there are two major challenges. First, in practice, a new paper may select its citations from gigantic existing papers, where the combined texts far exceed the context length of LLMs. 
Second, logical relationships between papers are often implicit, and directly prompting an LLM to predict citations may lead to results based primarily on surface-level textual similarities, rather than the deeper logical reasoning required. In this paper, we introduce the novel concept of core citation, which identifies the critical references that go beyond superficial mentions. Thereby, we elevate the citation prediction task from a simple binary classification to a more nuanced problem: distinguishing core citations from both superficial citations and non-citations. To address this, we propose HLM-Cite, a Hybrid Language Model workflow for citation prediction, which combines embedding and generative LMs. We design a curriculum finetune procedure to adapt a pretrained text embedding model to coarsely retrieve high-likelihood core citations from vast candidate sets and then design an LLM agentic workflow to rank the retrieved papers through one-shot reasoning, revealing the implicit relationships among papers. With the two-stage pipeline, we can scale the candidate sets to 100K papers, vastly exceeding the size handled by existing methods. We evaluate HLM-Cite on a dataset across 19 scientific fields, demonstrating a 17.6% performance improvement comparing SOTA methods. Our code is open-source at https://github.com/tsinghua-fib-lab/H-LM for reproducibility.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95366"}
{"video_file": "OWmu3QOa0O_39027999.mp4", "openreview_id": "OWmu3QOa0O", "slideslive_id": 39027999, "venue": "nips2024", "title": "Sparse maximal update parameterization: A holistic approach to sparse training dynamics", "status": "Poster", "keywords": "sparsity;large language models;scaling laws", "tldr": "We introduce the sparse maximal update parameterization (S\u00b5Par) which ensures optimal HPs remain the same for any width or sparsity level. This dramatically reduces HP tuning costs, allowing S\u00b5Par to achieve superior losses.", "abstract": "Several challenges make it difficult for sparse neural networks to compete with dense models. First, setting a large fraction of weights to zero impairs forward and gradient signal propagation. Second, sparse studies often need to test multiple sparsity levels, while also introducing new hyperparameters (HPs), leading to prohibitive tuning costs. Indeed, the standard practice is to re-use the learning HPs originally crafted for dense models. Unfortunately, we show sparse and dense networks do not share the same optimal HPs. Without stable dynamics and effective training recipes, it is costly to test sparsity at scale, which is key to surpassing dense networks and making the business case for sparsity acceleration in hardware.\nA holistic approach is needed to tackle these challenges and we propose S\u00b5Par as one such approach. For random unstructured static sparsity, S\u00b5Par ensures activations, gradients, and weight updates all scale independently of sparsity level. Further, by reparameterizing the HPs, S\u00b5Par enables the same HP values to be optimal as we vary both sparsity level and model width. HPs can be tuned on small dense networks and transferred to large sparse models, greatly reducing tuning costs. On large-scale language modeling, S\u00b5Par shows increasing improvements over standard parameterization as sparsity increases, leading up to 11.9% relative loss improvement at 99.2% sparsity. 
A minimal implementation of S\u00b5Par is available at https://github.com/EleutherAI/nanoGPT-mup/tree/supar.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/95363"}
{"video_file": "OWwdlxwnFN_39026147.mp4", "openreview_id": "OWwdlxwnFN", "slideslive_id": 39026147, "venue": "nips2024", "title": "MonkeySee: Space-time-resolved reconstructions of natural images from macaque multi-unit activity", "status": "Poster", "keywords": "space-time dependent reconstructions;biological constraints;neural decoding;deep neural network;receptive fields;visual cortex;multi unit activity;convolutional model;brain image reconstruction;naturalistic stimuli", "tldr": "This study uses a CNN to reconstruct images from macaque brain signals and explores how different brain areas process these signals at different timepoints.", "abstract": "In this paper, we reconstruct naturalistic images directly from macaque brain signals using a convolutional neural network (CNN) based decoder. We investigate the ability of this CNN-based decoding technique to differentiate among neuronal populations from areas V1, V4, and IT, revealing distinct readout characteristics for each. This research marks a progression from low-level to high-level brain signals, thereby enriching the existing framework for utilizing CNN-based decoders to decode brain activity. Our results demonstrate high-precision reconstructions of naturalistic images, highlighting the efficiency of CNN-based decoders in advancing our knowledge of how the brain's representations translate into pixels. Additionally, we present a novel space-time-resolved decoding technique, demonstrating how temporal resolution in decoding can advance our understanding of neural representations. Moreover, we introduce a learned receptive field layer that sheds light on the CNN-based model's data processing during training, enhancing understanding of its structure and interpretive capacity.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/95362"}
{"video_file": "ObUjBHBx8O_39024381.mp4", "openreview_id": "ObUjBHBx8O", "slideslive_id": 39024381, "venue": "nips2024", "title": "Mitigating Spurious Correlations via Disagreement Probability", "status": "Poster", "keywords": "Debiasing;Spurious correlation;Group robustness", "tldr": "We propose a novel training objective and method designed to promote consistent model performance irrespective of spurious correlations.", "abstract": "Models trained with empirical risk minimization (ERM) are prone to be biased towards spurious correlations between target labels and bias attributes, which leads to poor performance on data groups lacking spurious correlations. It is particularly challenging to address this problem when access to bias labels is not permitted. To mitigate the effect of spurious correlations without bias labels, we first introduce a novel training objective designed to robustly enhance model performance across all data samples, irrespective of the presence of spurious correlations. From this objective, we then derive a debiasing method, Disagreement Probability based Resampling for debiasing (DPR), which does not require bias labels. DPR leverages the disagreement between the target label and the prediction of a biased model to identify bias-conflicting samples\u2014those without spurious correlations\u2014and upsamples them according to the disagreement probability. 
Empirical evaluations on multiple benchmarks demonstrate that DPR achieves state-of-the-art performance over existing baselines that do not use bias labels. Furthermore, we provide a theoretical analysis that details how DPR reduces dependency on spurious correlations.", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/95358"} +{"video_file": "OgnYoIxtIN_39024889.mp4", "openreview_id": "OgnYoIxtIN", "slideslive_id": 39024889, "venue": "nips2024", "title": "Zero-Shot Transfer of Neural ODEs", "status": "Poster", "keywords": "Zero-shot Transfer;Neural ODE;Model-based control", "tldr": "We introduce a method for zero-shot transfer and dynamics prediction via function encoders using neural ODE basis functions.", "abstract": "Autonomous systems often encounter environments and scenarios beyond the scope of their training data, which underscores a critical challenge: the need to generalize and adapt to unseen scenarios in real time. This challenge necessitates new mathematical and algorithmic tools that enable adaptation and zero-shot transfer. To this end, we leverage the theory of function encoders, which enables zero-shot transfer by combining the flexibility of neural networks with the mathematical principles of Hilbert spaces. Using this theory, we first present a method for learning a space of dynamics spanned by a set of neural ODE basis functions. After training, the proposed approach can rapidly identify dynamics in the learned space using an efficient inner product calculation. Critically, this calculation requires no gradient calculations or retraining during the online phase. This method enables zero-shot transfer for autonomous systems at runtime and opens the door for a new class of adaptable control algorithms. We demonstrate state-of-the-art system modeling accuracy for two MuJoCo robot environments and show that the learned models can be used for more efficient MPC control of a quadrotor.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/95353"} +{"video_file": "OiVxYf9trg_39028049.mp4", "openreview_id": "OiVxYf9trg", "slideslive_id": 39028049, "venue": "nips2024", "title": "Clustering in Causal Attention Masking", "status": "Poster", "keywords": "Transformers;causal attention;continuous-time interacting particle systems;clustering", "tldr": "Theoretical study of clustering phenomena in evolution of tokens representations as they propagate through layers of a causal transformer.", "abstract": "This work presents a modification of the self-attention dynamics proposed in Geshkovski et al to better reflect the practically relevant, causally masked attention used in transformer architectures for generative AI. This modification translates into an interacting particle system that cannot be interpreted as a mean-field gradient flow. Despite this loss of structure, we significantly strengthen the results of Geshkovski et al in this context: While previous rigorous results focused on cases where all three matrices (key, query, and value) were scaled identities, we prove asymptotic convergence to a single cluster for arbitrary key-query matrices and value matrix equal to the identity. 
Additionally, we establish a connection to the classical R\u00e9nyi parking problem from combinatorial geometry to make initial theoretical steps towards demonstrating the existence of meta-stable states.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95352"}
{"video_file": "OrtN9hPP7V_39026880.mp4", "openreview_id": "OrtN9hPP7V", "slideslive_id": 39026880, "venue": "nips2024", "title": "The GAN is dead; long live the GAN! A Modern GAN Baseline", "status": "Poster", "keywords": "GAN", "tldr": "Makes GAN training stable with math (!), then uses this to create simple modern baseline architecture with SOTA-competitive performance.", "abstract": "There is a widely-spread claim that GANs are difficult to train, and GAN architectures in the literature are littered with empirical tricks. We provide evidence against this claim and build a modern GAN baseline in a more principled manner. First, we derive a well-behaved regularized relativistic GAN loss that addresses issues of mode dropping and non-convergence that were previously tackled via a bag of ad-hoc tricks. We analyze our loss mathematically and prove that it admits local convergence guarantees, unlike most existing relativistic losses. Second, this loss allows us to discard all ad-hoc tricks and replace outdated backbones used in common GANs with modern architectures. Using StyleGAN2 as an example, we present a roadmap of simplification and modernization that results in a new minimalist baseline---R3GAN. Despite being simple, our approach surpasses StyleGAN2 on FFHQ, ImageNet, CIFAR, and Stacked MNIST datasets, and compares favorably against state-of-the-art GANs and diffusion models. Code: https://www.github.com/brownvc/R3GAN", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95345"}
{"video_file": "OtYCp1yfbX_39024769.mp4", "openreview_id": "OtYCp1yfbX", "slideslive_id": 39024769, "venue": "nips2024", "title": "Improved Guarantees for Fully Dynamic $k$-Center Clustering with Outliers in General Metric Spaces", "status": "Poster", "keywords": "$k$-center with outliers;fully dynamic model;metric spaces", "tldr": "We propose a novel fully dynamic algorithm that maintains a $(4+\\epsilon)$-approximate solution to the $(k,z)$-center clustering that covers all but at most $(1+\\epsilon)z$ points at any time in the sequence.", "abstract": "The metric $k$-center clustering problem with $z$ outliers, also known as $(k,z)$-center clustering, involves clustering a given point set $P$ in a metric space $(M,d)$ using at most $k$ balls, minimizing the maximum ball radius while excluding up to $z$ points from the clustering.\nThis problem holds fundamental significance in various domains such as machine learning, data mining, and database systems.\nThis paper addresses the fully dynamic version of the problem, where the point set undergoes continuous updates (insertions and deletions) over time. The objective is to maintain an approximate $(k,z)$-center clustering with efficient update times. We propose a novel fully dynamic algorithm that maintains a $(4+\\epsilon)$-approximate solution to the $(k,z)$-center clustering problem that covers all but at most $(1+\\epsilon)z$ points at any time in the sequence with probability $1-k/e^{\\Omega(\\log k)}$. 
The algorithm achieves an expected amortized update time of $O(\\epsilon^{-2} k^6 \\log(k) \\log(\\Delta))$, and is applicable to general metric spaces. Our dynamic algorithm presents a significant improvement over the recent dynamic $(14+\\epsilon)$-approximation algorithm by Chan, Lattanzi, Sozio, and Wang for this problem.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95343"}
{"video_file": "Ouc1F0Sfb7_39025748.mp4", "openreview_id": "Ouc1F0Sfb7", "slideslive_id": 39025748, "venue": "nips2024", "title": "Cost-aware Bayesian Optimization via the Pandora's Box Gittins Index", "status": "Poster", "keywords": "Bayesian optimization;acquisition function;cost-per-sample;Gittins index;Pandora's box", "tldr": "We introduce a connection between cost-aware Bayesian optimization and the Pandora's Box problem from economics, and use it to derive a novel cost-aware acquisition function class with promising performance.", "abstract": "Bayesian optimization is a technique for efficiently optimizing unknown functions in a black-box manner. To handle practical settings where gathering data requires use of finite resources, it is desirable to explicitly incorporate function evaluation costs into Bayesian optimization policies. To understand how to do so, we develop a previously-unexplored connection between cost-aware Bayesian optimization and the Pandora's Box problem, a decision problem from economics. The Pandora's Box problem admits a Bayesian-optimal solution based on an expression called the Gittins index, which can be reinterpreted as an acquisition function. We study the use of this acquisition function for cost-aware Bayesian optimization, and demonstrate empirically that it performs well, particularly in medium-high dimensions. We further show that this performance carries over to classical Bayesian optimization without explicit evaluation costs. Our work constitutes a first step towards integrating techniques from Gittins index theory into Bayesian optimization.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95339"}
{"video_file": "Oy2x0Xfx0u_39024850.mp4", "openreview_id": "Oy2x0Xfx0u", "slideslive_id": 39024850, "venue": "nips2024", "title": "What do Graph Neural Networks learn? Insights from Tropical Geometry", "status": "Poster", "keywords": "Graph Representation Learning;Graph Neural Networks;Geometric Complexity;Message Passing;Learning Theory", "tldr": "We leverage tools from tropical geometry to establish several new results about ReLU MPNNs (including commonly used architectures).", "abstract": "Graph neural networks (GNNs) have been analyzed from multiple perspectives, including the WL-hierarchy, which exposes limits on their expressivity to distinguish graphs. However, characterizing the class of functions that they learn has remained unresolved. We address this fundamental question for message passing GNNs under ReLU activations, i.e., the de-facto choice for most GNNs.\nWe first show that such GNNs learn tropical rational signomial maps or continuous piecewise linear functions, establishing an equivalence with feedforward networks (FNNs). We then elucidate the role of the choice of aggregation and update functions, and derive the first general upper and lower bounds on the geometric complexity (i.e., the number of linear regions), establishing new results for popular architectures such as GraphSAGE and GIN. 
We also introduce and theoretically analyze several new architectures to illuminate the relative merits of the feedforward and the message passing layers, and the tradeoffs involving depth and number of trainable parameters. Finally, we also characterize the decision boundary for node and graph classification tasks.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/95336"} +{"video_file": "OycU0bAus6_39026749.mp4", "openreview_id": "OycU0bAus6", "slideslive_id": 39026749, "venue": "nips2024", "title": "DenoiseRep: Denoising Model for Representation Learning", "status": "Oral", "keywords": "Diffusion Model;Representation Learning;Generative Model;Discriminative Models", "tldr": "DenoiseRep is a computation-free, label-optional and model-irrelevant algorithm to incrementally improve representation learning.", "abstract": "The denoising model has been proven a powerful generative model but has little exploration of discriminative tasks. Representation learning is important in discriminative tasks, which is defined as \"learning representations (or features) of the data that make it easier to extract useful information when building classifiers or other predictors\". In this paper, we propose a novel Denoising Model for Representation Learning (DenoiseRep) to improve feature discrimination with joint feature extraction and denoising. DenoiseRep views each embedding layer in a backbone as a denoising layer, processing the cascaded embedding layers as if we are recursively denoise features step-by-step. This unifies the frameworks of feature extraction and denoising, where the former progressively embeds features from low-level to high-level, and the latter recursively denoises features step-by-step. After that, DenoiseRep fuses the parameters of feature extraction and denoising layers, and theoretically demonstrates its equivalence before and after the fusion, thus making feature denoising computation-free. DenoiseRep is a label-free algorithm that incrementally improves features but also complementary to the label if available. Experimental results on various discriminative vision tasks, including re-identification (Market-1501, DukeMTMC-reID, MSMT17, CUHK-03, vehicleID), image classification (ImageNet, UB200, Oxford-Pet, Flowers), object detection (COCO), image segmentation (ADE20K) show stability and impressive improvements. We also validate its effectiveness on the CNN (ResNet) and Transformer (ViT, Swin, Vmamda) architectures.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95335"} +{"video_file": "P3v3x7HnV0_39025301.mp4", "openreview_id": "P3v3x7HnV0", "slideslive_id": 39025301, "venue": "nips2024", "title": "QueST: Self-Supervised Skill Abstractions for Learning Continuous Control", "status": "Poster", "keywords": "Behavior Clonning;Action Tokenization;Self Supervised Skill Abstraction;Few-shot Imitation Learning", "tldr": "QueST is a multitask latent-variable behavior model that learns sharable low-level skills by representing temporal action abstractions (1-2 secs motion) with a sequence of discrete codebook entries (skill-tokens).", "abstract": "Generalization capabilities, or rather a lack thereof, is one of the most important unsolved problems in the field of robot learning, and while several large scale efforts have set out to tackle this problem, unsolved it remains. 
In this paper, we hypothesize that learning temporal action abstractions using latent variable models (LVMs), which learn to map data to a compressed latent space and back, is a promising direction towards low-level skills that can readily be used for new tasks. Although several works have attempted to show this, they have generally been limited by architectures that do not faithfully capture sharable representations. To address this we present Quantized Skill Transformer (QueST), which learns a larger and more flexible latent encoding that is more capable of modeling the breadth of low-level skills necessary for a variety of tasks. To make use of this extra flexibility, QueST imparts causal inductive bias from the action sequence data into the latent space, leading to more semantically useful and transferable representations. We compare to state-of-the-art imitation learning and LVM baselines and see that QueST\u2019s architecture leads to strong performance on several multitask and few-shot learning benchmarks. Further results and videos are available at https://quest-model.github.io.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/95334"} +{"video_file": "P4s6FUpCbG_39025772.mp4", "openreview_id": "P4s6FUpCbG", "slideslive_id": 39025772, "venue": "nips2024", "title": "3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors", "status": "Spotlight", "keywords": "3D model enhancement;3D Guassian splatting;novel view synthesis;diffusion model;image restoration", "tldr": "We propose a method that exploits view-consistent 2D generative priors, i.e., a video diffusion model, to enhance 3D Gaussian splatting rendering quality.", "abstract": "Novel-view synthesis aims to generate novel views of a scene from multiple input images or videos, and recent advancements like 3D Gaussian splatting (3DGS) have achieved notable success in producing photorealistic renderings with efficient pipelines. However, generating high-quality novel views under challenging settings, such as sparse input views, remains difficult due to insufficient information in under-sampled areas, often resulting in noticeable artifacts. This paper presents 3DGS-Enhancer, a novel pipeline for enhancing the representation quality of 3DGS representations. We leverage 2D video diffusion priors to address the challenging 3D view consistency problem, reformulating it as achieving temporal consistency within a video generation process. 3DGS-Enhancer restores view- consistent latent features of rendered novel views and integrates them with the input views through a spatial-temporal decoder. The enhanced views are then used to fine-tune the initial 3DGS model, significantly improving its rendering performance. Extensive experiments on large-scale datasets of unbounded scenes demonstrate that 3DGS-Enhancer yields superior reconstruction performance and high-fidelity rendering results compared to state-of-the-art methods. 
The project webpage is https://xiliu8006.github.io/3DGS-Enhancer-project.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95333"} +{"video_file": "P5dEZeECGu_39025498.mp4", "openreview_id": "P5dEZeECGu", "slideslive_id": 39025498, "venue": "nips2024", "title": "FlexCap: Describe Anything in Images in Controllable Detail", "status": "Poster", "keywords": "vision-language models;dense captioning;open-ended object detection", "tldr": "New image captioning model that can generate short or detailed descriptions of specific image areas.", "abstract": "We introduce FlexCap, a vision-language model that generates region-specific descriptions of varying lengths. FlexCap is trained to produce length-conditioned captions for input boxes, enabling control over information density, with descriptions ranging from concise object labels to detailed captions. To achieve this, we create large-scale training datasets of image region descriptions with varying lengths from captioned web images. We demonstrate FlexCap\u2019s effectiveness in several applications: first, it achieves strong performance in dense captioning tasks on the Visual Genome dataset. Second, we show how FlexCap\u2019s localized descriptions can serve as input to a large language model to create a visual question answering (VQA) system, achieving state-of-the-art zero-shot performance on multiple VQA benchmarks. Our experiments illustrate FlexCap\u2019s utility for tasks including image labeling, object attribute recognition, and visual dialog. Project webpage: https://flex-cap.github.io.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95332"} +{"video_file": "P5yezHuMSS_39024724.mp4", "openreview_id": "P5yezHuMSS", "slideslive_id": 39024724, "venue": "nips2024", "title": "Monoculture in Matching Markets", "status": "Poster", "keywords": "algorithmic monoculture;matching markets;hiring", "tldr": "We study algorithmic monoculture in a matching markets framework", "abstract": "Algorithmic monoculture arises when many decision-makers rely on the same algorithm to evaluate applicants. An emerging body of work investigates possible harms of this kind of homogeneity, but has been limited by the challenge of incorporating market effects in which the preferences and behavior of many applicants and decision-makers jointly interact to determine outcomes.\nAddressing this challenge, we introduce a tractable theoretical model of algorithmic monoculture in a two-sided matching market with many participants. We use the model to analyze outcomes under monoculture (when decision-makers all evaluate applicants using a common algorithm) and under polyculture (when decision-makers evaluate applicants independently). 
All else equal, monoculture (1) selects less-preferred applicants when noise is well-behaved, (2) matches more applicants to their top choice, though individual applicants may be worse off depending on their value to decision-makers and risk tolerance, and (3) is more robust to disparities in the number of applications submitted.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/95331"} +{"video_file": "P6nVDZRZRB_39026437.mp4", "openreview_id": "P6nVDZRZRB", "slideslive_id": 39026437, "venue": "nips2024", "title": "Are Uncertainty Quantification Capabilities of Evidential Deep Learning a Mirage?", "status": "Poster", "keywords": "Uncertainty Quantification;Evidential Deep Learning;Out-of-distribution Data Detection;Bayesian Learning;Epistemic Uncertainty", "tldr": "We affirmatively answer the question in the title by providing comprehensive analyses and empirical evidence.", "abstract": "This paper questions the effectiveness of a modern predictive uncertainty quantification approach, called evidential deep learning (EDL), in which a single neural network model is trained to learn a meta distribution over the predictive distribution by minimizing a specific objective function. Despite their perceived strong empirical performance on downstream tasks, a line of recent studies by Bengs et al. identify limitations of the existing methods to conclude their learned epistemic uncertainties are unreliable, e.g., in that they are non-vanishing even with infinite data. Building on and sharpening such analysis, we 1) provide a sharper understanding of the asymptotic behavior of a wide class of EDL methods by unifying various objective functions; 2) reveal that the EDL methods can be better interpreted as an out-of-distribution detection algorithm based on energy-based-models; and 3) conduct extensive ablation studies to better assess their empirical effectiveness with real-world datasets. Through all these analyses, we conclude that even when EDL methods are empirically effective on downstream tasks, this occurs despite their poor uncertainty quantification capabilities. Our investigation suggests that incorporating model uncertainty can help EDL methods faithfully quantify uncertainties and further improve performance on representative downstream tasks, albeit at the cost of additional computational complexity.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/95329"} +{"video_file": "P8rTCT6g45_39024725.mp4", "openreview_id": "P8rTCT6g45", "slideslive_id": 39024725, "venue": "nips2024", "title": "Memory-Efficient LLM Training with Online Subspace Descent", "status": "Poster", "keywords": "Large Language Model Pretraining; Optimizer; Memory-Efficient LLM Training", "tldr": "a new family of optimization algorithms adaptively descending on projected subspaces found by online PCA", "abstract": "Recently, a wide range of memory-efficient LLM training algorithms have gained substantial popularity. These methods leverage the low-rank structure of gradients to project optimizer states into a subspace using projection matrix found by singular value decomposition (SVD). However, convergence of these algorithms is highly dependent on the update rules of their projection matrix. In this work, we provide the \\emph{first} convergence guarantee for arbitrary update rules of projection matrix. 
This guarantee is generally applicable to optimizers that can be analyzed with Hamiltonian Descent, including most common ones, such as LION, Adam. Inspired by our theoretical understanding, we propose Online Subspace Descent, a new family of subspace descent optimizer without SVD. Instead of updating projection matrix with eigenvectors, Online Subspace Descent updates projection matrix with online PCA. Online Subspace Descent is flexible and introduces only minimum overhead to training. We demonstrate that, for the task of pretraining LLaMA models ranging from 60M to 1B parameters on the C4 dataset, Online Subspace Descent achieves lower perplexity than state-of-the-art low-rank training methods across different settings and narrows the gap with full-rank baselines.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/95328"}
{"video_file": "PAWQvrForJ_39027674.mp4", "openreview_id": "PAWQvrForJ", "slideslive_id": 39027674, "venue": "nips2024", "title": "Membership Inference Attacks against Fine-tuned Large Language Models via Self-prompt Calibration", "status": "Poster", "keywords": "Large Language Models;Membership Inference Attacks;Privacy and Security", "tldr": "This work proposes a practical membership inference attack against fine-tuned large language models.", "abstract": "Membership Inference Attacks (MIA) aim to infer whether a target data record has been utilized for model training or not. Existing MIAs designed for large language models (LLMs) can be bifurcated into two types: reference-free and reference-based attacks. Although reference-based attacks appear promising performance by calibrating the probability measured on the target model with reference models, this illusion of privacy risk heavily depends on a reference dataset that closely resembles the training set. Both two types of attacks are predicated on the hypothesis that training records consistently maintain a higher probability of being sampled. However, this hypothesis heavily relies on the overfitting of target models, which will be mitigated by multiple regularization methods and the generalization of LLMs. Thus, these reasons lead to high false-positive rates of MIAs in practical scenarios. We propose a Membership Inference Attack based on Self-calibrated Probabilistic Variation (SPV-MIA). Specifically, we introduce a self-prompt approach, which constructs the dataset to fine-tune the reference model by prompting the target LLM itself. In this manner, the adversary can collect a dataset with a similar distribution from public APIs. Furthermore, we introduce probabilistic variation, a more reliable membership signal based on LLM memorization rather than overfitting, from which we rediscover the neighbour attack with theoretical grounding. Comprehensive evaluation conducted on three datasets and four exemplary LLMs shows that SPV-MIA raises the AUC of MIAs from 0.7 to a significantly high level of 0.9. 
Our code and dataset are available at: https://github.com/tsinghua-fib-lab/NeurIPS2024_SPV-MIA", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/95327"} +{"video_file": "PEEqnXlSCk_39025811.mp4", "openreview_id": "PEEqnXlSCk", "slideslive_id": 39025811, "venue": "nips2024", "title": "SDP4Bit: Toward 4-bit Communication Quantization in Sharded Data Parallelism for LLM Training", "status": "Poster", "keywords": "LLM Training;Quantization;Communication Reduction;Collective Communication", "tldr": "The first work that successfully reduces both gradients and weights to nearly 4 bits without compromising training accuracy in distributed training of large language models.", "abstract": "Recent years have witnessed a clear trend towards language models with an ever-increasing number of parameters, as well as the growing training overhead and memory usage. Distributed training, particularly through Sharded Data Parallelism (ShardedDP) which partitions optimizer states among workers, has emerged as a crucial technique to mitigate training time and memory usage. Yet, a major challenge in the scalability of ShardedDP is the intensive communication of weights and gradients. While compression techniques can alleviate this issue, they often result in worse accuracy. Driven by this limitation, we propose SDP4Bit (Toward 4Bit Communication Quantization in Sharded Data Parallelism for LLM Training), which effectively reduces the communication of weights and gradients to nearly 4 bits via two novel techniques: quantization on weight differences, and two-level gradient smooth quantization. Furthermore, SDP4Bit presents an algorithm-system co-design with runtime optimization to minimize the computation overhead of compression. Additional to the theoretical guarantees of convergence, we empirically evaluate the accuracy of SDP4Bit on the pre-training of GPT models with up to 6.7 billion parameters, and the results demonstrate a negligible impact on training loss. Furthermore, speed experiments show that SDP4Bit achieves up to 4.08\u00d7 speedup in end-to-end throughput on a scale of 128 GPUs.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/95323"} +{"video_file": "PGOuBHYdbr_39028418.mp4", "openreview_id": "PGOuBHYdbr", "slideslive_id": 39028418, "venue": "nips2024", "title": "Thompson Sampling For Combinatorial Bandits: Polynomial Regret and Mismatched Sampling Paradox", "status": "Spotlight", "keywords": "Combinatorial bandits;Thomspon Sampling", "tldr": "We propose a modified version of Thompson sampling for combinatorial bandits, the first that does not exhibit exponential regret.", "abstract": "We consider Thompson Sampling (TS) for linear combinatorial semi-bandits and subgaussian rewards. We propose the first known TS whose finite-time regret does not scale exponentially with the dimension of the problem. We further show the mismatched sampling paradox: A learner who knows the rewards distributions and samples from the correct posterior distribution can perform exponentially worse than a learner who does not know the rewards and simply samples from a well-chosen Gaussian posterior. 
The code used to generate the experiments is available at https://github.com/RaymZhang/CTS-Mismatched-Paradox", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/95322"} +{"video_file": "PK8xOCBQRO_39026363.mp4", "openreview_id": "PK8xOCBQRO", "slideslive_id": 39026363, "venue": "nips2024", "title": "Transfer Learning for Latent Variable Network Models", "status": "Poster", "keywords": "Transfer learning;network estimation;latent variable models", "tldr": "Transfer learning gets vanishing reconstruction error on the entire target network given o(1) fraction of the target nodes", "abstract": "We study transfer learning for estimation in latent variable network models. In our setting, the conditional edge probability matrices given the latent variables are represented by $P$ for the source and $Q$ for the target. We wish to estimate $Q$ given two kinds of data: (1) edge data from a subgraph induced by an $o(1)$ fraction of the nodes of $Q$, and (2) edge data from all of $P$. If the source $P$ has no relation to the target $Q$, the estimation error must be $\\Omega(1)$. However, we show that if the latent variables are shared, then vanishing error is possible. We give an efficient algorithm that utilizes the ordering of a suitably defined graph distance. Our algorithm achieves $o(1)$ error and does not assume a parametric form on the source or target networks. Next, for the specific case of Stochastic Block Models we prove a minimax lower bound and show that a simple algorithm achieves this rate. Finally, we empirically demonstrate our algorithm's use on real-world and simulated graph transfer problems.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95319"} +{"video_file": "PLbFid00aU_39025692.mp4", "openreview_id": "PLbFid00aU", "slideslive_id": 39025692, "venue": "nips2024", "title": "The Impact of Geometric Complexity on Neural Collapse in Transfer Learning", "status": "Poster", "keywords": "transfer learning;geometric complexity;neural collapse;implicit bias;flatness;generalization bounds", "tldr": "We show that pre-trained models with lower geometric complexity lead to lower neural collapse and better transfer learning performance, particularly in the few-shot setting.", "abstract": "Many of the recent advances in computer vision and language models can be attributed to the success of transfer learning via the pre-training of large foundation models. However, a theoretical framework which explains this empirical success is incomplete and remains an active area of research. Flatness of the loss surface and neural collapse have recently emerged as useful pre-training metrics which shed light on the implicit biases underlying pre-training. In this paper, we explore the geometric complexity of a model's learned representations as a fundamental mechanism that relates these two concepts. We show through experiments and theory that mechanisms which affect the geometric complexity of the pre-trained network also influence the neural collapse. 
Furthermore, we show how this effect of the geometric complexity generalizes to the neural collapse of new classes as well, thus encouraging better performance on downstream tasks, particularly in the few-shot setting.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95317"} +{"video_file": "PPdJPIO3mV_39024581.mp4", "openreview_id": "PPdJPIO3mV", "slideslive_id": 39024581, "venue": "nips2024", "title": "Accelerating Transformers with Spectrum-Preserving Token Merging", "status": "Poster", "keywords": "token merging;vision transformer;model compression", "tldr": "a new method for token merging in transformer architecture", "abstract": "Increasing the throughput of the Transformer architecture, a foundational component used in numerous state-of-the-art models for vision and language tasks (e.g., GPT, LLaVa), is an important problem in machine learning. One recent and effective strategy is to merge token representations within Transformer models, aiming to reduce computational and memory requirements while maintaining accuracy. Prior work has proposed algorithms based on Bipartite Soft Matching (BSM), which divides tokens into distinct sets and merges the top\nk\nsimilar tokens. However, these methods have significant drawbacks, such as sensitivity to token-splitting strategies and damage to informative tokens in later layers. This paper presents a novel paradigm called PiToMe, which prioritizes the preservation of informative tokens using an additional metric termed the \\textit{energy score}. This score identifies large clusters of similar tokens as high-energy, indicating potential candidates for merging, while smaller (unique and isolated) clusters are considered as low-energy and preserved. Experimental findings demonstrate that PiToMe saved from 40-60% FLOPs of the base models while exhibiting superior off-the-shelf performance on image classification (0.5% average performance drop of ViT-MAEH compared to 2.6% as baselines), image-text retrieval (0.3% average performance drop of Clip on Flick30k compared to 4.5% as others), and analogously in visual questions answering with LLaVa-7B. Furthermore, PiToMe is theoretically shown to preserve intrinsic spectral properties to the original token space under mild conditions.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95316"} +{"video_file": "PQt6Vg2X5u_39027544.mp4", "openreview_id": "PQt6Vg2X5u", "slideslive_id": 39027544, "venue": "nips2024", "title": "Recursive PAC-Bayes: A Frequentist Approach to Sequential Prior Updates with No Information Loss", "status": "Spotlight", "keywords": "PAC-Bayes;data-dependent prior", "tldr": "We solve a long-standing open problem on how to do sequential prior updates in the frequentist framework (as it has always been done in the Bayesian framework) without losing information along the way.", "abstract": "PAC-Bayesian analysis is a frequentist framework for incorporating prior knowledge into learning. It was inspired by Bayesian learning, which allows sequential data processing and naturally turns posteriors from one processing step into priors for the next. However, despite two and a half decades of research, the ability to update priors sequentially without losing confidence information along the way remained elusive for PAC-Bayes. 
While PAC-Bayes allows construction of data-informed priors, the final confidence intervals depend only on the number of points that were not used for the construction of the prior, whereas confidence information in the prior, which is related to the number of points used to construct the prior, is lost. This limits the possibility and benefit of sequential prior updates, because the final bounds depend only on the size of the final batch.\nWe present a novel and, in retrospect, surprisingly simple and powerful PAC-Bayesian procedure that allows sequential prior updates with no information loss. The procedure is based on a novel decomposition of the expected loss of randomized classifiers. The decomposition rewrites the loss of the posterior as an excess loss relative to a downscaled loss of the prior plus the downscaled loss of the prior, which is bounded recursively. As a side result, we also present a generalization of the split-kl and PAC-Bayes-split-kl inequalities to discrete random variables, which we use for bounding the excess losses, and which can be of independent interest. In empirical evaluation the new procedure significantly outperforms state-of-the-art.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95315"} +{"video_file": "PRBsEz8rnV_39026102.mp4", "openreview_id": "PRBsEz8rnV", "slideslive_id": 39026102, "venue": "nips2024", "title": "No Train, all Gain: Self-Supervised Gradients Improve Deep Frozen Representations", "status": "Poster", "keywords": "self-supervised;gradients;computer vision;transformers;k-nearest neighbor;classification;in-context learning;clustering;retrieval", "tldr": "We propose using self-supervised gradients to enhance pretrained embedding features and achieve significant improvements in k-nearest neighbor classification, in-context scene understanding, linear probing and clustering.", "abstract": "This paper introduces FUNGI, Features from UNsupervised GradIents, a method to enhance the features of transformer encoders by leveraging self-supervised gradients. Our method is simple: given any pretrained model, we first compute gradients from various self-supervised objectives for each input. These gradients are projected to a lower dimension and then concatenated with the model's output embedding. The resulting features are evaluated on k-nearest neighbor classification over 11 datasets from vision, 5 from natural language processing, and 2 from audio. Across backbones spanning various sizes and pretraining strategies, FUNGI features provide consistent performance improvements over the embeddings. We also show that using FUNGI features can benefit linear classification, clustering and image retrieval, and that they significantly improve the retrieval-based in-context scene understanding abilities of pretrained models, for example improving upon DINO by +17% for semantic segmentation - without any training. Code is available at https://github.com/WalterSimoncini/fungivision.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95313"} +{"video_file": "PSLH5q7PFo_39027734.mp4", "openreview_id": "PSLH5q7PFo", "slideslive_id": 39027734, "venue": "nips2024", "title": "Active preference learning for ordering items in- and out-of-sample", "status": "Poster", "keywords": "ordering;active learning;preference learning;medical imaging;human feedback;pairwise comparison", "tldr": "Active preference learning supports fast ordering of items in- and out-of-sample. 
Sample complexity analysis justifies active sampling criteria.", "abstract": "Learning an ordering of items based on pairwise comparisons is useful when items are difficult to rate consistently on an absolute scale, for example, when annotators have to make subjective assessments. When exhaustive comparison is infeasible, actively sampling item pairs can reduce the number of annotations necessary for learning an accurate ordering. However, many algorithms ignore shared structure between items, limiting their sample efficiency and precluding generalization to new items. It is also common to disregard how noise in comparisons varies between item pairs, despite it being informative of item similarity. In this work, we study active preference learning for ordering items with contextual attributes, both in- and out-of-sample. We give an upper bound on the expected ordering error of a logistic preference model as a function of which items have been compared. Next, we propose an active learning strategy that samples items to minimize this bound by accounting for aleatoric and epistemic uncertainty in comparisons. We evaluate the resulting algorithm, and a variant aimed at reducing model misspecification, in multiple realistic ordering tasks with comparisons made by human annotators. Our results demonstrate superior sample efficiency and generalization compared to non-contextual ranking approaches and active preference learning baselines.", "primary_area": "active_learning", "site": "https://neurips.cc/virtual/2024/poster/95312"} +{"video_file": "PSPtj26Lbp_39026883.mp4", "openreview_id": "PSPtj26Lbp", "slideslive_id": 39026883, "venue": "nips2024", "title": "L4GM: Large 4D Gaussian Reconstruction Model", "status": "Poster", "keywords": "4D Reconstruction; 4D Generation", "tldr": "We present the first 4D Large Reconstruction Model that produces animated objects from a single-view video input -- in a single feed-forward pass that takes only a second.", "abstract": "We present L4GM, the first 4D Large Reconstruction Model that produces animated objects from a single-view video input -- in a single feed-forward pass that takes only a second. Key to our success is a novel dataset of multiview videos containing curated, rendered animated objects from Objaverse. This dataset depicts 44K diverse objects with 110K animations rendered in 48 viewpoints, resulting in 12M videos with a total of 300M frames. We keep our L4GM simple for scalability and build directly on top of LGM, a pretrained 3D Large Reconstruction Model that outputs 3D Gaussian ellipsoids from multiview image input. L4GM outputs a per-frame 3D Gaussian splat representation from video frames sampled at a low fps and then upsamples the representation to a higher fps to achieve temporal smoothness. We add temporal self-attention layers to the base LGM to help it learn consistency across time, and utilize a per-timestep multiview rendering loss to train the model. The representation is upsampled to a higher framerate by training an interpolation model which produces intermediate 3D Gaussian representations. 
We showcase that L4GM that is only trained on synthetic data generalizes well on in-the-wild videos, producing high quality animated 3D assets.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95310"} +{"video_file": "PThi9hf9UT_39028099.mp4", "openreview_id": "PThi9hf9UT", "slideslive_id": 39028099, "venue": "nips2024", "title": "Mutual Information Estimation via $f$-Divergence and Data Derangements", "status": "Poster", "keywords": "mutual information;variational divergence;f-divergence;neural estimators;permutation;derangement", "tldr": "A new method for estimating mutual information exploiting the variational representation of the \nf\n-divergence and a derangement training strategy", "abstract": "Estimating mutual information accurately is pivotal across diverse applications, from machine learning to communications and biology, enabling us to gain insights into the inner mechanisms of complex systems. Yet, dealing with high-dimensional data presents a formidable challenge, due to its size and the presence of intricate relationships. Recently proposed neural methods employing variational lower bounds on the mutual information have gained prominence. However, these approaches suffer from either high bias or high variance, as the sample size and the structure of the loss function directly influence the training process. In this paper, we propose a novel class of discriminative mutual information estimators based on the variational representation of the\nf\n-divergence. We investigate the impact of the permutation function used to obtain the marginal training samples and present a novel architectural solution based on derangements. The proposed estimator is flexible since it exhibits an excellent bias/variance trade-off. The comparison with state-of-the-art neural estimators, through extensive experimentation within established reference scenarios, shows that our approach offers higher accuracy and lower complexity.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95307"} +{"video_file": "PWkjxjgGLP_39026038.mp4", "openreview_id": "PWkjxjgGLP", "slideslive_id": 39026038, "venue": "nips2024", "title": "Hierarchical Visual Feature Aggregation for OCR-Free Document Understanding", "status": "Poster", "keywords": "Document Understanding;Multi-modal Learning", "tldr": "An OCR-free document understanding framework that efficiently processes multi-scale visual features while learning to read text with layout by position-aware instruction tuning.", "abstract": "We present a novel OCR-free document understanding framework based on pretrained Multimodal Large Language Models (MLLMs). Our approach employs multi-scale visual features to effectively handle various font sizes within document images. To address the increasing costs of considering the multi-scale visual inputs for MLLMs, we propose the Hierarchical Visual Feature Aggregation (HVFA) module, designed to reduce the number of input tokens to LLMs. Leveraging a feature pyramid with cross-attentive pooling, our approach effectively manages the trade-off between information loss and efficiency without being affected by varying document image sizes. Furthermore, we introduce a novel instruction tuning task, which facilitates the model's text-reading capability by learning to predict the relative positions of input text, eventually minimizing the risk of truncated text caused by the limited capacity of LLMs. 
Comprehensive experiments validate the effectiveness of our approach, demonstrating superior performance in various document understanding tasks.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95304"} +{"video_file": "PXGY9Fz8vC_39027166.mp4", "openreview_id": "PXGY9Fz8vC", "slideslive_id": 39027166, "venue": "nips2024", "title": "Who\u2019s Gaming the System? A Causally-Motivated Approach for Detecting Strategic Adaptation", "status": "Poster", "keywords": "causal inference;strategic classification;gaming;healthcare", "tldr": "We propose a causal effect estimation framework for detecting which agents are exploiting a payout model.", "abstract": "In many settings, machine learning models may be used to inform decisions that impact individuals or entities who interact with the model. Such entities, or agents, may game model decisions by manipulating their inputs to the model to obtain better outcomes and maximize some utility. We consider a multi-agent setting where the goal is to identify the \u201cworst offenders:\u201d agents that are gaming most aggressively. However, identifying such agents is difficult without knowledge of their utility function. Thus, we introduce a framework in which each agent\u2019s tendency to game is parameterized via a scalar. We show that this gaming parameter is only partially identifiable. By recasting the problem as a causal effect estimation problem where different agents represent different \u201ctreatments,\u201d we prove that a ranking of all agents by their gaming parameters is identifiable. We present empirical results in a synthetic data study validating the usage of causal effect estimation for gaming detection and show in a case study of diagnosis coding behavior in the U.S. that our approach highlights features associated with gaming.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/95302"} +{"video_file": "PZCiWtQjAw_39024580.mp4", "openreview_id": "PZCiWtQjAw", "slideslive_id": 39024580, "venue": "nips2024", "title": "Continual Audio-Visual Sound Separation", "status": "Poster", "keywords": "Audio-Visual Learning;Sound Separation;Continual Learning", "tldr": "In this paper, we introduce a novel continual audio-visual sound separation task and an approach named ContAV-Sep for the proposed task.", "abstract": "In this paper, we introduce a novel continual audio-visual sound separation task, aiming to continuously separate sound sources for new classes while preserving performance on previously learned classes, with the aid of visual guidance. This problem is crucial for practical visually guided auditory perception as it can significantly enhance the adaptability and robustness of audio-visual sound separation models, making them more applicable for real-world scenarios where encountering new sound sources is commonplace. The task is inherently challenging as our models must not only effectively utilize information from both modalities in current tasks but also preserve their cross-modal association in old tasks to mitigate catastrophic forgetting during audio-visual continual learning. To address these challenges, we propose a novel approach named ContAV-Sep (\nCont\ninual\nA\nudio-\nV\nisual Sound\nSep\naration). 
ContAV-Sep presents a novel Cross-modal Similarity Distillation Constraint (CrossSDC) to uphold the cross-modal semantic similarity through incremental tasks and retain previously acquired knowledge of semantic similarity in old models, mitigating the risk of catastrophic forgetting. The CrossSDC can seamlessly integrate into the training process of different audio-visual sound separation frameworks. Experiments demonstrate that ContAV-Sep can effectively mitigate catastrophic forgetting and achieve significantly better performance compared to other continual learning baselines for audio-visual sound separation. Code is available at: https://github.com/weiguoPian/ContAV-Sep_NeurIPS2024.", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/95301"} +{"video_file": "PaqJ71zf1M_39027355.mp4", "openreview_id": "PaqJ71zf1M", "slideslive_id": 39027355, "venue": "nips2024", "title": "Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition", "status": "Poster", "keywords": "Semi-supervised learning;Long-tail learning;Weakly-supervised learning", "tldr": "This paper introduces a probabilistic framework for long-tailed learning and extends to semi-supervised learning based on continuous pseudo-labels.", "abstract": "Long-tailed semi-supervised learning poses a significant challenge in training models with limited labeled data exhibiting a long-tailed label distribution. Current state-of-the-art LTSSL approaches heavily rely on high-quality pseudo-labels for large-scale unlabeled data. However, these methods often neglect the impact of representations learned by the neural network and struggle with real-world unlabeled data, which typically follows a different distribution than labeled data. This paper introduces a novel probabilistic framework that unifies various recent proposals in long-tail learning. Our framework derives the class-balanced contrastive loss through Gaussian kernel density estimation. We introduce a continuous contrastive learning method, CCL, extending our framework to unlabeled data using reliable and smoothed pseudo-labels. By progressively estimating the underlying label distribution and optimizing its alignment with model predictions, we tackle the diverse distribution of unlabeled data in real-world scenarios. Extensive experiments across multiple datasets with varying unlabeled data distributions demonstrate that CCL consistently outperforms prior state-of-the-art methods, achieving over 4% improvement on the ImageNet-127 dataset. The supplementary material includes the source code for reproducibility.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95298"} +{"video_file": "Pc9LLjTL5f_39025214.mp4", "openreview_id": "Pc9LLjTL5f", "slideslive_id": 39025214, "venue": "nips2024", "title": "Elo Uncovered: Robustness and Best Practices in Language Model Evaluation", "status": "Poster", "keywords": "Elo Rating System;Language Model Evaluation;Reliability;Robustness;Reproducibility;LLMs Ranking", "tldr": "This paper probes the Elo rating system in LLM evaluations, revealing its inherent volatility and providing empirical guidelines for ensuring robust and accurate model ranking in real-world scenarios.", "abstract": "In Natural Language Processing (NLP), the Elo rating system, originally designed for ranking players in dynamic games such as chess, is increasingly being used to evaluate Large Language Models (LLMs) through \"A vs B\" paired comparisons. 
However, while popular, the system's suitability for assessing entities with constant skill levels, such as LLMs, remains relatively unexplored. We study two fundamental axioms that evaluation methods should adhere to: reliability and transitivity. We conduct an extensive evaluation of Elo behavior across simulated and real-world scenarios, demonstrating that individual Elo computations can exhibit significant volatility. We show that both axioms are not always satisfied, raising questions about the reliability of current comparative evaluations of LLMs. If the current use of Elo scores is intended to substitute the costly head-to-head comparison of LLMs, it is crucial to ensure the ranking is as robust as possible. Guided by the axioms, our findings offer concrete guidelines for enhancing the reliability of LLM evaluation methods, suggesting a need for reassessment of existing comparative approaches.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/95297"} +{"video_file": "Pezt0xttae_39028859.mp4", "openreview_id": "Pezt0xttae", "slideslive_id": 39028859, "venue": "nips2024", "title": "DapperFL: Domain Adaptive Federated Learning with Model Fusion Pruning for Edge Devices", "status": "Oral", "keywords": "Federated learning;Model pruning;Domain adaptation;Edge intelligence", "tldr": "A heterogeneous FL framework DapperFL to enhance model performance across multiple domains on resource-limited edge devices.", "abstract": "Federated learning (FL) has emerged as a prominent machine learning paradigm in edge computing environments, enabling edge devices to collaboratively optimize a global model without sharing their private data. However, existing FL frameworks suffer from efficacy deterioration due to the system heterogeneity inherent in edge computing, especially in the presence of domain shifts across local data. In this paper, we propose a heterogeneous FL framework DapperFL, to enhance model performance across multiple domains. In DapperFL, we introduce a dedicated Model Fusion Pruning (MFP) module to produce personalized compact local models for clients to address the system heterogeneity challenges. The MFP module prunes local models with fused knowledge obtained from both local and remaining domains, ensuring robustness to domain shifts. Additionally, we design a Domain Adaptive Regularization (DAR) module to further improve the overall performance of DapperFL. The DAR module employs regularization generated by the pruned model, aiming to learn robust representations across domains. Furthermore, we introduce a specific aggregation algorithm for aggregating heterogeneous local models with tailored architectures and weights. We implement DapperFL on a real-world FL platform with heterogeneous clients. Experimental results on benchmark datasets with multiple domains demonstrate that DapperFL outperforms several state-of-the-art FL frameworks by up to 2.28%, while significantly achieving model volume reductions ranging from 20% to 80%. 
Our code is available at: https://github.com/jyzgh/DapperFL.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/95295"} +{"video_file": "Pf7kdIjHRf_39026420.mp4", "openreview_id": "Pf7kdIjHRf", "slideslive_id": 39026420, "venue": "nips2024", "title": "Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers", "status": "Spotlight", "keywords": "heterogeneous robot learning;heterogeneous pre-trained transformer;scaling law for robotics;robotic foundation model", "tldr": "We propose Heterogeneous Pre-trained Transformers (HPT) that pre-train policy representation across different robot embodiments and tasks, scale it to 1B parameters and 50 datasets, and demonstrate transfer in simulation and real world evaluation.", "abstract": "One of the roadblocks for training generalist robotic models today is heterogeneity. Previous robot learning methods often collect data to train with one specific embodiment for one task, which is expensive and prone to overfitting. This work studies the problem of learning policy representations through heterogeneous pre-training on robot data across different embodiments and tasks at scale. We propose Heterogeneous Pre-trained Transformers (HPT), which pre-train a large, shareable trunk of a policy neural network to learn a task and embodiment agnostic shared representation. This general architecture aligns the specific proprioception and vision inputs from distinct embodiments to a short sequence of tokens and then processes such tokens to map to control robots for different tasks. Leveraging the recent large-scale multi-embodiment real-world robotic datasets as well as simulation, deployed robots, and human video datasets, we investigate pre-training policies across heterogeneity. We conduct experiments to investigate the scaling behaviors of training objectives, to the extent of 52 datasets. HPTs outperform several baselines and enhance the fine-tuned policy performance by over 20% on unseen tasks in multiple simulator benchmarks and real-world settings. See the project website (liruiw.github.io/hpt) for code and videos.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/95294"} +{"video_file": "PfOeAKxx6i_39024374.mp4", "openreview_id": "PfOeAKxx6i", "slideslive_id": 39024374, "venue": "nips2024", "title": "Algebraic Positional Encodings", "status": "Spotlight", "keywords": "positional encodings;transformers;structured attention;group theory", "tldr": "Positional encodings as group homomorphisms: it's beautiful and it works.", "abstract": "We introduce a novel positional encoding strategy for Transformer-style models, addressing the shortcomings of existing, often ad hoc, approaches. Our framework implements a flexible mapping from the algebraic specification of a domain to a positional encoding scheme where positions are interpreted as orthogonal operators. This design preserves the structural properties of the source domain, thereby ensuring that the end-model upholds them. The framework can accommodate various structures, including sequences, grids and trees, but also their compositions. We conduct a series of experiments demonstrating the practical applicability of our method. Our results suggest performance on par with or surpassing the current state of the art, without hyper-parameter optimizations or ``task search'' of any kind. 
Code is available through https://aalto-quml.github.io/ape/.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95293"} +{"video_file": "PhLlE8UOEv_39028350.mp4", "openreview_id": "PhLlE8UOEv", "slideslive_id": 39028350, "venue": "nips2024", "title": "Provable Posterior Sampling with Denoising Oracles via Tilted Transport", "status": "Poster", "keywords": "diffusion based model; posterior sampling; inverse problem; provable sampling", "tldr": "We develop the tilted transport technique, which leverages the prior score to exactly transform the original posterior sampling problem into a new one that is provably easier to sample.", "abstract": "Score-based diffusion models have significantly advanced high-dimensional data generation across various domains, by learning a denoising oracle (or score) from datasets. From a Bayesian perspective, they offer a realistic modeling of data priors and facilitate solving inverse problems through posterior sampling. Although many heuristic methods have been developed recently for this purpose, they lack the quantitative guarantees needed in many scientific applications. This work addresses the topic from two perspectives. We first present a hardness result indicating that a generic method leveraging the prior denoising oracle for posterior sampling becomes infeasible as soon as the measurement operator is mildly ill-conditioned. We next develop the tilted transport technique, which leverages the quadratic structure of the log-likelihood in linear inverse problems in combination with the prior denoising oracle to exactly transform the original posterior sampling problem into a new one that is provably easier to sample from. We quantify the conditions under which the boosted posterior is strongly log-concave, highlighting how task difficulty depends on the condition number of the measurement matrix and the signal-to-noise ratio. The resulting general scheme is shown to match the best-known sampling methods for Ising models, and is further validated on high-dimensional Gaussian mixture models.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95291"} +{"video_file": "PhjnK9KWOx_39028264.mp4", "openreview_id": "PhjnK9KWOx", "slideslive_id": 39028264, "venue": "nips2024", "title": "PSL: Rethinking and Improving Softmax Loss from Pairwise Perspective for Recommendation", "status": "Poster", "keywords": "Recommender Systems;Ranking Metrics Optimization;Surrogate Loss;Distributionally Robust Optimization", "tldr": "This paper rethinks Softmax Loss (SL) from a pairwise perspective, introducing a novel family of robust DCG surrogate losses to address the limitations of SL, termed Pairwise Softmax Loss (PSL).", "abstract": "Softmax Loss (SL) is widely applied in recommender systems (RS) and has demonstrated effectiveness. This work analyzes SL from a pairwise perspective, revealing two significant limitations: 1) the relationship between SL and conventional ranking metrics like DCG is not sufficiently tight; 2) SL is highly sensitive to false negative instances. Our analysis indicates that these limitations are primarily due to the use of the exponential function. To address these issues, this work extends SL to a new family of loss functions, termed Pairwise Softmax Loss (PSL), which replaces the exponential function in SL with other appropriate activation functions. 
While the revision is minimal, we highlight three merits of PSL: 1) it serves as a tighter surrogate for DCG with suitable activation functions; 2) it better balances data contributions; and 3) it acts as a specific BPR loss enhanced by Distributionally Robust Optimization (DRO). We further validate the effectiveness and robustness of PSL through empirical experiments. The code is available at https://github.com/Tiny-Snow/IR-Benchmark.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95290"} +{"video_file": "PmLty7tODm_39028308.mp4", "openreview_id": "PmLty7tODm", "slideslive_id": 39028308, "venue": "nips2024", "title": "Interpretable Mesomorphic Networks for Tabular Data", "status": "Poster", "keywords": "explainability;deep neural networks;tabular data;hypernetwork;interpretability;explainable benchmark;xai.", "tldr": "Explainable deep networks that are not only as accurate as their black-box deep-learning counterparts but also as interpretable as state-of-the-art explanation techniques.", "abstract": "Even though neural networks have been long deployed in applications involving tabular data, still existing neural architectures are not explainable by design. In this paper, we propose a new class of interpretable neural networks for tabular data that are both deep and linear at the same time (i.e. mesomorphic). We optimize deep hypernetworks to generate explainable linear models on a per-instance basis. As a result, our models retain the accuracy of black-box deep networks while offering free-lunch explainability for tabular data by design. Through extensive experiments, we demonstrate that our explainable deep networks have comparable performance to state-of-the-art classifiers on tabular data and outperform current existing methods that are explainable by design.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/95288"} +{"video_file": "PnlCHQrM69_39025047.mp4", "openreview_id": "PnlCHQrM69", "slideslive_id": 39025047, "venue": "nips2024", "title": "SemCoder: Training Code Language Models with Comprehensive Semantics Reasoning", "status": "Poster", "keywords": "Code Language Models;Program Semantics Learning;Code Execution Reasoning", "tldr": "This paper introduces SemCoder, a novel 6.7B Code LM trained with comprehensive program semantics from static code to dynamic execution, achieving superior performance in code generation and execution reasoning.", "abstract": "Code Large Language Models (Code LLMs) have excelled at tasks like code completion but often miss deeper semantics such as execution effects and dynamic states. This paper aims to bridge the gap between Code LLMs' reliance on static text data and the need for semantic understanding for complex tasks like debugging and program repair. We introduce a novel strategy, monologue reasoning, to train Code LLMs to reason comprehensive semantics, encompassing high-level functional descriptions, local execution effects of individual statements, and overall input/output behavior, thereby linking static code text with dynamic execution states. We begin by collecting PyX, a clean Python corpus of fully executable code samples with functional descriptions and test cases. We propose training Code LLMs not only to write code but also to understand code semantics by reasoning about key properties, constraints, and execution behaviors using natural language, mimicking human verbal debugging, i.e., rubber-duck debugging. 
This approach led to the development of SemCoder, a Code LLM with only 6.7B parameters, which shows competitive performance with GPT-3.5-turbo on code generation and execution reasoning tasks. SemCoder achieves 79.3% on HumanEval (GPT-3.5-turbo: 76.8%), 63.6% on CRUXEval-I (GPT-3.5-turbo: 50.3%), and 63.9% on CRUXEval-O (GPT-3.5-turbo: 59.0%). We also study the effectiveness of SemCoder's monologue-style execution reasoning compared to concrete scratchpad reasoning, showing that our approach integrates semantics from multiple dimensions more smoothly. Finally, we demonstrate the potential of applying learned semantics to improve Code LLMs' debugging and self-refining capabilities. Our data, code, and models are available at: https://github.com/ARiSE-Lab/SemCoder.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95287"} +{"video_file": "Pnv8C0bU9t_39027067.mp4", "openreview_id": "Pnv8C0bU9t", "slideslive_id": 39027067, "venue": "nips2024", "title": "LoQT: Low-Rank Adapters for Quantized Pretraining", "status": "Poster", "keywords": "Quantization;Low-Rank Adaptation;Memory Efficient Training;Large Language Models", "tldr": "LoQT enables efficient quantized pretraining of LLMs with results close to full-rank non-quantized models. It enables pretraining of a 13B LLM on a 24GB GPU without model parallel, checkpointing, or offloading strategies during training.", "abstract": "Despite advances using low-rank adapters and quantization, pretraining of large models on consumer hardware has not been possible without model sharding, offloading during training, or per-layer gradient updates. To address these limitations, we propose Low-Rank Adapters for Quantized Training (LoQT), a method for efficiently training quantized models. LoQT uses gradient-based tensor factorization to initialize low-rank trainable weight matrices that are periodically merged into quantized full-rank weight matrices. Our approach is suitable for both pretraining and fine-tuning models. We demonstrate this for language modeling and downstream task adaptation, finding that LoQT enables efficient training of models up to 7B parameters on a 24GB GPU. We also demonstrate the feasibility of training a 13B model using per-layer gradient updates on the same hardware.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/95286"} +{"video_file": "Po7iQKKT5b_39027631.mp4", "openreview_id": "Po7iQKKT5b", "slideslive_id": 39027631, "venue": "nips2024", "title": "Object segmentation from common fate: Motion energy processing enables human-like zero-shot generalization to random dot stimuli", "status": "Poster", "keywords": "common fate;motion energy;optical flow;figure-ground segmentation;humans vs machines", "tldr": "We use random dot stimuli that remove appearance but not motion information from videos to show substantial differences between human perception and state-of-the-art machine vision.", "abstract": "Humans excel at detecting and segmenting moving objects according to the {\\it Gestalt} principle of \u201ccommon fate\u201d. Remarkably, previous works have shown that human perception generalizes this principle in a zero-shot fashion to unseen textures or random dots. In this work, we seek to better understand the computational basis for this capability by evaluating a broad range of optical flow models and a neuroscience inspired motion energy model for zero-shot figure-ground segmentation of random dot stimuli. 
Specifically, we use the extensively validated motion energy model proposed by Simoncelli and Heeger in 1998 which is fitted to neural recordings in cortex area MT. We find that a cross section of 40 deep optical flow models trained on different datasets struggle to estimate motion patterns in random dot videos, resulting in poor figure-ground segmentation performance. Conversely, the neuroscience-inspired model significantly outperforms all optical flow models on this task. For a direct comparison to human perception, we conduct a psychophysical study using a shape identification task as a proxy to measure human segmentation performance. All state-of-the-art optical flow models fall short of human performance, but only the motion energy model matches human capability. This neuroscience-inspired model successfully addresses the lack of human-like zero-shot generalization to random dot stimuli in current computer vision models, and thus establishes a compelling link between the Gestalt psychology of human object perception and cortical motion processing in the brain.\nCode, models and datasets are available at https://github.com/mtangemann/motion_energy_segmentation", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/95285"} +{"video_file": "PoCs4jq7cV_39025676.mp4", "openreview_id": "PoCs4jq7cV", "slideslive_id": 39025676, "venue": "nips2024", "title": "Inference via Interpolation: Contrastive Representations Provably Enable Planning and Inference", "status": "Poster", "keywords": "contrastive learning;prediction;planning;inference;time-series", "tldr": "While prediction and planning over time series data is challenging, these problems have closed form solutions in terms of temporal contrastive representations", "abstract": "Given time series data, how can we answer questions like what will happen in the future?'' and how did we get here?'' These sorts of probabilistic inference questions are challenging when observations are high-dimensional. In this paper, we show how these questions can have compact, closed form solutions in terms of learned representations. The key idea is to apply a variant of contrastive learning to time series data. Prior work already shows that the representations learned by contrastive learning encode a probability ratio. By extending prior work to show that the marginal distribution over representations is Gaussian, we can then prove that joint distribution of representations is also Gaussian. Taken together, these results show that representations learned via temporal contrastive learning follow a Gauss-Markov chain, a graphical model where inference (e.g., prediction, planning) over representations corresponds to inverting a low-dimensional matrix. In one special case, inferring intermediate representations will be equivalent to interpolating between the learned representations. 
We validate our theory using numerical simulations on tasks up to 46-dimensions.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95284"} +{"video_file": "Pox8jNQOo5_39027054.mp4", "openreview_id": "Pox8jNQOo5", "slideslive_id": 39027054, "venue": "nips2024", "title": "Second-order forward-mode optimization of recurrent neural networks for neuroscience", "status": "Spotlight", "keywords": "computational neuroscience;recurrent neural networks;motor control", "tldr": "A second-order, memory-efficient optimization method for RNNs that does not require backpropagation, runs in wallclock time comparable to first-order methods, and successfully trains RNNs where Adam fails", "abstract": "A common source of anxiety for the computational neuroscience student is the question \u201cwill my recurrent neural network (RNN) model finally learn that task?\u201d. Unlike in machine learning where any architectural modification of an RNN (e.g. GRU or LSTM) is acceptable if it speeds up training, the RNN models trained as models of brain dynamics are subject to plausibility constraints that fundamentally exclude the usual machine learning hacks. The \u201cvanilla\u201d RNNs commonly used in computational neuroscience find themselves plagued by ill-conditioned loss surfaces that complicate training and significantly hinder our capacity to investigate the brain dynamics underlying complex tasks. Moreover, some tasks may require very long time horizons which backpropagation cannot handle given typical GPU memory limits. Here, we develop SOFO, a second-order optimizer that efficiently navigates loss surfaces whilst not requiring backpropagation. By relying instead on easily parallelized batched forward-mode differentiation, SOFO enjoys constant memory cost in time. Morever, unlike most second-order optimizers which involve inherently sequential operations, SOFO's effective use of GPU parallelism yields a per-iteration wallclock time essentially on par with first-order gradient-based optimizers. We show vastly superior performance compared to Adam on a number of RNN tasks, including a difficult double-reaching motor task and the learning of an adaptive Kalman filter algorithm trained over a long horizon.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/95282"} +{"video_file": "PqlKliEXyJ_39025541.mp4", "openreview_id": "PqlKliEXyJ", "slideslive_id": 39025541, "venue": "nips2024", "title": "LoD-Loc: Aerial Visual Localization using LoD 3D Map with Neural Wireframe Alignment", "status": "Poster", "keywords": "3D computer vision; Pose estimation; Privacy-preserving localization", "tldr": "A new method for aerial visual localization using LoD 3D map with neural wireframe alignment.", "abstract": "We propose a new method named LoD-Loc for visual localization in the air. Unlike existing localization algorithms, LoD-Loc does not rely on complex 3D representations and can estimate the pose of an Unmanned Aerial Vehicle (UAV) using a Level-of-Detail (LoD) 3D map. LoD-Loc mainly achieves this goal by aligning the wireframe derived from the LoD projected model with that predicted by the neural network. Specifically, given a coarse pose provided by the UAV sensor, LoD-Loc hierarchically builds a cost volume for uniformly sampled pose hypotheses to describe pose probability distribution and select a pose with maximum probability. 
Each cost within this volume measures the degree of line alignment between projected and predicted wireframes. LoD-Loc also devises a 6-DoF pose optimization algorithm to refine the previous result with a differentiable Gaussian-Newton method. As no public dataset exists for the studied problem, we collect two datasets with map levels of LoD3.0 and LoD2.0, along with real RGB queries and ground-truth pose annotations. We benchmark our method and demonstrate that LoD-Loc achieves excellent performance, even surpassing current state-of-the-art methods that use textured 3D models for localization. The code and dataset will be made available upon publication.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95281"} +{"video_file": "PuXYI4HOQU_39025243.mp4", "openreview_id": "PuXYI4HOQU", "slideslive_id": 39025243, "venue": "nips2024", "title": "Fundamental Convergence Analysis of Sharpness-Aware Minimization", "status": "Poster", "keywords": "Convergence Analysis;Deep Learning;Inexact Gradient Descent Methods;Neural Network;Sharpness-Aware Minimization", "tldr": "A convergence analysis for Sharpness-Aware Minimization and its variants.", "abstract": "The paper investigates the fundamental convergence properties of Sharpness-Aware Minimization (SAM), a recently proposed gradient-based optimization method (Foret et al., 2021) that significantly improves the generalization of deep neural networks. The convergence properties including the stationarity of accumulation points, the convergence of the sequence of gradients to the origin, the sequence of function values to the optimal value, and the sequence of iterates to the optimal solution are established for the method. The universality of the provided convergence analysis based on inexact gradient descent frameworks (Khanh et al., 2023b) allows its extensions to the normalized versions of SAM such as F-SAM (Li et al. 2024), VaSSO (Li & Giannakis, 2023), RSAM (Liu et al., 2022), and to the unnormalized versions of SAM such as USAM (Andriushchenko & Flammarion, 2022). Numerical experiments are conducted on classification tasks using deep learning models to confirm the practical aspects of our analysis.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95276"} +{"video_file": "Pwl9n4zlf5_39026272.mp4", "openreview_id": "Pwl9n4zlf5", "slideslive_id": 39026272, "venue": "nips2024", "title": "AutoManual: Constructing Instruction Manuals by LLM Agents via Interactive Environmental Learning", "status": "Poster", "keywords": "Large Language Models;AI Agents;planning;decision making;programming", "tldr": "The paper introduces AutoManual, a framework that enables LLM agents to autonomously adapt to new environments by interacting and generating comprehensive instruction manuals through online rule optimization.", "abstract": "Large Language Models (LLM) based agents have shown promise in autonomously completing tasks across various domains, e.g., robotics, games, and web navigation. However, these agents typically require elaborate design and expert prompts to solve tasks in specific domains, which limits their adaptability. We introduce AutoManual, a framework enabling LLM agents to autonomously build their understanding through interaction and adapt to new environments. AutoManual categorizes environmental knowledge into diverse rules and optimizes them in an online fashion by two agents: 1) The Planner codes actionable plans based on current rules for interacting with the environment. 
2) The Builder updates the rules through a well-structured rule system that facilitates online rule management and essential detail retention. To mitigate hallucinations in managing rules, we introduce a case-conditioned prompting strategy for the Builder. Finally, the Formulator agent compiles these rules into a comprehensive manual. The self-generated manual can not only improve the adaptability but also guide the planning of smaller LLMs while being human-readable. Given only one simple demonstration, AutoManual significantly improves task success rates, achieving 97.4% with GPT-4-turbo and 86.2% with GPT-3.5-turbo on ALFWorld benchmark tasks. The code is available at https://github.com/minghchen/automanual.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95273"} +{"video_file": "Q0KwoyZlSo_39025556.mp4", "openreview_id": "Q0KwoyZlSo", "slideslive_id": 39025556, "venue": "nips2024", "title": "On the Complexity of Learning Sparse Functions with Statistical and Gradient Queries", "status": "Poster", "keywords": "Statistical Query Complexity;Differtiable Learning;Sparse Functions;Leap Exponent", "tldr": "We study the power of differentiable learning compared to the statistical query (SQ) and correlation statistical query (CSQ) methods for the problem of learning sparse support of size P out of d coordinates with d>>P.", "abstract": "The goal of this paper is to investigate the complexity of gradient algorithms when learning sparse functions (juntas). We introduce a type of Statistical Queries (SQ), which we call Differentiable Learning Queries (DLQ), to model gradient queries on a specified loss with respect to an arbitrary model. We provide a tight characterization of the query complexity of DLQ for learning the support of a sparse function over generic product distributions. This complexity crucially depends on the loss function. For the squared loss, DLQ matches the complexity of Correlation Statistical Queries (CSQ)\u2014potentially much worse than SQ. But for other simple loss functions, including the \u2113_1 loss, DLQ always achieves the same complexity as SQ. We also provide evidence that DLQ can indeed capture learning with (stochastic) gradient descent by showing it correctly describes the complexity of learning with a two-layer neural network in the mean field regime and linear scaling.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95269"} +{"video_file": "Q4NWfStqVf_39028480.mp4", "openreview_id": "Q4NWfStqVf", "slideslive_id": 39028480, "venue": "nips2024", "title": "Nearly Minimax Optimal Regret for Multinomial Logistic Bandit", "status": "Poster", "keywords": "Bandit;Contextual Bandit;Multinomial Logistic Bandit;Minimax Regret", "tldr": "In this paper, we investigate an adversarial contextual multinomial logit (MNL) bandit problem and establish minimax optimal lower and upper regret bounds.", "abstract": "In this paper, we study the contextual multinomial logit (MNL) bandit problem in which a learning agent sequentially selects an assortment based on contextual information, and user feedback follows an MNL choice model. There has been a significant discrepancy between lower and upper regret bounds, particularly regarding the maximum assortment size K. Additionally, the variation in reward structures between these bounds complicates the quest for optimality. 
Under uniform rewards, where all items have the same expected reward, we establish a regret lower bound of \u03a9(d\u221a(T/K)) and propose a constant-time algorithm, OFU-MNL+, that achieves a matching upper bound of \u00d5(d\u221a(T/K)). We also provide instance-dependent minimax regret bounds under uniform rewards. Under non-uniform rewards, we prove a lower bound of \u03a9(d\u221aT) and an upper bound of \u00d5(d\u221aT), also achievable by OFU-MNL+. Our empirical studies support these theoretical findings. To the best of our knowledge, this is the first work in the contextual MNL bandit literature to prove minimax optimality --- for either uniform or non-uniform reward setting --- and to propose a computationally efficient algorithm that achieves this optimality up to logarithmic factors.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/95268"} +{"video_file": "Q5RYn6jagC_39025322.mp4", "openreview_id": "Q5RYn6jagC", "slideslive_id": 39025322, "venue": "nips2024", "title": "Understanding the Limits of Vision Language Models Through the Lens of the Binding Problem", "status": "Poster", "keywords": "visual reasoning;foundation models;multi-object reasoning;cognitive science", "tldr": "We show that vision language models exhibit human-like capacity constraints in multi-object visual reasoning and image generation.", "abstract": "Recent work has documented striking heterogeneity in the performance of state-of-the-art vision language models (VLMs), including both multimodal language models and text-to-image models. These models are able to describe and generate a diverse array of complex, naturalistic images, yet they exhibit surprising failures on basic multi-object reasoning tasks -- such as counting, localization, and simple forms of visual analogy -- that humans perform with near perfect accuracy. To better understand this puzzling pattern of successes and failures, we turn to theoretical accounts of the binding problem in cognitive science and neuroscience, a fundamental problem that arises when a shared set of representational resources must be used to represent distinct entities (e.g., to represent multiple objects in an image), necessitating the use of serial processing to avoid interference. We find that many of the puzzling failures of state-of-the-art VLMs can be explained as arising due to the binding problem, and that these failure modes are strikingly similar to the limitations exhibited by rapid, feedforward processing in the human brain.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/95266"} +{"video_file": "QC4e0vOanp_39024659.mp4", "openreview_id": "QC4e0vOanp", "slideslive_id": 39024659, "venue": "nips2024", "title": "Leveraging partial stragglers within gradient coding", "status": "Poster", "keywords": "gradient coding;stragglers;communication-efficient", "tldr": "We propose a novel gradient coding protocol that efficiently utilizes partial stragglers and is simultaneously computation-efficient and communication-efficient.", "abstract": "Within distributed learning, workers typically compute gradients on their assigned dataset chunks and send them to the parameter server (PS), which aggregates them to compute either an exact or approximate version of \u2207L (gradient of the loss function L). However, in large-scale clusters, many workers are slower than their promised speed or even failure-prone. 
A gradient coding solution introduces redundancy within the assignment of chunks to the workers and uses coding theoretic ideas to allow the PS to recover \u2207L (exactly or approximately), even in the presence of stragglers. Unfortunately, most existing gradient coding protocols are inefficient from a computation perspective as they coarsely classify workers as operational or failed; the potentially valuable work performed by slow workers (partial stragglers) is ignored. In this work, we present novel gradient coding protocols that judiciously leverage the work performed by partial stragglers. Our protocols are efficient from a computation and communication perspective and numerically stable. For an important class of chunk assignments, we present efficient algorithms for optimizing the relative ordering of chunks within the workers; this ordering affects the overall execution time. For exact gradient reconstruction, our protocol is around 2\u00d7 faster than the original class of protocols and for approximate gradient reconstruction, the mean-squared-error of our reconstructed gradient is several orders of magnitude better.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/95255"} +{"video_file": "QDG2q5MYHV_39025281.mp4", "openreview_id": "QDG2q5MYHV", "slideslive_id": 39025281, "venue": "nips2024", "title": "A Gradient Accumulation Method for Dense Retriever under Memory Constraint", "status": "Poster", "keywords": "Dense Retriever;Efficient Training;Memory Reduction;Memory Bank;Dual Encoder", "tldr": "Our proposed method (ContAccum) reduces hardware needs for large batch training in information retrieval, outperforming not only memory reduction methods but also high-resource scenario in low-resource settings with stable dual encoder training.", "abstract": "InfoNCE loss is commonly used to train dense retriever in information retrieval tasks. It is well known that a large batch is essential to stable and effective training with InfoNCE loss, which requires significant hardware resources. Due to the dependency of large batch, dense retriever has bottleneck of application and research. Recently, memory reduction methods have been broadly adopted to resolve the hardware bottleneck by decomposing forward and backward or using a memory bank. However, current methods still suffer from slow and unstable train. To address these issues, we propose Contrastive Accumulation (ContAccum), a stable and efficient memory reduction method for dense retriever trains that uses a dual memory bank structure to leverage previously generated query and passage representations. Experiments on widely used five information retrieval datasets indicate that ContAccum can surpass not only existing memory reduction methods but also high-resource scenarios. 
Moreover, theoretical analysis and experimental results confirm that ContAccum provides more stable dual-encoder training than current memory bank utilization methods.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95253"} +{"video_file": "QDprhde3jb_39026168.mp4", "openreview_id": "QDprhde3jb", "slideslive_id": 39026168, "venue": "nips2024", "title": "Learning Optimal Tax Design in Nonatomic Congestion Games", "status": "Poster", "keywords": "game theory; congestion games; mechanism design; equilibrium feedback", "tldr": "We propose the first algorithm for learning optimal tax in nonatomic congestion games with equilibrium feedback.", "abstract": "In multiplayer games, self-interested behavior among the players can harm the social welfare. Tax mechanisms are a common method to alleviate this issue and induce socially optimal behavior. In this work, we take the initial step of learning the optimal tax that can maximize social welfare with limited feedback in congestion games. We propose a new type of feedback named \\emph{equilibrium feedback}, where the tax designer can only observe the Nash equilibrium after deploying a tax plan. Existing algorithms are not applicable due to the exponentially large tax function space, nonexistence of the gradient, and nonconvexity of the objective. To tackle these challenges, we design a computationally efficient algorithm that leverages several novel components: (1) a piece-wise linear tax to approximate the optimal tax; (2) extra linear terms to guarantee a strongly convex potential function; (3) an efficient subroutine to find the exploratory tax that can provide critical information about the game. The algorithm can find an \u03f5-optimal tax with O(\u03b2F^2/\u03f5) sample complexity, where \u03b2 is the smoothness of the cost function and F is the number of facilities.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/95251"} +{"video_file": "QEUntqKvmm_39027844.mp4", "openreview_id": "QEUntqKvmm", "slideslive_id": 39027844, "venue": "nips2024", "title": "The surprising efficiency of temporal difference learning for rare event prediction", "status": "Poster", "keywords": "temporal difference learning;reinforcement learning;rare events;policy evaluation;prediction;perturbation bounds", "tldr": "Temporal difference methods require exponentially less data than Monte Carlo to estimate rare event statistics", "abstract": "We quantify the efficiency of temporal difference (TD) learning over the direct, or Monte Carlo (MC), estimator for policy evaluation in reinforcement learning, with an emphasis on estimation of quantities related to rare events. Policy evaluation is complicated in the rare event setting by the long timescale of the event and by the need for \\emph{relative accuracy} in estimates of very small values. Specifically, we focus on least-squares TD (LSTD) prediction for finite state Markov chains, and show that LSTD can achieve relative accuracy far more efficiently than MC. We prove a central limit theorem for the LSTD estimator and upper bound the \\emph{relative asymptotic variance} by simple quantities characterizing the connectivity of states relative to the transition probabilities between them. 
Using this bound, we show that, even when both the timescale of the rare event and the relative accuracy of the MC estimator are exponentially large in the number of states, LSTD maintains a fixed level of relative accuracy with a total number of observed transitions of the Markov chain that is only \\emph{polynomially} large in the number of states.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95250"}
{"video_file": "QFUsZvw9mx_39025475.mp4", "openreview_id": "QFUsZvw9mx", "slideslive_id": 39025475, "venue": "nips2024", "title": "Towards an Information Theoretic Framework of Context-Based Offline Meta-Reinforcement Learning", "status": "Spotlight", "keywords": "Offline Meta-RL;Information Theory", "tldr": "We propose a novel information theoretic framework of the context-based offline meta-RL paradigm, which unifies several mainstream methods and leads to two robust algorithm implementations.", "abstract": "As a marriage between offline RL and meta-RL, the advent of offline meta-reinforcement learning (OMRL) has shown great promise in enabling RL agents to multi-task and quickly adapt while acquiring knowledge safely. Among which, context-based OMRL (COMRL) as a popular paradigm, aims to learn a universal policy conditioned on effective task representations. In this work, by examining several key milestones in the field of COMRL, we propose to integrate these seemingly independent methodologies into a unified framework. Most importantly, we show that the pre-existing COMRL algorithms are essentially optimizing the same mutual information objective between the task variable M and its latent representation Z by implementing various approximate bounds. Such theoretical insight offers ample design freedom for novel algorithms. As demonstrations, we propose a supervised and a self-supervised implementation of I(Z;M), and empirically show that the corresponding optimization algorithms exhibit remarkable generalization across a broad spectrum of RL benchmarks, context shift scenarios, data qualities and deep learning architectures. This work lays the information theoretic foundation for COMRL methods, leading to a better understanding of task representation learning in the context of reinforcement learning. Given its generality, we envision our framework as a promising offline pre-training paradigm of foundation models for decision making.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95247"}
{"video_file": "QHRLFdhkLu_39027534.mp4", "openreview_id": "QHRLFdhkLu", "slideslive_id": 39027534, "venue": "nips2024", "title": "Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models", "status": "Poster", "keywords": "LLM;augmentation;efficient methods", "tldr": "A new paradigm for fitting LLMs to downstream tasks.", "abstract": "Large language models (LLMs) have rapidly advanced and demonstrated impressive capabilities. In-Context Learning (ICL) and Parameter-Efficient Fine-Tuning (PEFT) are currently two mainstream methods for augmenting LLMs to downstream tasks. ICL typically constructs a few-shot learning scenario, either manually or by setting up a Retrieval-Augmented Generation (RAG) system, helping models quickly grasp domain knowledge or question-answering patterns without changing model parameters. However, this approach involves trade-offs, such as slower inference speed and increased space occupancy. 
PEFT assists the model in adapting to tasks through minimal parameter modifications, but the training process still demands high hardware requirements, even with a small number of parameters involved. To address these challenges, we propose Reference Trustable Decoding (RTD), a paradigm that allows models to quickly adapt to new tasks without fine-tuning, maintaining low inference costs. RTD constructs a reference datastore from the provided training examples and optimizes the LLM's final vocabulary distribution by flexibly selecting suitable references based on the input, resulting in more trustable responses and enabling the model to adapt to downstream tasks at a low cost. Experimental evaluations on various LLMs using different benchmarks demonstrate that RTD establishes a new paradigm for augmenting models to downstream tasks. Furthermore, our method exhibits strong orthogonality with traditional methods, allowing for concurrent usage. Our code can be found at https://github.com/ShiLuohe/ReferenceTrustableDecoding.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95245"} +{"video_file": "QKp3nhPU41_39025857.mp4", "openreview_id": "QKp3nhPU41", "slideslive_id": 39025857, "venue": "nips2024", "title": "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution", "status": "Poster", "keywords": "embodied AI;dynamic network;CALVIN benchmark;multimodal large language model;robotics", "tldr": "Propose a dynamic multimodal large language model framework for saving computational and GPU memory costs in robotic execution", "abstract": "Multimodal Large Language Models (MLLMs) have demonstrated remarkable comprehension and reasoning capabilities with complex language and visual data. These advances have spurred the vision of establishing a generalist robotic MLLM proficient in understanding complex human instructions and accomplishing various embodied tasks, whose feasibility has been recently verified~\\cite{rt-2,rt-x}. However, developing MLLMs for real-world robots is challenging due to the typically limited computation and memory capacities available on robotic platforms. In contrast, the inference of MLLMs usually incorporates storing billions of parameters and performing tremendous computation, imposing significant hardware demands. In our paper, we seek to address this challenge by leveraging an intriguing observation: relatively easier situations make up the bulk of the procedure of controlling robots to fulfill diverse tasks, and they generally require far smaller models to obtain the correct robotic actions. Motivated by this observation, we propose a \\emph{Dynamic Early-Exit for Robotic MLLM} (DeeR) framework that automatically adjusts the size of the activated MLLM based on each situation at hand. The approach leverages a multi-exit architecture in MLLMs, which allows the model to cease processing once a proper size of the model has been activated for a specific situation, thus avoiding further redundant computation. Additionally, we develop novel algorithms that establish early-termination criteria for DeeR, conditioned on predefined demands such as average computational cost (\\emph{i.e.}, power consumption), as well as peak computational consumption (\\emph{i.e.}, latency) and GPU memory usage. These enhancements ensure that DeeR operates efficiently under varying resource constraints while maintaining competitive performance. 
Moreover, we design a tailored training method for integrating temporal information on top of such multi-exit architectures to predict actions reasonably. On the CALVIN robot manipulation benchmark, DeeR demonstrates significant reductions in computational costs by 5.2-6.5x and GPU memory by 2x without compromising performance. Code and checkpoints are available at https://github.com/yueyang130/DeeR-VLA.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95242"}
{"video_file": "QLRO8o4bol_39025796.mp4", "openreview_id": "QLRO8o4bol", "slideslive_id": 39025796, "venue": "nips2024", "title": "Generate Universal Adversarial Perturbations for Few-Shot Learning", "status": "Poster", "keywords": "Adversarial Attacks;Universal Adversarial Perturbations;Few-Shot Learning", "tldr": "We find the ineffectiveness in generating UAP for open-set scenarios such as Few-Shot Learning. Through in-depth analysis, we point out the two shifts and address them to finally achieve a unified attacking framework.", "abstract": "Deep networks are known to be vulnerable to adversarial examples which are deliberately designed to mislead the trained model by introducing imperceptible perturbations to input samples. Compared to traditional perturbations crafted specifically for each data point, Universal Adversarial Perturbations (UAPs) are input-agnostic and shown to be more practical in the real world. However, UAPs are typically generated in a close-set scenario that shares the same classification task during the training and testing phases. This paper demonstrates the ineffectiveness of traditional UAPs in open-set scenarios like Few-Shot Learning (FSL). Through analysis, we identify two primary challenges that hinder the attacking process: the task shift and the semantic shift. To enhance the transferability of UAPs in FSL, we propose a unifying attacking framework addressing these two shifts. The task shift is addressed by aligning proxy tasks to the downstream tasks, while the semantic shift is handled by leveraging the generalizability of pre-trained encoders. The proposed Few-Shot Attacking FrameWork, denoted as FSAFW, can effectively generate UAPs across various FSL training paradigms and different downstream tasks. Our approach not only sets a new standard for state-of-the-art works but also significantly enhances attack performance, exceeding the baseline method by over 16%.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95241"}
{"video_file": "QNieOPt4fg_39026356.mp4", "openreview_id": "QNieOPt4fg", "slideslive_id": 39026356, "venue": "nips2024", "title": "SelectIT: Selective Instruction Tuning for LLMs via Uncertainty-Aware Self-Reflection", "status": "Poster", "keywords": "Data Selection;Large Language Models", "tldr": "A novel strategy of using internal uncertainty of LLMs to select instruction tuning data.", "abstract": "Instruction tuning (IT) is crucial to tailoring large language models (LLMs) towards human-centric interactions. Recent advancements have shown that the careful selection of a small, high-quality subset of IT data can significantly enhance the performance of LLMs. Despite this, common approaches often rely on additional models or data, which increases costs and limits widespread adoption. In this work, we propose a novel approach, termed SelectIT, that capitalizes on the foundational capabilities of the LLM itself. 
Specifically, we exploit the intrinsic uncertainty present in LLMs to more effectively select high-quality IT data, without the need for extra resources. Furthermore, we introduce a curated IT dataset, the Selective Alpaca, created by applying SelectIT to the Alpaca-GPT4 dataset. Empirical results demonstrate that IT using Selective Alpaca leads to substantial model ability enhancement. The robustness of SelectIT has also been corroborated in various foundation models and domain-specific tasks. Our findings suggest that longer and more computationally intensive IT data may serve as superior sources of IT, offering valuable insights for future research in this area. Data, code, and scripts are freely available at https://github.com/Blue-Raincoat/SelectIT.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95238"}
{"video_file": "QUYLbzwtTV_39026140.mp4", "openreview_id": "QUYLbzwtTV", "slideslive_id": 39026140, "venue": "nips2024", "title": "Bias in Motion: Theoretical Insights into the Dynamics of Bias in SGD Training", "status": "Poster", "keywords": "learning dynamics;online learning;stochastic gradient descent;analytical model;fairness;spurious correlation", "tldr": "We propose a completely analytically tractable framework for studying the evolution of bias of a classifier during training", "abstract": "Machine learning systems often acquire biases by leveraging undesired features in the data, impacting accuracy variably across different sub-populations of the data. However, our current understanding of bias formation mostly focuses on the initial and final stages of learning, leaving a gap in knowledge regarding the transient dynamics. To address this gap, this paper explores the evolution of bias in a teacher-student setup that models different data sub-populations with a Gaussian-mixture model. We provide an analytical description of the stochastic gradient descent dynamics of a linear classifier in this setup, which we prove to be exact in high dimension. Notably, our analysis identifies different properties of the sub-populations that drive bias at different timescales and hence shows a shifting preference of our classifier during training. By applying our general solution to fairness and robustness, we delineate how and when heterogeneous data and spurious features can generate and amplify bias. We empirically validate our results in more complex scenarios by training deeper networks on synthetic and real data, i.e. using CIFAR10, MNIST, and CelebA datasets.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95235"}
{"video_file": "QZtJ22aOV4_39026734.mp4", "openreview_id": "QZtJ22aOV4", "slideslive_id": 39026734, "venue": "nips2024", "title": "Safe Exploitative Play with Untrusted Type Beliefs", "status": "Poster", "keywords": "Bayesian games;type beliefs;opportunity and risk tradeoff", "tldr": "We investigate the impact of incorrect type beliefs on an agent\u2019s payoff in Bayesian games, defining a trade-off between risk and opportunity, and providing upper and lower bounds on the payoff gap for both normal-form and stochastic settings.", "abstract": "The combination of the Bayesian game and learning has a rich history, with the idea of controlling a single agent in a system composed of multiple agents with unknown behaviors given a set of types, each specifying a possible behavior for the other agents. 
The idea is to plan an agent's own actions with respect to those types which it believes are most likely to maximize the payoff. However, the type beliefs are often learned from past actions and likely to be incorrect. With this perspective in mind, we consider an agent in a game with type predictions of other components, and investigate the impact of incorrect beliefs on the agent\u2019s payoff. In particular, we formally define a tradeoff between risk and opportunity by comparing the payoff obtained against the optimal payoff, which is represented by a gap caused by trusting or distrusting the learned beliefs. Our main results characterize the tradeoff by establishing upper and lower bounds on the Pareto front for both normal-form and stochastic Bayesian games, with numerical results provided.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/95226"}
{"video_file": "QbPHYPZKJI_39026478.mp4", "openreview_id": "QbPHYPZKJI", "slideslive_id": 39026478, "venue": "nips2024", "title": "Learning Distributions on Manifolds with Free-Form Flows", "status": "Poster", "keywords": "generative model;riemannian geometry;riemannian manifolds;free-form flows;normalizing flows", "tldr": "We propose Manifold Free-form Flows, the first generative model for data on arbitrary manifolds that sample in a single function evaluation at high quality.", "abstract": "We propose Manifold Free-Form Flows (M-FFF), a simple new generative model for data on manifolds. The existing approaches to learning a distribution on arbitrary manifolds are expensive at inference time, since sampling requires solving a differential equation. Our method overcomes this limitation by sampling in a single function evaluation. The key innovation is to optimize a neural network via maximum likelihood on the manifold, possible by adapting the free-form flow framework to Riemannian manifolds. M-FFF is straightforwardly adapted to any manifold with a known projection. It consistently matches or outperforms previous single-step methods specialized to specific manifolds. It is typically two orders of magnitude faster than multi-step methods based on diffusion or flow matching, achieving better likelihoods in several experiments. We provide our code at https://github.com/vislearn/FFF.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95225"}
{"video_file": "QbsPz0SnyV_39025809.mp4", "openreview_id": "QbsPz0SnyV", "slideslive_id": 39025809, "venue": "nips2024", "title": "Facilitating Multimodal Classification via Dynamically Learning Modality Gap", "status": "Poster", "keywords": "Multimodal Learning;Modality Gap;Multimodal Imbalance", "tldr": "A novel multimodal learning method integrates contrastive learning and supervised learning to address multimodal imbalance problem.", "abstract": "Multimodal learning falls into the trap of the optimization dilemma due to the modality imbalance phenomenon, leading to unsatisfactory performance in real applications. A core reason for modality imbalance is that the models of each modality converge at different rates. Many attempts naturally focus on adjusting learning procedures adaptively. Essentially, the reason why models converge at different rates is because the difficulty of fitting category labels is inconsistent for each modality during learning. From the perspective of fitting labels, we find that appropriate positive intervention label fitting can correct this difference in learning ability. 
By exploiting the ability of contrastive learning to intervene in the learning of category label fitting, we propose a novel multimodal learning approach that dynamically integrates unsupervised contrastive learning and supervised multimodal learning to address the modality imbalance problem. We find that a simple yet heuristic integration strategy can significantly alleviate the modality imbalance phenomenon. Moreover, we design a learning-based integration strategy to integrate two losses dynamically, further improving the performance. Experiments on widely used datasets demonstrate the superiority of our method compared with state-of-the-art (SOTA) multimodal learning approaches. The code is available at https://github.com/njustkmg/NeurIPS24-LFM.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95223"}
{"video_file": "QiCJomIW3l_39028611.mp4", "openreview_id": "QiCJomIW3l", "slideslive_id": 39028611, "venue": "nips2024", "title": "Toward Dynamic Non-Line-of-Sight Imaging with Mamba Enforced Temporal Consistency", "status": "Poster", "keywords": "dynamic;non-line-of-sight imaging;spatial-temporal Mamba", "tldr": "We propose a novel spatial-temporal Mamba based approach for dynamic NLOS reconstruction, as well as a new dataset for training and testing.", "abstract": "Dynamic reconstruction in confocal non-line-of-sight imaging encounters great challenges since the dense raster-scanning manner limits the practical frame rate. A few pioneer works reconstruct high-resolution volumes from the under-scanning transient measurements but overlook temporal consistency among transient frames. To fully exploit multi-frame information, we propose the first spatial-temporal Mamba (ST-Mamba) based method tailored for dynamic reconstruction of transient videos. Our method capitalizes on neighbouring transient frames to aggregate the target 3D hidden volume. Specifically, the interleaved features extracted from the input transient frames are fed to the proposed ST-Mamba blocks, which leverage the time-resolving causality in transient measurement. The cross ST-Mamba blocks are then devised to integrate the adjacent transient features. The target high-resolution transient frame is subsequently recovered by the transient spreading module. After transient fusion and recovery, a physics-based network is employed to reconstruct the hidden volume. To tackle the substantial noise inherent in transient videos, we propose a wave-based loss function to impose constraints within the phasor field. Besides, we introduce a new dataset, comprising synthetic videos for training and real-world videos for evaluation. Extensive experiments showcase the superior performance of our method on both synthetic data and real world data captured by different imaging setups. 
The code and data are available at https://github.com/Depth2World/Dynamic_NLOS.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/95216"}
{"video_file": "QpKWFLtZKi_39027643.mp4", "openreview_id": "QpKWFLtZKi", "slideslive_id": 39027643, "venue": "nips2024", "title": "Rethinking Exploration in Reinforcement Learning with Effective Metric-Based Exploration Bonus", "status": "Spotlight", "keywords": "Reinforcement Learning;exploration bonus;intrinsic reward;metric-based behavioral similarity", "tldr": "We introduce the Effective Metric-based Exploration-bonus which addresses the inherent limitations and approximation inaccuracies of current metric-based state discrepancy methods for exploration", "abstract": "Enhancing exploration in reinforcement learning (RL) through the incorporation of intrinsic rewards, specifically by leveraging state discrepancy measures within various metric spaces as exploration bonuses, has emerged as a prevalent strategy to encourage agents to visit novel states. The critical factor lies in how to quantify the difference between adjacent states as novelty for promoting effective exploration. Nonetheless, existing methods that evaluate state discrepancy in the latent space under L1 or L2 norm often depend on count-based episodic terms as scaling factors for exploration bonuses, significantly limiting their scalability. Additionally, methods that utilize the bisimulation metric for evaluating state discrepancies face a theory-practice gap due to improper approximations in metric learning, particularly struggling with hard exploration tasks. To overcome these challenges, we introduce the Effective Metric-based Exploration-bonus (EME). EME critically examines and addresses the inherent limitations and approximation inaccuracies of current metric-based state discrepancy methods for exploration, proposing a robust metric for state discrepancy evaluation backed by comprehensive theoretical analysis. Furthermore, we propose the diversity-enhanced scaling factor integrated into the exploration bonus to be dynamically adjusted by the variance of prediction from an ensemble of reward models, thereby enhancing exploration effectiveness in particularly challenging scenarios. Extensive experiments are conducted on hard exploration tasks within Atari games, Minigrid, Robosuite, and Habitat, which illustrate our method's scalability to various scenarios. The project website can be found at https://sites.google.com/view/effective-metric-exploration.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95213"}
{"video_file": "QrE9QPq4ya_39027661.mp4", "openreview_id": "QrE9QPq4ya", "slideslive_id": 39027661, "venue": "nips2024", "title": "PhyRecon: Physically Plausible Neural Scene Reconstruction", "status": "Poster", "keywords": "Multi-view Reconstruction;Neural Implicit Surface Reconstruction;Physically Plausible Reconstruction", "tldr": "We propose to leverage both differentiable rendering and differentiable physics simulation for neural scene reconstruction", "abstract": "We address the issue of physical implausibility in multi-view neural reconstruction. While implicit representations have gained popularity in multi-view 3D reconstruction, previous work struggles to yield physically plausible results, limiting their utility in domains requiring rigorous physical accuracy. 
This lack of plausibility stems from the absence of physics modeling in existing methods and their inability to recover intricate geometrical structures. In this paper, we introduce PHYRECON, the first approach to leverage both differentiable rendering and differentiable physics simulation to learn implicit surface representations. PHYRECON features a novel differentiable particle-based physical simulator built on neural implicit representations. Central to this design is an efficient transformation between SDF-based implicit representations and explicit surface points via our proposed Surface Points Marching Cubes (SP-MC), enabling differentiable learning with both rendering and physical losses. Additionally, PHYRECON models both rendering and physical uncertainty to identify and compensate for inconsistent and inaccurate monocular geometric priors. The physical uncertainty further facilitates physics-guided pixel sampling to enhance the learning of slender structures. By integrating these techniques, our model supports differentiable joint modeling of appearance, geometry, and physics. Extensive experiments demonstrate that PHYRECON significantly improves the reconstruction quality. Our results also exhibit superior physical stability in physical simulators, with at least a 40% improvement across all datasets, paving the way for future physics-based applications.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95212"} +{"video_file": "Qtf6Xz4VvE_39028738.mp4", "openreview_id": "Qtf6Xz4VvE", "slideslive_id": 39028738, "venue": "nips2024", "title": "Cascade of phase transitions in the training of energy-based models", "status": "Poster", "keywords": "Restricted Boltzmann Machine;Generative model;Phase transition;statistical physics;Energy-based model", "tldr": "We show theoretically and numerically that the training of Energy-based models undergoes several phase transitions.", "abstract": "In this paper, we investigate the feature encoding process in a prototypical energy-based generative model, the Restricted Boltzmann Machine (RBM). We start with an analytical investigation using simplified architectures and data structures, and end with numerical analysis of real trainings on real datasets. Our study tracks the evolution of the model\u2019s weight matrix through its singular value decomposition, revealing a series of thermodynamic phase transitions that shape the principal learning modes of the empirical probability distribution. We first describe this process analytically in several controlled setups that allow us to fully monitor the training dynamics until convergence. We then validate these findings by training the Bernoulli-Bernoulli RBM on real data sets. By studying the phase behavior over data sets of increasing dimension, we show that these phase transitions are genuine in the thermodynamic sense. 
Moreover, we propose a mean-field finite-size scaling hypothesis, confirming that the initial phase transition, reminiscent of the paramagnetic-to-ferromagnetic phase transition in mean-field ferromagnetism models, is governed by mean-field critical exponents.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95210"} +{"video_file": "QvqLdeSLWA_39028229.mp4", "openreview_id": "QvqLdeSLWA", "slideslive_id": 39028229, "venue": "nips2024", "title": "Suppress Content Shift: Better Diffusion Features via Off-the-Shelf Generation Techniques", "status": "Poster", "keywords": "Diffusion Models;Representation Learning;Model Property", "tldr": "We reveal a hidden yet harmful phenomenon, content shift, in diffusion features, and propose a method to utilize off-the-shelf generation techniques to suppress it.", "abstract": "Diffusion models are powerful generative models, and this capability can also be applied to discrimination. The inner activations of a pre-trained diffusion model can serve as features for discriminative tasks, namely, diffusion feature. We discover that diffusion feature has been hindered by a hidden yet universal phenomenon that we call content shift. To be specific, there are content differences between features and the input image, such as the exact shape of a certain object. We locate the cause of content shift as one inherent characteristic of diffusion models, which suggests the broad existence of this phenomenon in diffusion feature. Further empirical study also indicates that its negative impact is not negligible even when content shift is not visually perceivable. Hence, we propose to suppress content shift to enhance the overall quality of diffusion features. Specifically, content shift is related to the information drift during the process of recovering an image from the noisy input, pointing out the possibility of turning off-the-shelf generation techniques into tools for content shift suppression. We further propose a practical guideline named GATE to efficiently evaluate the potential benefit of a technique and provide an implementation of our methodology. Despite the simplicity, the proposed approach has achieved superior results on various tasks and datasets, validating its potential as a generic booster for diffusion features. Our code is available at https://github.com/Darkbblue/diffusion-content-shift.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95209"} +{"video_file": "QyR1dNDxRP_39025564.mp4", "openreview_id": "QyR1dNDxRP", "slideslive_id": 39025564, "venue": "nips2024", "title": "Provable Tempered Overfitting of Minimal Nets and Typical Nets", "status": "Poster", "keywords": "Deep Learning;Tempered Overfitting;Generalization", "tldr": "We prove that fully connected neural networks with quantized weights exhibit tempered overfitting when using both the smallest interpolating NN and a random interpolating NN.", "abstract": "We study the overfitting behavior of fully connected deep Neural Networks (NNs) with binary weights fitted to perfectly classify a noisy training set. We consider interpolation using both the smallest NN (having the minimal number of weights) and a random interpolating NN. For both learning rules, we prove overfitting is tempered. Our analysis rests on a new bound on the size of a threshold circuit consistent with a partial function. 
To the best of our knowledge, ours are the first theoretical results on benign or tempered overfitting that: (1) apply to deep NNs, and (2) do not require a very high or very low input dimension.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95208"} +{"video_file": "R0bnWrpIeN_39026013.mp4", "openreview_id": "R0bnWrpIeN", "slideslive_id": 39026013, "venue": "nips2024", "title": "CoSy: Evaluating Textual Explanations of Neurons", "status": "Poster", "keywords": "Explainable AI;Evaluation of Explainability Methods;Mechanistic Interpretability", "tldr": "We propose CoSy, an automatic evaluation framework for textual explanations of neurons.", "abstract": "A crucial aspect of understanding the complex nature of Deep Neural Networks (DNNs) is the ability to explain learned concepts within their latent representations. While methods exist to connect neurons to human-understandable textual descriptions, evaluating the quality of these explanations is challenging due to the lack of a unified quantitative approach. We introduce CoSy (Concept Synthesis), a novel, architecture-agnostic framework for evaluating textual explanations of latent neurons. Given textual explanations, our proposed framework uses a generative model conditioned on textual input to create data points representing the explanations. By comparing the neuron's response to these generated data points and control data points, we can estimate the quality of the explanation. We validate our framework through sanity checks and benchmark various neuron description methods for Computer Vision tasks, revealing significant differences in quality.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/95204"} +{"video_file": "R4IBZrSF5d_39027249.mp4", "openreview_id": "R4IBZrSF5d", "slideslive_id": 39027249, "venue": "nips2024", "title": "Virtual Scanning: Unsupervised Non-line-of-sight Imaging from Irregularly Undersampled Transients", "status": "Poster", "keywords": "Non-line-of-sight imaging;Machine Vision;Computational Imaging", "tldr": "We propose an unsupervised learning-based framework for NLOS imaging from irregularly undersampled transients for for high-quality and fast inference.", "abstract": "Non-line-of-sight (NLOS) imaging allows for seeing hidden scenes around corners through active sensing. Most previous algorithms for NLOS reconstruction require dense transients acquired through regular scans over a large relay surface, which limits their applicability in realistic scenarios with irregular relay surfaces. In this paper, we propose an unsupervised learning-based framework for NLOS imaging from irregularly undersampled transients~(IUT). Our method learns implicit priors from noisy irregularly undersampled transients without requiring paired data, which is difficult and expensive to acquire and align. To overcome the ambiguity of the measurement consistency constraint in inferring the albedo volume, we design a virtual scanning process that enables the network to learn within both range and null spaces for high-quality reconstruction. We devise a physics-guided SURE-based denoiser to enhance robustness to ubiquitous noise in low-photon imaging conditions. Extensive experiments on both simulated and real-world data validate the performance and generalization of our method. Compared with the state-of-the-art (SOTA) method, our method achieves higher fidelity, greater robustness, and remarkably faster inference times by orders of magnitude. 
The code and model are available at https://github.com/XingyuCuii/Virtual-Scanning-NLOS.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95200"} +{"video_file": "R6N9AGyz13_39025290.mp4", "openreview_id": "R6N9AGyz13", "slideslive_id": 39025290, "venue": "nips2024", "title": "Parallelizing Model-based Reinforcement Learning Over the Sequence Length", "status": "Poster", "keywords": "Model-based reinforcement learning;world model;parallelization", "tldr": "This paper introduces the PaMoRL framework, which parallelizes MBRL over the sequence length, improving training speed and sample efficiency.", "abstract": "Recently, Model-based Reinforcement Learning (MBRL) methods have demonstrated stunning sample efficiency in various RL domains. However, achieving this extraordinary sample efficiency comes with additional training costs in terms of computations, memory, and training time. To address these challenges, we propose the Parallelized Model-based Reinforcement Learning (PaMoRL) framework. PaMoRL introduces two novel techniques: the Parallel World Model (PWM) and the Parallelized Eligibility Trace Estimation (PETE) to parallelize both model learning and policy learning stages of current MBRL methods over the sequence length. Our PaMoRL framework is hardware-efficient and stable, and it can be applied to various tasks with discrete or continuous action spaces using a single set of hyperparameters. The empirical results demonstrate that the PWM and PETE within PaMoRL significantly increase training speed without sacrificing inference efficiency. In terms of sample efficiency, PaMoRL maintains an MBRL-level sample efficiency that outperforms other no-look-ahead MBRL methods and model-free RL methods, and it even exceeds the performance of planning-based MBRL methods and methods with larger networks in certain tasks.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95198"} +{"video_file": "R8znYRjxj3_39024761.mp4", "openreview_id": "R8znYRjxj3", "slideslive_id": 39024761, "venue": "nips2024", "title": "Bayes-optimal learning of an extensive-width neural network from quadratically many samples", "status": "Poster", "keywords": "Theory of neural networks;Bayes-optimal learning;non-convex optimization;statistical physics;high-dimensional statistics", "tldr": "We show how to compute exact asymptotic predictions for the minimal generalization error reachable in learning an extensive-width shallow neural network, with a number of data samples quadratic in the dimension.", "abstract": "We consider the problem of learning a target function corresponding to a single hidden layer neural network, with a quadratic activation function after the first layer, and random weights. We consider the asymptotic limit where the input dimension and the network width are proportionally large. Recent work [Cui et al., 2023] established that linear regression provides Bayes-optimal test error to learn such a function when the number of available samples is only linear in the dimension. That work stressed the open challenge of theoretically analyzing the optimal test error in the more interesting regime where the number of samples is quadratic in the dimension. In this paper, we solve this challenge for quadratic activations and derive a closed-form expression for the Bayes-optimal test error. 
We also provide an algorithm that we call GAMP-RIE, which combines approximate message passing with rotationally invariant matrix denoising, and that asymptotically achieves the optimal performance. Technically, our result is enabled by establishing a link with recent works on optimal denoising of extensive-rank matrices and on the ellipsoid fitting problem. We further show empirically that, in the absence of noise, randomly-initialized gradient descent seems to sample the space of weights, leading to zero training loss, and averaging over initialization leads to a test error equal to the Bayes-optimal one.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95194"}
{"video_file": "RA6rzOJ2zI_39028887.mp4", "openreview_id": "RA6rzOJ2zI", "slideslive_id": 39028887, "venue": "nips2024", "title": "Navigating Extremes: Dynamic Sparsity in Large Output Spaces", "status": "Poster", "keywords": "Dynamic sparse training;extreme classification;memory efficient training;large output spaces;scalable machine learning", "tldr": "Investigates Dynamic Sparse Training for large output spaces. Leveraging semi-structured sparsity, intermediate layers, and auxiliary loss, it enables end-to-end training with millions of labels on commodity hardware with near-dense performance.", "abstract": "In recent years, Dynamic Sparse Training (DST) has emerged as an alternative to post-training pruning for generating efficient models. In principle, DST allows for a much more memory efficient training process, as it maintains sparsity throughout the entire training run. However, current DST implementations fail to capitalize on this. Because sparse matrix multiplication is much less efficient than dense matrix multiplication on GPUs, most implementations simulate sparsity by masking weights. In this paper, we leverage recent advances in semi-structured sparse training to apply DST in the domain of classification with large output spaces, where memory-efficiency is paramount. With a label space of possibly millions of candidates, the classification layer alone will consume several gigabytes of memory. Switching from a dense to a fixed fan-in sparse layer updated with sparse evolutionary training (SET), however, severely hampers training convergence, especially at the largest label spaces. We find that the gradients fed back from the classifier into the text encoder make it much more difficult to learn good input representations, despite using a dense encoder. By employing an intermediate layer or adding an auxiliary training objective, we recover most of the generalisation performance of the dense model. 
Overall, we demonstrate the applicability of DST in a challenging domain, characterized by a highly skewed label distribution, that lies outside of DST's typical benchmark datasets, and enable end-to-end training with millions of labels on commodity hardware.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/95193"} +{"video_file": "RB1F2h5YEx_39027509.mp4", "openreview_id": "RB1F2h5YEx", "slideslive_id": 39027509, "venue": "nips2024", "title": "Parseval Regularization for Continual Reinforcement Learning", "status": "Poster", "keywords": "Reinforcement Learning;Continual Learning;Plasticity;Optimization", "tldr": "Maintaining orthogonal weight matrices during training improves continual reinforcement learning agents.", "abstract": "Plasticity loss, trainability loss, and primacy bias have been identified as issues arising when training deep neural networks on sequences of tasks---referring to the increased difficulty in training on new tasks. We propose to use Parseval regularization, which maintains orthogonality of weight matrices, to preserve useful optimization properties and improve training in a continual reinforcement learning setting. We show that it provides significant benefits to RL agents on a suite of gridworld, CARL and MetaWorld tasks. We conduct comprehensive ablations to identify the source of its benefits and investigate the effect of certain metrics associated to network trainability including weight matrix rank, weight norms and policy entropy.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95192"} +{"video_file": "REIK4SZMJt_39024911.mp4", "openreview_id": "REIK4SZMJt", "slideslive_id": 39024911, "venue": "nips2024", "title": "Trading Place for Space: Increasing Location Resolution Reduces Contextual Capacity in Hippocampal Codes", "status": "Oral", "keywords": "Neuroscience;Neural Coding;Memory", "tldr": "We characterize the capacity of the place system to distinguish context, as well as the tradeoff between this ability and the ability to determine location.", "abstract": "Many animals learn cognitive maps of their environment - a simultaneous representation of context, experience, and position. Place cells in the hippocampus, named for their explicit encoding of position, are believed to be a neural substrate of these maps, with place cell \"remapping\" explaining how this system can represent different contexts. Briefly, place cells alter their firing properties, or \"remap\", in response to changes in experiential or sensory cues. Substantial sensory changes, produced, e.g., by moving between environments, cause large subpopulations of place cells to change their tuning entirely. While many studies have looked at the physiological basis of remapping, we lack explicit calculations of how the contextual capacity of the place cell system changes as a function of place field firing properties. Here, we propose a geometric approach to understanding population level activity of place cells. Using known firing field statistics, we investigate how changes to place cell firing properties affect the distances between representations of different environments within firing rate space. Using this approach, we find that the number of contexts storable by the hippocampus grows exponentially with the number of place cells, and calculate this exponent for environments of different sizes. 
We identify a fundamental trade-off between high resolution encoding of position and the number of storable contexts. This trade-off is tuned by place cell width, which might explain the change in firing field scale along the dorsal-ventral axis of the hippocampus. We demonstrate that clustering of place cells near likely points of confusion, such as boundaries, increases the contextual capacity of the place system within our framework and conclude by discussing how our geometric approach could be extended to include other cell types and abstract spaces.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/95187"} +{"video_file": "RERls4Opnm_39027535.mp4", "openreview_id": "RERls4Opnm", "slideslive_id": 39027535, "venue": "nips2024", "title": "Sample-efficient Bayesian Optimisation Using Known Invariances", "status": "Poster", "keywords": "Bayesian optimisation;bandit optimisation;Gaussian processes;kernel methods;groups;invariance;transformations;sample efficiency", "tldr": "We derive sample complexity bounds for BayesOpt algorithms with kernels that incorporate known invariance of the target function, and demonstrate their application to the design of a fusion reactor.", "abstract": "Bayesian optimisation (BO) is a powerful framework for global optimisation of costly functions, using predictions from Gaussian process models (GPs). In this work, we apply BO to functions that exhibit invariance to a known group of transformations. We show that vanilla and constrained BO algorithms are inefficient when optimising such invariant objectives, and provide a method for incorporating group invariances into the kernel of the GP to produce invariance-aware algorithms that achieve significant improvements in sample efficiency. We derive a bound on the maximum information gain of these invariant kernels, and provide novel upper and lower bounds on the number of observations required for invariance-aware BO algorithms to achieve $\\epsilon$-optimality. We demonstrate our method's improved performance on a range of synthetic invariant and quasi-invariant functions. We also apply our method in the case where only some of the invariance is incorporated into the kernel, and find that these kernels achieve similar gains in sample efficiency at significantly reduced computational cost. Finally, we use invariant BO to design a current drive system for a nuclear fusion reactor, finding a high-performance solution where non-invariant methods failed.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/95186"} +{"video_file": "RH7tfqhiZY_39028589.mp4", "openreview_id": "RH7tfqhiZY", "slideslive_id": 39028589, "venue": "nips2024", "title": "YouDream: Generating Anatomically Controllable Consistent Text-to-3D Animals", "status": "Poster", "keywords": "3D Animal generation;Text-to-3D;Diffusion models", "tldr": "YouDream generates anatomically and geometrically consistent 3D animals following a 3D pose prior, thus also enabling the generation of novel unseen animals.", "abstract": "3D generation guided by text-to-image diffusion models enables the creation of visually compelling assets. However previous methods explore generation based on image or text. The boundaries of creativity are limited by what can be expressed through words or the images that can be sourced. We present YouDream, a method to generate high-quality anatomically controllable animals. 
YouDream is guided using a text-to-image diffusion model controlled by 2D views of a 3D pose prior. Our method is capable of generating novel imaginary animals that previous text-to-3D generative methods are unable to create. Additionally, our method can preserve anatomic consistency in the generated animals, an area where prior approaches often struggle. Moreover, we design a fully automated pipeline for generating commonly observed animals. To circumvent the need for human intervention to create a 3D pose, we propose a multi-agent LLM that adapts poses from a limited library of animal 3D poses to represent the desired animal. A user study conducted on the outcomes of YouDream demonstrates the preference of the animal models generated by our method over others. Visualizations and code are available at https://youdream3d.github.io/.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95184"} +{"video_file": "RL4FXrGcTw_39025205.mp4", "openreview_id": "RL4FXrGcTw", "slideslive_id": 39025205, "venue": "nips2024", "title": "Gradients of Functions of Large Matrices", "status": "Spotlight", "keywords": "Automatic differentiation;numerical methods;linear algebra;implicit differentiation;adjoint methods;differential equations;Bayesian neural networks;Gaussian processes", "tldr": "We derive previously unknown gradients of Lanczos and Arnoldi iterations and use them for PDEs, Gaussian processes, and Bayesian neural networks.", "abstract": "Tuning scientific and probabilistic machine learning models - for example, partial differential equations, Gaussian processes, or Bayesian neural networks - often relies on evaluating functions of matrices whose size grows with the data set or the number of parameters. While the state-of-the-art for evaluating these quantities is almost always based on Lanczos and Arnoldi iterations, the present work is the first to explain how to differentiate these workhorses of numerical linear algebra efficiently. To get there, we derive previously unknown adjoint systems for Lanczos and Arnoldi iterations, implement them in JAX, and show that the resulting code can compete with Diffrax when it comes to differentiating PDEs, GPyTorch for selecting Gaussian process models and beats standard factorisation methods for calibrating Bayesian neural networks. All this is achieved without any problem-specific code optimisation. Find the code at [link redacted] and install the library with pip install [redacted].", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/95179"} +{"video_file": "RMdnTnffou_39025227.mp4", "openreview_id": "RMdnTnffou", "slideslive_id": 39025227, "venue": "nips2024", "title": "Coarse-to-Fine Concept Bottleneck Models", "status": "Poster", "keywords": "Interpretability;Explainability;Concept Bottleneck Models;Sparsity;Multimodal Models;Concepts;Textual Descriptions;Bayesian;Masking", "tldr": "We propose a novel coarse-to-fine construction for concept discovery; we do not solely rely on the similarity between concepts and the whole image, but we also consider granular information residing in patch-specific regions of the image.", "abstract": "Deep learning algorithms have recently gained significant attention due to their impressive performance. However, their high complexity and un-interpretable mode of operation hinders their confident deployment in real-world safety-critical tasks. 
This work targets ante hoc interpretability, and specifically Concept Bottleneck Models (CBMs). Our goal is to design a framework that admits a highly interpretable decision making process with respect to human understandable concepts, on two levels of granularity. To this end, we propose a novel two-level concept discovery formulation leveraging: (i) recent advances in vision-language models, and (ii) an innovative formulation for coarse-to-fine concept selection via data-driven and sparsity inducing Bayesian arguments. Within this framework, concept information does not solely rely on the similarity between the whole image and general unstructured concepts; instead, we introduce the notion of concept hierarchy to uncover and exploit more granular concept information residing in patch-specific regions of the image scene. As we experimentally show, the proposed construction not only outperforms recent CBM approaches, but also yields a principled framework towards interpretability.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/95178"}
{"video_file": "RNbrIQ0se8_39026425.mp4", "openreview_id": "RNbrIQ0se8", "slideslive_id": 39026425, "venue": "nips2024", "title": "Ada-MSHyper: Adaptive Multi-Scale Hypergraph Transformer for Time Series Forecasting", "status": "Poster", "keywords": "Time series forecasting;transformer;multi-scale modeling;hypergraph neural network;hypergraph learning", "tldr": "We propose an adaptive multi-scale hypergraph transformer for time series forecasting.", "abstract": "Although transformer-based methods have achieved great success in multi-scale temporal pattern interaction modeling, two key challenges limit their further development: (1) Individual time points contain less semantic information, and leveraging attention to model pair-wise interactions may cause the information utilization bottleneck. (2) Multiple inherent temporal variations (e.g., rising, falling, and fluctuating) are entangled in temporal patterns. To this end, we propose Adaptive Multi-Scale Hypergraph Transformer (Ada-MSHyper) for time series forecasting. Specifically, an adaptive hypergraph learning module is designed to provide foundations for modeling group-wise interactions, then a multi-scale interaction module is introduced to promote more comprehensive pattern interactions at different scales. In addition, a node and hyperedge constraint mechanism is introduced to cluster nodes with similar semantic information and differentiate the temporal variations within each scale. Extensive experiments on 11 real-world datasets demonstrate that Ada-MSHyper achieves state-of-the-art performance, reducing prediction errors by an average of 4.56%, 10.38%, and 4.97% in MSE for long-range, short-range, and ultra-long-range time series forecasting, respectively. 
Code is available at https://github.com/shangzongjiang/Ada-MSHyper.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95175"}
{"video_file": "RPChapuXlC_39028442.mp4", "openreview_id": "RPChapuXlC", "slideslive_id": 39028442, "venue": "nips2024", "title": "Lisa: Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning Attack", "status": "Poster", "keywords": "Large language model;safety alignment;harmful finetuning attack", "tldr": "This paper proposes a lazy safety alignment against harmful finetuning attack in Large language models.", "abstract": "Recent studies show that Large Language Models (LLMs) with safety alignment can be jail-broken by fine-tuning on a dataset mixed with harmful data. For the first time in the literature, we show that the jail-break effect can be mitigated by separating two states in the fine-tuning stage to respectively optimize over the alignment and user datasets. Unfortunately, our subsequent study shows that this simple Bi-State Optimization (BSO) solution experiences convergence instability when the number of steps invested in its alignment state is too small, leading to downgraded alignment performance. By statistical analysis, we show that the \\textit{excess drift} towards the switching iterates of the two states could be a probable reason for the instability. To remedy this issue, we propose \\textbf{L}azy(\\textbf{i}) \\textbf{s}afety \\textbf{a}lignment (\\textbf{Lisa}), which introduces a proximal term to constrain the drift of each state. Theoretically, the benefit of the proximal term is supported by the convergence analysis, wherein we show that a sufficiently large proximal factor is necessary to guarantee Lisa's convergence. Empirically, our results on four downstream fine-tuning tasks show that Lisa with a proximal term can significantly increase alignment performance while maintaining the LLM's accuracy on the user tasks. Code is available at https://github.com/git-disl/Lisa.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/95174"}
{"video_file": "RQCmMSSzvI_39028241.mp4", "openreview_id": "RQCmMSSzvI", "slideslive_id": 39028241, "venue": "nips2024", "title": "Non-Asymptotic Uncertainty Quantification in High-Dimensional Learning", "status": "Spotlight", "keywords": "high-dimensional regression;uncertainty quantification;model-based deep learning;debiased estimator;inverse problems", "tldr": "Our paper derives and validates a data-driven approach to construct non-asymptotic confidence intervals for high-dimensional regression that overcomes the issues faced by previous asymptotic uncertainty quantification techniques.", "abstract": "Uncertainty quantification (UQ) is a crucial but challenging task in many high-dimensional learning problems to increase the confidence of a given predictor. We develop a new data-driven approach for UQ in regression that applies both to classical optimization approaches such as the LASSO as well as to neural networks. One of the most notable UQ techniques is the debiased LASSO, which modifies the LASSO to allow for the construction of asymptotic confidence intervals by decomposing the estimation error into a Gaussian and an asymptotically vanishing bias component. However, in real-world problems with finite-dimensional data, the bias term is often too significant to disregard, resulting in overly narrow confidence intervals. 
Our work rigorously addresses this issue and derives a data-driven adjustment that corrects the confidence intervals for a large class of predictors by estimating the means and variances of the bias terms from training data, exploiting high-dimensional concentration phenomena. This gives rise to non-asymptotic confidence intervals, which can help avoid overestimating certainty in critical applications such as MRI diagnosis. Importantly, our analysis extends beyond sparse regression to data-driven predictors like neural networks, enhancing the reliability of model-based deep learning. Our findings bridge the gap between established theory and the practical applicability of such methods.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95172"} +{"video_file": "RY3rDQV0tQ_39027490.mp4", "openreview_id": "RY3rDQV0tQ", "slideslive_id": 39027490, "venue": "nips2024", "title": "Optical Diffusion Models for Image Generation", "status": "Poster", "keywords": "Diffusion based model;image generation;optical computing;efficient computing", "tldr": "Light propagation can be programmed to perform denoising diffusion efficiently by transmitting through learned modulation layers.", "abstract": "Diffusion models generate new samples by progressively decreasing the noise from the initially provided random distribution. This inference procedure generally utilizes a trained neural network numerous times to obtain the final output, creating significant latency and energy consumption on digital electronic hardware such as GPUs. In this study, we demonstrate that the propagation of a light beam through a transparent medium can be programmed to implement a denoising diffusion model on image samples. This framework projects noisy image patterns through passive diffractive optical layers, which collectively only transmit the predicted noise term in the image. The optical transparent layers, which are trained with an online training approach, backpropagating the error to the analytical model of the system, are passive and kept the same across different steps of denoising. Hence this method enables high-speed image generation with minimal power consumption, benefiting from the bandwidth and energy efficiency of optical information processing.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95165"} +{"video_file": "RcPAJAnpnm_39028722.mp4", "openreview_id": "RcPAJAnpnm", "slideslive_id": 39028722, "venue": "nips2024", "title": "Incremental Learning of Retrievable Skills For Efficient Continual Task Adaptation", "status": "Poster", "keywords": "Continual Imitation Learning;Unlearning", "tldr": "Incremental Learning of Retrievable Skills For Efficient Continual Task Adaptation", "abstract": "Continual Imitation Learning (CiL) involves extracting and accumulating task knowledge from demonstrations across multiple stages and tasks to achieve a multi-task policy. With recent advancements in foundation models, there has been a growing interest in adapter-based CiL approaches, where adapters are established parameter-efficiently for tasks newly demonstrated. While these approaches isolate parameters for specific tasks and tend to mitigate catastrophic forgetting, they limit knowledge sharing among different demonstrations. 
We introduce IsCiL, an adapter-based CiL framework that addresses this limitation of knowledge sharing by incrementally learning shareable skills from different demonstrations, thus enabling sample-efficient task adaptation using the skills particularly in non-stationary CiL environments. In IsCiL, demonstrations are mapped into the state embedding space, where proper skills can be retrieved upon input states through prototype-based memory. These retrievable skills are incrementally learned on their corresponding adapters. Our CiL experiments with complex tasks in the Franka-Kitchen and Meta-World demonstrate the robust performance of IsCiL in both task adaptation and sample-efficiency. We also show a simple extension of IsCiL for task unlearning scenarios.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95159"} +{"video_file": "RcPHbofiCN_39026159.mp4", "openreview_id": "RcPHbofiCN", "slideslive_id": 39026159, "venue": "nips2024", "title": "Mixture of In-Context Experts Enhance LLMs' Long Context Awareness", "status": "Poster", "keywords": "long context awareness;large language model;attention mechanism", "tldr": "We propose MoICE, which enhances LLMs' context awareness. MoICE introduces a router in each attention head within LLMs, which dynamically directs the head's attention to contextual positions crucial for completing the head's function well.", "abstract": "Many studies have revealed that large language models (LLMs) exhibit uneven awareness of different contextual positions. Their limited context awareness can lead to overlooking critical information and subsequent task failures. While several approaches have been proposed to enhance LLMs' context awareness, achieving both effectiveness and efficiency remains challenging. In this paper, for LLMs utilizing RoPE as position embeddings, we introduce a novel method called \"Mixture of In-Context Experts\" (MoICE) to address this challenge. MoICE comprises two key components: a router integrated into each attention head within LLMs and a lightweight router-only training optimization strategy:(1) MoICE views each RoPE angle as an 'in-context' expert, demonstrated to be capable of directing the attention of a head to specific contextual positions. Consequently, each attention head flexibly processes tokens using multiple RoPE angles dynamically selected by the router to attend to the needed positions. This approach mitigates the risk of overlooking essential contextual information. (2) The router-only training strategy entails freezing LLM parameters and exclusively updating routers for only a few steps. When applied to open-source LLMs including Llama and Mistral, MoICE surpasses prior methods across multiple tasks on long context understanding and generation, all while maintaining commendable inference efficiency.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95158"} +{"video_file": "RfSvAom7sS_39025778.mp4", "openreview_id": "RfSvAom7sS", "slideslive_id": 39025778, "venue": "nips2024", "title": "Sample Efficient Bayesian Learning of Causal Graphs from Interventions", "status": "Poster", "keywords": "Causal Discovery;Bayesian Learning;Sample Efficiency", "tldr": "We propose a sample efficient causal discovery algorithm that learns the causal graph in a Bayesian approach.", "abstract": "Causal discovery is a fundamental problem with applications spanning various areas in science and engineering. 
It is well understood that solely using observational data, one can only orient the causal graph up to its Markov equivalence class, necessitating interventional data to learn the complete causal graph. Most works in the literature design causal discovery policies with perfect interventions, i.e., they have access to infinite interventional samples. This study considers a Bayesian approach for learning causal graphs with limited interventional samples, mirroring real-world scenarios where such samples are usually costly to obtain. By leveraging the recent result of Wien\u00f6bst et al. [2023] on uniform DAG sampling in polynomial time, we can efficiently enumerate all the cut configurations and their corresponding interventional distributions of a target set, and further track their posteriors. Given any number of interventional samples, our proposed algorithm randomly intervenes on a set of target vertices that cut all the edges in the graph and returns a causal graph according to the posterior of each target set. When the number of interventional samples is large enough, we show theoretically that our proposed algorithm will return the true causal graph with high probability. We compare our algorithm against various baseline methods on simulated datasets, demonstrating its superior accuracy measured by the structural Hamming distance between the learned DAG and the ground truth. Additionally, we present a case study showing how this algorithm could be modified to answer more general causal questions without learning the whole graph. As an example, we illustrate that our method can be used to estimate the causal effect of a variable that cannot be intervened.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/95157"} +{"video_file": "RfsfRn9OFd_39028279.mp4", "openreview_id": "RfsfRn9OFd", "slideslive_id": 39028279, "venue": "nips2024", "title": "EEG2Video: Towards Decoding Dynamic Visual Perception from EEG Signals", "status": "Poster", "keywords": "EEG;video generation;diffusion model;brain-computer interface", "tldr": "We build an EEG-video dataset and propose a framework to generate videos from EEG signals, taking an important step towards decoding dynamic visual perception from EEG.", "abstract": "Our visual experience in daily life are dominated by dynamic change. Decoding such dynamic information from brain activity can enhance the understanding of the brain\u2019s visual processing system. However, previous studies predominately focus on reconstructing static visual stimuli. In this paper, we explore to decode dynamic visual perception from electroencephalography (EEG), a neuroimaging technique able to record brain activity with high temporal resolution (1000 Hz) for capturing rapid changes in brains. Our contributions are threefold: Firstly, we develop a large dataset recording signals from 20 subjects while they were watching 1400 dynamic video clips of 40 concepts. This dataset fills the gap in the lack of EEG-video pairs. Secondly, we annotate each video clips to investigate the potential for decoding some specific meta information (e.g., color, dynamic, human or not) from EEG. Thirdly, we propose a novel baseline EEG2Video for video reconstruction from EEG signals that better aligns dynamic movements with high temporal resolution brain signals by Seq2Seq architecture. EEG2Video achieves a 2-way accuracy of 79.8% in semantic classification tasks and 0.256 in structural similarity index (SSIM). 
Overall, our works takes an important step towards decoding dynamic visual perception from EEG signals. Our dataset and code will be released soon.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/95156"} +{"video_file": "RlZgnEZsOH_39028289.mp4", "openreview_id": "RlZgnEZsOH", "slideslive_id": 39028289, "venue": "nips2024", "title": "HuRef: HUman-REadable Fingerprint for Large Language Models", "status": "Poster", "keywords": "Model Identification;Fingerprinting;Large Language Models (LLMs)", "tldr": "We generate a dog image as an identity fingerprint for an LLM, where the dog's appearance strongly indicates the LLM's base model.", "abstract": "Protecting the copyright of large language models (LLMs) has become crucial due to their resource-intensive training and accompanying carefully designed licenses. However, identifying the original base model of an LLM is challenging due to potential parameter alterations. In this study, we introduce HuRef, a human-readable fingerprint for LLMs that uniquely identifies the base model without interfering with training or exposing model parameters to the public. We first observe that the vector direction of LLM parameters remains stable after the model has converged during pretraining, with negligible perturbations through subsequent training steps, including continued pretraining, supervised fine-tuning, and RLHF, which makes it a sufficient condition to identify the base model. The necessity is validated by continuing to train an LLM with an extra term to drive away the model parameters' direction and the model becomes damaged. However, this direction is vulnerable to simple attacks like dimension permutation or matrix rotation, which significantly change it without affecting performance. To address this, leveraging the Transformer structure, we systematically analyze potential attacks and define three invariant terms that identify an LLM's base model. Due to the potential risk of information leakage, we cannot publish invariant terms directly. Instead, we map them to a Gaussian vector using an encoder, then convert it into a natural image using StyleGAN2, and finally publish the image. In our black-box setting, all fingerprinting steps are internally conducted by the LLMs owners. To ensure the published fingerprints are honestly generated, we introduced Zero-Knowledge Proof (ZKP). Experimental results across various LLMs demonstrate the effectiveness of our method. The code is available at https://github.com/LUMIA-Group/HuRef.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/95154"} +{"video_file": "RnQdRY1h5v_39025079.mp4", "openreview_id": "RnQdRY1h5v", "slideslive_id": 39025079, "venue": "nips2024", "title": "B'MOJO: Hybrid State Space Realizations of Foundation Models with Eidetic and Fading Memory", "status": "Poster", "keywords": "Sequence Models;Language Models;State Space Models;Hybrid Architectures", "tldr": "We introduce a novel hybrid architecture (State Space Models + Attention) that efficiently processes past information using both fading and eidetic memory.", "abstract": "We describe a family of architectures to support transductive inference by allowing memory to grow to a finite but a-priori unknown bound while making efficient use of finite resources for inference. 
Current architectures use such resources to represent data either eidetically over a finite span ('context' in Transformers), or fading over an infinite span (in State Space Models, or SSMs). Recent hybrid architectures have combined eidetic and fading memory, but with limitations that do not allow the designer or the learning process to seamlessly modulate the two, nor to extend the eidetic memory span. We leverage ideas from Stochastic Realization Theory to develop a class of models called B'MOJO to seamlessly combine eidetic and fading memory within an elementary composable module. The overall architecture can be used to implement models that can access short-term eidetic memory 'in-context,' permanent structural memory 'in-weights,' fading memory 'in-state,' and long-term eidetic memory 'in-storage' by natively incorporating retrieval from an asynchronously updated memory. We show that Transformers, existing SSMs such as Mamba, and hybrid architectures such as Jamba are special cases of B'MOJO and describe a basic implementation that can be stacked and scaled efficiently in hardware. We test B'MOJO on transductive inference tasks, such as associative recall, where it outperforms existing SSMs and Hybrid models; as a baseline, we test ordinary language modeling where B'MOJO achieves perplexity comparable to similarly-sized Transformers and SSMs up to 1.4B parameters, while being up to 10% faster to train. Finally, we test whether models trained inductively on a-priori bounded sequences (up to 8K tokens) can still perform transductive inference on sequences many-fold longer. B'MOJO's ability to modulate eidetic and fading memory results in better inference on longer sequences tested up to 32K tokens, four-fold the length of the longest sequences seen during training.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95153"} +{"video_file": "RrTjcbcHEH_39028576.mp4", "openreview_id": "RrTjcbcHEH", "slideslive_id": 39028576, "venue": "nips2024", "title": "Neural Localizer Fields for Continuous 3D Human Pose and Shape Estimation", "status": "Poster", "keywords": "3D human pose estimation;human shape estimation;computer vision;human mesh recovery", "tldr": "We train a state-of-the-art generalist human pose and shape estimation model that can localize any point of the human body.", "abstract": "With the explosive growth of available training data, single-image 3D human modeling is ahead of a transition to a data-centric paradigm. A key to successfully exploiting data scale is to design flexible models that can be supervised from various heterogeneous data sources produced by different researchers or vendors. To this end, we propose a simple yet powerful paradigm for seamlessly unifying different human pose and shape-related tasks and datasets. Our formulation is centered on the ability - both at training and test time - to query any arbitrary point of the human volume, and obtain its estimated location in 3D. We achieve this by learning a continuous neural field of body point localizer functions, each of which is a differently parameterized 3D heatmap-based convolutional point localizer (detector). For generating parametric output, we propose an efficient post-processing step for fitting SMPL-family body models to nonparametric joint and vertex predictions. 
With this approach, we can naturally exploit differently annotated data sources including mesh, 2D/3D skeleton and dense pose, without having to convert between them, and thereby train large-scale 3D human mesh and skeleton estimation models that outperform the state-of-the-art on several public benchmarks including 3DPW, EMDB, EHF, SSP-3D and AGORA by a considerable margin. We release our code and models to foster downstream research.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95149"} +{"video_file": "RsawwSBCs7_39028568.mp4", "openreview_id": "RsawwSBCs7", "slideslive_id": 39028568, "venue": "nips2024", "title": "OTTER: Effortless Label Distribution Adaptation of Zero-shot Models", "status": "Poster", "keywords": "zero-shot classification;label distribution;adaptation", "tldr": "We introduce a simple and extremely low-cost approach for label distribution adaptation in zero-shot models.", "abstract": "Popular zero-shot models suffer due to artifacts inherited from pretraining. One particularly detrimental issue, caused by unbalanced web-scale pretraining data, is mismatched label distribution. Existing approaches that seek to repair the label distribution are not suitable in zero-shot settings, as they have mismatching requirements, such as needing access to labeled downstream task data or knowledge of the true label balance in the pretraining distribution. We sidestep these challenges and introduce a simple and lightweight approach to adjust pretrained model predictions via optimal transport. Our technique requires only an estimate of the label distribution of a downstream task. Theoretically, we characterize the improvement produced by our procedure under certain mild conditions and provide bounds on the error caused by misspecification. Empirically, we validate our method in a wide array of zero-shot image and text classification tasks, improving accuracy by 4.8% and 15.9% on average, and beating baselines like prior matching---often by significant margins---in 17 out of 21 datasets.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95148"} +{"video_file": "RwBObRsIzC_39026711.mp4", "openreview_id": "RwBObRsIzC", "slideslive_id": 39026711, "venue": "nips2024", "title": "Zero-Shot Tokenizer Transfer", "status": "Poster", "keywords": "tokenization;transfer learning;natural language processing;hypernetworks;zero-shot learning", "tldr": "We introduce the new problem of Zero-Shot Tokenizer Transfer (using a language model with a tokenizer it has never been trained with), and a first high-performing baseline to solve this problem.", "abstract": "Language models (LMs) are bound to their tokenizer, which maps raw text to a sequence of vocabulary items (tokens). This restricts their flexibility: for example, LMs trained primarily on English may still perform well in other natural and programming languages, but have vastly decreased efficiency due to their English-centric tokenizer. To mitigate this, we should be able to swap the original LM tokenizer with an arbitrary one, on the fly, without degrading performance. Hence, in this work we define a new problem: Zero-Shot Tokenizer Transfer (ZeTT). The challenge at the core of ZeTT is finding embeddings for the tokens in the vocabulary of the new tokenizer. Since prior heuristics for initializing embeddings often perform at chance level in a ZeTT setting, we propose a new solution: we train a hypernetwork taking a tokenizer as input and predicting the corresponding embeddings. 
We empirically demonstrate that the hypernetwork generalizes to new tokenizers both with encoder (e.g., XLM-R) and decoder LLMs (e.g., Mistral-7B). Our method comes close to the original models' performance in cross-lingual and coding tasks while markedly reducing the length of the tokenized sequence. We also find that the remaining gap can be quickly closed by continued training on less than 1B tokens. Finally, we show that a ZeTT hypernetwork trained for a base (L)LM can also be applied to fine-tuned variants without extra training. Overall, our results make substantial strides toward detaching LMs from their tokenizer.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95143"} +{"video_file": "RwK0tgfptL_39024746.mp4", "openreview_id": "RwK0tgfptL", "slideslive_id": 39024746, "venue": "nips2024", "title": "Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks using the Marginal Likelihood", "status": "Poster", "keywords": "Bayesian deep learning;Bayesian model selection;Marginal likelihood;Laplace approximation;Sparsification;Pruning", "tldr": "We show that the Bayesian marginal likelihood can be used to make neural networks more sparsifiable.", "abstract": "Neural network sparsification is a promising avenue to save computational time and memory costs, especially in an age where many successful AI models are becoming too large to naively deploy on consumer hardware. While much work has focused on different weight pruning criteria, the overall sparsifiability of the network, i.e., its capacity to be pruned without quality loss, has often been overlooked. We present Sparsifiability via the Marginal likelihood (SpaM), a sparsification framework that highlights the effectiveness of using the Bayesian marginal likelihood in conjunction with sparsity-inducing priors for making neural networks more sparsifiable. Our approach implements an automatic Occam's razor that selects the most sparsifiable model that still explains the data well, both for structured and unstructured sparsification. In addition, we demonstrate that the pre-computed posterior precision from the Laplace approximation can be re-used to define a cheap pruning criterion, which outperforms many existing (more expensive) approaches. We demonstrate the effectiveness of our framework, especially at high sparsity levels, across a range of different neural network architectures and datasets.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95142"} +{"video_file": "RxQoIekEa2_39028807.mp4", "openreview_id": "RxQoIekEa2", "slideslive_id": 39028807, "venue": "nips2024", "title": "Statistical and Geometrical properties of the Kernel Kullback-Leibler divergence", "status": "Poster", "keywords": "kernels; optimisation; optimal transport", "tldr": "In this paper, we study the statistical and geometric properties of the Kullback-Leibler divergence with kernel covariance operators (KKL) introduced by [Bach, 2022, Information Theory with Kernel Methods].", "abstract": "In this paper, we study the statistical and geometrical properties of the Kullback-Leibler divergence with kernel covariance operators (KKL) introduced by [Bach, 2022, Information Theory with Kernel Methods]. 
Unlike the classical Kullback-Leibler (KL) divergence that involves density ratios, the KKL compares probability distributions through covariance operators (embeddings) in a reproducible kernel Hilbert space (RKHS), and compute the Kullback-Leibler quantum divergence. This novel divergence hence shares parallel but different aspects with both the standard Kullback-Leibler between probability distributions and kernel embeddings metrics such as the maximum mean discrepancy. A limitation faced with the original KKL divergence is its inability to be defined for distributions with disjoint supports. To solve this problem, we propose in this paper a regularised variant that guarantees that divergence is well defined for all distributions. We derive bounds that quantify the deviation of the regularised KKL to the original one, as well as concentration bounds. In addition, we provide a closed-form expression for the regularised KKL, specifically applicable when the distributions consist of finite sets of points, which makes it implementable. Furthermore, we derive a Wasserstein gradient descent scheme of the KKL divergence in the case of discrete distributions, and study empirically its properties to transport a set of points to a target distribution.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95140"} +{"video_file": "RxXdokK2qz_39024967.mp4", "openreview_id": "RxXdokK2qz", "slideslive_id": 39024967, "venue": "nips2024", "title": "Computing the Bias of Constant-step Stochastic Approximation with Markovian Noise", "status": "Poster", "keywords": "Stochastic approximation; Polyak-Ruppert averaging; Stein's method", "tldr": "We characterize the bias of constant step-size stochastic approximation by using generator techniques close to Stein's method.", "abstract": "We study stochastic approximation algorithms with Markovian noise and constant step-size $\alpha$. We develop a method based on infinitesimal generator comparisons to study the bias of the algorithm, which is the expected difference between $\theta_n$---the value at iteration $n$---and $\theta^*$---the unique equilibrium of the corresponding ODE. We show that, under some smoothness conditions, this bias is of order $O(\alpha)$. Furthermore, we show that the time-averaged bias is equal to $\alpha V + O(\alpha^2)$, where $V$ is a constant characterized by a Lyapunov equation, showing that $E[\bar{\theta}_n] \approx \theta^* + V\alpha + O(\alpha^2)$, where $\bar{\theta}_n$ is the Polyak-Ruppert average. We also show that $\bar{\theta}_n$ converges with high probability around $\theta^* + \alpha V$.
We illustrate how to combine this with Richardson-Romberg extrapolation to derive an iterative scheme with a bias of order $O(\alpha^2)$.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95139"} +{"video_file": "S0Ci1AsJL5_39027383.mp4", "openreview_id": "S0Ci1AsJL5", "slideslive_id": 39027383, "venue": "nips2024", "title": "Gaussian Approximation and Multiplier Bootstrap for Polyak-Ruppert Averaged Linear Stochastic Approximation with Applications to TD Learning", "status": "Poster", "keywords": "Linear Stochastic Approximation;Normal Approximation;Bootstrap Validity", "tldr": "We prove rates of normal approximation for the iterates of the linear stochastic approximation algorithm and non-asymptotic guarantees for constructing confidence intervals with multiplier bootstrap", "abstract": "In this paper, we obtain the Berry\u2013Esseen bound for multivariate normal approximation for the Polyak-Ruppert averaged iterates of the linear stochastic approximation (LSA) algorithm with decreasing step size. Moreover, we prove the non-asymptotic validity of the confidence intervals for parameter estimation with LSA based on multiplier bootstrap. This procedure updates the LSA estimate together with a set of randomly perturbed LSA estimates upon the arrival of subsequent observations. We illustrate our findings in the setting of temporal difference learning with linear function approximation.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95136"} +{"video_file": "S4YRCLbUK1_39028661.mp4", "openreview_id": "S4YRCLbUK1", "slideslive_id": 39028661, "venue": "nips2024", "title": "Who Evaluates the Evaluations? Objectively Scoring Text-to-Image Prompt Coherence Metrics with T2IScoreScore (TS2)", "status": "Spotlight", "keywords": "text-to-image;t2i;metric;meta-metric;analysis", "tldr": "We propose a meta-metric for objectively evaluating text-to-image faithfulness metrics, by assessing their ability to sort and discriminate images of increasing error counts in a \"Semantic Error Graph\", and many existing metrics are surprisingly bad.", "abstract": "With advances in the quality of text-to-image (T2I) models has come interest in benchmarking their prompt faithfulness---the semantic coherence of generated images to the prompts they were conditioned on. A variety of T2I faithfulness metrics have been proposed, leveraging advances in cross-modal embeddings and vision-language models (VLMs). However, these metrics are not rigorously compared and benchmarked, instead presented with correlation to human Likert scores over a set of easy-to-discriminate images against seemingly weak baselines.\nWe introduce T2IScoreScore, a curated set of semantic error graphs containing a prompt and a set of increasingly erroneous images. These allow us to rigorously judge whether a given prompt faithfulness metric can correctly order images with respect to their objective error count and significantly discriminate between different error nodes, using meta-metric scores derived from established statistical tests. Surprisingly, we find that the state-of-the-art VLM-based metrics (e.g., TIFA, DSG, LLMScore, VIEScore) we tested fail to significantly outperform simple (and supposedly worse) feature-based metrics like CLIPScore, particularly on a hard subset of naturally-occurring T2I model errors.
TS2 will enable the development of better T2I prompt faithfulness metrics through more rigorous comparison of their conformity to expected orderings and separations under objective criteria.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/95132"} +{"video_file": "S5coB5kqSD_39024561.mp4", "openreview_id": "S5coB5kqSD", "slideslive_id": 39024561, "venue": "nips2024", "title": "VeXKD: The Versatile Integration of Cross-Modal Fusion and Knowledge Distillation for 3D Perception", "status": "Poster", "keywords": "3D Perception;Multi-modal Fusion;Cross-modal Knowledge Distillation", "tldr": "A simple yet versatile framework that jointly considers multi-modal fusion and cross-modal knowledge distillation.", "abstract": "Recent advancements in 3D perception have led to a proliferation of network architectures, particularly those involving multi-modal fusion algorithms. While these fusion algorithms improve accuracy, their complexity often impedes real-time performance. This paper introduces VeXKD, an effective and Versatile framework that integrates Cross-Modal Fusion with Knowledge Distillation. VeXKD applies knowledge distillation exclusively to the Bird's Eye View (BEV) feature maps, enabling the transfer of cross-modal insights to single-modal students without additional inference time overhead. It avoids volatile components that can vary across various 3D perception tasks and student modalities, thus improving versatility. The framework adopts a modality-general cross-modal fusion module to bridge the modality gap between the multi-modal teachers and single-modal students. Furthermore, leveraging byproducts generated during fusion, our BEV query guided mask generation network identifies crucial spatial locations across different BEV feature maps in a data-driven manner, significantly enhancing the effectiveness of knowledge distillation. Extensive experiments on the nuScenes dataset demonstrate notable improvements, with up to 6.9%/4.2% increase in mAP and NDS for 3D detection tasks and up to 4.3% rise in mIoU for BEV map segmentation tasks, narrowing the performance gap with multi-modal models.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95130"} +{"video_file": "S7THlpvH8i_39025309.mp4", "openreview_id": "S7THlpvH8i", "slideslive_id": 39025309, "venue": "nips2024", "title": "Normalization Layer Per-Example Gradients are Sufficient to Predict Gradient Noise Scale in Transformers", "status": "Poster", "keywords": "Efficient deep learning;gradient noise scale;critical batch size;language models", "tldr": "While using a trick to compute per-example gradients efficiently we discover that normalization layers statistics predict GNS accurately.", "abstract": "Per-example gradient norms are a vital ingredient for estimating gradient noise scale (GNS) with minimal variance. Observing the tensor contractions required to compute them, we propose a method with minimal FLOPs in 3D or greater tensor regimes by simultaneously computing the norms while computing the parameter gradients. Using this method we are able to observe the GNS of different layers at higher accuracy than previously possible. We find that the total GNS of contemporary transformer models is predicted well by the GNS of only the normalization layers. As a result, focusing only on the normalization layer, we develop a custom kernel to compute the per-example gradient norms while performing the LayerNorm backward pass with zero throughput overhead. 
Tracking GNS on only those layers, we are able to guide a practical batch size schedule that reduces training time by 18% on a Chinchilla-optimal language model.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/95128"} +{"video_file": "S8wFXyT4dY_39025735.mp4", "openreview_id": "S8wFXyT4dY", "slideslive_id": 39025735, "venue": "nips2024", "title": "PPLNs: Parametric Piecewise Linear Networks for Event-Based Temporal Modeling and Beyond", "status": "Poster", "keywords": "Spiking mechanism;event camera;event vision", "tldr": "We present Parametric Piecewise Linear Networks (PPLNs) for temporal vision inference.", "abstract": "We present Parametric Piecewise Linear Networks (PPLNs) for temporal vision inference. Motivated by the neuromorphic principles that regulate biological neural behaviors, PPLNs are ideal for processing data captured by event cameras, which are built to simulate neural activities in the human retina. We discuss how to represent the membrane potential of an artificial neuron by a parametric piecewise linear function with learnable coefficients. This design echoes the idea of building deep models from learnable parametric functions recently popularized by Kolmogorov\u2013Arnold Networks (KANs). Experiments demonstrate the state-of-the-art performance of PPLNs in event-based and image-based vision applications, including steering prediction, human pose estimation, and motion deblurring.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95126"} +{"video_file": "S93hrwT8u9_39028300.mp4", "openreview_id": "S93hrwT8u9", "slideslive_id": 39028300, "venue": "nips2024", "title": "Activation Map Compression through Tensor Decomposition for Deep Learning", "status": "Poster", "keywords": "Deep Learning;Computer Vision;Compression", "tldr": "In this paper we demonstrate the relevance of applying tensor decomposition methods to compress activation maps and allow on-device learning.", "abstract": "Internet of Things and Deep Learning are synergetically and exponentially growing industrial fields with a massive call for their unification into a common framework called Edge AI. While on-device inference is a well-explored topic in recent research, backpropagation remains an open challenge due to its prohibitive computational and memory costs compared to the extreme resource constraints of embedded devices. Drawing on tensor decomposition research, we tackle the main bottleneck of backpropagation, namely the memory footprint of activation map storage. We investigate and compare the effects of activation compression using Singular Value Decomposition and its tensor variant, High-Order Singular Value Decomposition. The application of low-order decomposition results in considerable memory savings while preserving the features essential for learning, and also offers theoretical guarantees to convergence. 
Experimental results obtained on main-stream architectures and tasks demonstrate Pareto-superiority over other state-of-the-art solutions, in terms of the trade-off between generalization and memory footprint.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/95125"} +{"video_file": "S98OzJD3jn_39025366.mp4", "openreview_id": "S98OzJD3jn", "slideslive_id": 39025366, "venue": "nips2024", "title": "Diffusion Tuning: Transferring Diffusion Models via Chain of Forgetting", "status": "Poster", "keywords": "Transfer Learning;Fine-tuning;Diffusion Model;Generative Model", "tldr": "This paper introduces Diff-Tuning, a method to adapt pre-trained diffusion models, showing significant improvements over standard fine-tuning across eight tasks and five conditions with ControlNet.", "abstract": "Diffusion models have significantly advanced the field of generative modeling. However, training a diffusion model is computationally expensive, creating a pressing need to adapt off-the-shelf diffusion models for downstream generation tasks. Current fine-tuning methods focus on parameter-efficient transfer learning but overlook the fundamental transfer characteristics of diffusion models. In this paper, we investigate the transferability of diffusion models and observe a monotonous chain of forgetting trend of transferability along the reverse process. Based on this observation and novel theoretical insights, we present Diff-Tuning, a frustratingly simple transfer approach that leverages the chain of forgetting tendency. Diff-Tuning encourages the fine-tuned model to retain the pre-trained knowledge at the end of the denoising chain close to the generated data while discarding the other noise side. We conduct comprehensive experiments to evaluate Diff-Tuning, including the transfer of pre-trained Diffusion Transformer models to eight downstream generations and the adaptation of Stable Diffusion to five control conditions with ControlNet. Diff-Tuning achieves a 24.6% improvement over standard fine-tuning and enhances the convergence speed of ControlNet by 24%. Notably, parameter-efficient transfer learning techniques for diffusion models can also benefit from Diff-Tuning. Code is available at this repository: https://github.com/thuml/Diffusion-Tuning.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95124"} +{"video_file": "SAZeQV2PtT_39028215.mp4", "openreview_id": "SAZeQV2PtT", "slideslive_id": 39028215, "venue": "nips2024", "title": "General bounds on the quality of Bayesian coresets", "status": "Poster", "keywords": "Bayesian;coreset;Kullback Leibler divergence;error bounds", "tldr": "This paper presents new general upper and lower bounds on the quality of Bayesian coresets.", "abstract": "Bayesian coresets speed up posterior inference in the large-scale data regime by approximating the full-data log-likelihood function with a surrogate log-likelihood based on a small, weighted subset of the data. But while Bayesian coresets and methods for construction are applicable in a wide range of models, existing theoretical analysis of the posterior inferential error incurred by coreset approximations only apply in restrictive settings---i.e., exponential family models, or models with strong log-concavity and smoothness assumptions. This work presents general upper and lower bounds on the Kullback-Leibler (KL) divergence of coreset approximations that reflect the full range of applicability of Bayesian coresets. 
The lower bounds require only mild model assumptions typical of Bayesian asymptotic analyses, while the upper bounds require the log-likelihood functions to satisfy a generalized subexponentiality criterion that is weaker than conditions used in earlier work. The lower bounds are applied to obtain fundamental limitations on the quality of coreset approximations, and to provide a theoretical explanation for the previously-observed poor empirical performance of importance sampling-based construction methods. The upper bounds are used to analyze the performance of recent subsample-optimize methods. The flexibility of the theory is demonstrated in validation experiments involving multimodal, unidentifiable, heavy-tailed Bayesian posterior distributions.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95122"} +{"video_file": "SCEdoGghcw_39025524.mp4", "openreview_id": "SCEdoGghcw", "slideslive_id": 39025524, "venue": "nips2024", "title": "Measuring Progress in Dictionary Learning for Language Model Interpretability with Board Game Models", "status": "Poster", "keywords": "Language models;interpretability;dictionary learning", "tldr": "We measure progress in training sparse autoencoders for LM interpretability by working in the setting of LMs trained on chess and Othello.", "abstract": "What latent features are encoded in language model (LM) representations? Recent work on training sparse autoencoders (SAEs) to disentangle interpretable features in LM representations has shown significant promise. However, evaluating the quality of these SAEs is difficult because we lack a ground-truth collection of interpretable features which we expect good SAEs to identify. We thus propose to measure progress in interpretable dictionary learning by working in the setting of LMs trained on Chess and Othello transcripts. These settings carry natural collections of interpretable features\u2014for example, \u201cthere is a knight on F3\u201d\u2014which we leverage into metrics for SAE quality. To guide progress in interpretable dictionary learning, we introduce a new SAE training technique, $p$-annealing, which demonstrates improved performance on our metric.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/95121"} +{"video_file": "SEflLHIhhJ_39024871.mp4", "openreview_id": "SEflLHIhhJ", "slideslive_id": 39024871, "venue": "nips2024", "title": "Stepping on the Edge: Curvature Aware Learning Rate Tuners", "status": "Poster", "keywords": "deep learning optimization;learning rate tuner;progressive sharpening;edge of stability", "tldr": "We investigate the interplay between curvature dynamics and learning rate tuners through a new learning rate tuning method.", "abstract": "Curvature information -- particularly, the largest eigenvalue of the loss Hessian, known as the sharpness -- often forms the basis for learning rate tuners. However, recent work has shown that the curvature information undergoes complex dynamics during training, going from a phase of increasing sharpness to eventual stabilization. We analyze the closed-loop feedback effect between learning rate tuning and curvature. We find that classical learning rate tuners may yield greater one-step loss reduction, yet they ultimately underperform in the long term when compared to constant learning rates in the full batch regime.
These models break the stabilization of the sharpness, which we explain using a simplified model of the joint dynamics of the learning rate and the curvature. To further investigate these effects, we introduce a new learning rate tuning method, Curvature Dynamics Aware Tuning (CDAT), which prioritizes long term curvature stabilization over instantaneous progress on the objective. In the full batch regime, CDAT shows behavior akin to prefixed warm-up schedules on deep learning objectives, outperforming tuned constant learning rates. In the mini batch regime, we observe that stochasticity introduces confounding effects that explain the previous success of some learning rate tuners at appropriate batch sizes. Our findings highlight the critical role of understanding the joint dynamics of the learning rate and curvature, beyond greedy minimization, to diagnose failures and design effective adaptive learning rate tuners.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/95119"} +{"video_file": "SFk7AMpyhx_39024833.mp4", "openreview_id": "SFk7AMpyhx", "slideslive_id": 39024833, "venue": "nips2024", "title": "4Diffusion: Multi-view Video Diffusion Model for 4D Generation", "status": "Poster", "keywords": "Diffusion Model;4D Generation;NeRF", "tldr": "we present a novel 4D generation pipeline, 4Diffusion, to create high-quality spatial-temporally consistent 4D content from a monocular video.", "abstract": "Current 4D generation methods have achieved noteworthy efficacy with the aid of advanced diffusion generative models. However, these methods lack multi-view spatial-temporal modeling and encounter challenges in integrating diverse prior knowledge from multiple diffusion models, resulting in inconsistent temporal appearance and flickers. In this paper, we propose a novel 4D generation pipeline, namely 4Diffusion, aimed at generating spatial-temporally consistent 4D content from a monocular video. We first design a unified diffusion model tailored for multi-view video generation by incorporating a learnable motion module into a frozen 3D-aware diffusion model to capture multi-view spatial-temporal correlations. After training on a curated dataset, our diffusion model acquires reasonable temporal consistency and inherently preserves the generalizability and spatial consistency of the 3D-aware diffusion model. Subsequently, we propose 4D-aware Score Distillation Sampling loss, which is based on our multi-view video diffusion model, to optimize 4D representation parameterized by dynamic NeRF. This aims to eliminate discrepancies arising from multiple diffusion models, allowing for generating spatial-temporally consistent 4D content. Moreover, we devise an anchor loss to enhance the appearance details and facilitate the learning of dynamic NeRF.
Extensive qualitative and quantitative experiments demonstrate that our method achieves superior performance compared to previous methods.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95115"} +{"video_file": "SGcnphYOeq_39027268.mp4", "openreview_id": "SGcnphYOeq", "slideslive_id": 39027268, "venue": "nips2024", "title": "Parameter-free Clipped Gradient Descent Meets Polyak", "status": "Poster", "keywords": "Polyak stepsize;clipped gradient descent;generalized smoothness", "tldr": "We proposed parameter-free methods whose convergence rate is asymptotically independent of $L$ under $(L_0, L_1)$-smoothness", "abstract": "Gradient descent and its variants are de facto standard algorithms for training machine learning models. As gradient descent is sensitive to its hyperparameters, we need to tune the hyperparameters carefully using a grid search. However, the method is time-consuming, particularly when multiple hyperparameters exist. Therefore, recent studies have analyzed parameter-free methods that adjust the hyperparameters on the fly. However, the existing work is limited to investigations of parameter-free methods for the stepsize, and parameter-free methods for other hyperparameters have not been explored. For instance, although the gradient clipping threshold is a crucial hyperparameter in addition to the stepsize for preventing gradient explosion issues, none of the existing studies have investigated parameter-free methods for clipped gradient descent. Therefore, in this study, we investigate the parameter-free methods for clipped gradient descent. Specifically, we propose Inexact Polyak Stepsize, which converges to the optimal solution without any hyperparameters tuning, and its convergence rate is asymptotically independent of $L$ under $L$-smooth and $(L_0, L_1)$-smooth assumptions of the loss function, similar to that of clipped gradient descent with well-tuned hyperparameters. We numerically validated our convergence results using a synthetic function and demonstrated the effectiveness of our proposed methods using LSTM, Nano-GPT, and T5.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95113"} +{"video_file": "SKhR5CuiqQ_39025643.mp4", "openreview_id": "SKhR5CuiqQ", "slideslive_id": 39025643, "venue": "nips2024", "title": "Diffusing Differentiable Representations", "status": "Poster", "keywords": "Diffusion models;Differential Geometry;Implicit Neural Representations;NeRF;Siren", "tldr": "We sample differentiable representations by solving the pulled back reverse diffusion process in parameter space.", "abstract": "We introduce a novel, training-free method for sampling differentiable representations (diffreps) using pretrained diffusion models. Rather than merely mode-seeking, our method achieves sampling by \"pulling back\" the dynamics of the reverse-time process\u2014from the image space to the diffrep parameter space\u2014and updating the parameters according to this pulled-back process. We identify an implicit constraint on the samples induced by the diffrep and demonstrate that addressing this constraint significantly improves the consistency and detail of the generated objects. Our method yields diffreps with substantially improved quality and diversity for images, panoramas, and 3D NeRFs compared to existing techniques.
Our approach is a general-purpose method for sampling diffreps, expanding the scope of problems that diffusion models can tackle.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95110"} +{"video_file": "SM9IWrHz4e_39025440.mp4", "openreview_id": "SM9IWrHz4e", "slideslive_id": 39025440, "venue": "nips2024", "title": "Achieving Tractable Minimax Optimal Regret in Average Reward MDPs", "status": "Poster", "keywords": "Markov decision processes;Regret;Average reward;Minimax;Optimism;Model-based", "tldr": "We provide the first tractable algorithm that achieves minimax optimal regret in average reward MDPs.", "abstract": "In recent years, significant attention has been directed towards learning average-reward Markov Decision Processes (MDPs). However, existing algorithms either suffer from sub-optimal regret guarantees or computational inefficiencies. In this paper, we present the first tractable algorithm with minimax optimal regret of $O(\sqrt{\mathrm{sp}(h^*) S A T \log(SAT)})$ where $\mathrm{sp}(h^*)$ is the span of the optimal bias function $h^*$, $S \times A$ is the size of the state-action space and $T$ the number of learning steps. Remarkably, our algorithm does not require prior information on $\mathrm{sp}(h^*)$.\nOur algorithm relies on a novel subroutine, Projected Mitigated Extended Value Iteration (PMEVI), to compute bias-constrained optimal policies efficiently. This subroutine can be applied to various previous algorithms to obtain improved regret bounds.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95107"} +{"video_file": "SO1aRpwVLk_39024660.mp4", "openreview_id": "SO1aRpwVLk", "slideslive_id": 39024660, "venue": "nips2024", "title": "4Real: Towards Photorealistic 4D Scene Generation via Video Diffusion Models", "status": "Poster", "keywords": "4D generation; novel view synthesis; gaussian splatting; text-4D; 4D reconstruction", "tldr": "Generate 4D Gaussian splatting for photorealistic scene rendering from text input", "abstract": "Existing dynamic scene generation methods mostly rely on distilling knowledge from pre-trained 3D generative models, which are typically fine-tuned on synthetic object datasets. As a result, the generated scenes are often object-centric and lack photorealism. To address these limitations, we introduce a novel pipeline designed for photorealistic text-to-4D scene generation, discarding the dependency on multi-view generative models and instead fully utilizing video generative models trained on diverse real-world datasets. Our method begins by generating a reference video using the video generation model. We then learn the canonical 3D representation of the video using a freeze-time video, delicately generated from the reference video. To handle inconsistencies in the freeze-time video, we jointly learn a per-frame deformation to model these imperfections. We then learn the temporal deformation based on the canonical representation to capture dynamic interactions in the reference video.
The pipeline facilitates the generation of dynamic scenes with enhanced photorealism and structural integrity, viewable from multiple perspectives, thereby setting a new standard in 4D scene generation.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95105"} +{"video_file": "SSCtCq2MH2_39026792.mp4", "openreview_id": "SSCtCq2MH2", "slideslive_id": 39026792, "venue": "nips2024", "title": "GIC: Gaussian-Informed Continuum for Physical Property Identification and Simulation", "status": "Oral", "keywords": "Object Property Identification;Gaussian-inform Continuum", "tldr": "We propose a novel hybrid pipeline that takes advantage of the 3D Gaussian representation of the object to both acquire 3D shapes and empower the simulated continuum to render 2D shapes for physical property estimation.", "abstract": "This paper studies the problem of estimating physical properties (system identification) through visual observations. To facilitate geometry-aware guidance in physical property estimation, we introduce a novel hybrid framework that leverages 3D Gaussian representation to not only capture explicit shapes but also enable the simulated continuum to render object masks as 2D shape surrogates during training. We propose a new dynamic 3D Gaussian framework based on motion factorization to recover the object as 3D Gaussian point sets across different time states. Furthermore, we develop a coarse-to-fine filling strategy to generate the density fields of the object from the Gaussian reconstruction, allowing for the extraction of object continuums along with their surfaces and the integration of Gaussian attributes into these continuum. In addition to the extracted object surfaces, the Gaussian-informed continuum also enables the rendering of object masks during simulations, serving as 2D-shape guidance for physical property estimation. Extensive experimental evaluations demonstrate that our pipeline achieves state-of-the-art performance across multiple benchmarks and metrics. Additionally, we illustrate the effectiveness of the proposed method through real-world demonstrations, showcasing its practical utility. Our project page is at https://jukgei.github.io/project/gic.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95099"} +{"video_file": "STrpbhrvt3_39025815.mp4", "openreview_id": "STrpbhrvt3", "slideslive_id": 39025815, "venue": "nips2024", "title": "A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis", "status": "Spotlight", "keywords": "Robustness;Interpretability;Domain Generalization;Knowledge Prior;Medical Images", "tldr": "We tackle robustness to domain shifts in medicine by incorporating knowledge priors from documents via inherently interpretable models.", "abstract": "While deep networks have achieved broad success in analyzing natural images, when applied to medical scans, they often fail in unexcepted situations. We investigate this challenge and focus on model sensitivity to domain shifts, such as data sampled from different hospitals or data confounded by demographic variables such as sex, race, etc, in the context of chest X-rays and skin lesion images. A key finding we show empirically is that existing visual backbones lack an appropriate prior from the architecture for reliable generalization in these settings. Taking inspiration from medical training, we propose giving deep networks a prior grounded in explicit medical knowledge communicated in natural language. 
To this end, we introduce Knowledge-enhanced Bottlenecks (KnoBo), a class of concept bottleneck models that incorporates knowledge priors that constrain it to reason with clinically relevant factors found in medical textbooks or PubMed. KnoBo uses retrieval-augmented language models to design an appropriate concept space paired with an automatic training procedure for recognizing the concept. We evaluate different resources of knowledge and recognition architectures on a broad range of domain shifts across 20 datasets. In our comprehensive evaluation with two imaging modalities, KnoBo outperforms fine-tuned models on confounded datasets by 32.4% on average. Finally, evaluations reveal that PubMed is a promising resource for making medical models less sensitive to domain shift, outperforming other resources on both diversity of information and final prediction performance.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/95098"} +{"video_file": "SXbyy0a3rY_39028420.mp4", "openreview_id": "SXbyy0a3rY", "slideslive_id": 39028420, "venue": "nips2024", "title": "GrounDiT: Grounding Diffusion Transformers via Noisy Patch Transplantation", "status": "Poster", "keywords": "Text-to-Image Generation;Visual Grounding", "tldr": "We introduce a novel training-free approach for bounding box-based image generation leveraging the semantic sharing properties of Diffusion Transformers.", "abstract": "We introduce GrounDiT, a novel training-free spatial grounding technique for text-to-image generation using Diffusion Transformers (DiT). Spatial grounding with bounding boxes has gained attention for its simplicity and versatility, allowing for enhanced user control in image generation. However, prior training-free approaches often rely on updating the noisy image during the reverse diffusion process via backpropagation from custom loss functions, which frequently struggle to provide precise control over individual bounding boxes. In this work, we leverage the flexibility of the Transformer architecture, demonstrating that DiT can generate noisy patches corresponding to each bounding box, fully encoding the target object and allowing for fine-grained control over each region. Our approach builds on an intriguing property of DiT, which we refer to as semantic sharing. Due to semantic sharing, when a smaller patch is jointly denoised alongside a generatable-size image, the two become semantic clones. Each patch is denoised in its own branch of the generation process and then transplanted into the corresponding region of the original noisy image at each timestep, resulting in robust spatial grounding for each bounding box. In our experiments on the HRS and DrawBench benchmarks, we achieve state-of-the-art performance compared to previous training-free approaches. 
Project Page: https://groundit-diffusion.github.io/.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95097"} +{"video_file": "SXy1nVGyO7_39024845.mp4", "openreview_id": "SXy1nVGyO7", "slideslive_id": 39024845, "venue": "nips2024", "title": "On the Identifiability of Hybrid Deep Generative Models: Meta-Learning as a Solution", "status": "Poster", "keywords": "hybrid modeling; identifiability; meta-learning", "tldr": "We propose the learn-to-identify framework to improve the identifiability of hybrid-DGMs.", "abstract": "The interest in leveraging physics-based inductive bias in deep learning has resulted in recent development of hybrid deep generative models (hybrid-DGMs) that integrates known physics-based mathematical expressions in neural generative models. To identify these hybrid-DGMs requires inferring parameters of the physics-based component along with their neural component. The identifiability of these hybrid-DGMs, however, has not yet been theoretically probed or established. How does the existing theory of the un-identifiability of general DGMs apply to hybrid-DGMs? What may be an effective approach to consutrct a hybrid-DGM with theoretically-proven identifiability? This paper provides the first theoretical probe into the identifiability of hybrid-DGMs, and present meta-learning as a novel solution to construct identifiable hybrid-DGMs. On synthetic and real-data benchmarks, we provide strong empirical evidence for the un-identifiability of existing hybrid-DGMs using unconditional priors, and strong identifiability results of the presented meta-formulations of hybrid-DGMs.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95096"} +{"video_file": "SdLOs1FR4h_39025479.mp4", "openreview_id": "SdLOs1FR4h", "slideslive_id": 39025479, "venue": "nips2024", "title": "FUGAL: Feature-fortified Unrestricted Graph Alignment", "status": "Poster", "keywords": "graph alignment", "tldr": "We find a permutation matrix that maps one graph to another by directly operating on their adjacency matrices, surpassing state-of-the-art methods in accuracy across all benchmark datasets without encumbering efficiency.", "abstract": "The necessity to align two graphs, minimizing a structural distance metric, is prevalent in biology, chemistry, recommender systems, and social network analysis. Due to the problem\u2019s NP-hardness, prevailing graph alignment methods follow a modular and mediated approach, solving the problem by restricting to the domain of intermediary graph representations or products like embeddings, spectra, and graph signals. Restricting the problem to this intermediate space may distort the original problem and are hence predisposed to miss high-quality solutions. In this paper, we propose an unrestricted method, FUGAL, which finds a permutation matrix that maps one graph to another by directly operating on their adjacency matrices with judicious constraint relaxation. 
Extensive experimentation demonstrates that FUGAL consistently surpasses state-of-the-art graph alignment methods in accuracy across all benchmark datasets without encumbering efficiency.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95090"} +{"video_file": "Shwtw8uV8l_39024807.mp4", "openreview_id": "Shwtw8uV8l", "slideslive_id": 39024807, "venue": "nips2024", "title": "Single Image Reflection Separation via Dual-Stream Interactive Transformers", "status": "Poster", "keywords": "Single Image Reflection Separation;Vision Transformer;Image Restoration;Reflection Removal", "tldr": "This paper presents a dual-stream interactive transformer design to handle single image reflection separation problem, considering both intra-layer and inter-layer feature correlations.", "abstract": "Despite satisfactory results on ``easy'' cases of single image reflection separation, prior dual-stream methods still suffer from considerable performance degradation when facing complex ones, i.e, the transmission layer is densely entangled with the reflection having a wide distribution of spatial intensity. The main reasons come from the lack of concern on the feature correlation during interaction, and the limited receptive field. To remedy these deficiencies, this paper presents a Dual-Stream Interactive Transformer (DSIT) design. Specifically, we devise a dual-attention interactive structure that embraces a dual-stream self-attention and a layer-aware dual-stream cross-attention mechanism to simultaneously capture intra-layer and inter-layer feature correlations. Meanwhile, the introduction of attention mechanisms can also mitigate the receptive field limitation. We modulate single-stream pre-trained Transformer embeddings with dual-stream convolutional features through cross-architecture interactions to provide richer semantic priors, thereby further relieving the ill-posedness of the problem. Extensive experimental results reveal the merits of the proposed DSIT over other state-of-the-art alternatives. Our code is publicly available at https://github.com/mingcv/DSIT.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95086"} +{"video_file": "SjQ1iIqpfU_39028493.mp4", "openreview_id": "SjQ1iIqpfU", "slideslive_id": 39028493, "venue": "nips2024", "title": "CoBo: Collaborative Learning via Bilevel Optimization", "status": "Poster", "keywords": "collaborative learning;personalized federated learning;bilevel optimization;distributed learning", "tldr": "Collaborative learning through solving a bilevel optimization problem.", "abstract": "Collaborative learning is an important tool to train multiple clients more effectively by enabling communication among clients. Identifying helpful clients, however, presents challenging and often introduces significant overhead. In this paper, we model client-selection and model-training as two interconnected optimization problems, proposing a novel bilevel optimization problem for collaborative learning. We introduce CoBo, a scalable and elastic, SGD-type alternating optimization algorithm that efficiently addresses these problem with theoretical convergence guarantees. 
Empirically, CoBo achieves superior performance, surpassing popular personalization algorithms by 9.3% in accuracy on a task with high heterogeneity, involving datasets distributed among 80 clients.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/95083"} {"video_file": "Skv26JteFz_39026659.mp4", "openreview_id": "Skv26JteFz", "slideslive_id": 39026659, "venue": "nips2024", "title": "Optimal Hypothesis Selection in (Almost) Linear Time", "status": "Poster", "keywords": "hypothesis selection;distribution learning;density estimation;time efficient algorithms;computational constraints", "tldr": "We introduce the first near-linear-time algorithm for hypothesis selection that achieves optimal accuracy, significantly advancing previous methods.", "abstract": "Hypothesis selection, also known as density estimation, is a fundamental problem in statistics and learning theory. Suppose we are given a sample set from an unknown distribution $P$ and a finite class of candidate distributions (called hypotheses) $H := \\{H_1, H_2, \\ldots, H_n\\}$. The aim is to design an algorithm that selects a distribution $\\hat{H}$ in $H$ that best fits the data. The algorithm's accuracy is measured based on the distance between $\\hat{H}$ and $P$ compared to the distance of the closest distribution in $H$ to $P$ (denoted by $OPT$). Concretely, we aim for $|\\hat{H} - P|_{TV}$ to be at most $\\alpha \\cdot OPT + \\epsilon$ for some small $\\epsilon$ and $\\alpha$. While it is possible to decrease the value of $\\epsilon$ as the number of samples increases, $\\alpha$ is an inherent characteristic of the algorithm. In fact, one cannot hope to achieve $\\alpha < 3$ even when there are only two candidate hypotheses, unless the number of samples is proportional to the domain size of $P$ [Bousquet, Kane, Moran '19]. Finding the best $\\alpha$ has been one of the main focuses of studies of the problem since early work of [Devroye, Lugosi '01]. Prior to our work, no algorithm was known that achieves $\\alpha = 3$ in near-linear time. We provide the first algorithm that operates in almost linear time ($\\tilde{O}(n/\\epsilon^3)$ time) and achieves $\\alpha = 3$. This result improves upon a long list of results in hypothesis selection. Previously known algorithms either had worse time complexity, a larger factor $\\alpha$, or extra assumptions about the problem setting. In addition to this algorithm, we provide another (almost) linear-time algorithm with better dependency on the additive accuracy parameter $\\epsilon$, albeit with a slightly worse accuracy parameter, $\\alpha = 4$.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95081"} {"video_file": "SlDx451MjC_39027148.mp4", "openreview_id": "SlDx451MjC", "slideslive_id": 39027148, "venue": "nips2024", "title": "Dual Encoder GAN Inversion for High-Fidelity 3D Head Reconstruction from Single Images", "status": "Poster", "keywords": "3D Reconstruction;GAN", "tldr": "We present a method for high-quality 360-degree head synthesis from single images.", "abstract": "3D GAN inversion aims to project a single image into the latent space of a 3D Generative Adversarial Network (GAN), thereby achieving 3D geometry reconstruction. While there exist encoders that achieve good results in 3D GAN inversion, they are predominantly built on EG3D, which specializes in synthesizing near-frontal views and is limiting in synthesizing comprehensive 3D scenes from diverse viewpoints. 
In contrast to existing approaches, we propose a novel framework built on PanoHead, which excels in synthesizing images from a 360-degree perspective. To achieve realistic 3D modeling of the input image, we introduce a dual encoder system tailored for high-fidelity reconstruction and realistic generation from different viewpoints. Accompanying this, we propose a stitching framework on the triplane domain to get the best predictions from both. To achieve seamless stitching, both encoders must output consistent results despite being specialized for different tasks. For this reason, we carefully train these encoders using specialized losses, including an adversarial loss based on our novel occlusion-aware triplane discriminator. Experiments reveal that our approach surpasses the existing encoder training methods qualitatively and quantitatively.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95080"} +{"video_file": "SoYCqMiVIh_39026928.mp4", "openreview_id": "SoYCqMiVIh", "slideslive_id": 39026928, "venue": "nips2024", "title": "Unscrambling disease progression at scale: fast inference of event permutations with optimal transport", "status": "Poster", "keywords": "optimal transport;variational inference;latent variable model;disease progression", "tldr": "We introduce a new method that substantially speeds up inference of discrete disease progression models, allowing them to scale to large feature sets.", "abstract": "Disease progression models infer group-level temporal trajectories of change in patients' features as a chronic degenerative condition plays out. They provide unique insight into disease biology and staging systems with individual-level clinical utility. Discrete models consider disease progression as a latent permutation of events, where each event corresponds to a feature becoming measurably abnormal. However, permutation inference using traditional maximum likelihood approaches becomes prohibitive due to combinatoric explosion, severely limiting model dimensionality and utility. Here we leverage ideas from optimal transport to model disease progression as a latent permutation matrix of events belonging to the Birkhoff polytope, facilitating fast inference via optimisation of the variational lower bound. This enables a factor of 1000 times faster inference than the current state of the art and, correspondingly, supports models with several orders of magnitude more features than the current state of the art can consider. Experiments demonstrate the increase in speed, accuracy and robustness to noise in simulation. Further experiments with real-world imaging data from two separate datasets, one from Alzheimer's disease patients, the other age-related macular degeneration, showcase, for the first time, pixel-level disease progression events in the brain and eye, respectively. 
Our method is low compute, interpretable and applicable to any progressive condition and data modality, giving it broad potential clinical utility.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95076"} +{"video_file": "SrFbgIjb53_39028036.mp4", "openreview_id": "SrFbgIjb53", "slideslive_id": 39028036, "venue": "nips2024", "title": "MoGU: A Framework for Enhancing Safety of LLMs While Preserving Their Usability", "status": "Poster", "keywords": "Enhancing Safety;Preseving Usability;Large Language Models", "tldr": "Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability.", "abstract": "Large Language Models (LLMs) are increasingly deployed in various applications. As their usage grows, concerns regarding their safety are rising, especially in maintaining harmless responses when faced with malicious instructions. Many defense strategies have been developed to enhance the safety of LLMs. However, our research finds that existing defense strategies lead LLMs to predominantly adopt a rejection-oriented stance, thereby diminishing the usability of their responses to benign instructions. To solve this problem, we introduce the MoGU framework, designed to enhance LLMs' safety while preserving their usability. Our MoGU framework transforms the base LLM into two variants: the usable LLM and the safe LLM, and further employs dynamic routing to balance their contribution. When encountering malicious instructions, the router will assign a higher weight to the safe LLM to ensure that responses are harmless. Conversely, for benign instructions, the router prioritizes the usable LLM, facilitating usable and helpful responses. On various open-sourced LLMs, we compare multiple defense strategies to verify the superiority of our MoGU framework. Besides, our analysis provides key insights into the effectiveness of MoGU and verifies that our designed routing mechanism can effectively balance the contribution of each variant by assigning weights. Our work released the safer Llama2, Vicuna, Falcon, Dolphin, and Baichuan2.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95073"} +{"video_file": "Ss7l98DVvD_39027953.mp4", "openreview_id": "Ss7l98DVvD", "slideslive_id": 39027953, "venue": "nips2024", "title": "Wild-GS: Real-Time Novel View Synthesis from Unconstrained Photo Collections", "status": "Poster", "keywords": "3D Gaussian Splatting;Novel View Synthesis;3D Reconstruction", "tldr": "This paper presents an adaptation of 3DGS to resolve the unconstrained photo collections.", "abstract": "Photographs captured in unstructured tourist environments frequently exhibit variable appearances and transient occlusions, challenging accurate scene reconstruction and inducing artifacts in novel view synthesis. Although prior approaches have integrated the Neural Radiance Field (NeRF) with additional learnable modules to handle the dynamic appearances and eliminate transient objects, their extensive training demands and slow rendering speeds limit practical deployments. Recently, 3D Gaussian Splatting (3DGS) has emerged as a promising alternative to NeRF, offering superior training and inference efficiency along with better rendering quality. This paper presents \\textit{Wild-GS}, an innovative adaptation of 3DGS optimized for unconstrained photo collections while preserving its efficiency benefits. 
\\textit{Wild-GS} determines the appearance of each 3D Gaussian by their inherent material attributes, global illumination and camera properties per image, and point-level local variance of reflectance. Unlike previous methods that model reference features in image space, \\textit{Wild-GS} explicitly aligns the pixel appearance features to the corresponding local Gaussians by sampling the triplane extracted from the reference image. This novel design effectively transfers the high-frequency detailed appearance of the reference view to 3D space and significantly expedites the training process. Furthermore, 2D visibility maps and depth regularization are leveraged to mitigate the transient effects and constrain the geometry, respectively. Extensive experiments demonstrate that \\textit{Wild-GS} achieves state-of-the-art rendering performance and the highest efficiency in both training and inference among all the existing techniques. The code can be accessed via: https://github.com/XuJiacong/Wild-GS", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95071"} +{"video_file": "StapcUWm9q_39026706.mp4", "openreview_id": "StapcUWm9q", "slideslive_id": 39026706, "venue": "nips2024", "title": "Diffusion Model with Cross Attention as an Inductive Bias for Disentanglement", "status": "Spotlight", "keywords": "diffusion models;disentangled representation", "tldr": "In this paper, we introduce a new perspective and framework, demonstrating that diffusion models with cross attention can serve as a powerful inductive bias to facilitate the learning of disentangled representations.", "abstract": "Disentangled representation learning strives to extract the intrinsic factors within the observed data. Factoring these representations in an unsupervised manner is notably challenging and usually requires tailored loss functions or specific structural designs. In this paper, we introduce a new perspective and framework, demonstrating that diffusion models with cross-attention itself can serve as a powerful inductive bias to facilitate the learning of disentangled representations. We propose to encode an image into a set of concept tokens and treat them as the condition of the latent diffusion model for image reconstruction, where cross attention over the concept tokens is used to bridge the encoder and the U-Net of the diffusion model. We analyze that the diffusion process inherently possesses the time-varying information bottlenecks. Such information bottlenecks and cross attention act as strong inductive biases for promoting disentanglement. Without any regularization term in the loss function, this framework achieves superior disentanglement performance on the benchmark datasets, surpassing all previous methods with intricate designs. We have conducted comprehensive ablation studies and visualization analyses, shedding a light on the functioning of this model. 
We anticipate that our findings will inspire more investigation on exploring diffusion model for disentangled representation learning towards more sophisticated data analysis and understanding.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95070"} +{"video_file": "SuLxkxCENa_39025963.mp4", "openreview_id": "SuLxkxCENa", "slideslive_id": 39025963, "venue": "nips2024", "title": "Deep Equilibrium Algorithmic Reasoning", "status": "Poster", "keywords": "neural algorithmic reasoning;deep equilibrium models;DEQ;Graph Neural Networks", "tldr": "We explore approximating neural algorithmic execution by solving an equilibrium equation, ground our approach theoretically and discuss implementation caveats.", "abstract": "Neural Algorithmic Reasoning (NAR) research has demonstrated that graph neural networks (GNNs) could learn to execute classical algorithms. However, most previous approaches have always used a recurrent architecture, where each iteration of the GNN matches an iteration of the algorithm. In this paper we study neurally solving algorithms from a different perspective: since the algorithm\u2019s solution is often an equilibrium, it is possible to find the solution directly by solving an equilibrium equation. Our approach requires no information on the ground-truth number of steps of the algorithm, both during train and test time. Furthermore, the proposed method improves the performance of GNNs on executing algorithms and is a step towards speeding up existing NAR models. Our empirical evidence, leveraging algorithms from the CLRS-30 benchmark, validates that one can train a network to solve algorithmic problems by directly finding the equilibrium. We discuss the practical implementation of such models and propose regularisations to improve the performance of these equilibrium reasoners.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/95069"} +{"video_file": "SvmJJJS0q1_39027843.mp4", "openreview_id": "SvmJJJS0q1", "slideslive_id": 39027843, "venue": "nips2024", "title": "Detecting and Measuring Confounding Using Causal Mechanism Shifts", "status": "Poster", "keywords": "Causality;Confounding;Mechanisms;Measure", "tldr": "We propose methods to detect and measure confounding using data from causal mechanism shifts.", "abstract": "Detecting and measuring confounding effects from data is a key challenge in causal inference. Existing methods frequently assume causal sufficiency, disregarding the presence of unobserved confounding variables. Causal sufficiency is both unrealistic and empirically untestable. Additionally, existing methods make strong parametric assumptions about the underlying causal generative process to guarantee the identifiability of confounding variables. Relaxing the causal sufficiency and parametric assumptions and leveraging recent advancements in causal discovery and confounding analysis with non-i.i.d. data, we propose a comprehensive approach for detecting and measuring confounding. We consider various definitions of confounding and introduce tailored methodologies to achieve three objectives: (i) detecting and measuring confounding among a set of variables, (ii) separating observed and unobserved confounding effects, and (iii) understanding the relative strengths of confounding bias between different sets of variables. We present useful properties of a confounding measure and present measures that satisfy those properties. 
Our empirical results support the usefulness of the proposed measures.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/95068"} +{"video_file": "Swh8LxuycA_39026043.mp4", "openreview_id": "Swh8LxuycA", "slideslive_id": 39026043, "venue": "nips2024", "title": "Learning Goal-Conditioned Representations for Language Reward Models", "status": "Poster", "keywords": "Goal-Conditioned Q-functions;Contrastive Learning;Reinforcement Learning from Human Feedback;Representation Learning;Reward Model", "tldr": "We propose to enhance the learned representations of LLM reward models via a goal-conditioned contrastive learning objective, which we show improves reward model performance and downstream LLM alignment.", "abstract": "Techniques that learn improved representations via offline data or self-supervised objectives have shown impressive results in traditional reinforcement learning. Nevertheless, it is unclear how improved representation learning can benefit reinforcement learning from human feedback on language models. In this work, we propose training reward models (RMs) in a contrastive, $\\textit{goal-conditioned}$ fashion by increasing the representation similarity of future states along sampled preferred trajectories and decreasing the similarity along randomly sampled dispreferred trajectories. This objective significantly improves reward model performance by up to 0.09 AUROC across challenging benchmarks, such as MATH and GSM8k. These findings extend to general alignment as well -- on the Helpful-Harmless dataset, we observe 2.3% increase in accuracy. Beyond improving reward model performance, we show this way of training RM representations enables improved steerability because it allows us to evaluate the likelihood of an action achieving a particular goal-state (e.g. whether a solution is correct or helpful). Leveraging this insight, we find that we can filter up to 55% of generated tokens during majority voting by discarding trajectories likely to end up in an \"incorrect\" state, which leads to significant cost savings. We additionally find that these representations can perform fine-grained control by conditioning on desired future goal-states. For example, we show that steering a Llama 3 model towards helpful generations with our approach improves helpfulness by $9.6$% over a supervised-fine-tuning trained baseline. Similarly, steering the model towards complex generations improves complexity by $21.6$% over the baseline. 
Overall, we find that training RMs in this contrastive, goal-conditioned fashion significantly improves performance and enables model steerability.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95067"} +{"video_file": "SxRblm9aMs_39024497.mp4", "openreview_id": "SxRblm9aMs", "slideslive_id": 39024497, "venue": "nips2024", "title": "Are Graph Neural Networks Optimal Approximation Algorithms?", "status": "Spotlight", "keywords": "Combinatorial Optimization;Graph Neural Networks;Unsupervised Learning", "tldr": "We show that graph neural networks can efficiently implement a message passing algorithm that is optimal (under plausible assumptions) for a broad class of combinatorial problems and demonstrate that this leads to empirically powerful architectures.", "abstract": "In this work we design graph neural network architectures that capture optimal approximation algorithms for a large class of combinatorial optimization problems, using powerful algorithmic tools from semidefinite programming (SDP). Concretely, we prove that polynomial-sized message-passing algorithms can represent the most powerful polynomial time algorithms for Max Constraint Satisfaction Problems assuming the Unique Games Conjecture. We leverage this result to construct efficient graph neural network architectures, OptGNN, that obtain high quality approximate solutions on landmark combinatorial optimization problems such as Max-Cut, Min-Vertex-Cover, and Max-3-SAT. Our approach achieves strong empirical results across a wide range of real-world and synthetic datasets against solvers and neural baselines. Finally, we take advantage of OptGNN\u2019s ability to capture convex relaxations to design an algorithm for producing bounds on the optimal solution from the learned embeddings of OptGNN.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/95066"} +{"video_file": "SyMhGilvCv_39026173.mp4", "openreview_id": "SyMhGilvCv", "slideslive_id": 39026173, "venue": "nips2024", "title": "Prompt Tuning Strikes Back: Customizing Foundation Models with Low-Rank Prompt Adaptation", "status": "Poster", "keywords": "Parameter-Efficient Fine-Tuning;Prompt tuning", "tldr": "Low-Rank Prompt Adaptation, soft-prompting approach for effective and efficient customization of large foundation models.", "abstract": "Parameter-Efficient Fine-Tuning (PEFT) has become the standard for customising Foundation Models (FMs) to user-specific downstream tasks. However, typical PEFT methods require storing multiple task-specific adapters, creating scalability issues as these adapters must be housed and run at the FM server. Traditional prompt tuning offers a potential solution by customising them through task-specific input prefixes, but it under-performs compared to other PEFT methods like LoRA. To address this gap, we propose Low-Rank Prompt Adaptation (LoPA), a prompt-tuning-based approach that performs on par with state-of-the-art PEFT methods and full fine-tuning while being more parameter-efficient and not requiring a server-based adapter. LoPA generates soft prompts by balancing between sharing task-specific information across instances and customization for each instance. It uses a low-rank decomposition of the soft-prompt component encoded for each instance to achieve parameter efficiency. 
We provide a comprehensive evaluation on multiple natural language understanding and code generation and understanding tasks across a wide range of foundation models with varying sizes.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/95065"} +{"video_file": "T0e4Nw09XX_39026089.mp4", "openreview_id": "T0e4Nw09XX", "slideslive_id": 39026089, "venue": "nips2024", "title": "Universal Rates for Active Learning", "status": "Poster", "keywords": "Universal Rates;Active Learning", "tldr": "We provide a complete characterization of the universal rates landscape of active learning.", "abstract": "In this work we study the problem of actively learning binary classifiers from a given concept class, i.e., learning by utilizing unlabeled data and submitting targeted queries about their labels to a domain expert. We evaluate the quality of our solutions by considering the learning curves they induce, i.e., the rate of decrease of the misclassification probability as the number of label queries increases. The majority of the literature on active learning has focused on obtaining uniform guarantees on the error rate which are only able to explain the upper envelope of the learning curves over families of different data-generating distributions. We diverge from this line of work and we focus on the distribution-dependent framework of universal learning whose goal is to obtain guarantees that hold for any fixed distribution, but do not apply uniformly over all the distributions. We provide a complete characterization of the optimal learning rates that are achievable by algorithms that have to specify the number of unlabeled examples they use ahead of their execution. Moreover, we identify combinatorial complexity measures that give rise to each case of our tetrachotomic characterization. This resolves an open question that was posed by Balcan et al. (2010). As a byproduct of our main result, we develop an active learning algorithm for partial concept classes that achieves exponential learning rates in the uniform setting.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95062"} +{"video_file": "T0glCBw28a_39025285.mp4", "openreview_id": "T0glCBw28a", "slideslive_id": 39025285, "venue": "nips2024", "title": "The ALCHEmist: Automated Labeling 500x CHEaper than LLM Data Annotators", "status": "Spotlight", "keywords": "Automated Data Labeling; Code Generation; Weak Supervision", "tldr": "We propose an alternative program distillation approach to replace expensive annotation processes that require repetitive prompting for labels.", "abstract": "Large pretrained models can be used as annotators, helping replace or augment crowdworkers and enabling distilling generalist models into smaller specialist models. Unfortunately, this comes at a cost: employing top-of-the-line models often requires paying thousands of dollars for API calls, while the resulting datasets are static and challenging to audit. To address these challenges, we propose a simple alternative: rather than directly querying labels from pretrained models, we task models to generate programs that can produce labels. These programs can be stored and applied locally, re-used and extended, and cost orders of magnitude less. 
Our system, Alchemist, obtains comparable to or better performance than large language model-based annotation in a range of tasks for a fraction of the cost: on average, improvements amount to a 12.9% enhancement while the total labeling costs across all datasets are reduced by a factor of approximately 500\u00d7.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95061"} {"video_file": "T56j6aV8Oc_39027134.mp4", "openreview_id": "T56j6aV8Oc", "slideslive_id": 39027134, "venue": "nips2024", "title": "Heavy-Tailed Class Imbalance and Why Adam Outperforms Gradient Descent on Language Models", "status": "Spotlight", "keywords": "language model;optimization;transformers;heavy-tailed;class imbalance;gradient descent;Adam;adaptive methods;sign descent", "tldr": "We provide evidence that gradient descent struggles to fit low-frequency classes and that this leads to poor performance in the presence of heavy-tailed class imbalance as found in language modelling tasks while Adam does not suffer from this problem", "abstract": "Adam has been shown to outperform gradient descent on large language models by a larger margin than on other tasks, but it is unclear why. We show that a key factor in this performance gap is the heavy-tailed class imbalance found in language tasks. When trained with gradient descent, the loss of infrequent words decreases more slowly than the loss of frequent ones. This leads to a slow decrease on the average loss as most samples come from infrequent words. On the other hand, Adam and sign-based methods are less sensitive to this problem. To establish that this behavior is caused by class imbalance, we show empirically that it can be reproduced across architectures and data types, on language transformers, vision CNNs, and linear models. On a linear model with cross-entropy loss, we show that class imbalance leads to imbalanced, correlated gradients and Hessians that have been hypothesized to benefit Adam. We also prove that, in continuous time, gradient descent converges slowly on low-frequency classes while sign descent does not.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/95059"} {"video_file": "T7dS1Ghwwu_39026035.mp4", "openreview_id": "T7dS1Ghwwu", "slideslive_id": 39026035, "venue": "nips2024", "title": "Conformal Prediction for Class-wise Coverage via Augmented Label Rank Calibration", "status": "Poster", "keywords": "Uncertainty Quantification;Conformal Prediction;Imbalanced Data;Class-conditional Coverage;Deep Models", "tldr": "Provable conformal prediction method to produce small prediction sets for class-conditional coverage for imbalanced classification problems using the augmented rank calibration.", "abstract": "Conformal prediction (CP) is an emerging uncertainty quantification framework that allows us to construct a prediction set to cover the true label with a pre-specified marginal or conditional probability. Although the valid coverage guarantee has been extensively studied for classification problems, CP often produces large prediction sets which may not be practically useful. This issue is exacerbated for the setting of class-conditional coverage on imbalanced classification tasks with many and/or imbalanced classes. This paper proposes the Rank Calibrated Class-conditional CP (RC3P) algorithm to reduce the prediction set sizes to achieve class-conditional coverage, where the valid coverage holds for each class. 
In contrast to the standard class-conditional CP (CCP) method that uniformly thresholds the class-wise conformity score for each class, the augmented label rank calibration step allows RC3P to selectively iterate this class-wise thresholding subroutine only for a subset of classes whose class-wise top-$k$ error is small. We prove that agnostic to the classifier and data distribution, RC3P achieves class-wise coverage. We also show that RC3P reduces the size of prediction sets compared to the CCP method. Comprehensive experiments on multiple real-world datasets demonstrate that RC3P achieves class-wise coverage and a 26.25% reduction in prediction set sizes on average.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/95055"} {"video_file": "T9GbbWbNQG_39025421.mp4", "openreview_id": "T9GbbWbNQG", "slideslive_id": 39025421, "venue": "nips2024", "title": "Layer-Adaptive State Pruning for Deep State Space Models", "status": "Poster", "keywords": "Sequence model;model order reduction;model compression;network pruning;system norm;long range arena", "tldr": "We propose a structured layer-adaptive pruning method for deep state space models.", "abstract": "Due to the lack of state dimension optimization methods, deep state space models (SSMs) have sacrificed model capacity, training search space, or stability to alleviate computational costs caused by high state dimensions. In this work, we provide a structured pruning method for SSMs, Layer-Adaptive STate pruning (LAST), which reduces the state dimension of each layer in minimizing model-level output energy loss by extending modal truncation for a single system. LAST scores are evaluated using the $H_{\\infty}$ norms of subsystems and layer-wise energy normalization. The scores serve as global pruning criteria, enabling cross-layer comparison of states and layer-adaptive pruning. Across various sequence benchmarks, LAST optimizes previous SSMs, revealing the redundancy and compressibility of their state spaces. Notably, we demonstrate that, on average, pruning 33% of states still maintains performance with 0.52% accuracy loss in multi-input multi-output SSMs without retraining. Code is available at https://github.com/msgwak/LAST.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95053"} {"video_file": "TA5zPfH8iI_39026470.mp4", "openreview_id": "TA5zPfH8iI", "slideslive_id": 39026470, "venue": "nips2024", "title": "B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable", "status": "Poster", "keywords": "B-cos networks;Explainability;Inherent Interpretability;CLIP", "tldr": "We propose B-cosification, a technique for transforming existing pre-trained models to become inherently interpretable at a fraction of the cost to train them from scratch.", "abstract": "B-cos Networks have been shown to be effective for obtaining highly human interpretable explanations of model decisions by architecturally enforcing stronger alignment between inputs and weight. B-cos variants of convolutional networks (CNNs) and vision transformers (ViTs), which primarily replace linear layers with B-cos transformations, perform competitively to their respective standard variants while also yielding explanations that are faithful by design. However, it has so far been necessary to train these models from scratch, which is increasingly infeasible in the era of large, pre-trained foundation models. 
In this work, inspired by the architectural similarities in standard DNNs and B-cos networks, we propose \u2018B-cosification\u2019, a novel approach to transform existing pre-trained models to become inherently interpretable. We perform a thorough study of design choices to perform this conversion, both for convolutional neural networks and vision transformers. We find that B-cosification can yield models that are on par with B-cos models trained from scratch in terms of interpretability, while often outperforming them in terms of classification performance at a fraction of the training cost. Subsequently, we apply B-cosification to a pretrained CLIP model, and show that, even with limited data and compute cost, we obtain a B-cosified version that is highly interpretable and competitive on zero shot performance across a variety of datasets. We release our code and pre-trained model weights at https://github.com/shrebox/B-cosification.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/95051"} +{"video_file": "TALJtWX7w4_39028299.mp4", "openreview_id": "TALJtWX7w4", "slideslive_id": 39028299, "venue": "nips2024", "title": "LaSCal: Label-Shift Calibration without target labels", "status": "Poster", "keywords": "uncertainty calibration;calibration error estimation;label-shift;domain adaptation", "tldr": "We propose a novel strategy for unsupervised calibration under label shift, utilizing importance re-weighting of the labeled source distribution.", "abstract": "When machine learning systems face dataset shift, model calibration plays a pivotal role in ensuring their reliability. Calibration error (CE) provides insights into the alignment between the predicted confidence scores and the classifier accuracy. While prior works have delved into the implications of dataset shift on calibration, existing CE estimators either (i) assume access to labeled data from the target domain, often unavailable in practice, or (ii) are derived under a covariate shift assumption. In this work we propose a novel, label-free, consistent CE estimator under label shift. Label shift is characterized by changes in the marginal label distribution p(Y), with a constant conditional p(X|Y) distribution between the source and target. We introduce a novel calibration method, called LaSCal, which uses the estimator in conjunction with a post-hoc calibration strategy, to perform unsupervised calibration on the target distribution. Our thorough empirical analysis demonstrates the effectiveness and reliability of the proposed approach across different modalities, model architectures and label shift intensities.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/95049"} +{"video_file": "TFAG9UznPv_39026037.mp4", "openreview_id": "TFAG9UznPv", "slideslive_id": 39026037, "venue": "nips2024", "title": "On the Scalability of Certified Adversarial Robustness with Generated Data", "status": "Poster", "keywords": "certified robustness;adversarial robustness;scaling;generated data", "tldr": "There exist inherent limitations when scaling certifiably robust Lipschitz-constrained models with additional generated data.", "abstract": "Certified defenses against adversarial attacks offer formal guarantees on the robustness of a model, making them more reliable than empirical methods such as adversarial training, whose effectiveness is often later reduced by unseen attacks. 
Still, the limited certified robustness that is currently achievable has been a bottleneck for their practical adoption. Gowal et al. and Wang et al. have shown that generating additional training data using state-of-the-art diffusion models can considerably improve the robustness of adversarial training. In this work, we demonstrate that a similar approach can substantially improve deterministic certified defenses but also reveal notable differences in the scaling behavior between certified and empirical methods. In addition, we provide a list of recommendations to scale the robustness of certified training approaches. Our approach achieves state-of-the-art deterministic robustness certificates on CIFAR-10 for the $\\ell_2$ ($\\epsilon = 36/255$) and $\\ell_\\infty$ ($\\epsilon = 8/255$) threat models, outperforming the previous results by +3.95 and +1.39 percentage points, respectively. Furthermore, we report similar improvements for CIFAR-100.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/95047"} {"video_file": "TGmwp9jJXl_39026992.mp4", "openreview_id": "TGmwp9jJXl", "slideslive_id": 39026992, "venue": "nips2024", "title": "From Biased to Unbiased Dynamics: An Infinitesimal Generator Approach", "status": "Poster", "keywords": "Molecular dynamics;stochastic differential equations", "tldr": "we prove that learning the true dynamics from biased simulations is possible via the Infinitesimal Generator and develop the method that does it efficiently", "abstract": "We investigate learning the eigenfunctions of evolution operators for time-reversal invariant stochastic processes, a prime example being the Langevin equation used in molecular dynamics. Many physical or chemical processes described by this equation involve transitions between metastable states separated by high potential barriers that can hardly be crossed during a simulation. To overcome this bottleneck, data are collected via biased simulations that explore the state space more rapidly. We propose a framework for learning from biased simulations rooted in the infinitesimal generator of the process and the associated resolvent operator. We contrast our approach to more common ones based on the transfer operator, showing that it can provably learn the spectral properties of the unbiased system from biased data. In experiments, we highlight the advantages of our method over transfer operator approaches and recent developments based on generator learning, demonstrating its effectiveness in estimating eigenfunctions and eigenvalues. Importantly, we show that even with datasets containing only a few relevant transitions due to sub-optimal biasing, our approach recovers relevant information about the transition mechanism.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/95044"} {"video_file": "TIhiFqGOYC_39027784.mp4", "openreview_id": "TIhiFqGOYC", "slideslive_id": 39027784, "venue": "nips2024", "title": "Meaningful Learning: Enhancing Abstract Reasoning in Large Language Models via Generic Fact Guidance", "status": "Poster", "keywords": "Abstract Reasoning;Large Language Models;Question Answering", "tldr": "Large language models (LLMs) struggle with abstract reasoning. 
We created a specialized dataset and learning method to improve their abstract reasoning skills, achieving more than simple memorization.", "abstract": "Large language models (LLMs) have developed impressive performance and strong explainability across various reasoning scenarios, marking a significant stride towards mimicking human-like intelligence. Despite this, when tasked with several simple questions supported by a generic fact, LLMs often struggle to abstract and apply the generic fact to provide consistent and precise answers, revealing a deficiency in abstract reasoning abilities. This has sparked a vigorous debate about whether LLMs are genuinely reasoning or merely memorizing. In light of this, we design a preliminary study to quantify and delve into the abstract reasoning abilities of existing LLMs. Our findings reveal a substantial discrepancy between their general reasoning and abstract reasoning performances. To relieve this problem, we tailor an abstract reasoning dataset (AbsR) together with a meaningful learning paradigm to teach LLMs how to leverage generic facts for reasoning purposes. The results show that our approach not only boosts the general reasoning performance of LLMs but also makes considerable strides towards their capacity for abstract reasoning, moving beyond simple memorization or imitation to a more nuanced understanding and application of generic facts. The code is available at https://github.com/Waste-Wood/MeanLearn.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/95043"} +{"video_file": "TJsknGasMy_39027398.mp4", "openreview_id": "TJsknGasMy", "slideslive_id": 39027398, "venue": "nips2024", "title": "Differentially Private Stochastic Gradient Descent with Fixed-Size Minibatches: Tighter RDP Guarantees with or without Replacement", "status": "Poster", "keywords": "privacy preserving machine learning;differential privacy;differentially private stochastic gradient descent;fixed-size subsampled mechanisms;privacy amplification lemma", "tldr": "We provide a new privacy accountant and comprehensive analysis for differentially private stochastic gradient descent with tighter privacy guarantees under fixed-size resampling with or without replacement.", "abstract": "Differentially private stochastic gradient descent (DP-SGD) has been instrumental in privately training deep learning models by providing a framework to control and track the privacy loss incurred during training. At the core of this computation lies a subsampling method that uses a privacy amplification lemma to enhance the privacy guarantees provided by the additive noise. Fixed size subsampling is appealing for its constant memory usage, unlike the variable sized minibatches in Poisson subsampling. It is also of interest in addressing class imbalance and federated learning. Current computable guarantees for fixed-size subsampling are not tight and do not consider both add/remove and replace-one adjacency relationships. We present a new and holistic R\u00e9nyi differential privacy (RDP) accountant for DP-SGD with fixed-size subsampling without replacement (FSwoR) and with replacement (FSwR). For FSwoR we consider both add/remove and replace-one adjacency, where we improve on the best current computable bound by a factor of $4$. We also show for the first time that the widely-used Poisson subsampling and FSwoR with replace-one adjacency have the same privacy to leading order in the sampling probability. 
Our work suggests that FSwoR is often preferable to Poisson subsampling due to constant memory usage. Our FSwR accountant includes explicit non-asymptotic upper and lower bounds and, to the authors' knowledge, is the first such RDP analysis of fixed-size subsampling with replacement for DP-SGD. We analytically and empirically compare fixed size and Poisson subsampling, and show that DP-SGD gradients in a fixed-size subsampling regime exhibit lower variance in practice in addition to memory usage benefits.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/95041"} {"video_file": "TVbCKAqoD8_39025897.mp4", "openreview_id": "TVbCKAqoD8", "slideslive_id": 39025897, "venue": "nips2024", "title": "Trade-Offs of Diagonal Fisher Information Matrix Estimators", "status": "Poster", "keywords": "Fisher Information;Deep Learning;Information Geometry;Neuromanifold", "tldr": "We bound the variances of random estimators of the diagonal Fisher information matrix.", "abstract": "The Fisher information matrix can be used to characterize the local geometry of the parameter space of neural networks. It elucidates insightful theories and useful tools to understand and optimize neural networks. Given its high computational cost, practitioners often use random estimators and evaluate only the diagonal entries. We examine two popular estimators whose accuracy and sample complexity depend on their associated variances. We derive bounds of the variances and instantiate them in neural networks for regression and classification. We navigate trade-offs for both estimators based on analytical and numerical studies. We find that the variance quantities depend on the non-linearity w.r.t. different parameter groups and should not be neglected when estimating the Fisher information.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95032"} {"video_file": "TWeVQ5meMW_39024623.mp4", "openreview_id": "TWeVQ5meMW", "slideslive_id": 39024623, "venue": "nips2024", "title": "Subject-driven Text-to-Image Generation via Preference-based Reinforcement Learning", "status": "Poster", "keywords": "Diffusion Models;Text-to-Image generation;Reinforcement Learning", "tldr": "We present the \u03bb-Harmonic reward function and Reward Preference Optimization (RPO) for the subject-driven text-to-image generation task.", "abstract": "Text-to-image generative models have recently attracted considerable interest, enabling the synthesis of high-quality images from textual prompts. However, these models often lack the capability to generate specific subjects from given reference images or to synthesize novel renditions under varying conditions. Methods like DreamBooth and Subject-driven Text-to-Image (SuTI) have made significant progress in this area. Yet, both approaches primarily focus on enhancing similarity to reference images and require expensive setups, often overlooking the need for efficient training and avoiding overfitting to the reference images. In this work, we present the \u03bb-Harmonic reward function, which provides a reliable reward signal and enables early stopping for faster training and effective regularization. By combining the Bradley-Terry preference model, the \u03bb-Harmonic reward function also provides preference labels for subject-driven generation tasks. We propose Reward Preference Optimization (RPO), which offers a simpler setup (requiring only 3% of the negative samples used by DreamBooth) and fewer gradient steps for fine-tuning. 
Unlike most existing methods, our approach does not require training a text encoder or optimizing text embeddings and achieves text-image alignment by fine-tuning only the U-Net component. Empirically, \u03bb-Harmonic proves to be a reliable approach for model selection in subject-driven generation tasks. Based on preference labels and early stopping validation from the \u03bb-Harmonic reward function, our algorithm achieves a state-of-the-art CLIP-I score of 0.833 and a CLIP-T score of 0.314 on DreamBench.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95031"} {"video_file": "TXsRGrzICz_39028421.mp4", "openreview_id": "TXsRGrzICz", "slideslive_id": 39028421, "venue": "nips2024", "title": "What type of inference is planning?", "status": "Spotlight", "keywords": "planning;variational inference;belief propagation;message passing", "tldr": "In the context of planning we use variational inference to compare inference types, and to create a novel belief propagation-like algorithm for factored-state MDPs.", "abstract": "Multiple types of inference are available for probabilistic graphical models, e.g., marginal, maximum-a-posteriori, and even marginal maximum-a-posteriori. Which one do researchers mean when they talk about ``planning as inference''? There is no consistency in the literature, different types are used, and their ability to do planning is further entangled with specific approximations or additional constraints. In this work we use the variational framework to show that, just like all commonly used types of inference correspond to different weightings of the entropy terms in the variational problem, planning corresponds exactly to a different set of weights. This means that all the tricks of variational inference are readily applicable to planning. We develop an analogue of loopy belief propagation that allows us to perform approximate planning in factored-state Markov decision processes without incurring intractability due to the exponentially large state space. The variational perspective shows that the previous types of inference for planning are only adequate in environments with low stochasticity, and allows us to characterize each type by its own merits, disentangling the type of inference from the additional approximations that its practical use requires. We validate these results empirically on synthetic MDPs and tasks posed in the International Planning Competition.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/95030"} {"video_file": "TZ5k9IYBBf_39028614.mp4", "openreview_id": "TZ5k9IYBBf", "slideslive_id": 39028614, "venue": "nips2024", "title": "RanDumb: Random Representations Outperform Online Continually Learned Representations", "status": "Poster", "keywords": "online continual learning;exemplar-free;baseline;analysis", "tldr": "We shed light that the performance of state-of-the-art methods, including ICLR '24 best paper runner-up, does not surpass a random baseline (RanDumb), demonstrating the poor performance of representation learning", "abstract": "Continual learning has primarily focused on the issue of catastrophic forgetting and the associated stability-plasticity tradeoffs. However, little attention has been paid to the efficacy of continually learned representations, as representations are learned alongside classifiers throughout the learning process. 
Our primary contribution is empirically demonstrating that existing online continually trained deep networks produce inferior representations compared to a simple pre-defined random transforms. Our approach embeds raw pixels using a fixed random transform, approximating an RBF-Kernel initialized before any data is seen. We then train a simple linear classifier on top without storing any exemplars, processing one sample at a time in an online continual learning setting. This method, called RanDumb, significantly outperforms state-of-the-art continually learned representations across all standard online continual learning benchmarks. Our study reveals the significant limitations of representation learning, particularly in low-exemplar and online continual learning scenarios. Extending our investigation to popular exemplar-free scenarios with pretrained models, we find that training only a linear classifier on top of pretrained representations surpasses most continual fine-tuning and prompt-tuning strategies. Overall, our investigation challenges the prevailing assumptions about effective representation learning in the online continual learning.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/95027"} +{"video_file": "Tck41RANGK_39026703.mp4", "openreview_id": "Tck41RANGK", "slideslive_id": 39026703, "venue": "nips2024", "title": "MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence", "status": "Poster", "keywords": "adaptive optimization;adam;efficiency;memory efficiency", "tldr": "We propose a new memory-efficient version of Adam with strong theoretical and practical convergence, and low space overheads in practice.", "abstract": "We propose a new variant of the Adam optimizer called MicroAdam that specifically minimizes memory overheads, while maintaining theoretical convergence guarantees. We achieve this by compressing the gradient information before it is fed into the optimizer state, thereby reducing its memory footprint significantly. We control the resulting compression error via a novel instance of the classical error feedback mechanism from distributed optimization in which the error correction information is itself compressed to allow for practical memory gains. We prove that the resulting approach maintains theoretical convergence guarantees competitive to those of AMSGrad, while providing good practical performance. Specifically, we show that MicroAdam can be implemented efficiently on GPUs: on both million-scale (BERT) and billion-scale (LLaMA) models, MicroAdam provides practical convergence competitive to that of the uncompressed Adam baseline, with lower memory usage and similar running time. 
Our code is available at https://github.com/IST-DASLab/MicroAdam.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/95023"} +{"video_file": "TeBKVfhP2M_39024636.mp4", "openreview_id": "TeBKVfhP2M", "slideslive_id": 39024636, "venue": "nips2024", "title": "Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models", "status": "Poster", "keywords": "information theory;prompt compression;LLMs;optimization", "tldr": "We formalize and study fundamental limits of prompt compression for large language models via a rate-distortion framework.", "abstract": "We formalize the problem of prompt compression for large language models (LLMs) and present a framework to unify token-level prompt compression methods which create hard prompts for black-box models. We derive the distortion-rate function for this setup as a linear program, and provide an efficient algorithm to compute this fundamental limit via the dual of the linear program. Using the distortion-rate function as the baseline, we study the performance of existing compression schemes on a synthetic dataset consisting of prompts generated from a Markov chain, natural language queries, and their respective answers. Our empirical analysis demonstrates the criticality of query-aware prompt compression, where the compressor has knowledge of the downstream task/query for the black-box LLM. We show that there is a large gap between the performance of current prompt compression methods and the optimal strategy, and propose Adaptive QuerySelect, a query-aware, variable-rate adaptation of a prior work to close the gap. We extend our experiments to a small natural language dataset to further confirm our findings on our synthetic dataset.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95021"} +{"video_file": "Thou1rKdpZ_39024786.mp4", "openreview_id": "Thou1rKdpZ", "slideslive_id": 39024786, "venue": "nips2024", "title": "In-Context Learning of a Linear Transformer Block: Benefits of the MLP Component and One-Step GD Initialization", "status": "Poster", "keywords": "In-Context Learning;Transformers;Approximation Theory;Optimization", "tldr": "We investigate into the in-context learning ability of Linear Transformer Block on linear regression problems and its relationship with one-step gradient descent estimator with learnable initialization.", "abstract": "We study the \\emph{in-context learning} (ICL) ability of a \\emph{Linear Transformer Block} (LTB) that combines a linear attention component and a linear multi-layer perceptron (MLP) component. For ICL of linear regression with a Gaussian prior and a \\emph{non-zero mean}, we show that LTB can achieve nearly Bayes optimal ICL risk. In contrast, using only linear attention must incur an irreducible additive approximation error. Furthermore, we establish a correspondence between LTB and one-step gradient descent estimators with learnable initialization ($\\mathsf{GD}-\\beta$), in the sense that every $\\mathsf{GD}-\\beta$ estimator can be implemented by an LTB estimator and every optimal LTB estimator that minimizes the in-class ICL risk is effectively a $\\mathsf{GD}-\\beta$ estimator. Finally, we show that $\\mathsf{GD}-\\beta$ estimators can be efficiently optimized with gradient flow, despite a non-convex training objective. 
Our results reveal that LTB achieves ICL by implementing $\\mathsf{GD}-\\beta$, and they highlight the role of MLP layers in reducing approximation error.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95018"}
{"video_file": "Ti3ciyqlS3_39027926.mp4", "openreview_id": "Ti3ciyqlS3", "slideslive_id": 39027926, "venue": "nips2024", "title": "Improving Temporal Link Prediction via Temporal Walk Matrix Projection", "status": "Poster", "keywords": "Temporal Link Prediction;Dynamic Graph Learning;Graph Neural Network", "tldr": "This paper unifies existing pairwise information injection methods for temporal link prediction into a function of temporal walk matrices and introduces an efficient method for maintaining temporal walk matrices", "abstract": "Temporal link prediction, aiming at predicting future interactions among entities based on historical interactions, is crucial for a series of real-world applications. Although previous methods have demonstrated the importance of relative encodings for effective temporal link prediction, computational efficiency remains a major concern in constructing these encodings. Moreover, existing relative encodings are usually constructed based on structural connectivity, where temporal information is seldom considered. To address the aforementioned issues, we first analyze existing relative encodings and unify them as a function of temporal walk matrices. This unification establishes a connection between relative encodings and temporal walk matrices, providing a more principled way for analyzing and designing relative encodings. Based on this analysis, we propose a new temporal graph neural network called TPNet, which introduces a temporal walk matrix that incorporates the time decay effect to simultaneously consider both temporal and structural information. Moreover, TPNet designs a random feature propagation mechanism with theoretical guarantees to implicitly maintain the temporal walk matrices, which improves the computation and storage efficiency. Experimental results on 13 benchmark datasets verify the effectiveness and efficiency of TPNet, where TPNet outperforms other baselines on most datasets and achieves a maximum speedup of $33.3\\times$ compared to the SOTA baseline.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/95017"}
{"video_file": "Tpx9gcZVBf_39027948.mp4", "openreview_id": "Tpx9gcZVBf", "slideslive_id": 39027948, "venue": "nips2024", "title": "DiffAug: A Diffuse-and-Denoise Augmentation for Training Robust Classifiers", "status": "Poster", "keywords": "Synthetic Augmentations; Robust Classifiers; Classifier-guided Diffusion; Perceptually Aligned Gradients", "tldr": "A simple and efficient diffusion-based augmentation technique to improve classifier robustness.", "abstract": "We introduce DiffAug, a simple and efficient diffusion-based augmentation technique to train image classifiers for the crucial yet challenging goal of improved classifier robustness. Applying DiffAug to a given example consists of one forward-diffusion step followed by one reverse-diffusion step. Using both ResNet-50 and Vision Transformer architectures, we comprehensively evaluate classifiers trained with DiffAug and demonstrate the surprising effectiveness of single-step reverse diffusion in improving robustness to covariate shifts, certified adversarial accuracy and out of distribution detection.
When we combine DiffAug with other augmentations such as AugMix and DeepAugment we demonstrate further improved robustness. Finally, building on this approach, we also improve classifier-guided diffusion wherein we observe improvements in: (i) classifier-generalization, (ii) gradient quality (i.e., improved perceptual alignment) and (iii) image generation performance. We thus introduce a computationally efficient technique for training with improved robustness that does not require any additional data, and effectively complements existing augmentation approaches.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/95014"}
{"video_file": "TrXV4dMDcG_39025384.mp4", "openreview_id": "TrXV4dMDcG", "slideslive_id": 39025384, "venue": "nips2024", "title": "Robust Mixture Learning when Outliers Overwhelm Small Groups", "status": "Poster", "keywords": "robust statistics;mixture learning;list-decodable learning;small group;outliers;mean estimation;efficient algorithms", "tldr": "Meta-algorithm for the mixture learning problem with guarantees for small groups in the presence of large additive adversarial contamination.", "abstract": "We study the problem of estimating the means of well-separated mixtures when an adversary may add arbitrary outliers. While strong guarantees are available when the outlier fraction is significantly smaller than the minimum mixing weight, much less is known when outliers may crowd out low-weight clusters – a setting we refer to as list-decodable mixture learning (LD-ML). In this case, adversarial outliers can simulate additional spurious mixture components. Hence, if all means of the mixture must be recovered up to a small error in the output list, the list size needs to be larger than the number of (true) components. We propose an algorithm that obtains order-optimal error guarantees for each mixture mean with a minimal list-size overhead, significantly improving upon list-decodable mean estimation, the only existing method that is applicable for LD-ML. Although improvements are observed even when the mixture is non-separated, our algorithm achieves particularly strong guarantees when the mixture is separated: it can leverage the mixture structure to partially cluster the samples before carefully iterating a base learner for list-decodable mean estimation at different scales.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/95012"}
{"video_file": "Tt2xJaxDc4_39026626.mp4", "openreview_id": "Tt2xJaxDc4", "slideslive_id": 39026626, "venue": "nips2024", "title": "Randomized Truthful Auctions with Learning Agents", "status": "Poster", "keywords": "Auctions;No-Regret Learning;Revenue Maximization", "tldr": "We show that in repeated deterministic truthful auctions with two bidders using MWU, they do not converge to truthful bidding for many choices of learning rates. However, adding a small amount of randomization to the auction leads to convergence.", "abstract": "We study a setting where agents use no-regret learning algorithms to participate in repeated auctions. Recently, Kolumbus and Nisan [2022a] showed, rather surprisingly, that when bidders participate in second-price auctions using no-regret bidding algorithms, no matter how large the number of interactions $T$ is, the runner-up bidder may not converge to bidding truthfully. Our first result shows that this holds for all deterministic truthful auctions.
We also show that the ratio of the learning rates of different bidders can qualitatively affect the convergence of the bidders. Next, we consider the problem of revenue maximization in this environment. In the setting with fully rational bidders, the seminal result of Myerson [1981] showed that revenue can be maximized by using a second-price auction with reserves. We show that, in stark contrast, in our setting with learning bidders, randomized auctions can have strictly better revenue guarantees than second-price auctions with reserves, when $T$ is large enough. To do this, we provide a black-box transformation from any truthful auction $A$ to an auction $A'$ such that: i) all mean-based no-regret learners that participate in $A'$ converge to bidding truthfully, ii) the distance between the allocation rule and the payment rule between $A$, $A'$ is negligible. Finally, we study revenue maximization in the non-asymptotic regime. We define a notion of auctioneer regret that compares the revenue generated to the revenue of a second price auction with truthful bids. When the auctioneer has to use the same auction throughout the interaction, we show an (almost) tight regret bound of $\\tilde{\\Theta}(T^{3/4})$. Then, we consider the case where the auctioneer can use different auctions throughout the interaction, but in a way that is oblivious to the bids. For this setting, we show an (almost) tight bound of $\\tilde{\\Theta}(\\sqrt{T})$.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/95010"}
{"video_file": "TuspoNzIdB_39028893.mp4", "openreview_id": "TuspoNzIdB", "slideslive_id": 39028893, "venue": "nips2024", "title": "Mixture of neural fields for heterogeneous reconstruction in cryo-EM", "status": "Poster", "keywords": "cryogenic electron microscopy;neural representations", "tldr": "We demonstrate ab initio reconstruction of conformational and compositional heterogeneity in cryo-EM datasets with neural fields.", "abstract": "Cryo-electron microscopy (cryo-EM) is an experimental technique for protein structure determination that images an ensemble of macromolecules in near-physiological contexts. While recent advances enable the reconstruction of dynamic conformations of a single biomolecular complex, current methods do not adequately model samples with mixed conformational and compositional heterogeneity. In particular, datasets containing mixtures of multiple proteins require the joint inference of structure, pose, compositional class, and conformational states for 3D reconstruction. Here, we present Hydra, an approach that models both conformational and compositional heterogeneity fully ab initio by parameterizing structures as arising from one of K neural fields. We employ a hybrid optimization strategy and demonstrate the effectiveness of our approach on synthetic datasets composed of mixtures of proteins with large degrees of conformational variability. We additionally demonstrate Hydra on an experimental dataset imaged of a cellular lysate containing a mixture of different protein complexes.
Hydra expands the expressivity of heterogeneous reconstruction methods and thus broadens the scope of cryo-EM to increasingly complex samples.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/95007"}
{"video_file": "TusuJSbRxm_39026786.mp4", "openreview_id": "TusuJSbRxm", "slideslive_id": 39026786, "venue": "nips2024", "title": "Trajectory Data Suffices for Statistically Efficient Learning in Offline RL with Linear $q^\\pi$-Realizability and Concentrability", "status": "Poster", "keywords": "reinforcement learning;learning theory;offline RL;batch RL", "tldr": "We show that trajectory data suffices for statistically efficient learning in offline reinforcement learning, under the assumptions of linear $q^\\pi$-realizability, and concentrability.", "abstract": "We consider offline reinforcement learning (RL) in $H$-horizon Markov decision processes (MDPs) under the linear $q^\\pi$-realizability assumption, where the action-value function of every policy is linear with respect to a given $d$-dimensional feature function. The hope in this setting is that learning a good policy will be possible without requiring a sample size that scales with the number of states in the MDP. Foster et al. [2021] have shown this to be impossible even under concentrability, a data coverage assumption where a coefficient $C_{\\mathrm{conc}}$ bounds the extent to which the state-action distribution of any policy can veer off the data distribution. However, the data in this previous work was in the form of a sequence of individual transitions. This leaves open the question of whether the negative result mentioned could be overcome if the data was composed of sequences of full trajectories. In this work we answer this question positively by proving that with trajectory data, a dataset of size $\\mathrm{poly}(d,H,C_{\\mathrm{conc}})/\\epsilon^2$ is sufficient for deriving an $\\epsilon$-optimal policy, regardless of the size of the state space. The main tool that makes this result possible is due to Weisz et al. [2023], who demonstrate that linear MDPs can be used to approximate linearly $q^\\pi$-realizable MDPs. The connection to trajectory data is that the linear MDP approximation relies on "skipping" over certain states. The associated estimation problems are thus easy when working with trajectory data, while they remain nontrivial when working with individual transitions. The question of computational efficiency under our assumptions remains open.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/95006"}
{"video_file": "Tw032H2onS_39025731.mp4", "openreview_id": "Tw032H2onS", "slideslive_id": 39025731, "venue": "nips2024", "title": "Boosted Conformal Prediction Intervals", "status": "Poster", "keywords": "Conformal Prediction;Uncertainty Quantification;(Other) Statistical Learning", "tldr": "We introduce a boosted conformal procedure designed to tailor conformalized prediction intervals toward specific desired properties, such as enhanced conditional coverage or reduced interval length.", "abstract": "This paper introduces a boosted conformal procedure designed to tailor conformalized prediction intervals toward specific desired properties, such as enhanced conditional coverage or reduced interval length. We employ machine learning techniques, notably gradient boosting, to systematically improve upon a predefined conformity score function.
This process is guided by carefully constructed loss functions that measure the deviation of prediction intervals from the targeted properties. The procedure operates post-training, relying solely on model predictions and without modifying the trained model (e.g., the deep network). Systematic experiments demonstrate that starting from conventional conformal methods, our boosted procedure achieves substantial improvements in reducing interval length and decreasing deviation from target conditional coverage.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/95004"}
{"video_file": "Twqa0GFMGX_39028285.mp4", "openreview_id": "Twqa0GFMGX", "slideslive_id": 39028285, "venue": "nips2024", "title": "Idiographic Personality Gaussian Process for Psychological Assessment", "status": "Poster", "keywords": "Applications -- Cognitive science;Gaussian process;Latent variable model", "tldr": "We introduce an idiographic personality Gaussian process (IPGP) framework of time-series survey data for individualized psychological assessment.", "abstract": "We develop a novel measurement framework based on Gaussian process coregionalization model to address a long-lasting debate in psychometrics: whether psychological features like personality share a common structure across the population or vary uniquely for individuals. We propose idiographic personality Gaussian process (IPGP), an intermediate model that accommodates both shared trait structure across individuals and "idiographic" deviations. IPGP leverages the Gaussian process coregionalization model to conceptualize responses of grouped survey batteries but adjusted to non-Gaussian ordinal data, and exploits stochastic variational inference for latent factor estimation. Using both synthetic data and a novel survey, we show that IPGP improves both prediction of actual responses and estimation of intrapersonal response patterns compared to existing benchmarks. In the survey study, IPGP also identifies unique clusters of personality taxonomies, displaying great potential in advancing individualized approaches to psychological diagnosis.", "primary_area": "machine_learning_for_social_sciences", "site": "https://neurips.cc/virtual/2024/poster/95001"}
{"video_file": "TxffvJMnBy_39026060.mp4", "openreview_id": "TxffvJMnBy", "slideslive_id": 39026060, "venue": "nips2024", "title": "Optimal Algorithms for Online Convex Optimization with Adversarial Constraints", "status": "Spotlight", "keywords": "Online Convex Optimization;Regret bounds;Constraint violation bounds", "tldr": "We propose a new learning policy for online convex optimization with adversarial constraints. The proposed policy attains the optimal regret and cumulative constraint violation bounds.", "abstract": "A well-studied generalization of the standard online convex optimization (OCO) framework is constrained online convex optimization (COCO). In COCO, on every round, a convex cost function and a convex constraint function are revealed to the learner after it chooses the action for that round. The objective is to design an online learning policy that simultaneously achieves a small regret while ensuring a small cumulative constraint violation (CCV) against an adaptive adversary interacting over a horizon of length $T$. A long-standing open question in COCO is whether an online policy can simultaneously achieve $O(\\sqrt{T})$ regret and $\\tilde{O}(\\sqrt{T})$ CCV without any restrictive assumptions.
For the first time, we answer this in the affirmative and show that a simple first-order policy can simultaneously achieve these bounds. Furthermore, in the case of strongly convex cost and convex constraint functions, the regret guarantee can be improved to $O(\\log T)$ while keeping the CCV bound the same as above. We establish these results by effectively combining adaptive OCO policies as a blackbox with Lyapunov optimization - a classic tool from control theory. Surprisingly, the analysis is short and elegant.", "primary_area": "online_learning", "site": "https://neurips.cc/virtual/2024/poster/94999"}
{"video_file": "Ty25oVKTqj_39028842.mp4", "openreview_id": "Ty25oVKTqj", "slideslive_id": 39028842, "venue": "nips2024", "title": "UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections", "status": "Poster", "keywords": "3D reconstruction;novel view synthesis;reflection", "tldr": "We unify camera view and reflected view radiance field parameterizations and combine it with a multi-resolution grid backbone to achieve high-quality reconstruction of complex scenes with reflections.", "abstract": "Neural 3D scene representations have shown great potential for 3D reconstruction from 2D images. However, reconstructing real-world captures of complex scenes still remains a challenge. Existing generic 3D reconstruction methods often struggle to represent fine geometric details and do not adequately model reflective surfaces of large-scale scenes. Techniques that explicitly focus on reflective surfaces can model complex and detailed reflections by exploiting better reflection parameterizations. However, we observe that these methods are often not robust in real scenarios where non-reflective as well as reflective components are present. In this work, we propose UniSDF, a general purpose 3D reconstruction method that can reconstruct large complex scenes with reflections. We investigate both camera view as well as reflected view-based color parameterization techniques and find that explicitly blending these representations in 3D space enables reconstruction of surfaces that are more geometrically accurate, especially for reflective surfaces. We further combine this representation with a multi-resolution grid backbone that is trained in a coarse-to-fine manner, enabling faster reconstructions than prior methods. Extensive experiments on object-level datasets DTU, Shiny Blender as well as unbounded datasets Mip-NeRF 360 and Ref-NeRF real demonstrate that our method is able to robustly reconstruct complex large-scale scenes with fine details and reflective surfaces, leading to the best overall performance.
Project page: https://fangjinhuawang.github.io/UniSDF.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94998"} +{"video_file": "U3Rgdb4li9_39026457.mp4", "openreview_id": "U3Rgdb4li9", "slideslive_id": 39026457, "venue": "nips2024", "title": "Targeted Sequential Indirect Experiment Design", "status": "Poster", "keywords": "causality;experiment design;instrumental variables;indirect experiments", "tldr": "We adaptively learn optimal indirect experiments to narrow the bounds on a functional of f in high-dimensional, non-linear settings with unobserved confounding.", "abstract": "Scientific hypotheses typically concern specific aspects of complex, imperfectly understood or entirely unknown mechanisms, such as the effect of gene expression levels on phenotypes or how microbial communities influence environmental health. Such queries are inherently causal (rather than purely associational), but in many settings, experiments can not be conducted directly on the target variables of interest, but are indirect. Therefore, they perturb the target variable, but do not remove potential confounding factors. If, additionally, the resulting experimental measurements are high-dimensional and the studied mechanisms nonlinear, the query of interest is generally not identified. We develop an adaptive strategy to design indirect experiments that optimally inform a targeted query about the ground truth mechanism in terms of sequentially narrowing the gap between an upper and lower bound on the query. While the general formulation consists of a bi-level optimization procedure, we derive an efficiently estimable analytical kernel-based estimator of the bounds for the causal effect, a query of key interest, and demonstrate the efficacy of our approach in confounded, multivariate, nonlinear synthetic settings.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/94994"} +{"video_file": "U4BC0GrFAz_39025420.mp4", "openreview_id": "U4BC0GrFAz", "slideslive_id": 39025420, "venue": "nips2024", "title": "Do causal predictors generalize better to new domains?", "status": "Spotlight", "keywords": "causality;domain generalization;tabular data", "tldr": "Predictors using all available features, regardless of causality, have better in-domain and out-of-domain accuracy than predictors using causal features.", "abstract": "We study how well machine learning models trained on causal features generalize across domains. We consider 16 prediction tasks on tabular datasets covering applications in health, employment, education, social benefits, and politics. Each dataset comes with multiple domains, allowing us to test how well a model trained in one domain performs in another. For each prediction task, we select features that have a causal influence on the target of prediction. Our goal is to test the hypothesis that models trained on causal features generalize better across domains. Without exception, we find that predictors using all available features, regardless of causality, have better in-domain and out-of-domain accuracy than predictors using causal features. Moreover, even the absolute drop in accuracy from one domain to the other is no better for causal predictors than for models that use all features. In addition, we show that recent causal machine learning methods for domain generalization do not perform better in our evaluation than standard predictors trained on the set of causal features. 
Likewise, causal discovery algorithms either fail to run or select causal variables that perform no better than our selection. Extensive robustness checks confirm that our findings are stable under variable misclassification.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/94992"}
{"video_file": "U4KldRgoph_39027100.mp4", "openreview_id": "U4KldRgoph", "slideslive_id": 39027100, "venue": "nips2024", "title": "Enhancing Graph Transformers with Hierarchical Distance Structural Encoding", "status": "Poster", "keywords": "Graph Transformers;Graph Neural Networks;Graph Classification;Node Classification;Large Graphs;Scalability", "tldr": "We introduce Hierarchical Distance Structural Encoding (HDSE), a method that enhances graph transformers on both graph-level tasks and large-scale node classification.", "abstract": "Graph transformers need strong inductive biases to derive meaningful attention scores. Yet, current methods often fall short in capturing longer ranges, hierarchical structures, or community structures, which are common in various graphs such as molecules, social networks, and citation networks. This paper presents a Hierarchical Distance Structural Encoding (HDSE) method to model node distances in a graph, focusing on its multi-level, hierarchical nature. We introduce a novel framework to seamlessly integrate HDSE into the attention mechanism of existing graph transformers, allowing for simultaneous application with other positional encodings. To apply graph transformers with HDSE to large-scale graphs, we further propose a high-level HDSE that effectively biases the linear transformers towards graph hierarchies. We theoretically prove the superiority of HDSE in terms of expressivity and generalization. Empirically, we demonstrate that graph transformers with HDSE excel in graph classification, regression on 7 graph-level datasets, and node classification on 11 large-scale graphs.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/94991"}
{"video_file": "U6oQEzSp8z_39028065.mp4", "openreview_id": "U6oQEzSp8z", "slideslive_id": 39028065, "venue": "nips2024", "title": "An eye for an ear: zero-shot audio description leveraging an image captioner with audio-visual token distribution matching", "status": "Poster", "keywords": "Multimodal representation learning;Audio Captioning;Image Captioning;Audio-Visual;Large Language Model", "tldr": "We propose a novel method for aligning audio and image tokens to enable zero-shot audio captioning through MMD and Optimal Transport leveraging a large vision language model, achieving superior performance in unsupervised settings.", "abstract": "Multimodal large language models have fueled progress in image captioning. These models, fine-tuned on vast image datasets, exhibit a deep understanding of semantic concepts. In this work, we show that this ability can be re-purposed for audio captioning, where the joint image-language decoder can be leveraged to describe auditory content associated with image sequences within videos featuring audiovisual content. This can be achieved via multimodal alignment. Yet, this multimodal alignment task is non-trivial due to the inherent disparity between audible and visible elements in real-world videos. Moreover, multimodal representation learning often relies on contrastive learning, facing the challenge of the so-called modality gap which hinders smooth integration between modalities.
In this work, we introduce a novel methodology for bridging the audiovisual modality gap by matching the distributions of tokens produced by an audio backbone and those of an image captioner. Our approach aligns the audio token distribution with that of the image tokens, enabling the model to perform zero-shot audio captioning in an unsupervised fashion. This alignment allows for the use of either audio or audiovisual input by combining or substituting the image encoder with the aligned audio encoder. Our method achieves significantly improved performances in zero-shot audio captioning, compared to existing approaches.", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/94989"} +{"video_file": "U9MzoDOKZu_39028569.mp4", "openreview_id": "U9MzoDOKZu", "slideslive_id": 39028569, "venue": "nips2024", "title": "Meta-DT: Offline Meta-RL as Conditional Sequence Modeling with World Model Disentanglement", "status": "Poster", "keywords": "decision transformer;offline meta reinforcement learning;world model", "tldr": "We leverage the sequential modeling ability of the transformer architecture and robust task representation learning via world model disentanglement to achieve efficient generalization in offline meta-RL.", "abstract": "A longstanding goal of artificial general intelligence is highly capable generalists that can learn from diverse experiences and generalize to unseen tasks. The language and vision communities have seen remarkable progress toward this trend by scaling up transformer-based models trained on massive datasets, while reinforcement learning (RL) agents still suffer from poor generalization capacity under such paradigms. To tackle this challenge, we propose Meta Decision Transformer (Meta-DT), which leverages the sequential modeling ability of the transformer architecture and robust task representation learning via world model disentanglement to achieve efficient generalization in offline meta-RL. We pretrain a context-aware world model to learn a compact task representation, and inject it as a contextual condition to the causal transformer to guide task-oriented sequence generation. Then, we subtly utilize history trajectories generated by the meta-policy as a self-guided prompt to exploit the architectural inductive bias. We select the trajectory segment that yields the largest prediction error on the pretrained world model to construct the prompt, aiming to encode task-specific information complementary to the world model maximally. Notably, the proposed framework eliminates the requirement of any expert demonstration or domain knowledge at test time. Experimental results on MuJoCo and Meta-World benchmarks across various dataset types show that Meta-DT exhibits superior few and zero-shot generalization capacity compared to strong baselines while being more practical with fewer prerequisites. 
Our code is available at https://github.com/NJU-RL/Meta-DT.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94988"}
{"video_file": "UDi51I8K1p_39024734.mp4", "openreview_id": "UDi51I8K1p", "slideslive_id": 39024734, "venue": "nips2024", "title": "Exploring the trade-off between deep-learning and explainable models for brain-machine interfaces", "status": "Poster", "keywords": "brain-machine interfaces;neural decoders;safety;kalman filter;real-time processing", "tldr": "We show an explainable brain decoder that combines the Kalman filter and RNNs to predict finger movements with high accuracy", "abstract": "People with brain or spinal cord-related paralysis often need to rely on others for basic tasks, limiting their independence. A potential solution is brain-machine interfaces (BMIs), which could allow them to voluntarily control external devices (e.g., robotic arm) by decoding brain activity to movement commands. In the past decade, deep-learning decoders have achieved state-of-the-art results in most BMI applications, ranging from speech production to finger control. However, the 'black-box' nature of deep-learning decoders could lead to unexpected behaviors, resulting in major safety concerns in real-world physical control scenarios. In these applications, explainable but lower-performing decoders, such as the Kalman filter (KF), remain the norm. In this study, we designed a BMI decoder based on KalmanNet, an extension of the KF that augments its operation with recurrent neural networks to compute the Kalman gain. This results in a varying “trust” that shifts between inputs and dynamics. We used this algorithm to predict finger movements from the brain activity of two monkeys. We compared KalmanNet results offline (pre-recorded data, $n=13$ days) and online (real-time predictions, $n=5$ days) with a simple KF and two recent deep-learning algorithms: tcFNN (non-ReFIT version) and LSTM. KalmanNet achieved comparable or better results than other deep learning models in offline and online modes, relying on the dynamical model for stopping while depending more on neural inputs for initiating movements. We further validated this mechanism by implementing a heteroscedastic KF that used the same strategy, and it also approached state-of-the-art performance while remaining in the explainable domain of standard KFs. However, we also see two downsides to KalmanNet. KalmanNet shares the limited generalization ability of existing deep-learning decoders, and its usage of the KF as an inductive bias limits its performance in the presence of unseen noise distributions.
Despite this trade-off, our analysis successfully integrates traditional controls and modern deep-learning approaches to motivate high-performing yet still explainable BMI designs.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/94983"} +{"video_file": "UE6CeRMnq3_39026192.mp4", "openreview_id": "UE6CeRMnq3", "slideslive_id": 39026192, "venue": "nips2024", "title": "Frequency-aware Generative Models for Multivariate Time Series Imputation", "status": "Poster", "keywords": "Time series; Time series Imputation; Generative Models; Frequency domain", "tldr": "This paper proposes a frequency-aware generative model (FGTI) for multivariate time series imputation, which integrates frequency-domain information and uses cross-domain representation learning modules to enhance imputation accuracy.", "abstract": "Missing data in multivariate time series are common issues that can affect the analysis and downstream applications. Although multivariate time series data generally consist of the trend, seasonal and residual terms, existing works mainly focus on optimizing the modeling for the first two items. However, we find that the residual term is more crucial for getting accurate fillings, since it is more related to the diverse changes of data and the biggest component of imputation errors. Therefore, in this study, we introduce frequency-domain information and design Frequency-aware Generative Models for Multivariate Time Series Imputation (FGTI). Specifically, FGTI employs a high-frequency filter to boost the residual term imputation, supplemented by a dominant-frequency filter for the trend and seasonal imputation. Cross-domain representation learning module then fuses frequency-domain insights with deep representations. Experiments over various datasets with real-world missing values show that FGTI achieves superiority in both data imputation and downstream applications.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94982"} +{"video_file": "UFRZHFYW8e_39024539.mp4", "openreview_id": "UFRZHFYW8e", "slideslive_id": 39024539, "venue": "nips2024", "title": "RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models", "status": "Poster", "keywords": "vision-language models;robustness;spurious correlations;fine-grained", "tldr": "We present RaVL, an approach for discovering and mitigating spurious correlations in fine-tuned vision-language models", "abstract": "Fine-tuned vision-language models (VLMs) often capture spurious correlations between image features and textual attributes, resulting in degraded zero-shot performance at test time. Existing approaches for addressing spurious correlations (i) primarily operate at the global image-level rather than intervening directly on fine-grained image features and (ii) are predominantly designed for unimodal settings. In this work, we present RaVL, which takes a fine-grained perspective on VLM robustness by discovering and mitigating spurious correlations using local image features rather than operating at the global image level. Given a fine-tuned VLM, RaVL first discovers spurious correlations by leveraging a region-level clustering approach to identify precise image features contributing to zero-shot classification errors. 
Then, RaVL mitigates the identified spurious correlation with a novel region-aware loss function that enables the VLM to focus on relevant regions and ignore spurious relationships during fine-tuning. We evaluate RaVL on 654 VLMs with various model architectures, data domains, and learned spurious correlations. Our results show that RaVL accurately discovers (191% improvement over the closest baseline) and mitigates (8.2% improvement on worst-group image classification accuracy) spurious correlations. Qualitative evaluations on general-domain and medical-domain VLMs confirm our findings.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/94981"} +{"video_file": "UGlDVc0GTU_39024923.mp4", "openreview_id": "UGlDVc0GTU", "slideslive_id": 39024923, "venue": "nips2024", "title": "LLM-based Skill Diffusion for Zero-shot Policy Adaptation", "status": "Poster", "keywords": "Imitation Learning;Planning;Diffusion Model;Large Language Model", "tldr": "LLM-based Skill Diffusion for Zero-shot Policy Adaptation", "abstract": "Recent advances in data-driven imitation learning and offline reinforcement learning have highlighted the use of expert data for skill acquisition and the development of hierarchical policies based on these skills. However, these approaches have not significantly advanced in adapting these skills to unseen contexts, which may involve changing environmental conditions or different user requirements. In this paper, we present a novel LLM-based policy adaptation framework LDuS which leverages an LLM to guide the generation process of a skill diffusion model upon contexts specified in language, facilitating zero-shot skill-based policy adaptation to different contexts. To implement the skill diffusion model, we adapt the loss-guided diffusion with a sequential in-painting technique, where target trajectories are conditioned by masking them with past state-action sequences, thereby enabling the robust and controlled generation of skill trajectories in test-time. To have a loss function for a given context, we employ the LLM-based code generation with iterative refinement, by which the code and controlled trajectory are validated to align with the context in a closed-loop manner. Through experiments, we demonstrate the zero-shot adaptability of LDuS to various context types including different specification levels, multi-modality, and varied temporal conditions for several robotic manipulation tasks, outperforming other language-conditioned imitation and planning methods.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94979"} +{"video_file": "UN7nXLeh9D_39024868.mp4", "openreview_id": "UN7nXLeh9D", "slideslive_id": 39024868, "venue": "nips2024", "title": "Improved learning rates in multi-unit uniform price auctions", "status": "Poster", "keywords": "Online Learning;Auctions;Bandits", "tldr": "We improve known regret rates in repeated uniform multi-unit auctions under bandit feedback, and introduce a novel partial feedback specific to the auctions.", "abstract": "Motivated by the strategic participation of electricity producers in electricity day-ahead market, we study the problem of online learning in repeated multi-unit uniform price auctions focusing on the adversarial opposing bid setting. The main contribution of this paper is the introduction of a new modeling of the bid space. 
Indeed, we prove that a learning algorithm leveraging the structure of this problem achieves a regret of $\\tilde{O}(K^{4/3}T^{2/3})$ under bandit feedback, improving over the bound of $\\tilde{O}(K^{7/4}T^{3/4})$ previously obtained in the literature. This improved regret rate is tight up to logarithmic terms. Inspired by electricity reserve markets, we further introduce a different feedback model under which all winning bids are revealed. This feedback interpolates between the full-information and bandit scenarios depending on the auctions' results. We prove that, under this feedback, the algorithm that we propose achieves regret $\\tilde{O}(K^{5/2}\\sqrt{T})$.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/94974"}
{"video_file": "UO7Mvch1Z5_39026996.mp4", "openreview_id": "UO7Mvch1Z5", "slideslive_id": 39026996, "venue": "nips2024", "title": "Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image", "status": "Poster", "keywords": "image to 3d;3d generation;mesh generation", "tldr": "A novel high quality and efficient image-to-3D framework.", "abstract": "In this work, we introduce Unique3D, a novel image-to-3D framework for efficiently generating high-quality 3D meshes from single-view images, featuring state-of-the-art generation fidelity and strong generalizability. Previous methods based on Score Distillation Sampling (SDS) can produce diversified 3D results by distilling 3D knowledge from large 2D diffusion models, but they usually suffer from long per-case optimization time with inconsistent issues. Recent works address the problem and generate better 3D results either by finetuning a multi-view diffusion model or training a fast feed-forward model. However, they still lack intricate textures and complex geometries due to inconsistency and limited generated resolution. To simultaneously achieve high fidelity, consistency, and efficiency in single image-to-3D, we propose a novel framework Unique3D that includes a multi-view diffusion model with a corresponding normal diffusion model to generate multi-view images with their normal maps, a multi-level upscale process to progressively improve the resolution of generated orthographic multi-views, as well as an instant and consistent mesh reconstruction algorithm called ISOMER, which fully integrates the color and geometric priors into mesh results. Extensive experiments demonstrate that our Unique3D significantly outperforms other image-to-3D baselines in terms of geometric and textural details.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/94973"}
{"video_file": "UPxFYvHsyN_39028485.mp4", "openreview_id": "UPxFYvHsyN", "slideslive_id": 39028485, "venue": "nips2024", "title": "TFS-NeRF: Template-Free NeRF for Semantic 3D Reconstruction of Dynamic Scene", "status": "Poster", "keywords": "3D reconstruction;Template-free NeRF;Semantic reconstruction;Multiple entity interactions", "tldr": "Dynamic template-free NeRF for 3D semantic reconstruction of multiple entities interactions", "abstract": "Despite advancements in Neural Implicit models for 3D surface reconstruction, handling dynamic environments with interactions between arbitrary rigid, non-rigid, or deformable entities remains challenging.
The generic reconstruction methods adaptable to such dynamic scenes often require additional inputs like depth or optical flow or rely on pre-trained image features for reasonable outcomes. These methods typically use latent codes to capture frame-by-frame deformations. Another set of dynamic scene reconstruction methods are entity-specific, mostly focusing on humans, and rely on template models. In contrast, some template-free methods bypass these requirements and adopt traditional LBS (Linear Blend Skinning) weights for a detailed representation of deformable object motions, although they involve complex optimizations leading to lengthy training times. To this end, as a remedy, this paper introduces TFS-NeRF, a template-free 3D semantic NeRF for dynamic scenes captured from sparse or single-view RGB videos, featuring interactions among two entities and more time-efficient than other LBS-based approaches. Our framework uses an Invertible Neural Network (INN) for LBS prediction, simplifying the training process. By disentangling the motions of interacting entities and optimizing per-entity skinning weights, our method efficiently generates accurate, semantically separable geometries. Extensive experiments demonstrate that our approach produces high-quality reconstructions of both deformable and non-deformable objects in complex interactions, with improved training efficiency compared to existing methods. The code and models will be available on our github page.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94972"}
{"video_file": "URyeU8mwz1_39024357.mp4", "openreview_id": "URyeU8mwz1", "slideslive_id": 39024357, "venue": "nips2024", "title": "The Value of Reward Lookahead in Reinforcement Learning", "status": "Spotlight", "keywords": "Reinforcement Learning;Planning;Reward Lookahead;Competitive Ratio", "tldr": "The paper studies the potential increase in the value of RL problems given partial observations of the future realized rewards.", "abstract": "In reinforcement learning (RL), agents sequentially interact with changing environments while aiming to maximize the obtained rewards. Usually, rewards are observed only after acting, and so the goal is to maximize the expected cumulative reward. Yet, in many practical settings, reward information is observed in advance -- prices are observed before performing transactions; nearby traffic information is partially known; and goals are oftentimes given to agents prior to the interaction. In this work, we aim to quantifiably analyze the value of such future reward information through the lens of competitive analysis. In particular, we measure the ratio between the value of standard RL agents and that of agents with partial future-reward lookahead. We characterize the worst-case reward distribution and derive exact ratios for the worst-case reward expectations. Surprisingly, the resulting ratios relate to known quantities in offline RL and reward-free exploration. We further provide tight bounds for the ratio given the worst-case dynamics.
Our results cover the full spectrum between observing the immediate rewards before acting to observing all the rewards before the interaction starts.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94968"} +{"video_file": "UTNZKl5BUc_39026571.mp4", "openreview_id": "UTNZKl5BUc", "slideslive_id": 39026571, "venue": "nips2024", "title": "Gradual Domain Adaptation via Manifold-Constrained Distributionally Robust Optimization", "status": "Poster", "keywords": "Gradual Domain Adaptation;Distributionally Robust Optimization;Generalization Bound;Error Propagation Characterization", "tldr": "We propose a DRO-based method for gradual domain adaptation with theoretical generalization guarantees. We also introduce a new complexity measure that independently characterizes the dynamics of error propagation in gradual domain adaptation.", "abstract": "The aim of this paper is to address the challenge of gradual domain adaptation within a class of manifold-constrained data distributions. In particular, we consider a sequence of $T\\ge2$ data distributions $P_1,\\ldots,P_T$ undergoing a gradual shift, where each pair of consecutive measures $P_i,P_{i+1}$ are close to each other in Wasserstein distance. We have a supervised dataset of size $n$ sampled from $P_0$, while for the subsequent distributions in the sequence, only unlabeled i.i.d. samples are available. Moreover, we assume that all distributions exhibit a known favorable attribute, such as (but not limited to) having intra-class soft/hard margins. In this context, we propose a methodology rooted in Distributionally Robust Optimization (DRO) with an adaptive Wasserstein radius. We theoretically show that this method guarantees the classification error across all $P_i$s can be suitably bounded. Our bounds rely on a newly introduced {\\it {compatibility}} measure, which fully characterizes the error propagation dynamics along the sequence. Specifically, for inadequately constrained distributions, the error can exponentially escalate as we progress through the gradual shifts. Conversely, for appropriately constrained distributions, the error can be demonstrated to be linear or even entirely eradicated. We have substantiated our theoretical findings through several experimental results.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94967"} +{"video_file": "UWUUVKtKeu_39026893.mp4", "openreview_id": "UWUUVKtKeu", "slideslive_id": 39026893, "venue": "nips2024", "title": "Diffusion-based Reinforcement Learning via Q-weighted Variational Policy Optimization", "status": "Poster", "keywords": "Diffusion Model;Reinforcement Learning;Q-weighted Variational Policy Optimization", "tldr": "We propose a novel diffusion-based online RL algorithm, conducting policy optimization with Q-weighted variational loss and diffusion entropy regularization to exploit the expressiveness and exploration capability of diffusion policy.", "abstract": "Diffusion models have garnered widespread attention in Reinforcement Learning (RL) for their powerful expressiveness and multimodality. It has been verified that utilizing diffusion policies can significantly improve the performance of RL algorithms in continuous control tasks by overcoming the limitations of unimodal policies, such as Gaussian policies. Furthermore, the multimodality of diffusion policies also shows the potential of providing the agent with enhanced exploration capabilities. 
However, existing works mainly focus on applying diffusion policies in offline RL, while their incorporation into online RL has been less investigated. The diffusion model's training objective, known as the variational lower bound, cannot be applied directly in online RL due to the unavailability of 'good' samples (actions). To harmonize the diffusion model with online RL, we propose a novel model-free diffusion-based online RL algorithm named Q-weighted Variational Policy Optimization (QVPO). Specifically, we introduce the Q-weighted variational loss and its approximate implementation in practice. Notably, this loss is shown to be a tight lower bound of the policy objective. To further enhance the exploration capability of the diffusion policy, we design a special entropy regularization term. Unlike Gaussian policies, the log-likelihood in diffusion policies is inaccessible; thus this entropy term is nontrivial. Moreover, to reduce the large variance of diffusion policies, we also develop an efficient behavior policy through action selection. This can further improve its sample efficiency during online interaction. Consequently, the QVPO algorithm leverages the exploration capabilities and multimodality of diffusion policies, preventing the RL agent from converging to a sub-optimal policy. To verify the effectiveness of QVPO, we conduct comprehensive experiments on MuJoCo continuous control benchmarks. The final results demonstrate that QVPO achieves state-of-the-art performance in terms of both cumulative reward and sample efficiency.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94963"} +{"video_file": "UaJErAOssN_39024680.mp4", "openreview_id": "UaJErAOssN", "slideslive_id": 39024680, "venue": "nips2024", "title": "State Space Models on Temporal Graphs: A First-Principles Study", "status": "Poster", "keywords": "temporal graph learning; state space models; graph neural networks", "tldr": "We introduce a conceptualized GHiPPO abstraction on temporal graph and propose GraphSSM as a flexible state space framework for learning discrete graph sequences.", "abstract": "Over the past few years, research on deep graph learning has shifted from static graphs to temporal graphs in response to real-world complex systems that exhibit dynamic behaviors. In practice, temporal graphs are formalized as an ordered sequence of static graph snapshots observed at discrete time points. Sequence models such as RNNs or Transformers have long been the predominant backbone networks for modeling such temporal graphs. Yet, despite the promising results, RNNs struggle with long-range dependencies, while transformers are burdened by quadratic computational complexity. Recently, state space models (SSMs), which are framed as discretized representations of an underlying continuous-time linear dynamical system, have garnered substantial attention and achieved breakthrough advancements in independent sequence modeling. In this work, we undertake a principled investigation that extends SSM theory to temporal graphs by integrating structural information into the online approximation objective via the adoption of a Laplacian regularization term. The emergent continuous-time system introduces novel algorithmic challenges, thereby necessitating our development of GraphSSM, a graph state space model for modeling the dynamics of temporal graphs. 
Extensive experimental results demonstrate the effectiveness of our GraphSSM framework across various temporal graph benchmarks.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/94959"}
{"video_file": "UahrHR5HQh_39024502.mp4", "openreview_id": "UahrHR5HQh", "slideslive_id": 39024502, "venue": "nips2024", "title": "Variational Flow Matching for Graph Generation", "status": "Poster", "keywords": "generative modeling;flow matching;variational inference;categorical;discrete;graph generation;molecular generation", "tldr": "We propose a variational perspective on flow matching and apply it to graph generation.", "abstract": "We present a formulation of flow matching as variational inference, which we refer to as variational flow matching (VFM). We use this formulation to develop CatFlow, a flow matching method for categorical data that is easy to implement, computationally efficient, and achieves strong results on graph generation tasks. In VFM, the objective is to approximate the posterior probability path, which is a distribution over possible end points of a trajectory. VFM admits both the original flow matching objective and the CatFlow objective as special cases. We also relate VFM to score-based models, in which the dynamics are stochastic rather than deterministic, and derive a bound on the model likelihood based on a reweighted VFM objective. We evaluate CatFlow on one abstract graph generation task and two molecular generation tasks. In all cases, CatFlow exceeds or matches performance of the current state-of-the-art models.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94958"}
{"video_file": "UdXE5V2d0O_39028228.mp4", "openreview_id": "UdXE5V2d0O", "slideslive_id": 39028228, "venue": "nips2024", "title": "Direct Unlearning Optimization for Robust and Safe Text-to-Image Models", "status": "Poster", "keywords": "diffusion models;unlearning;safety", "tldr": "We erase the not-safe-for-work (NSFW) concepts in text-to-image diffusion models using image-based unlearning.", "abstract": "Recent advancements in text-to-image (T2I) models have greatly benefited from large-scale datasets, but they also pose significant risks due to the potential generation of unsafe content. To mitigate this issue, researchers proposed unlearning techniques that attempt to induce the model to unlearn potentially harmful prompts. However, these methods are easily bypassed by adversarial attacks, making them unreliable for ensuring the safety of generated images. In this paper, we propose Direct Unlearning Optimization (DUO), a novel framework for removing NSFW content from T2I models while preserving their performance on unrelated topics. DUO employs a preference optimization approach using curated paired image data, ensuring that the model learns to remove unsafe visual concepts while retaining unrelated features. Furthermore, we introduce an output-preserving regularization term to maintain the model's generative capabilities on safe content. Extensive experiments demonstrate that DUO can robustly defend against various state-of-the-art red teaming methods without significant performance degradation on unrelated topics, as measured by FID and CLIP scores.
Our work contributes to the development of safer and more reliable T2I models, paving the way for their responsible deployment in both closed-source and open-source scenarios.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/94956"} +{"video_file": "UddVRqTrjt_39027255.mp4", "openreview_id": "UddVRqTrjt", "slideslive_id": 39027255, "venue": "nips2024", "title": "Hierarchical Uncertainty Exploration via Feedforward Posterior Trees", "status": "Poster", "keywords": "Uncertainty Quantification;Explainable Computer Vision;Inverse Problems;Computational Imaging;Hierarchical Clustering", "tldr": "We predict a tree-valued hierarchical summarization of the posterior distribution for any input measurement, in a single forward pass of a neural network", "abstract": "When solving ill-posed inverse problems, one often desires to explore the space of potential solutions rather than be presented with a single plausible reconstruction. Valuable insights into these feasible solutions and their associated probabilities are embedded in the posterior distribution. However, when confronted with data of high dimensionality (such as images), visualizing this distribution becomes a formidable challenge, necessitating the application of effective summarization techniques before user examination. In this work, we introduce a new approach for visualizing posteriors across multiple levels of granularity using tree-valued predictions. Our method predicts a tree-valued hierarchical summarization of the posterior distribution for any input measurement, in a single forward pass of a neural network. We showcase the efficacy of our approach across diverse datasets and image restoration challenges, highlighting its prowess in uncertainty quantification and visualization. Our findings reveal that our method performs comparably to a baseline that hierarchically clusters samples from a diffusion-based posterior sampler, yet achieves this with orders of magnitude greater speed. Code and examples are available at our webpage.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/94955"} +{"video_file": "UdxpjKO2F9_39025677.mp4", "openreview_id": "UdxpjKO2F9", "slideslive_id": 39025677, "venue": "nips2024", "title": "Improving Environment Novelty Quantification for Effective Unsupervised Environment Design", "status": "Oral", "keywords": "Unsupervised Environment Design;Novelty-driven Autocurricula", "tldr": "We proposed the CENIE framework, which offers a scalable, domain-agnostic, and curriculum-aware approach to quantifying environment novelty using the agent's state-action space coverage.", "abstract": "Unsupervised Environment Design (UED) formalizes the problem of autocurricula through interactive training between a teacher agent and a student agent. The teacher generates new training environments with high learning potential, curating an adaptive curriculum that strengthens the student's ability to handle unseen scenarios. Existing UED methods mainly rely on regret, a metric that measures the difference between the agent's optimal and actual performance, to guide curriculum design. Regret-driven methods generate curricula that progressively increase environment complexity for the student but overlook environment novelty \u2014 a critical element for enhancing an agent's generalizability. 
Measuring environment novelty is especially challenging due to the underspecified nature of environment parameters in UED, and existing approaches face significant limitations. To address this, this paper introduces the Coverage-based Evaluation of Novelty In Environment (CENIE) framework. CENIE proposes a scalable, domain-agnostic, and curriculum-aware approach to quantifying environment novelty by leveraging the student's state-action space coverage from previous curriculum experiences. We then propose an implementation of CENIE that models this coverage and measures environment novelty using Gaussian Mixture Models. By integrating both regret and novelty as complementary objectives for curriculum design, CENIE facilitates effective exploration across the state-action space while progressively increasing curriculum complexity. Empirical evaluations demonstrate that augmenting existing regret-based UED algorithms with CENIE achieves state-of-the-art performance across multiple benchmarks, underscoring the effectiveness of novelty-driven autocurricula for robust generalization.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94954"} +{"video_file": "UekHycx0lz_39026129.mp4", "openreview_id": "UekHycx0lz", "slideslive_id": 39026129, "venue": "nips2024", "title": "DreamSteerer: Enhancing Source Image Conditioned Editability using Personalized Diffusion Models", "status": "Poster", "keywords": "Diffusion;Personalization;Editing", "tldr": "A plug-in method for improved editability of any existing T2I personalization baseline", "abstract": "Recent text-to-image (T2I) personalization methods have shown great premise in teaching a diffusion model user-specified concepts given a few images for reusing the acquired concepts in a novel context. With massive efforts being dedicated to personalized generation, a promising extension is personalized editing, namely to edit an image using personalized concepts, which can provide more precise guidance signal than traditional textual guidance. To address this, one straightforward solution is to incorporate a personalized diffusion model with a text-driven editing framework. However, such solution often shows unsatisfactory editability on the source image. To address this, we propose DreamSteerer, a plug-in method for augmenting existing T2I personalization methods. Specifically, we enhance the source image conditioned editability of a personalized diffusion model via a novel Editability Driven Score Distillation (EDSD) objective. Moreover, we identify a mode trapping issue with EDSD, and propose a mode shifting regularization with spatial feature guided sampling to avoid such issue. We further employ two key modifications on the Delta Denoising Score framework that enable high-fidelity local editing with personalized concepts. 
Extensive experiments validate that DreamSteerer can significantly improve the editability of several T2I personalization baselines while being computationally efficient.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/94953"} +{"video_file": "UmW9BYj761_39024938.mp4", "openreview_id": "UmW9BYj761", "slideslive_id": 39024938, "venue": "nips2024", "title": "No Filter: Cultural and Socioeconomic Diversity in Contrastive Vision-Language Models", "status": "Poster", "keywords": "cultural diversity;benchmarks;vision-language models", "tldr": "We study cultural and socioeconomic diversity in contrastive vision-language models (VLMs).", "abstract": "We study cultural and socioeconomic diversity in contrastive vision-language models (VLMs). Using a broad range of benchmark datasets and evaluation metrics, we bring to attention several important findings. First, the common filtering of training data to English image-text pairs disadvantages communities of lower socioeconomic status and negatively impacts cultural understanding. Notably, this performance gap is not captured by - and even at odds with - the currently popular evaluation metrics derived from the Western-centric ImageNet and COCO datasets. Second, pretraining with global, unfiltered data before fine-tuning on English content can improve cultural understanding without sacrificing performance on said popular benchmarks. Third, we introduce the task of geo-localization as a novel evaluation metric to assess cultural diversity in VLMs. Our work underscores the value of using diverse data to create more inclusive multimodal systems and lays the groundwork for developing VLMs that better represent global perspectives.", "primary_area": "human-AI_interaction", "site": "https://neurips.cc/virtual/2024/poster/94944"} +{"video_file": "UqvEHAnCJC_39026624.mp4", "openreview_id": "UqvEHAnCJC", "slideslive_id": 39026624, "venue": "nips2024", "title": "End-to-End Ontology Learning with Large Language Models", "status": "Poster", "keywords": "Ontology Learning;Large Language Models;Knowledge Representation", "tldr": "Building ontologies with Large Language Models", "abstract": "Ontologies are useful for automatic machine processing of domain knowledge as they represent it in a structured format. Yet, constructing ontologies requires substantial manual effort. To automate part of this process, large language models (LLMs) have been applied to solve various subtasks of ontology learning. However, this partial ontology learning does not capture the interactions between subtasks. We address this gap by introducing OLLM, a general and scalable method for building the taxonomic backbone of an ontology from scratch. Rather than focusing on subtasks, like individual relations between entities, we model entire subcomponents of the target ontology by finetuning an LLM with a custom regulariser that reduces overfitting on high-frequency concepts. We introduce a novel suite of metrics for evaluating the quality of the generated ontology by measuring its semantic and structural similarity to the ground truth. In contrast to standard metrics, our metrics use deep learning techniques to define more robust distance measures between graphs. Both our quantitative and qualitative results on Wikipedia show that OLLM outperforms subtask composition methods, producing more semantically accurate ontologies while maintaining structural integrity. 
We further demonstrate that our model can be effectively adapted to new domains, like arXiv, needing only a small number of training examples. Our source code and datasets are available at https://github.com/andylolu2/ollm.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94942"} +{"video_file": "Ur9f4hNIpN_39025403.mp4", "openreview_id": "Ur9f4hNIpN", "slideslive_id": 39025403, "venue": "nips2024", "title": "Predictor-Corrector Enhanced Transformers with Exponential Moving Average Coefficient Learning", "status": "Poster", "keywords": "Transformer;ODE;predictor-corrector;architecture", "tldr": "We introduce a predictor-corrector based Transformer (namely PCformer), where the predictor is an EMA augmented high-order method, and the corrector is a multstep method.", "abstract": "Residual networks, as discrete approximations of Ordinary Differential Equations (ODEs), have inspired significant advancements in neural network design, including multistep methods, high-order methods, and multi-particle dynamical systems. The precision of the solution to ODEs significantly affects parameter optimization, thereby impacting model performance. In this work, we present a series of advanced explorations of Transformer architecture design to minimize the error compared to the true ``solution.'' First, we introduce a predictor-corrector learning framework to minimize truncation errors, which consists of a high-order predictor and a multistep corrector. Second, we propose an exponential moving average-based coefficient learning method to strengthen our higher-order predictor. Extensive experiments on large-scale machine translation, abstractive summarization, language modeling, and natural language understanding benchmarks demonstrate the superiority of our approach. On the WMT'14 English-German and English-French tasks, our model achieved BLEU scores of 30.95 and 44.27, respectively. Furthermore, on the OPUS multilingual machine translation task, our model surpasses a robust 3.8B DeepNet by an average of 2.9 SacreBLEU, using only 1/3 parameters. Notably, it also beats LLama models by 5.7 accuracy points on the LM Harness Evaluation.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94940"} +{"video_file": "UtTjgMDTFO_39026923.mp4", "openreview_id": "UtTjgMDTFO", "slideslive_id": 39026923, "venue": "nips2024", "title": "Interventionally Consistent Surrogates for Complex Simulation Models", "status": "Poster", "keywords": "agent-based model;causal abstraction;complex simulator;surrogate model", "tldr": "We propose an approach to learning interventionally consistent surrogate models for complex simulators using causal abstraction.", "abstract": "Large-scale simulation models of complex socio-technical systems provide decision-makers with high-fidelity testbeds in which policy interventions can be evaluated and what-if scenarios explored. Unfortunately, the high computational cost of such models inhibits their widespread use in policy-making settings. Surrogate models can address these computational limitations, but to do so they must behave consistently with the simulator under interventions of interest. In this paper, we build upon recent developments in causal abstractions to develop a framework for learning interventionally consistent surrogate models for large-scale, complex simulation models. 
We provide theoretical results showing that our proposed approach induces surrogates to behave consistently with high probability with respect to the simulator across interventions of interest, facilitating rapid experimentation with policy interventions in complex systems. We further demonstrate with empirical studies that conventionally trained surrogates can misjudge the effect of interventions and misguide decision-makers towards suboptimal interventions, while surrogates trained for interventional consistency with our method closely mimic the behaviour of the original simulator under interventions of interest.", "primary_area": "machine_learning_for_social_sciences", "site": "https://neurips.cc/virtual/2024/poster/94939"} +{"video_file": "UvbpbEhGaw_39028490.mp4", "openreview_id": "UvbpbEhGaw", "slideslive_id": 39028490, "venue": "nips2024", "title": "Self-Supervised Alignment with Mutual Information: Learning to Follow Principles without Preference Labels", "status": "Poster", "keywords": "alignment;contrastive learning;constitutional ai;self-improvement", "tldr": "An iterative algorithm that increases the mutual information between responses and constitutions.", "abstract": "When prompting a language model (LM), users often expect the model to adhere to a set of behavioral principles across diverse tasks, such as producing insightful content while avoiding harmful or biased language. Instilling such principles (i.e., a constitution) into a model is resource-intensive, technically challenging, and generally requires human preference labels or examples. We introduce SAMI, an iterative algorithm that finetunes a pretrained language model (without requiring preference labels or demonstrations) to increase the conditional mutual information between constitutions and self-generated responses given queries from a dataset. On single-turn dialogue and summarization, a SAMI-trained mistral-7b outperforms the initial pretrained model, with win rates between 66% and 77%. Strikingly, it also surpasses an instruction-finetuned baseline (mistral-7b-instruct) with win rates between 55% and 57% on single-turn dialogue. SAMI requires a model that writes the principles. To avoid dependence on strong models for writing principles, we align a strong pretrained model (mixtral-8x7b) using constitutions written by a weak instruction-finetuned model (mistral-7b-instruct), achieving a 65% win rate on summarization. Finally, we investigate whether SAMI generalizes to diverse summarization principles (e.g., \"summaries should be scientific\") and scales to stronger models (llama3-70b), finding that it achieves win rates of up to 68% for learned and 67% for held-out principles compared to the base model. Our results show that a pretrained LM can learn to follow constitutions without using preference labels, demonstrations, or human oversight.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94936"} +{"video_file": "Uw2eJOI822_39025446.mp4", "openreview_id": "Uw2eJOI822", "slideslive_id": 39025446, "venue": "nips2024", "title": "Renovating Names in Open-Vocabulary Segmentation Benchmarks", "status": "Poster", "keywords": "vision-language datasets;open-vocabulary segmentation;renaming", "tldr": "We address the naming issues in open-vocabulary segmentation benchmarks and demonstrate that renaming improves both model training and evaluation.", "abstract": "Names are essential to both human cognition and vision-language models. 
Open-vocabulary models utilize class names as text prompts to generalize to categories unseen during training. However, the precision of these names is often overlooked in existing datasets. In this paper, we address this underexplored problem by presenting a framework for \"renovating\" names in open-vocabulary segmentation benchmarks (RENOVATE). Our framework features a renaming model that enhances the quality of names for each visual segment. Through experiments, we demonstrate that our renovated names help train stronger open-vocabulary models with up to 15% relative improvement and significantly enhance training efficiency with improved data quality. We also show that our renovated names improve evaluation by better measuring misclassification and enabling fine-grained model analysis. We provide our code and relabelings for several popular segmentation datasets to the research community on our project page: https://andrehuang.github.io/renovate.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94935"} +{"video_file": "Uymv9ThB50_39025461.mp4", "openreview_id": "Uymv9ThB50", "slideslive_id": 39025461, "venue": "nips2024", "title": "Uncovering Safety Risks of Large Language Models through Concept Activation Vector", "status": "Poster", "keywords": "large language model;responsible AI;AI safety;concept-based model explanation", "tldr": "We introduce a concept-based attack method using safety concept activation vectors (SCAVs) to efficiently attack well-aligned LLMs, revealing potential societal risks even after safety alignment.", "abstract": "Despite careful safety alignment, current large language models (LLMs) remain vulnerable to various attacks. To further unveil the safety risks of LLMs, we introduce a Safety Concept Activation Vector (SCAV) framework, which effectively guides the attacks by accurately interpreting LLMs' safety mechanisms. We then develop an SCAV-guided attack method that can generate both attack prompts and embedding-level attacks with automatically selected perturbation hyperparameters. Both automatic and human evaluations demonstrate that our attack method significantly improves the attack success rate and response quality while requiring less training data. Additionally, we find that our generated attack prompts may be transferable to GPT-4, and the embedding-level attacks may also be transferred to other white-box LLMs whose parameters are known. Our experiments further uncover the safety risks present in current LLMs. For example, in our evaluation of seven open-source LLMs, we observe an average attack success rate of 99.14%, based on the classic keyword-matching criterion. Finally, we provide insights into the safety mechanism of LLMs. 
The code is available at https://github.com/SproutNan/AI-Safety_SCAV.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/94933"} +{"video_file": "V0oJaLqY4E_39026619.mp4", "openreview_id": "V0oJaLqY4E", "slideslive_id": 39026619, "venue": "nips2024", "title": "Maximum Entropy Inverse Reinforcement Learning of Diffusion Models with Energy-Based Models", "status": "Oral", "keywords": "diffusion models;inverse reinforcement learning;dynamic programming;reinforcement learning;generative modeling", "tldr": "We present an inverse reinforcement learning framework for training diffusion models and provide a novel RL algorithm for diffusion models which leverages value functions.", "abstract": "We present a maximum entropy inverse reinforcement learning (IRL) approach for improving the sample quality of diffusion generative models, especially when the number of generation time steps is small. Similar to how IRL trains a policy based on the reward function learned from expert demonstrations, we train (or fine-tune) a diffusion model using the log probability density estimated from training data. Since we employ an energy-based model (EBM) to represent the log density, our approach boils down to the joint training of a diffusion model and an EBM. Our IRL formulation, named Diffusion by Maximum Entropy IRL (DxMI), is a minimax problem that reaches equilibrium when both models converge to the data distribution. The entropy maximization plays a key role in DxMI, facilitating the exploration of the diffusion model and ensuring the convergence of the EBM. We also propose Diffusion by Dynamic Programming (DxDP), a novel reinforcement learning algorithm for diffusion models, as a subroutine in DxMI. DxDP makes the diffusion model update in DxMI efficient by transforming the original problem into an optimal control formulation where value functions replace back-propagation in time. Our empirical studies show that diffusion models fine-tuned using DxMI can generate high-quality samples in as few as 4 and 10 steps. Additionally, DxMI enables the training of an EBM without MCMC, stabilizing EBM training dynamics and enhancing anomaly detection performance.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/94930"} +{"video_file": "V2MBWYXp63_39025119.mp4", "openreview_id": "V2MBWYXp63", "slideslive_id": 39025119, "venue": "nips2024", "title": "Text2NKG: Fine-Grained N-ary Relation Extraction for N-ary relational Knowledge Graph Construction", "status": "Poster", "keywords": "N-ary Relation Extraction;N-ary relational Knowledge Graph;Knowledge Graph Construction", "tldr": "We introduce Text2NKG, a novel fine-grained n-ary relation extraction framework for n-ary relational knowledge graph construction.", "abstract": "Beyond traditional binary relational facts, n-ary relational knowledge graphs (NKGs) are comprised of n-ary relational facts containing more than two entities, which are closer to real-world facts with broader applications. However, the construction of NKGs remains at a coarse-grained level, which is always in a single schema, ignoring the order and variable arity of entities. To address these restrictions, we propose Text2NKG, a novel fine-grained n-ary relation extraction framework for n-ary relational knowledge graph construction. We introduce a span-tuple classification approach with hetero-ordered merging and output merging to accomplish fine-grained n-ary relation extraction in different arity. 
Furthermore, Text2NKG supports four typical NKG schemas: hyper-relational schema, event-based schema, role-based schema, and hypergraph-based schema, with high flexibility and practicality. The experimental results demonstrate that Text2NKG achieves state-of-the-art performance in F1 scores on the fine-grained n-ary relation extraction benchmark. Our code and datasets are publicly available.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94929"} +{"video_file": "V3QZCM1AQv_39025494.mp4", "openreview_id": "V3QZCM1AQv", "slideslive_id": 39025494, "venue": "nips2024", "title": "REBORN: Reinforcement-Learned Boundary Segmentation with Iterative Training for Unsupervised ASR", "status": "Poster", "keywords": "Speech processing;unsupervised learning;reinforcement learning;adversarial learning", "tldr": "Reinforcement-Learned Boundary Segmentation with Iterative Training for Unsupervised ASR", "abstract": "Unsupervised automatic speech recognition (ASR) aims to learn the mapping between the speech signal and its corresponding textual transcription without the supervision of paired speech-text data. A word/phoneme in the speech signal is represented by a segment of speech signal with variable length and unknown boundary, and this segmental structure makes learning the mapping between speech and text challenging, especially without paired data. In this paper, we propose REBORN, Reinforcement-Learned Boundary Segmentation with Iterative Training for Unsupervised ASR. REBORN alternates between (1) training a segmentation model that predicts the boundaries of the segmental structures in speech signals and (2) training the phoneme prediction model, whose input is a segmental structure segmented by the segmentation model, to predict a phoneme transcription. Since supervised data for training the segmentation model is not available, we use reinforcement learning to train the segmentation model to favor segmentations that yield phoneme sequence predictions with a lower perplexity. We conduct extensive experiments and find that under the same setting, REBORN outperforms all prior unsupervised ASR models on LibriSpeech, TIMIT, and five non-English languages in Multilingual LibriSpeech. We comprehensively analyze why the boundaries learned by REBORN improve the unsupervised ASR performance.", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/94927"} +{"video_file": "V4tzn87DtN_39028643.mp4", "openreview_id": "V4tzn87DtN", "slideslive_id": 39028643, "venue": "nips2024", "title": "Stochastic Newton Proximal Extragradient Method", "status": "Poster", "keywords": "Stochastic second-order methods;superlinear convergence;hybrid proximal extragradient", "tldr": "With oracle access to exact gradients and noisy Hessians, we propose a stochastic Newton proximal extragradient method with both fast global and local convergence rates for strongly convex functions.", "abstract": "Stochastic second-order methods are known to achieve fast local convergence in strongly convex optimization by relying on noisy Hessian estimates to precondition the gradient. Yet, most of these methods achieve superlinear convergence only when the stochastic Hessian noise diminishes, requiring an increase in the per-iteration cost as time progresses. Recent work in \\cite{na2022hessian} addressed this issue via a Hessian averaging scheme that achieves a superlinear convergence rate without increasing the per-iteration cost. 
However, the considered method exhibits a slow global convergence rate, requiring up to $\tilde{O}(\kappa^2)$ iterations to reach the superlinear rate of $\tilde{O}((1/t)^{t/2})$, where $\kappa$ is the problem's condition number. In this paper, we propose a novel stochastic Newton proximal extragradient method that significantly improves these bounds, achieving a faster global linear rate and reaching the same fast superlinear rate in $\tilde{O}(\kappa)$ iterations. We achieve this by developing a novel extension of the Hybrid Proximal Extragradient (HPE) framework, which simultaneously achieves fast global and local convergence rates for strongly convex functions with access to a noisy Hessian oracle.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94925"} +{"video_file": "V6hrg4O9gg_39027257.mp4", "openreview_id": "V6hrg4O9gg", "slideslive_id": 39027257, "venue": "nips2024", "title": "CodeRosetta: Pushing the Boundaries of Unsupervised Code Translation for Parallel Programming", "status": "Poster", "keywords": "unsupervised learning;code generation;HPC code generation;program translation;HPC code translation", "tldr": "Unsupervised Learning to Translate High-Performance Code", "abstract": "Automatic translation of programming languages has garnered renewed interest, driven by recent advancements in large language models (LLMs). Encoder-decoder transformer models, in particular, have shown promise in translating between different programming languages. However, translating between a language and its high-performance computing (HPC) extension remains underexplored due to inherent challenges like complex parallel semantics understanding. In this paper, we introduce CodeRosetta, an encoder-decoder transformer model explicitly designed for translating between programming languages and also their HPC extensions. CodeRosetta is evaluated on C++ to CUDA and Fortran to C++ translation. It employs a customized learning-based framework with tailored pretraining and training objectives that enable it to effectively capture code semantics and parallel structural nuances, allowing for bidirectional code translation. Our results show that CodeRosetta outperforms state-of-the-art baselines in C++ to CUDA translation by 2.9 BLEU and 1.72 CodeBLUE points while improving compilation accuracy by 6.05%. Compared to general closed-source LLMs, our proposed bidirectional learning-based method improves C++ to CUDA translation by 22.08 BLEU and 14.39 CodeBLUE with 2.75% higher compilation accuracy. 
Finally, CodeRosetta exhibits proficiency in Fortran to parallel C++ translation, marking it, to our knowledge, as the first encoder-decoder model for such a complex translation task, improving CodeBLEU at least by 4.63 points compared to closed-source LLMs and Open Code LLM.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94924"} +{"video_file": "V6qdb1AgsM_39026124.mp4", "openreview_id": "V6qdb1AgsM", "slideslive_id": 39026124, "venue": "nips2024", "title": "Continual Counting with Gradual Privacy Expiration", "status": "Poster", "keywords": "differential privacy;continual observation;privacy expiration", "tldr": "We consider a variant of the continual counting problem in which privacy loss is allowed to grow over time.", "abstract": "Differential privacy with gradual expiration models the setting where data items arrive in a stream and at a given time $t$ the privacy loss guaranteed for a data item seen at time $(t - d)$ is $\epsilon g(d)$, where $g$ is a monotonically non-decreasing function. We study the fundamental continual (binary) counting problem where each data item consists of a bit and the algorithm needs to output at each time step the sum of all the bits streamed so far. For a stream of length $T$ and privacy without expiration continual counting is possible with maximum (over all time steps) additive error $O(\log^2(T)/\varepsilon)$ and the best known lower bound is $\Omega(\log(T)/\varepsilon)$; closing this gap is a challenging open problem.\nWe show that the situation is very different for privacy with gradual expiration by giving upper and lower bounds for a large set of expiration functions $g$. Specifically, our algorithm achieves an additive error of $O(\log(T)/\epsilon)$ for a large set of privacy expiration functions. We also give a lower bound that shows that if $C$ is the additive error of any $\epsilon$-DP algorithm for this problem, then the product of $C$ and the privacy expiration function after $2C$ steps must be $\Omega(\log(T)/\epsilon)$. Our algorithm matches this lower bound as its additive error is $O(\log(T)/\epsilon)$, even when $g(2C) = O(1)$.\nOur empirical evaluation shows that we achieve a slowly growing privacy loss that has significantly smaller empirical privacy loss for large values of $d$ than a natural baseline algorithm.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/94923"} +{"video_file": "V6w7keoTqn_39027212.mp4", "openreview_id": "V6w7keoTqn", "slideslive_id": 39027212, "venue": "nips2024", "title": "EMVP: Embracing Visual Foundation Model for Visual Place Recognition with Centroid-Free Probing", "status": "Poster", "keywords": "Visual Foundation Model;Visual Place Recognition;Parameter Efficiency Fine-Tuning", "tldr": "This paper proposes a novel and effective Parameter Efficiency Fine-Tuning (PEFT) pipeline of adapting a visual foundation model in the visual place recognition task.", "abstract": "Visual Place Recognition (VPR) is essential for mobile robots as it enables them to retrieve images from a database closest to their current location. The progress of Visual Foundation Models (VFMs) has significantly advanced VPR by capturing representative descriptors in images. 
However, existing fine-tuning efforts for VFMs often overlook the crucial role of probing in effectively adapting these descriptors for improved image representation. In this paper, we propose the Centroid-Free Probing (CFP) stage, making novel use of second-order features for more effective use of descriptors from VFMs. Moreover, to control the preservation of task-specific information adaptively based on the context of the VPR, we introduce the Dynamic Power Normalization (DPN) module in both the recalibration and CFP stages, forming a novel Parameter Efficiency Fine-Tuning (PEFT) pipeline (EMVP) tailored for the VPR task. Extensive experiments demonstrate the superiority of the proposed CFP over existing probing methods. Moreover, the EMVP pipeline can further enhance fine-tuning performance in terms of accuracy and efficiency. Specifically, it achieves 93.9%, 96.5%, and 94.6% Recall@1 on the MSLS Validation, Pitts250k-test, and SPED datasets, respectively, while saving 64.3% of trainable parameters compared with the existing SOTA PEFT method.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/94922"} +{"video_file": "VDPZe0NbpE_39024626.mp4", "openreview_id": "VDPZe0NbpE", "slideslive_id": 39024626, "venue": "nips2024", "title": "PRODuctive bandits: Importance Weighting No More", "status": "Poster", "keywords": "Bandit Algorithms;Online Learning", "tldr": "We analyze variants of the Prod algorithm and show that they enjoy optimal regret guarantees in the bandit setting.", "abstract": "Prod is a seminal algorithm in full-information online learning, which has been conjectured to be fundamentally sub-optimal for multi-armed bandits. By leveraging the interpretation of Prod as a first-order OMD approximation, we present the following surprising results:\n1. Variants of Prod can obtain optimal regret for adversarial multi-armed bandits.\n2. There exists a simple and (arguably) importance-weighting free variant with optimal rate.\n3. One can even achieve best-both-worlds guarantees with logarithmic regret in the stochastic regime.\nThe bandit algorithms in this work use simple arithmetic update rules without the need of solving optimization problems typical in prior work. Finally, the results directly improve the state of the art of incentive-compatible bandits.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/94919"} +{"video_file": "VFRyS7Wx08_39025515.mp4", "openreview_id": "VFRyS7Wx08", "slideslive_id": 39025515, "venue": "nips2024", "title": "Rethinking Inverse Reinforcement Learning: from Data Alignment to Task Alignment", "status": "Poster", "keywords": "inverse reinforcement learning;imitation learning;reinforcement learning", "tldr": "Propose a new formulation for inverse reinforcement learning-based imitation learning to mitigate the task-reward misalignment.", "abstract": "Many imitation learning (IL) algorithms use inverse reinforcement learning (IRL) to infer a reward function that aligns with the demonstration. However, the inferred reward functions often fail to capture the underlying task objectives. In this paper, we propose a novel framework for IRL-based IL that prioritizes task alignment over conventional data alignment. Our framework is a semi-supervised approach that leverages expert demonstrations as weak supervision to derive a set of candidate reward functions that align with the task rather than only with the data. 
It then adopts an adversarial mechanism to train a policy with this set of reward functions to gain a collective validation of the policy's ability to accomplish the task. We provide theoretical insights into this framework's ability to mitigate task-reward misalignment and present a practical implementation. Our experimental results show that our framework outperforms conventional IL baselines in complex and transfer learning scenarios.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94918"} +{"video_file": "VFqzxhINFU_39027908.mp4", "openreview_id": "VFqzxhINFU", "slideslive_id": 39027908, "venue": "nips2024", "title": "StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation", "status": "Spotlight", "keywords": "Consistent character generation;Diffusion model;Image generation;Video generation;Transition prediction", "tldr": "Generating long-range image and video with consistent characters, based on Consistent Self-Attention and Motion Predictor.", "abstract": "For recent diffusion-based generative models, maintaining consistent content across a series of generated images, especially those containing subjects and complex details, presents a significant challenge. In this paper, we propose a simple but effective self-attention mechanism, termed Consistent Self-Attention, that boosts the consistency between the generated images. It can be used to augment pre-trained diffusion-based text-to-image models in a zero-shot manner. Based on the images with consistent content, we further show that our method can be extended to long range video generation by introducing a semantic space temporal motion prediction module, named Semantic Motion Predictor. It is trained to estimate the motion conditions between two provided images in the semantic spaces. This module converts the generated sequence of images into videos with smooth transitions and consistent subjects that are more stable than the modules based on latent spaces only, especially in the context of long video generation. By merging these two novel components, our framework, referred to as StoryDiffusion, can describe a text-based story with consistent images or videos encompassing a rich variety of contents. The proposed StoryDiffusion encompasses pioneering explorations in visual story generation with the presentation of images and videos, which we hope could inspire more research from the aspect of architectural modifications.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94916"} +{"video_file": "VJMYOfJVC2_39025259.mp4", "openreview_id": "VJMYOfJVC2", "slideslive_id": 39025259, "venue": "nips2024", "title": "WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models", "status": "Poster", "keywords": "Lifelong Model Editing;Large Language Model;Knowledge Memory", "tldr": "We propose an effective method of lifelong model editing for large language models.", "abstract": "Large language models (LLMs) need knowledge updates to meet the ever-growing world facts and correct the hallucinated responses, facilitating the methods of lifelong model editing. Where the updated knowledge resides in memories is a fundamental question for model editing. 
In this paper, we find that editing either long-term memory (direct model parameters) or working memory (non-parametric knowledge of neural network activations/representations by retrieval) will result in an impossible triangle---reliability, generalization, and locality can not be realized together in the lifelong editing settings. For long-term memory, directly editing the parameters will cause conflicts with irrelevant pretrained knowledge or previous edits (poor reliability and locality). For working memory, retrieval-based activations can hardly make the model understand the edits and generalize (poor generalization). Therefore, we propose WISE to bridge the gap between memories. In WISE, we design a dual parametric memory scheme, which consists of the main memory for the pretrained knowledge and a side memory for the edited knowledge. We only edit the knowledge in the side memory and train a router to decide which memory to go through when given a query. For continual editing, we devise a knowledge-sharding mechanism where different sets of edits reside in distinct subspaces of parameters, and are subsequently merged into a shared memory without conflicts. Extensive experiments show that WISE can outperform previous model editing methods and overcome the impossible triangle under lifelong model editing of question answering, hallucination, and out-of-distribution settings across trending LLM architectures, e.g., GPT, LLaMA, and Mistral.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94912"} +{"video_file": "VKt0K3iOmO_39027019.mp4", "openreview_id": "VKt0K3iOmO", "slideslive_id": 39027019, "venue": "nips2024", "title": "Spiking Graph Neural Network on Riemannian Manifolds", "status": "Poster", "keywords": "Graph Neural Network;Spiking Neural Network;Riemannian Geometry", "tldr": "We propose a new spiking neuron on Riemannian manifold so that the high-latency BPTT algorithm is replaced by our Differentiation via Manifold.", "abstract": "Graph neural networks (GNNs) have become the dominant solution for learning on graphs, the typical non-Euclidean structures. Conventional GNNs, constructed with the Artificial Neuron Network (ANN), have achieved impressive performance at the cost of high computation and energy consumption. In parallel, spiking GNNs with brain-like spiking neurons are drawing increasing research attention owing to the energy efficiency. So far, existing spiking GNNs consider graphs in Euclidean space, ignoring the structural geometry, and suffer from the high latency issue due to Back-Propagation-Through-Time (BPTT) with the surrogate gradient. In light of the aforementioned issues, we are devoted to exploring spiking GNN on Riemannian manifolds, and present a Manifold-valued Spiking GNN (MSG). In particular, we design a new spiking neuron on geodesically complete manifolds with the diffeomorphism, so that BPTT regarding the spikes is replaced by the proposed differentiation via manifold. Theoretically, we show that MSG approximates a solver of the manifold ordinary differential equation. 
Extensive experiments on common graphs show the proposed MSG achieves superior performance to previous spiking GNNs and energy efficiency to conventional GNNs.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/94910"} +{"video_file": "VLw8ZyKfcm_39028707.mp4", "openreview_id": "VLw8ZyKfcm", "slideslive_id": 39028707, "venue": "nips2024", "title": "Latent Neural Operator for Solving Forward and Inverse PDE Problems", "status": "Poster", "keywords": "latent neural operator;PDE;forward and inverse problems", "tldr": "We propose Latent Neural Operator to solve forward and inverse PDE problems in latent space.", "abstract": "Neural operators effectively solve PDE problems from data without knowing the explicit equations, which learn the map from the input sequences of observed samples to the predicted values. Most existing works build the model in the original geometric space, leading to high computational costs when the number of sample points is large. We present the Latent Neural Operator (LNO) solving PDEs in the latent space. In particular, we first propose Physics-Cross-Attention (PhCA) transforming representation from the geometric space to the latent space, then learn the operator in the latent space, and finally recover the real-world geometric space via the inverse PhCA map. Our model retains flexibility that can decode values in any position not limited to locations defined in the training set, and therefore can naturally perform interpolation and extrapolation tasks particularly useful for inverse problems. Moreover, the proposed LNO improves both prediction accuracy and computational efficiency. Experiments show that LNO reduces the GPU memory by 50%, speeds up training 1.8 times, and reaches state-of-the-art accuracy on four out of six benchmarks for forward problems and a benchmark for inverse problem. Code is available at https://github.com/L-I-M-I-T/LatentNeuralOperator.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/94908"} +{"video_file": "VMsHnv8cVs_39026285.mp4", "openreview_id": "VMsHnv8cVs", "slideslive_id": 39026285, "venue": "nips2024", "title": "Learning Better Representations From Less Data For Propositional Satisfiability", "status": "Spotlight", "keywords": "Neuro-symbolic;Propositional Logic;Resolution;Attention;Deep Learning;Graph Neural Networks;Expert Iteration", "tldr": "We present a neuro-symbolic approach for learning proofs of propositional satisfiability with largely enhanced data efficiency and a self-improving workflow.", "abstract": "Training neural networks on NP-complete problems typically demands very large amounts of training data and often needs to be coupled with computationally expensive symbolic verifiers to ensure output correctness. In this paper, we present NeuRes, a neuro-symbolic approach to address both challenges for propositional satisfiability, being the quintessential NP-complete problem. By combining certificate-driven training and expert iteration, our model learns better representations than models trained for classification only, with a much higher data efficiency -- requiring orders of magnitude less training data. NeuRes employs propositional resolution as a proof system to generate proofs of unsatisfiability and to accelerate the process of finding satisfying truth assignments, exploring both possibilities in parallel. 
To realize this, we propose an attention-based architecture that autoregressively selects pairs of clauses from a dynamic formula embedding to derive new clauses. Furthermore, we employ expert iteration whereby model-generated proofs progressively replace longer teacher proofs as the new ground truth. This enables our model to reduce a dataset of proofs generated by an advanced solver by ~32% after training on it with no extra guidance. This shows that NeuRes is not limited by the optimality of the teacher algorithm owing to its self-improving workflow. We show that our model achieves far better performance than NeuroSAT in terms of both correctly classified and proven instances.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94906"} +{"video_file": "VNbQbv658b_39027677.mp4", "openreview_id": "VNbQbv658b", "slideslive_id": 39027677, "venue": "nips2024", "title": "CoVoMix: Advancing Zero-Shot Speech Generation for Human-like Multi-talker Conversations", "status": "Poster", "keywords": "text-to-speech;dialogue generation;zero-shot", "tldr": "We introduce CoVoMix: Conversational Voice Mixture Generation, a novel model for zero-shot, human-like, multi-speaker, multi-round dialogue speech generation", "abstract": "Recent advancements in zero-shot text-to-speech (TTS) modeling have led to significant strides in generating high-fidelity and diverse speech. However, dialogue generation, along with achieving human-like naturalness in speech, continues to be a challenge. In this paper, we introduce CoVoMix: Conversational Voice Mixture Generation, a novel model for zero-shot, human-like, multi-speaker, multi-round dialogue speech generation. CoVoMix first converts dialogue text into multiple streams of discrete tokens, with each token stream representing semantic information for individual talkers. These token streams are then fed into a flow-matching based acoustic model to generate mixed mel-spectrograms. Finally, the speech waveforms are produced using a HiFi-GAN model. Furthermore, we devise a comprehensive set of metrics for measuring the effectiveness of dialogue modeling and generation. Our experimental results show that CoVoMix can generate dialogues that are not only human-like in their naturalness and coherence but also involve multiple talkers engaging in multiple rounds of conversation. This is exemplified by instances generated in a single channel where one speaker's utterance is seamlessly mixed with another's interjections or laughter, indicating the latter's role as an attentive listener. Audio samples are enclosed in the supplementary.", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/94904"} +{"video_file": "VOVyeOzZx0_39024999.mp4", "openreview_id": "VOVyeOzZx0", "slideslive_id": 39024999, "venue": "nips2024", "title": "Weak Supervision Performance Evaluation via Partial Identification", "status": "Poster", "keywords": "weak supervision;evaluation;Frechet bounds;partial identification", "tldr": "This paper introduces a method to estimate performance metrics like accuracy, recall, precision, and F1 score in the weak supervision setup when no ground truth labels are observed.", "abstract": "Programmatic Weak Supervision (PWS) enables supervised model training without direct access to ground truth labels, utilizing weak labels from heuristics, crowdsourcing, or pre-trained models. 
However, the absence of ground truth complicates model evaluation, as traditional metrics such as accuracy, precision, and recall cannot be directly calculated. In this work, we present a novel method to address this challenge by framing model evaluation as a partial identification problem and estimating performance bounds using Fr\u00e9chet bounds. Our approach derives reliable bounds on key metrics without requiring labeled data, overcoming core limitations in current weak supervision evaluation techniques. Through scalable convex optimization, we obtain accurate and computationally efficient bounds for metrics including accuracy, precision, recall, and F1-score, even in high-dimensional settings. This framework offers a robust approach to assessing model quality without ground truth labels, enhancing the practicality of weakly supervised learning for real-world applications.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/94902"} +{"video_file": "VQyb9LKmUH_39025750.mp4", "openreview_id": "VQyb9LKmUH", "slideslive_id": 39025750, "venue": "nips2024", "title": "A Prompt-Based Knowledge Graph Foundation Model for Universal In-Context Reasoning", "status": "Poster", "keywords": "knowledge graph; link prediction; in-context learning; prompt graph; graph neural network;foundation model", "tldr": "We propose a novel in-context knowledge graph foundation model for universal reasoning across diverse KGs and various reasoning settings.", "abstract": "Extensive knowledge graphs (KGs) have been constructed to facilitate knowledge-driven tasks across various scenarios. However, existing work usually develops separate reasoning models for different KGs, lacking the ability to generalize and transfer knowledge across diverse KGs and reasoning settings. In this paper, we propose a prompt-based KG foundation model via in-context learning, namely KG-ICL, to achieve a universal reasoning ability. Specifically, we introduce a prompt graph centered with a query-related example fact as context to understand the query relation. To encode prompt graphs with the generalization ability to unseen entities and relations in queries, we first propose a unified tokenizer that maps entities and relations in prompt graphs to predefined tokens. Then, we propose two message passing neural networks to perform prompt encoding and KG reasoning, respectively. We conduct evaluation on 43 different KGs in both transductive and inductive settings. Results indicate that the proposed KG-ICL outperforms baselines on most datasets, showcasing its outstanding generalization and universal reasoning capabilities. The source code is accessible on GitHub: https://github.com/nju-websoft/KG-ICL.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/94900"} +{"video_file": "VSz9na5Jtl_39028495.mp4", "openreview_id": "VSz9na5Jtl", "slideslive_id": 39028495, "venue": "nips2024", "title": "PageRank Bandits for Link Prediction", "status": "Poster", "keywords": "Link Prediction;PageRank;Graph Mining", "tldr": "A method that combines contextual bandits and PageRank for online and offline link prediction.", "abstract": "Link prediction is a critical problem in graph learning with broad applications such as recommender systems and knowledge graph completion. Numerous research efforts have been directed at solving this problem, including approaches based on similarity metrics and Graph Neural Networks (GNN). 
However, most existing solutions are still rooted in conventional supervised learning, which makes it challenging to adapt over time to changing customer interests and to address the inherent dilemma of exploitation versus exploration in link prediction. To tackle these challenges, this paper reformulates link prediction as a sequential decision-making process, where each link prediction interaction occurs sequentially. We propose a novel fusion algorithm, PRB (PageRank Bandits), which is the first to combine contextual bandits with PageRank for collaborative exploitation and exploration. We also introduce a new reward formulation and provide a theoretical performance guarantee for PRB. Finally, we extensively evaluate PRB in both online and offline settings, comparing it with bandit-based and graph-based methods. The empirical success of PRB demonstrates the value of the proposed fusion approach. Our code is released at https://github.com/jiaruzouu/PRB.", "primary_area": "machine_learning_for_social_sciences", "site": "https://neurips.cc/virtual/2024/poster/94897"} +{"video_file": "VUWvVvNi6r_39027217.mp4", "openreview_id": "VUWvVvNi6r", "slideslive_id": 39027217, "venue": "nips2024", "title": "Unveiling the Hidden Structure of Self-Attention via Kernel Principal Component Analysis", "status": "Poster", "keywords": "Transformers;Attention;Kernel Principal Component Analysis", "tldr": "We show that self-attention projects its query vectors onto the principal component axes of its key matrix in a feature space and propose Attention with Robust Principal Components, a novel robust attention that is resilient to data contamination.", "abstract": "The remarkable success of transformers in sequence modeling tasks, spanning various applications in natural language processing and computer vision, is attributed to the critical role of self-attention. Similar to the development of most deep learning models, the construction of these attention mechanisms relies on heuristics and experience. In our work, we derive self-attention from kernel principal component analysis (kernel PCA) and show that self-attention projects its query vectors onto the principal component axes of its key matrix in a feature space. We then formulate the exact formula for the value matrix in self-attention, theoretically and empirically demonstrating that this value matrix captures the eigenvectors of the Gram matrix of the key vectors in self-attention. Leveraging our kernel PCA framework, we propose Attention with Robust Principal Components (RPC-Attention), a novel class of robust attention that is resilient to data contamination. 
We empirically demonstrate the advantages of RPC-Attention over softmax attention on the ImageNet-1K object classification, WikiText-103 language modeling, and ADE20K image segmentation task.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/94894"} +{"video_file": "VUgXAWOCQz_39024695.mp4", "openreview_id": "VUgXAWOCQz", "slideslive_id": 39024695, "venue": "nips2024", "title": "Randomized algorithms and PAC bounds for inverse reinforcement learning in continuous spaces", "status": "Poster", "keywords": "Inverse reinforcement learning;statistical learning;Markov decision processes", "tldr": "Randomized algorithms and PAC bounds for IRL in continuous spaces", "abstract": "This work studies discrete-time discounted Markov decision processes with continuous state and action spaces and addresses the inverse problem of inferring a cost function from observed optimal behavior. We first consider the case in which we have access to the entire expert policy and characterize the set of solutions to the inverse problem by using occupation measures, linear duality, and complementary slackness conditions. To avoid trivial solutions and ill-posedness, we introduce a natural linear normalization constraint. This results in an infinite-dimensional linear feasibility problem, prompting a thorough analysis of its properties. Next, we use linear function approximators and adopt a randomized approach, namely the scenario approach and related probabilistic feasibility guarantees, to derive $\varepsilon$-optimal solutions for the inverse problem. We further discuss the sample complexity for a desired approximation accuracy. Finally, we deal with the more realistic case where we only have access to a finite set of expert demonstrations and a generative model and provide bounds on the error made when working with samples.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94893"} +{"video_file": "VXJVNdmXO4_39025000.mp4", "openreview_id": "VXJVNdmXO4", "slideslive_id": 39025000, "venue": "nips2024", "title": "Data Acquisition via Experimental Design for Data Markets", "status": "Poster", "keywords": "Data valuation;experimental design;data markets;data acquisition;federated learning", "tldr": "We identify weaknesses in current data valuation methods for data markets and propose a federated data acquisition method based on experimental design that achieves state of the art performance on several real world medical datasets.", "abstract": "The acquisition of training data is crucial for machine learning applications. Data markets can increase the supply of data, particularly in data-scarce domains such as healthcare, by incentivizing potential data providers to join the market. A major challenge for a data buyer in such a market is choosing the most valuable data points from a data seller. Unlike prior work in data valuation, which assumes centralized data access, we propose a federated approach to the data acquisition problem that is inspired by linear experimental design. Our proposed data acquisition method achieves lower prediction error without requiring labeled validation data and can be optimized in a fast and federated procedure. 
The key insight of our work is that a method that directly estimates the benefit of acquiring data for test set prediction is particularly compatible with a decentralized market setting.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94889"} +{"video_file": "VXxj3XZ1X8_39025644.mp4", "openreview_id": "VXxj3XZ1X8", "slideslive_id": 39025644, "venue": "nips2024", "title": "Reproducibility of predictive networks for mouse visual cortex", "status": "Spotlight", "keywords": "reproducibility;predictive models for visual cortex;neuroscience", "tldr": "Better performing predictive models for V1 might be less consistent that the older generation. Step towards solving it: pruning and adaptive regularisation.", "abstract": "Deep predictive models of neuronal activity have recently enabled several new discoveries about the selectivity and invariance of neurons in the visual cortex. These models learn a shared set of nonlinear basis functions, which are linearly combined via a learned weight vector to represent a neuron's function. Such weight vectors, which can be thought as embeddings of neuronal function, have been proposed to define functional cell types via unsupervised clustering. However, as deep models are usually highly overparameterized, the learning problem is unlikely to have a unique solution, which raises the question if such embeddings can be used in a meaningful way for downstream analysis. In this paper, we investigate how stable neuronal embeddings are with respect to changes in model architecture and initialization. We find that $L_1$ regularization to be an important ingredient for structured embeddings and develop an adaptive regularization that adjusts the strength of regularization per neuron.\nThis regularization improves both predictive performance and how consistently neuronal embeddings cluster across model fits compared to uniform regularization. To overcome overparametrization, we propose an iterative feature pruning strategy which reduces the dimensionality of performance-optimized models by half without loss of performance and improves the consistency of neuronal embeddings with respect to clustering neurons. Our results suggest that to achieve an objective taxonomy of cell types or a compact representation of the functional landscape, we need novel architectures or learning techniques that improve identifiability. The code is available https://github.com/pollytur/readout_reproducibility.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/94888"} +{"video_file": "Vhh7ONtfvV_39027194.mp4", "openreview_id": "Vhh7ONtfvV", "slideslive_id": 39027194, "venue": "nips2024", "title": "Decomposing and Interpreting Image Representations via Text in ViTs Beyond CLIP", "status": "Poster", "keywords": "vision;interpretability;explainability", "tldr": "We show how to do representation decomposition and interpretation if your ViT != CLIP", "abstract": "Recent work has explored how individual components of the CLIP-ViT model contribute to the final representation by leveraging the shared image-text representation space of CLIP. These components, such as attention heads and MLPs, have been shown to capture distinct image features like shape, color or texture. However, understanding the role of these components in arbitrary vision transformers (ViTs) is challenging. To this end, we introduce a general framework which can identify the roles of various components in ViTs beyond CLIP. 
Specifically, we (a) automate the decomposition of the final representation into contributions from different model components, and (b) linearly map these contributions to CLIP space to interpret them via text. Additionally, we introduce a novel scoring function to rank components by their importance with respect to specific features. Applying our framework to various ViT variants (e.g. DeiT, DINO, DINOv2, Swin, MaxViT), we gain insights into the roles of different components concerning particular image features. These insights facilitate applications such as image retrieval using text descriptions or reference images, visualizing token importance heatmaps, and mitigating spurious correlations. We release our code to reproduce the experiments in the paper.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/94881"} +{"video_file": "Vi8AepAXGy_39028019.mp4", "openreview_id": "Vi8AepAXGy", "slideslive_id": 39028019, "venue": "nips2024", "title": "Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs", "status": "Oral", "keywords": "Multimodal LLM;Visual Representation Learning;Evaluation Protocol;Data Mix;Open Science", "tldr": "Cambrian-1 is a vision-centric study of MLLM design\u2014spanning visual representation choice, connector design, instruction tuning, and benchmarking.", "abstract": "We introduce Cambrian-1, a family of multimodal LLMs (MLLMs) designed with a vision-centric approach. While stronger language models can enhance multimodal capabilities, the design choices for vision components are often insufficiently explored and disconnected from visual representation learning research. This gap hinders accurate sensory grounding in real-world scenarios. Our study uses LLMs and visual instruction tuning as an interface to evaluate various visual representations, offering new insights into different models and architectures\u2014self-supervised, strongly supervised, or combinations thereof\u2014based on experiments with over 15 vision models. We critically examine existing MLLM benchmarks, addressing the difficulties involved in consolidating and interpreting results from various tasks. To further improve visual grounding, we propose spatial vision aggregator (SVA), a dynamic and spatially-aware connector that integrates vision features with LLMs while reducing the number of tokens. Additionally, we discuss the curation of high-quality visual instruction-tuning data from publicly available sources, emphasizing the importance of distribution balancing. Collectively, Cambrian-1 not only achieves state-of-the-art performances but also serves as a comprehensive, open cookbook for instruction-tuned MLLMs. We provide model weights, code, supporting tools, datasets, and detailed instruction-tuning and evaluation recipes. 
We hope our release will inspire and accelerate advancements in multimodal systems and visual representation learning.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94880"} +{"video_file": "VikufBLOW1_39027768.mp4", "openreview_id": "VikufBLOW1", "slideslive_id": 39027768, "venue": "nips2024", "title": "Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach", "status": "Poster", "keywords": "Visual entity recognition;generative models;multimodal LLM", "tldr": "The paper introduces a method to create a better dataset for visual entity recognition using a multimodal language model for label verification and metadata generation.", "abstract": "Web-scale visual entity recognition, the task of associating images with their corresponding entities within vast knowledge bases like Wikipedia, presents significant challenges due to the lack of clean, large-scale training data. In this paper, we propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation. Instead of relying on the multimodal LLM to directly annotate data, which we found to be suboptimal, we prompt it to reason about potential candidate entity labels by accessing additional contextually relevant information (such as Wikipedia), resulting in more accurate annotations. We further use the multimodal LLM to enrich the dataset by generating question-answer pairs and a grounded fine-grained textual description (referred to as \"rationale\") that explains the connection between images and their assigned entities. Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks (e.g. +6.9% improvement in OVEN entity task), underscoring the importance of high-quality training data in this domain.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94878"} +{"video_file": "Vn0FWRImra_39027477.mp4", "openreview_id": "Vn0FWRImra", "slideslive_id": 39027477, "venue": "nips2024", "title": "Nearly Minimax Optimal Submodular Maximization with Bandit Feedback", "status": "Poster", "keywords": "Bandits;Submodular optimization;Minimax optimal", "tldr": "We provide the first minimax regret lower bound for the online submodular maximization with stochastic bandit feedback, and an upper bound matching it in terms of time T and number of arms n.", "abstract": "We consider maximizing an unknown monotonic, submodular set function $f : 2^{[n]} \\to [0,1]$ with cardinality constraint under stochastic bandit feedback. At each time $t = 1, \\dots, T$ the learner chooses a set $S_t \\subset [n]$ with $|S_t| \\leq k$ and receives reward $f(S_t) + \\eta_t$ where $\\eta_t$ is mean-zero sub-Gaussian noise. The objective is to minimize the learner's regret with respect to an approximation of the maximum $f(S^*)$ with $|S^*| = k$, obtained through robust greedy maximization of $f$. To date, the best regret bound in the literature scales as $k n^{1/3} T^{2/3}$. And by trivially treating every set as a unique arm one deduces that $\\sqrt{\\binom{n}{k} T}$ is also achievable using standard multi-armed bandit algorithms.
In this work, we establish the first minimax lower bound for this setting that scales like $\\tilde{\\Omega}(\\min_{L \\leq k}(L^{1/3} n^{1/3} T^{2/3} + \\sqrt{\\binom{n}{k-L} T}))$. For a slightly restricted algorithm class, we prove a stronger regret lower bound of $\\tilde{\\Omega}(\\min_{L \\leq k}(L n^{1/3} T^{2/3} + \\sqrt{\\binom{n}{k-L} T}))$. Moreover, we propose an algorithm Sub-UCB that achieves regret $\\tilde{O}(\\min_{L \\leq k}(L n^{1/3} T^{2/3} + \\sqrt{\\binom{n}{k-L} T}))$ capable of matching the lower bound on regret for the restricted class up to logarithmic factors.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/94877"} +{"video_file": "Vq2kzpig8v_39024877.mp4", "openreview_id": "Vq2kzpig8v", "slideslive_id": 39024877, "venue": "nips2024", "title": "Reciprocal Reward Influence Encourages Cooperation From Self-Interested Agents", "status": "Poster", "keywords": "cooperation;reinforcement learning;opponent shaping;multi-agent reinforcement learning", "tldr": "We introduce an intrinsic reciprocal reward that encourages an agent to reciprocate another's influence on its own expected return, and show that resulting policies promote cooperative outcomes in sequential social dilemmas.", "abstract": "Cooperation between self-interested individuals is a widespread phenomenon in the natural world, but remains elusive in interactions between artificially intelligent agents. Instead, na\u00efve reinforcement learning algorithms typically converge to Pareto-dominated outcomes in even the simplest of social dilemmas. An emerging literature on opponent shaping has demonstrated the ability to reach prosocial outcomes by influencing the learning of other agents. However, such methods differentiate through the learning step of other agents or optimize for meta-game dynamics, which rely on privileged access to opponents' learning algorithms or exponential sample complexity, respectively. To provide a learning rule-agnostic and sample-efficient alternative, we introduce Reciprocators, reinforcement learning agents which are intrinsically motivated to reciprocate the influence of opponents' actions on their returns. This approach seeks to modify other agents' $Q$-values by increasing their return following beneficial actions (with respect to the Reciprocator) and decreasing it after detrimental actions, guiding them towards mutually beneficial actions without directly differentiating through a model of their policy. We show that Reciprocators can be used to promote cooperation in temporally extended social dilemmas during simultaneous learning.
Our code is available at https://github.com/johnlyzhou/reciprocator/.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94874"} +{"video_file": "VqFz7iTGcl_39028664.mp4", "openreview_id": "VqFz7iTGcl", "slideslive_id": 39028664, "venue": "nips2024", "title": "When is an Embedding Model More Promising than Another?", "status": "Poster", "keywords": "embedders;embeddings;molecules;nlp;llm;foundation models;evaluation;unsupervised;representation learning;information theory", "tldr": "We provide theoretical foundations for comparing embedding models and propose a tractable approach to compare them, and show that this approach strongly correlates with downstream tasks performances.", "abstract": "Embedders play a central role in machine learning, projecting any object into numerical representations that can, in turn, be leveraged to perform various downstream tasks. The evaluation of embedding models typically depends on domain-specific empirical approaches utilizing downstream tasks, primarily because of the lack of a standardized framework for comparison. However, acquiring adequately large and representative datasets for conducting these assessments is not always viable and can prove to be prohibitively expensive and time-consuming. In this paper, we present a unified approach to evaluate embedders. First, we establish theoretical foundations for comparing embedding models, drawing upon the concepts of sufficiency and informativeness. We then leverage these concepts to devise a tractable comparison criterion (information sufficiency), leading to a task-agnostic and self-supervised ranking procedure. We demonstrate experimentally that our approach aligns closely with the capability of embedding models to facilitate various downstream tasks in both natural language processing and molecular biology. This effectively offers practitioners a valuable tool for prioritizing model trials.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/94873"} +{"video_file": "VqkAKQibpq_39027411.mp4", "openreview_id": "VqkAKQibpq", "slideslive_id": 39027411, "venue": "nips2024", "title": "SGLang: Efficient Execution of Structured Language Model Programs", "status": "Poster", "keywords": "large language models; inference optimizations; KV cache; programming systems", "tldr": "We introduce a system to simplify the programming and accelerate the execution of complex structured language model programs.", "abstract": "Large language models (LLMs) are increasingly used for complex tasks that require multiple generation calls, advanced prompting techniques, control flow, and structured inputs/outputs. However, efficient systems are lacking for programming and executing these applications. We introduce SGLang, a system for efficient execution of complex language model programs. SGLang consists of a frontend language and a runtime. The frontend simplifies programming with primitives for generation and parallelism control. The runtime accelerates execution with novel optimizations like RadixAttention for KV cache reuse and compressed finite state machines for faster structured output decoding. Experiments show that SGLang achieves up to 6.4\u00d7 higher throughput compared to state-of-the-art inference systems on various large language and multi-modal models on tasks including agent control, logical reasoning, few-shot learning benchmarks, JSON decoding, retrieval-augmented generation pipelines, and multi-turn chat.
The code is publicly available at https://github.com/sgl-project/sglang.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/94872"} +{"video_file": "VqxODXhU4k_39024715.mp4", "openreview_id": "VqxODXhU4k", "slideslive_id": 39024715, "venue": "nips2024", "title": "Nonparametric Instrumental Variable Regression through Stochastic Approximate Gradients", "status": "Poster", "keywords": "Nonparametric Instrumental Variables;Stochastic Gradients;RKHS;Binary response;Deep Learning;Causality", "tldr": "We address the problem of nonparametric instrumental variable regression using stochastic gradient descent in a function space.", "abstract": "Instrumental variables (IVs) provide a powerful strategy for identifying causal effects in the presence of unobservable confounders. Within the nonparametric setting (NPIV), recent methods have been based on nonlinear generalizations of Two-Stage Least Squares and on minimax formulations derived from moment conditions or duality. In a novel direction, we show how to formulate a functional stochastic gradient descent algorithm to tackle NPIV regression by directly minimizing the populational risk. We provide theoretical support in the form of bounds on the excess risk, and conduct numerical experiments showcasing our method's superior stability and competitive performance relative to current state-of-the-art alternatives. This algorithm enables flexible estimator choices, such as neural networks or kernel based methods, as well as non-quadratic loss functions, which may be suitable for structural equations beyond the setting of continuous outcomes and additive noise. Finally, we demonstrate this flexibility of our framework by presenting how it naturally addresses the important case of binary outcomes, which has received far less attention by recent developments in the NPIV literature.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94871"} +{"video_file": "VrVx83BkQX_39024817.mp4", "openreview_id": "VrVx83BkQX", "slideslive_id": 39024817, "venue": "nips2024", "title": "Stepwise Alignment for Constrained Language Model Policy Optimization", "status": "Poster", "keywords": "AI Alignment;Large Language Models;AI Safety;Safe RL", "tldr": "This paper proposes a stepwise approach for aligning large language models under safety constraints.", "abstract": "Safety and trustworthiness are indispensable requirements for real-world applications of AI systems using large language models (LLMs). This paper formulates human value alignment as an optimization problem of the language model policy to maximize reward under a safety constraint, and then proposes an algorithm, Stepwise Alignment for Constrained Policy Optimization (SACPO). One key idea behind SACPO, supported by theory, is that the optimal policy incorporating reward and safety can be directly obtained from a reward-aligned policy. Building on this key idea, SACPO aligns LLMs step-wise with each metric while leveraging simple yet powerful alignment algorithms such as direct preference optimization (DPO). SACPO offers several advantages, including simplicity, stability, computational efficiency, and flexibility of algorithms and datasets. Under mild assumptions, our theoretical analysis provides the upper bounds on optimality and safety constraint violation. 
Our experimental results show that SACPO can fine-tune Alpaca-7B better than the state-of-the-art method in terms of both helpfulness and harmlessness.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94870"} +{"video_file": "VwUTz2pOnD_39024962.mp4", "openreview_id": "VwUTz2pOnD", "slideslive_id": 39024962, "venue": "nips2024", "title": "Kernel-Based Function Approximation for Average Reward Reinforcement Learning: An Optimist No-Regret Algorithm", "status": "Poster", "keywords": "Reinforcement learning;infinite horizon average reward setting;no-regret algorithm;kernel-based model", "tldr": "We propose an optimistic kernel-based RL algorithm for the infinite horizon average reward setting and prove no-regret performance guarantees.", "abstract": "Reinforcement Learning (RL) utilizing kernel ridge regression to predict the expected value function represents a powerful method with great representational capacity. This setting is a highly versatile framework amenable to analytical results. We consider kernel-based function approximation for RL in the infinite horizon average reward setting, also referred to as the undiscounted setting. We propose an optimistic algorithm, similar to acquisition function based algorithms in the special case of bandits. We establish novel no-regret performance guarantees for our algorithm, under kernel-based modelling assumptions. Additionally, we derive a novel confidence interval for the kernel-based prediction of the expected value function, applicable across various RL problems.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94866"} +{"video_file": "W0okTgsPvM_39026142.mp4", "openreview_id": "W0okTgsPvM", "slideslive_id": 39026142, "venue": "nips2024", "title": "Multimodal Task Vectors Enable Many-Shot Multimodal In-Context Learning", "status": "Poster", "keywords": "Large Multimodal Models;Vision-and-Language;In-Context Learning", "tldr": "We demonstrate the existence of multimodal task vectors and leverage them for many-shot ICL in LMMs.", "abstract": "The recent success of interleaved Large Multimodal Models (LMMs) in few-shot learning suggests that in-context learning (ICL) with many examples can be promising for learning new tasks. However, this many-shot multimodal ICL setting has one crucial problem: it is fundamentally limited by the model's context length set at pretraining. The problem is especially prominent in the multimodal domain, which processes both text and images, requiring additional tokens. This motivates the need for a multimodal method to compress many shots into fewer tokens without finetuning. In this work, we enable LMMs to perform multimodal, many-shot in-context learning by leveraging Multimodal Task Vectors (MTV)---compact implicit representations of in-context examples compressed in the model's attention heads. Specifically, we first demonstrate the existence of such MTV in LMMs and then leverage these extracted MTV to enable many-shot in-context learning for various vision-and-language tasks. Our experiments suggest that MTV can scale in performance with the number of compressed shots and generalize to similar out-of-domain tasks without additional context length for inference.
Code: https://github.com/Brandon3964/MultiModal-Task-Vector", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94861"} +{"video_file": "W0wq9njGHi_39025165.mp4", "openreview_id": "W0wq9njGHi", "slideslive_id": 39025165, "venue": "nips2024", "title": "Kaleidoscope: Learnable Masks for Heterogeneous Multi-agent Reinforcement Learning", "status": "Poster", "keywords": "multi-agent reinforcement learning; parameter sharing", "tldr": "A learnable partial parameter sharing mechanism for multi-agent reinforcement learning", "abstract": "In multi-agent reinforcement learning (MARL), parameter sharing is commonly employed to enhance sample efficiency. However, the popular approach of full parameter sharing often leads to homogeneous policies among agents, potentially limiting the performance benefits that could be derived from policy diversity. To address this critical limitation, we introduce \\emph{Kaleidoscope}, a novel adaptive partial parameter sharing scheme that fosters policy heterogeneity while still maintaining high sample efficiency. Specifically, Kaleidoscope maintains one set of common parameters alongside multiple sets of distinct, learnable masks for different agents, dictating the sharing of parameters. It promotes diversity among policy networks by encouraging discrepancy among these masks, without sacrificing the efficiencies of parameter sharing. This design allows Kaleidoscope to dynamically balance high sample efficiency with a broad policy representational capacity, effectively bridging the gap between full parameter sharing and non-parameter sharing across various environments. We further extend Kaleidoscope to critic ensembles in the context of actor-critic algorithms, which could help improve value estimations. Our empirical evaluations across extensive environments, including multi-agent particle environment, multi-agent MuJoCo and StarCraft multi-agent challenge v2, demonstrate the superior performance of Kaleidoscope compared with existing parameter sharing approaches, showcasing its potential for performance enhancement in MARL. The code is publicly available at \\url{https://github.com/LXXXXR/Kaleidoscope}.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94860"} +{"video_file": "W4pIBQ7bAI_39026230.mp4", "openreview_id": "W4pIBQ7bAI", "slideslive_id": 39026230, "venue": "nips2024", "title": "MediQ: Question-Asking LLMs and a Benchmark for Reliable Interactive Clinical Reasoning", "status": "Poster", "keywords": "Clinical Reasoning;Question Asking;Information-Seeking;Adaptive Interactions;LLM Abstention", "tldr": "This paper establishes a novel framework for interactive information seeking to enhance reliable medical reasoning abilities in LLMs.", "abstract": "Users typically engage with LLMs interactively, yet most existing benchmarks evaluate them in a static, single-turn format, posing reliability concerns in interactive scenarios. We identify a key obstacle towards reliability: LLMs are trained to answer any question, even with incomplete context or insufficient knowledge. In this paper, we propose to change the static paradigm to an interactive one, develop systems that proactively ask questions to gather more information and respond reliably, and introduce a benchmark\u2014MEDIQ\u2014to evaluate question-asking ability in LLMs.
MEDIQ simulates clinical interactions consisting of a Patient System and an adaptive Expert System; with potentially incomplete initial information, the Expert refrains from making diagnostic decisions when unconfident, and instead elicits missing details via follow-up questions. We provide a pipeline to convert single-turn medical benchmarks into an interactive format. Our results show that directly prompting state-of-the-art LLMs to ask questions degrades performance, indicating that adapting LLMs to proactive information-seeking settings is nontrivial. We experiment with abstention strategies to better estimate model confidence and decide when to ask questions, improving diagnostic accuracy by 22.3%; however, performance still lags compared to an (unrealistic in practice) upper bound with complete information upfront. Further analyses show improved interactive performance with filtering irrelevant contexts and reformatting conversations. Overall, we introduce a novel problem towards LLM reliability, an interactive MEDIQ benchmark and a novel question-asking system, and highlight directions to extend LLMs\u2019 information-seeking abilities in critical domains.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94856"} +{"video_file": "W5U3XB1C11_39025412.mp4", "openreview_id": "W5U3XB1C11", "slideslive_id": 39025412, "venue": "nips2024", "title": "Relational Verification Leaps Forward with RABBit", "status": "Poster", "keywords": "Neural Network Verification;Relational Verification;Robustness;UAP verification;Optimization", "tldr": "Scalable branch and bound algorithm for relational DNN verification with cross-executional bound refinement and branching over multiple executions.", "abstract": "We propose RABBit, a Branch-and-Bound-based verifier for verifying relational properties defined over Deep Neural Networks, such as robustness against universal adversarial perturbations (UAP). Existing SOTA complete $L_\\infty$-robustness verifiers cannot reason about dependencies between multiple executions and, as a result, are imprecise for relational verification. In contrast, existing SOTA relational verifiers only apply a single bounding step and do not utilize any branching strategies to refine the obtained bounds, thus producing imprecise results. We develop the first scalable Branch-and-Bound-based relational verifier, RABBit, which efficiently combines branching over multiple executions with cross-executional bound refinement to utilize relational constraints, gaining substantial precision over SOTA baselines on a wide range of datasets and networks.
Our code is at https://github.com/uiuc-focal-lab/RABBit.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/94855"} +{"video_file": "WAiqLGfqX6_39024804.mp4", "openreview_id": "WAiqLGfqX6", "slideslive_id": 39024804, "venue": "nips2024", "title": "Derivative-enhanced Deep Operator Network", "status": "Poster", "keywords": "Neural operators;Operator learning;Derivative learning;DeepONet;Dimensionality reduction;Adjoint method", "tldr": "In this work we propose a derivative-enhanced deep operator network (DE-DeepONet), which leverages the derivative information to enhance the prediction accuracy, and provide a more accurate approximation of the derivatives.", "abstract": "The deep operator networks (DeepONet), a class of neural operators that learn mappings between function spaces, have recently been developed as surrogate models for parametric partial differential equations (PDEs). In this work we propose a derivative-enhanced deep operator network (DE-DeepONet), which leverages derivative information to enhance the solution prediction accuracy and provides a more accurate approximation of solution-to-parameter derivatives, especially when training data are limited. DE-DeepONet explicitly incorporates linear dimension reduction of high dimensional parameter input into DeepONet to reduce training cost and adds derivative loss in the loss function to reduce the number of required parameter-solution pairs. We further demonstrate that the use of derivative loss can be extended to enhance other neural operators, such as the Fourier neural operator (FNO). Numerical experiments validate the effectiveness of our approach.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/94851"} +{"video_file": "WBLPlszJI5_39024472.mp4", "openreview_id": "WBLPlszJI5", "slideslive_id": 39024472, "venue": "nips2024", "title": "Fine-Tuning Personalization in Federated Learning to Mitigate Adversarial Clients", "status": "Poster", "keywords": "Personalized Federated Learning;Optimization;Generalization;Byzantine Robustness", "tldr": "Full collaboration can be suboptimal in the presence of heterogeneity and Byzantine adversaries, we shed light on when personalization can improve robustness.", "abstract": "Federated learning (FL) is an appealing paradigm that allows a group of machines (a.k.a. clients) to learn collectively while keeping their data local. However, due to the heterogeneity between the clients\u2019 data distributions, the model obtained through the use of FL algorithms may perform poorly on some client\u2019s data. Personalization addresses this issue by enabling each client to have a different model tailored to their own data while simultaneously benefiting from the other clients\u2019 data. We consider an FL setting where some clients can be adversarial, and we derive conditions under which full collaboration fails. Specifically, we analyze the generalization performance of an interpolated personalized FL framework in the presence of adversarial clients, and we precisely characterize situations when full collaboration performs strictly worse than fine-tuned personalization. Our analysis determines how much we should scale down the level of collaboration, according to data heterogeneity and the tolerable fraction of adversarial clients. 
We support our findings with empirical results on mean estimation and binary classification problems, considering synthetic and benchmark image classification datasets", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94850"} +{"video_file": "WCc440cUhX_39027829.mp4", "openreview_id": "WCc440cUhX", "slideslive_id": 39027829, "venue": "nips2024", "title": "Understanding Transformers via N-Gram Statistics", "status": "Poster", "keywords": "transformers;large-language models;ngrams;curriculum learning;interpretability", "tldr": "Using simple N-gram statistics, we gain insights into how transformer-based LLMs make predictions along with their training dynamics.", "abstract": "Transformer based large-language models (LLMs) display extreme proficiency with language yet a precise understanding of how they work remains elusive. One way of demystifying transformer predictions would be to describe how they depend on their context in terms of simple template functions. This paper takes a first step in this direction by considering families of functions (i.e. rules) formed out of simple N-gram based statistics of the training data. By studying how well these rulesets approximate transformer predictions, we obtain a variety of novel discoveries: a simple method to detect overfitting during training without using a holdout set, a quantitative measure of how transformers progress from learning simple to more complex statistical rules over the course of training, a model-variance criterion governing when transformer predictions tend to be described by N-gram rules, and insights into how well transformers can be approximated by N-gram rulesets in the limit where these rulesets become increasingly complex. In this latter direction, we find that for 79% and 68% of LLM next-token distributions on TinyStories and Wikipedia, respectively, their top-1 predictions agree with those provided by our N-gram rulesets.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94849"} +{"video_file": "WCnJmb7cv1_39026289.mp4", "openreview_id": "WCnJmb7cv1", "slideslive_id": 39026289, "venue": "nips2024", "title": "Learning to Assist Humans without Inferring Rewards", "status": "Poster", "keywords": "Human-AI Collaboration;Unsupervised Reinforcement Learning", "tldr": "We propose a scalable algorithm for assisting humans without inferring their objectives.", "abstract": "Assistive agents should make humans' lives easier. Classically, such assistance is studied through the lens of inverse reinforcement learning, where an assistive agent (e.g., a chatbot, a robot) infers a human's intention and then selects actions to help the human reach that goal. This approach requires inferring intentions, which can be difficult in high-dimensional settings. We build upon prior work that studies assistance through the lens of empowerment: an assistive agent aims to maximize the influence of the human's actions such that they exert a greater control over the environmental outcomes and can solve tasks in fewer steps. We lift the major limitation of prior work in this area\u2014scalability to high-dimensional settings\u2014with contrastive successor representations. We formally prove that these representations estimate a similar notion of empowerment to that studied by prior work and provide a ready-made mechanism for optimizing it. Empirically, our proposed method outperforms prior methods on synthetic benchmarks, and scales to Overcooked, a cooperative game setting. 
Theoretically, our work connects ideas from information theory, neuroscience, and reinforcement learning, and charts a path for representations to play a critical role in solving assistive problems. Our code is available at https://github.com/vivekmyers/empowerment_successor_representations.", "primary_area": "human-AI_interaction", "site": "https://neurips.cc/virtual/2024/poster/94848"} +{"video_file": "WEf2LT8NtY_39027205.mp4", "openreview_id": "WEf2LT8NtY", "slideslive_id": 39027205, "venue": "nips2024", "title": "Adversarially Robust Decision Transformer", "status": "Poster", "keywords": "Offline Reinforcement Learning;Reinforcement Learning via Supervised Learning;Decision Transformer;Robust Adversarial Reinforcement Learning", "tldr": "This paper investigate the adversarial robustness of Decision Transformer (DT), and develop a new algorithm to transform the original returns-to-go to in-sample minimax returns-to-go to better address the robustness of DT in adversarial settings.", "abstract": "Decision Transformer (DT), as one of the representative Reinforcement Learning via Supervised Learning (RvS) methods, has achieved strong performance in offline learning tasks by leveraging the powerful Transformer architecture for sequential decision-making. However, in adversarial environments, these methods can be non-robust, since the return is dependent on the strategies of both the decision-maker and adversary. Training a probabilistic model conditioned on observed return to predict action can fail to generalize, as the trajectories that achieve a return in the dataset might have done so due to a suboptimal behavior adversary. To address this, we propose a worst-case-aware RvS algorithm, the Adversarially Robust Decision Transformer (ARDT), which learns and conditions the policy on in-sample minimax returns-to-go. ARDT aligns the target return with the worst-case return learned through minimax expectile regression, thereby enhancing robustness against powerful test-time adversaries. In experiments conducted on sequential games with full data coverage, ARDT can generate a maximin (Nash Equilibrium) strategy, the solution with the largest adversarial robustness. In large-scale sequential games and continuous adversarial RL environments with partial data coverage, ARDT demonstrates significantly superior robustness to powerful test-time adversaries and attains higher worst-case returns compared to contemporary DT methods.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94846"} +{"video_file": "WEoOreP0n5_39028407.mp4", "openreview_id": "WEoOreP0n5", "slideslive_id": 39028407, "venue": "nips2024", "title": "Efficient Reinforcement Learning by Discovering Neural Pathways", "status": "Poster", "keywords": "Energy Efficient AI;Parameter Efficient;Neural Pathways;Continuous Control;Online Reinforcement Learning;Offline Reinforcement Learning;Multitask Reinforcement Learning", "tldr": "To improve energy efficiency and reduce the carbon footprint, we propose Neural Pathway to efficiently use the network parameter space for reinforcement learning.", "abstract": "Reinforcement learning (RL) algorithms have been very successful at tackling complex control problems, such as AlphaGo or fusion control. However, current research mainly emphasizes solution quality, often achieved by using large models trained on large amounts of data, and does not account for the financial, environmental, and societal costs associated with developing and deploying such models. 
Modern neural networks are often overparameterized and a significant number of parameters can be pruned without meaningful loss in performance, resulting in more efficient use of the model's capacity (cf. the lottery ticket hypothesis). We present a methodology for identifying sub-networks within a larger network in reinforcement learning (RL). We call such sub-networks neural pathways. We show empirically that even very small learned sub-networks, using less than 5% of the large network's parameters, can provide very good quality solutions. We also demonstrate the training of multiple pathways within the same networks in a multitask setup, where each pathway is encouraged to tackle a separate task. We evaluate empirically our approach on several continuous control tasks, in both online and offline training.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94845"} +{"video_file": "WEs4WMzndY_39025993.mp4", "openreview_id": "WEs4WMzndY", "slideslive_id": 39025993, "venue": "nips2024", "title": "Annealed Multiple Choice Learning: Overcoming limitations of Winner-takes-all with annealing", "status": "Poster", "keywords": "multiple choice learning;winner-takes-all;deterministic annealing;uncertainty quantification", "tldr": "We introduce Annealed Multiple Choice Learning (aMCL) which combines deterministic annealing with MCL.", "abstract": "We introduce Annealed Multiple Choice Learning (aMCL) which combines simulated annealing with MCL. MCL is a learning framework handling ambiguous tasks by predicting a small set of plausible hypotheses. These hypotheses are trained using the Winner-takes-all (WTA) scheme, which promotes the diversity of the predictions. However, this scheme may converge toward an arbitrarily suboptimal local minimum, due to the greedy nature of WTA. We overcome this limitation using annealing, which enhances the exploration of the hypothesis space during training. We leverage insights from statistical physics and information theory to provide a detailed description of the model training trajectory. Additionally, we validate our algorithm by extensive experiments on synthetic datasets, on the standard UCI benchmark, and on speech separation.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/94844"} +{"video_file": "WH5blx5tZ1_39026876.mp4", "openreview_id": "WH5blx5tZ1", "slideslive_id": 39026876, "venue": "nips2024", "title": "Large Scale Transfer Learning for Tabular Data via Language Modeling", "status": "Poster", "keywords": "tabular;foundation model;language model", "tldr": "We introduce a new model and training dataset for transfer learning on tabular data that achieves strong zero- and few-shot accuracy across 300 tabular benchmark datasets", "abstract": "Tabular data \u2013 structured, heterogeneous, spreadsheet-style data with rows and columns \u2013 is widely used in practice across many domains. However, while recent foundation models have reduced the need for developing task-specific datasets and predictors in domains such as language modeling and computer vision, this transfer learning paradigm has not had similar impact in the tabular domain. In this work, we seek to narrow this gap and present TABULA-8B, a language model for tabular prediction. We define a process for extracting a large, high-quality training dataset from the TabLib corpus, proposing methods for tabular data filtering and quality control.
Using the resulting dataset, which comprises over 2.1B rows from 4.2M unique tables, we fine-tune a Llama 3-8B large language model (LLM) for tabular data prediction (classification and binned regression) using a novel packing and attention scheme for tabular prediction. Through evaluation across a test suite of 329 datasets, we find that TABULA-8B has zero-shot accuracy on unseen tables that is over 15 percentage points (pp) higher than random guessing, a feat that is not possible with existing state-of-the-art tabular prediction models (e.g. XGBoost, TabPFN). In the few-shot setting (1-32 shots), without any fine-tuning on the target datasets, TABULA-8B is 5-15 pp more accurate than XGBoost and TabPFN models that are explicitly trained on equal, or even up to 16\u00d7 more data. We release our model, code, and data along with the publication of this paper.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94842"} +{"video_file": "WI2VpcBdnd_39028293.mp4", "openreview_id": "WI2VpcBdnd", "slideslive_id": 39028293, "venue": "nips2024", "title": "Provable and Efficient Dataset Distillation for Kernel Ridge Regression", "status": "Poster", "keywords": "Dataset Distillation;Kernel Ridge Regression", "tldr": "For dataset distillation of kernel ridge regression, we show theoretically that one data point per class is necessary and sufficient to recover the original model's performance in many settings.", "abstract": "Deep learning models are now trained on increasingly larger datasets, making it crucial to reduce computational costs and improve data quality. Dataset distillation aims to distill a large dataset into a small synthesized dataset such that models trained on it can achieve similar performance to those trained on the original dataset. While there have been many empirical efforts to improve dataset distillation algorithms, a thorough theoretical analysis and provable, efficient algorithms are still lacking. In this paper, by focusing on dataset distillation for kernel ridge regression (KRR), we show that one data point per class is already necessary and sufficient to recover the original model's performance in many settings. For linear ridge regression and KRR with surjective feature mappings, we provide necessary and sufficient conditions for the distilled dataset to recover the original model's parameters. For KRR with injective feature mappings of deep neural networks, we show that while one data point per class is not sufficient in general, $k+1$ data points can be sufficient for deep linear neural networks, where $k$ is the number of classes. Our theoretical results enable directly constructing analytical solutions for distilled datasets, resulting in a provable and efficient dataset distillation algorithm for KRR. We verify our theory experimentally and show that our algorithm outperforms previous work such as KIP while being significantly more efficient, e.g. 15840\u00d7 faster on CIFAR-100.
Our code is available at \\href{https://github.com/Trustworthy-ML-Lab/provable-efficient-dataset-distill-KRR}{GitHub}.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94841"} +{"video_file": "WILLwyVmP8_39025553.mp4", "openreview_id": "WILLwyVmP8", "slideslive_id": 39025553, "venue": "nips2024", "title": "Interpretable Concept-Based Memory Reasoning", "status": "Poster", "keywords": "Concept-based models;explainable AI;neurosymbolic", "tldr": "We introduce Concept-based Memory Reasoner (CMR), a novel CBM that uses a neural selection mechanism over learnable logic rules and symbolic evaluation to enable human-understandable and provably-verifiable task predictions.", "abstract": "The lack of transparency in the decision-making processes of deep learning systems presents a significant challenge in modern artificial intelligence (AI), as it impairs users\u2019 ability to rely on and verify these systems. To address this challenge, Concept Bottleneck Models (CBMs) have made significant progress by incorporating human-interpretable concepts into deep learning architectures. This approach allows predictions to be traced back to specific concept patterns that users can understand and potentially intervene on. However, existing CBMs\u2019 task predictors are not fully interpretable, preventing a thorough analysis and any form of formal verification of their decision-making process prior to deployment, thereby raising significant reliability concerns. To bridge this gap, we introduce Concept-based Memory Reasoner (CMR), a novel CBM designed to provide a human-understandable and provably-verifiable task prediction process. Our approach is to model each task prediction as a neural selection mechanism over a memory of learnable logic rules, followed by a symbolic evaluation of the selected rule. The presence of an explicit memory and the symbolic evaluation allow domain experts to inspect and formally verify the validity of certain global properties of interest for the task prediction process. Experimental results demonstrate that CMR achieves better accuracy-interpretability trade-offs to state-of-the-art CBMs, discovers logic rules consistent with ground truths, allows for rule interventions, and allows pre-deployment verification.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/94840"} +{"video_file": "WJ04ZX8txM_39024854.mp4", "openreview_id": "WJ04ZX8txM", "slideslive_id": 39024854, "venue": "nips2024", "title": "Do LLMs dream of elephants (when told not to)? Latent concept association and associative memory in transformers", "status": "Poster", "keywords": "Transformer;Associative Memory;Large Language Models;Interpretability;Fact retrieval", "tldr": "We study a 1-layer transformer to understand the mechanisms of associative memory in LLMs", "abstract": "Large Language Models (LLMs) have the capacity to store and recall facts. Through experimentation with open-source models, we observe that this ability to retrieve facts can be easily manipulated by changing contexts, even without altering their factual meanings. These findings highlight that LLMs might behave like an associative memory model where certain tokens in the contexts serve as clues to retrieving facts. We mathematically explore this property by studying how transformers, the building blocks of LLMs, can complete such memory tasks. 
We study a simple latent concept association problem with a one-layer transformer and we show theoretically and empirically that the transformer gathers information using self-attention and uses the value matrix for associative memory.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/94839"} +{"video_file": "WK2KxPAMQv_39025330.mp4", "openreview_id": "WK2KxPAMQv", "slideslive_id": 39025330, "venue": "nips2024", "title": "Exploiting Representation Curvature for Boundary Detection in Time Series", "status": "Poster", "keywords": "time series;representation;boundary detection", "tldr": "We propose RECURVE that exploits representation curvature for time-series boundary detection.", "abstract": "Boundaries are the timestamps at which a class in a time series changes. Recently, representation-based boundary detection has gained popularity, but its emphasis on consecutive distance difference backfires, especially when the changes are gradual. In this paper, we propose a boundary detection method, RECURVE, based on a novel change metric, the curvature of a representation trajectory, to accommodate both gradual and abrupt changes. Here, a sequence of representations in the representation space is interpreted as a trajectory, and a curvature at each timestamp can be computed. Using the theory of random walk, we formally show that the mean curvature is lower near boundaries than at other points. Extensive experiments using diverse real-world time-series datasets confirm the superiority of RECURVE over state-of-the-art methods.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94837"} +{"video_file": "WPPC7FHtaM_39027115.mp4", "openreview_id": "WPPC7FHtaM", "slideslive_id": 39027115, "venue": "nips2024", "title": "IPO: Interpretable Prompt Optimization for Vision-Language Models", "status": "Poster", "keywords": "Prompt learning;Large language model;Interpretable Prompt Optimization", "tldr": "This paper introduces a simple but interpretable prompt optimizer (IPO), that utilizes large language models (LLMs) to generate textual prompts dynamically.", "abstract": "Pre-trained vision-language models like CLIP have remarkably adapted to various downstream tasks. Nonetheless, their performance heavily depends on the specificity of the input text prompts, which requires skillful prompt template engineering. Instead, current approaches to prompt optimization learn the prompts through gradient descent, where the prompts are treated as adjustable parameters. However, these methods tend to lead to overfitting of the base classes seen during training and produce prompts that are no longer understandable by humans. This paper introduces a simple but interpretable prompt optimizer (IPO), that utilizes large language models (LLMs) to generate textual prompts dynamically. We introduce a Prompt Optimization Prompt that not only guides LLMs in creating effective prompts but also stores past prompts with their performance metrics, providing rich in-context information. Additionally, we incorporate a large multimodal model (LMM) to condition on visual content by generating image descriptions, which enhance the interaction between textual and visual modalities. This allows for the creation of dataset-specific prompts that improve generalization performance, while maintaining human comprehension. 
Extensive testing across 11 datasets reveals that IPO not only improves the accuracy of existing gradient-descent-based prompt learning methods but also considerably enhances the interpretability of the generated prompts. By leveraging the strengths of LLMs, our approach ensures that the prompts remain human-understandable, thereby facilitating better transparency and oversight for vision-language models.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94834"} +{"video_file": "WPxa6OcIdg_39025326.mp4", "openreview_id": "WPxa6OcIdg", "slideslive_id": 39025326, "venue": "nips2024", "title": "Estimating Epistemic and Aleatoric Uncertainty with a Single Model", "status": "Poster", "keywords": "uncertainty estimation;diffusion models;hypernetworks", "tldr": "Efficient approach to generate aleatoric and epistemic uncertainty estimates for inverse problems.", "abstract": "Estimating and disentangling epistemic uncertainty, uncertainty that is reducible with more training data, and aleatoric uncertainty, uncertainty that is inherent to the task at hand, is critically important when applying machine learning to high-stakes applications such as medical imaging and weather forecasting. Conditional diffusion models' breakthrough ability to accurately and efficiently sample from the posterior distribution of a dataset now makes uncertainty estimation conceptually straightforward: One need only train and sample from a large ensemble of diffusion models. Unfortunately, training such an ensemble becomes computationally intractable as the complexity of the model architecture grows. In this work we introduce a new approach to ensembling, hyper-diffusion models (HyperDM), which allows one to accurately estimate both epistemic and aleatoric uncertainty with a single model. Unlike existing single-model uncertainty methods like Monte-Carlo dropout and Bayesian neural networks, HyperDM offers prediction accuracy on par with, and in some cases superior to, multi-model ensembles. Furthermore, our proposed approach scales to modern network architectures such as Attention U-Net and yields more accurate uncertainty estimates compared to existing methods. We validate our method on two distinct real-world tasks: x-ray computed tomography reconstruction and weather temperature forecasting.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94833"} +{"video_file": "WRCFuoiz1h_39028084.mp4", "openreview_id": "WRCFuoiz1h", "slideslive_id": 39028084, "venue": "nips2024", "title": "Query-Efficient Correlation Clustering with Noisy Oracle", "status": "Poster", "keywords": "Correlation Clustering; Online Learning; Pure Exploration of Multi-armed Bandits", "tldr": "We study a general correlation clustering problem with noisy weighted similarity queries, introducing two novel online learning formulations and designing efficient algorithms with theoretical guarantees.", "abstract": "We study a general clustering setting in which we have $n$ elements to be clustered, and we aim to perform as few queries as possible to an oracle that returns a noisy sample of the weighted similarity between two elements. Our setting encompasses many application domains in which the similarity function is costly to compute and inherently noisy. We introduce two novel formulations of online learning problems rooted in the paradigm of Pure Exploration in Combinatorial Multi-Armed Bandits (PE-CMAB): fixed confidence and fixed budget settings. 
For both settings, we design algorithms that combine a sampling strategy with a classic approximation algorithm for correlation clustering and study their theoretical guarantees. Our results are the first examples of polynomial-time algorithms that work for the case of PE-CMAB in which the underlying offline optimization problem is NP-hard.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94832"} +{"video_file": "WSsht66fbC_39027188.mp4", "openreview_id": "WSsht66fbC", "slideslive_id": 39027188, "venue": "nips2024", "title": "Safety through feedback in Constrained RL", "status": "Poster", "keywords": "Constrained RL;Cost Inference;Human Feedback", "tldr": "Generating safe policies in constrained RL settings through cost estimation from safety feedback", "abstract": "In safety-critical RL settings, the inclusion of an additional cost function is often favoured over the arduous task of modifying the reward function to ensure the agent's safe behaviour. However, designing or evaluating such a cost function can be prohibitively expensive. For instance, in the domain of self-driving, designing a cost function that encompasses all unsafe behaviours (e.g., aggressive lane changes, risky overtakes) is inherently complex; it must also consider all the actors present in the scene, making it expensive to evaluate. In such scenarios, the cost function can be learned from feedback collected offline in between training rounds. This feedback can be system generated or elicited from a human observing the training process. Previous approaches have not been able to scale to complex environments and are constrained to receiving feedback at the state level which can be expensive to collect. To this end, we introduce an approach that scales to more complex domains and extends beyond state-level feedback, thus, reducing the burden on the evaluator. Inferring the cost function in such settings poses challenges, particularly in assigning credit to individual states based on trajectory-level feedback. To address this, we propose a surrogate objective that transforms the problem into a state-level supervised classification task with noisy labels, which can be solved efficiently. Additionally, it is often infeasible to collect feedback for every trajectory generated by the agent, hence, two fundamental questions arise: (1) Which trajectories should be presented to the human? and (2) How many trajectories are necessary for effective learning? To address these questions, we introduce a \\textit{novelty-based sampling} mechanism that selectively involves the evaluator only when the agent encounters a \\textit{novel} trajectory, and discontinues querying once the trajectories are no longer \\textit{novel}. We showcase the efficiency of our method through experimentation on several benchmark Safety Gymnasium environments and realistic self-driving scenarios. Our method demonstrates near-optimal performance, comparable to when the cost function is known, by relying solely on trajectory-level feedback across multiple domains. This highlights both the effectiveness and scalability of our approach.
The code to replicate these results can be found at \\href{https://github.com/shshnkreddy/RLSF}{https://github.com/shshnkreddy/RLSF}", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94830"} +{"video_file": "WSu1PPi2UP_39027103.mp4", "openreview_id": "WSu1PPi2UP", "slideslive_id": 39027103, "venue": "nips2024", "title": "Embedding-Aligned Language Models", "status": "Poster", "keywords": "language models;reinforcement learning;embedding spaces", "tldr": "An RL-driven method for guiding LLMs to align with objectives defined in a latent embedding space. We demonstrate its effectiveness on surfacing content gaps.", "abstract": "We propose a novel approach for training large language models (LLMs) to adhere to objectives defined within a latent embedding space. Our method leverages reinforcement learning (RL), treating a pre-trained LLM as an environment. Our embedding-aligned guided language (EAGLE) agent is trained to iteratively steer the LLM's generation towards optimal regions of the latent embedding space, w.r.t. some predefined criterion. We demonstrate the effectiveness of the EAGLE agent using the MovieLens 25M and Amazon Review datasets to surface content gaps that satisfy latent user demand. We also demonstrate the benefit of using an optimal design of a state-dependent action set to improve EAGLE's efficiency. Our work paves the way for controlled and grounded text generation using LLMs, ensuring consistency with domain-specific knowledge and data representations.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/94829"} +{"video_file": "WTLvXdzhmP_39025118.mp4", "openreview_id": "WTLvXdzhmP", "slideslive_id": 39025118, "venue": "nips2024", "title": "Statistical Estimation in the Spiked Tensor Model via the Quantum Approximate Optimization Algorithm", "status": "Spotlight", "keywords": "quantum algorithm;statistical estimation;computational complexity;computational-statistical gap;optimization;variational quantum algorithm;quantum machine learning;statistical physics;average-case complexity", "tldr": "This paper analyzes the QAOA for the spiked tensor model, showing that while it matches classical performance at constant depths, it exhibits qualitative differences and a limited quantum advantage, indicating potential for future quantum speedups.", "abstract": "The quantum approximate optimization algorithm (QAOA) is a general-purpose algorithm for combinatorial optimization that has been a promising avenue for near-term quantum advantage. In this paper, we analyze the performance of the QAOA on the spiked tensor model, a statistical estimation problem that exhibits a large computational-statistical gap classically. We prove that the weak recovery threshold of $1$-step QAOA matches that of $1$-step tensor power iteration. Additional heuristic calculations suggest that the weak recovery threshold of $p$-step QAOA matches that of $p$-step tensor power iteration when $p$ is a fixed constant. This further implies that multi-step QAOA with tensor unfolding could achieve, but not surpass, the asymptotic classical computation threshold $\\Theta(n^{(q-2)/4})$ for spiked $q$-tensors. Meanwhile, we characterize the asymptotic overlap distribution for $p$-step QAOA, discovering an intriguing sine-Gaussian law verified through simulations. For some $p$ and $q$, the QAOA has an effective recovery threshold that is a constant factor better than tensor power iteration.
Of independent interest, our proof techniques employ the Fourier transform to handle difficult combinatorial sums, a novel approach differing from prior QAOA analyses on spin-glass models without planted structure.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94828"} +{"video_file": "WXqukapoa7_39026678.mp4", "openreview_id": "WXqukapoa7", "slideslive_id": 39026678, "venue": "nips2024", "title": "Disentangling Linear Quadratic Control with Untrusted ML Predictions", "status": "Poster", "keywords": "Linear Quadratic Control;Disentanglement;Competitive Analysis", "tldr": "Our work introduces DISC, a novel policy that merges online control with learning disentangled predictions, effectively optimizing performance in dynamic systems and maintaining robustness through competitive ratio guarantees.", "abstract": "Uncertain perturbations in dynamical systems often arise from diverse resources, represented by latent components. The predictions for these components, typically generated by \"black-box\" machine learning tools, are prone to inaccuracies. To tackle this challenge, we introduce DISC, a novel policy that learns a confidence parameter online to harness the potential of accurate predictions while also mitigating the impact of erroneous forecasts. When predictions are precise, DISC leverages this information to achieve near-optimal performance. Conversely, in the case of significant prediction errors, it still has a worst-case competitive ratio guarantee. We provide competitive ratio bounds for DISC under both linear mixing of latent variables as well as a broader class of mixing functions. Our results highlight a first-of-its-kind \"best-of-both-worlds\" integration of machine-learned predictions, thus lead to a near-optimal consistency and robustness tradeoff, which provably improves what can be obtained without learning the confidence parameter. We validate the applicability of DISC across a spectrum of practical scenarios.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94827"} +{"video_file": "WY3xgXIZUR_39024390.mp4", "openreview_id": "WY3xgXIZUR", "slideslive_id": 39024390, "venue": "nips2024", "title": "Leveraging Visual Tokens for Extended Text Contexts in Multi-Modal Learning", "status": "Poster", "keywords": "Multi-modality;In-context Learning;Vision-Language;Large Language Model", "tldr": "We present Visualized In-Context Text Processing(VisInContext), which processes long in-context text using visual tokens.", "abstract": "Training models with longer in-context lengths is a significant challenge for multimodal machine learning due to substantial GPU memory and computational costs. This exploratory study does not present state-of-the-art models; rather, it introduces an innovative method designed to increase in-context text length in multi-modality large language models (MLLMs) efficiently. We present \\ModelFullName (\\ModelName), which processes long in-context text using visual tokens. This technique significantly reduces GPU memory usage and floating point operations (FLOPs). For instance, our method expands the pre-training in-context length from 256 to 2048 tokens with fewer FLOPs for a 56 billion parameter MOE model. Experimental results demonstrate that \\ModelName enhances OCR capabilities and delivers superior performance on common downstream benchmarks for in-context few-shot evaluation. 
Additionally, \\ModelName proves effective for long context inference, achieving results comparable to full text input while maintaining computational efficiency.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94826"} +{"video_file": "Wc0vlQuoLb_39028321.mp4", "openreview_id": "Wc0vlQuoLb", "slideslive_id": 39028321, "venue": "nips2024", "title": "I Don't Know: Explicit Modeling of Uncertainty with an [IDK] Token", "status": "Poster", "keywords": "LLMs;Factuality;Uncertainty;Factual Knowledge;Pretraining", "tldr": "We propose a novel IDK objective which models uncertainty in a model's prediction as probability mass put on a special [IDK] token. Experimental results show that our method improves factual precision without significantly harming factual recall.", "abstract": "Large Language Models are known to capture real-world knowledge, allowing them to excel in many downstream tasks. Despite recent advances, these models are still prone to what are commonly known as hallucinations, causing them to emit unwanted and factually incorrect text. In this work, we propose a novel calibration method that can be used to combat hallucinations. We add a special [IDK] (\u201cI Don't Know\u201d) token to the model's vocabulary and introduce an objective function that shifts probability mass to the [IDK] token for incorrect predictions. This approach allows the model to express uncertainty in its output explicitly. We evaluate our proposed method across multiple model architectures and factual downstream tasks. We find that models trained with our method are able to express uncertainty in places where they would previously make mistakes while suffering only a small loss of encoded knowledge. We further perform extensive ablation studies of multiple variations of our approach and provide a detailed analysis of the precision-recall tradeoff of our method.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94825"} +{"video_file": "WcmqdY2AKu_39026082.mp4", "openreview_id": "WcmqdY2AKu", "slideslive_id": 39026082, "venue": "nips2024", "title": "Boosting Graph Pooling with Persistent Homology", "status": "Poster", "keywords": "graph pooling;persistent homology;graph learning", "tldr": "We propose a persistent-homology-guided graph pooling scheme, which can be flexibly integrated into extant pooling methods and achieved remarkable performance improvement.", "abstract": "Recently, there has been an emerging trend to integrate persistent homology (PH) into graph neural networks (GNNs) to enrich expressive power. However, naively plugging PH features into GNN layers always results in marginal improvement with low interpretability. In this paper, we investigate a novel mechanism for injecting global topological invariance into pooling layers using PH, motivated by the observation that filtration operation in PH naturally aligns graph pooling in a cut-off manner. In this fashion, message passing in the coarsened graph acts along persistent pooled topology, leading to improved performance. 
Experimentally, we apply our mechanism to a collection of graph pooling methods and observe consistent and substantial performance gain over several popular datasets, demonstrating its wide applicability and flexibility.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/94823"} +{"video_file": "WeoNd6PRqS_39025093.mp4", "openreview_id": "WeoNd6PRqS", "slideslive_id": 39025093, "venue": "nips2024", "title": "OMG-LLaVA: Bridging Image-level, Object-level, Pixel-level Reasoning and Understanding", "status": "Poster", "keywords": "multi-modal modeling;universal model", "tldr": "A uniform model for image-level, object-level, and pixel-level reasoning and understanding.", "abstract": "Current universal segmentation methods demonstrate strong capabilities in pixel-level image and video understanding. However, they lack reasoning abilities and cannot be controlled via text instructions. In contrast, large vision-language multimodal models exhibit powerful vision-based conversation and reasoning capabilities but lack pixel-level understanding and have difficulty accepting visual prompts for flexible user interaction. This paper proposes OMG-LLaVA, a new and elegant framework combining powerful pixel-level vision understanding with reasoning abilities. It can accept various visual and text prompts for flexible user interaction. Specifically, we use a universal segmentation method as the visual encoder, integrating image information, perception priors, and visual prompts into visual tokens provided to the LLM. The LLM is responsible for understanding the user's text instructions and providing text responses and pixel-level segmentation results based on the visual information. We propose perception prior embedding to better integrate perception priors with image features. OMG-LLaVA achieves image-level, object-level, and pixel-level reasoning and understanding in a single model, matching or surpassing the performance of specialized methods on multiple benchmarks. Rather than using LLM to connect each specialist, our work aims at end-to-end training on one encoder, one decoder, and one LLM. The code and model have been released for further research.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94820"} +{"video_file": "WfpvtH7oC1_39026196.mp4", "openreview_id": "WfpvtH7oC1", "slideslive_id": 39026196, "venue": "nips2024", "title": "Subwords as Skills: Tokenization for Sparse-Reward Reinforcement Learning", "status": "Poster", "keywords": "Reinforcement Learning;Deep Learning;Exploration;Hierarchical RL", "tldr": "Tokenization methods from NLP lead to much faster skill-extraction which can help solve very difficult sparse-reward RL tasks.", "abstract": "Exploration in sparse-reward reinforcement learning (RL) is difficult due to the need for long, coordinated sequences of actions in order to achieve any reward. Skill learning, from demonstrations or interaction, is a promising approach to address this, but skill extraction and inference are expensive for current methods. We present a novel method to extract skills from demonstrations for use in sparse-reward RL, inspired by the popular Byte-Pair Encoding (BPE) algorithm in natural language processing. With these skills, we show strong performance in a variety of tasks, 1000\n\u00d7\nacceleration for skill-extraction and 100\n\u00d7\nacceleration for policy inference. 
Given the simplicity of our method, skills extracted from 1% of the demonstrations in one task can be transferred to a new loosely related task. We also note that such a method yields a finite set of interpretable behaviors. Our code is available at https://github.com/dyunis/subwords_as_skills.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94819"} +{"video_file": "WftaVkL6G2_39025668.mp4", "openreview_id": "WftaVkL6G2", "slideslive_id": 39025668, "venue": "nips2024", "title": "Federated Learning under Periodic Client Participation and Heterogeneous Data: A New Communication-Efficient Algorithm and Analysis", "status": "Poster", "keywords": "federated learning;nonconvex optimization;cross device;heterogeneous data;periodic participation", "tldr": "We propose a new optimization algorithm for federated learning with periodic client participation, and prove that the convergence rate is independent of client heterogeneity and enjoys linear speedup and reduced communication.", "abstract": "In federated learning, it is common to assume that clients are always available to participate in training, which may not be feasible with user devices in practice. Recent works analyze federated learning under more realistic participation patterns, such as cyclic client availability or arbitrary participation. However, all such works either require strong assumptions (e.g., all clients participate almost surely within a bounded window), do not achieve linear speedup and reduced communication rounds, or are not applicable in the general non-convex setting. In this work, we focus on nonconvex optimization and consider participation patterns in which the chance of participation over a fixed window of rounds is equal among all clients, which includes cyclic client availability as a special case. Under this setting, we propose a new algorithm, named Amplified SCAFFOLD, and prove that it achieves linear speedup, reduced communication, and resilience to data heterogeneity simultaneously. In particular, for cyclic participation, our algorithm is proved to enjoy $O(\\epsilon^{-2})$ communication rounds to find an $\\epsilon$-stationary point in the non-convex stochastic setting. In contrast, the prior work under the same setting requires $O(\\kappa^2 \\epsilon^{-4})$ communication rounds, where $\\kappa$ denotes the data heterogeneity. Therefore, our algorithm significantly reduces communication rounds due to better dependency in terms of $\\epsilon$ and $\\kappa$. Our analysis relies on a fine-grained treatment of the nested dependence between client participation and errors in the control variates, which results in tighter guarantees than previous work. 
We also provide experimental results with (1) synthetic data and (2) real-world data with a large number of clients\n(\nN\n=\n250\n)\n, demonstrating the effectiveness of our algorithm under periodic client participation.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94818"} +{"video_file": "Wh9ssqlCNg_39027675.mp4", "openreview_id": "Wh9ssqlCNg", "slideslive_id": 39027675, "venue": "nips2024", "title": "Accelerating Augmentation Invariance Pretraining", "status": "Poster", "keywords": "Self-supervised learning;Vision Transformer;Accelerating training", "tldr": "This paper explores accelerating self-supervised learning pretraining by leveraging the features of Vision Transformer.", "abstract": "Our work tackles the computational challenges of contrastive learning methods, particularly for the pretraining of Vision Transformers (ViTs). Despite the effectiveness of contrastive learning, the substantial computational resources required for training often hinder their practical application. To mitigate this issue, we propose an acceleration framework, leveraging ViT's unique ability to generalize across inputs of varying sequence lengths. Our method employs a mix of sequence compression strategies, including randomized token dropout and flexible patch scaling, to reduce the cost of gradient estimation and accelerate convergence. We further provide an in-depth analysis of the gradient estimation error of various acceleration strategies as well as their impact on downstream tasks, offering valuable insights into the trade-offs between acceleration and performance. We also propose a novel procedure to identify an optimal acceleration schedule to adjust the sequence compression ratios to the training progress, ensuring efficient training without sacrificing downstream performance. Our approach significantly reduces computational overhead across various self-supervised learning algorithms on large-scale datasets. In ImageNet, our method achieves speedups of 4$\\times$ in MoCo, 3.3$\\times$ in SimCLR, and 2.5$\\times$ in DINO, demonstrating substantial efficiency gains.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94817"} +{"video_file": "Wl2optQcng_39025246.mp4", "openreview_id": "Wl2optQcng", "slideslive_id": 39025246, "venue": "nips2024", "title": "Personalized Federated Learning via Feature Distribution Adaptation", "status": "Poster", "keywords": "Federated Learning;Data Heterogeneity;Personalization", "tldr": "Federated representation learning under a generative classifier for improved personalization under distribution shifts.", "abstract": "Federated learning (FL) is a distributed learning framework that leverages commonalities between distributed client datasets to train a global model. Under heterogeneous clients, however, FL can fail to produce stable training results. Personalized federated learning (PFL) seeks to address this by learning individual models tailored to each client. One approach is to decompose model training into shared representation learning and personalized classifier training. Nonetheless, previous works struggle to navigate the bias-variance trade-off in classifier learning, relying solely on limited local datasets or introducing costly techniques to improve generalization. In this work, we frame representation learning as a generative modeling task, where representations are trained with a classifier based on the global feature distribution. 
We then propose an algorithm, pFedFDA, that efficiently generates personalized models by adapting global generative classifiers to their local feature distributions. Through extensive computer vision benchmarks, we demonstrate that our method can adjust to complex distribution shifts with significant improvements over current state-of-the-art in data-scarce settings.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/94815"} +{"video_file": "Wq6aY6fC2H_39026634.mp4", "openreview_id": "Wq6aY6fC2H", "slideslive_id": 39026634, "venue": "nips2024", "title": "The Prevalence of Neural Collapse in Neural Multivariate Regression", "status": "Poster", "keywords": "Neural Collapse;Multivariate Regression;DNN;Unconstrained Feature Model", "tldr": "We provide theoretical and experimental analysis for Neural Regression Collapse which is prevalent among multivariate regression tasks.", "abstract": "Recently it has been observed that neural networks exhibit Neural Collapse (NC) during the final stage of training for the classification problem. We empirically show that multivariate regression, as employed in imitation learning and other applications, exhibits Neural Regression Collapse (NRC), a new form of neural collapse: (NRC1) The last-layer feature vectors collapse to the subspace spanned by the $n$ principal components of the feature vectors, where $n$ is the dimension of the targets (for univariate regression, $n=1$); (NRC2) The last-layer feature vectors also collapse to the subspace spanned by the last-layer weight vectors; (NRC3) The Gram matrix for the weight vectors converges to a specific functional form that depends on the covariance matrix of the targets. After empirically establishing the prevalence of (NRC1)-(NRC3) for a variety of datasets and network architectures, we provide an explanation of these phenomena by modeling the regression task in the context of the Unconstrained Feature Model (UFM), in which the last layer feature vectors are treated as free variables when minimizing the loss function. We show that when the regularization parameters in the UFM model are strictly positive, then (NRC1)-(NRC3) also emerge as solutions in the UFM optimization problem. We also show that if the regularization parameters are equal to zero, then there is no collapse. To our knowledge, this is the first empirical and theoretical study of neural collapse in the context of regression. This extension is significant not only because it broadens the applicability of neural collapse to a new category of problems but also because it suggests that the phenomena of neural collapse could be a universal behavior in deep learning.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/94810"} +{"video_file": "WvoKwq12x5_39028078.mp4", "openreview_id": "WvoKwq12x5", "slideslive_id": 39028078, "venue": "nips2024", "title": "PediatricsGPT: Large Language Models as Chinese Medical Assistants for Pediatric Applications", "status": "Poster", "keywords": "Large Language Models;Medical Applications", "tldr": "This paper presents the first Chinese pediatric LLM assistant and the corresponding high-quality dataset to facilitate medical community development.", "abstract": "Developing intelligent pediatric consultation systems offers promising prospects for improving diagnostic efficiency, especially in China, where healthcare resources are scarce. 
Despite recent advances in Large Language Models (LLMs) for Chinese medicine, their performance is sub-optimal in pediatric applications due to inadequate instruction data and vulnerable training procedures. To address the above issues, this paper builds PedCorpus, a high-quality dataset of over 300,000 multi-task instructions from pediatric textbooks, guidelines, and knowledge graph resources to fulfil diverse diagnostic demands. Upon well-designed PedCorpus, we propose PediatricsGPT, the first Chinese pediatric LLM assistant built on a systematic and robust training pipeline. In the continuous pre-training phase, we introduce a hybrid instruction pre-training mechanism to mitigate the internal-injected knowledge inconsistency of LLMs for medical domain adaptation. Immediately, the full-parameter Supervised Fine-Tuning (SFT) is utilized to incorporate the general medical knowledge schema into the models. After that, we devise a direct following preference optimization to enhance the generation of pediatrician-like humanistic responses. In the parameter-efficient secondary SFT phase, a mixture of universal-specific experts strategy is presented to resolve the competency conflict between medical generalist and pediatric expertise mastery. Extensive results based on the metrics, GPT-4, and doctor evaluations on distinct downstream tasks show that PediatricsGPT consistently outperforms previous Chinese medical LLMs. The project and data will be released at https://github.com/ydk122024/PediatricsGPT.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94806"} +{"video_file": "Wy9UgrMwD0_39024828.mp4", "openreview_id": "Wy9UgrMwD0", "slideslive_id": 39024828, "venue": "nips2024", "title": "No Representation, No Trust: Connecting Representation, Collapse, and Trust Issues in PPO", "status": "Poster", "keywords": "proximal policy optimization;plasticity loss;trust region;feature rank collapse;regularization", "tldr": "We show that PPO suffers from feature rank degradation, that it harms its heuristic trust region, and that it is connected to performance collapse.", "abstract": "Reinforcement learning (RL) is inherently rife with non-stationarity since the states and rewards the agent observes during training depend on its changing policy. Therefore, networks in deep RL must be capable of adapting to new observations and fitting new targets. However, previous works have observed that networks trained under non-stationarity exhibit an inability to continue learning, termed loss of plasticity, and eventually a collapse in performance. For off-policy deep value-based RL methods, this phenomenon has been correlated with a decrease in representation rank and the ability to fit random targets, termed capacity loss. Although this correlation has generally been attributed to neural network learning under non-stationarity, the connection to representation dynamics has not been carefully studied in on-policy policy optimization methods. In this work, we empirically study representation dynamics in Proximal Policy Optimization (PPO) on the Atari and MuJoCo environments, revealing that PPO agents are also affected by feature rank deterioration and capacity loss. We show that this is aggravated by stronger non-stationarity, ultimately driving the actor's performance to collapse, regardless of the performance of the critic. 
We ask why the trust region, specific to methods like PPO, cannot alleviate or prevent the collapse and find a connection between representation collapse and the degradation of the trust region, one exacerbating the other. Finally, we present Proximal Feature Optimization (PFO), a novel auxiliary loss that, along with other interventions, shows that regularizing the representation dynamics mitigates the performance collapse of PPO agents. Code and run histories are available at https://github.com/CLAIRE-Labo/no-representation-no-trust.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94803"} +{"video_file": "Wyp8vsL9de_39026911.mp4", "openreview_id": "Wyp8vsL9de", "slideslive_id": 39026911, "venue": "nips2024", "title": "Invariant subspaces and PCA in nearly matrix multiplication time", "status": "Poster", "keywords": "Invariant subspace;Generalized eigenvalue problem;PCA;Spectral projector;Spectral gap;Matrix multiplication;Bit complexity", "tldr": "Invariant subspaces and PCA embeddings can be provably approximated in nearly matrix multiplication time in finite precision", "abstract": "Approximating invariant subspaces of generalized eigenvalue problems (GEPs) is a fundamental computational problem at the core of machine learning and scientific computing. It is, for example, the root of Principal Component Analysis (PCA) for dimensionality reduction, data visualization, and noise filtering, and of Density Functional Theory (DFT), arguably the most popular method to calculate the electronic structure of materials. Given Hermitian $H, S \\in \\mathbb{C}^{n \\times n}$, where $S$ is positive-definite, let $\\Pi_k$ be the true spectral projector on the invariant subspace that is associated with the $k$ smallest (or largest) eigenvalues of the GEP $HC = SC\\Lambda$, for some $k \\in [n]$. We show that we can compute a matrix $\\widetilde{\\Pi}_k$ such that $\\lVert \\Pi_k - \\widetilde{\\Pi}_k \\rVert_2 \\leq \\epsilon$, in $O(n^{\\omega+\\eta} \\mathrm{polylog}(n, \\epsilon^{-1}, \\kappa(S), \\mathrm{gap}_k^{-1}))$ bit operations in the floating point model, for some $\\epsilon \\in (0,1)$, with probability $1 - 1/n$. Here, $\\eta > 0$ is arbitrarily small, $\\omega \\lesssim 2.372$ is the matrix multiplication exponent, $\\kappa(S) = \\lVert S \\rVert_2 \\lVert S^{-1} \\rVert_2$, and $\\mathrm{gap}_k$ is the gap between eigenvalues $k$ and $k+1$. To achieve such provable \"forward-error\" guarantees, our methods rely on a new $O(n^{\\omega+\\eta})$ stability analysis for the Cholesky factorization, and a smoothed analysis for computing spectral gaps, which can be of independent interest. Ultimately, we obtain new matrix multiplication-type bit complexity upper bounds for PCA problems, including classical PCA and (randomized) low-rank approximation.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94800"} +{"video_file": "X2G7LA7Av9_39027365.mp4", "openreview_id": "X2G7LA7Av9", "slideslive_id": 39027365, "venue": "nips2024", "title": "Can Simple Averaging Defeat Modern Watermarks?", "status": "Poster", "keywords": "Watermark;Security", "tldr": "Many content-agnostic digital watermarking techniques are susceptible to simple steganalysis-based watermark removal.", "abstract": "Digital watermarking techniques are crucial for copyright protection and source identification of images, especially in the era of generative AI models. 
However, many existing watermarking methods, particularly content-agnostic approaches that embed fixed patterns regardless of image content, are vulnerable to steganalysis attacks that can extract and remove the watermark with minimal perceptual distortion. In this work, we categorise watermarking algorithms into content-adaptive and content-agnostic ones, and demonstrate how averaging a collection of watermarked images could reveal the underlying watermark pattern. We then leverage this extracted pattern for effective watermark removal under both greybox and blackbox settings, even when the collection of images contains multiple watermark patterns. For some algorithms like Tree-Ring watermarks, the extracted pattern can also forge convincing watermarks on clean images. Our quantitative and qualitative evaluations across twelve watermarking methods highlight the threat posed by steganalysis to content-agnostic watermarks and the importance of designing watermarking techniques resilient to such analytical attacks. We propose security guidelines calling for using content-adaptive watermarking strategies and performing security evaluation against steganalysis. We also suggest multi-key assignments as potential mitigations against steganalysis vulnerabilities. Github page: \\url{https://github.com/showlab/watermark-steganalysis}.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/94798"} +{"video_file": "X2UMdvcmMo_39028594.mp4", "openreview_id": "X2UMdvcmMo", "slideslive_id": 39028594, "venue": "nips2024", "title": "FuseAnyPart: Diffusion-Driven Facial Parts Swapping via Multiple Reference Images", "status": "Spotlight", "keywords": "diffusion model;personalization;image generation", "tldr": "A novel diffusion-driven facial parts swapping methods with multiple reference images.", "abstract": "Facial parts swapping aims to selectively transfer regions of interest from the source image onto the target image while maintaining the rest of the target image unchanged. Most studies on face swapping designed specifically for full-face swapping, are either unable or significantly limited when it comes to swapping individual facial parts, which hinders fine-grained and customized character designs. However, designing such an approach specifically for facial parts swapping is challenged by a reasonable multiple reference feature fusion, which needs to be both efficient and effective. To overcome this challenge, FuseAnyPart is proposed to facilitate the seamless \"fuse-any-part\" customization of the face. In FuseAnyPart, facial parts from different people are assembled into a complete face in latent space within the Mask-based Fusion Module. Subsequently, the consolidated feature is dispatched to the Addition-based Injection Module for fusion within the UNet of the diffusion model to create novel characters. Extensive experiments qualitatively and quantitatively validate the superiority and robustness of FuseAnyPart. 
Source codes are available at https://github.com/Thomas-wyh/FuseAnyPart.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94797"} +{"video_file": "X34GKv8sYT_39026623.mp4", "openreview_id": "X34GKv8sYT", "slideslive_id": 39026623, "venue": "nips2024", "title": "Lorentz-Equivariant Geometric Algebra Transformers for High-Energy Physics", "status": "Poster", "keywords": "Geometric deep learning;equivariance;Lorentz symmetry;Transformer;flow matching;high-energy physics;particle physics", "tldr": "A Lorentz-equivariant Transformer architecture plus Lorentz-equivariant flow matching for high-energy physics", "abstract": "Extracting scientific understanding from particle-physics experiments requires solving diverse learning problems with high precision and good data efficiency. We propose the Lorentz Geometric Algebra Transformer (L-GATr), a new multi-purpose architecture for high-energy physics. L-GATr represents high-energy data in a geometric algebra over four-dimensional space-time and is equivariant under Lorentz transformations, the symmetry group of relativistic kinematics. At the same time, the architecture is a Transformer, which makes it versatile and scalable to large systems. L-GATr is first demonstrated on regression and classification tasks from particle physics. We then construct the first Lorentz-equivariant generative model: a continuous normalizing flow based on an L-GATr network, trained with Riemannian flow matching. Across our experiments, L-GATr is on par with or outperforms strong domain-specific baselines.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/94796"} +{"video_file": "X3ydKRcQr6_39027104.mp4", "openreview_id": "X3ydKRcQr6", "slideslive_id": 39027104, "venue": "nips2024", "title": "MUVERA: Multi-Vector Retrieval via Fixed Dimensional Encoding", "status": "Poster", "keywords": "Retrieval;Late Interaction;ColBERT;Multi-Vector", "tldr": "We give a new and highly-efficient algorithm for multi-vector retrieval via a provable reduction to single-vector retrieval.", "abstract": "Neural embedding models have become a fundamental component of modern information retrieval (IR) pipelines. These models produce a single embedding $x \\in \\mathbb{R}^d$ per data-point, allowing for fast retrieval via highly optimized maximum inner product search (MIPS) algorithms. Recently, beginning with the landmark ColBERT paper, multi-vector models, which produce a set of embeddings per data point, have achieved markedly superior performance for IR tasks. Unfortunately, using these models for IR is computationally expensive due to the increased complexity of multi-vector retrieval and scoring.\nIn this paper, we introduce MUVERA (MUlti-VEctor Retrieval Algorithm), a retrieval mechanism which reduces multi-vector similarity search to single-vector similarity search. This enables the usage of off-the-shelf MIPS solvers for multi-vector retrieval. MUVERA asymmetrically generates Fixed Dimensional Encodings (FDEs) of queries and documents, which are vectors whose inner product approximates multi-vector similarity. We prove that FDEs give high-quality $\\epsilon$-approximations, thus providing the first single-vector proxy for multi-vector similarity with theoretical guarantees. Empirically, we find that FDEs achieve the same recall as prior state-of-the-art heuristics while retrieving 2-5\u00d7 fewer candidates. 
Compared to prior state of the art implementations, MUVERA achieves consistently good end-to-end recall and latency across a diverse set of the BEIR retrieval datasets, achieving an average of 10% improved recall with 90% lower latency.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94793"} +{"video_file": "XAKALzI3Gw_39024363.mp4", "openreview_id": "XAKALzI3Gw", "slideslive_id": 39024363, "venue": "nips2024", "title": "Jointly Modeling Inter- & Intra-Modality Dependencies for Multi-modal Learning", "status": "Poster", "keywords": "Multi-modal learning;deep learning", "tldr": "We distinguish between different modeling paradigms for multi-modal learning from the perspective of generative models and offer a general recipe for designing models that efficiently leverage multi-modal data, leading to more accurate predictions.", "abstract": "Supervised multi-modal learning involves mapping multiple modalities to a target label. Previous studies in this field have concentrated on capturing in isolation either the inter-modality dependencies (the relationships between different modalities and the label) or the intra-modality dependencies (the relationships within a single modality and the label). We argue that these conventional approaches that rely solely on either inter- or intra-modality dependencies may not be optimal in general. We view the multi-modal learning problem from the lens of generative models where we consider the target as a source of multiple modalities and the interaction between them. Towards that end, we propose inter- & intra-modality modeling (I2M2) framework, which captures and integrates both the inter- and intra-modality dependencies, leading to more accurate predictions. We evaluate our approach using real-world healthcare and vision-and-language datasets with state-of-the-art models, demonstrating superior performance over traditional methods focusing only on one type of modality dependency. The code is available at https://github.com/divyam3897/I2M2.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/94788"} +{"video_file": "XEbPJUQzs3_39026709.mp4", "openreview_id": "XEbPJUQzs3", "slideslive_id": 39026709, "venue": "nips2024", "title": "Prospective Learning: Learning for a Dynamic Future", "status": "Poster", "keywords": "Distribution Shifts;Learning Theory", "tldr": "We incorporate time into the PAC framework, demonstrating both theoretically and empirically that time-aware empirical risk minimization outperforms time-agnostic methods for certain problems where the data distribution varies predictably over time.", "abstract": "In real-world applications, the distribution of the data, and our goals, evolve over time. The prevailing theoretical framework for studying machine learning, namely probably approximately correct (PAC) learning, largely ignores time. As a consequence, existing strategies to address the dynamic nature of data and goals exhibit poor real-world performance. This paper develops a theoretical framework called \"Prospective Learning\" that is tailored for situations when the optimal hypothesis changes over time. In PAC learning, empirical risk minimization (ERM) is known to be consistent. We develop a learner called Prospective ERM, which returns a sequence of predictors that make predictions on future data. We prove that the risk of prospective ERM converges to the Bayes risk under certain assumptions on the stochastic process generating the data. 
Prospective ERM, roughly speaking, incorporates time as an input in addition to the data. We show that standard ERM as done in PAC learning, without incorporating time, can result in failure to learn when distributions are dynamic. Numerical experiments illustrate that prospective ERM can learn synthetic and visual recognition problems constructed from MNIST and CIFAR-10. Code at https://github.com/neurodata/prolearn.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94786"} +{"video_file": "XErWgdxaFU_39028487.mp4", "openreview_id": "XErWgdxaFU", "slideslive_id": 39028487, "venue": "nips2024", "title": "Textual Training for the Hassle-Free Removal of Unwanted Visual Data: Case Studies on OOD and Hateful Image Detection", "status": "Poster", "keywords": "Vision-Language models;Multimodal models;CLIP;unwanted visual data detection;text-only training;hateful image detection", "tldr": "We introduce Hassle-Free Textual Training (HFTT), a method designed to improve the performance of VLMs in identifying unwanted visual data, utilizing solely synthetic textual data.", "abstract": "In our study, we explore methods for detecting unwanted content lurking in visual datasets. We provide a theoretical analysis demonstrating that a model capable of successfully partitioning visual data can be obtained using only textual data. Based on the analysis, we propose Hassle-Free Textual Training (HFTT), a streamlined method capable of acquiring detectors for unwanted visual content, using only textual data in conjunction with pre-trained vision-language models. HFTT features an innovative objective function that significantly reduces the necessity for human involvement in data annotation. Furthermore, HFTT employs a clever textual data synthesis method, effectively emulating the integration of unknown visual data distribution into the training process at no extra cost. The unique characteristics of HFTT extend its utility beyond traditional out-of-distribution detection, making it applicable to tasks that address more abstract concepts. We complement our analyses with experiments in hateful image detection and out-of-distribution detection. Our codes are available at https://github.com/HFTT-anonymous/HFTT.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94785"} +{"video_file": "XF1jpo5k6l_39027719.mp4", "openreview_id": "XF1jpo5k6l", "slideslive_id": 39027719, "venue": "nips2024", "title": "fMRI predictors based on language models of increasing complexity recover brain left lateralization", "status": "Poster", "keywords": "large language models;brain lateralization;neuroscience;language processing;fMRI;scaling laws", "tldr": "fMRI predictors based on language models of increasing complexity recover brain left lateralization for language, and the difference in brain score between left and right hemisphere follows a scaling law.", "abstract": "Over the past decade, studies of naturalistic language processing where participants are scanned while listening to continuous text have flourished. Using word embeddings at first, then large language models, researchers have created encoding models to analyze the brain signals. Presenting these models with the same text as the participants allows to identify brain areas where there is a significant correlation between the functional magnetic resonance imaging (fMRI) time series and the ones predicted by the models' artificial neurons. 
One intriguing finding from these studies is that they have revealed highly symmetric bilateral activation patterns, somewhat at odds with the well-known left lateralization of language processing. Here, we report analyses of an fMRI dataset where we manipulate the complexity of large language models, testing 28 pretrained models from 8 different families, ranging from 124M to 14.2B parameters. First, we observe that the performance of models in predicting brain responses follows a scaling law, where the fit with brain activity increases linearly with the logarithm of the number of parameters of the model (and its performance on natural language processing tasks). Second, although this effect is present in both hemispheres, it is stronger in the left than in the right hemisphere. Specifically, the left-right difference in brain correlation follows a scaling law with the number of parameters. This finding reconciles computational analyses of brain activity using large language models with the classic observation from aphasic patients showing left hemisphere dominance for language.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/94784"} +{"video_file": "XHCYZNmqnv_39027366.mp4", "openreview_id": "XHCYZNmqnv", "slideslive_id": 39027366, "venue": "nips2024", "title": "Detecting Brittle Decisions for Free: Leveraging Margin Consistency in Deep Robust Classifiers", "status": "Poster", "keywords": "adversarial robustness;empirical robustness estimation;classification;vulnerability detection", "tldr": "We introduce and use margin-consistency for robust deep classifiers the efficient detection of vulnerable instances to adversarial examples via the logit margin..", "abstract": "Despite extensive research on adversarial training strategies to improve robustness, the decisions of even the most robust deep learning models can still be quite sensitive to imperceptible perturbations, creating serious risks when deploying them for high-stakes real-world applications. While detecting such cases may be critical, evaluating a model's vulnerability at a per-instance level using adversarial attacks is computationally too intensive and unsuitable for real-time deployment scenarios. The input space margin is the exact score to detect non-robust samples and is intractable for deep neural networks. This paper introduces the concept of margin consistency -- a property that links the input space margins and the logit margins in robust models -- for efficient detection of vulnerable samples. First, we establish that margin consistency is a necessary and sufficient condition to use a model's logit margin as a score for identifying non-robust samples. Next, through comprehensive empirical analysis of various robustly trained models on CIFAR10 and CIFAR100 datasets, we show that they indicate high margin consistency with a strong correlation between their input space margins and the logit margins. Then, we show that we can effectively use the logit margin to confidently detect brittle decisions with such models. Finally, we address cases where the model is not sufficiently margin-consistent by learning a pseudo-margin from the feature representation. 
Our findings highlight the potential of leveraging deep representations to efficiently assess adversarial vulnerability in deployment scenarios.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94783"} +{"video_file": "XHTl2k1LYk_39024386.mp4", "openreview_id": "XHTl2k1LYk", "slideslive_id": 39024386, "venue": "nips2024", "title": "Absorb & Escape: Overcoming Single Model Limitations in Generating Heterogeneous Genomic Sequences", "status": "Poster", "keywords": "Computational Biology;Genomics;Deep Learning;Generative Model", "tldr": "This paper proposes a post-training sampling method to perform compositional generation from autoRegressive mdoels and diffusion models for DNA generation.", "abstract": "Recent advances in immunology and synthetic biology have accelerated the development of deep generative methods for DNA sequence design. Two dominant approaches in this field are AutoRegressive (AR) models and Diffusion Models (DMs). However, genomic sequences are functionally heterogeneous, consisting of multiple connected regions (e.g., Promoter Regions, Exons, and Introns) where elements within each region come from the same probability distribution, but the overall sequence is non-homogeneous. This heterogeneous nature presents challenges for a single model to accurately generate genomic sequences. In this paper, we analyze the properties of AR models and DMs in heterogeneous genomic sequence generation, pointing out crucial limitations in both methods: (i) AR models capture the underlying distribution of data by factorizing and learning the transition probability but fail to capture the global property of DNA sequences. (ii) DMs learn to recover the global distribution but tend to produce errors at the base pair level. To overcome the limitations of both approaches, we propose a post-training sampling method, termed Absorb & Escape (A&E) to perform compositional generation from AR models and DMs. This approach starts with samples generated by DMs and refines the sample quality using an AR model through the alternation of the Absorb and Escape steps. To assess the quality of generated sequences, we conduct extensive experiments on 15 species for conditional and unconditional DNA generation. The experiment results from motif distribution, diversity checks, and genome integration tests unequivocally show that A&E outperforms state-of-the-art AR models and DMs in genomic sequence generation. A&E does not suffer from the slowness of traditional MCMC to sample from composed distributions with Energy-Based Models whilst it obtains higher quality samples than single models. Our research sheds light on the limitations of current single-model approaches in DNA generation and provides a simple but effective solution for heterogeneous sequence generation. Code is available at the Github Repo.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/94782"} +{"video_file": "XHWkHFWi3k_39026267.mp4", "openreview_id": "XHWkHFWi3k", "slideslive_id": 39026267, "venue": "nips2024", "title": "Self-Refining Diffusion Samplers: Enabling Parallelization via Parareal Iterations", "status": "Poster", "keywords": "diffusion;sampling;parallel", "tldr": "We introduce a novel technique to improve sampling speed of diffusion models by exploiting parallel-in-time integration.", "abstract": "In diffusion models, samples are generated through an iterative refinement process, requiring hundreds of sequential model evaluations. 
Several recent methods have introduced approximations (fewer discretization steps or distillation) to trade off speed at the cost of sample quality. In contrast, we introduce Self-Refining Diffusion Samplers (SRDS) that retain sample quality and can improve latency at the cost of additional parallel compute. We take inspiration from the Parareal algorithm, a popular numerical method for parallel-in-time integration of differential equations. In SRDS, a quick but rough estimate of a sample is first created and then iteratively refined in parallel through Parareal iterations. SRDS is not only guaranteed to accurately solve the ODE and converge to the serial solution but also benefits from parallelization across the diffusion trajectory, enabling batched inference and pipelining. As we demonstrate for pre-trained diffusion models, the early convergence of this refinement procedure drastically reduces the number of steps required to produce a sample, speeding up generation for instance by up to 1.7x on a 25-step StableDiffusion-v2 benchmark and up to 4.3x on longer trajectories.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94781"} +{"video_file": "XIcBCBe6C3_39026718.mp4", "openreview_id": "XIcBCBe6C3", "slideslive_id": 39026718, "venue": "nips2024", "title": "TinyTTA: Efficient Test-time Adaptation via Early-exit Ensembles on Edge Devices", "status": "Poster", "keywords": "Test-time adaptation;efficiency;edge device;microcontroller", "tldr": "Efficient Test-time Adaptation Framework for Microcontrollers along with an MCU TTA library.", "abstract": "The increased adoption of Internet of Things (IoT) devices has led to the generation of large data streams with applications in healthcare, sustainability, and robotics. In some cases, deep neural networks have been deployed directly on these resource-constrained units to limit communication overhead, increase efficiency and privacy, and enable real-time applications. However, a common challenge in this setting is the continuous adaptation of models necessary to accommodate changing environments, i.e., data distribution shifts. Test-time adaptation (TTA) has emerged as one potential solution, but its validity has yet to be explored in resource-constrained hardware settings, such as those involving microcontroller units (MCUs). TTA on constrained devices generally suffers from i) memory overhead due to the full backpropagation of a large pre-trained network, ii) lack of support for normalization layers on MCUs, and iii) either memory exhaustion with large batch sizes required for updating or poor performance with small batch sizes. In this paper, we propose TinyTTA, to enable, for the first time, efficient TTA on constrained devices with limited memory. To address the limited memory constraints, we introduce a novel self-ensemble and batch-agnostic early-exit strategy for TTA, which enables continuous adaptation with small batch sizes for reduced memory usage, handles distribution shifts, and improves latency efficiency. Moreover, we develop the TinyTTA Engine, a first-of-its-kind MCU library that enables on-device TTA. We validate TinyTTA on a Raspberry Pi Zero 2W and an STM32H747 MCU. Experimental results demonstrate that TinyTTA improves TTA accuracy by up to 57.6%, reduces memory usage by up to six times, and achieves faster and more energy-efficient TTA. 
Notably, TinyTTA is the only framework able to run TTA on MCU STM32H747 with a 512 KB memory constraint while maintaining high performance.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/94778"} +{"video_file": "XKrSB5a79F_39024738.mp4", "openreview_id": "XKrSB5a79F", "slideslive_id": 39024738, "venue": "nips2024", "title": "Log-concave Sampling from a Convex Body with a Barrier: a Robust and Unified Dikin Walk", "status": "Poster", "keywords": "Log-concave sampling;Dikin walk", "tldr": "We design a Dikin walk for log-concave sampling over polytopes and spectrahedra with the fastest mixing time and efficient per iteration cost.", "abstract": "We consider the problem of sampling from a $d$-dimensional log-concave distribution $\\pi(\\theta) \\propto \\exp(-f(\\theta))$ for $L$-Lipschitz $f$, constrained to a convex body (described by $n$ hyperplanes) equipped with a barrier function, contained in a ball of radius $R$ with a $w$-warm start.\nWe propose a \\emph{robust} sampling framework that computes spectral approximations to the Hessian of the barrier functions in each iteration. We prove that for the polytope constraints, sampling with the Lee-Sidford barrier function mixes within $\\widetilde O((d^2+dL^2R^2)\\log(w/\\delta))$ steps with a per step cost of $\\widetilde O(nd^{\\omega-1})$, where $\\omega\\approx 2.37$ is the fast matrix multiplication exponent. Compared to the prior work of Mangoubi and Vishnoi, our approach gives faster mixing time as we are able to design a generalized soft-threshold Dikin walk beyond log-barrier.\nWe further extend our result to show how to sample from a $d$-dimensional spectrahedron, the constrained set of a semidefinite program, specified by the set ${x\\in \\mathbb{R}^d: \\sum_{i=1}^d x_i A_i \\succeq C }$ where $A_1,\\ldots,A_d, C$ are $n\\times n$ real symmetric matrices. We design a walk that mixes in $\\widetilde O((nd+dL^2R^2)\\log(w/\\delta))$ steps with a per iteration cost of $\\widetilde O(n^\\omega+n^2d^{3\\omega-5})$. We improve the mixing time bound of prior best Dikin walk due to Narayanan and Rakhlin that mixes in $\\widetilde O((n^2d^3+n^2dL^2R^2)\\log(w/\\delta))$ steps.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/94777"} +{"video_file": "XMQTNzlgTJ_39027664.mp4", "openreview_id": "XMQTNzlgTJ", "slideslive_id": 39027664, "venue": "nips2024", "title": "High-probability complexity bounds for stochastic non-convex minimax optimization", "status": "Poster", "keywords": "nonconvex minimax optimization;high-probability guarantees;stochastic gradient descent ascent methods", "tldr": "We provide the first high-probability complexity guarantees for nonconvex/PL minimax problems that satisfy the PL-condition in the dual variable.", "abstract": "Stochastic smooth nonconvex minimax problems are prevalent in machine learning, e.g., GAN training, fair classification, and distributionally robust learning. Stochastic gradient descent ascent (GDA)-type methods are popular in practice due to their simplicity and single-loop nature. However, there is a significant gap between the theory and practice regarding high-probability complexity guarantees for these methods on stochastic nonconvex minimax problems. Existing high-probability bounds for GDA-type single-loop methods only apply to convex/concave minimax problems and to particular non-monotone variational inequality problems under some restrictive assumptions. 
In this work, we address this gap by providing the first high-probability complexity guarantees for nonconvex/PL minimax problems corresponding to a smooth function that satisfies the PL-condition in the dual variable. Specifically, we show that when the stochastic gradients are light-tailed, the smoothed alternating GDA method can compute an $\\varepsilon$-stationary point within $O(\\frac{\\ell\\kappa^2\\delta^2}{\\varepsilon^4} + \\frac{\\kappa}{\\varepsilon^2}(\\ell + \\delta^2\\log(1/\\bar{q})))$ stochastic gradient calls with probability at least $1-\\bar{q}$ for any $\\bar{q} \\in (0,1)$, where $\\mu$ is the PL constant, $\\ell$ is the Lipschitz constant of the gradient, $\\kappa = \\ell/\\mu$ is the condition number, and $\\delta^2$ denotes a bound on the variance of stochastic gradients. We also present numerical results on a nonconvex/PL problem with synthetic data and on distributionally robust optimization problems with real data, illustrating our theoretical findings.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94774"} +{"video_file": "XOVks7JHQA_39025048.mp4", "openreview_id": "XOVks7JHQA", "slideslive_id": 39025048, "venue": "nips2024", "title": "Linear Uncertainty Quantification of Graphical Model Inference", "status": "Poster", "keywords": "graphical models;belief propagation;uncertainty quantification", "tldr": "We propose LinUProp, a novel approach that offers accuracy, interpretability, scalability and guaranteed convergence for uncertainty quantification of graphical model inference.", "abstract": "Uncertainty Quantification (UQ) is vital for decision makers as it offers insights into the potential reliability of data and model, enabling more informed and risk-aware decision-making. Graphical models, capable of representing data with complex dependencies, are widely used across domains. Existing sampling-based UQ methods are unbiased but cannot guarantee convergence and are time-consuming on large-scale graphs. There are fast UQ methods for graphical models with closed-form solutions and convergence guarantee but with uncertainty underestimation. We propose LinUProp, a UQ method that utilizes a novel linear propagation of uncertainty to model uncertainty among related nodes additively instead of multiplicatively, to offer linear scalability, guaranteed convergence, and closed-form solutions without underestimating uncertainty. Theoretically, we decompose the expected prediction error of the graphical model and prove that the uncertainty computed by LinUProp is the generalized variance component of the decomposition. Experimentally, we demonstrate that LinUProp is consistent with the sampling-based method but with linear scalability and fast convergence. 
Moreover, LinUProp outperforms competitors in uncertainty-based active learning on four real-world graph datasets, achieving higher accuracy with a lower labeling budget.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/94771"} +{"video_file": "XRJXKBeeTD_39027302.mp4", "openreview_id": "XRJXKBeeTD", "slideslive_id": 39027302, "venue": "nips2024", "title": "Fine-Tuning is Fine, if Calibrated", "status": "Poster", "keywords": "Fine-Tuning;Pre-training;Domain Adaptation", "tldr": "We study the problem of fine-tuning a pre-trained model capable of recognizing a large number of classes with only a subset of classes in the new domains.", "abstract": "Fine-tuning is arguably the most straightforward way to tailor a pre-trained model (e.g., a foundation model) to downstream applications, but it also comes with the risk of losing valuable knowledge the model had learned in pre-training. For example, fine-tuning a pre-trained classifier capable of recognizing a large number of classes to master a subset of classes at hand is shown to drastically degrade the model's accuracy in the other classes it had previously learned. As such, it is hard to further use the fine-tuned model when it encounters classes beyond the fine-tuning data. In this paper, we systematically dissect the issue, aiming to answer the fundamental question, \"What has been damaged in the fine-tuned model?\" To our surprise, we find that the fine-tuned model neither forgets the relationship among the other classes nor degrades the features to recognize these classes. Instead, the fine-tuned model often produces more discriminative features for these other classes, even if they were missing during fine-tuning! What really hurts the accuracy is the discrepant logit scales between the fine-tuning classes and the other classes, implying that a simple post-processing calibration would bring back the pre-trained model's capability and at the same time unveil the feature improvement over all classes. We conduct an extensive empirical study to demonstrate the robustness of our findings and provide preliminary explanations underlying them, suggesting new directions for future theoretical analysis.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94769"} +{"video_file": "XXOMCwZ6by_39027475.mp4", "openreview_id": "XXOMCwZ6by", "slideslive_id": 39027475, "venue": "nips2024", "title": "Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks", "status": "Poster", "keywords": "Multimodal Agent;Multimodal Large Language Models;Multimodal In-context Learning", "tldr": "We propose a powerful agent with hybrid multimodal memory architecture, Optimus-1, in Minecraft.", "abstract": "Building a general-purpose agent is a long-standing vision in the field of artificial intelligence. Existing agents have made remarkable progress in many domains, yet they still struggle to complete long-horizon tasks in an open world. We attribute this to the lack of necessary world knowledge and multimodal experience that can guide agents through a variety of long-horizon tasks. In this paper, we propose a Hybrid Multimodal Memory module to address the above challenges. It 1) transforms knowledge into Hierarchical Directed Knowledge Graph that allows agents to explicitly represent and learn world knowledge, and 2) summarises historical information into Abstracted Multimodal Experience Pool that provide agents with rich references for in-context learning. 
On top of the Hybrid Multimodal Memory module, a multimodal agent, Optimus-1, is constructed with dedicated Knowledge-guided Planner and Experience-Driven Reflector, contributing to a better planning and reflection in the face of long-horizon tasks in Minecraft. Extensive experimental results show that Optimus-1 significantly outperforms all existing agents on challenging long-horizon task benchmarks, and exhibits near human-level performance on many tasks. In addition, we introduce various Multimodal Large Language Models (MLLMs) as the backbone of Optimus-1. Experimental results show that Optimus-1 exhibits strong generalization with the help of the Hybrid Multimodal Memory module, outperforming the GPT-4V baseline on many tasks.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/94762"} +{"video_file": "XXVfj4P8nr_39027190.mp4", "openreview_id": "XXVfj4P8nr", "slideslive_id": 39027190, "venue": "nips2024", "title": "Training-Free Open-Ended Object Detection and Segmentation via Attention as Prompts", "status": "Poster", "keywords": "open-world;open-ended;vision language model;segment anything model;autonomous driving", "tldr": "We present VL-SAM, a framework that combines vision-language model with segment-anything model to address the open-ended object detection and segmentation task.", "abstract": "Existing perception models achieve great success by learning from large amounts of labeled data, but they still struggle with open-world scenarios. To alleviate this issue, researchers introduce open-set perception tasks to detect or segment unseen objects in the training set. However, these models require predefined object categories as inputs during inference, which are not available in real-world scenarios. Recently, researchers pose a new and more practical problem, i.e., open-ended object detection, which discovers unseen objects without any object categories as inputs. In this paper, we present VL-SAM, a training-free framework that combines the generalized object recognition model (i.e., Vision-Language Model) with the generalized object localization model (i.e., Segment-Anything Model), to address the open-ended object detection and segmentation task. Without additional training, we connect these two generalized models with attention maps as the prompts. Specifically, we design an attention map generation module by employing head aggregation and a regularized attention flow to aggregate and propagate attention maps across all heads and layers in VLM, yielding high-quality attention maps. Then, we iteratively sample positive and negative points from the attention maps with a prompt generation module and send the sampled points to SAM to segment corresponding objects. Experimental results on the long-tail instance segmentation dataset (LVIS) show that our method surpasses the previous open-ended method on the object detection task and can provide additional instance segmentation masks. Besides, VL-SAM achieves favorable performance on the corner case object detection dataset (CODA), demonstrating the effectiveness of VL-SAM in real-world applications. 
Moreover, VL-SAM exhibits good model generalization that can incorporate various VLMs and SAMs.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94761"} +{"video_file": "XZ0fpoAKEB_39026778.mp4", "openreview_id": "XZ0fpoAKEB", "slideslive_id": 39026778, "venue": "nips2024", "title": "No Free Delivery Service: Epistemic limits of passive data collection in complex social systems", "status": "Poster", "keywords": "validity;model validation;complex systems;epistemology;scaling;large language models;recommender systems", "tldr": "Formal impossibility results for model validation in key AI tasks such as recommender systems and LLM reasoning if they require passive data collection from complex social systems.", "abstract": "Rapid model validation via the train-test paradigm has been a key driver for the breathtaking progress in machine learning and AI. However, modern AI systems often depend on a combination of tasks and data collection practices that violate all assumptions ensuring test validity. Yet, without rigorous model validation we cannot ensure the intended outcomes of deployed AI systems, including positive social impact, nor continue to advance AI research in a scienti\ufb01cally sound way. In this paper, I will show that for widely considered inference settings in complex social systems the train-test paradigm does not only lack a justi\ufb01cation but is indeed invalid for any risk estimator, including counterfactual and causal estimators, with high probability. These formal impossibility results highlight a fundamental epistemic issue, i.e., that for key tasks in modern AI we cannot know whether models are valid under current data collection practices. Importantly, this includes variants of both recommender systems and reasoning via large language models, and neither na\u00efve scaling nor limited benchmarks are suited to address this issue. I am illustrating these results via the widely used MovieLens benchmark and conclude by discussing the implications of these results for AI in social systems, including possible remedies such as participatory data curation and open science.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/94758"} +{"video_file": "XZ4XSUTGRb_39027415.mp4", "openreview_id": "XZ4XSUTGRb", "slideslive_id": 39027415, "venue": "nips2024", "title": "Polyhedral Complex Derivation from Piecewise Trilinear Networks", "status": "Poster", "keywords": "Polyhedral Complex;Neural Radiance Fields;3D Mesh", "tldr": "Mesh extraction from neural surfaces with trilinear interpolating methods through the eikonal constraint of hypersurface planarity.", "abstract": "Recent advancements in visualizing deep neural networks provide insights into their structures and mesh extraction from Continuous Piecewise Affine (CPWA) functions. Meanwhile, developments in neural surface representation learning incorporate non-linear positional encoding, addressing issues like spectral bias; however, this poses challenges in applying mesh extraction techniques based on CPWA functions. Focusing on trilinear interpolating methods as positional encoding, we present theoretical insights and an analytical mesh extraction, showing the transformation of hypersurfaces to flat planes within the trilinear region under the eikonal constraint. Moreover, we introduce a method for approximating intersecting points among three hypersurfaces contributing to broader applications. 
We empirically validate correctness and parsimony through chamfer distance and efficiency, and angular distance, while examining the correlation between the eikonal loss and the planarity of the hypersurfaces.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/94757"}
{"video_file": "XZp1uP0hh2_39024420.mp4", "openreview_id": "XZp1uP0hh2", "slideslive_id": 39024420, "venue": "nips2024", "title": "Semi-Random Matrix Completion via Flow-Based Adaptive Reweighting", "status": "Poster", "keywords": "matrix completion;semi-random model;flow solver;short-flat decomposition;adaptive reweighting", "tldr": "We give the first nearly-linear time algorithm for solving semi-random matrix completion to high accuracy and with noisy observations.", "abstract": "We consider the well-studied problem of completing a rank-r, \u03bc-incoherent matrix M \u2208 R^{d \u00d7 d} from incomplete observations. We focus on this problem in the semi-random setting where each entry is independently revealed with probability at least p = poly(r, \u03bc, log d)/d. Whereas multiple nearly-linear time algorithms have been established in the more specialized fully-random setting where each entry is revealed with probability exactly p, the only known nearly-linear time algorithm in the semi-random setting is due to [CG18], whose sample complexity has a polynomial dependence on the inverse accuracy and condition number and thus cannot achieve high-accuracy recovery. Our main result is the first high-accuracy nearly-linear time algorithm for solving semi-random matrix completion, and an extension to the noisy observation setting. Our result builds upon the recent short-flat decomposition framework of [KLLST23a, KLLST23b] and leverages fast algorithms for flow problems on graphs to solve adaptive reweighting subproblems efficiently.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94756"}
{"video_file": "Xa3dVaolKo_39026469.mp4", "openreview_id": "Xa3dVaolKo", "slideslive_id": 39026469, "venue": "nips2024", "title": "Pure Message Passing Can Estimate Common Neighbor for Link Prediction", "status": "Poster", "keywords": "Graph Neural Networks;Link Prediction", "tldr": "We demonstrate that node-level message passing can effectively capture link-level structural features, such as Common Neighbor, for link prediction.", "abstract": "Message Passing Neural Networks (MPNNs) have emerged as the {\\em de facto} standard in graph representation learning. However, when it comes to link prediction, they are not always superior to simple heuristics such as Common Neighbor (CN). This discrepancy stems from a fundamental limitation: while MPNNs excel in node-level representation, they stumble with encoding the joint structural features essential to link prediction, like CN. To bridge this gap, we posit that, by harnessing the orthogonality of input vectors, pure message-passing can indeed capture joint structural features. Specifically, we study the proficiency of MPNNs in approximating CN heuristics. Based on our findings, we introduce the Message Passing Link Predictor (MPLP), a novel link prediction model. MPLP taps into quasi-orthogonal vectors to estimate link-level structural features, all while preserving the node-level complexities. 
We conduct experiments on benchmark datasets from various domains, where our method consistently outperforms the baseline methods, establishing new state-of-the-arts.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/94755"} +{"video_file": "XcbgkjWSJ7_39024593.mp4", "openreview_id": "XcbgkjWSJ7", "slideslive_id": 39024593, "venue": "nips2024", "title": "When Your AIs Deceive You: Challenges of Partial Observability in Reinforcement Learning from Human Feedback", "status": "Poster", "keywords": "RLHF;Partial Observability;Deception;AI Alignment;Reward Learning", "tldr": "We study the challenges that arise when learning reward functions with human feedback from partial observations", "abstract": "Past analyses of reinforcement learning from human feedback (RLHF) assume that the human evaluators fully observe the environment. What happens when human feedback is based only on partial observations? We formally define two failure cases: deceptive inflation and overjustification. Modeling the human as Boltzmann-rational w.r.t. a belief over trajectories, we prove conditions under which RLHF is guaranteed to result in policies that deceptively inflate their performance, overjustify their behavior to make an impression, or both. Under the new assumption that the human's partial observability is known and accounted for, we then analyze how much information the feedback process provides about the return function. We show that sometimes, the human's feedback determines the return function uniquely up to an additive constant, but in other realistic cases, there is irreducible ambiguity. We propose exploratory research directions to help tackle these challenges and experimentally validate both the theoretical concerns and potential mitigations, and caution against blindly applying RLHF in partially observable settings.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/94754"} +{"video_file": "XgAzCLsJAq_39026662.mp4", "openreview_id": "XgAzCLsJAq", "slideslive_id": 39026662, "venue": "nips2024", "title": "Continual Learning in the Frequency Domain", "status": "Poster", "keywords": "Continual Learning;Catastrophic Forgetting;Experience Replay;Efficient Deep Learning System;Frequency Domain Learning", "tldr": "A novel framework for mapping input images into the frequency domain and minimizing interference between frequency domain features to enhance the performance and efficiency of continual learning.", "abstract": "Continual learning (CL) is designed to learn new tasks while preserving existing knowledge. Replaying samples from earlier tasks has proven to be an effective method to mitigate the forgetting of previously acquired knowledge. However, the current research on the training efficiency of rehearsal-based methods is insufficient, which limits the practical application of CL systems in resource-limited scenarios. The human visual system (HVS) exhibits varying sensitivities to different frequency components, enabling the efficient elimination of visually redundant information. Inspired by HVS, we propose a novel framework called Continual Learning in the Frequency Domain (CLFD). To our knowledge, this is the first study to utilize frequency domain features to enhance the performance and efficiency of CL training on edge devices. 
For the input features of the feature extractor, CLFD employs wavelet transform to map the original input image into the frequency domain, thereby effectively reducing the size of input feature maps. Regarding the output features of the feature extractor, CLFD selectively utilizes output features for distinct classes for classification, thereby balancing the reusability and interference of output features based on the frequency domain similarity of the classes across various tasks. Optimizing only the input and output features of the feature extractor allows for seamless integration of CLFD with various rehearsal-based methods. Extensive experiments conducted in both cloud and edge environments demonstrate that CLFD consistently improves the performance of state-of-the-art (SOTA) methods in both precision and training efficiency. Specifically, CLFD can increase the accuracy of the SOTA CL method by up to 6.83% and reduce the training time by 2.6\u00d7.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94751"} +{"video_file": "XlAbMZu4Bo_39027431.mp4", "openreview_id": "XlAbMZu4Bo", "slideslive_id": 39027431, "venue": "nips2024", "title": "Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length", "status": "Poster", "keywords": "Mega;Efficient Architecture;Long Sequence Modeling;Unlimited Context Length", "tldr": "Megalodon: Efficient Long-Context LLM Pretraining and Inference with Unlimited Context Length", "abstract": "The quadratic complexity and weak length extrapolation of Transformers limits their ability to scale to long sequences, and while sub-quadratic solutions like linear attention and state space models exist, they empirically underperform Transformers in pretraining efficiency and downstream task accuracy. We introduce MEGALODON, an neural architecture for efficient sequence modeling with unlimited context length. MEGALODON inherits the architecture of MEGA (exponential moving average with gated attention), and further introduces multiple technical components to improve its capability and stability, including complex exponential moving average (CEMA), timestep normalization layer, normalized attention mechanism and pre-norm with two-hop residual configuration. In a controlled head-to-head comparison with LLAMA2, MEGALODON achieves better efficiency than Transformer in the scale of 7 billion parameters and 2 trillion training tokens. MEGALODON reaches a training loss of 1.70, landing mid-way between LLAMA2-7B (1.75) and LLAMA2-13B (1.67). 
This result is robust throughout a wide range of benchmarks, where MEGALODON consistently outperforms Transformers across different tasks, domains, and modalities.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/94748"} +{"video_file": "Xo1Yqyw7Yx_39025216.mp4", "openreview_id": "Xo1Yqyw7Yx", "slideslive_id": 39025216, "venue": "nips2024", "title": "Enabling Adaptive Agent Training in Open-Ended Simulators by Targeting Diversity", "status": "Poster", "keywords": "diversity;meta reinforcement learning;meta-RL;reinforcement learning;adaptation;adaptive;agents;open-endedness;genotypes;phenotypes;simulators;simulation;generalization;meta-reinforcement", "tldr": "DIVA is an evolutionary approach for generating meaningfully diverse meta-RL training tasks in truly open-ended simulators.", "abstract": "The wider application of end-to-end learning methods to embodied decision-making domains remains bottlenecked by their reliance on a superabundance of training data representative of the target domain. Meta-reinforcement learning (meta-RL) approaches abandon the aim of zero-shot generalization\u2014the goal of standard reinforcement learning (RL)\u2014in favor of few-shot adaptation, and thus hold promise for bridging larger generalization gaps. While learning this meta-level adaptive behavior still requires substantial data, efficient environment simulators approaching real-world complexity are growing in prevalence. Even so, hand-designing sufficiently diverse and numerous simulated training tasks for these complex domains is prohibitively labor-intensive. Domain randomization (DR) and procedural generation (PG), offered as solutions to this problem, require simulators to possess carefully-defined parameters which directly translate to meaningful task diversity\u2014a similarly prohibitive assumption. In this work, we present DIVA, an evolutionary approach for generating diverse training tasks in such complex, open-ended simulators. Like unsupervised environment design (UED) methods, DIVA can be applied to arbitrary parameterizations, but can additionally incorporate realistically-available domain knowledge\u2014thus inheriting the flexibility and generality of UED, and the supervised structure embedded in well-designed simulators exploited by DR and PG. Our empirical results showcase DIVA's unique ability to overcome complex parameterizations and successfully train adaptive agent behavior, far outperforming competitive baselines from prior literature. These findings highlight the potential of such semi-supervised environment design (SSED) approaches, of which DIVA is the first humble constituent, to enable training in realistic simulated domains, and produce more robust and capable adaptive agents. 
Our code is available at https://github.com/robbycostales/diva.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94744"} +{"video_file": "Xq9HQf7VNV_39025298.mp4", "openreview_id": "Xq9HQf7VNV", "slideslive_id": 39025298, "venue": "nips2024", "title": "Principled Probabilistic Imaging using Diffusion Models as Plug-and-Play Priors", "status": "Poster", "keywords": "Computational imaging;Inverse problems;Diffusion models", "tldr": "We propose a new method for sampling the posterior distributions of inverse problems using diffusion models in a rigorous way.", "abstract": "Diffusion models (DMs) have recently shown outstanding capabilities in modeling complex image distributions, making them expressive image priors for solving Bayesian inverse problems. However, most existing DM-based methods rely on approximations in the generative process to be generic to different inverse problems, leading to inaccurate sample distributions that deviate from the target posterior defined within the Bayesian framework. To harness the generative power of DMs while avoiding such approximations, we propose a Markov chain Monte Carlo algorithm that performs posterior sampling for general inverse problems by reducing it to sampling the posterior of a Gaussian denoising problem. Crucially, we leverage a general DM formulation as a unified interface that allows for rigorously solving the denoising problem with a range of state-of-the-art DMs. We demonstrate the effectiveness of the proposed method on six inverse problems (three linear and three nonlinear), including a real-world black hole imaging problem. Experimental results indicate that our proposed method offers more accurate reconstructions and posterior estimation compared to existing DM-based imaging inverse methods.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94741"} +{"video_file": "XrK4JK2jBr_39027819.mp4", "openreview_id": "XrK4JK2jBr", "slideslive_id": 39027819, "venue": "nips2024", "title": "Designs for Enabling Collaboration in Human-Machine Teaming via Interactive and Explainable Systems", "status": "Poster", "keywords": "Human-Machine Teaming;Adaptive AI", "tldr": "We develop approaches that enable iterative, mixed-initiative team development allowing end- users to interactively reprogram interpretable AI teammates and summarize our user study findings into guidelines for future research.", "abstract": "Collaborative robots and machine learning-based virtual agents are increasingly entering the human workspace with the aim of increasing productivity and enhancing safety. Despite this, we show in a ubiquitous experimental domain, Overcooked-AI, that state-of-the-art techniques for human-machine teaming (HMT), which rely on imitation or reinforcement learning, are brittle and result in a machine agent that aims to decouple the machine and human\u2019s actions to act independently rather than in a synergistic fashion. To remedy this deficiency, we develop HMT approaches that enable iterative, mixed-initiative team development allowing end-users to interactively reprogram interpretable AI teammates. Our 50-subject study provides several findings that we summarize into guidelines. 
While all approaches underperform a simple collaborative heuristic (a critical, negative result for learning-based methods), we find that white-box approaches supported by interactive modification can lead to significant team development, outperforming white-box approaches alone, and that black-box approaches are easier to train and result in better HMT performance highlighting a tradeoff between explainability and interactivity versus ease-of-training. Together, these findings present three important future research directions: 1) Improving the ability to generate collaborative agents with white-box models, 2) Better learning methods to facilitate collaboration rather than individualized coordination, and 3) Mixed-initiative interfaces that enable users, who may vary in ability, to improve collaboration.", "primary_area": "human-AI_interaction", "site": "https://neurips.cc/virtual/2024/poster/94740"} +{"video_file": "XsNA2b8GPz_39025759.mp4", "openreview_id": "XsNA2b8GPz", "slideslive_id": 39025759, "venue": "nips2024", "title": "Adaptive Sampling for Efficient Softmax Approximation", "status": "Poster", "keywords": "Multi-armed bandits;adaptive;softmax;attention", "tldr": "Adaptive sampling-based algorithm makes softmax computation sublinear in feature dimension while maintaining PAC guarantees.", "abstract": "The softmax function is ubiquitous in machine learning and optimization applications. Computing the full softmax evaluation of a matrix-vector product can be computationally expensive in high-dimensional settings. In many applications, however, it is sufficient to calculate only the top few outputs of the softmax function. In this work, we present an algorithm, dubbed AdaptiveSoftmax, that adaptively computes the top k softmax values more efficiently than the full softmax computation, with probabilistic guarantees. We demonstrate the sample efficiency improvements afforded by AdaptiveSoftmax on real and synthetic data to corroborate our theoretical results. AdaptiveSoftmax yields >10x gain over full softmax computation on most datasets, yielding up to 30x improvement for Mistral7B evaluated on the Wikitext dataset. The adaptive method we propose for estimating the partition function (the softmax denominator) is of independent interest and can be used in other applications such as kernel density estimation.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/94739"} +{"video_file": "XxSME6GE1G_39028388.mp4", "openreview_id": "XxSME6GE1G", "slideslive_id": 39028388, "venue": "nips2024", "title": "TAIA: Large Language Models are Out-of-Distribution Data Learners", "status": "Poster", "keywords": "large language models;OOD generalization;supervised fine-tuning", "tldr": "A fine-tuning method that can leverage OOD data for supervised fine-tuning.", "abstract": "Fine-tuning on task-specific question-answer pairs is a predominant method for enhancing the performance of instruction-tuned large language models (LLMs) on downstream tasks. However, in certain specialized domains, such as healthcare or harmless content generation, it is nearly impossible to obtain a large volume of high-quality data that matches the downstream distribution. To improve the performance of LLMs in data-scarce domains with domain-mismatched data, we re-evaluated the Transformer architecture and discovered that not all parameter updates during fine-tuning contribute positively to downstream performance. 
Our analysis reveals that within the self-attention and feed-forward networks, only the fine-tuned attention parameters are particularly beneficial when the training set's distribution does not fully align with the test set. Based on this insight, we propose an effective inference-time intervention method: \\uline{T}raining \\uline{A}ll parameters but \\uline{I}nferring with only \\uline{A}ttention (TAIA). We empirically validate TAIA using two general instruction-tuning datasets and evaluate it on seven downstream tasks involving math, reasoning, and knowledge understanding across LLMs of different parameter sizes and fine-tuning techniques. Our comprehensive experiments demonstrate that TAIA achieves superior improvements compared to both the fully fine-tuned model and the base model in most scenarios, with significant performance gains. The high tolerance of TAIA to data mismatches makes it resistant to jailbreaking tuning and enhances specialized tasks using general data. Code is available in \\url{https://github.com/pixas/TAIA_LLM}.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94733"} +{"video_file": "Y0EfJJeb4V_39028538.mp4", "openreview_id": "Y0EfJJeb4V", "slideslive_id": 39028538, "venue": "nips2024", "title": "Goal Reduction with Loop-Removal Accelerates RL and Models Human Brain Activity in Goal-Directed Learning", "status": "Spotlight", "keywords": "goal-conditioned RL;planning;multi-task RL;vmPFC;goal-directed behavior;cognitive control;spatial navigation", "tldr": "We introduced a new goal reduction mechanism that outperforms RL algorithms in multi-goal tasks and models brain activity.", "abstract": "Goal-directed planning presents a challenge for classical RL algorithms due to the vastness of the combinatorial state and goal spaces, while humans and animals adapt to complex environments, especially with diverse, non-stationary objectives, often employing intermediate goals for long-horizon tasks. Here, we propose a goal reduction mechanism for effectively deriving subgoals from arbitrary and distant original goals, using a novel loop-removal technique. The product of the method, called goal-reducer, distills high-quality subgoals from a replay buffer, all without the need for prior global environmental knowledge. Simulations show that the goal-reducer can be integrated into RL frameworks like Deep Q-learning and Soft Actor-Critic. It accelerates performance in both discrete and continuous action space tasks, such as grid world navigation and robotic arm manipulation, relative to the corresponding standard RL models. Moreover, the goal-reducer, when combined with a local policy, without iterative training, outperforms its integrated deep RL counterparts in solving a navigation task. This goal reduction mechanism also models human problem-solving. Comparing the model's performance and activation with human behavior and fMRI data in a treasure hunting task, we found matching representational patterns between an goal-reducer agent's components and corresponding human brain areas, particularly the vmPFC and basal ganglia. 
The results suggest that humans may use a similar computational framework for goal-directed behaviors.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/94732"} +{"video_file": "Y13gSfTjGr_39024954.mp4", "openreview_id": "Y13gSfTjGr", "slideslive_id": 39024954, "venue": "nips2024", "title": "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations", "status": "Spotlight", "keywords": "Scaling Laws;Large Language Models;Learning Rate Schedules;Weight Averaging", "tldr": "We show reliable scaling behavior of an alternative LR schedule as well as stochastic weight averaging for LLM training, thereby making scaling law experiments more accessible.", "abstract": "Scale has become a main ingredient in obtaining strong machine learning models. As a result, understanding a model's scaling properties is key to effectively designing both the right training setup as well as future generations of architectures. In this work, we argue that scale and training research has been needlessly complex due to reliance on the cosine schedule, which prevents training across different lengths for the same model size. We investigate the training behavior of a direct alternative --- constant learning rate and cooldowns --- and find that it scales predictably and reliably similar to cosine. Additionally, we show that stochastic weight averaging yields improved performance along the training trajectory, without additional training costs, across different scales. Importantly, with these findings we demonstrate that scaling experiments can be performed with significantly reduced compute and GPU hours by utilizing fewer but reusable training runs. Our code is available at https://github.com/epfml/schedules-and-scaling/.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/94731"} +{"video_file": "Y1rOWS2Z4i_39026070.mp4", "openreview_id": "Y1rOWS2Z4i", "slideslive_id": 39026070, "venue": "nips2024", "title": "Long-Horizon Planning for Multi-Agent Robots in Partially Observable Environments", "status": "Poster", "keywords": "multi-agent robotics;large language models", "tldr": "a VLM-based planner for long-horizon multi-agent robotics in partially observable setting that does not rely on privileged information from the simulator/oracle.", "abstract": "The ability of Language Models (LMs) to understand natural language makes them a powerful tool for parsing human instructions into task plans for autonomous robots. Unlike traditional planning methods that rely on domain-specific knowledge and handcrafted rules, LMs generalize from diverse data and adapt to various tasks with minimal tuning, acting as a compressed knowledge base. However, LMs in their standard form face challenges with long-horizon tasks, particularly in partially observable multi-agent settings. We propose an LM-based Long-Horizon Planner for Multi-Agent Robotics (LLaMAR), a cognitive architecture for planning that achieves state-of-the-art results in long-horizon tasks within partially observable environments. LLaMAR employs a plan-act-correct-verify framework, allowing self-correction from action execution feedback without relying on oracles or simulators. Additionally, we present MAP-THOR, a comprehensive test suite encompassing household tasks of varying complexity within the AI2-THOR environment. 
Experiments show that LLaMAR achieves a 30% higher success rate than other state-of-the-art LM-based multi-agent planners in MAP-THOR and Search & Rescue tasks. Code can be found at https://github.com/nsidn98/LLaMAR", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/94727"} +{"video_file": "Y2I0Fy4sm7_39026902.mp4", "openreview_id": "Y2I0Fy4sm7", "slideslive_id": 39026902, "venue": "nips2024", "title": "SpeedLoader: An I/O efficient scheme for heterogeneous and distributed LLM operation", "status": "Poster", "keywords": "Heterogenous Computing;Large Language Model;ZeRO;FSDP;Offload", "tldr": "SpeedLoader can compute multiple batches within one Forward-Backward pass.", "abstract": "With the surging growth of model parameters, foundation models pose unprecedented challenges to traditional computational infrastructures. These large models inherently require substantial accelerator memory to accommodate massive tensors during pre-training, fine-tuning, and even inference stages, making it even more challenging to deploy a model with restricted computational resources. Given this challenge, distribution and offloading the model states are two major solutions. Partitioning the required states to participating workers, and storing them in lower speed media, such as host DRAM and block devices, largely alleviate the accelerator memory pressure. However, the prohibitive costs of tensor communication render it a theoretically plausible yet practically inefficient solution. Previous efforts to improve efficiency include maximizing rematerialization and employing chunk-based tensor management to reduce host-device communication. Despite these efforts, the reported training throughput only achieves 36.54% of model FLOPs utilization (MFUs), still not comparable to full on-device training. In this work, we redesign the data flow of heterogeneous hardware and sharded model training to minimize the excessive communication overhead. Our proposed scheme significantly enhances training and inference throughput of large language models under restrictive computational resources. We confirmed a large leap in effective compute time by looking into the kernel-level runtime behavior of our trials, where the MFUs can achieve up to 51%. Compared to the state-of-the-art approach, our framework robustly achieves remarkable speedups from 3x to 30x in multiple distributed heterogeneous training setups and inference speedups of 1.5x to 2.35x without compromising arithmetic precision.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/94726"} +{"video_file": "Y2NWKlrDrX_39025708.mp4", "openreview_id": "Y2NWKlrDrX", "slideslive_id": 39025708, "venue": "nips2024", "title": "Conformal Inverse Optimization", "status": "Poster", "keywords": "inverse optimization;robust optimization;algorithm aversion;data-driven decision making", "tldr": "We propose a principled approach to learn an uncertainty set from decision data and then solve a robust optimization model to prescribe high-quality and intuitive decisions.", "abstract": "Inverse optimization has been increasingly used to estimate unknown parameters in an optimization model based on decision data. We show that such a point estimation is insufficient in a prescriptive setting where the estimated parameters are used to prescribe new decisions. The prescribed decisions may be low-quality and misaligned with human intuition and thus are unlikely to be adopted. 
To tackle this challenge, we propose conformal inverse optimization, which seeks to learn an uncertainty set for the unknown parameters and then solve a robust optimization model to prescribe new decisions. Under mild assumptions, we show that our method enjoys provable guarantees on solution quality, as evaluated using both the ground-truth parameters and the decision maker's perception of the unknown parameters. Our method demonstrates strong empirical performance compared to classic inverse optimization.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94725"} +{"video_file": "Y4L8GQXZZO_39027934.mp4", "openreview_id": "Y4L8GQXZZO", "slideslive_id": 39027934, "venue": "nips2024", "title": "Federated Learning from Vision-Language Foundation Models: Theoretical Analysis and Method", "status": "Poster", "keywords": "Federated Learning;Vision-Language Foundation Models;Prompt Learning;Theoretical Analysis", "tldr": "This paper constructs a theoretical analysis framework for prompt-based federated learning via feature learning theory, and introduces a prompt portfolio mechanism to address severe data heterogeneity and balance generalization and personalization.", "abstract": "Integrating pretrained vision-language foundation models like CLIP into federated learning has attracted significant attention for enhancing generalization across diverse tasks. Typically, federated learning of vision-language models employs prompt learning to reduce communication and computational costs, i.e., prompt-based federated learning. However, there is limited theoretical analysis to understand the performance of prompt-based federated learning. In this work, we construct a theoretical analysis framework for prompt-based federated learning via feature learning theory. Specifically, we monitor the evolution of signal learning and noise memorization in prompt-based federated learning, demonstrating that performance can be assessed by the ratio of task-relevant to task-irrelevant coefficients. Furthermore, we draw an analogy between income and risk in portfolio optimization and the task-relevant and task-irrelevant terms in feature learning. Leveraging inspiration from portfolio optimization that combining two independent assets will maintain the income while reducing the risk, we introduce two prompts: global prompt and local prompt to construct a prompt portfolio to balance the generalization and personalization. Consequently, we showed the performance advantage of the prompt portfolio and derived the optimal mixing coefficient. These theoretical claims have been further supported by empirical experiments.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94723"} +{"video_file": "Y4mBaZu4vy_39027715.mp4", "openreview_id": "Y4mBaZu4vy", "slideslive_id": 39027715, "venue": "nips2024", "title": "The Importance of Being Scalable: Improving the Speed and Accuracy of Neural Network Interatomic Potentials Across Chemical Domains", "status": "Poster", "keywords": "Neural Network Interatomic Potentials;Machine Learning Force Fields;Scaling;Graph Neural Networks;Attention", "tldr": "We develop a neural network interatomic potential architecture that is optimized for scalability and efficiency, achieving state-of-the-art results on a wide range of chemical systems including OC20, OC22, MPTrj, and SPICE.", "abstract": "Scaling has been a critical factor in improving model performance and generalization across various fields of machine learning. 
It involves how a model\u2019s performance changes with increases in model size or input data, as well as how efficiently computational resources are utilized to support this growth. Despite successes in scaling other types of machine learning models, the study of scaling in Neural Network Interatomic Potentials (NNIPs) remains limited. NNIPs act as surrogate models for ab initio quantum mechanical calculations, predicting the energy and forces between atoms in molecules and materials based on atomic configurations. The dominant paradigm in this field is to incorporate numerous physical domain constraints into the model, such as symmetry constraints like rotational equivariance. We contend that these increasingly complex domain constraints inhibit the scaling ability of NNIPs, and such strategies are likely to cause model performance to plateau in the long run. In this work, we take an alternative approach and start by systematically studying NNIP scaling properties and strategies. Our findings indicate that scaling the model through attention mechanisms is both efficient and improves model expressivity. These insights motivate us to develop an NNIP architecture designed for scalability: the Efficiently Scaled Attention Interatomic Potential (EScAIP). EScAIP leverages a novel multi-head self-attention formulation within graph neural networks, applying attention at the neighbor-level representations. Implemented with highly-optimized attention GPU kernels, EScAIP achieves substantial gains in efficiency---at least 10x speed up in inference time, 5x less in memory usage---compared to existing NNIP models. EScAIP also achieves state-of-the-art performance on a wide range of datasets including catalysts (OC20 and OC22), molecules (SPICE), and materials (MPTrj). After training EScAIP, we test its ability to learn rotational equivariance by predicting forces on new, unseen atomistic systems before and after rotation. The model's force predictions exactly match the rotated forces, suggesting that it has precisely learned rotational equivariance. Finally, we emphasize that our approach should be thought of as a philosophy rather than a specific model, representing a proof-of-concept towards developing general-purpose NNIPs that achieve better expressivity through scaling, and continue to scale efficiently with increased computational resources and training data.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/94722"} +{"video_file": "Y4tHp5Jilp_39027875.mp4", "openreview_id": "Y4tHp5Jilp", "slideslive_id": 39027875, "venue": "nips2024", "title": "A Simple yet Universal Framework for Depth Completion", "status": "Poster", "keywords": "Few-shot Depth Completion;hyperbolic representation;foundation model", "tldr": "We propose a universal few-shot learner for depth completion with arbitrary sensor.", "abstract": "Consistent depth estimation across diverse scenes and sensors is a crucial challenge in computer vision, especially when deploying machine learning models in the real world. Traditional methods depend heavily on extensive pixel-wise labeled data, which is costly and labor-intensive to acquire, and frequently have difficulty in scale issues on various depth sensors. In response, we define Universal Depth Completion (UniDC) problem. We also present a baseline architecture, a simple yet effective approach tailored to estimate scene depth across a wide range of sensors and environments using minimal labeled data. 
Our approach addresses two primary challenges: generalizable knowledge of unseen scene configurations and strong adaptation to arbitrary depth sensors with various specifications. To enhance versatility in the wild, we utilize a foundation model for monocular depth estimation that provides a comprehensive understanding of 3D structures in scenes. Additionally, for fast adaptation to off-the-shelf sensors, we generate a pixel-wise affinity map based on the knowledge from the foundation model. We then adjust depth information from arbitrary sensors to the monocular depth along with the constructed affinity. Furthermore, to boost up both the adaptability and generality, we embed the learned features into hyperbolic space, which builds implicit hierarchical structures of 3D data from fewer examples. Extensive experiments demonstrate the proposed method's superior generalization capabilities for UniDC problem over state-of-the-art depth completion. Source code is publicly available at https://github.com/JinhwiPark/UniDC.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94721"} +{"video_file": "Y7HPB7pL1f_39027491.mp4", "openreview_id": "Y7HPB7pL1f", "slideslive_id": 39027491, "venue": "nips2024", "title": "Interactive Deep Clustering via Value Mining", "status": "Poster", "keywords": "Deep Clustering; Interactive Clustering", "tldr": "We propose incorporating external user interaction to tackle hard samples at cluster boundaries, which existing deep clustering methods fail to discriminate internally from the data itself.", "abstract": "In the absence of class priors, recent deep clustering methods resort to data augmentation and pseudo-labeling strategies to generate supervision signals. Though achieved remarkable success, existing works struggle to discriminate hard samples at cluster boundaries, mining which is particularly challenging due to their unreliable cluster assignments. To break such a performance bottleneck, we propose incorporating user interaction to facilitate clustering instead of exhaustively mining semantics from the data itself. To be exact, we present Interactive Deep Clustering (IDC), a plug-and-play method designed to boost the performance of pre-trained clustering models with minimal interaction overhead. More specifically, IDC first quantitatively evaluates sample values based on hardness, representativeness, and diversity, where the representativeness avoids selecting outliers and the diversity prevents the selected samples from collapsing into a small number of clusters. IDC then queries the cluster affiliations of high-value samples in a user-friendly manner. Finally, it utilizes the user feedback to finetune the pre-trained clustering model. Extensive experiments demonstrate that IDC could remarkably improve the performance of various pre-trained clustering models, at the expense of low user interaction costs. 
The code could be accessed at pengxi.me.", "primary_area": "human-AI_interaction", "site": "https://neurips.cc/virtual/2024/poster/94716"} +{"video_file": "YCKuXkw6UL_39024614.mp4", "openreview_id": "YCKuXkw6UL", "slideslive_id": 39024614, "venue": "nips2024", "title": "Acoustic Volume Rendering for Neural Impulse Response Fields", "status": "Spotlight", "keywords": "Acoustic signals;Room impulse response;Neural radiance field;Wave propagation", "tldr": "This paper introduces acoustic volume rendering for impulse response synthesis and inherently enforces multi-pose consistency.", "abstract": "Realistic audio synthesis that captures accurate acoustic phenomena is essential for creating immersive experiences in virtual and augmented reality. Synthesizing the sound received at any position relies on the estimation of impulse response (IR), which characterizes how sound propagates in one scene along different paths before arriving at the listener position. In this paper, we present Acoustic Volume Rendering (AVR), a novel approach that adapts volume rendering techniques to model acoustic impulse responses. While volume rendering has been successful in modeling radiance fields for images and neural scene representations, IRs present unique challenges as time-series signals. To address these challenges, we introduce frequency-domain volume rendering and use spherical integration to fit the IR measurements. Our method constructs an impulse response field that inherently encodes wave propagation principles and achieves state of-the-art performance in synthesizing impulse responses for novel poses. Experiments show that AVR surpasses current leading methods by a substantial margin. Additionally, we develop an acoustic simulation platform, AcoustiX, which provides more accurate and realistic IR simulations than existing simulators. Code for AVR and AcoustiX are available at https://zitonglan.github.io/avr.", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/94712"} +{"video_file": "YIB7REL8UC_39025590.mp4", "openreview_id": "YIB7REL8UC", "slideslive_id": 39025590, "venue": "nips2024", "title": "Transformers Represent Belief State Geometry in their Residual Stream", "status": "Poster", "keywords": "Interpretability;Computational Mechanics;Belief State;Features;Representation", "tldr": "Transformers trained on next-token prediction learn to represent the geometry of belief updating over hidden states of the data-generating process in their residual stream.", "abstract": "What computational structure are we building into large language models when we train them on next-token prediction? Here, we present evidence that this structure is given by the meta-dynamics of belief updating over hidden states of the data- generating process. Leveraging the theory of optimal prediction, we anticipate and then find that belief states are linearly represented in the residual stream of transformers, even in cases where the predicted belief state geometry has highly nontrivial fractal structure. We investigate cases where the belief state geometry is represented in the final residual stream or distributed across the residual streams of multiple layers, providing a framework to explain these observations. Furthermore we demonstrate that the inferred belief states contain information about the entire future, beyond the local next-token prediction that the transformers are explicitly trained on. 
Our work provides a general framework connecting the structure of training data to the geometric structure of activations inside transformers.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/94708"} +{"video_file": "YIOvR40hSo_39025219.mp4", "openreview_id": "YIOvR40hSo", "slideslive_id": 39025219, "venue": "nips2024", "title": "DiffPano: Scalable and Consistent Text to Panorama Generation with Spherical Epipolar-Aware Diffusion", "status": "Poster", "keywords": "Spherical Epipolar-Aware Diffusion;Text to Multi-View Panoramas Generation", "tldr": "We propose a novel text-driven panorama generation framework that leverages multi-view consistency between generated images to achieve scalable, consistent, and diverse panoramic scene generation.", "abstract": "Diffusion-based methods have achieved remarkable achievements in 2D image or 3D object generation, however, the generation of 3D scenes and even $360^{\\circ}$ images remains constrained, due to the limited number of scene datasets, the complexity of 3D scenes themselves, and the difficulty of generating consistent multi-view images. To address these issues, we first establish a large-scale panoramic video-text dataset containing millions of consecutive panoramic keyframes with corresponding panoramic depths, camera poses, and text descriptions. Then, we propose a novel text-driven panoramic generation framework, termed DiffPano, to achieve scalable, consistent, and diverse panoramic scene generation. Specifically, benefiting from the powerful generative capabilities of stable diffusion, we fine-tune a single-view text-to-panorama diffusion model with LoRA on the established panoramic video-text dataset. We further design a spherical epipolar-aware multi-view diffusion model to ensure the multi-view consistency of the generated panoramic images. Extensive experiments demonstrate that DiffPano can generate scalable, consistent, and diverse panoramic images with given unseen text descriptions and camera poses.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94707"} +{"video_file": "YNRYWZHmKY_39028511.mp4", "openreview_id": "YNRYWZHmKY", "slideslive_id": 39028511, "venue": "nips2024", "title": "A Cat Is A Cat (Not A Dog!): Unraveling Information Mix-ups in Text-to-Image Encoders through Causal Analysis and Embedding Optimization", "status": "Poster", "keywords": "Causal manner;Embedding optimization;Information mix-ups;Information loss;Text-to-image generative model", "tldr": "Analyzing the effect of causal manner and text embedding in text encoders inside text-to-image diffusion models", "abstract": "This paper analyzes the impact of causal manner in the text encoder of text-to-image (T2I) diffusion models, which can lead to information bias and loss. Previous works have focused on addressing the issues through the denoising process. However, there is no research discussing how text embedding contributes to T2I models, especially when generating more than one object. In this paper, we share a comprehensive analysis of text embedding: i) how text embedding contributes to the generated images and ii) why information gets lost and biases towards the first-mentioned object. Accordingly, we propose a simple but effective text embedding balance optimization method, which is training-free, with an improvement of 125.42% on information balance in stable diffusion. 
Furthermore, we propose a new automatic evaluation metric that quantifies information loss more accurately than existing methods, achieving 81% concordance with human assessments. This metric effectively measures the presence and accuracy of objects, addressing the limitations of current distribution scores like CLIP's text-image similarities.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/94705"} +{"video_file": "YO6GVPUrKN_39027529.mp4", "openreview_id": "YO6GVPUrKN", "slideslive_id": 39027529, "venue": "nips2024", "title": "On the Limitations of Fractal Dimension as a Measure of Generalization", "status": "Poster", "keywords": "Generalization;Optimization;Persistent Homology;Fractal Dimension", "tldr": "We experimentally and statistically show that PH dimension does not always correlate with generalization gap.", "abstract": "Bounding and predicting the generalization gap of overparameterized neural networks remains a central open problem in theoretical machine learning. There is a recent and growing body of literature that proposes the framework of fractals to model optimization trajectories of neural networks, motivating generalization bounds and measures based on the fractal dimension of the trajectory. Notably, the persistent homology dimension has been proposed to correlate with the generalization gap. This paper performs an empirical evaluation of these persistent homology-based generalization measures, with an in-depth statistical analysis. Our study reveals confounding effects in the observed correlation between generalization and topological measures due to the variation of hyperparameters. We also observe that fractal dimension fails to predict generalization of models trained from poor initializations. We lastly reveal the intriguing manifestation of model-wise double descent in these topological generalization measures. Our work forms a basis for a deeper investigation of the causal relationships between fractal geometry, topological data analysis, and neural network optimization.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94703"} +{"video_file": "YPqHSTSoFs_39026253.mp4", "openreview_id": "YPqHSTSoFs", "slideslive_id": 39026253, "venue": "nips2024", "title": "Cross-model Control: Improving Multiple Large Language Models in One-time Training", "status": "Poster", "keywords": "Large Language Model;Fine-tune;model transfer", "tldr": "We propose Cross-model Control (CMC), a method that could improve multiple LLMs in one-time training with a portable tiny language model.", "abstract": "The number of large language models (LLMs) with varying parameter scales and vocabularies is increasing. While they deliver powerful performance, they also face a set of common optimization needs to meet specific requirements or standards, such as instruction following or avoiding the output of sensitive information from the real world. However, how to reuse the fine-tuning outcomes of one model to other models to reduce training costs remains a challenge. To bridge this gap, we introduce Cross-model Control (CMC), a method that improves multiple LLMs in one-time training with a portable tiny language model. Specifically, we have observed that the logit shift before and after fine-tuning is remarkably similar across different models. Based on this insight, we incorporate a tiny language model with a minimal number of parameters. 
By training alongside a frozen template LLM, the tiny model gains the capability to alter the logits output by the LLMs. To make this tiny language model applicable to models with different vocabularies, we propose a novel token mapping strategy named PM-MinED. We have conducted extensive experiments on instruction tuning and unlearning tasks, demonstrating the effectiveness of CMC. Our code is available at https://github.com/wujwyi/CMC", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94699"} +{"video_file": "YSs1z5udBY_39024609.mp4", "openreview_id": "YSs1z5udBY", "slideslive_id": 39024609, "venue": "nips2024", "title": "Reawakening knowledge: Anticipatory recovery from catastrophic interference via structured training", "status": "Poster", "keywords": "large language models;LLMs;catastrophic interference;online learning;continual learning;anticipatory recovery;cyclic training;structured training sequences", "tldr": "When we fine-tune LLMs cyclically in a fixed repeated sequence of documents the model can recover from catastrophic interference before seeing the same document again.", "abstract": "We explore the training dynamics of neural networks in a structured non-IID setting where documents are presented cyclically in a fixed, repeated sequence. Typically, networks suffer from catastrophic interference when training on a sequence of documents; however, we discover a curious and remarkable property of LLMs finetuned sequentially in this setting: they exhibit anticipatory behavior, recovering from the forgetting on documents before seeing them again. The behavior emerges and becomes more robust as the architecture scales up its number of parameters. Through comprehensive experiments and visualizations, we uncover new insights into training over-parameterized networks in structured environments.", "primary_area": "online_learning", "site": "https://neurips.cc/virtual/2024/poster/94697"} +{"video_file": "YTHJ8O6SCB_39028584.mp4", "openreview_id": "YTHJ8O6SCB", "slideslive_id": 39028584, "venue": "nips2024", "title": "SpatialPIN: Enhancing Spatial Reasoning Capabilities of Vision-Language Models through Prompting and Interacting 3D Priors", "status": "Poster", "keywords": "VLM Spatial Reasoning;Zero-Shot", "tldr": "We present SpatialPIN, a framework designed to enhance the spatial reasoning capabilities of VLMs through prompting and interacting with priors from multiple 3D foundation models in a zero-shot, training-free manner.", "abstract": "Current state-of-the-art spatial reasoning-enhanced VLMs are trained to excel at spatial visual question answering (VQA). However, we believe that higher-level 3D-aware tasks, such as articulating dynamic scene changes and motion planning, require a fundamental and explicit 3D understanding beyond current spatial VQA datasets. In this work, we present SpatialPIN, a framework designed to enhance the spatial reasoning capabilities of VLMs through prompting and interacting with priors from multiple 3D foundation models in a zero-shot, training-free manner. 
Extensive experiments demonstrate that our spatial reasoning-imbued VLM performs well on various forms of spatial VQA and can extend to help in various downstream robotics tasks such as pick and stack and trajectory planning.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94696"} +{"video_file": "YWTpmLktMj_39026767.mp4", "openreview_id": "YWTpmLktMj", "slideslive_id": 39026767, "venue": "nips2024", "title": "Transductive Learning is Compact", "status": "Poster", "keywords": "Sample Complexity;Compactness;One-Inclusion Graphs;Metric Space;Transductive Learning;PAC Learning", "tldr": "We demonstrate that the transductive sample complexity of learning a hypothesis class is exactly equal to the sample complexity of learning its most difficult \"finite projection,\" for a wide range of losses.", "abstract": "We demonstrate a compactness result holding broadly across supervised learning with a general class of loss functions: Any hypothesis class\nH\nis learnable with transductive sample complexity\nm\nprecisely when all of its finite projections are learnable with sample complexity\nm\n. We prove that this exact form of compactness holds for realizable and agnostic learning with respect to all proper metric loss functions (e.g., any norm on\nR\nd\n) and any continuous loss on a compact space (e.g., cross-entropy, squared loss). For realizable learning with improper metric losses, we show that exact compactness of sample complexity can fail, and provide matching upper and lower bounds of a factor of 2 on the extent to which such sample complexities can differ. We conjecture that larger gaps are possible for the agnostic case. Furthermore, invoking the equivalence between sample complexities in the PAC and transductive models (up to lower order factors, in the realizable case) permits us to directly port our results to the PAC model, revealing an almost-exact form of compactness holding broadly in PAC learning.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94694"} +{"video_file": "YYY5lzE547_39026376.mp4", "openreview_id": "YYY5lzE547", "slideslive_id": 39026376, "venue": "nips2024", "title": "Warm-starting Push-Relabel", "status": "Poster", "keywords": "algorithms with predictions;max flow;beyond worst-case analysis", "tldr": "We provide the first theoretical guarantees for seeding the Push-Relabel algorithm with a predicted flow, and we validate this theory by showing speed ups empirically.", "abstract": "Push-Relabel is one of the most celebrated network flow algorithms. Maintaining a pre-flow that saturates a cut, it enjoys better theoretical and empirical running time than other flow algorithms, such as Ford-Fulkerson. In practice, Push-Relabel is even faster than what theoretical guarantees can promise, in part because of the use of good heuristics for seeding and updating the iterative algorithm. However, it remains unclear how to run Push-Relabel on an arbitrary initialization that is not necessarily a pre-flow or cut-saturating. We provide the first theoretical guarantees for warm-starting Push-Relabel with a predicted flow, where our learning-augmented version benefits from fast running time when the predicted flow is close to an optimal flow, while maintaining robust worst-case guarantees. 
Interestingly, our algorithm uses the gap relabeling heuristic, which has long been employed in practice, even though prior to our work there was no rigorous theoretical justification for why it can lead to run-time improvements. We then show our algorithmic framework works well in practice, as our warm-start version of Push-Relabel improves over the cold-start version by a larger and larger percentage as the size of the image increases.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94691"}
{"video_file": "YYnP3Xpv3y_39025394.mp4", "openreview_id": "YYnP3Xpv3y", "slideslive_id": 39025394, "venue": "nips2024", "title": "Learning Neural Contracting Dynamics: Extended Linearization and Global Guarantees", "status": "Poster", "keywords": "contraction theory;learning from demonstration;dynamical systems", "tldr": "We learn globally guaranteed contracting dynamical systems by parameterizing the extended linearization of the vector field.", "abstract": "Global stability and robustness guarantees in learned dynamical systems are essential to ensure well-behavedness of the systems in the face of uncertainty. We present Extended Linearized Contracting Dynamics (ELCD), the first neural network-based dynamical system with global contractivity guarantees in arbitrary metrics. The key feature of ELCD is a parametrization of the extended linearization of the nonlinear vector field. In its most basic form, ELCD is guaranteed to be (i) globally exponentially stable, (ii) equilibrium contracting, and (iii) globally contracting with respect to some metric. To allow for contraction with respect to more general metrics in the data space, we train diffeomorphisms between the data space and a latent space and enforce contractivity in the latent space, which ensures global contractivity in the data space. We demonstrate the performance of ELCD on the high dimensional LASA, multi-link pendulum, and Rosenbrock datasets.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/94690"}
{"video_file": "YaPhvbGqwO_39024895.mp4", "openreview_id": "YaPhvbGqwO", "slideslive_id": 39024895, "venue": "nips2024", "title": "Mitigating Partial Observability in Sequential Decision Processes via the Lambda Discrepancy", "status": "Poster", "keywords": "reinforcement learning;partial observability;value estimation;memory", "tldr": "Minimizing a discrepancy between different value estimates is beneficial for learning memory under partial observability.", "abstract": "Reinforcement learning algorithms typically rely on the assumption that the environment dynamics and value function can be expressed in terms of a Markovian state representation. However, when state information is only partially observable, how can an agent learn such a state representation, and how can it detect when it has found one? We introduce a metric that can accomplish both objectives, without requiring access to---or knowledge of---an underlying, unobservable state space. Our metric, the \u03bb-discrepancy, is the difference between two distinct temporal difference (TD) value estimates, each computed using TD(\u03bb) with a different value of \u03bb. Since TD(\u03bb=0) makes an implicit Markov assumption and TD(\u03bb=1) does not, a discrepancy between these estimates is a potential indicator of a non-Markovian state representation. 
Indeed, we prove that the \u03bb-discrepancy is exactly zero for all Markov decision processes and almost always non-zero for a broad class of partially observable environments. We also demonstrate empirically that, once detected, minimizing the \u03bb-discrepancy can help with learning a memory function to mitigate the corresponding partial observability. We then train a reinforcement learning agent that simultaneously constructs two recurrent value networks with different \u03bb parameters and minimizes the difference between them as an auxiliary loss. The approach scales to challenging partially observable domains, where the resulting agent frequently performs significantly better (and never performs worse) than a baseline recurrent agent with only a single value network.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94689"} +{"video_file": "YawXY6mWiK_39028768.mp4", "openreview_id": "YawXY6mWiK", "slideslive_id": 39028768, "venue": "nips2024", "title": "A Full-duplex Speech Dialogue Scheme Based On Large Language Model", "status": "Poster", "keywords": "Speech based conversation; large language models; full duplex; instruction tuning", "tldr": "This work formalizes the problem of full-duplex voice conversation with LLM and presents a method towards this goal.", "abstract": "We present a generative dialogue system capable of operating in a full-duplex manner, allowing for seamless interaction. It is based on a large language model (LLM) carefully aligned to be aware of a perception module, a motor function module, and the concept of a simple finite state machine (called neural FSM) with two states. The perception and motor function modules operate in tandem, allowing the system to speak and listen to the user simultaneously. The LLM generates textual tokens for inquiry responses and makes autonomous decisions to start responding to, wait for, or interrupt the user by emitting control tokens to the neural FSM. All these tasks of the LLM are carried out as next token prediction on a serialized view of the dialogue in real-time. In automatic quality evaluations simulating real-life interaction, the proposed system reduces the average conversation response latency by more than threefold compared with LLM-based half-duplex dialogue systems while responding within less than 500 milliseconds in more than 50% of evaluated interactions. Running an LLM with only 8 billion parameters, our system exhibits an 8% higher interruption precision rate than the best available commercial LLM for voice-based dialogue.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94688"} +{"video_file": "YbxFwaSA9Z_39026419.mp4", "openreview_id": "YbxFwaSA9Z", "slideslive_id": 39026419, "venue": "nips2024", "title": "Can Learned Optimization Make Reinforcement Learning Less Difficult?", "status": "Spotlight", "keywords": "Meta-Learning;Reinforcement Learning;Learned Optimization;Deep Learning", "tldr": "We propose OPEN, a method for learning optimizers designed to improve final return in reinforcement learning by tackling the difficulties of plasticity loss, exploration and non-stationarity.", "abstract": "While reinforcement learning (RL) holds great potential for decision making in the real world, it suffers from a number of unique difficulties which often need specific consideration. 
In particular: it is highly non-stationary; suffers from high degrees of plasticity loss; and requires exploration to prevent premature convergence to local optima and maximize return. In this paper, we consider whether learned optimization can help overcome these problems. Our method, Learned Optimization for Plasticity, Exploration and Non-stationarity (OPEN), meta-learns an update rule whose input features and output structure are informed by previously proposed solutions to these difficulties. We show that our parameterization is flexible enough to enable meta-learning in diverse learning contexts, including the ability to use stochasticity for exploration. Our experiments demonstrate that when meta-trained on single and small sets of environments, OPEN outperforms or equals traditionally used optimizers. Furthermore, OPEN shows strong generalization characteristics across a range of environments and agent architectures.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94685"}
{"video_file": "YdfZP7qMzp_39025880.mp4", "openreview_id": "YdfZP7qMzp", "slideslive_id": 39025880, "venue": "nips2024", "title": "GenRec: Unifying Video Generation and Recognition with Diffusion Models", "status": "Poster", "keywords": "video understanding;video generation;diffusion", "tldr": "Unifying Video Generation and Recognition with Diffusion Models", "abstract": "Video diffusion models are able to generate high-quality videos by learning strong spatial-temporal priors on large-scale datasets. In this paper, we aim to investigate whether such priors derived from a generative process are suitable for video recognition, and eventually joint optimization of generation and recognition. Building upon Stable Video Diffusion, we introduce GenRec, the first unified framework trained with a random-frame conditioning process so as to learn generalized spatial-temporal representations. The resulting framework can naturally support generation and recognition, and more importantly is robust even when visual inputs contain limited information. Extensive experiments demonstrate the efficacy of GenRec for both recognition and generation. In particular, GenRec achieves competitive recognition performance, offering 75.8% and 87.2% accuracy on SSV2 and K400, respectively. GenRec also performs the best on class-conditioned image-to-video generation, achieving 46.5 and 49.3 FVD scores on SSV2 and EK-100 datasets. Furthermore, GenRec demonstrates extraordinary robustness in scenarios where only limited frames can be observed. Code will be available at https://github.com/wengzejia1/GenRec.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94684"}
{"video_file": "YfVMcbcDqo_39026684.mp4", "openreview_id": "YfVMcbcDqo", "slideslive_id": 39026684, "venue": "nips2024", "title": "Physics-Regularized Multi-Modal Image Assimilation for Brain Tumor Localization", "status": "Poster", "keywords": "Inverse Problems;System Identification;Physics-Informed;Biomechanical Modeling;Tumor Growth", "tldr": "We propose a physics-regularized learning approach on dynamic discrete meshes to address the complex inverse problem of tumor localization.", "abstract": "Physical models in the form of partial differential equations serve as important priors for many under-constrained problems. One such application is tumor treatment planning, which relies on accurately estimating the spatial distribution of tumor cells within a patient\u2019s anatomy. 
While medical imaging can detect the bulk of a tumor, it cannot capture the full extent of its spread, as low-concentration tumor cells often remain undetectable, particularly in glioblastoma, the most common primary brain tumor. Machine learning approaches struggle to estimate the complete tumor cell distribution due to a lack of appropriate training data. Consequently, most existing methods rely on physics-based simulations to generate anatomically and physiologically plausible estimations. However, these approaches face challenges with complex and unknown initial conditions and are constrained by overly rigid physical models. In this work, we introduce a novel method that integrates data-driven and physics-based cost functions, akin to Physics-Informed Neural Networks (PINNs). However, our approach parametrizes the solution directly on a dynamic discrete mesh, allowing for the effective modeling of complex biomechanical behaviors. Specifically, we propose a unique discretization scheme that quantifies how well the learned spatiotemporal distributions of tumor and brain tissues adhere to their respective growth and elasticity equations. This quantification acts as a regularization term, offering greater flexibility and improved integration of patient data compared to existing models. We demonstrate enhanced coverage of tumor recurrence areas using real-world data from a patient cohort, highlighting the potential of our method to improve model-driven treatment planning for glioblastoma in clinical practice.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/94680"} +{"video_file": "YlIvhHFwQ2_39025781.mp4", "openreview_id": "YlIvhHFwQ2", "slideslive_id": 39025781, "venue": "nips2024", "title": "DreamScene4D: Dynamic Multi-Object Scene Generation from Monocular Videos", "status": "Poster", "keywords": "4D Scene Generation; Video-to-4D Generation", "tldr": "We propose DreamScene4D, the first video-to-4D scene generation approach to produce realistic 4D scene representation from real-world multi-object videos.", "abstract": "View-predictive generative models provide strong priors for lifting object-centric images and videos into 3D and 4D through rendering and score distillation objectives. A question then remains: what about lifting complete multi-object dynamic scenes? There are two challenges in this direction: First, rendering error gradients are often insufficient to recover fast object motion, and second, view predictive generative models work much better for objects than whole scenes, so, score distillation objectives cannot currently be applied at the scene level directly. We present DreamScene4D, the first approach to generate 3D dynamic scenes of multiple objects from monocular videos via 360-degree novel view synthesis. Our key insight is a \"decompose-recompose\" approach that factorizes the video scene into the background and object tracks, while also factorizing object motion into 3 components: object-centric deformation, object-to-world-frame transformation, and camera motion. Such decomposition permits rendering error gradients and object view-predictive models to recover object 3D completions and deformations while bounding box tracks guide the large object movements in the scene. We show extensive results on challenging DAVIS, Kubric, and self-captured videos with quantitative comparisons and a user preference study. 
Besides 4D scene generation, DreamScene4D obtains accurate 2D persistent point track by projecting the inferred 3D trajectories to 2D. We will release our code and hope our work will stimulate more research on fine-grained 4D understanding from videos.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94673"} +{"video_file": "YlmYm7sHDE_39027583.mp4", "openreview_id": "YlmYm7sHDE", "slideslive_id": 39027583, "venue": "nips2024", "title": "Minimum Entropy Coupling with Bottleneck", "status": "Spotlight", "keywords": "Compression;Minimum Entropy Coupling;Log Loss;Markov Coding Games", "tldr": "We present a novel lossy compression framework under log loss called Minimum Entropy Coupling with Bottleneck.", "abstract": "This paper investigates a novel lossy compression framework operating under logarithmic loss, designed to handle situations where the reconstruction distribution diverges from the source distribution. This framework is especially relevant for applications that require joint compression and retrieval, and in scenarios involving distributional shifts due to processing. We show that the proposed formulation extends the classical minimum entropy coupling framework by integrating a bottleneck, allowing for controlled variability in the degree of stochasticity in the coupling. We explore the decomposition of the Minimum Entropy Coupling with Bottleneck (MEC-B) into two distinct optimization problems: Entropy-Bounded Information Maximization (EBIM) for the encoder, and Minimum Entropy Coupling (MEC) for the decoder. Through extensive analysis, we provide a greedy algorithm for EBIM with guaranteed performance, and characterize the optimal solution near functional mappings, yielding significant theoretical insights into the structural complexity of this problem. Furthermore, we illustrated the practical application of MEC-B through experiments in Markov Coding Games (MCGs) under rate limits. These games simulate a communication scenario within a Markov Decision Process, where an agent must transmit a compressed message from a sender to a receiver through its actions. Our experiments highlighted the trade-offs between MDP rewards and receiver accuracy across various compression rates, showcasing the efficacy of our method compared to conventional compression baseline.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94672"} +{"video_file": "Ylvviju6MD_39024470.mp4", "openreview_id": "Ylvviju6MD", "slideslive_id": 39024470, "venue": "nips2024", "title": "The Poisson Midpoint Method for Langevin Dynamics: Provably Efficient Discretization for Diffusion Models", "status": "Poster", "keywords": "Langevin Monte Carlo;Diffusion Models;MCMC", "tldr": "We introduce Poisson Midpoint Method which provably speeds up SDE based sampling and apply it to diffusion models", "abstract": "Langevin Dynamics is a Stochastic Differential Equation (SDE) central to sampling and generative modeling and is implemented via time discretization. Langevin Monte Carlo (LMC), based on the Euler-Maruyama discretization, is the simplest and most studied algorithm. LMC can suffer from slow convergence - requiring a large number of steps of small step-size to obtain good quality samples. This becomes stark in the case of diffusion models where a large number of steps gives the best samples, but the quality degrades rapidly with smaller number of steps. 
Randomized Midpoint Method has been recently proposed as a better discretization of Langevin dynamics for sampling from strongly log-concave distributions. However, important applications such as diffusion models involve non-log concave densities and contain time varying drift. We propose its variant, the Poisson Midpoint Method, which approximates a small step-size LMC with large step-sizes. We prove that this can obtain a quadratic speed up of LMC under very weak assumptions. We apply our method to diffusion models for image generation and show that it maintains the quality of DDPM with 1000 neural network calls with just 50-80 neural network calls and outperforms ODE based methods with similar compute.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94671"} +{"video_file": "YscR3LBIi7_39024652.mp4", "openreview_id": "YscR3LBIi7", "slideslive_id": 39024652, "venue": "nips2024", "title": "MoMu-Diffusion: On Learning Long-Term Motion-Music Synchronization and Correspondence", "status": "Poster", "keywords": "motion-music generation;rhythmic alignment;diffusion models", "tldr": "We propose bidirectional contrastive rhythmic VAE and Transformer-based diffusion model for versatile motion-music generation tasks.", "abstract": "Motion-to-music and music-to-motion have been studied separately, each attracting substantial research interest within their respective domains. The interaction between human motion and music is a reflection of advanced human intelligence, and establishing a unified relationship between them is particularly important. However, to date, there has been no work that considers them jointly to explore the modality alignment within. To bridge this gap, we propose a novel framework, termed MoMu-Diffusion, for long-term and synchronous motion-music generation. Firstly, to mitigate the huge computational costs raised by long sequences, we propose a novel Bidirectional Contrastive Rhythmic Variational Auto-Encoder (BiCoR-VAE) that extracts the modality-aligned latent representations for both motion and music inputs. Subsequently, leveraging the aligned latent spaces, we introduce a multi-modal diffusion Transformer model and a cross-guidance sampling strategy to enable various generation tasks, including cross-modal, multi-modal, and variable-length generation. Extensive experiments demonstrate that MoMu-Diffusion surpasses recent state-of-the-art methods both qualitatively and quantitatively, and can synthesize realistic, diverse, long-term, and beat-matched music or motion sequences. The generated motion-music samples are available at https://momu-diffusion.github.io/.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94669"} +{"video_file": "YvA8UF0I37_39026144.mp4", "openreview_id": "YvA8UF0I37", "slideslive_id": 39026144, "venue": "nips2024", "title": "PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression", "status": "Oral", "keywords": "Quantization;Large Language Models", "tldr": "Improving Extreme LLM Quantization through Better Fine-tuning", "abstract": "There has been significant interest in \"extreme\" compression of large language models (LLMs), i.e. 
to 1-2 bits per parameter, which allows such models to be executed efficiently on resource-constrained devices.\nExisting work focused on improved one-shot quantization techniques and weight representations; yet, purely post-training approaches are reaching diminishing returns in terms of the accuracy-vs-bit-width trade-off. State-of-the-art quantization methods such as QuIP# and AQLM include fine-tuning (part of) the compressed parameters over a limited amount of calibration data; however, such fine-tuning techniques over compressed weights often make exclusive use of straight-through estimators (STE), whose performance is not well-understood in this setting. In this work, we question the use of STE for extreme LLM compression, showing that it can be sub-optimal, and perform a systematic study of quantization-aware fine-tuning strategies for LLMs. We propose PV-Tuning - a representation-agnostic framework that generalizes and improves upon existing fine-tuning strategies, and provides convergence guarantees in restricted cases. On the practical side, when used for 1-2 bit vector quantization, PV-Tuning outperforms prior techniques for highly-performant models such as Llama and Mistral. Using PV-Tuning, we achieve the first Pareto-optimal quantization for Llama-2 family models at 2 bits per parameter.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94666"} +{"video_file": "YxyYTcv3hp_39028583.mp4", "openreview_id": "YxyYTcv3hp", "slideslive_id": 39028583, "venue": "nips2024", "title": "Ferrari: Federated Feature Unlearning via Optimizing Feature Sensitivity", "status": "Poster", "keywords": "Machine Unlearning;Federated Unlearning;Feature Unlearning;Lipschitz Continuity", "tldr": "Previous research on Federated Unlearning lacked focus on feature unlearning. This paper introduces Ferrari, a framework aimed at facilitating Federated Feature Unlearning. Ferrari utilizes Lipschitz Continuity to efficiently unlearn features.", "abstract": "The advent of Federated Learning (FL) highlights the practical necessity for the \u2019right to be forgotten\u2019 for all clients, allowing them to request data deletion from the machine learning model\u2019s service provider. This necessity has spurred a growing demand for Federated Unlearning (FU). Feature unlearning has gained considerable attention due to its applications in unlearning sensitive, backdoor, and biased features. Existing methods employ the influence function to achieve feature unlearning, which is impractical for FL as it necessitates the participation of other clients, if not all, in the unlearning process. Furthermore, current research lacks an evaluation of the effectiveness of feature unlearning. To address these limitations, we define feature sensitivity in evaluating feature unlearning according to Lipschitz continuity. This metric characterizes the model output\u2019s rate of change or sensitivity to perturbations in the input feature. We then propose an effective federated feature unlearning framework called Ferrari, which minimizes feature sensitivity. Extensive experimental results and theoretical analysis demonstrate the effectiveness of Ferrari across various feature unlearning scenarios, including sensitive, backdoor, and biased features. 
The code is publicly available at https://github.com/OngWinKent/Federated-Feature-Unlearning", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/94662"} +{"video_file": "YyMiO0DWmI_39024602.mp4", "openreview_id": "YyMiO0DWmI", "slideslive_id": 39024602, "venue": "nips2024", "title": "Cross-Device Collaborative Test-Time Adaptation", "status": "Poster", "keywords": "Test-Time Adaptation;Out-of-distribution Generalization;Collaborative Adaptation", "tldr": "Propose a test-time Collaborative Lifelong Adaptation (CoLA) paradigm to enable knowledge accumulation, sharing, and utilization across multiple devices, where both resource-abundant and resource-limited devices are included in the collaboration.", "abstract": "In this paper, we propose test-time Collaborative Lifelong Adaptation (CoLA), which is a general paradigm that can be incorporated with existing advanced TTA methods to boost the adaptation performance and efficiency in a multi-device collaborative manner. Specifically, we maintain and store a set of device-shared domain knowledge vectors, which accumulates the knowledge learned from all devices during their lifelong adaptation process. Based on this, CoLA conducts two collaboration strategies for devices with different computational resources and latency demands. 1) Knowledge reprogramming learning strategy jointly learns new domain-specific model parameters and a reweighting term to reprogram existing shared domain knowledge vectors, termed adaptation on principal agents. 2) Similarity-based knowledge aggregation strategy solely aggregates the knowledge stored in shared domain vectors according to domain similarities in an optimization-free manner, termed adaptation on follower agents. Experiments verify that CoLA is simple but effective, which boosts the efficiency of TTA and demonstrates remarkable superiority in collaborative, lifelong, and single-domain TTA scenarios, e.g., on follower agents, we enhance accuracy by over 30% on ImageNet-C while maintaining nearly the same efficiency as standard inference. The source code is available at https://github.com/Cascol-Chen/COLA.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94660"} +{"video_file": "Z0Nq3hHeEG_39024726.mp4", "openreview_id": "Z0Nq3hHeEG", "slideslive_id": 39024726, "venue": "nips2024", "title": "pcaGAN: Improving Posterior-Sampling cGANs via Principal Component Regularization", "status": "Poster", "keywords": "Image recovery;inverse problems;MRI;posterior sampling;GAN", "tldr": "For image-recovery problems, we propose a fast and accurate posterior sampler by regularizing a cGAN to enforce correctness in the K principal components of the posterior covariance matrix, and in both the trace-covariance and conditional mean.", "abstract": "In ill-posed imaging inverse problems, there can exist many hypotheses that fit both the observed measurements and prior knowledge of the true image. Rather than returning just one hypothesis of that image, posterior samplers aim to explore the full solution space by generating many probable hypotheses, which can later be used to quantify uncertainty or construct recoveries that appropriately navigate the perception/distortion trade-off. In this work, we propose a fast and accurate posterior-sampling conditional generative adversarial network (cGAN) that, through a novel form of regularization, aims for correctness in the posterior mean as well as the trace and K principal components of the posterior covariance matrix. 
Numerical experiments demonstrate that our method outperforms competitors in a wide range of ill-posed imaging inverse problems.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94657"} +{"video_file": "Z4R2rkPgBy_39027910.mp4", "openreview_id": "Z4R2rkPgBy", "slideslive_id": 39027910, "venue": "nips2024", "title": "Unity by Diversity: Improved Representation Learning for Multimodal VAEs", "status": "Poster", "keywords": "multimodal generative learning;VAE;representation learning;data-dependent prior;mimic-cxr", "tldr": "We propose a novel VAE that learns from multimodal data using a mixture-of-experts prior for aggregation", "abstract": "Variational Autoencoders for multimodal data hold promise for many tasks in data analysis, such as representation learning, conditional generation, and imputation. Current architectures either share the encoder output, decoder input, or both across modalities to learn a shared representation. Such architectures impose hard constraints on the model. In this work, we show that a better latent representation can be obtained by replacing these hard constraints with a soft constraint. We propose a new mixture-of-experts prior, softly guiding each modality's latent representation towards a shared aggregate posterior. This approach results in a superior latent representation and allows each encoding to preserve information better from its uncompressed original features. In extensive experiments on multiple benchmark datasets and two challenging real-world datasets, we show improved learned latent representations and imputation of missing data modalities compared to existing methods.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/94655"} +{"video_file": "ZC0PSk6Mc6_39025225.mp4", "openreview_id": "ZC0PSk6Mc6", "slideslive_id": 39025225, "venue": "nips2024", "title": "Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents", "status": "Poster", "keywords": "Explainable AI (XAI);Reinforcement Learning;Concept Bottlenecks", "tldr": "We introduce Successive Concept Bottleneck agents for competitive and aligned reinforcement learning.", "abstract": "Goal misalignment, reward sparsity and difficult credit assignment are only a few of the many issues that make it difficult for deep reinforcement learning (RL) agents to learn optimal policies. Unfortunately, the black-box nature of deep neural networks impedes the inclusion of domain experts for inspecting the model and revising suboptimal policies.\nTo this end, we introduce Successive Concept Bottleneck Agents (SCoBots), that integrate consecutive concept bottleneck (CB) layers. In contrast to current CB models, SCoBots do not just represent concepts as properties of individual objects, but also as relations between objects which is crucial for many RL tasks.\nOur experimental results provide evidence of SCoBots' competitive performances, but also of their potential for domain experts to understand and regularize their behavior. Among other things, SCoBots enabled us to identify a previously unknown misalignment problem in the iconic video game, Pong, and resolve it. 
Overall, SCoBots thus result in more human-aligned RL agents.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/94653"} +{"video_file": "ZJ2ONmSgCS_39025393.mp4", "openreview_id": "ZJ2ONmSgCS", "slideslive_id": 39025393, "venue": "nips2024", "title": "DiffHammer: Rethinking the Robustness of Diffusion-Based Adversarial Purification", "status": "Poster", "keywords": "adaptive adversarial attack;adversarial purification;diffusion", "tldr": "DiffHammer provides effective and efficient robustness evaluation for diffusion-based purification via selective attack and N-evaluation.", "abstract": "Diffusion-based purification has demonstrated impressive robustness as an adversarial defense. However, concerns exist about whether this robustness arises from insufficient evaluation. Our research shows that EOT-based attacks face gradient dilemmas due to global gradient averaging, resulting in ineffective evaluations. Additionally, 1-evaluation underestimates resubmit risks in stochastic defenses. To address these issues, we propose an effective and efficient attack named DiffHammer. This method bypasses the gradient dilemma through selective attacks on vulnerable purifications, incorporating\nN\n-evaluation into loops and using gradient grafting for comprehensive and efficient evaluations. Our experiments validate that DiffHammer achieves effective results within 10-30 iterations, outperforming other methods. This calls into question the reliability of diffusion-based purification after mitigating the gradient dilemma and scrutinizing its resubmit risk.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/94646"} +{"video_file": "ZJBBeyEAyX_39027027.mp4", "openreview_id": "ZJBBeyEAyX", "slideslive_id": 39027027, "venue": "nips2024", "title": "OSLO: One-Shot Label-Only Membership Inference Attacks", "status": "Poster", "keywords": "membership inference attack;privacy;leakage", "tldr": "We propose a label-only Membership Inference Attack (MIA) that works by sending only a single shot to the target model.", "abstract": "We introduce One-Shot Label-Only (OSLO) membership inference attacks (MIAs), which accurately infer a given sample's membership in a target model's training set with high precision using just a single query, where the target model only returns the predicted hard label. This is in contrast to state-of-the-art label-only attacks which require\n\u223c\n6000\nqueries, yet get attack precisions lower than OSLO's. OSLO leverages transfer-based black-box adversarial attacks. The core idea is that a member sample exhibits more resistance to adversarial perturbations than a non-member. We compare OSLO against state-of-the-art label-only attacks and demonstrate that, despite requiring only one query, our method significantly outperforms previous attacks in terms of precision and true positive rate (TPR) under the same false positive rates (FPR). For example, compared to previous label-only MIAs, OSLO achieves a TPR that is at least 7\n\u00d7\nhigher under a 1% FPR and at least 22\n\u00d7\nhigher under a 0.1% FPR on CIFAR100 for a ResNet18 model. 
We evaluated multiple defense mechanisms against OSLO.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/94645"} +{"video_file": "ZK1CZXKgG5_39026022.mp4", "openreview_id": "ZK1CZXKgG5", "slideslive_id": 39026022, "venue": "nips2024", "title": "MemVLT: Vision-Language Tracking with Adaptive Memory-based Prompts", "status": "Poster", "keywords": "Object Tracking; Visual-Language Multimodality; Adaptive Prompts", "tldr": "Drawing from the Complementary Learning Systems theory, we propose a novel vision-language tracker (MemVLT), which can provide adaptive multimodal prompts for tracking guidance, achieving SOTA performance on 4 mainstream datasets..", "abstract": "Vision-language tracking (VLT) enhances traditional visual object tracking by integrating language descriptions, requiring the tracker to flexibly understand complex and diverse text in addition to visual information. However, most existing vision-language trackers still overly rely on initial fixed multimodal prompts, which struggle to provide effective guidance for dynamically changing targets. Fortunately, the Complementary Learning Systems (CLS) theory suggests that the human memory system can dynamically store and utilize multimodal perceptual information, thereby adapting to new scenarios. Inspired by this, (i) we propose a Memory-based Vision-Language Tracker (MemVLT). By incorporating memory modeling to adjust static prompts, our approach can provide adaptive prompts for tracking guidance. (ii) Specifically, the memory storage and memory interaction modules are designed in accordance with CLS theory. These modules facilitate the storage and flexible interaction between short-term and long-term memories, generating prompts that adapt to target variations. (iii) Finally, we conduct extensive experiments on mainstream VLT datasets (e.g., MGIT, TNL2K, LaSOT and LaSOT\ne\nx\nt\n). Experimental results show that MemVLT achieves new state-of-the-art performance. Impressively, it achieves 69.4% AUC on the MGIT and 63.3% AUC on the TNL2K, improving the existing best result by 8.4% and 4.7%, respectively.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94643"} +{"video_file": "ZNcJtNN3e8_39028335.mp4", "openreview_id": "ZNcJtNN3e8", "slideslive_id": 39028335, "venue": "nips2024", "title": "Compositional PAC-Bayes: Generalization of GNNs with persistence and beyond", "status": "Poster", "keywords": "generalization;tda;GNN;PAC-Bayes", "tldr": "This paper introduces data-dependent generalization bounds for integrating persistent homology vectorizations with graph neural networks using a compositional PAC-Bayes framework.", "abstract": "Heterogeneity, e.g., due to different types of layers or multiple sub-models, poses key challenges in analyzing the generalization behavior of several modern architectures. For instance, descriptors based on Persistent Homology (PH) are being increasingly integrated into Graph Neural Networks (GNNs) to augment them with rich topological features; however, the generalization of such PH schemes remains unexplored. We introduce a novel compositional PAC-Bayes framework that provides a general recipe to analyze a broad spectrum of models including those with heterogeneous layers. Specifically, we provide the first data-dependent generalization bounds for a widely adopted PH vectorization scheme (that subsumes persistence landscapes, images, and silhouettes) as well as PH-augmented GNNs. 
Using our framework, we also obtain bounds for GNNs and neural nets with ease. Our bounds also inform the design of novel regularizers. Empirical evaluations on several standard real-world datasets demonstrate that our theoretical bounds highly correlate with empirical generalization performance, leading to improved classifier design via our regularizers. Overall, this work bridges a crucial gap in the theoretical understanding of PH methods and general heterogeneous models, paving the way for the design of better models for (graph) representation learning. Our code is available at https://github.com/Aalto-QuML/Compositional-PAC-Bayes.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94640"} +{"video_file": "ZOZjMs3JTs_39026015.mp4", "openreview_id": "ZOZjMs3JTs", "slideslive_id": 39026015, "venue": "nips2024", "title": "User-item fairness tradeoffs in recommendations", "status": "Poster", "keywords": "recommendation systems;algorithmic fairness", "tldr": "We explore the effect of population heterogeneity and preference mis-estimation on the tradeoff between user fairness and item fairness in recommendation systems in a theoretical model and in real data.", "abstract": "In the basic recommendation paradigm, the most (predicted) relevant item is recommended to each user. This may result in some items receiving lower exposure than they \"should\"; to counter this, several algorithmic approaches have been developed to ensure item fairness. These approaches necessarily degrade recommendations for some users to improve outcomes for items, leading to user fairness concerns. In turn, a recent line of work has focused on developing algorithms for multi-sided fairness, to jointly optimize user fairness, item fairness, and overall recommendation quality. This induces the question: what is the tradeoff between these objectives, and what are the characteristics of (multi-objective) optimal solutions? Theoretically, we develop a model of recommendations with user and item fairness objectives and characterize the solutions of fairness-constrained optimization. We identify two phenomena: (a) when user preferences are diverse, there is \"free\" item and user fairness; and (b) users whose preferences are misestimated can be especially disadvantaged by item fairness constraints. Empirically, we prototype a recommendation system for preprints on arXiv and implement our framework, measuring the phenomena in practice and showing how these phenomena inform the design of markets with recommendation systems-intermediated matching.", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/94638"} +{"video_file": "ZRYFftR4xn_39028026.mp4", "openreview_id": "ZRYFftR4xn", "slideslive_id": 39028026, "venue": "nips2024", "title": "Learning the Expected Core of Strictly Convex Stochastic Cooperative Games", "status": "Poster", "keywords": "Cooperative game theory;convex geometry;bandit theory.", "tldr": "We address learning stable allocations for stochastic cooperative games with an unknown reward distribution. For strictly convex games, we propose an algorithm that returns a stable allocation with a polynomial number of samples.", "abstract": "Reward allocation, also known as the credit assignment problem, has been an important topic in economics, engineering, and machine learning. An important concept in reward allocation is the core, which is the set of stable allocations where no agent has the motivation to deviate from the grand coalition. 
In previous works, computing the core requires either knowledge of the reward function in deterministic games or the reward distribution in stochastic games. However, this is unrealistic, as the reward function or distribution is often only partially known and may be subject to uncertainty. In this paper, we consider the core learning problem in stochastic cooperative games, where the reward distribution is unknown. Our goal is to learn the expected core, that is, the set of allocations that are stable in expectation, given an oracle that returns a stochastic reward for an enquired coalition each round. Within the class of strictly convex games, we present an algorithm named \texttt{Common-Points-Picking} that returns a point in the expected core given a polynomial number of samples, with high probability. To analyse the algorithm, we develop a new extension of the separation hyperplane theorem for multiple convex sets.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/94637"}
{"video_file": "ZRz7XlxBzQ_39027692.mp4", "openreview_id": "ZRz7XlxBzQ", "slideslive_id": 39027692, "venue": "nips2024", "title": "Learning to compute Gr\u00f6bner bases", "status": "Poster", "keywords": "Transformer; Gr\u00f6bner bases; Computational algebra", "tldr": "We show the learnability of Gr\u00f6bner basis computation and raise new algebraic and machine learning challenges.", "abstract": "Solving a polynomial system, or computing an associated Gr\u00f6bner basis, has been a fundamental task in computational algebra. However, it is also known for its notorious doubly exponential time complexity in the number of variables in the worst case. This paper is the first to address the learning of Gr\u00f6bner basis computation with Transformers. The training requires many pairs of a polynomial system and the associated Gr\u00f6bner basis, raising two novel algebraic problems: random generation of Gr\u00f6bner bases and transforming them into non-Gr\u00f6bner ones, termed as backward Gr\u00f6bner problem. We resolve these problems with 0-dimensional radical ideals, the ideals appearing in various applications. Further, we propose a hybrid input embedding to handle coefficient tokens with continuity bias and avoid the growth of the vocabulary set. The experiments show that our dataset generation method is a few orders of magnitude faster than a naive approach, overcoming a crucial challenge in learning to compute Gr\u00f6bner bases, and Gr\u00f6bner computation is learnable in a particular class.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94636"}
{"video_file": "ZViYPzh9Wq_39027111.mp4", "openreview_id": "ZViYPzh9Wq", "slideslive_id": 39027111, "venue": "nips2024", "title": "Reconstructing the Image Stitching Pipeline: Integrating Fusion and Rectangling into a Unified Inpainting Model", "status": "Poster", "keywords": "Image Stitching;Image Fusion;Image Rectangling;Diffusion Model", "tldr": "An image stitching method that simplifies fusion and rectangling into a unified inpainting model.", "abstract": "Deep learning-based image stitching pipelines are typically divided into three cascading stages: registration, fusion, and rectangling. Each stage requires its own network training and is tightly coupled to the others, leading to error propagation and posing significant challenges to parameter tuning and system stability. 
This paper proposes the Simple and Robust Stitcher (SRStitcher), which revolutionizes the image stitching pipeline by simplifying the fusion and rectangling stages into a unified inpainting model, requiring no model training or fine-tuning. We reformulate the problem definitions of the fusion and rectangling stages and demonstrate that they can be effectively integrated into an inpainting task. Furthermore, we design the weighted masks to guide the reverse process in a pre-trained large-scale diffusion model, implementing this integrated inpainting task in a single inference. Through extensive experimentation, we verify the interpretability and generalization capabilities of this unified model, demonstrating that SRStitcher outperforms state-of-the-art methods in both performance and stability.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94635"} +{"video_file": "ZVrrPNqHFw_39025918.mp4", "openreview_id": "ZVrrPNqHFw", "slideslive_id": 39025918, "venue": "nips2024", "title": "A Simple Remedy for Dataset Bias via Self-Influence: A Mislabeled Sample Perspective", "status": "Poster", "keywords": "Debiasing;Spurious correlation;Robust learning;Dataset bias", "tldr": "We propose a simple yet effective method for remedy through fine-tuning that utilizes a pivotal set constructed using Bias-Conditioned Self-Influence to recover biased models.", "abstract": "Learning generalized models from biased data is an important undertaking toward fairness in deep learning. To address this issue, recent studies attempt to identify and leverage bias-conflicting samples free from spurious correlations without prior knowledge of bias or an unbiased set. However, spurious correlation remains an ongoing challenge, primarily due to the difficulty in correctly detecting these samples. In this paper, inspired by the similarities between mislabeled samples and bias-conflicting samples, we approach this challenge from a novel perspective of mislabeled sample detection. Specifically, we delve into Influence Function, one of the standard methods for mislabeled sample detection, for identifying bias-conflicting samples and propose a simple yet effective remedy for biased models by leveraging them. Through comprehensive analysis and experiments on diverse datasets, we demonstrate that our new perspective can boost the precision of detection and rectify biased models effectively. Furthermore, our approach is complementary to existing methods, showing performance improvement even when applied to models that have already undergone recent debiasing techniques.", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/94634"} +{"video_file": "ZX6CEo1Wtv_39024753.mp4", "openreview_id": "ZX6CEo1Wtv", "slideslive_id": 39024753, "venue": "nips2024", "title": "Latent Diffusion for Neural Spiking Data", "status": "Spotlight", "keywords": "neural population;diffusion models;latent variable models;electrophysiology;brain-computer interfaces", "tldr": "We propose a generative model for (conditional) generation of neural spiking activity using latent diffusion.", "abstract": "Modern datasets in neuroscience enable unprecedented inquiries into the relationship between complex behaviors and the activity of many simultaneously recorded neurons. While latent variable models can successfully extract low-dimensional embeddings from such recordings, using them to generate realistic spiking data, especially in a behavior-dependent manner, still poses a challenge. 
Here, we present Latent Diffusion for Neural Spiking data (LDNS), a diffusion-based generative model with a low-dimensional latent space: LDNS employs an autoencoder with structured state-space (S4) layers to project discrete high-dimensional spiking data into continuous time-aligned latents. On these inferred latents, we train expressive (conditional) diffusion models, enabling us to sample neural activity with realistic single-neuron and population spiking statistics. We validate LDNS on synthetic data, accurately recovering latent structure, firing rates, and spiking statistics. Next, we demonstrate its flexibility by generating variable-length data that mimics human cortical activity during attempted speech. We show how to equip LDNS with an expressive observation model that accounts for single-neuron dynamics not mediated by the latent state, further increasing the realism of generated samples. Finally, conditional LDNS trained on motor cortical activity during diverse reaching behaviors can generate realistic spiking data given reach direction or unseen reach trajectories. In summary, LDNS simultaneously enables inference of low-dimensional latents and realistic conditional generation of neural spiking datasets, opening up further possibilities for simulating experimentally testable hypotheses.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/94632"} +{"video_file": "ZYrZ5V84ZI_39027952.mp4", "openreview_id": "ZYrZ5V84ZI", "slideslive_id": 39027952, "venue": "nips2024", "title": "Voila-A: Aligning Vision-Language Models with User's Gaze Attention", "status": "Spotlight", "keywords": "Vision Language Model;Human Gaze;Multimodal;Controlled Generative Model;AR/VR", "tldr": "Aligning Vision-Language Models with User's Gaze Attention", "abstract": "In recent years, the integration of vision and language understanding has led to significant advancements in artificial intelligence, particularly through Vision-Language Models (VLMs). However, existing VLMs face challenges in handling real-world applications with complex scenes and multiple objects, as well as aligning their focus with the diverse attention patterns of human users. In this paper, we introduce gaze information, feasibly collected by ubiquitous wearable devices such as MR glasses, as a proxy for human attention to guide VLMs. We propose a novel approach, Voila-A, for gaze alignment to enhance the effectiveness of these models in real-world applications. First, we collect hundreds of minutes of gaze data to demonstrate that we can mimic human gaze modalities using localized narratives. We then design an automatic data annotation pipeline utilizing GPT-4 to generate the VOILA-COCO dataset. Additionally, we introduce a new model VOILA-A that integrate gaze information into VLMs while maintain pretrained knowledge from webscale dataset. We evaluate Voila-A using a hold-out validation set and a newly collected VOILA-GAZE testset, which features real-life scenarios captured with a gaze-tracking device. Our experimental results demonstrate that Voila-A significantly outperforms several baseline models. 
By aligning model attention with human gaze patterns, Voila-A paves the way for more intuitive, user-centric VLMs and fosters engaging human-AI interaction across a wide range of applications.", "primary_area": "human-AI_interaction", "site": "https://neurips.cc/virtual/2024/poster/94630"} +{"video_file": "ZZoW4Z3le4_39024813.mp4", "openreview_id": "ZZoW4Z3le4", "slideslive_id": 39024813, "venue": "nips2024", "title": "DiGRAF: Diffeomorphic Graph-Adaptive Activation Function", "status": "Poster", "keywords": "Graph Neural Networks;Graph Activation Functions", "tldr": "We introduce DiGRAF - a novel graph-adaptive activation function for GNNs by learning flexible and efficient diffeomorphisms, and demonstrate its effectiveness on numerous benchmarks and tasks.", "abstract": "In this paper, we propose a novel activation function tailored specifically for graph data in Graph Neural Networks (GNNs). Motivated by the need for graph-adaptive and flexible activation functions, we introduce DiGRAF, leveraging Continuous Piecewise-Affine Based (CPAB) transformations, which we augment with an additional GNN to learn a graph-adaptive diffeomorphic activation function in an end-to-end manner. In addition to its graph-adaptivity and flexibility, DiGRAF also possesses properties that are widely recognized as desirable for activation functions, such as differentiability, boundness within the domain, and computational efficiency. We conduct an extensive set of experiments across diverse datasets and tasks, demonstrating a consistent and superior performance of DiGRAF compared to traditional and graph-specific activation functions, highlighting its effectiveness as an activation function for GNNs. Our code is available at https://github.com/ipsitmantri/DiGRAF.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/94628"} +{"video_file": "ZbjJE6Nq5k_39025771.mp4", "openreview_id": "ZbjJE6Nq5k", "slideslive_id": 39025771, "venue": "nips2024", "title": "Normalization and effective learning rates in reinforcement learning", "status": "Poster", "keywords": "continual learning;reinforcement learning;optimization;plasticity", "tldr": "We propose a method to stabilize training in nonstationary tasks, improving performance on several benchmarks and obtaining insight into the role of implicit effective learning rate schedules in RL.", "abstract": "Normalization layers have recently experienced a renaissance in the deep reinforcement learning and continual learning literature, with several works highlighting diverse benefits such as improving loss landscape conditioning and combatting overestimation bias. However, normalization brings with it a subtle but important side effect: an equivalence between growth in the norm of the network parameters and decay in the effective learning rate. This becomes problematic in continual learning settings, where the resulting learning rate schedule may decay to near zero too quickly relative to the timescale of the learning problem. We propose to make the learning rate schedule explicit with a simple re-parameterization which we call Normalize-and-Project (NaP), which couples the insertion of normalization layers with weight projection, ensuring that the effective learning rate remains constant throughout training. 
This technique reveals itself as a powerful analytical tool to better understand learning rate schedules in deep reinforcement learning, and as a means of improving robustness to nonstationarity in synthetic plasticity loss benchmarks along with both the single-task and sequential variants of the Arcade Learning Environment. We also show that our approach can be easily applied to popular architectures such as ResNets and transformers while recovering and in some cases even slightly improving the performance of the base model in common stationary benchmarks.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94626"}
{"video_file": "ZehccYKkNH_39027265.mp4", "openreview_id": "ZehccYKkNH", "slideslive_id": 39027265, "venue": "nips2024", "title": "Wasserstein convergence of Cech persistence diagrams for samplings of submanifolds", "status": "Poster", "keywords": "topological data analysis; persistence diagrams; simplicial complexes; differential geometry; Wasserstein distance", "tldr": "We study the convergence of Cech persistence diagrams for samples of submanifolds with respect to the Wasserstein distance.", "abstract": "Cech Persistence diagrams (PDs) are topological descriptors routinely used to capture the geometry of complex datasets. They are commonly compared using the Wasserstein distances $\\mathrm{OT}_p$; however, the extent to which PDs are stable with respect to these metrics remains poorly understood. We partially close this gap by focusing on the case where datasets are sampled on an $m$-dimensional submanifold of $\\mathbb{R}^d$. Under this manifold hypothesis, we show that convergence with respect to the $\\mathrm{OT}_p$ metric happens exactly when $p > m$. We also provide improvements upon the bottleneck stability theorem in this case and prove new laws of large numbers for the total $\\alpha$-persistence of PDs. Finally, we show how these theoretical findings shed new light on the behavior of the feature maps on the space of PDs that are used in ML-oriented applications of Topological Data Analysis.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94624"}
{"video_file": "ZeihWodDVh_39027755.mp4", "openreview_id": "ZeihWodDVh", "slideslive_id": 39027755, "venue": "nips2024", "title": "PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics", "status": "Poster", "keywords": "Energy-Based Models;Diffusion;Langevin dynamics;Poisons;robustness;defense;Backdoor", "tldr": "A universal, SoTA set of train-time poison defense pre-processing techniques using specially trained Energy Based and Denoising Diffusion Models.", "abstract": "Train-time data poisoning attacks threaten machine learning models by introducing adversarial examples during training, leading to misclassification. Current defense methods often reduce generalization performance, are attack-specific, and impose significant training overhead. To address this, we introduce a set of universal data purification methods using a stochastic transform, $\\Psi(x)$, realized via iterative Langevin dynamics of Energy-Based Models (EBMs), Denoising Diffusion Probabilistic Models (DDPMs), or both. These approaches purify poisoned data with minimal impact on classifier generalization. Our specially trained EBMs and DDPMs provide state-of-the-art defense against various attacks (including Narcissus, Bullseye Polytope, Gradient Matching) on CIFAR-10, Tiny-ImageNet, and CINIC-10, without needing attack or classifier-specific information. 
We discuss performance trade-offs and show that our methods remain highly effective even with poisoned or distributionally shifted generative model training data.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/94623"} +{"video_file": "ZfRGRK5Kxl_39024899.mp4", "openreview_id": "ZfRGRK5Kxl", "slideslive_id": 39024899, "venue": "nips2024", "title": "TripletCLIP: Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives", "status": "Poster", "keywords": "Contrastive Learning;Synthetic data;CLIP;Compositionality;TripletCLIP", "tldr": "Contrastive synthetic dataset and triplet contrastive learning to imporve CLIP compositionality.", "abstract": "Contrastive Language-Image Pretraining (CLIP) models maximize the mutual information between text and visual modalities to learn representations. This makes the nature of the training data a significant factor in the efficacy of CLIP for downstream tasks. However, the lack of compositional diversity in contemporary image-text datasets limits the compositional reasoning ability of CLIP. We show that generating ``hard'' negative captions via in-context learning and synthesizing corresponding negative images with text-to-image generators offers a solution. We introduce a novel contrastive pre-training strategy that leverages these hard negative captions and images in an alternating fashion to train CLIP. We demonstrate that our method, named TripletCLIP, when applied to existing datasets such as CC3M and CC12M, enhances the compositional capabilities of CLIP, resulting in an absolute improvement of over 9% on the SugarCrepe benchmark on an equal computational budget, as well as improvements in zero-shot image classification and image retrieval. Our code, models, and data are available at: tripletclip.github.io.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94621"} +{"video_file": "ZfXRAqbBKX_39027714.mp4", "openreview_id": "ZfXRAqbBKX", "slideslive_id": 39027714, "venue": "nips2024", "title": "IRCAN: Mitigating Knowledge Conflicts in LLM Generation via Identifying and Reweighting Context-Aware Neurons", "status": "Poster", "keywords": "Knowledge Conflicts;Large Language Models", "tldr": "A novel method that identifies and reweights context-aware neurons within large language models to boost their performance on tasks involving knowledge conflicts.", "abstract": "It is widely acknowledged that large language models (LLMs) encode a vast reservoir of knowledge after being trained on mass data. Recent studies disclose knowledge conflicts in LLM generation, wherein outdated or incorrect parametric knowledge (i.e., encoded knowledge) contradicts new knowledge provided in the context. To mitigate such knowledge conflicts, we propose a novel framework, IRCAN (Identifying and Reweighting Context-Aware Neurons) to capitalize on neurons that are crucial in processing contextual cues. Specifically, IRCAN first identifies neurons that significantly contribute to context processing, utilizing a context-aware attribution score derived from integrated gradients. Subsequently, the identified context-aware neurons are strengthened via reweighting. In doing so, we steer LLMs to generate context-sensitive outputs with respect to the new knowledge provided in the context. 
Extensive experiments conducted across a variety of models and tasks demonstrate that IRCAN not only achieves remarkable improvements in handling knowledge conflicts but also offers a scalable, plug-and-play solution that can be integrated seamlessly with existing models. Our codes are released at https://github.com/danshi777/IRCAN.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94620"} +{"video_file": "ZgtLQQR1K7_39024821.mp4", "openreview_id": "ZgtLQQR1K7", "slideslive_id": 39024821, "venue": "nips2024", "title": "VMamba: Visual State Space Model", "status": "Spotlight", "keywords": "State Space Model;Transformer;Computer Vision;Foundation Model", "tldr": "This paper introduces a new architecture with promising performance on visual perception tasks, which is based on mamba.", "abstract": "Designing computationally efficient network architectures remains an ongoing necessity in computer vision. In this paper, we adapt Mamba, a state-space language model, into VMamba, a vision backbone with linear time complexity. At the core of VMamba is a stack of Visual State-Space (VSS) blocks with the 2D Selective Scan (SS2D) module. By traversing along four scanning routes, SS2D bridges the gap between the ordered nature of 1D selective scan and the non-sequential structure of 2D vision data, which facilitates the collection of contextual information from various sources and perspectives. Based on the VSS blocks, we develop a family of VMamba architectures and accelerate them through a succession of architectural and implementation enhancements. Extensive experiments demonstrate VMamba\u2019s promising performance across diverse visual perception tasks, highlighting its superior input scaling efficiency compared to existing benchmark models. Source code is available at https://github.com/MzeroMiko/VMamba", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94617"} +{"video_file": "ZjgcYMkCmX_39027515.mp4", "openreview_id": "ZjgcYMkCmX", "slideslive_id": 39027515, "venue": "nips2024", "title": "How does Inverse RL Scale to Large State Spaces? A Provably Efficient Approach", "status": "Poster", "keywords": "Inverse Reinforcement Learning;Reward-Free Exploration;Linear MDPs", "tldr": "We introduce a novel Inverse Reinforcement Learning formulation that permits efficient learning in Linear Markov Decision Processes.", "abstract": "In online Inverse Reinforcement Learning (IRL), the learner can collect samples about the dynamics of the environment to improve its estimate of the reward function. Since IRL suffers from identifiability issues, many theoretical works on online IRL focus on estimating the entire set of rewards that explain the demonstrations, named the feasible reward set. However, none of the algorithms available in literature can scale to problems with large state spaces. In this paper, we focus on the online IRL problem in Linear Markov Decision Processes (MDPs). We show that the structure offered by Linear MDPs is not sufficient for efficiently estimating the feasible set when the state space is large. As a consequence, we introduce the novel framework of rewards compatibility, which generalizes the notion of feasible set, and we develop CATY-IRL, a sample efficient algorithm whose complexity is independent of the size of the state space in Linear MDPs. When restricted to the tabular setting, we demonstrate that CATY-IRL is minimax optimal up to logarithmic factors. 
As a by-product, we show that Reward-Free Exploration (RFE) enjoys the same worst-case rate, improving over the state-of-the-art lower bound. Finally, we devise a unifying framework for IRL and RFE that may be of independent interest.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94615"} +{"video_file": "ZoarR5QmFX_39027550.mp4", "openreview_id": "ZoarR5QmFX", "slideslive_id": 39027550, "venue": "nips2024", "title": "Concentrate Attention: Towards Domain-Generalizable Prompt Optimization for Language Models", "status": "Poster", "keywords": "Prompt Optimization;Domain Generalization;Few-shot Learning;Pre-trained Language Models", "tldr": "This paper explores the domain generalization ability of prompts in PLMs, discovering that prompts with higher \"Concentration\" are more generalizable, leading to new optimization methods that outperform existing techniques.", "abstract": "Recent advances in prompt optimization have notably enhanced the performance of pre-trained language models (PLMs) on downstream tasks. However, the potential of optimized prompts on domain generalization has been under-explored. To explore the nature of prompt generalization on unknown domains, we conduct pilot experiments and find that (i) Prompts gaining more attention weight from PLMs\u2019 deep layers are more generalizable and (ii) Prompts with more stable attention distributions in PLMs\u2019 deep layers are more generalizable. Thus, we offer a fresh objective towards domain-generalizable prompts optimization named ''Concentration'', which represents the ''lookback'' attention from the current decoding token to the prompt tokens, to increase the attention strength on prompts and reduce the fluctuation of attention distribution. We adapt this new objective to popular soft prompt and hard prompt optimization methods, respectively. Extensive experiments demonstrate that our idea improves comparison prompt optimization methods by 1.42% for soft prompt generalization and 2.16% for hard prompt generalization in accuracy on the multi-source domain generalization setting, while maintaining satisfying in-domain performance. The promising results validate the effectiveness of our proposed prompt optimization objective and provide key insights into domain-generalizable prompts.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94612"} +{"video_file": "ZsxZ65YqL1_39025439.mp4", "openreview_id": "ZsxZ65YqL1", "slideslive_id": 39025439, "venue": "nips2024", "title": "CriticEval: Evaluating Large-scale Language Model as Critic", "status": "Poster", "keywords": "Critique ability;LLM;evaluation", "tldr": "A comprehensive and reliable benchmark for evaluating critique ability of LLMs", "abstract": "Critique ability, i.e., the capability of Large Language Models (LLMs) to identify and rectify flaws in responses, is crucial for their applications in self-improvement and scalable oversight. While numerous studies have been proposed to evaluate critique ability of LLMs, their comprehensiveness and reliability are still limited. To overcome this problem, we introduce CriticEval, a novel benchmark designed to comprehensively and reliably evaluate critique ability of LLMs. Specifically, to ensure the comprehensiveness, CriticEval evaluates critique ability from four dimensions across nine diverse task scenarios. It evaluates both scalar-valued and textual critiques, targeting responses of varying quality. 
To ensure the reliability, a large number of critiques are annotated to serve as references, enabling GPT-4 to evaluate textual critiques reliably. Extensive evaluations of open-source and closed-source LLMs first validate the reliability of evaluation in CriticEval. Then, experimental results demonstrate the promising potential of open-source LLMs, the effectiveness of critique datasets and several intriguing relationships between the critique ability and some critical factors, including task types, response qualities and critique dimensions.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94609"} +{"video_file": "ZulWEWQOp9_39025320.mp4", "openreview_id": "ZulWEWQOp9", "slideslive_id": 39025320, "venue": "nips2024", "title": "Ctrl-X: Controlling Structure and Appearance for Text-To-Image Generation Without Guidance", "status": "Poster", "keywords": "Diffusion models;image-to-image translation;controllable generation;appearance transfer", "tldr": "Training-free and guidance-free structure and appearance control for text-to-image diffusion models from user-provided images", "abstract": "Recent controllable generation approaches such as FreeControl and Diffusion Self-Guidance bring fine-grained spatial and appearance control to text-to-image (T2I) diffusion models without training auxiliary modules. However, these methods optimize the latent embedding for each type of score function with longer diffusion steps, making the generation process time-consuming and limiting their flexibility and use. This work presents Ctrl-X, a simple framework for T2I diffusion controlling structure and appearance without additional training or guidance. Ctrl-X designs feed-forward structure control to enable the structure alignment with a structure image and semantic-aware appearance transfer to facilitate the appearance transfer from a user-input image. Extensive qualitative and quantitative experiments illustrate the superior performance of Ctrl-X on various condition inputs and model checkpoints. In particular, Ctrl-X supports novel structure and appearance control with arbitrary condition images of any modality, exhibits superior image quality and appearance transfer compared to existing works, and provides instant plug-and-play functionality to any T2I and text-to-video (T2V) diffusion model. See our project page for the code and an overview of the results: https://genforce.github.io/ctrl-x", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/94606"} +{"video_file": "ZwiG9KjfHV_39028717.mp4", "openreview_id": "ZwiG9KjfHV", "slideslive_id": 39028717, "venue": "nips2024", "title": "OneBit: Towards Extremely Low-bit Large Language Models", "status": "Poster", "keywords": "model quantization;weight-only quantization;extremely low-bit;onebit", "tldr": "This paper propose a novel model structure for 1-bit weight quantization and a corresponding parameter initialization method. Extensive experiments demonstrate that it has clear advantages over representative strong baselines.", "abstract": "Model quantification uses low bit-width values to represent the weight matrices of existing models to be quantized, which is a promising approach to reduce both storage and computational overheads of deploying highly anticipated LLMs. However, current quantization methods suffer severe performance degradation when the bit-width is extremely reduced, and thus focus on utilizing 4-bit or 8-bit values to quantize models. 
This paper boldly quantizes the weight matrices of LLMs to 1-bit, paving the way for the extremely low bit-width deployment of LLMs. For this target, we introduce a 1-bit model compressing framework named OneBit, including a novel 1-bit parameter representation method to better quantize LLMs as well as an effective parameter initialization method based on matrix decomposition to improve the convergence speed of the quantization framework. Sufficient experimental results indicate that OneBit achieves good performance (at least 81% of the non-quantized performance on LLaMA models) with robust training processes when only using 1-bit weight matrices.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94602"} +{"video_file": "ZxtaNh5UYB_39028116.mp4", "openreview_id": "ZxtaNh5UYB", "slideslive_id": 39028116, "venue": "nips2024", "title": "Learn more, but bother less: parameter efficient continual learning", "status": "Poster", "keywords": "continutal learning;LLMs", "tldr": "We propose a parameter-efficient approach for continual learning in LLMs", "abstract": "Large Language Models (LLMs) have demonstrated profound capabilities due to their extensive pre-training on diverse corpora. However, LLMs often struggle with catastrophic forgetting when engaged in sequential task learning. In this paper, we propose a novel parameter-efficient approach for continual learning in LLMs, which empirically investigates knowledge transfer from previously learned tasks to new tasks through low-rank matrix parameters, enhancing the learning of new tasks without significant interference. Our method employs sensitivity-based analysis of low-rank matrix parameters to identify knowledge-specific parameters between sequential tasks, which are used to initialize the low-rank matrix parameters in new tasks. To maintain orthogonality and minimize forgetting, we further involve the gradient projection technique that keeps the low-rank subspaces of each new task orthogonal to those of previous tasks. Our experimental results on continual learning benchmarks validate the efficacy of our proposed method, which outperforms existing state-of-the-art methods in reducing forgetting, enhancing task performance, and preserving the model's ability to generalize to unseen tasks.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/94599"} +{"video_file": "a1wf2N967T_39026370.mp4", "openreview_id": "a1wf2N967T", "slideslive_id": 39026370, "venue": "nips2024", "title": "Graph-based Unsupervised Disentangled Representation Learning via Multimodal Large Language Models", "status": "Poster", "keywords": "Disentangled representation learning;Interpretable and explainable AI;Multimodal large language model;Computer Vision", "tldr": "We propose an unsupervised graph-based disentanglement framework to learn the independent factors and their interrelations within complex data, upon the intergration of beta-VAE and multimodal large language models.", "abstract": "Disentangled representation learning (DRL) aims to identify and decompose underlying factors behind observations, thus facilitating data perception and generation. However, current DRL approaches often rely on the unrealistic assumption that semantic factors are statistically independent. In reality, these factors may exhibit correlations, which off-the-shelf solutions have yet to properly address. 
To tackle this challenge, we introduce a bidirectional weighted graph-based framework, to learn factorized attributes and their interrelations within complex data. Specifically, we propose a $\\beta$-VAE based module to extract factors as the initial nodes of the graph, and leverage the multimodal large language model (MLLM) to discover and rank latent correlations, thereby updating the weighted edges. By integrating these complementary modules, our model successfully achieves fine-grained, practical and unsupervised disentanglement. Experiments demonstrate our method's superior performance in disentanglement and reconstruction. Furthermore, the model inherits enhanced interpretability and generalizability from MLLMs.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/94595"}
{"video_file": "a560KLF3v5_39027867.mp4", "openreview_id": "a560KLF3v5", "slideslive_id": 39027867, "venue": "nips2024", "title": "Unelicitable Backdoors via Cryptographic Transformer Circuits", "status": "Poster", "keywords": "Backdoor attacks;Transformers;handcrafting model parameters;cryptographic circuits", "tldr": "We demonstrate how cryptographic backdoors can be embedded in transformer model weights and present a hardness scale for a class of backdoor detection methods.", "abstract": "The rapid proliferation of open-source language models significantly increases the risks of downstream backdoor attacks. These backdoors can introduce dangerous behaviours during model deployment and can evade detection by conventional cybersecurity monitoring systems. In this paper, we introduce a novel class of backdoors in transformer models, that, in contrast to prior art, are unelicitable in nature. Unelicitability prevents the defender from triggering the backdoor, making it impossible to properly evaluate ahead of deployment even if given full white-box access and using automated techniques, such as red-teaming or certain formal verification methods. We show that our novel construction is not only unelicitable thanks to using cryptographic techniques, but also has favourable robustness properties. We confirm these properties in empirical investigations, and provide evidence that our backdoors can withstand state-of-the-art mitigation strategies. Additionally, we expand on previous work by showing that our universal backdoors, while not completely undetectable in white-box settings, can be harder to detect than some existing designs. By demonstrating the feasibility of seamlessly integrating backdoors into transformer models, this paper fundamentally questions the efficacy of pre-deployment detection strategies. 
This offers new insights into the offence-defence balance in AI safety and security.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/94589"}
{"video_file": "aBMESB1Ajx_39028647.mp4", "openreview_id": "aBMESB1Ajx", "slideslive_id": 39028647, "venue": "nips2024", "title": "On the Sparsity of the Strong Lottery Ticket Hypothesis", "status": "Poster", "keywords": "strong lottery ticket hypothesis;random subset sum;neural network pruning;random neural network", "tldr": "We provide the first proof of the SLTH in classical settings with guarantees on the sparsity of the subnetworks.", "abstract": "Considerable research efforts have recently been made to show that a random neural network $N$ contains subnetworks capable of accurately approximating any given neural network that is sufficiently smaller than $N$, without any training. This line of research, known as the Strong Lottery Ticket Hypothesis (SLTH), was originally motivated by the weaker Lottery Ticket Hypothesis, which states that a sufficiently large random neural network $N$ contains sparse subnetworks that can be trained efficiently to achieve performance comparable to that of training the entire network $N$. Despite its original motivation, results on the SLTH have so far not provided any guarantee on the size of subnetworks. Such limitation is due to the nature of the main technical tool leveraged by these results, the Random Subset Sum (RSS) Problem. Informally, the RSS Problem asks how large a random i.i.d. sample $\\Omega$ should be so that we are able to approximate any number in $[-1,1]$, up to an error of $\\epsilon$, as the sum of a suitable subset of $\\Omega$.
We provide the first proof of the SLTH in classical settings, such as dense and equivariant networks, with guarantees on the sparsity of the subnetworks. Central to our results, is the proof of an essentially tight bound on the Random Fixed-Size Subset Sum Problem (RFSS), a variant of the RSS Problem in which we only ask for subsets of a given size, which is of independent interest.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94583"}
{"video_file": "aBmiyi7iA7_39026257.mp4", "openreview_id": "aBmiyi7iA7", "slideslive_id": 39026257, "venue": "nips2024", "title": "Hamiltonian Monte Carlo on ReLU Neural Networks is Inefficient", "status": "Poster", "keywords": "Hamiltonian Monte Carlo;efficiency;ReLU;optimal acceptance probability", "tldr": "We show that due to the non-differentiability of activation functions in the ReLU family, leapfrog HMC for networks with these activation functions has a large local error rate, making the method inefficient.", "abstract": "We analyze the error rates of the Hamiltonian Monte Carlo algorithm with leapfrog integrator for Bayesian neural network inference. We show that due to the non-differentiability of activation functions in the ReLU family, leapfrog HMC for networks with these activation functions has a large local error rate of $\\Omega(\\epsilon)$ rather than the classical error rate of $O(\\epsilon^3)$. This leads to a higher rejection rate of the proposals, making the method inefficient. 
We then verify our theoretical findings through empirical simulations as well as experiments on a real-world dataset that highlight the inefficiency of HMC inference on ReLU-based neural networks compared to analytical networks.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/94581"} +{"video_file": "aBpxukZS37_39026856.mp4", "openreview_id": "aBpxukZS37", "slideslive_id": 39026856, "venue": "nips2024", "title": "Diffusion PID: Interpreting Diffusion via Partial Information Decomposition", "status": "Poster", "keywords": "Diffusion;Interpretability;Information Decomposition;Mutual Information;Bias", "tldr": "PID of the contribution made by terms in the input text prompt to the output image", "abstract": "Text-to-image diffusion models have made significant progress in generating naturalistic images from textual inputs, and demonstrate the capacity to learn and represent complex visual-semantic relationships. While these diffusion models have achieved remarkable success, the underlying mechanisms driving their performance are not yet fully accounted for, with many unanswered questions surrounding what they learn, how they represent visual-semantic relationships, and why they sometimes fail to generalize. Our work presents Diffusion Partial Information Decomposition (DiffusionPID), a novel technique that applies information-theoretic principles to decompose the input text prompt into its elementary components, enabling a detailed examination of how individual tokens and their interactions shape the generated image. We introduce a formal approach to analyze the uniqueness, redundancy, and synergy terms by applying PID to the denoising model at both the image and pixel level. This approach enables us to characterize how individual tokens and their interactions affect the model output. We first present a fine-grained analysis of characteristics utilized by the model to uniquely localize specific concepts, we then apply our approach in bias analysis and show it can recover gender and ethnicity biases. Finally, we use our method to visually characterize word ambiguity and similarity from the model\u2019s perspective and illustrate the efficacy of our method for prompt intervention. Our results show that PID is a potent tool for evaluating and diagnosing text-to-image diffusion models. Link to project page: https://rbz-99.github.io/Diffusion-PID/.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94580"} +{"video_file": "aDQlAz09dS_39024507.mp4", "openreview_id": "aDQlAz09dS", "slideslive_id": 39024507, "venue": "nips2024", "title": "Efficient Contextual LLM Cascades through Budget-Constrained Policy Learning", "status": "Poster", "keywords": "LLMs; Budget constraint; Reinforcement learning", "tldr": "Under a long-term budget constraint, answer reasoning questions by picking the best combinations of LLMs and prompts.", "abstract": "Recent successes in natural language processing have led to the proliferation of large language models (LLMs) by multiple providers. Each LLM offering has different inference accuracy, monetary cost, and latency, and their accuracy further depends on the exact wording of the question (i.e., the specific prompt). At the same time, users often have a limit on monetary budget and latency to answer all their questions, and they do not know which LLMs to choose for each question to meet their accuracy and long term budget requirements. 
To navigate this rich design space, we propose TREACLE (Thrifty Reasoning via Context-Aware LLM and Prompt Selection), a reinforcement learning policy that jointly selects the model and prompting scheme while respecting the user's monetary cost and latency constraints. TREACLE uses the problem context, including question text embeddings (reflecting the type or difficulty of a query) and the response history (reflecting the consistency of previous responses) to make smart decisions. Our evaluations on standard reasoning datasets (GSM8K, CSQA, and LLC) with various LLMs and prompts show that TREACLE enables cost savings of up to 85% compared to baselines, while maintaining high accuracy. Importantly, it provides the user with the ability to gracefully trade off accuracy for cost.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/94574"}
{"video_file": "aFB97F8QSF_39026721.mp4", "openreview_id": "aFB97F8QSF", "slideslive_id": 39026721, "venue": "nips2024", "title": "Plant-and-Steal: Truthful Fair Allocations via Predictions", "status": "Poster", "keywords": "mechanism design;learning-augmented;fairness;algorithms with predictions.", "tldr": "We devise truthful mechanisms for approximating the Maximin-Share (MMS) allocation via a learning augmented framework.", "abstract": "We study truthful mechanisms for approximating the Maximin-Share (MMS) allocation of agents with additive valuations for indivisible goods. Algorithmically, constant factor approximations exist for the problem for any number of agents. When adding incentives to the mix, a jarring result by Amanatidis, Birmpas, Christodoulou, and Markakis [EC 2017] shows that the best possible approximation for two agents and $m$ items is $\\lfloor \\frac{m}{2} \\rfloor$. We adopt a learning-augmented framework to investigate what is possible when some prediction on the input is given. For two agents, we give a truthful mechanism that takes agents' ordering over items as prediction. When the prediction is accurate, we give a $2$-approximation to the MMS (consistency), and when the prediction is off, we still get an $\\lceil \\frac{m}{2} \\rceil$-approximation to the MMS (robustness). We further show that the mechanism's performance degrades gracefully in the number of ``mistakes\" in the prediction; i.e., we interpolate (up to constant factors) between the two extremes: when there are no mistakes, and when there is a maximum number of mistakes. We also show an impossibility result on the obtainable consistency for mechanisms with finite robustness. For the general case of $n \\geq 2$ agents, we give a 2-approximation mechanism for accurate predictions, with relaxed fallback guarantees. 
Finally, we give experimental results which illustrate when different components of our framework, made to insure consistency and robustness, come into play.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/94573"} +{"video_file": "aFWx1N84Fe_39024399.mp4", "openreview_id": "aFWx1N84Fe", "slideslive_id": 39024399, "venue": "nips2024", "title": "The Map Equation Goes Neural: Mapping Network Flows with Graph Neural Networks", "status": "Poster", "keywords": "community detection;graph clustering;random walk;map equation", "tldr": "We adapt the map equation, an information-theoretic objective function for community detection, as a differentiable loss function for optimisation with neural networks and gradient descent.", "abstract": "Community detection is an essential tool for unsupervised data exploration and revealing the organisational structure of networked systems. With a long history in network science, community detection typically relies on objective functions, optimised with custom-tailored search algorithms, but often without leveraging recent advances in deep learning. Recently, first works have started incorporating such objectives into loss functions for deep graph clustering and pooling. We consider the map equation, a popular information-theoretic objective function for unsupervised community detection, and express it in differentiable tensor form for optimisation through gradient descent. Our formulation turns the map equation compatible with any neural network architecture, enables end-to-end learning, incorporates node features, and chooses the optimal number of clusters automatically, all without requiring explicit regularisation. Applied to unsupervised graph clustering tasks, we achieve competitive performance against state-of-the-art deep graph clustering baselines in synthetic and real-world datasets.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94570"} +{"video_file": "aIPwlkdOut_39025474.mp4", "openreview_id": "aIPwlkdOut", "slideslive_id": 39025474, "venue": "nips2024", "title": "Enhancing Preference-based Linear Bandits via Human Response Time", "status": "Oral", "keywords": "human response time;preference learning;linear bandits;dueling bandits;psychology;economics", "tldr": "Leveraging human response times to accelerate preference learning from binary choices", "abstract": "Interactive preference learning systems infer human preferences by presenting queries as pairs of options and collecting binary choices. Although binary choices are simple and widely used, they provide limited information about preference strength. To address this, we leverage human response times, which are inversely related to preference strength, as an additional signal. We propose a computationally efficient method that combines choices and response times to estimate human utility functions, grounded in the EZ diffusion model from psychology. Theoretical and empirical analyses show that for queries with strong preferences, response times complement choices by providing extra information about preference strength, leading to significantly improved utility estimation. We incorporate this estimator into preference-based linear bandits for fixed-budget best-arm identification. Simulations on three real-world datasets demonstrate that using response times significantly accelerates preference learning compared to choice-only approaches. 
Additional materials, such as code, slides, and talk video, are available at https://shenlirobot.github.io/pages/NeurIPS24.html.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/94568"} +{"video_file": "aIeXn5103e_39026713.mp4", "openreview_id": "aIeXn5103e", "slideslive_id": 39026713, "venue": "nips2024", "title": "Samba: Severity-aware Recurrent Modeling for Cross-domain Medical Image Grading", "status": "Poster", "keywords": "Medical Image Grading;Domain Generalization;Selective State Space;Gaussian Mixture Model", "tldr": "Propose a Severity-aware Recurrent Modeling, dubbed as Samba, for medical image grading problems on unseen target domains", "abstract": "Disease grading is a crucial task in medical image analysis. Due to the continuous progression of diseases, i.e., the variability within the same level and the similarity between adjacent stages, accurate grading is highly challenging. Furthermore, in real-world scenarios, models trained on limited source domain datasets should also be capable of handling data from unseen target domains. Due to the cross-domain variants, the feature distribution between source and unseen target domains can be dramatically different, leading to a substantial decrease in model performance. To address these challenges in cross-domain disease grading, we propose a Severity-aware Recurrent Modeling (Samba) method in this paper. As the core objective of most staging tasks is to identify the most severe lesions, which may only occupy a small portion of the image, we propose to encode image patches in a sequential and recurrent manner. Specifically, a state space model is tailored to store and transport the severity information by hidden states. Moreover, to mitigate the impact of cross-domain variants, an Expectation-Maximization (EM) based state recalibration mechanism is designed to map the patch embeddings into a more compact space. We model the feature distributions of different lesions through the Gaussian Mixture Model (GMM) and reconstruct the intermediate features based on learnable severity bases. Extensive experiments show the proposed Samba outperforms the VMamba baseline by an average accuracy of 23.5%, 5.6% and 4.1% on the cross-domain grading of fatigue fracture, breast cancer and diabetic retinopathy, respectively. Source code is available at \\url{https://github.com/BiQiWHU/Samba}.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/94567"} +{"video_file": "aJGKs7QOZM_39025457.mp4", "openreview_id": "aJGKs7QOZM", "slideslive_id": 39025457, "venue": "nips2024", "title": "Mechanism design augmented with output advice", "status": "Spotlight", "keywords": "mechanism design;output advice;quality of recommendation;facility location;scheduling;house allocation;auctions", "tldr": "Our paper proposes a mechanism design model augmented with an output prediction and applies it to various problems using a universal error measure in the algorithmic design with predictions setting.", "abstract": "Our work revisits the design of mechanisms via the learning-augmented framework. In this model, the algorithm is enhanced with imperfect (machine-learned) information concerning the input, usually referred to as prediction. The goal is to design algorithms whose performance degrades gently as a function of the prediction error and, in particular, perform well if the prediction is accurate, but also provide a worst-case guarantee under any possible error. 
This framework has been successfully applied recently to various mechanism design settings, where in most cases the mechanism is provided with a prediction about the types of the players.
We adopt a perspective in which the mechanism is provided with an output recommendation. We make no assumptions about the quality of the suggested outcome, and the goal is to use the recommendation to design mechanisms with low approximation guarantees whenever the recommended outcome is reasonable, but at the same time to provide worst-case guarantees whenever the recommendation significantly deviates from the optimal one. We propose a generic, universal measure, which we call quality of recommendation, to evaluate mechanisms across various information settings. We demonstrate how this new metric can provide refined analysis in existing results.
This model introduces new challenges, as the mechanism receives limited information comparing to settings that use predictions about the types of the agents. We study, through this lens, several well-studied mechanism design paradigms, devising new mechanisms, but also providing refined analysis for existing ones, using as a metric the quality of recommendation. We complement our positive results, by exploring the limitations of known classes of strategyproof mechanisms that can be devised using output recommendation.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/94563"}
{"video_file": "aLzA7MSc6Y_39027487.mp4", "openreview_id": "aLzA7MSc6Y", "slideslive_id": 39027487, "venue": "nips2024", "title": "Symmetric Linear Bandits with Hidden Symmetry", "status": "Poster", "keywords": "Bandit theory;group theory;symmetry;sparsity.", "tldr": "We study the symmetry structure in high-dimensional linear bandits. With certain assumptions, our algorithm achieves regret that depends logarithmically on the ambient dimension.", "abstract": "High-dimensional linear bandits with low-dimensional structure have received considerable attention in recent studies due to their practical significance. The most common structure in the literature is sparsity. However, it may not be available in practice. Symmetry, where the reward is invariant under certain groups of transformations on the set of arms, is another important inductive bias in the high-dimensional case that covers many standard structures, including sparsity. In this work, we study high-dimensional symmetric linear bandits where the symmetry is hidden from the learner, and the correct symmetry needs to be learned in an online setting. We examine the structure of a collection of hidden symmetry and provide a method based on model selection within the collection of low-dimensional subspaces. Our algorithm achieves a regret bound of $O(d_0^{2/3} T^{2/3} \\log(d))$, where $d$ is the ambient dimension which is potentially very large, and $d_0$ is the dimension of the true low-dimensional subspace such that $d_0 \\ll d$. 
With an extra assumption on well-separated models, we can further improve the regret to $O(d_0 \\sqrt{T} \\log(d))$.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/94561"}
{"video_file": "aRokfUfIQs_39026858.mp4", "openreview_id": "aRokfUfIQs", "slideslive_id": 39026858, "venue": "nips2024", "title": "Sequential Signal Mixing Aggregation for Message Passing Graph Neural Networks", "status": "Poster", "keywords": "graph neural networks;message passing neural networks;invariant aggregation", "tldr": "We introduce SSMA, a novel aggregation for MPGNNs which treats the neighbor features as 2D discrete signals and sequentially convolves them, inherently enhancing the ability to mix features attributed to distinct neighbors.", "abstract": "Message Passing Graph Neural Networks (MPGNNs) have emerged as the preferred method for modeling complex interactions across diverse graph entities. While the theory of such models is well understood, their aggregation module has not received sufficient attention. Sum-based aggregators have solid theoretical foundations regarding their separation capabilities. However, practitioners often prefer using more complex aggregations and mixtures of diverse aggregations. In this work, we unveil a possible explanation for this gap. We claim that sum-based aggregators fail to \"mix\" features belonging to distinct neighbors, preventing them from succeeding at downstream tasks. To this end, we introduce Sequential Signal Mixing Aggregation (SSMA), a novel plug-and-play aggregation for MPGNNs. SSMA treats the neighbor features as 2D discrete signals and sequentially convolves them, inherently enhancing the ability to mix features attributed to distinct neighbors. By performing extensive experiments, we show that when combining SSMA with well-established MPGNN architectures, we achieve substantial performance gains across various benchmarks, achieving new state-of-the-art results in many settings. We published our code at https://almogdavid.github.io/SSMA/.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/94554"}
{"video_file": "aUHSwmHRVb_39026585.mp4", "openreview_id": "aUHSwmHRVb", "slideslive_id": 39026585, "venue": "nips2024", "title": "MotionTTT: 2D Test-Time-Training Motion Estimation for 3D Motion Corrected MRI", "status": "Poster", "keywords": "deep learning based motion estimation;3D imaging;MRI;motion artifacts;medical imaging;test-time-training;motion correction", "tldr": "We propose MotionTTT a deep learning based test-time-training approach for motion estimation for the problem of motion artifact correction in 3D MRI.", "abstract": "A major challenge of the long measurement times in magnetic resonance imaging (MRI), an important medical imaging technology, is that patients may move during data acquisition. This leads to severe motion artifacts in the reconstructed images and volumes. In this paper, we propose MotionTTT a deep learning-based test-time-training (TTT) method for accurate motion estimation. The key idea is that a neural network trained for motion-free reconstruction has a small loss if there is no motion, thus optimizing over motion parameters passed through the reconstruction network enables accurate estimation of motion. The estimated motion parameters enable to correct for the motion and to reconstruct accurate motion-corrected images. 
Our method uses 2D reconstruction networks to estimate rigid motion in 3D, and constitutes the first deep learning based method for 3D rigid motion estimation towards 3D-motion-corrected MRI. We show that our method can provably reconstruct motion parameters for a simple signal and neural network model. We demonstrate the effectiveness of our method for both retrospectively simulated motion and prospectively collected real motion-corrupted data. Code is available at \\url{https://github.com/MLI-lab/MRI_MotionTTT}.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/94551"} +{"video_file": "aVK4JFpegy_39026979.mp4", "openreview_id": "aVK4JFpegy", "slideslive_id": 39026979, "venue": "nips2024", "title": "Evaluating the World Model Implicit in a Generative Model", "status": "Spotlight", "keywords": "world models;large language models;evaluation", "tldr": "We develop new evaluation metrics for world model recovery based on formal results in language theory, revealing that generative models often harbor inconsistent world models despite performing well on standard tests across various domains", "abstract": "Recent work suggests that large language models may implicitly learn world models. How should we assess this possibility? We formalize this question for the case where the underlying reality is governed by a deterministic finite automaton. This includes problems as diverse as simple logical reasoning, geographic navigation, game-playing, and chemistry. We propose new evaluation metrics for world model recovery inspired by the classic Myhill-Nerode theorem from language theory. We illustrate their utility in three domains: game playing, logic puzzles, and navigation. In all domains, the generative models we consider do well on existing diagnostics for assessing world models, but our evaluation metrics reveal their world models to be far less coherent than they appear. Such incoherence creates fragility: using a generative model to solve related but subtly different tasks can lead to failures. Building generative models that meaningfully capture the underlying logic of the domains they model would be immensely valuable; our results suggest new ways to assess how close a given model is to that goal.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/94550"} +{"video_file": "aXApeuAYkg_39027620.mp4", "openreview_id": "aXApeuAYkg", "slideslive_id": 39027620, "venue": "nips2024", "title": "CA-SSLR: Condition-Aware Self-Supervised Learning Representation for Generalized Speech Processing", "status": "Poster", "keywords": "multilingual speech recognition;speaker verification;self-supervised learning representation;conditional adaptation", "tldr": "We propose a condition-aware learning method to enhance the performance and generalization ability of self-supervised representations.", "abstract": "We introduce Condition-Aware Self-Supervised Learning Representation (CA-SSLR), a generalist conditioning model broadly applicable to various speech-processing tasks. Compared to standard fine-tuning methods that optimize for downstream models, CA-SSLR integrates language and speaker embeddings from earlier layers, making the SSL model aware of the current language and speaker context. This approach reduces the reliance on the input audio features while preserving the integrity of the base SSLR. CA-SSLR improves the model\u2019s capabilities and demonstrates its generality on unseen tasks with minimal task-specific tuning. 
Our method employs linear modulation to dynamically adjust internal representations, enabling fine-grained adaptability without significantly altering the original model behavior. Experiments show that CA-SSLR reduces the number of trainable parameters, mitigates overfitting, and excels in under-resourced and unseen tasks. Specifically, CA-SSLR achieves a 10% relative reduction in LID errors, a 37% improvement in ASR CER on the ML-SUPERB benchmark, and a 27% decrease in SV EER on VoxCeleb-1, demonstrating its effectiveness.", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/94546"} +{"video_file": "aXS1pwMa8I_39024842.mp4", "openreview_id": "aXS1pwMa8I", "slideslive_id": 39024842, "venue": "nips2024", "title": "Learning 3D Equivariant Implicit Function with Patch-Level Pose-Invariant Representation", "status": "Poster", "keywords": "Equivariant implicit neural representation; Pose-invariant representation; Generalizability to 3D objects; Robustness to transformation", "tldr": "We design the patch-level pose-invariant 3D feature representation to represent the 3D shape, resulting in the implicit displacement estimation of 3D query points based on the local patch-level pose-invariant representation.", "abstract": "Implicit neural representation gains popularity in modeling the continuous 3D surface for 3D representation and reconstruction. In this work, we are motivated by the fact that the local 3D patches repeatedly appear on 3D shapes/surfaces if the factor of poses is removed. Based on this observation, we propose the 3D patch-level equivariant implicit function (PEIF) based on the 3D patch-level pose-invariant representation, allowing us to reconstruct 3D surfaces by estimating equivariant displacement vector fields for query points. Specifically, our model is based on the pose-normalized query/patch pairs and enhanced by the proposed intrinsic patch geometry representation, modeling the intrinsic 3D patch geometry feature by learnable multi-head memory banks. Extensive experiments show that our model achieves state-of-the-art performance on multiple surface reconstruction datasets, and also exhibits better generalization to crossdataset shapes and robustness to arbitrary rotations. Our code will be available at https://github.com/mathXin112/PEIF.git.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94544"} +{"video_file": "aeYNVtTo7o_39026890.mp4", "openreview_id": "aeYNVtTo7o", "slideslive_id": 39026890, "venue": "nips2024", "title": "Cell ontology guided transcriptome foundation model", "status": "Spotlight", "keywords": "Cell ontology graph+transcriptome foundation model+large-scale pre-training+cell representation learning+single cell RNA sequencing data", "tldr": "A cell ontology guided transcriptome foundation model for enhanced cell representation learning, which is pre-trained on 20 million cells from CellxGene database leveraging their cell-type labels aligned with cell ontology graph", "abstract": "Transcriptome foundation models (TFMs) hold great promises of deciphering the transcriptomic language that dictate diverse cell functions by self-supervised learning on large-scale single-cell gene expression data, and ultimately unraveling the complex mechanisms of human diseases. However, current TFMs treat cells as independent samples and ignore the taxonomic relationships between cell types, which are available in cell ontology graphs. 
We argue that effectively leveraging this ontology information during the TFM pre-training can improve learning biologically meaningful gene co-expression patterns while preserving TFM as a general purpose foundation model for downstream zero-shot and fine-tuning tasks. To this end, we present single cell, Cell-ontology guided TFM (scCello). We introduce cell-type coherence loss and ontology alignment loss, which are minimized along with the masked gene expression prediction loss during the pre-training. The novel loss component guide scCello to learn the cell-type-specific representation and the structural relation between cell types from the cell ontology graph, respectively. We pre-trained scCello on 22 million cells from CellxGene database leveraging their cell-type labels mapped to the cell ontology graph from Open Biological and Biomedical Ontology Foundry. Our TFM demonstrates competitive generalization and transferability performance over the existing TFMs on biologically important tasks including identifying novel cell types of unseen cells, prediction of cell-type-specific marker genes, and cancer drug responses. Source code and model weights are available at https://github.com/DeepGraphLearning/scCello.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/94537"} +{"video_file": "ag7piyoyut_39027335.mp4", "openreview_id": "ag7piyoyut", "slideslive_id": 39027335, "venue": "nips2024", "title": "Incorporating Surrogate Gradient Norm to Improve Offline Optimization Techniques", "status": "Poster", "keywords": "Black-Box Optimization", "tldr": "This paper incorporates a notion of model sharpness into the training loss of the surrogate as a sharpness regularizer to boost the performance of existing offline optimization methods..", "abstract": "Offline optimization has recently emerged as an increasingly popular approach to mitigate the prohibitively expensive cost of online experimentation. The key idea is to learn a surrogate of the black-box function that underlines the target experiment using a static (offline) dataset of its previous input-output queries. Such an approach is, however, fraught with an out-of-distribution issue where the learned surrogate becomes inaccurate outside the offline data regimes. To mitigate this, existing offline optimizers have proposed numerous conditioning techniques to prevent the learned surrogate from being too erratic. Nonetheless, such conditioning strategies are often specific to particular surrogate or search models, which might not generalize to a different model choice. This motivates us to develop a model-agnostic approach instead, which incorporates a notion of model sharpness into the training loss of the surrogate as a regularizer. Our approach is supported by a new theoretical analysis demonstrating that reducing surrogate sharpness on the offline dataset provably reduces its generalized sharpness on unseen data. Our analysis extends existing theories from bounding generalized prediction loss (on unseen data) with loss sharpness to bounding the worst-case generalized surrogate sharpness with its empirical estimate on training data, providing a new perspective on sharpness regularization. Our extensive experimentation on a diverse range of optimization tasks also shows that reducing surrogate sharpness often leads to significant improvement, marking (up to) a noticeable 9.6% performance boost. 
Our code is publicly available at https://github.com/cuong-dm/IGNITE.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94535"} +{"video_file": "ahvOhPkkMx_39028451.mp4", "openreview_id": "ahvOhPkkMx", "slideslive_id": 39028451, "venue": "nips2024", "title": "Zipper: Addressing Degeneracy in Algorithm-Agnostic Inference", "status": "Spotlight", "keywords": "Asymptotic normality;Cross-fitting;Goodness-of-fit testing;Model-free;Variable importance.", "tldr": "Addresses degeneracy issues in algorithm/model-agnostic goodness-of-fit inference such as assessing variable importance.", "abstract": "The widespread use of black box prediction methods has sparked an increasing interest in algorithm/model-agnostic approaches for quantifying goodness-of-fit, with direct ties to specification testing, model selection and variable importance assessment. A commonly used framework involves defining a predictiveness criterion, applying a cross-fitting procedure to estimate the predictiveness, and utilizing the difference in estimated predictiveness between two models as the test statistic. However, even after standardization, the test statistic typically fails to converge to a non-degenerate distribution under the null hypothesis of equal goodness, leading to what is known as the degeneracy issue. To addresses this degeneracy issue, we present a simple yet effective device, Zipper. It draws inspiration from the strategy of additional splitting of testing data, but encourages an overlap between two testing data splits in predictiveness evaluation. Zipper binds together the two overlapping splits using a slider parameter that controls the proportion of overlap. Our proposed test statistic follows an asymptotically normal distribution under the null hypothesis for any fixed slider value, guaranteeing valid size control while enhancing power by effective data reuse. Finite-sample experiments demonstrate that our procedure, with a simple choice of the slider, works well across a wide range of settings.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/94534"} +{"video_file": "anyZgGLQ6n_39026467.mp4", "openreview_id": "anyZgGLQ6n", "slideslive_id": 39026467, "venue": "nips2024", "title": "Offline Reinforcement Learning with OOD State Correction and OOD Action Suppression", "status": "Poster", "keywords": "offline reinforcement learning", "tldr": "This work systematically analyzes the underexplored OOD state issue in offline RL and proposes a simple yet effective approach unifying (value-aware) OOD state correction and OOD action suppression.", "abstract": "In offline reinforcement learning (RL), addressing the out-of-distribution (OOD) action issue has been a focus, but we argue that there exists an OOD state issue that also impairs performance yet has been underexplored. Such an issue describes the scenario when the agent encounters states out of the offline dataset during the test phase, leading to uncontrolled behavior and performance degradation. To this end, we propose SCAS, a simple yet effective approach that unifies OOD state correction and OOD action suppression in offline RL. Technically, SCAS achieves value-aware OOD state correction, capable of correcting the agent from OOD states to high-value in-distribution states. Theoretical and empirical results show that SCAS also exhibits the effect of suppressing OOD actions. 
On standard offline RL benchmarks, SCAS achieves excellent performance without additional hyperparameter tuning. Moreover, benefiting from its OOD state correction feature, SCAS demonstrates enhanced robustness against environmental perturbations.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94532"} +{"video_file": "aq3I5B6GLG_39026459.mp4", "openreview_id": "aq3I5B6GLG", "slideslive_id": 39026459, "venue": "nips2024", "title": "Foundations of Multivariate Distributional Reinforcement Learning", "status": "Poster", "keywords": "distributional reinforcement learning;rl theory;dynamic programming;temporal difference learning;successor features;successor representation", "tldr": "We introduce the first provably convergent oracle-free algorithms for distributional reinforcement learning with multivariate reward functions.", "abstract": "In reinforcement learning (RL), the consideration of multivariate reward signals has led to fundamental advancements in multi-objective decision-making, transfer learning, and representation learning. This work introduces the first oracle-free and computationally-tractable algorithms for provably convergent multivariate distributional dynamic programming and temporal difference learning. Our convergence rates match the familiar rates in the scalar reward setting, and additionally provide new insights into the fidelity of approximate return distribution representations as a function of the reward dimension. Surprisingly, when the reward dimension is larger than\n1\n, we show that standard analysis of categorical TD learning fails, which we resolve with a novel projection onto the space of mass-\n1\nsigned measures. Finally, with the aid of our technical results and simulations, we identify tradeoffs between distribution representations that influence the performance of multivariate distributional RL in practice.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94528"} +{"video_file": "atDcnWqG5n_39024379.mp4", "openreview_id": "atDcnWqG5n", "slideslive_id": 39024379, "venue": "nips2024", "title": "Logical characterizations of recurrent graph neural networks with reals and floats", "status": "Poster", "keywords": "graph neural networks;logic;distributed computing;descriptive complexity", "tldr": "The paper provides logical characterizations of recurrent graph neural network models.", "abstract": "In pioneering work from 2019, Barcel\u00f3 and coauthors identified logics that precisely match the expressive power of constant iteration-depth graph neural networks (GNNs) relative to properties definable in first-order logic. In this article, we give exact logical characterizations of recurrent GNNs in two scenarios: (1) in the setting with floating-point numbers and (2) with reals. For floats, the formalism matching recurrent GNNs is a rule-based modal logic with counting, while for reals we use a suitable infinitary modal logic, also with counting. These results give exact matches between logics and GNNs in the recurrent setting without relativising to a background logic in either case, but using some natural assumptions about floating-point arithmetic. Applying our characterizations, we also prove that, relative to graph properties definable in monadic second-order logic (MSO), our infinitary and rule-based logics are equally expressive. 
This implies that recurrent GNNs with reals and floats have the same expressive power over MSO-definable properties and shows that, for such properties, also recurrent GNNs with reals are characterized by a (finitary!) rule-based modal logic. In the general case, in contrast, the expressive power with floats is weaker than with reals. In addition to logic-oriented results, we also characterize recurrent GNNs, with both reals and floats, via distributed automata, drawing links to distributed computing models.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/94525"} +{"video_file": "axX62CQJpa_39025431.mp4", "openreview_id": "axX62CQJpa", "slideslive_id": 39025431, "venue": "nips2024", "title": "Streaming Long Video Understanding with Large Language Models", "status": "Poster", "keywords": "LLM;Long Video Understanding;Memory-Propagated Streaming Encoding;Adaptive Memory Selection", "tldr": "We propose a memory-propagated streaming encoding architecture with adaptive memory selection for long video understanding with LLM.", "abstract": "This paper presents VideoStreaming, an advanced vision-language large model (VLLM) for video understanding, that capably understands arbitrary-length video with a constant number of video tokens streamingly encoded and adaptively selected. The challenge of video understanding in the vision language area mainly lies in the significant computational burden caused by the great number of tokens extracted from long videos. Previous works rely on sparse sampling or frame compression to reduce tokens. However, such approaches either disregard temporal information in a long time span or sacrifice spatial details, resulting in flawed compression. To address these limitations, our VideoStreaming has two core designs: Memory-Propagated Streaming Encoding and Adaptive Memory Selection. The Memory-Propagated Streaming Encoding architecture segments long videos into short clips and sequentially encodes each clip with a propagated memory. In each iteration, we utilize the encoded results of the preceding clip as historical memory, which is integrated with the current clip to distill a condensed representation that encapsulates the video content up to the current timestamp. This method not only incorporates long-term temporal dynamics into the streaming encoding process but also yields a fixed-length memory as a global representation for arbitrarily long videos. After the encoding process, the Adaptive Memory Selection strategy selects a constant number of question-related memories from all the historical memories, and feeds them into the LLM to generate informative responses. The question-related selection reduces redundancy within the memories, enabling efficient and precise video understanding. Meanwhile, the disentangled video extraction and reasoning design allows the LLM to answer different questions about a video by directly selecting corresponding memories, without the need to encode the whole video for each question. 
Through extensive experiments, our model achieves superior performance and higher efficiency on long video benchmarks, showcasing precise temporal comprehension for detailed question answering.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94520"} +{"video_file": "b172ac0R4L_39027484.mp4", "openreview_id": "b172ac0R4L", "slideslive_id": 39027484, "venue": "nips2024", "title": "Using Noise to Infer Aspects of Simplicity Without Learning", "status": "Poster", "keywords": "interpretable ML;simple models;Rashomon sets", "tldr": "Our work offers insights on whether simple-yet-accurate machine learning models are likely to exist, based on knowledge of noise levels in the data generation process.", "abstract": "Noise in data significantly influences decision-making in the data science process. In fact, it has been shown that noise in data generation processes leads practitioners to find simpler models. However, an open question still remains: what is the degree of model simplification we can expect under different noise levels? In this work, we address this question by investigating the relationship between the amount of noise and model simplicity across various hypothesis spaces, focusing on decision trees and linear models. We formally show that noise acts as an implicit regularizer for several different noise models. Furthermore, we prove that Rashomon sets (sets of near-optimal models) constructed with noisy data tend to contain simpler models than corresponding Rashomon sets with non-noisy data. Additionally, we show that noise expands the set of ``good'' features and consequently enlarges the set of models that use at least one good feature. Our work offers theoretical guarantees and practical insights for practitioners and policymakers on whether simple-yet-accurate machine learning models are likely to exist, based on knowledge of noise levels in the data generation process.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/94517"} +{"video_file": "b1XPHC7MQB_39027294.mp4", "openreview_id": "b1XPHC7MQB", "slideslive_id": 39027294, "venue": "nips2024", "title": "Invertible Consistency Distillation for Text-Guided Image Editing in Around 7 Steps", "status": "Poster", "keywords": "text-guided image editing;consistency distillation;diffusion models;image inversion", "tldr": "zero shot image editing with consistency distillation", "abstract": "Diffusion distillation represents a highly promising direction for achieving faithful text-to-image generation in a few sampling steps. However, despite recent successes, existing distilled models still do not provide the full spectrum of diffusion abilities, such as real image inversion, which enables many precise image manipulation methods. This work aims to enrich distilled text-to-image diffusion models with the ability to effectively encode real images into their latent space. To this end, we introduce invertible Consistency Distillation (iCD), a generalized consistency distillation framework that facilitates both high-quality image synthesis and accurate image encoding in only 3-4 inference steps. Though the inversion problem for text-to-image diffusion models gets exacerbated by high classifier-free guidance scales, we notice that dynamic guidance significantly reduces reconstruction errors without noticeable degradation in generation performance. 
As a result, we demonstrate that iCD equipped with dynamic guidance may serve as a highly effective tool for zero-shot text-guided image editing, competing with more expensive state-of-the-art alternatives.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94516"} +{"video_file": "b1ggjW00NI_39027465.mp4", "openreview_id": "b1ggjW00NI", "slideslive_id": 39027465, "venue": "nips2024", "title": "Don't Look Twice: Faster Video Transformers with Run-Length Tokenization", "status": "Spotlight", "keywords": "video understanding;vision transformers;efficient transformers", "tldr": "We make transformers 40% faster on video, with no performance drop, by identifying consecutive runs of tokens repeated in time, and treating them as a single token with variable length.", "abstract": "Video transformers are slow to train due to extremely large numbers of input tokens, even though many video tokens are repeated over time. Existing methods to remove uninformative tokens either have significant overhead, negating any speedup, or require tuning for different datasets and examples. We present Run-Length Tokenization (RLT), a simple approach to speed up video transformers inspired by run-length encoding for data compression. RLT efficiently finds and removes `runs' of patches that are repeated over time before model inference, then replaces them with a single patch and a positional encoding to represent the resulting token's new length. Our method is content-aware, requiring no tuning for different datasets, and fast, incurring negligible overhead. RLT yields a large speedup in training, reducing the wall-clock time to fine-tune a video transformer by 30% while matching baseline model performance. RLT also works without training, increasing model throughput by 35% with only 0.1% drop in accuracy. RLT speeds up training at 30 FPS by more than 100%, and on longer video datasets, can reduce the token count by up to 80%. Our project page is at rccchoudhury.github.io/projects/rlt.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94514"} +{"video_file": "b7hmPlOqr8_39028030.mp4", "openreview_id": "b7hmPlOqr8", "slideslive_id": 39028030, "venue": "nips2024", "title": "Learning Frequency-Adapted Vision Foundation Model for Domain Generalized Semantic Segmentation", "status": "Poster", "keywords": "Semantic Segmentation;Domain Generalization;Vision Foundation Model;Haar Wavelets", "tldr": "Learning style-invariant property from vision fundation model from the frequency space by Haar wavelet transform.", "abstract": "The emerging vision foundation model (VFM) has inherited the ability to generalize to unseen images. Nevertheless, the key challenge of domain-generalized semantic segmentation (DGSS) lies in the domain gap attributed to the cross-domain styles, i.e., the variance of urban landscape and environment dependencies. Hence, maintaining the style-invariant property with varying domain styles becomes the key bottleneck in harnessing VFM for DGSS. The frequency space after Haar wavelet transformation provides a feasible way to decouple the style information from the domain-invariant content, since the content and style information are retained in the low- and high- frequency components of the space, respectively. To this end, we propose a novel Frequency-Adapted (FADA) learning scheme to advance the frontier. Its overall idea is to separately tackle the content and style information by frequency tokens throughout the learning process. 
Particularly, the proposed FADA consists of two branches, i.e., low- and high- frequency branches. The former one is able to stabilize the scene content, while the latter one learns the scene styles and eliminates its impact to DGSS. Experiments conducted on various DGSS settings show the state-of-the-art performance of our FADA and its versatility to a variety of VFMs. Source code is available at \\url{https://github.com/BiQiWHU/FADA}.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94511"} +{"video_file": "bCqIx5Q8qX_39028428.mp4", "openreview_id": "bCqIx5Q8qX", "slideslive_id": 39028428, "venue": "nips2024", "title": "MALT Powers Up Adversarial Attacks", "status": "Poster", "keywords": "Adversarial Examples;Robustness;Neural Networks;Classification;Adversarial Attacks", "tldr": "We present a novel adversarial attack MALT (Mesoscopic Almost Linear Targeting), which wins over the current SOTA AutoAttack on several datasets and robust models, while being five times faster.", "abstract": "Current adversarial attacks for multi-class classifiers choose potential adversarial target classes naively based on the classifier's confidence levels. We present a novel adversarial targeting method, \\textit{MALT - Mesoscopic Almost Linearity Targeting}, based on local almost linearity assumptions. Our attack wins over the current state of the art AutoAttack on the standard benchmark datasets CIFAR-100 and Imagenet and for different robust models. In particular, our attack uses a \\emph{five times faster} attack strategy than AutoAttack's while successfully matching AutoAttack's successes and attacking additional samples that were previously out of reach. We additionally prove formally and demonstrate empirically that our targeting method, although inspired by linear predictors, also applies to non-linear models.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/94506"} +{"video_file": "bEunGps83o_39026653.mp4", "openreview_id": "bEunGps83o", "slideslive_id": 39026653, "venue": "nips2024", "title": "Fair Allocation in Dynamic Mechanism Design", "status": "Poster", "keywords": "Mechanism Design;Auctions;Fairness", "tldr": "We study a dynamic mechanism design problem subject to a fairness constraint that ensures a minimum average allocation for each group.", "abstract": "We consider a dynamic mechanism design problem where an auctioneer sells an indivisible good to two groups of buyers in every round, for a total of\nT\nrounds. The auctioneer aims to maximize their discounted overall revenue while adhering to a fairness constraint that guarantees a minimum average allocation for each group. We begin by studying the static case (\nT\n=\n1\n) and establish that the optimal mechanism involves two types of subsidization: one that increases the overall probability of allocation to all buyers, and another that favors the group which otherwise has a lower probability of winning the item. We then extend our results to the dynamic case by characterizing a set of recursive functions that determine the optimal allocation and payments in each round. Notably, our results establish that in the dynamic case, the seller, on one hand, commits to a participation reward to incentivize truth-telling, and, on the other hand, charges an entry fee for every round. 
Moreover, the optimal allocation once more involves subsidization in favor of one group, where the extent of subsidization depends on the difference in future utilities for both the seller and buyers when allocating the item to one group versus the other. Finally, we present an approximation scheme to solve the recursive equations and determine an approximately optimal and fair allocation efficiently.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/94504"} +{"video_file": "bFoQXD7Uls_39027299.mp4", "openreview_id": "bFoQXD7Uls", "slideslive_id": 39027299, "venue": "nips2024", "title": "VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections", "status": "Poster", "keywords": "parameter efficient fine-tuning;memory efficient training", "tldr": "Memory efficient fine-tuning by compressing the saved intermediate activations. Complimentary to peft and generalises to pre-training.", "abstract": "Large language models (LLMs) have recently emerged as powerful tools for tackling many language-processing tasks. Despite their success, training and fine-tuning these models is still far too computationally and memory intensive. In this paper, we identify and characterise the important components needed for effective model convergence using gradient descent. In doing so we find that the intermediate activations used to implement backpropagation can be excessively compressed without incurring any degradation in performance. This result leads us to a cheap and memory-efficient algorithm for both fine-tuning and pre-training LLMs. The proposed algorithm simply divides the tokens up into smaller sub-tokens before projecting them onto a fixed 1-dimensional subspace during the forward pass. These features are then coarsely reconstructed during the backward pass to implement the update rules. We confirm the effectiveness of our algorithm as being complimentary to many state-of-the-art PEFT methods on the VTAB-1k fine-tuning benchmark. Furthermore, we outperform QLoRA for fine-tuning LLaMA and show competitive performance against other memory-efficient pre-training methods on the large-scale C4 dataset.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/94503"} +{"video_file": "bFrNPlWchg_39025760.mp4", "openreview_id": "bFrNPlWchg", "slideslive_id": 39025760, "venue": "nips2024", "title": "Extending Video Masked Autoencoders to 128 frames", "status": "Poster", "keywords": "MAE;long video understanding;masked autoencoder;adaptive masking", "tldr": "A recipe to scale masked auto-encoders to long videos.", "abstract": "Video understanding has witnessed significant progress with recent video foundation models demonstrating strong performance owing to self-supervised pre-training objectives; Masked Autoencoders (MAE) being the design of choice. Nevertheless, the majority of prior works that leverage MAE pre-training have focused on relatively short video representations (16 / 32 frames in length) largely due to hardware memory and compute limitations that scale poorly with video length due to the dense memory-intensive self-attention decoding. One natural strategy to address these challenges is to subsample tokens to reconstruct during decoding (or decoder masking). In this work, we propose an effective strategy for prioritizing tokens which allows training on longer video sequences (128 frames) and gets better performance than, more typical, random and uniform masking strategies. 
The core of our approach is an adaptive decoder masking strategy that prioritizes the most important tokens and uses quantized tokens as reconstruction objectives. Our adaptive strategy leverages a powerful MAGVIT-based tokenizer that jointly learns the tokens and their priority. We validate our design choices through exhaustive ablations and observe improved performance of the resulting long-video (128 frames) encoders over short-video (32 frames) counterparts. With our long-video masked autoencoder (LVMAE) strategy, we surpass state-of-the-art on Diving48 by 3.9 points and EPIC-Kitchens-100 verb classification by 2.5 points while relying on a simple core architecture and video-only pre-training (unlike some of the prior works that require millions of labeled video-text pairs or specialized encoders).", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94502"} +{"video_file": "bGhsbfyg3b_39026779.mp4", "openreview_id": "bGhsbfyg3b", "slideslive_id": 39026779, "venue": "nips2024", "title": "Opponent Modeling with In-context Search", "status": "Poster", "keywords": "Opponent Modeling;In-context Learning;Search", "tldr": "We propose OMIS, a novel opponent modeling approach based on in-context-learning-based pretraining and decision-time search, ensuring effectiveness and stability in adapting to unknown opponent policies, as proven theoretically and empirically.", "abstract": "Opponent modeling is a longstanding research topic aimed at enhancing decision-making by modeling information about opponents in multi-agent environments. However, existing approaches often face challenges such as having difficulty generalizing to unknown opponent policies and conducting unstable performance. To tackle these challenges, we propose a novel approach based on in-context learning and decision-time search named Opponent Modeling with In-context Search (OMIS). OMIS leverages in-context learning-based pretraining to train a Transformer model for decision-making. It consists of three in-context components: an actor learning best responses to opponent policies, an opponent imitator mimicking opponent actions, and a critic estimating state values. When testing in an environment that features unknown non-stationary opponent agents, OMIS uses pretrained in-context components for decision-time search to refine the actor's policy. Theoretically, we prove that under reasonable assumptions, OMIS without search converges in opponent policy recognition and has good generalization properties; with search, OMIS provides improvement guarantees, exhibiting performance stability. Empirically, in competitive, cooperative, and mixed environments, OMIS demonstrates more effective and stable adaptation to opponents than other approaches. 
See our project website at https://sites.google.com/view/nips2024-omis.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94501"} +{"video_file": "bHP9hX4SvI_39028133.mp4", "openreview_id": "bHP9hX4SvI", "slideslive_id": 39028133, "venue": "nips2024", "title": "Stability and Generalization of Asynchronous SGD: Sharper Bounds Beyond Lipschitz and Smoothness", "status": "Poster", "keywords": "Asynchronous SGD;algorithm stability;generalization error;excess generalization error", "tldr": "This study establishes sharper and broader stability and generalization bounds for the ASGD algorithm under much weaker assumptions.", "abstract": "Asynchronous stochastic gradient descent (ASGD) has evolved into an indispensable optimization algorithm for training modern large-scale distributed machine learning tasks. Therefore, it is imperative to explore the generalization performance of the ASGD algorithm. However, the existing results are either pessimistic and vacuous or restricted by strict assumptions that fail to reveal the intrinsic impact of asynchronous training on generalization. In this study, we establish sharper stability and generalization bounds for ASGD under much weaker assumptions. Firstly, this paper studies the on-average model stability of ASGD and provides a non-vacuous upper bound on the generalization error, without relying on the Lipschitz assumption. Furthermore, we investigate the excess generalization error of the ASGD algorithm, revealing the effects of asynchronous delay, model initialization, number of training samples and iterations on generalization performance. Secondly, for the first time, this study explores the generalization performance of ASGD in the non-smooth case. We replace smoothness with the much weaker H\u00f6lder continuous assumption and achieve similar generalization results as in the smooth case. Finally, we validate our theoretical findings by training numerous machine learning models, including convex problems and non-convex tasks in computer vision and natural language processing.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94500"} +{"video_file": "bHgkT0sUy6_39026131.mp4", "openreview_id": "bHgkT0sUy6", "slideslive_id": 39026131, "venue": "nips2024", "title": "Discovering Creative Behaviors through DUPLEX: Diverse Universal Features for Policy Exploration", "status": "Poster", "keywords": "Reinforcement Learning;Policy Diversity;Generalization", "tldr": "Novel algorithm for learning diverse near-optimal policies capable of generalizing within and out-of distribution", "abstract": "The ability to approach the same problem from different angles is a cornerstone of human intelligence that leads to robust solutions and effective adaptation to problem variations. In contrast, current RL methodologies tend to lead to policies that settle on a single solution to a given problem, making them brittle to problem variations. Replicating human flexibility in reinforcement learning agents is the challenge that we explore in this work. We tackle this challenge by extending state-of-the-art approaches to introduce DUPLEX, a method that explicitly defines a diversity objective with constraints and makes robust estimates of policies\u2019 expected behavior through successor features. The trained agents can (i) learn a diverse set of near-optimal policies in complex highly-dynamic environments and (ii) exhibit competitive and diverse skills in out-of-distribution (OOD) contexts. 
Empirical results indicate that DUPLEX improves over previous methods and successfully learns competitive driving styles in a hyper-realistic simulator (i.e., GranTurismo \u2122 7) as well as diverse and effective policies in several multi-context robotics MuJoCo simulations with OOD gravity forces and height limits. To the best of our knowledge, our method is the first to achieve diverse solutions in complex driving simulators and OOD robotic contexts. DUPLEX agents demonstrating diverse behaviors can be found at https://ai.sony/publications/Discovering-Creative-Behaviors-through-DUPLEX-Diverse-Universal-Features-for-Policy-Exploration/.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94499"} +{"video_file": "bIa03mAtxQ_39028776.mp4", "openreview_id": "bIa03mAtxQ", "slideslive_id": 39028776, "venue": "nips2024", "title": "Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization", "status": "Poster", "keywords": "interpretability;mixture of experts", "tldr": "muMoE layers perform factorized (fully-differentiable) computation that efficiently scales MoE's expert count, leading to increasingly fine-grained expert specialization", "abstract": "The Mixture of Experts (MoE) paradigm provides a powerful way to decompose dense layers into smaller, modular computations often more amenable to human interpretation, debugging, and editability. However, a major challenge lies in the computational cost of scaling the number of experts high enough to achieve fine-grained specialization. In this paper, we propose the Multilinear Mixture of Experts (\u03bcMoE) layer to address this, focusing on vision models. \u03bcMoE layers enable scalable expert specialization by performing an implicit computation on prohibitively large weight tensors entirely in factorized form. Consequently, \u03bcMoEs (1) avoid the restrictively high inference-time costs of dense MoEs, yet (2) do not inherit the training issues of the popular sparse MoEs' discrete (non-differentiable) expert routing. We present both qualitative and quantitative evidence that scaling \u03bcMoE layers when fine-tuning foundation models for vision tasks leads to more specialized experts at the class-level, further enabling manual bias correction in CelebA attribute classification. Finally, we show qualitative results demonstrating the expert specialism achieved when pre-training large GPT2 and MLP-Mixer models with parameter-matched \u03bcMoE blocks at every layer, maintaining comparable accuracy. Our code is available at: https://github.com/james-oldfield/muMoE.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/94496"} +{"video_file": "bKOZYBJE4Z_39026260.mp4", "openreview_id": "bKOZYBJE4Z", "slideslive_id": 39026260, "venue": "nips2024", "title": "Causal Contrastive Learning for Counterfactual Regression Over Time", "status": "Poster", "keywords": "Counterfactual Regression;Longitudinal Data;Contrastive Learning", "tldr": "We study the problem of counterfactual regression over large time horizons leveraging contrastive learning.", "abstract": "Estimating treatment effects over time holds significance in various domains, including precision medicine, epidemiology, economy, and marketing. This paper introduces a unique approach to counterfactual regression over time, emphasizing long-term predictions. 
Distinguishing itself from existing models like Causal Transformer, our approach highlights the efficacy of employing RNNs for long-term forecasting, complemented by Contrastive Predictive Coding (CPC) and Information Maximization (InfoMax). Emphasizing efficiency, we avoid the need for computationally expensive transformers. Leveraging CPC, our method captures long-term dependencies within time-varying confounders. Notably, recent models have disregarded the importance of invertible representation, compromising identification assumptions. To remedy this, we employ the InfoMax principle, maximizing a lower bound of mutual information between sequence data and its representation. Our method achieves state-of-the-art counterfactual estimation results using both synthetic and real-world data, marking the pioneering incorporation of Contrastive Predictive Encoding in causal inference.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/94494"} +{"video_file": "bMTn8KKrbq_39027413.mp4", "openreview_id": "bMTn8KKrbq", "slideslive_id": 39027413, "venue": "nips2024", "title": "Towards training digitally-tied analog blocks via hybrid gradient computation", "status": "Spotlight", "keywords": "implicit differentiation;equilibrium propagation;bilevel optimization;hopfield networks;analog computing;hardware-aware training;backprop;energy-based models;physical learning", "tldr": "We extend Equilibrium Propagation (EP) to a novel hardware-realistic model which comprises feedforward and energy-based blocks, resulting in chaining backprop and EP gradients backward through these blocks and a new SOTA performance on ImageNet32", "abstract": "Power efficiency is plateauing in the standard digital electronics realm such that new hardware, models, and algorithms are needed to reduce the costs of AI training. The combination of energy-based analog circuits and the Equilibrium Propagation (EP) algorithm constitutes a compelling alternative compute paradigm for gradient-based optimization of neural nets. Existing analog hardware accelerators, however, typically incorporate digital circuitry to sustain auxiliary non-weight-stationary operations, mitigate analog device imperfections, and leverage existing digital platforms. Such heterogeneous hardware lacks a supporting theoretical framework. In this work, we introduce \\emph{Feedforward-tied Energy-based Models} (ff-EBMs), a hybrid model comprised of feedforward and energy-based blocks housed on digital and analog circuits. We derive a novel algorithm to compute gradients end-to-end in ff-EBMs by backpropagating and ``eq-propagating'' through feedforward and energy-based parts respectively, enabling EP to be applied flexibly on realistic architectures. We experimentally demonstrate the effectiveness of this approach on ff-EBMs using Deep Hopfield Networks (DHNs) as energy-based blocks, and show that a standard DHN can be arbitrarily split into any uniform size while maintaining or improving performance with increases in simulation speed of up to four times. We then train ff-EBMs on ImageNet32 where we establish a new state-of-the-art performance for the EP literature (46 top-1 %). 
Our approach offers a principled, scalable, and incremental roadmap for the gradual integration of self-trainable analog computational primitives into existing digital accelerators.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94491"} +{"video_file": "bNDwOoxj6W_39028231.mp4", "openreview_id": "bNDwOoxj6W", "slideslive_id": 39028231, "venue": "nips2024", "title": "On the Complexity of Identification in Linear Structural Causal Models", "status": "Poster", "keywords": "causal inference;structural causal models;structural equational models;generic identifiability;existential theory over the reals", "tldr": "We prove a PSPACE upper bound for deciding generic identifiability and give the first hardness result for variants of identification.", "abstract": "Learning the unknown causal parameters of a linear structural causal model is a fundamental task in causal analysis. The task, known as the problem of identification, asks to estimate the parameters of the model from a combination of assumptions on the graphical structure of the model and observational data, represented as a non-causal covariance matrix. In this paper, we give a new sound and complete algorithm for generic identification which runs in polynomial space. By a standard simulation result, namely\nPSPACE\n\u2286\nEXP\n, this algorithm has exponential running time which vastly improves the state-of-the-art double exponential time method using a Gr\u00f6bner basis approach. The paper also presents evidence that parameter identification is computationally hard in general. In particular, we prove, that the task asking whether, for a given feasible correlation matrix, there are exactly one or two or more parameter sets explaining the observed matrix, is hard for\n\u2200\nR\n, the co-class of the existential theory of the reals. In particular, this problem is\ncoNP\n-hard. To our best knowledge, this is the first hardness result for some notion of identifiability.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/94488"} +{"video_file": "bO5bUxvH6m_39026614.mp4", "openreview_id": "bO5bUxvH6m", "slideslive_id": 39026614, "venue": "nips2024", "title": "Learning Discrete Concepts in Latent Hierarchical Models", "status": "Poster", "keywords": "representation learning;causal representation learning;generative models;causal discovery;hierarchical models", "tldr": "We develop identification theory for discrete latent hierarchical model from potentially high-dimensional, continuous data distributions.", "abstract": "Learning concepts from natural high-dimensional data (e.g., images) holds potential in building human-aligned and interpretable machine learning models. Despite its encouraging prospect, formalization and theoretical insights into this crucial task are still lacking. In this work, we formalize concepts as discrete latent causal variables that are related via a hierarchical causal model that encodes different abstraction levels of concepts embedded in high-dimensional data (e.g., a dog breed and its eye shapes in natural images). We formulate conditions to facilitate the identification of the proposed causal model, which reveals when learning such concepts from unsupervised data is possible. 
Our conditions permit complex causal hierarchical structures beyond latent trees and multi-level directed acyclic graphs in prior work and can handle high-dimensional, continuous observed variables, which is well-suited for unstructured data modalities such as images. We substantiate our theoretical claims with synthetic data experiments. Further, we discuss our theory's implications for understanding the underlying mechanisms of latent diffusion models and provide corresponding empirical evidence for our theoretical insights.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/94487"} +{"video_file": "bOS6WPV0Jf_39026128.mp4", "openreview_id": "bOS6WPV0Jf", "slideslive_id": 39026128, "venue": "nips2024", "title": "Bridging Multicalibration and Out-of-distribution Generalization Beyond Covariate Shift", "status": "Poster", "keywords": "Multicalibration;Robustness;Invariant Risk Minimization", "tldr": "We extend the current definition of multicalibration to tackle general distribution shift, particularly beyond covariate shift.", "abstract": "We establish a new model-agnostic optimization framework for out-of-distribution generalization via multicalibration, a criterion that ensures a predictor is calibrated across a family of overlapping groups. Multicalibration is shown to be associated with robustness of statistical inference under covariate shift. We further establish a link between multicalibration and robustness for prediction tasks both under and beyond covariate shift. We accomplish this by extending multicalibration to incorporate grouping functions that consider covariates and labels jointly. This leads to an equivalence of the extended multicalibration and invariance, an objective for robust learning in existence of concept shift. We show a linear structure of the grouping function class spanned by density ratios, resulting in a unifying framework for robust learning by designing specific grouping functions. We propose MC-Pseudolabel, a post-processing algorithm to achieve both extended multicalibration and out-of-distribution generalization. The algorithm, with lightweight hyperparameters and optimization through a series of supervised regression steps, achieves superior performance on real-world datasets with distribution shift.", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/94486"} +{"video_file": "bQMevGCYVM_39024566.mp4", "openreview_id": "bQMevGCYVM", "slideslive_id": 39024566, "venue": "nips2024", "title": "One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos", "status": "Poster", "keywords": "Video Object Segmentation;Multimodal Large Language Model;Reasoning Segmentation", "tldr": "We introduce VideoLISA, a video-based multimodal large language model designed to address the challenges of language-instructed reasoning segmentation in videos.", "abstract": "We introduce VideoLISA, a video-based multimodal large language model designed to tackle the problem of language-instructed reasoning segmentation in videos. Leveraging the reasoning capabilities and world knowledge of large language models, and augmented by the Segment Anything Model, VideoLISA generates temporally consistent segmentation masks in videos based on language instructions. Existing image-based methods, such as LISA, struggle with video tasks due to the additional temporal dimension, which requires temporal dynamic understanding and consistent segmentation across frames. 
VideoLISA addresses these challenges by integrating a Sparse Dense Sampling strategy into the video-LLM, which balances temporal context and spatial detail within computational constraints. Additionally, we propose a One-Token-Seg-All approach using a specially designed token, enabling the model to segment and track objects across multiple frames. Extensive evaluations on diverse benchmarks, including our newly introduced ReasonVOS benchmark, demonstrate VideoLISA's superior performance in video object segmentation tasks involving complex reasoning, temporal understanding, and object tracking. While optimized for videos, VideoLISA also shows promising generalization to image segmentation, revealing its potential as a unified foundation model for language-instructed object segmentation. Code and model will be available at: https://github.com/showlab/VideoLISA.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94482"} +{"video_file": "bUi2xECa7w_39027959.mp4", "openreview_id": "bUi2xECa7w", "slideslive_id": 39027959, "venue": "nips2024", "title": "The Fairness-Quality Tradeoff in Clustering", "status": "Poster", "keywords": "clustering;algorithmic-fairness;multiobjective-optimization", "tldr": "We propose new algorithms for recovering the Pareto Front of the clustering problem with fairness considerations.", "abstract": "Fairness in clustering has been considered extensively in the past; however, the trade-off between the two objectives --- e.g., can we sacrifice just a little in the quality of the clustering to significantly increase fairness, or vice-versa? --- has rarely been addressed. We introduce novel algorithms for tracing the complete trade-off curve, or Pareto front, between quality and fairness in clustering problems; that is, computing all clusterings that are not dominated in both objectives by other clusterings. Unlike previous work that deals with specific objectives for quality and fairness, we deal with all objectives for fairness and quality in two general classes encompassing most of the special cases addressed in previous work. Our algorithm must take exponential time in the worst case as the Parero front itself can be exponential. Even when the Pareto front is polynomial, our algorithm may take exponential time, and we prove that this is inevitable unless P = NP. However, we also present a new polynomial-time algorithm for computing the entire Pareto front when the cluster centers are fixed, and for perhaps the most natural fairness objective: minimizing the sum, over all clusters, of the imbalance between the two groups in each cluster.", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/94479"} +{"video_file": "bbGPoL1NLo_39027343.mp4", "openreview_id": "bbGPoL1NLo", "slideslive_id": 39027343, "venue": "nips2024", "title": "Challenges of Generating Structurally Diverse Graphs", "status": "Poster", "keywords": "diverse graphs;random graph model;graph generative model;graph distance", "tldr": "We formulate and investigate the problem of generating structurally diverse graphs that can serve as representative instances for various graph-related tasks.", "abstract": "For many graph-related problems, it can be essential to have a set of structurally diverse graphs. For instance, such graphs can be used for testing graph algorithms or their neural approximations. However, to the best of our knowledge, the problem of generating structurally diverse graphs has not been explored in the literature. 
In this paper, we fill this gap. First, we discuss how to define diversity for a set of graphs, why this task is non-trivial, and how one can choose a proper diversity measure. Then, for a given diversity measure, we propose and compare several algorithms optimizing it: we consider approaches based on standard random graph models, local graph optimization, genetic algorithms, and neural generative models. We show that it is possible to significantly improve diversity over basic random graph generators. Additionally, our analysis of generated graphs allows us to better understand the properties of graph distances: depending on which diversity measure is used for optimization, the obtained graphs may possess very different structural properties which gives a better understanding of the graph distance underlying the diversity measure.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/94476"} +{"video_file": "bcVLFQCOjc_39026952.mp4", "openreview_id": "bcVLFQCOjc", "slideslive_id": 39026952, "venue": "nips2024", "title": "DeTikZify: Synthesizing Graphics Programs for Scientific Figures and Sketches with TikZ", "status": "Spotlight", "keywords": "Vision Language Models;Code Generation;Image Understanding;Vector Graphics Generation", "tldr": "We train vision language models on TikZ code to automatically convert sketches and existing scientific figures into editable, semantics-preserving graphics programs.", "abstract": "Creating high-quality scientific figures can be time-consuming and challenging, even though sketching ideas on paper is relatively easy. Furthermore, recreating existing figures that are not stored in formats preserving semantic information is equally complex. To tackle this problem, we introduce DeTikZify, a novel multimodal language model that automatically synthesizes scientific figures as semantics-preserving TikZ graphics programs based on sketches and existing figures. To achieve this, we create three new datasets: DaTikZv2, the largest TikZ dataset to date, containing over 360k human-created TikZ graphics; SketchFig, a dataset that pairs hand-drawn sketches with their corresponding scientific figures; and MetaFig, a collection of diverse scientific figures and associated metadata. We train DeTikZify on MetaFig and DaTikZv2, along with synthetically generated sketches learned from SketchFig. We also introduce an MCTS-based inference algorithm that enables DeTikZify to iteratively refine its outputs without the need for additional training. Through both automatic and human evaluation, we demonstrate that DeTikZify outperforms commercial Claude 3 and GPT-4V in synthesizing TikZ programs, with the MCTS algorithm effectively boosting its performance. 
We make our code, models, and datasets publicly available.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94474"} +{"video_file": "bg6fVPVs3s_39027378.mp4", "openreview_id": "bg6fVPVs3s", "slideslive_id": 39027378, "venue": "nips2024", "title": "Guiding a Diffusion Model with a Bad Version of Itself", "status": "Oral", "keywords": "diffusion models;classifier-free guidance;guidance", "tldr": "Guiding a diffusion model with a smaller, less-trained version of itself leads to significantly improved sample and distribution quality.", "abstract": "The primary axes of interest in image-generating diffusion models are image quality, the amount of variation in the results, and how well the results align with a given condition, e.g., a class label or a text prompt. The popular classifier-free guidance approach uses an unconditional model to guide a conditional model, leading to simultaneously better prompt alignment and higher-quality images at the cost of reduced variation. These effects seem inherently entangled, and thus hard to control. We make the surprising observation that it is possible to obtain disentangled control over image quality without compromising the amount of variation by guiding generation using a smaller, less-trained version of the model itself rather than an unconditional model. This leads to significant improvements in ImageNet generation, setting record FIDs of 1.01 for 64x64 and 1.25 for 512x512, using publicly available networks. Furthermore, the method is also applicable to unconditional diffusion models, drastically improving their quality.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94471"} +{"video_file": "bhSfbjS6j9_39026171.mp4", "openreview_id": "bhSfbjS6j9", "slideslive_id": 39026171, "venue": "nips2024", "title": "Multistable Shape from Shading Emerges from Patch Diffusion", "status": "Spotlight", "keywords": "shape from shading;multistable perception;diffusion models;low-level vision", "tldr": "We present a bottom-up, patch-based diffusion model for monocular shape from shading that produces multimodal outputs, similar to multistable perception in humans.", "abstract": "Models for inferring monocular shape of surfaces with diffuse reflection---shape from shading---ought to produce distributions of outputs, because there are fundamental mathematical ambiguities of both continuous (e.g., bas-relief) and discrete (e.g., convex/concave) types that are also experienced by humans. Yet, the outputs of current models are limited to point estimates or tight distributions around single modes, which prevent them from capturing these effects. We introduce a model that reconstructs a multimodal distribution of shapes from a single shading image, which aligns with the human experience of multistable perception. We train a small denoising diffusion process to generate surface normal fields from\n16\n\u00d7\n16\npatches of synthetic images of everyday 3D objects. We deploy this model patch-wise at multiple scales, with guidance from inter-patch shape consistency constraints. Despite its relatively small parameter count and predominantly bottom-up structure, we show that multistable shape explanations emerge from this model for ambiguous test images that humans experience as being multistable. At the same time, the model produces veridical shape estimates for object-like images that include distinctive occluding contours and appear less ambiguous. 
This may inspire new architectures for stochastic 3D shape perception that are more efficient and better aligned with human experience.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94470"} +{"video_file": "bkUvKPKafQ_39026188.mp4", "openreview_id": "bkUvKPKafQ", "slideslive_id": 39026188, "venue": "nips2024", "title": "ChatQA: Surpassing GPT-4 on Conversational QA and RAG", "status": "Poster", "keywords": "large language models;retrieval-augmented generation;RAG", "tldr": "We introduce ChatQA, a suite of models that outperform GPT-4 on RAG and conversational QA.", "abstract": "In this work, we introduce ChatQA, a suite of models that outperform GPT-4 on retrieval-augmented generation (RAG) and conversational question answering (QA). To enhance generation, we propose a two-stage instruction tuning method that significantly boosts the performance of RAG. For effective retrieval, we introduce a dense retriever optimized for conversational QA, which yields results comparable to the alternative state-of-the-art query rewriting models, while substantially reducing deployment costs. We also present the ChatRAG Bench, which encompasses ten datasets covering comprehensive evaluations on RAG, table-related QA, arithmetic calculations, and scenarios involving unanswerable questions. Our ChatQA-1.0-70B (score: 54.14), built on Llama2, a weaker foundation model than GPT-4, can slightly outperform GPT-4-0613 (score: 53.90) and GPT-4-Turbo-2024-04-09 (score: 54.03) on the ChatRAG Bench, without relying on any synthetic data from OpenAI GPT models. Notably, Llama3-ChatQA-1.5-70B model surpasses the accuracy of GPT-4-Turbo-2024-04-09 by a margin. These results demonstrate the exceptional quality of the proposed ChatQA recipe. To advance research in this field, we open-sourced the model weights, instruction tuning data, ChatRAG Bench, and retriever for the community.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94465"} +{"video_file": "bmoS6Ggw4j_39025278.mp4", "openreview_id": "bmoS6Ggw4j", "slideslive_id": 39025278, "venue": "nips2024", "title": "Can Graph Learning Improve Planning in LLM-based Agents?", "status": "Poster", "keywords": "Task Planning;Language Agents;Graph Learning;Graph Neural Networks;Language Model", "tldr": "This paper presents an initial exploration into graph-learning-based approaches for task planning in LLM-based agents.", "abstract": "Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs). It aims to break down complex user requests in natural language into solvable sub-tasks, thereby fulfilling the original requests. In this context, the sub-tasks can be naturally viewed as a graph, where the nodes represent the sub-tasks, and the edges denote the dependencies among them. Consequently, task planning is a decision-making problem that involves selecting a connected path or subgraph within the corresponding graph and invoking it. In this paper, we explore graph learning-based methods for task planning, a direction that is orthogonal to the prevalent focus on prompt design. Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs, which is adeptly addressed by graph neural networks (GNNs). This theoretical insight led us to integrate GNNs with LLMs to enhance overall performance. 
Extensive experiments demonstrate that GNN-based methods surpass existing solutions even without training, and minimal training can further enhance their performance. The performance gain increases with a larger task graph size.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/94464"} +{"video_file": "bnzeOG0yey_39024567.mp4", "openreview_id": "bnzeOG0yey", "slideslive_id": 39024567, "venue": "nips2024", "title": "Revealing Distribution Discrepancy by Sampling Transfer in Unlabeled Data", "status": "Poster", "keywords": "Non-IID Data; Distribution Discrepancy; Density Ratio; Likelihood Ratio; Generalization", "tldr": "This paper introduces a method to evaluate distribution discrepancies between training and test distributions without needing class labels in the test samples.", "abstract": "There are increasing cases where the class labels of test samples are unavailable, creating a significant need and challenge in measuring the discrepancy between training and test distributions. This distribution discrepancy complicates the assessment of whether the hypothesis selected by an algorithm on training samples remains applicable to test samples. We present a novel approach called Importance Divergence (I-Div) to address the challenge of test label unavailability, enabling distribution discrepancy evaluation using only training samples. I-Div transfers the sampling patterns from the test distribution to the training distribution by estimating density and likelihood ratios. Specifically, the density ratio, informed by the selected hypothesis, is obtained by minimizing the Kullback-Leibler divergence between the actual and estimated input distributions. Simultaneously, the likelihood ratio is adjusted according to the density ratio by reducing the generalization error of the distribution discrepancy as transformed through the two ratios. Experimentally, I-Div accurately quantifies the distribution discrepancy, as evidenced by a wide range of complex data scenarios and tasks.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/94461"} +{"video_file": "btLLWaOrFs_39025594.mp4", "openreview_id": "btLLWaOrFs", "slideslive_id": 39025594, "venue": "nips2024", "title": "A Consistency-Aware Spot-Guided Transformer for Versatile and Hierarchical Point Cloud Registration", "status": "Poster", "keywords": "Point cloud registration;Rigid transformation estimation;Feature matching;Correspondence;Deep learning", "tldr": "We propose a novel consistency-aware spot-guided Transformer for versatile and hierarchical point cloud registration, achieving state-of-the-art accuracy, efficiency, and robustness on both outdoor and indoor benchmarks.", "abstract": "Deep learning-based feature matching has shown great superiority for point cloud registration in the absence of pose priors. Although coarse-to-fine matching approaches are prevalent, the coarse matching of existing methods is typically sparse and loose without consideration of geometric consistency, which makes the subsequent fine matching rely on ineffective optimal transport and hypothesis-and-selection methods for consistency. Therefore, these methods are neither efficient nor scalable for real-time applications such as odometry in robotics. 
To address these issues, we design a consistency-aware spot-guided Transformer (CAST), which incorporates a spot-guided cross-attention module to avoid interfering with irrelevant areas, and a consistency-aware self-attention module to enhance matching capabilities with geometrically consistent correspondences. Furthermore, a lightweight fine matching module for both sparse keypoints and dense features can estimate the transformation accurately. Extensive experiments on both outdoor LiDAR point cloud datasets and indoor RGBD point cloud datasets demonstrate that our method achieves state-of-the-art accuracy, efficiency, and robustness.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94458"} +{"video_file": "btuHzsAVsK_39027516.mp4", "openreview_id": "btuHzsAVsK", "slideslive_id": 39027516, "venue": "nips2024", "title": "Flow Snapshot Neurons in Action: Deep Neural Networks Generalize to Biological Motion Perception", "status": "Poster", "keywords": "biological motion perception;generalization;video action recognition", "tldr": "We propose an AI model for video action recognition, which can generalize to biological motion perception tasks.", "abstract": "Biological motion perception (BMP) refers to humans' ability to perceive and recognize the actions of living beings solely from their motion patterns, sometimes as minimal as those depicted on point-light displays. While humans excel at these tasks \\textit{without any prior training}, current AI models struggle with poor generalization performance. To close this research gap, we propose the Motion Perceiver (MP). MP solely relies on patch-level optical flows from video clips as inputs. During training, it learns prototypical flow snapshots through a competitive binding mechanism and integrates invariant motion representations to predict action labels for the given video. During inference, we evaluate the generalization ability of all AI models and humans on 62,656 video stimuli spanning 24 BMP conditions using point-light displays in neuroscience. Remarkably, MP outperforms all existing AI models with a maximum improvement of 29% in top-1 action recognition accuracy on these conditions. Moreover, we benchmark all AI models in point-light displays of two standard video datasets in computer vision. MP also demonstrates superior performance in these cases. More interestingly, via psychophysics experiments, we found that MP recognizes biological movements in a way that aligns with human behaviors. Our data and code are available at https://github.com/ZhangLab-DeepNeuroCogLab/MotionPerceiver.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/94457"} +{"video_file": "buqvMT3B4k_39025183.mp4", "openreview_id": "buqvMT3B4k", "slideslive_id": 39025183, "venue": "nips2024", "title": "Self-Labeling the Job Shop Scheduling Problem", "status": "Poster", "keywords": "Self-Labeling;Generative Model;Job Shop Scheduling;Traveling Salesman Problem", "tldr": "We propose a novel self-labeling improvement method for combinatorial problems that enables the training of state-of-the-art generative models without requiring optimality information or defining Markov Decision Processes.", "abstract": "This work proposes a self-supervised training strategy designed for combinatorial problems. An obstacle in applying supervised paradigms to such problems is the need for costly target solutions often produced with exact solvers. 
Inspired by semi- and self-supervised learning, we show that generative models can be trained by sampling multiple solutions and using the best one according to the problem objective as a pseudo-label. In this way, we iteratively improve the model generation capability by relying only on its self-supervision, eliminating the need for optimality information. We validate this Self-Labeling Improvement Method (SLIM) on the Job Shop Scheduling (JSP), a complex combinatorial problem that is receiving much attention from the neural combinatorial community. We propose a generative model based on the well-known Pointer Network and train it with SLIM. Experiments on popular benchmarks demonstrate the potential of this approach as the resulting models outperform constructive heuristics and state-of-the-art learning proposals for the JSP. Lastly, we prove the robustness of SLIM to various parameters and its generality by applying it to the Traveling Salesman Problem.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94456"} +{"video_file": "bxH6T1w1FW_39024651.mp4", "openreview_id": "bxH6T1w1FW", "slideslive_id": 39024651, "venue": "nips2024", "title": "Soft Superpixel Neighborhood Attention", "status": "Poster", "keywords": "attention module;superpixels;image denoising;deep learning", "tldr": "This paper proposes a novel deep learning attention module, named soft superpixel neighborhood attention (SNA), to reweight a local attention map according to the deformable boundaries of real-world images.", "abstract": "Images contain objects with deformable boundaries, such as the contours of a human face, yet attention operators act on square windows. This mixes features from perceptually unrelated regions, which can degrade the quality of a denoiser. One can exclude pixels using an estimate of perceptual groupings, such as superpixels, but the naive use of superpixels can be theoretically and empirically worse than standard attention. Using superpixel probabilities rather than superpixel assignments, this paper proposes soft superpixel neighborhood attention (SNA), which interpolates between the existing neighborhood attention and the naive superpixel neighborhood attention. This paper presents theoretical results showing SNA is the optimal denoiser under a latent superpixel model. SNA outperforms alternative local attention modules on image denoising, and we compare the superpixels learned from denoising with those learned with supervision.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94453"} +{"video_file": "bzuQtVDxv0_39025311.mp4", "openreview_id": "bzuQtVDxv0", "slideslive_id": 39025311, "venue": "nips2024", "title": "Splatter a Video: Video Gaussian Representation for Versatile Processing", "status": "Poster", "keywords": "Video Representation; Video Processing", "tldr": "A method to represent casual videos using 3D Gaussians without estimating camera pose", "abstract": "Video representation is a long-standing problem that is crucial for various downstream tasks, such as tracking, depth prediction, segmentation, view synthesis, and editing. However, current methods either struggle to model complex motions due to the absence of 3D structure or rely on implicit 3D representations that are ill-suited for manipulation tasks. To address these challenges, we introduce a novel explicit 3D representation\u2014video Gaussian representation\u2014that embeds a video into 3D Gaussians. 
Our proposed representation models video appearance in a 3D canonical space using explicit Gaussians as proxies and associates each Gaussian with 3D motions for video motion. This approach offers a more intrinsic and explicit representation than layered atlas or volumetric pixel matrices. To obtain such a representation, we distill 2D priors, such as optical flow and depth, from foundation models to regularize learning in this ill-posed setting. Extensive applications demonstrate the versatility of our new video representation. It has been proven effective in numerous video processing tasks, including tracking, consistent video depth and feature refinement, motion and appearance editing, and stereoscopic video generation.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94450"} +{"video_file": "c37x7CXZ2Y_39027495.mp4", "openreview_id": "c37x7CXZ2Y", "slideslive_id": 39027495, "venue": "nips2024", "title": "Estimating Heterogeneous Treatment Effects by Combining Weak Instruments and Observational Data", "status": "Poster", "keywords": "Causal inference;heterogeneous treatment effects;weak instrumental variables;unobserved confounding;data combination", "tldr": "We propose a method to estimate heterogeneous treatment effects by combining weak instrumental variables with observational data, enabling reliable treatment effect estimation in the presence of confounding bias.", "abstract": "Accurately predicting conditional average treatment effects (CATEs) is crucial in personalized medicine and digital platform analytics. Since the treatments of interest often cannot be directly randomized, observational data is leveraged to learn CATEs, but this approach can incur significant bias from unobserved confounding. One strategy to overcome these limitations is to leverage instrumental variables (IVs) as latent quasi-experiments, such as randomized intent-to-treat assignments or randomized product recommendations. This approach, on the other hand, can suffer from low compliance, i.e., IV weakness. Some subgroups may even exhibit zero compliance, meaning we cannot instrument for their CATEs at all. In this paper, we develop a novel approach to combine IV and observational data to enable reliable CATE estimation in the presence of unobserved confounding in the observational data and low compliance in the IV data, including no compliance for some subgroups. We propose a two-stage framework that first learns \\textit{biased} CATEs from the observational data, and then applies a compliance-weighted correction using IV data, effectively leveraging IV strength variability across covariates. We characterize the convergence rates of our method and validate its effectiveness through a simulation study. 
Additionally, we demonstrate its utility with real data by analyzing the heterogeneous effects of 401(k) plan participation on wealth.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/94449"} +{"video_file": "c4ElkpA0kh_39025919.mp4", "openreview_id": "c4ElkpA0kh", "slideslive_id": 39025919, "venue": "nips2024", "title": "Efficient $\\Phi$-Regret Minimization with Low-Degree Swap Deviations in Extensive-Form Games", "status": "Poster", "keywords": "swap regret;extensive-form games;low-degree deviations", "tldr": "We develop algorithms to efficiently minimize swap regret with respect to the set of low-degree deviations in extensive-form games.", "abstract": "Recent breakthrough results by Dagan, Daskalakis, Fishelson and Golowich [2023] and Peng and Rubinstein [2023] established an efficient algorithm attaining at most $\\epsilon$ swap regret over extensive-form strategy spaces of dimension $N$ in $N^{\\tilde{O}(1/\\epsilon)}$ rounds. On the other extreme, Farina and Pipis [2023] developed an efficient algorithm for minimizing the weaker notion of linear-swap regret in $\\mathrm{poly}(N)/\\epsilon^2$ rounds. In this paper, we develop efficient parameterized algorithms for regimes between these two extremes. We introduce the set of $k$-mediator deviations, which generalize the untimed communication deviations recently introduced by Zhang, Farina and Sandholm [2024] to the case of having multiple mediators, and we develop algorithms for minimizing the regret with respect to this set of deviations in $N^{O(k)}/\\epsilon^2$ rounds. Moreover, by relating $k$-mediator deviations to low-degree polynomials, we show that regret minimization against degree-$k$ polynomial swap deviations is achievable in $N^{O(kd)^3}/\\epsilon^2$ rounds, where $d$ is the depth of the game, assuming a constant branching factor. For a fixed degree $k$, this is polynomial for Bayesian games and quasipolynomial more broadly when $d = \\mathrm{polylog}(N)$---the usual balancedness assumption on the game tree. The first key ingredient in our approach is a relaxation of the usual notion of a fixed point required in the framework of Gordon, Greenwald and Marks [2008]. Namely, for a given deviation $\\phi$, we show that it suffices to compute what we refer to as a fixed point in expectation; that is, a distribution $\\pi$ such that $\\mathbb{E}_{x \\sim \\pi}[\\phi(x) - x] \\approx 0$. Unlike the problem of computing an actual (approximate) fixed point $x \\approx \\phi(x)$, which we show is PPAD-hard, there is a simple and efficient algorithm for finding a solution that satisfies our relaxed notion. As a byproduct, we provide, to our knowledge, the fastest algorithm for computing $\\epsilon$-correlated equilibria in normal-form games in the medium-precision regime, obviating the need to solve a linear system in every round. 
Our second main contribution is a characterization of the set of low-degree deviations, made possible through a connection to low-depth decisions trees from Boolean analysis.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/94446"} +{"video_file": "c7m1HahBNf_39025094.mp4", "openreview_id": "c7m1HahBNf", "slideslive_id": 39025094, "venue": "nips2024", "title": "Exploring Structured Semantic Priors Underlying Diffusion Score for Test-time Adaptation", "status": "Poster", "keywords": "test-time adaptation;diffusion models;generative models;classification;segmentation", "tldr": "We introduce DUSA, an effective method with improved efficiency for test-time adaptation that exploits structured semantic priors from pre-trained score-based diffusion models for enhancing pre-trained discriminative models.", "abstract": "Capitalizing on the complementary advantages of generative and discriminative models has always been a compelling vision in machine learning, backed by a growing body of research. This work discloses the hidden semantic structure within score-based generative models, unveiling their potential as effective discriminative priors. Inspired by our theoretical findings, we propose DUSA to exploit the structured semantic priors underlying diffusion score to facilitate the test-time adaptation of image classifiers or dense predictors. Notably, DUSA extracts knowledge from a single timestep of denoising diffusion, lifting the curse of Monte Carlo-based likelihood estimation over timesteps. We demonstrate the efficacy of our DUSA in adapting a wide variety of competitive pre-trained discriminative models on diverse test-time scenarios. Additionally, a thorough ablation study is conducted to dissect the pivotal elements in DUSA. Code is publicly available at https://github.com/BIT-DA/DUSA.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94444"} +{"video_file": "cDS8WxnMVP_39025948.mp4", "openreview_id": "cDS8WxnMVP", "slideslive_id": 39025948, "venue": "nips2024", "title": "Convolutions and More as Einsum: A Tensor Network Perspective with Advances for Second-Order Methods", "status": "Poster", "keywords": "Tensor networks;convolutions;KFAC;einsum;Second-order optimization", "tldr": "We introduce einsum implementations of many operations related to convolution that speed up the computation of second-order methods like KFAC", "abstract": "Despite their simple intuition, convolutions are more tedious to analyze than dense layers, which complicates the transfer of theoretical and algorithmic ideas to convolutions. We simplify convolutions by viewing them as tensor networks (TNs) that allow reasoning about the underlying tensor multiplications by drawing diagrams, manipulating them to perform function transformations like differentiation, and efficiently evaluating them with einsum. To demonstrate their simplicity and expressiveness, we derive diagrams of various autodiff operations and popular curvature approximations with full hyper-parameter support, batching, channel groups, and generalization to any convolution dimension. Further, we provide convolution-specific transformations based on the connectivity pattern which allow to simplify diagrams before evaluation. Finally, we probe performance. 
Our TN implementation accelerates a recently-proposed KFAC variant up to 4.5 x while removing the standard implementation's memory overhead, and enables new hardware-efficient tensor dropout for approximate backpropagation.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94434"} +{"video_file": "cEtExbAKYV_39028811.mp4", "openreview_id": "cEtExbAKYV", "slideslive_id": 39028811, "venue": "nips2024", "title": "StepbaQ: Stepping backward as Correction for Quantized Diffusion Models", "status": "Poster", "keywords": "Diffusion Model; Model Quantization", "tldr": "This work introduces a novel perspective that considers quantization error as a stepback in the denoising process; through a sampling step correction mechanism it improves the performance of quantized diffusion models.", "abstract": "Quantization of diffusion models has attracted considerable attention due to its potential to enable various applications on resource-constrained mobile devices. However, given the cumulative nature of quantization errors in quantized diffusion models, overall performance may still decline even with efforts to minimize quantization error at each sampling step. Recent studies have proposed several methods to address accumulated quantization error, yet these solutions often suffer from limited applicability due to their underlying assumptions or only partially resolve the issue due to an incomplete understanding. In this work, we introduce a novel perspective by conceptualizing quantization error as a \"stepback\" in the denoising process. We investigate how the accumulation of quantization error can distort the sampling trajectory, resulting in a notable decrease in model performance. To address this challenge, we introduce StepbaQ, a method that calibrates the sampling trajectory and counteracts the adverse effects of accumulated quantization error through a sampling step correction mechanism. Notably, StepbaQ relies solely on statistics of quantization error derived from a small calibration dataset, highlighting its strong applicability. Our experimental results demonstrate that StepbaQ can serve as a plug-and-play technique to enhance the performance of diffusion models quantized by off-the-shelf tools without modifying the quantization settings. For example, StepbaQ significantly improves the performance of the quantized SD v1.5 model by 7.30 in terms of FID on SDprompts dataset under the common W8A8 setting, and it enhances the performance of the quantized SDXL-Turbo model by 17.31 in terms of FID on SDprompts dataset under the challenging W4A8 setting.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94432"} +{"video_file": "cPzjN7KABv_39027871.mp4", "openreview_id": "cPzjN7KABv", "slideslive_id": 39027871, "venue": "nips2024", "title": "Private Geometric Median", "status": "Poster", "keywords": "Differential Privacy;Differentially Private Convex Optimization;Geometric Median", "tldr": "In this paper, we develop private algorithms for computing the geometric median of a dataset with an adaptive error guarantee.", "abstract": "In this paper, we study differentially private (DP) algorithms for computing the geometric median (GM) of a dataset: Given $n$ points, $x_1,\\dots,x_n$ in $\\mathbb{R}^d$, the goal is to find a point $\\theta$ that minimizes the sum of the Euclidean distances to these points, i.e., $\\sum_{i=1}^{n} \\lVert|\\theta - x_i\\rVert_2$. 
Off-the-shelf methods, such as DP-GD, require strong a priori knowledge locating the data within a ball of radius $R$, and the excess risk of the algorithm depends linearly on $R$. In this paper, we ask: can we design an efficient and private algorithm with an excess error guarantee that scales with the (unknown) radius containing the majority of the datapoints? Our main contribution is a pair of polynomial-time DP algorithms for the task of private GM with an excess error guarantee that scales with the effective diameter of the datapoints. Additionally, we propose an inefficient algorithm based on the inverse smooth sensitivity mechanism, which satisfies the more restrictive notion of pure DP. We complement our results with a lower bound and demonstrate the optimality of our polynomial-time algorithms in terms of sample complexity.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/94421"} +{"video_file": "cQoAgPBARc_39025844.mp4", "openreview_id": "cQoAgPBARc", "slideslive_id": 39025844, "venue": "nips2024", "title": "Improving Deep Reinforcement Learning by Reducing the Chain Effect of Value and Policy Churn", "status": "Poster", "keywords": "Reinforcement Learning;Deep Learning;Regularization and Optimization", "tldr": "Improving deep reinforcement learning by reducing undesired changes to out of batch data.", "abstract": "Deep neural networks provide Reinforcement Learning (RL) powerful function approximators to address large-scale decision-making problems. However, these approximators introduce challenges due to the non-stationary nature of RL training. One source of the challenges in RL is that output predictions can churn, leading to uncontrolled changes after each batch update for states not included in the batch. Although such a churn phenomenon exists in each step of network training, it remains under-explored on how churn occurs and impacts RL. In this work, we start by characterizing churn in a view of Generalized Policy Iteration with function approximation, and we discover a chain effect of churn that leads to a cycle where the churns in value estimation and policy improvement compound and bias the learning dynamics throughout the iteration. Further, we concretize the study and focus on the learning issues caused by the chain effect in different settings, including greedy action deviation in value-based methods, trust region violation in proximal policy optimization, and dual bias of policy value in actor-critic methods. We then propose a method to reduce the chain effect across different settings, called Churn Approximated ReductIoN (CHAIN), which can be easily plugged into most existing DRL algorithms. Our experiments demonstrate the effectiveness of our method in both reducing churn and improving learning performance across online and offline, value-based and policy-based RL settings.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94420"} +{"video_file": "cRLFvSOrzt_39025124.mp4", "openreview_id": "cRLFvSOrzt", "slideslive_id": 39025124, "venue": "nips2024", "title": "Credit Attribution and Stable Compression", "status": "Poster", "keywords": "Credit Attribution;Algorithmic Stability;Stable Sample Compression", "tldr": "We study credit attribution by machine learning algorithms via new relaxations of Differential Privacy that specifically weaken the stability guarantees for a designated subset of \nk\n datapoints.", "abstract": "Credit attribution is crucial across various fields. 
In academic research, proper citation acknowledges prior work and establishes original contributions. Similarly, in generative models, such as those trained on existing artworks or music, it is important to ensure that any generated content influenced by these works appropriately credits the original creators.\nWe study credit attribution by machine learning algorithms. We propose new definitions--relaxations of Differential Privacy--that weaken the stability guarantees for a designated subset of $k$ datapoints. These $k$ datapoints can be used non-stably with permission from their owners, potentially in exchange for compensation. Meanwhile, the remaining datapoints are guaranteed to have no significant influence on the algorithm's output.\nOur framework extends well-studied notions of stability, including Differential Privacy ($k=0$), differentially private learning with public data (where the $k$ public datapoints are fixed in advance), and stable sample compression (where the $k$ datapoints are selected adaptively by the algorithm). We examine the expressive power of these stability notions within the PAC learning framework, provide a comprehensive characterization of learnability for algorithms adhering to these principles, and propose directions and questions for future research.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94418"} +{"video_file": "cRlQHncjwT_39025885.mp4", "openreview_id": "cRlQHncjwT", "slideslive_id": 39025885, "venue": "nips2024", "title": "Generative Forests", "status": "Poster", "keywords": "tabular data;generative models;boosting;trees", "tldr": "A new class of tree-based generative models and a training algorithm with fast, boosting compliant induction and models that can be used for data generation, missing data imputation and density estimation.", "abstract": "We focus on generative AI for a type of data that still represent one of the most prevalent form of data: tabular data. We introduce a new powerful class of forest-based models fit for such tasks and a simple training algorithm with strong convergence guarantees in a boosting model that parallels that of the original weak / strong supervised learning setting. This algorithm can be implemented by a few tweaks to the most popular induction scheme for decision tree induction (i.e. supervised learning) with two classes. Experiments on the quality of generated data display substantial improvements compared to the state of the art. The losses our algorithm minimize and the structure of our models make them practical for related tasks that require fast estimation of a density given a generative model and an observation (even partially specified): such tasks include missing data imputation and density estimation. 
Additional experiments on these tasks reveal that our models can be notably good contenders to diverse state of the art methods, relying on models as diverse as (or mixing elements of) trees, neural nets, kernels or graphical models.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94417"} +{"video_file": "cSfxzCozPU_39028752.mp4", "openreview_id": "cSfxzCozPU", "slideslive_id": 39028752, "venue": "nips2024", "title": "Distributional regression: CRPS-error bounds for model fitting, model selection and convex aggregation", "status": "Poster", "keywords": "distributional regression; error bounds; concentration inequality; continuous rank probability score; empirical risk minimization", "tldr": "We provide concentration inequalities for the CRPS-error in distributional regression when empirical risk minimization is used for model fitting, model selection or convex aggregation.", "abstract": "Distributional regression aims at estimating the conditional distribution of a target variable given explanatory co-variates. It is a crucial tool for forecasting when a precise uncertainty quantification is required. A popular methodology consists in fitting a parametric model via empirical risk minimization where the risk is measured by the Continuous Rank Probability Score (CRPS). For independent and identically distributed observations, we provide a concentration result for the estimation error and an upper bound for its expectation. Furthermore, we consider model selection performed by minimization of the validation error and provide a concentration bound for the regret. A similar result is proved for convex aggregation of models. Finally, we show that our results may be applied to various models such as EMOS, distributional regression networks, distributional nearest neighbours or distributional random forests and we illustrate our findings on two data sets (QSAR aquatic toxicity and Airfoil self-noise).", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94415"} +{"video_file": "cUGf2HaNcs_39028007.mp4", "openreview_id": "cUGf2HaNcs", "slideslive_id": 39028007, "venue": "nips2024", "title": "Learning Truncated Causal History Model for Video Restoration", "status": "Poster", "keywords": "video restoration;low-level computer vision;motion understanding", "tldr": "We present a new video restoration framework to learn and model the history of the video frames.", "abstract": "One key challenge to video restoration is to model the transition dynamics of video frames governed by motion. In this work, we propose Turtle to learn the truncated causal history model for efficient and high-performing video restoration. Unlike traditional methods that process a range of contextual frames in parallel, Turtle enhances efficiency by storing and summarizing a truncated history of the input frame latent representation into an evolving historical state. This is achieved through a sophisticated similarity-based retrieval mechanism that implicitly accounts for inter-frame motion and alignment. The causal design in Turtle enables recurrence in inference through state-memorized historical features while allowing parallel training by sampling truncated video clips. 
We report new state-of-the-art results on a multitude of video restoration benchmark tasks, including video desnowing, nighttime video deraining, video raindrops and rain streak removal, video super-resolution, real-world and synthetic video deblurring, and blind video denoising while reducing the computational cost compared to existing best contextual methods on all these tasks.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94413"} +{"video_file": "cV2LKBdlz4_39025133.mp4", "openreview_id": "cV2LKBdlz4", "slideslive_id": 39025133, "venue": "nips2024", "title": "On Statistical Rates and Provably Efficient Criteria of Latent Diffusion Transformers (DiTs)", "status": "Poster", "keywords": "DiT;Score Matching;Diffusion Transformer;Efficiency;Universal Approximation;Convergence Analysis;Diffusion Generative Model;DDPM", "tldr": "We study the statistical and computational limits of Diffusion Transformers (DiTs).", "abstract": "We investigate the statistical and computational limits of latent Diffusion Transformers (DiTs) under the low-dimensional linear latent space assumption. Statistically, we study the universal approximation and sample complexity of the DiTs score function, as well as the distribution recovery property of the initial data. Specifically, under mild data assumptions, we derive an approximation error bound for the score network of latent DiTs, which is sub-linear in the latent space dimension. Additionally, we derive the corresponding sample complexity bound and show that the data distribution generated from the estimated score function converges toward a proximate area of the original one. Computationally, we characterize the hardness of both forward inference and backward computation of latent DiTs, assuming the Strong Exponential Time Hypothesis (SETH). For forward inference, we identify efficient criteria for all possible latent DiTs inference algorithms and showcase our theory by pushing the efficiency toward almost-linear time inference. For backward computation, we leverage the low-rank structure within the gradient computation of DiTs training for possible algorithmic speedup. Specifically, we show that such speedup achieves almost-linear time latent DiTs training by casting the DiTs gradient as a series of chained low-rank approximations with bounded error. Under the low-dimensional assumption, we show that the statistical rates and the computational efficiency are all dominated by the dimension of the subspace, suggesting that latent DiTs have the potential to bypass the challenges associated with the high dimensionality of initial data.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94411"} +{"video_file": "cYZibc2gKf_39027381.mp4", "openreview_id": "cYZibc2gKf", "slideslive_id": 39027381, "venue": "nips2024", "title": "Abstract Reward Processes: Leveraging State Abstraction for Consistent Off-Policy Evaluation", "status": "Poster", "keywords": "Off-Policy Evaluation;State Abstraction;Importance Sampling", "tldr": "Perform model-based OPE, but instead of trying to estimate a perfect model of the MDP, estimate an abstract model, customized to a policy, that preserves the performance of that policy and can be learnt from off-policy data.", "abstract": "Evaluating policies using off-policy data is crucial for applying reinforcement learning to real-world problems such as healthcare and autonomous driving. 
Previous methods for off-policy evaluation (OPE) generally suffer from high variance or irreducible bias, leading to unacceptably high prediction errors. In this work, we introduce STAR, a framework for OPE that encompasses a broad range of estimators -- which include existing OPE methods as special cases -- that achieve lower mean squared prediction errors. STAR leverages state abstraction to distill complex, potentially continuous problems into compact, discrete models which we call abstract reward processes (ARPs). Predictions from ARPs estimated from off-policy data are provably consistent (asymptotically correct). Rather than proposing a specific estimator, we present a new framework for OPE and empirically demonstrate that estimators within STAR outperform existing methods. The best STAR estimator outperforms baselines in all twelve cases studied, and even the median STAR estimator surpasses the baselines in seven out of the twelve cases.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94409"} +{"video_file": "ccQ4fmwLDb_39026821.mp4", "openreview_id": "ccQ4fmwLDb", "slideslive_id": 39026821, "venue": "nips2024", "title": "BELM: Bidirectional Explicit Linear Multi-step Sampler for Exact Inversion in Diffusion Models", "status": "Poster", "keywords": "diffusion model; exact inversion; ODE sampler", "tldr": "We propose a novel generic eacxt inversion samplers of diffusion models and derive the variant with optimal accuracy .", "abstract": "The inversion of diffusion model sampling, which aims to find the corresponding initial noise of a sample, plays a critical role in various tasks. Recently, several heuristic exact inversion samplers have been proposed to address the inexact inversion issue in a training-free manner. However, the theoretical properties of these heuristic samplers remain unknown and they often exhibit mediocre sampling quality. In this paper, we introduce a generic formulation, \\emph{Bidirectional Explicit Linear Multi-step} (BELM) samplers, of the exact inversion samplers, which includes all previously proposed heuristic exact inversion samplers as special cases. The BELM formulation is derived from the variable-stepsize-variable-formula linear multi-step method via integrating a bidirectional explicit constraint. We highlight this bidirectional explicit constraint is the key of mathematically exact inversion. We systematically investigate the Local Truncation Error (LTE) within the BELM framework and show that the existing heuristic designs of exact inversion samplers yield sub-optimal LTE. Consequently, we propose the Optimal BELM (O-BELM) sampler through the LTE minimization approach. We conduct additional analysis to substantiate the theoretical stability and global convergence property of the proposed optimal sampler. Comprehensive experiments demonstrate our O-BELM sampler establishes the exact inversion property while achieving high-quality sampling. 
Additional experiments in image editing and image interpolation highlight the extensive potential of applying O-BELM in varying applications.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94406"} +{"video_file": "cgiOX8lfwG_39027086.mp4", "openreview_id": "cgiOX8lfwG", "slideslive_id": 39027086, "venue": "nips2024", "title": "Discretely beyond $1/e$: Guided Combinatorial Algortihms for Submodular Maximization", "status": "Poster", "keywords": "combinatorial algorithms;deterministic algorithms;submodular optimization", "tldr": "First combinatorial algorithms for submodular maximization with ratio better than 1/e \u2248 0.367", "abstract": "For constrained, not necessarily monotone submodular maximization, all known approximation algorithms with ratio greater than $1/e$ require continuous ideas, such as queries to the multilinear extension of a submodular function and its gradient, which are typically expensive to simulate with the original set function. For combinatorial algorithms, the best known approximation ratios for both size and matroid constraint are obtained by a simple randomized greedy algorithm of Buchbinder et al. [9]: $1/e \approx 0.367$ for size constraint and $0.281$ for the matroid constraint in $O(kn)$ queries, where $k$ is the rank of the matroid. In this work, we develop the first combinatorial algorithms to break the $1/e$ barrier: we obtain approximation ratio of $0.385$ in $O(kn)$ queries to the submodular set function for size constraint, and $0.305$ for a general matroid constraint. These are achieved by guiding the randomized greedy algorithm with a fast local search algorithm. Further, we develop deterministic versions of these algorithms, maintaining the same ratio and asymptotic time complexity. Finally, we develop a deterministic, nearly linear time algorithm with ratio $0.377$.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94400"} +{"video_file": "ciwOcmo8CC_39024997.mp4", "openreview_id": "ciwOcmo8CC", "slideslive_id": 39024997, "venue": "nips2024", "title": "IF-Font: Ideographic Description Sequence-Following Font Generation", "status": "Poster", "keywords": "Font Generation;Vector Quantization;Ideographic Description Sequence;Multimodal", "tldr": "IF-Font uses Ideographic Description Sequence as the semantic condition instead of the content image, outperforming all state-of-the-art methods in the few-shot font generation task.", "abstract": "Few-shot font generation (FFG) aims to learn the target style from a limited number of reference glyphs and generate the remaining glyphs in the target font. Previous works focus on disentangling the content and style features of glyphs, combining the content features of the source glyph with the style features of the reference glyph to generate new glyphs. However, the disentanglement is challenging due to the complexity of glyphs, often resulting in glyphs that are influenced by the style of the source glyph and prone to artifacts. We propose IF-Font, a novel paradigm which incorporates Ideographic Description Sequence (IDS) instead of the source glyph to control the semantics of generated glyphs. To achieve this, we quantize the reference glyphs into tokens, and model the token distribution of target glyphs using corresponding IDS and reference tokens. The proposed method excels in synthesizing glyphs with neat and correct strokes, and enables the creation of new glyphs based on provided IDS. 
Extensive experiments demonstrate that our method greatly outperforms state-of-the-art methods in both one-shot and few-shot settings, particularly when the target styles differ significantly from the training font styles. The code is available at https://github.com/Stareven233/IF-Font.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94397"} +{"video_file": "clBiQUgj4w_39028295.mp4", "openreview_id": "clBiQUgj4w", "slideslive_id": 39028295, "venue": "nips2024", "title": "CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns", "status": "Spotlight", "keywords": "Time Series Forecasting;Time Series Analysis;Machine Learning", "tldr": "This paper pioneers the exploration of explicitly modeling periodic patterns in time-series data to enhance the accuracy of long-term time series forecasting (LTSF) tasks.", "abstract": "The stable periodic patterns present in time series data serve as the foundation for conducting long-horizon forecasts. In this paper, we pioneer the exploration of explicitly modeling this periodicity to enhance the performance of models in long-term time series forecasting (LTSF) tasks. Specifically, we introduce the Residual Cycle Forecasting (RCF) technique, which utilizes learnable recurrent cycles to model the inherent periodic patterns within sequences, and then performs predictions on the residual components of the modeled cycles. Combining RCF with a Linear layer or a shallow MLP forms the simple yet powerful method proposed in this paper, called CycleNet. CycleNet achieves state-of-the-art prediction accuracy in multiple domains including electricity, weather, and energy, while offering significant efficiency advantages by reducing over 90% of the required parameter quantity. Furthermore, as a novel plug-and-play technique, the RCF can also significantly improve the prediction accuracy of existing models, including PatchTST and iTransformer. The source code is available at: https://github.com/ACAT-SCUT/CycleNet.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94391"} +{"video_file": "clDGHpx2la_39027334.mp4", "openreview_id": "clDGHpx2la", "slideslive_id": 39027334, "venue": "nips2024", "title": "InversionView: A General-Purpose Method for Reading Information from Neural Activations", "status": "Poster", "keywords": "interpretability;explainability;mechanistic interpretability", "tldr": "We develop a method that reads out information from neural activations.", "abstract": "The inner workings of neural networks can be better understood if we can fully decipher the information encoded in neural activations. In this paper, we argue that this information is embodied by the subset of inputs that give rise to similar activations. We propose InversionView, which allows us to practically inspect this subset by sampling from a trained decoder model conditioned on activations. This helps uncover the information content of activation vectors, and facilitates understanding of the algorithms implemented by transformer models. We present four case studies where we investigate models ranging from small transformers to GPT-2. In these studies, we show that InversionView can reveal clear information contained in activations, including basic information about tokens appearing in the context, as well as more complex information, such as the count of certain tokens, their relative positions, and abstract knowledge about the subject. 
We also provide causally verified circuits to confirm the decoded information.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/94390"} +{"video_file": "clQdPtooRD_39027382.mp4", "openreview_id": "clQdPtooRD", "slideslive_id": 39027382, "venue": "nips2024", "title": "Oja's Algorithm for Streaming Sparse PCA", "status": "Poster", "keywords": "Streaming PCA;Oja's Algorithm;Sparse PCA;Support Recovery;Entrywise Bounds", "tldr": "We give a O(d) space, O(nd) time algorithm for Streaming Sparse PCA via a novel analysis of Oja's Algorithm followed by thresholding", "abstract": "Oja's algorithm for Streaming Principal Component Analysis (PCA) for $n$ data-points in a $d$ dimensional space achieves the same sin-squared error $O(r_{\mathrm{eff}}/n)$ as the offline algorithm in $O(d)$ space and $O(nd)$ time and a single pass through the datapoints. Here $r_{\mathrm{eff}}$ is the effective rank (ratio of the trace and the principal eigenvalue of the population covariance matrix $\Sigma$). Under this computational budget, we consider the problem of sparse PCA, where the principal eigenvector of $\Sigma$ is $s$-sparse, and $r_{\mathrm{eff}}$ can be large. In this setting, to our knowledge, there are no known single-pass algorithms that achieve the minimax error bound in $O(d)$ space and $O(nd)$ time without either requiring strong initialization conditions or assuming further structure (e.g., spiked) of the covariance matrix. We show that a simple single-pass procedure that thresholds the output of Oja's algorithm (the Oja vector) can achieve the minimax error bound under some regularity conditions in $O(d)$ space and $O(nd)$ time.\nWe present a nontrivial and novel analysis of the entries of the unnormalized Oja vector, which involves the projection of a product of independent random matrices on a random initial vector. This is completely different from previous analyses of Oja's algorithm and matrix products, which have been done when the $r_{\mathrm{eff}}$ is bounded.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94389"} +{"video_file": "cmSNX47aEH_39027709.mp4", "openreview_id": "cmSNX47aEH", "slideslive_id": 39027709, "venue": "nips2024", "title": "DeiSAM: Segment Anything with Deictic Prompting", "status": "Poster", "keywords": "neuro-symbolic reasoning;object segmentation;deictic representation;large language models;differentiable logic programming", "tldr": "Segment objects from complex textual prompts using neuro-symbolic reasoning with large-scale neural networks.", "abstract": "Large-scale, pre-trained neural networks have demonstrated strong capabilities in various tasks, including zero-shot image segmentation. To identify concrete objects in complex scenes, humans instinctively rely on deictic descriptions in natural language, i.e., referring to something depending on the context such as \"The object that is on the desk and behind the cup.\". However, deep learning approaches cannot reliably interpret such deictic representations due to their lack of reasoning capabilities in complex scenarios. To remedy this issue, we propose DeiSAM \u2014 a combination of large pre-trained neural networks with differentiable logic reasoners \u2014 for deictic promptable segmentation. Given a complex, textual segmentation description, DeiSAM leverages Large Language Models (LLMs) to generate first-order logic rules and performs differentiable forward reasoning on generated scene graphs. 
Subsequently, DeiSAM segments objects by matching them to the logically inferred image regions. As part of our evaluation, we propose the Deictic Visual Genome (DeiVG) dataset, containing paired visual input and complex, deictic textual prompts. Our empirical results demonstrate that DeiSAM is a substantial improvement over purely data-driven baselines for deictic promptable segmentation.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94385"} +{"video_file": "cnpR4e2HCQ_39026081.mp4", "openreview_id": "cnpR4e2HCQ", "slideslive_id": 39026081, "venue": "nips2024", "title": "Community Detection Guarantees using Embeddings Learned by Node2Vec", "status": "Poster", "keywords": "Network Embedding;Node2Vec;Community Detection;Networks", "tldr": "We show theoretical guarantees for community detection using node2vec embeddings of networks.", "abstract": "Embedding the nodes of a large network into an Euclidean space is a common objective in modern machine learning, with a variety of tools available. These embeddings can then be used as features for tasks such as community detection/node clustering or link prediction, where they achieve state of the art performance. With the exception of spectral clustering methods, there is little theoretical understanding for commonly used approaches to learning embeddings. In this work we examine the theoretical properties of the embeddings learned by node2vec. Our main result shows that the use of k-means clustering on the embedding vectors produced by node2vec gives weakly consistent community recovery for the nodes in (degree corrected) stochastic block models. We also discuss the use of these embeddings for node and link prediction tasks. We demonstrate this result empirically for both real and simulated networks, and examine how this relates to other embedding tools for network data.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/94384"} +{"video_file": "cpklMJqZDE_39025934.mp4", "openreview_id": "cpklMJqZDE", "slideslive_id": 39025934, "venue": "nips2024", "title": "Unrolled denoising networks provably learn to perform optimal Bayesian inference", "status": "Poster", "keywords": "algorithm unrolling;approximate message passing (AMP);inverse problems;denoising", "tldr": "We show that an algorithmically inspired architecture similar to reverse diffusion denoising can provably learn Bayes optimal inference with gradient descent, and we show the score-matching objective is learnable in one-dimension along the way.", "abstract": "Much of Bayesian inference centers around the design of estimators for inverse problems which are optimal assuming the data comes from a known prior. But what do these optimality guarantees mean if the prior is unknown? In recent years, algorithm unrolling has emerged as deep learning's answer to this age-old question: design a neural network whose layers can in principle simulate iterations of inference algorithms and train on data generated by the unknown prior. Despite its empirical success, however, it has remained unclear whether this method can provably recover the performance of its optimal, prior-aware counterparts.\nIn this work, we prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP). For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network approximately converge to the same denoisers used in Bayes AMP. 
We also provide extensive numerical experiments for compressed sensing and rank-one matrix estimation demonstrating the advantages of our unrolled architecture --- in addition to being able to obliviously adapt to general priors, it exhibits improvements over Bayes AMP in more general settings of low dimensions, non-Gaussian designs, and non-product priors.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94381"} +{"video_file": "crlvDzDPgM_39027817.mp4", "openreview_id": "crlvDzDPgM", "slideslive_id": 39027817, "venue": "nips2024", "title": "Customized Subgraph Selection and Encoding for Drug-drug Interaction Prediction", "status": "Poster", "keywords": "Drug-drug Interaction Prediction;Neural Architecture Search;Graph Neural Networks", "tldr": "We propose to efficiently and robustly search for data-specific subgraph-based pipeline components for drug-drug interaction prediction.", "abstract": "Subgraph-based methods have proven to be effective and interpretable in predicting drug-drug interactions (DDIs), which are essential for medical practice and drug development. Subgraph selection and encoding are critical stages in these methods, yet customizing these components remains underexplored due to the high cost of manual adjustments. In this study, inspired by the success of neural architecture search (NAS), we propose a method to search for data-specific components within subgraph-based frameworks. Specifically, we introduce extensive subgraph selection and encoding spaces that account for the diverse contexts of drug interactions in DDI prediction. To address the challenge of large search spaces and high sampling costs, we design a relaxation mechanism that uses an approximation strategy to efficiently explore optimal subgraph configurations. This approach allows for robust exploration of the search space. Extensive experiments demonstrate the effectiveness and superiority of the proposed method, with the discovered subgraphs and encoding functions highlighting the model\u2019s adaptability.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/94377"} +{"video_file": "cs1HISJkLU_39024631.mp4", "openreview_id": "cs1HISJkLU", "slideslive_id": 39024631, "venue": "nips2024", "title": "A Versatile Diffusion Transformer with Mixture of Noise Levels for Audiovisual Generation", "status": "Poster", "keywords": "multimodal diffusion;multimodal generation;diffusion timestep;latent diffusion models", "tldr": "We present a diffusion framework that can learn a wide range of conditional distributions within audiovisual data using a single model.", "abstract": "Training diffusion models for audiovisual sequences allows for a range of generation tasks by learning conditional distributions of various input-output combinations of the two modalities. Nevertheless, this strategy often requires training a separate model for each task which is expensive. Here, we propose a novel training approach to effectively learn arbitrary conditional distributions in the audiovisual space. Our key contribution lies in how we parameterize the diffusion timestep in the forward diffusion process. Instead of the standard fixed diffusion timestep, we propose applying variable diffusion timesteps across the temporal dimension and across modalities of the inputs. This formulation offers flexibility to introduce variable noise levels for various portions of the input, hence the term mixture of noise levels. 
We propose a transformer-based audiovisual latent diffusion model and show that it can be trained in a task-agnostic fashion using our approach to enable a variety of audiovisual generation tasks at inference time. Experiments demonstrate the versatility of our method in tackling cross-modal and multimodal interpolation tasks in the audiovisual space. Notably, our proposed approach surpasses baselines in generating temporally and perceptually consistent samples conditioned on the input. Project page: neurips13025.github.io", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94376"} +{"video_file": "ctxtY3VGGq_39024618.mp4", "openreview_id": "ctxtY3VGGq", "slideslive_id": 39024618, "venue": "nips2024", "title": "Online Weighted Paging with Unknown Weights", "status": "Poster", "keywords": "Online Learning;Online Weighted Paging;Online Algorithms;Competitive Ratio;Regret;Theory", "tldr": "We study the Online Weighted Paging problem, where the weights are unknown. We provide algorithm and analyze it's performance in terms of competitive ratio and regret.", "abstract": "Online paging is a fundamental problem in the field of online algorithms, in which one maintains a cache of $k$ slots as requests for fetching pages arrive online. In the weighted variant of this problem, each page has its own fetching cost; a substantial line of work on this problem culminated in an (optimal) $O(\log k)$-competitive randomized algorithm, due to Bansal, Buchbinder and Naor (FOCS'07).\nExisting work for weighted paging assumes that page weights are known in advance, which is not always the case in practice. For example, in multi-level caching architectures, the expected cost of fetching a memory block is a function of its probability of being in a mid-level cache rather than the main memory. This complex property cannot be predicted in advance; over time, however, one may glean information about page weights through sampling their fetching cost multiple times.\nWe present the first algorithm for online weighted paging that does not know page weights in advance, but rather learns from weight samples. In terms of techniques, this requires providing (integral) samples to a fractional solver, requiring a delicate interface between this solver and the randomized rounding scheme; we believe that our work can inspire online algorithms to other problems that involve cost sampling.", "primary_area": "online_learning", "site": "https://neurips.cc/virtual/2024/poster/94374"} +{"video_file": "cuWsR25bbI_39027457.mp4", "openreview_id": "cuWsR25bbI", "slideslive_id": 39027457, "venue": "nips2024", "title": "An exactly solvable model for emergence and scaling laws in the multitask sparse parity problem", "status": "Poster", "keywords": "science of deep learning;emergence;skills;scaling laws", "tldr": "We can predict emergence in neural networks using a simple model that also demonstrates scaling laws.", "abstract": "Deep learning models can exhibit what appears to be a sudden ability to solve a new problem as training time, training data, or model size increases, a phenomenon known as emergence. In this paper, we present a framework where each new ability (a skill) is represented as a basis function. We solve a simple multi-linear model in this skill-basis, finding analytic expressions for the emergence of new skills, as well as for scaling laws of the loss with training time, data size, model size, and optimal compute. 
We compare our detailed calculations to direct simulations of a two-layer neural network trained on multitask sparse parity, where the tasks in the dataset are distributed according to a power-law. Our simple model captures, using a single fit parameter, the sigmoidal emergence of multiple new skills as training time, data size or model size increases in the neural network.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94372"} +{"video_file": "cw5mgd71jW_39028223.mp4", "openreview_id": "cw5mgd71jW", "slideslive_id": 39028223, "venue": "nips2024", "title": "Many-shot Jailbreaking", "status": "Poster", "keywords": "large language models;long context;robustness;jailbreaks;in-context learning", "tldr": "We investigate a simple yet effective long-context jailbreak, study the corresponding scaling laws and evaluate some mitigations against it.", "abstract": "We investigate a family of simple long-context attacks on large language models: prompting with hundreds of demonstrations of undesirable behavior. This attack is newly feasible with the larger context windows recently deployed by language model providers like Google DeepMind, OpenAI and Anthropic. We find that in diverse, realistic circumstances, the effectiveness of this attack follows a power law, up to hundreds of shots. We demonstrate the success of this attack on the most widely used state-of-the-art closed-weight models, and across various tasks. Our results suggest very long contexts present a rich new attack surface for LLMs.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/94370"} +{"video_file": "cyJxphdw3B_39025452.mp4", "openreview_id": "cyJxphdw3B", "slideslive_id": 39025452, "venue": "nips2024", "title": "Can neural operators always be continuously discretized?", "status": "Poster", "keywords": "Neural Operators;Invertibility;Category Theory", "tldr": "Discretizing injective neural operators may cause them to lose injectivity, even with very flexible discretization methods.", "abstract": "In this work we consider the problem of discretization of neural operators in a general setting. Using category theory, we give a no-go theorem that shows that diffeomorphisms between Hilbert spaces may not admit any continuous approximations by diffeomorphisms on finite spaces, even if the discretization is non-linear. This shows how infinite-dimensional Hilbert spaces and finite-dimensional vector spaces fundamentally differ. A key take-away is that to obtain discretization invariance, considerable effort is needed to ensure that finite-dimensional approximations of neural operator converge not only as sequences of functions, but that their representations converge in a suitable sense as well. With this perspective, we give several positive results. We first show that strongly monotone diffeomorphism operators always admit finite-dimensional strongly monotone diffeomorphisms. Next we show that bilipschitz neural operators may always be written via the repeated alternating composition of strongly monotone neural operators and invertible linear maps. We also show that such operators may be inverted locally via iteration provided that such inverse exists. 
Finally, we conclude by showing how our framework may be used `out of the box' to prove quantitative approximation results for discretization of neural operators.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/94369"} +{"video_file": "cyv0LkIaoH_39024511.mp4", "openreview_id": "cyv0LkIaoH", "slideslive_id": 39024511, "venue": "nips2024", "title": "Self-Consuming Generative Models with Curated Data Provably Optimize Human Preferences", "status": "Spotlight", "keywords": "retraining;curating;generative model;self-consuming", "tldr": "We theoretically show that iteratively retraining a generative model on its own generated samples with human curation optimizes human preferences", "abstract": "The rapid progress in generative models has resulted in impressive leaps in generation quality, blurring the lines between synthetic and real data. Web-scale datasets are now prone to the inevitable contamination by synthetic data, directly impacting the training of future generated models. Already, some theoretical results on self-consuming generative models (a.k.a., iterative retraining) have emerged in the literature, showcasing that either model collapse or stability could be possible depending on the fraction of generated data used at each retraining step. However, in practice, synthetic data is often subject to human feedback and curated by users before being used and uploaded online. For instance, many interfaces of popular text-to-image generative models, such as Stable Diffusion or Midjourney, produce several variations of an image for a given query which can eventually be curated by the users. In this paper, we theoretically study the impact of data curation on iterated retraining of generative models and show that it can be seen as an implicit preference optimization mechanism. However, unlike standard preference optimization, the generative model does not have access to the reward function or negative samples needed for pairwise comparisons. Moreover, our study doesn't require access to the density function, only to samples. We prove that, if the data is curated according to a reward model, then the expected reward of the iterative retraining procedure is maximized. We further provide theoretical results on the stability of the retraining loop when using a positive fraction of real data at each step. Finally, we conduct illustrative experiments on both synthetic datasets and on CIFAR10 showing that such a procedure amplifies biases of the reward model.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/94368"} +{"video_file": "d226uyWYUo_39026371.mp4", "openreview_id": "d226uyWYUo", "slideslive_id": 39026371, "venue": "nips2024", "title": "Knowledge Graph Completion by Intermediate Variables Regularization", "status": "Poster", "keywords": "Knowledge Graph Completion;Tensor Decomposition;Regularization", "tldr": "We propose a regularization to alleviate the overfitting problem of tensor decomposition based models for knowledge graph completion.", "abstract": "Knowledge graph completion (KGC) can be framed as a 3-order binary tensor completion task. Tensor decomposition-based (TDB) models have demonstrated strong performance in KGC. In this paper, we provide a summary of existing TDB models and derive a general form for them, serving as a foundation for further exploration of TDB models. Despite the expressiveness of TDB models, they are prone to overfitting. 
Existing regularization methods merely minimize the norms of embeddings to regularize the model, leading to suboptimal performance. Therefore, we propose a novel regularization method for TDB models that addresses this limitation. The regularization is applicable to most TDB models and ensures tractable computation. Our method minimizes the norms of intermediate variables involved in the different ways of computing the predicted tensor. To support our regularization method, we provide a theoretical analysis that proves its effect in promoting low trace norm of the predicted tensor to reduce overfitting. Finally, we conduct experiments to verify the effectiveness of our regularization technique as well as the reliability of our theoretical analysis. The code is available at https://github.com/changyi7231/IVR.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/94366"}
{"video_file": "d2lPM1Aczs_39025618.mp4", "openreview_id": "d2lPM1Aczs", "slideslive_id": 39025618, "venue": "nips2024", "title": "RankUp: Boosting Semi-Supervised Regression with an Auxiliary Ranking Classifier", "status": "Poster", "keywords": "Semi-Supervised Learning;Weakly Supervised Learning;Regression", "tldr": "RankUp adapts semi-supervised classification techniques for regression by converting regression to a ranking problem.", "abstract": "State-of-the-art (SOTA) semi-supervised learning techniques, such as FixMatch and its variants, have demonstrated impressive performance in classification tasks. However, these methods are not directly applicable to regression tasks. In this paper, we present RankUp, a simple yet effective approach that adapts existing semi-supervised classification techniques to enhance the performance of regression tasks. RankUp achieves this by converting the original regression task into a ranking problem and training it concurrently with the original regression objective. This auxiliary ranking classifier outputs a classification result, thus enabling integration with existing semi-supervised classification methods. Moreover, we introduce regression distribution alignment (RDA), a complementary technique that further enhances RankUp's performance by refining pseudo-labels through distribution alignment. Despite its simplicity, RankUp, with or without RDA, achieves SOTA results across a range of regression benchmarks, including computer vision, audio, and natural language processing tasks. Our code and log data are open-sourced at https://github.com/pm25/semi-supervised-regression.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94365"}
{"video_file": "d5cKDHCrFJ_39025520.mp4", "openreview_id": "d5cKDHCrFJ", "slideslive_id": 39025520, "venue": "nips2024", "title": "EPIC: Effective Prompting for Imbalanced-Class Data Synthesis in Tabular Data Classification via Large Language Models", "status": "Poster", "keywords": "Large language model;In-context learning;Few-shot learning;Class imbalance;Tabular data;Synthetic data generation", "tldr": "Can LLMs effectively generate synthetic tabular data to address class imbalance for classification tasks via in-context learning? How should prompts be structured to achieve this goal?", "abstract": "Large language models (LLMs) have demonstrated remarkable in-context learning capabilities across diverse applications. In this work, we explore the effectiveness of LLMs for generating realistic synthetic tabular data, identifying key prompt design elements to optimize performance.
We introduce EPIC, a novel approach that leverages balanced, grouped data samples and consistent formatting with unique variable mapping to guide LLMs in generating accurate synthetic data across all classes, even for imbalanced datasets. Evaluations on real-world datasets show that EPIC achieves state-of-the-art machine learning classification performance, significantly improving generation efficiency. These findings highlight the effectiveness of EPIC for synthetic tabular data generation, particularly in addressing class imbalance.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/94364"} +{"video_file": "d75qCZb7TX_39026730.mp4", "openreview_id": "d75qCZb7TX", "slideslive_id": 39026730, "venue": "nips2024", "title": "Context-Aware Testing: A New Paradigm for Model Testing with Large Language Models", "status": "Poster", "keywords": "model testing;tabular data;large language models", "tldr": "We introduce context-aware testing (CAT) which uses context to guide the search for meaningful model failures", "abstract": "The predominant de facto paradigm of testing ML models relies on either using only held-out data to compute aggregate evaluation metrics or by assessing the performance on different subgroups. However, such data-only testing methods operate under the restrictive assumption that the available empirical data is the sole input for testing ML models, disregarding valuable contextual information that could guide model testing. In this paper, we challenge the go-to approach of data-only testing and introduce Context-Aware Testing (CAT) which uses context as an inductive bias to guide the search for meaningful model failures. We instantiate the first CAT system, SMART Testing, which employs large language models to hypothesize relevant and likely failures, which are evaluated on data using a self-falsification mechanism. Through empirical evaluations in diverse settings, we show that SMART automatically identifies more relevant and impactful failures than alternatives, demonstrating the potential of CAT as a testing paradigm.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/94363"} +{"video_file": "dAXuir2ets_39024501.mp4", "openreview_id": "dAXuir2ets", "slideslive_id": 39024501, "venue": "nips2024", "title": "SpaFL: Communication-Efficient Federated Learning With Sparse Models And Low Computational Overhead", "status": "Poster", "keywords": "Federated Learning;Communication and Computation Efficiency;Pruning", "tldr": "We find optimal structured sparse models by communicating trainable thresholds", "abstract": "The large communication and computation overhead of federated learning (FL) is one of the main challenges facing its practical deployment over resource-constrained clients and systems. In this work, SpaFL: a communication-efficient FL framework is proposed to optimize sparse model structures with low computational overhead. In SpaFL, a trainable threshold is defined for each filter/neuron to prune its all connected parameters, thereby leading to structured sparsity. To optimize the pruning process itself, only thresholds are communicated between a server and clients instead of parameters, thereby learning how to prune. Further, global thresholds are used to update model parameters by extracting aggregated parameter importance. The generalization bound of SpaFL is also derived, thereby proving key insights on the relation between sparsity and performance. 
Experimental results show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines. The code is available at https://github.com/news-vt/SpaFL_NeruIPS_2024", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94360"} +{"video_file": "dB6gwSDXKL_39024944.mp4", "openreview_id": "dB6gwSDXKL", "slideslive_id": 39024944, "venue": "nips2024", "title": "Towards Understanding How Transformers Learn In-context Through a Representation Learning Lens", "status": "Poster", "keywords": "Transformers;In-context Learning;Representation Learning", "tldr": "In this paper, we attempt to explore the ICL process in Transformers through a lens of representation learning.", "abstract": "Pre-trained large language models based on Transformers have demonstrated remarkable in-context learning (ICL) abilities. With just a few demonstration examples, the models can implement new tasks without any parameter updates. However, it is still an open question to understand the mechanism of ICL. In this paper, we attempt to explore the ICL process in Transformers through a lens of representation learning. Initially, leveraging kernel methods, we figure out a dual model for one softmax attention layer. The ICL inference process of the attention layer aligns with the training procedure of its dual model, generating token representation predictions that are equivalent to the dual model's test outputs. We delve into the training process of this dual model from a representation learning standpoint and further derive a generalization error bound related to the quantity of demonstration tokens. Subsequently, we extend our theoretical conclusions to more complicated scenarios, including one Transformer layer and multiple attention layers. Furthermore, drawing inspiration from existing representation learning methods especially contrastive learning, we propose potential modifications for the attention layer. Finally, experiments are designed to support our findings.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/94359"} +{"video_file": "dBynjEbAt0_39024831.mp4", "openreview_id": "dBynjEbAt0", "slideslive_id": 39024831, "venue": "nips2024", "title": "Probabilistic size-and-shape functional mixed models", "status": "Poster", "keywords": "statistical shape analysis;size-and-shape perturbation model;Bayesian random effects model;norm-preserving transformation;phase function", "tldr": "We develop a Bayesian mixed effects model for inference on the functional size-and-shape mean via a combination of size-and-shape preserving and altering random effects.", "abstract": "The reliable recovery and uncertainty quantification of a fixed effect function $\\mu$ in a functional mixed model, for modeling population- and object-level variability in noisily observed functional data, is a notoriously challenging task: variations along the $x$ and $y$ axes are confounded with additive measurement error, and cannot in general be disentangled. The question then as to what properties of $\\mu$ may be reliably recovered becomes important. We demonstrate that it is possible to recover the size-and-shape of a square-integrable $\\mu$ under a Bayesian functional mixed model. The size-and-shape of $\\mu$ is a geometric property invariant to a family of space-time unitary transformations, viewed as rotations of the Hilbert space, that jointly transform the $x$ and $y$ axes. 
A random object-level unitary transformation then captures size-and-shape preserving deviations of $\\mu$ from an individual function, while a random linear term and measurement error capture size-and-shape altering deviations. The model is regularized by appropriate priors on the unitary transformations, posterior summaries of which may then be suitably interpreted as optimal data-driven rotations of a fixed orthonormal basis for the Hilbert space. Our numerical experiments demonstrate utility of the proposed model, and superiority over the current state-of-the-art.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/94356"} +{"video_file": "dGQtja9X2C_39024638.mp4", "openreview_id": "dGQtja9X2C", "slideslive_id": 39024638, "venue": "nips2024", "title": "Thinking Forward: Memory-Efficient Federated Finetuning of Language Models", "status": "Poster", "keywords": "Federated Learning;Large Language Models;Forward-mode Automatic Differentiation;Forward-mode AD;Memory-efficient Finetuning;Memory-efficiency;Data Heterogeneity", "tldr": "Spry is a federated learning algorithm that enables finetuning LLMs using Forward-mode Auto Differentiation; to achieve low memory footprint, high accuracy, and fast convergence.", "abstract": "Finetuning large language models (LLMs) in federated learning (FL) settings has become increasingly important as it allows resource-constrained devices to finetune a model using private data. However, finetuning LLMs using backpropagation requires excessive memory (especially from intermediate activations) for resource-constrained devices. While Forward-mode Auto-Differentiation (AD) can significantly reduce memory footprint from activations, we observe that directly applying it to LLM finetuning results in slow convergence and poor accuracy. In this paper, we introduce Spry, an FL algorithm that splits trainable weights of an LLM among participating clients, such that each client computes gradients using forward-mode AD that are closer estimations of the true gradients. Spry achieves a low memory footprint, high accuracy, and fast convergence. We formally prove that the global gradients in Spry are unbiased estimators of true global gradients for homogeneous data distributions across clients, while heterogeneity increases bias of the estimates. We also derive Spry's convergence rate, showing that the gradients decrease inversely proportional to the number of FL rounds, indicating the convergence up to the limits of heterogeneity. Empirically, Spry reduces the memory footprint during training by 1.4-7.1\n\u00d7\nin contrast to backpropagation, while reaching comparable accuracy, across a wide range of language tasks, models, and FL settings. Spry reduces the convergence time by 1.2-20.3\n\u00d7\nand achieves 5.2-13.5% higher accuracy against state-of-the-art zero-order methods. When finetuning Llama2-7B with LoRA, compared to the peak memory consumption of 33.9GB of backpropagation, Spry only consumes 6.2GB of peak memory. For OPT13B, the reduction is from 76.5GB to 10.8GB. Spry makes feasible previously impossible FL deployments on commodity mobile and edge devices. 
Our source code is available for replication at https://github.com/Astuary/Spry.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/94351"} +{"video_file": "dHIKahbV6G_39028037.mp4", "openreview_id": "dHIKahbV6G", "slideslive_id": 39028037, "venue": "nips2024", "title": "UMFC: Unsupervised Multi-Domain Feature Calibration for Vision-Language Models", "status": "Poster", "keywords": "model calibration;test-time adaptation;CLIP;multi-domain", "tldr": "This paper proposes UMFC, which calibrate features from multi-domain in a label-free and training-free way.", "abstract": "Pre-trained vision-language models (e.g., CLIP) have shown powerful zero-shot transfer capabilities. But they still struggle with domain shifts and typically require labeled data to adapt to downstream tasks, which could be costly. In this work, we aim to leverage unlabeled data that naturally spans multiple domains to enhance the transferability of vision-language models. Under this unsupervised multi-domain setting, we have identified inherent model bias within CLIP, notably in its visual and text encoders. Specifically, we observe that CLIP\u2019s visual encoder tends to prioritize encoding domain over discriminative category information, meanwhile its text encoder exhibits a preference for domain-relevant classes. To mitigate this model bias, we propose a training-free and label-free feature calibration method, Unsupervised Multi-domain Feature Calibration (UMFC). UMFC estimates image-level biases from domain-specific features and text-level biases from the direction of domain transition. These biases are subsequently subtracted from original image and text features separately, to render them domain-invariant. We evaluate our method on multiple settings including transductive learning and test-time adaptation. Extensive experiments show that our method outperforms CLIP and performs on par with the state-of-the-arts that need additional annotations or optimization. Our code is available at https://github.com/GIT-LJc/UMFC.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94349"} +{"video_file": "dIHXwKjXRE_39028585.mp4", "openreview_id": "dIHXwKjXRE", "slideslive_id": 39028585, "venue": "nips2024", "title": "Towards the Dynamics of a DNN Learning Symbolic Interactions", "status": "Poster", "keywords": "deep learning theory;learning theory;knowledge representation", "tldr": "We prove the two-phase dynamics of a DNN learning interactions in the training process.", "abstract": "This study proves the two-phase dynamics of a deep neural network (DNN) learning interactions. Despite the long disappointing view of the faithfulness of post-hoc explanation of a DNN, a series of theorems have been proven [27] in recent years to show that for a given input sample, a small set of interactions between input variables can be considered as primitive inference patterns that faithfully represent a DNN's detailed inference logic on that sample. Particularly, Zhang et al. [41] have observed that various DNNs all learn interactions of different complexities in two distinct phases, and this two-phase dynamics well explains how a DNN changes from under-fitting to over-fitting. Therefore, in this study, we mathematically prove the two-phase dynamics of interactions, providing a theoretical mechanism for how the generalization power of a DNN changes during the training process. 
Experiments show that our theory well predicts the real dynamics of interactions on different DNNs trained for various tasks.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94348"} +{"video_file": "dIVb5C0QFf_39028733.mp4", "openreview_id": "dIVb5C0QFf", "slideslive_id": 39028733, "venue": "nips2024", "title": "MetaAligner: Towards Generalizable Multi-Objective Alignment of Language Models", "status": "Poster", "keywords": "Language Models;Multi-objective Alignment;Preference Optimization", "tldr": "We propose MetaAligner, the first policy-agnostic and generalizable method for multi-objective preference alignment.", "abstract": "Recent advancements in large language models (LLMs) focus on aligning to heterogeneous human expectations and values via multi-objective preference alignment. However, existing methods are dependent on the policy model parameters, which require high-cost repetition of their alignment algorithms for each new policy model, and they cannot expand to unseen objectives due to their static alignment objectives. In this work, we propose Meta-Objective Aligner (MetaAligner), the first policy-agnostic and generalizable method for multi-objective preference alignment. MetaAligner models multi-objective alignment into three stages: (1) dynamic objectives reformulation algorithm reorganizes traditional alignment datasets to supervise the model on performing flexible alignment across different objectives; (2) conditional weak-to-strong correction paradigm aligns the weak outputs of fixed policy models to approach strong outputs with higher preferences in the corresponding alignment objectives, enabling plug-and-play inferences on any policy models, which significantly reduces training costs and facilitates alignment on close-source policy models; (3) generalizable inference method flexibly adjusts target objectives by updating their text descriptions in the prompts, facilitating generalizable alignment to unseen objectives. Experimental results show that MetaAligner achieves significant and balanced improvements in multi-objective alignments on 10 state-of-the-art policy models, and saves up to 93.63% of GPU training hours compared to previous alignment methods. The model also effectively aligns unseen objectives, marking the first step towards generalizable multi-objective preference alignment.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94347"} +{"video_file": "dJ9KzkQ0oH_39024810.mp4", "openreview_id": "dJ9KzkQ0oH", "slideslive_id": 39024810, "venue": "nips2024", "title": "Neural Model Checking", "status": "Poster", "keywords": "Formal Verification;SystemVerilog;Temporal Logic;Neuro-symbolic AI;Neural Certificates", "tldr": "Hardware model checking using neural certificates", "abstract": "We introduce a machine learning approach to model checking temporal logic, with application to formal hardware verification. Model checking answers the question of whether every execution of a given system satisfies a desired temporal logic specification. Unlike testing, model checking provides formal guarantees. Its application is expected standard in silicon design and the EDA industry has invested decades into the development of performant symbolic model checking algorithms. Our new approach combines machine learning and symbolic reasoning by using neural networks as formal proof certificates for linear temporal logic. 
We train our neural certificates from randomly generated executions of the system and we then symbolically check their validity using satisfiability solving which, upon the affirmative answer, establishes that the system provably satisfies the specification. We leverage the expressive power of neural networks to represent proof certificates as well as the fact that checking a certificate is much simpler than finding one. As a result, our machine learning procedure for model checking is entirely unsupervised, formally sound, and practically effective. We experimentally demonstrate that our method outperforms the state-of-the-art academic and commercial model checkers on a set of standard hardware designs written in SystemVerilog.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94345"} +{"video_file": "dJUb9XRoZI_39027237.mp4", "openreview_id": "dJUb9XRoZI", "slideslive_id": 39027237, "venue": "nips2024", "title": "Constrained Diffusion with Trust Sampling", "status": "Poster", "keywords": "diffusion models;guidance;image generation;human motion", "tldr": "A new optimization-based training-free guided diffusion method, applicable to drastically different domains such as image and human motion generation", "abstract": "Diffusion models have demonstrated significant promise in various generative tasks; however, they often struggle to satisfy challenging constraints. Our approach addresses this limitation by rethinking training-free loss-guided diffusion from an optimization perspective. We formulate a series of constrained optimizations throughout the inference process of a diffusion model. In each optimization, we allow the sample to take multiple steps along the gradient of the proxy constraint function until we can no longer trust the proxy, according to the variance at each diffusion level. Additionally, we estimate the state manifold of diffusion model to allow for early termination when the sample starts to wander away from the state manifold at each diffusion step. Trust sampling effectively balances between following the unconditional diffusion model and adhering to the loss guidance, enabling more flexible and accurate constrained generation. We demonstrate the efficacy of our method through extensive experiments on complex tasks, and in drastically different domains of images and 3D motion generation, showing significant improvements over existing methods in terms of generation quality. Our implementation is available at https://github.com/will-s-h/trust-sampling.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94344"} +{"video_file": "dLnduWGTB4_39026854.mp4", "openreview_id": "dLnduWGTB4", "slideslive_id": 39026854, "venue": "nips2024", "title": "QUEST: Quality-Aware Metropolis-Hastings Sampling for Machine Translation", "status": "Poster", "keywords": "Machine Translation;Decoding;Quality Estimation", "tldr": "This paper presents a method to generate diverse and high-quality machine translations by sampling from a Gibbs distribution using the Metropolis-Hastings algorithm.", "abstract": "An important challenge in machine translation (MT) is to generate high-quality and diverse translations. Prior work has shown that the estimated likelihood from the MT model correlates poorly with translation quality. 
In contrast, quality evaluation metrics (such as COMET or BLEURT) exhibit high correlations with human judgments, which has motivated their use as rerankers (such as quality-aware and minimum Bayes risk decoding). However, relying on a single translation with high estimated quality increases the chances of \"gaming the metric''. In this paper, we address the problem of sampling a set of high-quality and diverse translations. We provide a simple and effective way to avoid over-reliance on noisy quality estimates by using them as the energy function of a Gibbs distribution. Instead of looking for a mode in the distribution, we generate multiple samples from high-density areas through the Metropolis-Hastings algorithm, a simple Markov chain Monte Carlo approach. The results show that our proposed method leads to high-quality and diverse outputs across multiple language pairs (English\n\u2194\n{German, Russian}) with two strong decoder-only LLMs (Alma-7b, Tower-7b).", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94343"} +{"video_file": "dWwin2uGYE_39024549.mp4", "openreview_id": "dWwin2uGYE", "slideslive_id": 39024549, "venue": "nips2024", "title": "Breaking the curse of dimensionality in structured density estimation", "status": "Poster", "keywords": "nonparametric statistics;density estimation;graphical models;sample complexity;curse of dimensionality", "tldr": "This work presents a new graphical quantity and shows that, when one assumes the Markov property over this graph, it leads to much faster rates for nonparametric density estimation.", "abstract": "We consider the problem of estimating a structured multivariate density, subject to Markov conditions implied by an undirected graph. In the worst case, without Markovian assumptions, this problem suffers from the curse of dimensionality. Our main result shows how the curse of dimensionality can be avoided or greatly alleviated under the Markov property, and applies to arbitrary graphs. While existing results along these lines focus on sparsity or manifold assumptions, we introduce a new graphical quantity called ``graph resilience'' and show that it dictates the optimal sample complexity. Surprisingly, although one might expect the sample complexity of this problem to scale with local graph parameters such as the degree, this turns out not to be the case. Through explicit examples, we compute uniform deviation bounds and illustrate how the curse of dimensionality in density estimation can thus be circumvented. Notable examples where the rate improves substantially include sequential, hierarchical, and spatial data.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94337"} +{"video_file": "dYIqAZXQNV_39027956.mp4", "openreview_id": "dYIqAZXQNV", "slideslive_id": 39027956, "venue": "nips2024", "title": "Generalizing CNNs to graphs with learnable neighborhood quantization", "status": "Poster", "keywords": "graph convolutional networks;graph neural networks;quantization", "tldr": "We present a novel GCN framework that properly generalizes CNNs to graph structured data.", "abstract": "Convolutional neural networks (CNNs) have led to a revolution in analyzing array data. However, many important sources of data, such as biological and social networks, are naturally structured as graphs rather than arrays, making the design of graph neural network (GNN) architectures that retain the strengths of CNNs an active and exciting area of research. 
Here, we introduce Quantized Graph Convolution Networks (QGCNs), the first framework for GNNs that formally and directly extends CNNs to graphs. QGCNs do this by decomposing the convolution operation into non-overlapping sub-kernels, allowing them to fit graph data while reducing to a 2D CNN layer on array data. We generalize this approach to graphs of arbitrary size and dimension by approaching sub-kernel assignment as a learnable multinomial assignment problem. Integrating this approach into a residual network architecture, we demonstrate performance that matches or exceeds other state-of-the-art GNNs on benchmark graph datasets and for predicting properties of nonlinear dynamics on a new finite element graph dataset. In summary, QGCNs are a novel GNN framework that generalizes CNNs and their strengths to graph data, allowing for more accurate and expressive models.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/94335"} +{"video_file": "da0ZJatRCN_39027747.mp4", "openreview_id": "da0ZJatRCN", "slideslive_id": 39027747, "venue": "nips2024", "title": "Active Learning for Derivative-Based Global Sensitivity Analysis with Gaussian Processes", "status": "Poster", "keywords": "Global Sensitivity Analysis;Gaussian Processes;Bayesian Active Learning;Bayesian optimization", "tldr": "We propose several uncertainty based and information gain acquisition functions for derivative based global sensitivity analysis", "abstract": "We consider the problem of active learning for global sensitivity analysis of expensive black-box functions. Our aim is to efficiently learn the importance of different input variables, e.g., in vehicle safety experimentation, we study the impact of the thickness of various components on safety objectives. Since function evaluations are expensive, we use active learning to prioritize experimental resources where they yield the most value. We propose novel active learning acquisition functions that directly target key quantities of derivative-based global sensitivity measures (DGSMs) under Gaussian process surrogate models. We showcase the first application of active learning directly to DGSMs, and develop tractable uncertainty reduction and information gain acquisition functions for these measures. Through comprehensive evaluation on synthetic and real-world problems, our study demonstrates how these active learning acquisition strategies substantially enhance the sample efficiency of DGSM estimation, particularly with limited evaluation budgets. Our work paves the way for more efficient and accurate sensitivity analysis in various scientific and engineering applications.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/94334"} +{"video_file": "dbnEf790Kv_39027627.mp4", "openreview_id": "dbnEf790Kv", "slideslive_id": 39027627, "venue": "nips2024", "title": "FUSE: Fast Unified Simulation and Estimation for PDEs", "status": "Poster", "keywords": "Forward and Inverse Problems;PDEs;Neural Operators;Neural Posterior Estimation", "tldr": "This work presents a framework to unify forward and inverse problems in scientific computing by optimizing a joint objective derived from operator learning.", "abstract": "The joint prediction of continuous fields and statistical estimation of the underlying discrete parameters is a common problem for many physical systems, governed by PDEs. 
Hitherto, it has been separately addressed by employing operator learning surrogates for field prediction while using simulation-based inference (and its variants) for statistical parameter determination. Here, we argue that solving both problems within the same framework can lead to consistent gains in accuracy and robustness. To this end, we propose a novel and flexible formulation of the operator learning problem that jointly predicts continuous quantities and infers distributions of discrete parameters, thereby amortizing the cost of both the inverse and the surrogate models to a joint pre-training step. We present the capabilities of the proposed methodology for predicting continuous and discrete biomarkers in full-body haemodynamics simulations under different levels of missing information. We also consider a test case for atmospheric large-eddy simulation of a two-dimensional dry cold bubble, where we infer both continuous time-series and information about the system's conditions. We present comparisons against different baselines to showcase significantly increased accuracy in both the inverse and the surrogate tasks.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/94332"} +{"video_file": "dfqsW38v1X_39028268.mp4", "openreview_id": "dfqsW38v1X", "slideslive_id": 39028268, "venue": "nips2024", "title": "QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs", "status": "Poster", "keywords": "quantization;efficient inference;large language models", "tldr": "We present a quantization scheme that enables 4-bit quantization of all weights, activations and KV-caches of LLMs during inference", "abstract": "We introduce QuaRot, a new Quantization scheme based on Rotations, which is able to quantize LLMs end-to-end, including all weights, activations, and KV cache in 4 bits. QuaRot rotates LLMs in a way that removes outliers from the hidden state without changing the output, making quantization easier. This computational invariance is applied to the hidden state (residual) of the LLM, as well as to the activations of the feed-forward components, aspects of the attention mechanism, and to the KV cache. The result is a quantized model where all matrix multiplications are performed in 4 bits, without any channels identified for retention in higher precision. Our 4-bit quantized LLAMA2-70B model has losses of at most 0.47 WikiText-2 perplexity and retains 99% of the zero-shot performance. We also show that QuaRot can provide lossless 6 and 8 bit LLAMA-2 models without any calibration data using round-to-nearest quantization. Code is available at github.com/spcl/QuaRot.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94328"} +{"video_file": "dg3tI3c2B1_39028156.mp4", "openreview_id": "dg3tI3c2B1", "slideslive_id": 39028156, "venue": "nips2024", "title": "Molecule Design by Latent Prompt Transformer", "status": "Spotlight", "keywords": "latent space generative modeling; Latent Prompt Transformer; molecule design", "tldr": "We propose the Latent Prompt Transformer, a novel latent space generative model with both offline and online learning algorithms for molecule design.", "abstract": "This work explores the challenging problem of molecule design by framing it as a conditional generative modeling task, where target biological properties or desired chemical constraints serve as conditioning variables. 
We propose the Latent Prompt Transformer (LPT), a novel generative model comprising three components: (1) a latent vector with a learnable prior distribution modeled by a neural transformation of Gaussian white noise; (2) a molecule generation model based on a causal Transformer, which uses the latent vector as a prompt; and (3) a property prediction model that predicts a molecule's target properties and/or constraint values using the latent prompt. LPT can be learned by maximum likelihood estimation on molecule-property pairs. During property optimization, the latent prompt is inferred from target properties and constraints through posterior sampling and then used to guide the autoregressive molecule generation. After initial training on existing molecules and their properties, we adopt an online learning algorithm to progressively shift the model distribution towards regions that support desired target properties. Experiments demonstrate that LPT not only effectively discovers useful molecules across single-objective, multi-objective, and structure-constrained optimization tasks, but also exhibits strong sample efficiency.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/94326"} +{"video_file": "dhFHO90INk_39026670.mp4", "openreview_id": "dhFHO90INk", "slideslive_id": 39026670, "venue": "nips2024", "title": "Implicitly Guided Design with PropEn: Match your Data to Follow the Gradient", "status": "Poster", "keywords": "matching;property enhancement;gradient approximation;design optimization;shape optimization;antibodies", "tldr": "A method inspired by ``matching'' that allows for implicit guidance, sidestepping the need for a discriminator. Training with a matched dataset approximates the gradient of a property and applies to design optimization in real-world datasets.", "abstract": "Across scientific domains, generating new models or optimizing existing ones while meeting specific criteria is crucial. Traditional machine learning frameworks for guided design use a generative model and a surrogate model (discriminator), requiring large datasets. However, real-world scientific applications often have limited data and complex landscapes, making data-hungry models inefficient or impractical. We propose a new framework, PropEn, inspired by ``matching'', which enables implicit guidance without training a discriminator. By matching each sample with a similar one that has a better property value, we create a larger training dataset that inherently indicates the direction of improvement. Matching, combined with an encoder-decoder architecture, forms a domain-agnostic generative framework for property enhancement. We show that training with a matched dataset approximates the gradient of the property of interest while remaining within the data distribution, allowing efficient design optimization. Extensive evaluations in toy problems and scientific applications, such as therapeutic protein design and airfoil optimization, demonstrate PropEn's advantages over common baselines. Notably, the protein design results are validated with wet lab experiments, confirming the competitiveness and effectiveness of our approach. 
Our code is available at https://github.com/prescient-design/propen.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94325"}
{"video_file": "diYnEYUbIU_39026500.mp4", "openreview_id": "diYnEYUbIU", "slideslive_id": 39026500, "venue": "nips2024", "title": "Geometric Exploitation for Indoor Panoramic Semantic Segmentation", "status": "Poster", "keywords": "indoor panoramic semantic segmentation;vertical relative distance", "tldr": "We propose a new approach for Indoor Panoramic Semantic Segmentation", "abstract": "PAnoramic Semantic Segmentation (PASS) is an important task in computer vision, as it enables semantic understanding of a 360\u00b0 environment. Currently, most of existing works have focused on addressing the distortion issues in 2D panoramic images without considering spatial properties of indoor scene. This restricts PASS methods in perceiving contextual attributes to deal with the ambiguity when working with monocular images. In this paper, we propose a novel approach for indoor panoramic semantic segmentation. Unlike previous works, we consider the panoramic image as a composition of segment groups: oversampled segments, representing planar structures such as floors and ceilings, and under-sampled segments, representing other scene elements. To optimize each group, we first enhance over-sampled segments by jointly optimizing with a dense depth estimation task. Then, we introduce a transformer-based context module that aggregates different geometric representations of the scene, combined with a simple high-resolution branch, it serves as a robust hybrid decoder for estimating under-sampled segments, effectively preserving the resolution of predicted masks while leveraging various indoor geometric properties. Experimental results on both real-world (Stanford2D3DS, Matterport3D) and synthetic (Structured3D) datasets demonstrate the robustness of our framework, by setting new state-of-the-arts in almost all evaluations. The code and updated results are available at: https://github.com/caodinhduc/vertical_relative_distance.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94323"}
{"video_file": "dlCTmEyq6y_39025528.mp4", "openreview_id": "dlCTmEyq6y", "slideslive_id": 39025528, "venue": "nips2024", "title": "Semi-Supervised Sparse Gaussian Classification: Provable Benefits of Unlabeled Data", "status": "Spotlight", "keywords": "semi-supervised learning;gaussian mixture models;high-dimensional statistics;sparsity;statistical-computational gaps", "tldr": "Our work highlights the provable benefits of combining labeled and unlabeled data for classification and feature selection in high dimensions.", "abstract": "The premise of semi-supervised learning (SSL) is that combining labeled and unlabeled data yields significantly more accurate models. Despite empirical successes, the theoretical understanding of SSL is still far from complete. In this work, we study SSL for high dimensional sparse Gaussian classification. To construct an accurate classifier a key task is feature selection, detecting the few variables that separate the two classes. For this SSL setting, we analyze information theoretic lower bounds for accurate feature selection as well as computational lower bounds, assuming the low-degree likelihood hardness conjecture.
Our key contribution is the identification of a regime in the problem parameters (dimension, sparsity, number of labeled and unlabeled samples) where SSL is guaranteed to be advantageous for classification. Specifically, there is a regime where it is possible to construct in polynomial time an accurate SSL classifier. However, any computationally efficient supervised or unsupervised learning schemes, that separately use only the labeled or unlabeled data would fail.\nOur work highlights the provable benefits of combining labeled and unlabeled data for classification and feature selection in high dimensions. We present simulations that complement our theoretical analysis.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94319"} +{"video_file": "dmhi2ydnXZ_39024458.mp4", "openreview_id": "dmhi2ydnXZ", "slideslive_id": 39024458, "venue": "nips2024", "title": "Scalable DBSCAN with Random Projections", "status": "Poster", "keywords": "Density-based clustering;random projections;extreme order statistics", "tldr": "Scale up DBSCAN and OPTICS with random projections", "abstract": "We present sDBSCAN, a scalable density-based clustering algorithm in high dimensions with cosine distance. sDBSCAN leverages recent advancements in random projections given a significantly large number of random vectors to quickly identify core points and their neighborhoods, the primary hurdle of density-based clustering. Theoretically, sDBSCAN preserves the DBSCAN\u2019s clustering structure under mild conditions with high probability. To facilitate sDBSCAN, we present sOPTICS, a scalable visual tool to guide the parameter setting of sDBSCAN. We also extend sDBSCAN and sOPTICS to L2, L1, \u03c72, and Jensen-Shannon distances via random kernel features. Empirically, sDBSCAN is significantly faster and provides higher accuracy than competitive DBSCAN variants on real-world million-point data sets. On these data sets, sDBSCAN and sOPTICS run in a few minutes, while the scikit-learn counterparts and other clustering competitors demand several hours or cannot run on our hardware due to memory constraints. Our code is available at https://github.com/NinhPham/sDbscan.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/94318"} +{"video_file": "doaJTihgIZ_39027771.mp4", "openreview_id": "doaJTihgIZ", "slideslive_id": 39027771, "venue": "nips2024", "title": "Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron", "status": "Poster", "keywords": "Learning Dynamics;non-linear perceptron;supervised learning;reinforcement learning", "tldr": "We derive learning dynamics for a non-linear perceptron performing a binary Gaussian classification task.", "abstract": "The ability of a brain or a neural network to efficiently learn depends crucially on both the task structure and the learning rule. Previous works have analyzed the dynamical equations describing learning in the relatively simplified context of the perceptron under assumptions of a student-teacher framework or a linearized output. While these assumptions have facilitated theoretical understanding, they have precluded a detailed understanding of the roles of the nonlinearity and input-data distribution in determining the learning dynamics, limiting the applicability of the theories to real biological or artificial neural networks. 
Here, we use a stochastic-process approach to derive flow equations describing learning, applying this framework to the case of a nonlinear perceptron performing binary classification. We characterize the effects of the learning rule (supervised or reinforcement learning, SL/RL) and input-data distribution on the perceptron's learning curve and the forgetting curve as subsequent tasks are learned. In particular, we find that the input-data noise differently affects the learning speed under SL vs. RL, as well as determines how quickly learning of a task is overwritten by subsequent learning. Additionally, we verify our approach with real data using the MNIST dataset. This approach points a way toward analyzing learning dynamics for more-complex circuit architectures.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94317"} +{"video_file": "dqT9MC5NQl_39025195.mp4", "openreview_id": "dqT9MC5NQl", "slideslive_id": 39025195, "venue": "nips2024", "title": "Approximately Equivariant Neural Processes", "status": "Poster", "keywords": "equivariance;neural processes;meta learning;deep learning;probabilistic methods", "tldr": "A general approach to constructing approximately equivariant architectures with applications to neural processes", "abstract": "Equivariant deep learning architectures exploit symmetries in learning problems to improve the sample efficiency of neural-network-based models and their ability to generalise. However, when modelling real-world data, learning problems are often not exactly equivariant, but only approximately. For example, when estimating the global temperature field from weather station observations, local topographical features like mountains break translation equivariance. In these scenarios, it is desirable to construct architectures that can flexibly depart from exact equivariance in a data-driven way. Current approaches to achieving this cannot usually be applied out-of-the-box to any architecture and symmetry group. In this paper, we develop a general approach to achieving this using existing equivariant architectures. Our approach is agnostic to both the choice of symmetry group and model architecture, making it widely applicable. We consider the use of approximately equivariant architectures in neural processes (NPs), a popular family of meta-learning models. We demonstrate the effectiveness of our approach on a number of synthetic and real-world regression experiments, showing that approximately equivariant NP models can outperform both their non-equivariant and strictly equivariant counterparts.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/94315"} +{"video_file": "dxwIaCVkWU_39028389.mp4", "openreview_id": "dxwIaCVkWU", "slideslive_id": 39028389, "venue": "nips2024", "title": "Divide-and-Conquer Predictive Coding: a structured Bayesian inference algorithm", "status": "Poster", "keywords": "bioplausible;predictive coding;neuroscience;variational Bayes;VAE;deep generative model;probabilistic graphical models", "tldr": "We design a nested Sequential Monte Carlo algorithm based on predictive coding in neuroscience, and demonstrate both its biological plausibility and its performance in deep generative modeling.", "abstract": "Unexpected stimuli induce \"error\" or \"surprise\" signals in the brain. 
The theory of predictive coding promises to explain these observations in terms of Bayesian inference by suggesting that the cortex implements variational inference in a probabilistic graphical model. However, when applied to machine learning tasks, this family of algorithms has yet to perform on par with other variational approaches in high-dimensional, structured inference problems. To address this, we introduce a novel predictive coding algorithm for structured generative models, that we call divide-and-conquer predictive coding (DCPC); it differs from other formulations of predictive coding, as it respects the correlation structure of the generative model and provably performs maximum-likelihood updates of model parameters, all without sacrificing biological plausibility. Empirically, DCPC achieves better numerical performance than competing algorithms and provides accurate inference in a number of problems not previously addressed with predictive coding. We provide an open implementation of DCPC in Pyro on Github.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/94307"}
{"video_file": "dxxj4S06YL_39025064.mp4", "openreview_id": "dxxj4S06YL", "slideslive_id": 39025064, "venue": "nips2024", "title": "Fair Secretaries with Unfair Predictions", "status": "Poster", "keywords": "Secretary problem;fairness;algorithms with predictions;online algorithms", "tldr": "We study fairness in the context of the secretary problem with predictions.", "abstract": "Algorithms with predictions is a recent framework for decision-making under uncertainty that leverages the power of machine-learned predictions without making any assumption about their quality. The goal in this framework is for algorithms to achieve an improved performance when the predictions are accurate while maintaining acceptable guarantees when the predictions are erroneous. A serious concern with algorithms that use predictions is that these predictions can be biased and, as a result, cause the algorithm to make decisions that are deemed unfair. We show that this concern manifests itself in the classical secretary problem in the learning-augmented setting---the state-of-the-art algorithm can have zero probability of accepting the best candidate, which we deem unfair, despite promising to accept a candidate whose expected value is at least max{\u03a9(1), 1 \u2212 O(\u03b5)} times the optimal value, where \u03b5 is the prediction error. We show how to preserve this promise while also guaranteeing to accept the best candidate with probability \u03a9(1). Our algorithm and analysis are based on a new ``pegging'' idea that diverges from existing works and simplifies/unifies some of their results. Finally, we extend to the k-secretary problem and complement our theoretical analysis with experiments.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94306"}
{"video_file": "dxyNVEBQMp_39028775.mp4", "openreview_id": "dxyNVEBQMp", "slideslive_id": 39028775, "venue": "nips2024", "title": "Introducing Spectral Attention for Long-Range Dependency in Time Series Forecasting", "status": "Poster", "keywords": "Time series forecasting;Long-range dependency;Low-pass filter;Spectral attention;Long term trend", "tldr": "Introduce low-pass filter based spectral attention to address long-range dependency in time series forecasting", "abstract": "Sequence modeling faces challenges in capturing long-range dependencies across diverse tasks.
Recent linear and transformer-based forecasters have shown superior performance in time series forecasting. However, they are constrained by their inherent inability to effectively address long-range dependencies in time series data, primarily due to using fixed-size inputs for prediction. Furthermore, they typically sacrifice essential temporal correlation among consecutive training samples by shuffling them into mini-batches. To overcome these limitations, we introduce a fast and effective Spectral Attention mechanism, which preserves temporal correlations among samples and facilitates the handling of long-range information while maintaining the base model structure. Spectral Attention preserves long-period trends through a low-pass filter and facilitates gradient to flow between samples. Spectral Attention can be seamlessly integrated into most sequence models, allowing models with fixed-sized look-back windows to capture long-range dependencies over thousands of steps. Through extensive experiments on 11 real-world time series datasets using 7 recent forecasting models, we consistently demonstrate the efficacy of our Spectral Attention mechanism, achieving state-of-the-art results.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/94305"} +{"video_file": "e2INndPINB_39024587.mp4", "openreview_id": "e2INndPINB", "slideslive_id": 39024587, "venue": "nips2024", "title": "Rethinking Reconstruction-based Graph-Level Anomaly Detection: Limitations and a Simple Remedy", "status": "Poster", "keywords": "graph-level anomaly detection;graph neural network;graph autoencoder", "tldr": "We investigate reconstruction-based graph neural networks in graph-level anomaly detection task.", "abstract": "Graph autoencoders (Graph-AEs) learn representations of given graphs by aiming to accurately reconstruct them. A notable application of Graph-AEs is graph-level anomaly detection (GLAD), whose objective is to identify graphs with anomalous topological structures and/or node features compared to the majority of the graph population. Graph-AEs for GLAD regard a graph with a high mean reconstruction error (i.e. mean of errors from all node pairs and/or nodes) as anomalies. Namely, the methods rest on the assumption that they would better reconstruct graphs with similar characteristics to the majority. We, however, report non-trivial counter-examples, a phenomenon we call reconstruction flip, and highlight the limitations of the existing Graph-AE-based GLAD methods. Specifically, we empirically and theoretically investigate when this assumption holds and when it fails. Through our analyses, we further argue that, while the reconstruction errors for a given graph are effective features for GLAD, leveraging the multifaceted summaries of the reconstruction errors, beyond just mean, can further strengthen the features. Thus, we propose a novel and simple GLAD method, named MUSE. The key innovation of MUSE involves taking multifaceted summaries of reconstruction errors as graph features for GLAD. 
This surprisingly simple method obtains SOTA performance in GLAD, performing best overall among 14 methods across 10 datasets.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/94301"} +{"video_file": "e397soEZh8_39025617.mp4", "openreview_id": "e397soEZh8", "slideslive_id": 39025617, "venue": "nips2024", "title": "Learning Structure-Aware Representations of Dependent Types", "status": "Poster", "keywords": "premise selection;agda;structured attention;theorem proving;proof assistant", "tldr": "A novel, hyper-articulated dataset for AI&TP, and a first model to go with it.", "abstract": "Agda is a dependently-typed programming language and a proof assistant, pivotal in proof formalization and programming language theory. This paper extends the Agda ecosystem into machine learning territory, and, vice versa, makes Agda-related resources available to machine learning practitioners. We introduce and release a novel dataset of Agda program-proofs that is elaborate and extensive enough to support various machine learning applications -- the first of its kind. Leveraging the dataset's ultra-high resolution, which details proof states at the sub-type level, we propose a novel neural architecture targeted at faithfully representing dependently-typed programs on the basis of structural rather than nominal principles. We instantiate and evaluate our architecture in a premise selection setup, where it achieves promising initial results, surpassing strong baselines.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94299"} +{"video_file": "e5icsXBD8Q_39024619.mp4", "openreview_id": "e5icsXBD8Q", "slideslive_id": 39024619, "venue": "nips2024", "title": "Large Language Model Unlearning via Embedding-Corrupted Prompts", "status": "Poster", "keywords": "machine unlearning;safety;alignment;large language model unlearning", "tldr": "We present \\textbf{Embedding-COrrupted (ECO) Prompts}, a lightweight unlearning framework for large language models to address both the challenges of knowledge entanglement and unlearning efficiency.", "abstract": "Large language models (LLMs) have advanced to encompass extensive knowledge across diverse domains. Yet controlling what a large language model should not know is important for ensuring alignment and thus safe use. However, accurately and efficiently unlearning knowledge from an LLM remains challenging due to the potential collateral damage caused by the fuzzy boundary between retention and forgetting, and the large computational requirements for optimization across state-of-the-art models with hundreds of billions of parameters. In this work, we present \\textbf{Embedding-COrrupted (ECO) Prompts}, a lightweight unlearning framework for large language models to address both the challenges of knowledge entanglement and unlearning efficiency. Instead of relying on the LLM itself to unlearn, we enforce an unlearned state during inference by employing a prompt classifier to identify and safeguard prompts to forget. We learn corruptions added to prompt embeddings via zeroth order optimization toward the unlearning objective offline and corrupt prompts flagged by the classifier during inference. We find that these embedding-corrupted prompts not only lead to desirable outputs that satisfy the unlearning objective but also closely approximate the output from a model that has never been trained on the data intended for forgetting. 
Through extensive experiments on unlearning, we demonstrate the superiority of our method in achieving promising unlearning at \\textit{nearly zero side effects} in general domains and domains closely related to the unlearned ones. Additionally, we highlight the scalability of our method to 100 LLMs, ranging from 0.5B to 236B parameters, incurring no additional cost as the number of parameters increases. We have made our code publicly available at \\url{https://github.com/chrisliu298/llm-unlearn-eco}.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/94295"} +{"video_file": "e6WrwIvgzX_39027234.mp4", "openreview_id": "e6WrwIvgzX", "slideslive_id": 39027234, "venue": "nips2024", "title": "AutoMix: Automatically Mixing Language Models", "status": "Poster", "keywords": "Few-shot learning;Zero-shot learning;Self-Verification;cost-quality optimization;Decision making;Prompting;LLMs", "tldr": "AutoMix robustly routes queries among language models of varying sizes, efficiently balancing computational cost and solution accuracy", "abstract": "Large language models (LLMs) are now available from cloud API providers in various sizes and configurations. While this diversity offers a broad spectrum of choices, effectively leveraging the options to optimize computational cost and performance remains challenging. In this work, we present AutoMix, an approach that strategically routes queries to larger LMs, based on the approximate correctness of outputs from a smaller LM. Central to AutoMix are two key technical contributions. First, it has a few-shot self-verification mechanism, which estimates the reliability of its own outputs without requiring extensive training. Second, given that self-verification can be noisy, it employs a POMDP based router that can effectively select an appropriately sized model, based on answer confidence. Experiments across five language models and five challenging datasets show that Automix consistently surpasses strong baselines, reducing computational cost by over 50% for comparable performance.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94293"} +{"video_file": "eC5qdC4ZTQ_39028705.mp4", "openreview_id": "eC5qdC4ZTQ", "slideslive_id": 39028705, "venue": "nips2024", "title": "Unlock the Intermittent Control Ability of Model Free Reinforcement Learning", "status": "Poster", "keywords": "Deep Reinforcement Learning ; Representation Learning; Intermittent Control", "tldr": "We observe that previous DRL methods fail to learn effective policies in intermittent control scenarios because of the discontinue interaction and propose a plugin method for DRL to address such problems.", "abstract": "Intermittent control problems are common in real world. The interactions between the decision maker and the executor can be discontinuous (intermittent) due to various types of interruptions, e.g. unstable communication channel. Due to intermittent interaction, agents are unable to acquire the state sent by the executor and cannot transmit actions to the executor within a period of time step, i.e. bidirectional blockage, which may lead to inefficiencies of reinforcement learning policies and prevent the executors from completing the task. Such problem is not well studied in the RL community. 
In this paper, we model Intermittent control problem as an Intermittent Control Markov Decision Process, i.e agents are expected to generate action sequences corresponding to the unavailable states and transmit them before disabling interactions to ensure the smooth and effective motion of executors. However, directly generating multiple future actions in the original action space has unnatural motion issue and exploration difficulty. We propose Multi-step Action RepreSentation (MARS), which encodes a sequence of actions from the original action space to a compact and decodable latent space. Then based on the latent action sequence representation, the mainstream RL methods can be easily optimized to learn a smooth and efficient motion policy. Extensive experiments on simulation tasks and real-world robotic grasping tasks show that MARS significantly improves the learning efficiency and final performances compared with existing baselines.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94291"} +{"video_file": "eDNslSwQIj_39026046.mp4", "openreview_id": "eDNslSwQIj", "slideslive_id": 39026046, "venue": "nips2024", "title": "Neural Assets: 3D-Aware Multi-Object Scene Synthesis with Image Diffusion Models", "status": "Spotlight", "keywords": "Controllable generation;3D-aware editing;diffusion model", "tldr": "We propose a Neural Assets representation that enables 3D-aware multi-object control in real-world scenes", "abstract": "We address the problem of multi-object 3D pose control in image diffusion models. Instead of conditioning on a sequence of text tokens, we propose to use a set of per-object representations, Neural Assets, to control the 3D pose of individual objects in a scene. Neural Assets are obtained by pooling visual representations of objects from a reference image, such as a frame in a video, and are trained to reconstruct the respective objects in a different image, e.g., a later frame in the video. Importantly, we encode object visuals from the reference image while conditioning on object poses from the target frame, which enables learning disentangled appearance and position features. Combining visual and 3D pose representations in a sequence-of-tokens format allows us to keep the text-to-image interface of existing models, with Neural Assets in place of text tokens. By fine-tuning a pre-trained text-to-image diffusion model with this information, our approach enables fine-grained 3D pose and placement control of individual objects in a scene. We further demonstrate that Neural Assets can be transferred and recomposed across different scenes. Our model achieves state-of-the-art multi-object editing results on both synthetic 3D scene datasets, as well as two real-world video datasets (Objectron, Waymo Open).", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94290"} +{"video_file": "eFrdRuyHR9_39027962.mp4", "openreview_id": "eFrdRuyHR9", "slideslive_id": 39027962, "venue": "nips2024", "title": "Transition Constrained Bayesian Optimization via Markov Decision Processes", "status": "Poster", "keywords": "Bayesian Optimization;Transition Constrained;Markov Decision Process;Linear Bandits;Convex Reinforcement Learning", "tldr": "We do Bayesian Optimization under transition constraints by creating and solving tractable long-term planning problems in Markov Decision Processes.", "abstract": "Bayesian optimization is a methodology to optimize black-box functions. 
Traditionally, it focuses on the setting where you can arbitrarily query the search space. However, many real-life problems do not offer this flexibility; in particular, the search space of the next query may depend on previous ones. Example challenges arise in the physical sciences in the form of local movement constraints, required monotonicity in certain variables, and transitions influencing the accuracy of measurements. Altogether, such transition constraints necessitate a form of planning. This work extends classical Bayesian optimization via the framework of Markov Decision Processes. We iteratively solve a tractable linearization of our utility function using reinforcement learning to obtain a policy that plans ahead for the entire horizon. This is a parallel to the optimization of an acquisition function in policy space. The resulting policy is potentially history-dependent and non-Markovian. We showcase applications in chemical reactor optimization, informative path planning, machine calibration, and other synthetic examples.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94288"} +{"video_file": "eHzIwAhj06_39027063.mp4", "openreview_id": "eHzIwAhj06", "slideslive_id": 39027063, "venue": "nips2024", "title": "The Group Robustness is in the Details: Revisiting Finetuning under Spurious Correlations", "status": "Poster", "keywords": "spurious correlations;group robustness;distribution shift;class balancing", "tldr": "We identify surprising and nuanced behavior of finetuned models on worst-group accuracy in settings including class-balancing, model scaling, and spectral analysis.", "abstract": "Modern machine learning models are prone to over-reliance on spurious correlations, which can often lead to poor performance on minority groups. In this paper, we identify surprising and nuanced behavior of finetuned models on worst-group accuracy via comprehensive experiments on four well-established benchmarks across vision and language tasks. We first show that the commonly used class-balancing techniques of mini-batch upsampling and loss upweighting can induce a decrease in worst-group accuracy (WGA) with training epochs, leading to performance no better than without class-balancing. While in some scenarios, removing data to create a class-balanced subset is more effective, we show this depends on group structure and propose a mixture method which can outperform both techniques. Next, we show that scaling pretrained models is generally beneficial for worst-group accuracy, but only in conjunction with appropriate class-balancing. Finally, we identify spectral imbalance in finetuning features as a potential source of group disparities --- minority group covariance matrices incur a larger spectral norm than majority groups once conditioned on the classes. Our results show more nuanced interactions of modern finetuned models with group robustness than was previously known. 
Our code is available at https://github.com/tmlabonte/revisiting-finetuning.", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/94285"} +{"video_file": "eKSRTlzRWG_39028822.mp4", "openreview_id": "eKSRTlzRWG", "slideslive_id": 39028822, "venue": "nips2024", "title": "Structure Consistent Gaussian Splatting with Matching Prior for Few-shot Novel View Synthesis", "status": "Poster", "keywords": "Gaussian Splatting;Few-Shot Novel View Synthesis;Structure Consistency", "tldr": "A Structure Consistent Gaussian Splatting method to efficiently synthesize novel views from sparse inputs.", "abstract": "Despite the substantial progress of novel view synthesis, existing methods, either based on the Neural Radiance Fields (NeRF) or more recently 3D Gaussian Splatting (3DGS), suffer significant degradation when the input becomes sparse. Numerous efforts have been introduced to alleviate this problem, but they still struggle to synthesize satisfactory results efficiently, especially in the large scene. In this paper, we propose SCGaussian, a Structure Consistent Gaussian Splatting method using matching priors to learn 3D consistent scene structure. Considering the high interdependence of Gaussian attributes, we optimize the scene structure in two folds: rendering geometry and, more importantly, the position of Gaussian primitives, which is hard to be directly constrained in the vanilla 3DGS due to the non-structure property. To achieve this, we present a hybrid Gaussian representation. Besides the ordinary non-structure Gaussian primitives, our model also consists of ray-based Gaussian primitives that are bound to matching rays and whose optimization of their positions is restricted along the ray. Thus, we can utilize the matching correspondence to directly enforce the position of these Gaussian primitives to converge to the surface points where rays intersect. Extensive experiments on forward-facing, surrounding, and complex large scenes show the effectiveness of our approach with state-of-the-art performance and high efficiency. Code is available at https://github.com/prstrive/SCGaussian.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94282"} +{"video_file": "eKVugi5zr0_39027526.mp4", "openreview_id": "eKVugi5zr0", "slideslive_id": 39027526, "venue": "nips2024", "title": "RoME: A Robust Mixed-Effects Bandit Algorithm for Optimizing Mobile Health Interventions", "status": "Poster", "keywords": "Bandit Algorithms;Causal Inference;Supervised Learning;mHealth;Mixed-effects Modeling", "tldr": "The authors propose a robust contextual bandit algorithm for optimizing mobile health interventions that leverages (1) mixed effects, (2) nearest-neighbor regularization, and (3) debiased machine learning (DML).", "abstract": "Mobile health leverages personalized and contextually tailored interventions optimized through bandit and reinforcement learning algorithms. In practice, however, challenges such as participant heterogeneity, nonstationarity, and nonlinear relationships hinder algorithm performance. We propose RoME, a Robust Mixed-Effects contextual bandit algorithm that simultaneously addresses these challenges via (1) modeling the differential reward with user- and time-specific random effects, (2) network cohesion penalties, and (3) debiased machine learning for flexible estimation of baseline rewards. 
We establish a high-probability regret bound that depends solely on the dimension of the differential-reward model, enabling us to achieve robust regret bounds even when the baseline reward is highly complex. We demonstrate the superior performance of the RoME algorithm in a simulation and two off-policy evaluation studies.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/94281"} +{"video_file": "eNM94i7R3A_39027308.mp4", "openreview_id": "eNM94i7R3A", "slideslive_id": 39027308, "venue": "nips2024", "title": "Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning", "status": "Spotlight", "keywords": "feature learning;rich regime;lazy regime;exact solutions;conserved quantities;balanced initialization;neural tangent kernel;grokking", "tldr": "We derive exact solutions to a minimal model that transitions between lazy and rich learning, precisely elucidating how unbalanced initialization variances and learning rates determine the degree of feature learning in a finite-width network.", "abstract": "While the impressive performance of modern neural networks is often attributed to their capacity to efficiently extract task-relevant features from data, the mechanisms underlying this rich feature learning regime remain elusive, with much of our theoretical understanding stemming from the opposing lazy regime. In this work, we derive exact solutions to a minimal model that transitions between lazy and rich learning, precisely elucidating how unbalanced layer-specific initialization variances and learning rates determine the degree of feature learning. Our analysis reveals that they conspire to influence the learning regime through a set of conserved quantities that constrain and modify the geometry of learning trajectories in parameter and function space. We extend our analysis to more complex linear models with multiple neurons, outputs, and layers and to shallow nonlinear networks with piecewise linear activation functions. In linear networks, rapid feature learning only occurs from balanced initializations, where all layers learn at similar speeds. While in nonlinear networks, unbalanced initializations that promote faster learning in earlier layers can accelerate rich learning. Through a series of experiments, we provide evidence that this unbalanced rich regime drives feature learning in deep finite-width networks, promotes interpretability of early layers in CNNs, reduces the sample complexity of learning hierarchical data, and decreases the time to grokking in modular arithmetic. Our theory motivates further exploration of unbalanced initializations to enhance efficient feature learning.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94278"} +{"video_file": "eNeqGc9AgR_39025527.mp4", "openreview_id": "eNeqGc9AgR", "slideslive_id": 39025527, "venue": "nips2024", "title": "Flatten Anything: Unsupervised Neural Surface Parameterization", "status": "Poster", "keywords": "Surface Parameterization;UV Unwrapping;Neural Network;Unsupervised Learning", "tldr": "An unsupervised neural learning architecture for universal and fully-automated 3D surface parameterization", "abstract": "Surface parameterization plays an essential role in numerous computer graphics and geometry processing applications. 
Traditional parameterization approaches are designed for high-quality meshes laboriously created by specialized 3D modelers, thus unable to meet the processing demand for the current explosion of ordinary 3D data. Moreover, their working mechanisms are typically restricted to certain simple topologies, thus relying on cumbersome manual efforts (e.g., surface cutting, part segmentation) for pre-processing. In this paper, we introduce the Flatten Anything Model (FAM), an unsupervised neural architecture to achieve global free-boundary surface parameterization via learning point-wise mappings between 3D points on the target geometric surface and adaptively-deformed UV coordinates within the 2D parameter domain. To mimic the actual physical procedures, we ingeniously construct geometrically-interpretable sub-networks with specific functionalities of surface cutting, UV deforming, unwrapping, and wrapping, which are assembled into a bi-directional cycle mapping framework. Compared with previous methods, our FAM directly operates on discrete surface points without utilizing connectivity information, thus significantly reducing the strict requirements for mesh quality and even applicable to unstructured point cloud data. More importantly, our FAM is fully-automated without the need for pre-cutting and can deal with highly-complex topologies, since its learning process adaptively finds reasonable cutting seams and UV boundaries. Extensive experiments demonstrate the universality, superiority, and inspiring potential of our proposed neural surface parameterization paradigm. Our code is available at https://github.com/keeganhk/FlattenAnything.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94277"} +{"video_file": "eNvVjpx97O_39028543.mp4", "openreview_id": "eNvVjpx97O", "slideslive_id": 39028543, "venue": "nips2024", "title": "StreamingDialogue: Prolonged Dialogue Learning via Long Context Compression with Minimal Losses", "status": "Poster", "keywords": "dialogue compression;conversational attention sinks;memory", "tldr": "StreamingDialogue efficiently compresses dialogue history into conversational attention sinks with minimal losses, enhancing the model's long-term memory and facilitating prolonged streaming conversations.", "abstract": "Standard Large Language Models (LLMs) struggle with handling dialogues with long contexts due to efficiency and consistency issues. According to our observation, dialogue contexts are highly structured, and the special token of End-of-Utterance (EoU) in dialogues has the potential to aggregate information. We refer to the EoU tokens as ``conversational attention sinks'' (conv-attn sinks). Accordingly, we introduce StreamingDialogue, which compresses long dialogue history into conv-attn sinks with minimal losses, and thus reduces computational complexity quadratically with the number of sinks (i.e., the number of utterances). Current LLMs already demonstrate the ability to handle long context window, e.g., a window size of 200K or more. To this end, by compressing utterances into EoUs, our method has the potential to handle more than 200K of utterances, resulting in a prolonged dialogue learning. In order to minimize information losses from reconstruction after compression, we design two learning strategies of short-memory reconstruction (SMR) and long-memory reactivation (LMR). 
Our method outperforms strong baselines in dialogue tasks and achieves a 4× speedup while reducing memory usage by 18× compared to dense attention recomputation.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94276"}
+{"video_file": "eOAPWWOGs9_39025083.mp4", "openreview_id": "eOAPWWOGs9", "slideslive_id": 39025083, "venue": "nips2024", "title": "AutoPSV: Automated Process-Supervised Verifier", "status": "Poster", "keywords": "Large Language Models;Math Reasoning;Commonsense Reasoning;Automatic Process Annotation;Multi-step Reasoning", "tldr": "AutoPSV enhances the reasoning capabilities of LLMs by automatically generating process annotations and confidence scores for reasoning steps, significantly improving performance in tasks involving complex reasoning.", "abstract": "In this work, we propose a novel method named \\textbf{Auto}mated \\textbf{P}rocess-\\textbf{S}upervised \\textbf{V}erifier (\\textbf{\\textsc{AutoPSV}}) to enhance the reasoning capabilities of large language models (LLMs) by automatically annotating the reasoning steps. \\textsc{AutoPSV} begins by training a verification model on the correctness of final answers, enabling it to generate automatic process annotations. This verification model assigns a confidence score to each reasoning step, indicating the probability of arriving at the correct final answer from that point onward. We detect relative changes in the verification's confidence scores across reasoning steps to automatically annotate the reasoning process, enabling error detection even in scenarios where ground truth answers are unavailable. This alleviates the need for numerous manual annotations or the high computational costs associated with model-induced annotation approaches. We experimentally validate that the step-level confidence changes learned by the verification model trained on the final answer correctness can effectively identify errors in the reasoning steps. We demonstrate that the verification model, when trained on process annotations generated by \\textsc{AutoPSV}, exhibits improved performance in selecting correct answers from multiple LLM-generated outputs. Notably, we achieve substantial improvements across five datasets in mathematics and commonsense reasoning. The source code of \\textsc{AutoPSV} is available at \\url{https://github.com/rookie-joe/AutoPSV}.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94275"}
+{"video_file": "eOx0SMRUv7_39027716.mp4", "openreview_id": "eOx0SMRUv7", "slideslive_id": 39027716, "venue": "nips2024", "title": "Online Consistency of the Nearest Neighbor Rule", "status": "Poster", "keywords": "Nearest Neighbor Classification;Online Learning;Smoothed Analysis", "tldr": "The nearest neighbor rule is online consistent under much broader conditions than previously known.", "abstract": "In the realizable online setting, a learner is tasked with making predictions for a stream of instances, where the correct answer is revealed after each prediction. A learning rule is online consistent if its mistake rate eventually vanishes. The nearest neighbor rule is fundamental prediction strategy, but it is only known to be consistent under strong statistical or geometric assumptions: the instances come i.i.d. or the label classes are well-separated. 
We prove online consistency for all measurable functions in doubling metric spaces under the mild assumption that instances are generated by a process that is uniformly absolutely continuous with respect to an underlying finite, upper doubling measure.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94273"} +{"video_file": "eP9auEJqFg_39027327.mp4", "openreview_id": "eP9auEJqFg", "slideslive_id": 39027327, "venue": "nips2024", "title": "Representation Noising: A Defence Mechanism Against Harmful Finetuning", "status": "Poster", "keywords": "Harmful Fine-tuning;LLM Security;Domain Authorization", "tldr": "We provide a method that removes information about harmful representations in large language models which can mitigate the model from being fine-tuned on harmful datasets.", "abstract": "Releasing open-source large language models (LLMs) presents a dual-use risk since bad actors can easily fine-tune these models for harmful purposes. Even without the open release of weights, weight stealing and fine-tuning APIs make closed models vulnerable to harmful fine-tuning attacks (HFAs). While safety measures like preventing jailbreaks and improving safety guardrails are important, such measures can easily be reversed through fine-tuning. In this work, we propose Representation Noising (\\textsf{\\small RepNoise}), a defence mechanism that operates even when attackers have access to the weights. \\textsf{\\small RepNoise} works by removing information about harmful representations such that it is difficult to recover them during fine-tuning. Importantly, our defence is also able to generalize across different subsets of harm that have not been seen during the defence process as long as they are drawn from the same distribution of the attack set. Our method does not degrade the general capability of LLMs and retains the ability to train the model on harmless tasks. We provide empirical evidence that the efficacy of our defence lies in its ``depth'': the degree to which information about harmful representations is removed across {\\em all layers} of the LLM. We also find areas where \\textsf{\\small RepNoise} still remains ineffective and highlight how those limitations can inform future research.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94272"} +{"video_file": "eSes1Mic9d_39026987.mp4", "openreview_id": "eSes1Mic9d", "slideslive_id": 39026987, "venue": "nips2024", "title": "Who's asking? User personas and the mechanics of latent misalignment", "status": "Spotlight", "keywords": "safety;interpretability;explainability;NLP;alignment;activation engineering;jailbreaking", "tldr": "Decoding from earlier layers in LLMs recovers harmful content that would have been blocked, and LLMs answer harmful queries posed by some groups of users but not others.", "abstract": "Studies show that safety-tuned models may nevertheless divulge harmful information. In this work, we show that whether they do so depends significantly on who they are talking to, which we refer to as user persona. In fact, we find manipulating user persona to be more effective for eliciting harmful content than certain more direct attempts to control model refusal. We study both natural language prompting and activation steering as intervention methods and show that activation steering is significantly more effective at bypassing safety filters. 
We shed light on the mechanics of this phenomenon by showing that even when model generations are safe, harmful content can persist in hidden representations and can be extracted by decoding from earlier layers. We also show we can predict a persona\u2019s effect on refusal given only the geometry of its steering vector. Finally, we show that certain user personas induce the model to form more charitable interpretations of otherwise dangerous queries.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/94269"} +{"video_file": "eTu6kvrkSq_39027410.mp4", "openreview_id": "eTu6kvrkSq", "slideslive_id": 39027410, "venue": "nips2024", "title": "Only Strict Saddles in the Energy Landscape of Predictive Coding Networks?", "status": "Poster", "keywords": "Predictive Coding;Backpropagation;Deep Neural Networks;Loss Landscape;Saddle Points;Gradient Descent;Vanishing Gradients;Local Learning;Inference Learning", "tldr": "Predictive coding inference makes the loss landscape of feedforward neural networks more benign and robust to vanishing gradients.", "abstract": "Predictive coding (PC) is an energy-based learning algorithm that performs iterative inference over network activities before updating weights. Recent work suggests that PC can converge in fewer learning steps than backpropagation thanks to its inference procedure. However, these advantages are not always observed, and the impact of PC inference on learning is not theoretically well understood. To address this gap, we study the geometry of the PC weight landscape at the inference equilibrium of the network activities. For deep linear networks, we first show that the equilibrated PC energy is equal to a rescaled mean squared error loss with a weight-dependent rescaling. We then prove that many highly degenerate (non-strict) saddles of the loss including the origin become much easier to escape (strict) in the equilibrated energy. Experiments on both linear and non-linear networks strongly validate our theory and further suggest that all the saddles of the equilibrated energy are strict. Overall, this work shows that PC inference makes the loss landscape of feedforward networks more benign and robust to vanishing gradients, while also highlighting the fundamental challenge of scaling PC to very deep models.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/94268"} +{"video_file": "eUg64OsGDE_39026655.mp4", "openreview_id": "eUg64OsGDE", "slideslive_id": 39026655, "venue": "nips2024", "title": "CountGD: Multi-Modal Open-World Counting", "status": "Poster", "keywords": "multi-modal open-world counting;vision-language foundation model;open-world object counting;class-agnostic counting;text-specified counting", "tldr": "We propose CountGD, a state-of-the-art multi-modal open-world object counting model that can count arbitrary objects given visual exemplars, text, or both together, fusing the modalities to accurately estimate the object count.", "abstract": "The goal of this paper is to improve the generality and accuracy of open-vocabulary object counting in images. To improve the generality, we repurpose an open-vocabulary detection foundation model (GroundingDINO) for the counting task, and also extend its capabilities by introducing modules to enable specifying the target object to count by visual exemplars. 
In turn, these new capabilities -- being able to specify the target object by multi-modalites (text and exemplars) -- lead to an improvement in counting accuracy. We make three contributions: First, we introduce the first open-world counting model, CountGD, where the prompt can be specified by a text description or visual exemplars or both; Second, we show that the performance of the model significantly improves the state of the art on multiple counting benchmarks -- when using text only, CountGD outperforms all previous text-only works, and when using both text and visual exemplars, we outperform all previous models; Third, we carry out a preliminary study into different interactions between the text and visual exemplar prompts, including the cases where they reinforce each other and where one restricts the other. The code and an app to test the model are available at https://www.robots.ox.ac.uk/vgg/research/countgd/.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94265"} +{"video_file": "eV5YIrJPdy_39027121.mp4", "openreview_id": "eV5YIrJPdy", "slideslive_id": 39027121, "venue": "nips2024", "title": "The Expressive Capacity of State Space Models: A Formal Language Perspective", "status": "Poster", "keywords": "state-space models;formal languages;expressivity;theory", "tldr": "We theoretically investigate the expressive capacity of modern state-space models.", "abstract": "Recently, recurrent models based on linear state space models (SSMs) have shown promising performance in language modeling (LM), competititve with transformers. However, there is little understanding of the in-principle abilities of such models, which could provide useful guidance to the search for better LM architectures. We present a comprehensive theoretical study of the capacity of such SSMs as it compares to that of transformers and traditional RNNs. We find that SSMs and transformers have overlapping but distinct strengths. In star-free state tracking, SSMs implement straightforward and exact solutions to problems that transformers struggle to represent exactly. They can also model bounded hierarchical structure with optimal memory even without simulating a stack. On the other hand, we identify a design choice in current SSMs that limits their expressive power. We discuss implications for SSM and LM research, and verify results empirically on a recent SSM, Mamba.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/94264"} +{"video_file": "eWiGn0Fcdx_39027504.mp4", "openreview_id": "eWiGn0Fcdx", "slideslive_id": 39027504, "venue": "nips2024", "title": "Exploring Token Pruning in Vision State Space Models", "status": "Poster", "keywords": "State Space Models;Token pruning;Efficiency;Interpretability", "tldr": "We revisit the unique computational characteristics of SSMs and designed a novel and general token pruning method specifically for SSM-based vision models.", "abstract": "State Space Models (SSMs) have the advantage of keeping linear computational complexity compared to attention modules in transformers, and have been applied to vision tasks as a new type of powerful vision foundation model. Inspired by the observations that the final prediction in vision transformers (ViTs) is only based on a subset of most informative tokens, we take the novel step of enhancing the efficiency of SSM-based vision models through token-based pruning. 
However, direct applications of existing token pruning techniques designed for ViTs fail to deliver good performance, even with extensive fine-tuning. To address this issue, we revisit the unique computational characteristics of SSMs and discover that naive application disrupts the sequential token positions. This insight motivates us to design a novel and general token pruning method specifically for SSM-based vision models. We first introduce a pruning-aware hidden state alignment method to stabilize the neighborhood of remaining tokens for performance enhancement. Besides, based on our detailed analysis, we propose a token importance evaluation method adapted for SSM models, to guide the token pruning. With efficient implementation and practical acceleration methods, our method brings actual speedup. Extensive experiments demonstrate that our approach can achieve significant computation reduction with minimal impact on performance across different tasks. Notably, we achieve 81.7% accuracy on ImageNet with a 41.6% reduction in the FLOPs for pruned PlainMamba-L3. Furthermore, our work provides deeper insights into understanding the behavior of SSM-based vision models for future research.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94262"} +{"video_file": "ebBnKVxMcZ_39024752.mp4", "openreview_id": "ebBnKVxMcZ", "slideslive_id": 39024752, "venue": "nips2024", "title": "Confidence Calibration of Classifiers with Many Classes", "status": "Poster", "keywords": "Calibration;Classification;Uncertainty quantification", "tldr": "We transform the problem of calibrating a multiclass classifier into calibrating a single surrogate binary classifier and show that it significantly improves existing calibration methods.", "abstract": "For classification models based on neural networks, the maximum predicted class probability is often used as a confidence score. This score rarely predicts well the probability of making a correct prediction and requires a post-processing calibration step. However, many confidence calibration methods fail for problems with many classes. To address this issue, we transform the problem of calibrating a multiclass classifier into calibrating a single surrogate binary classifier. This approach allows for more efficient use of standard calibration methods. We evaluate our approach on numerous neural networks used for image or text classification and show that it significantly enhances existing calibration methods.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/94258"} +{"video_file": "eezCLKwx6T_39028354.mp4", "openreview_id": "eezCLKwx6T", "slideslive_id": 39028354, "venue": "nips2024", "title": "Adversarial Environment Design via Regret-Guided Diffusion Models", "status": "Spotlight", "keywords": "deep reinforcement learning;curriculum learning;environment design", "tldr": "We propose a novel UED algorithm that uses regret-guided diffusion models to improve agent robustness.", "abstract": "Training agents that are robust to environmental changes remains a significant challenge in deep reinforcement learning (RL). Unsupervised environment design (UED) has recently emerged to address this issue by generating a set of training environments tailored to the agent's capabilities. While prior works demonstrate that UED has the potential to learn a robust policy, their performance is constrained by the capabilities of the environment generation. 
To this end, we propose a novel UED algorithm, adversarial environment design via regret-guided diffusion models (ADD). The proposed method guides the diffusion-based environment generator with the regret of the agent to produce environments that the agent finds challenging but conducive to further improvement. By exploiting the representation power of diffusion models, ADD can directly generate adversarial environments while maintaining the diversity of training environments, enabling the agent to effectively learn a robust policy. Our experimental results demonstrate that the proposed method successfully generates an instructive curriculum of environments, outperforming UED baselines in zero-shot generalization across novel, out-of-distribution environments.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94254"} +{"video_file": "ejIzdt50ek_39027191.mp4", "openreview_id": "ejIzdt50ek", "slideslive_id": 39027191, "venue": "nips2024", "title": "Stochastic Optimization Schemes for Performative Prediction with Nonconvex Loss", "status": "Poster", "keywords": "Non-convex optimization;Performative prediction;Stochastic optimization algorithm", "tldr": "This paper studies the performative prediction with smooth but possibly non-convex loss and analyzes a greedy deployment scheme with stochastic gradient descent algorithm.", "abstract": "This paper studies a risk minimization problem with decision dependent data distribution. The problem pertains to the performative prediction setting in which a trained model can affect the outcome estimated by the model. Such dependency creates a feedback loop that influences the stability of optimization algorithms such as stochastic gradient descent (SGD). We present the first study on performative prediction with smooth but possibly non-convex loss. We analyze a greedy deployment scheme with SGD (SGD-GD). Note that in the literature, SGD-GD is often studied with strongly convex loss. We first propose the definition of stationary performative stable (SPS) solutions through relaxing the popular performative stable condition. We then prove that SGD-GD converges to a biased SPS solution in expectation. We consider two conditions of sensitivity on the distribution shifts: (i) the sensitivity is characterized by Wasserstein-1 distance and the loss is Lipschitz w.r.t.~data samples, or (ii) the sensitivity is characterized by total variation (TV) divergence and the loss is bounded. In both conditions, the bias levels are proportional to the stochastic gradient's variance and sensitivity level. Our analysis is extended to a lazy deployment scheme where models are deployed once per several SGD updates, and we show that it converges to an SPS solution with reduced bias. 
Numerical experiments corroborate our theories.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94252"} +{"video_file": "ektPEcqGLb_39026798.mp4", "openreview_id": "ektPEcqGLb", "slideslive_id": 39026798, "venue": "nips2024", "title": "Poisson Variational Autoencoder", "status": "Spotlight", "keywords": "NeuroAI;Bayesian Inference;Predictive Coding;Sparse Coding;Variational Autoencoder", "tldr": "We introduce the Poisson VAE, a generative model that encodes inputs into discrete spike counts and unifies established theoretical concepts in neuroscience with modern machine learning.", "abstract": "Variational autoencoders (VAE) employ Bayesian inference to interpret sensory inputs, mirroring processes that occur in primate vision across both ventral (Higgins et al., 2021) and dorsal (Vafaii et al., 2023) pathways. Despite their success, traditional VAEs rely on continuous latent variables, which significantly deviates from the discrete nature of biological neurons. Here, we developed the Poisson VAE (P-VAE), a novel architecture that combines principles of predictive coding with a VAE that encodes inputs into discrete spike counts. Combining Poisson-distributed latent variables with predictive coding introduces a metabolic cost term in the model loss function, suggesting a relationship with sparse coding which we verify empirically. Additionally, we analyze the geometry of learned representations, contrasting the P-VAE to alternative VAE models. We find that the P-VAE encodes its inputs in relatively higher dimensions, facilitating linear separability of categories in a downstream classification task with a much better (5x) sample efficiency. Our work provides an interpretable computational framework to study brain-like sensory processing and paves the way for a deeper understanding of perception as an inferential process.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/94249"} +{"video_file": "enlxHLwwFf_39026225.mp4", "openreview_id": "enlxHLwwFf", "slideslive_id": 39026225, "venue": "nips2024", "title": "Functional Bilevel Optimization for Machine Learning", "status": "Spotlight", "keywords": "bilevel optimization;functional optimization;adjoint method;neural networks", "tldr": "A functional perspective on bilevel optimization for deep learning applications.", "abstract": "In this paper, we introduce a new functional point of view on bilevel optimization problems for machine learning, where the inner objective is minimized over a function space. These types of problems are most often solved by using methods developed in the parametric setting, where the inner objective is strongly convex with respect to the parameters of the prediction function. The functional point of view does not rely on this assumption and notably allows using over-parameterized neural networks as the inner prediction function. 
We propose scalable and efficient algorithms for the functional bilevel optimization problem and illustrate the benefits of our approach on instrumental regression and reinforcement learning tasks.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94248"} +{"video_file": "eowkjKVPoH_39026646.mp4", "openreview_id": "eowkjKVPoH", "slideslive_id": 39026646, "venue": "nips2024", "title": "Mission Impossible: A Statistical Perspective on Jailbreaking LLMs", "status": "Poster", "keywords": "large language models;jailbreak;safety alignment;theory", "tldr": "We provide a statistical framework for jailbreaking LLMs, show jailbreaking is unavoidable and provide an improved DPO-based alignment protocol. The proposed method leads to improved safety across a suite of jailbreak adversaries.", "abstract": "Large language models (LLMs) are trained on a deluge of text data with limited quality control. As a result, LLMs can exhibit unintended or even harmful behaviours, such as leaking information, fake news or hate speech. Countermeasures, commonly referred to as preference alignment, include fine-tuning the pretrained LLMs with carefully crafted text examples of desired behaviour. Even then, empirical evidence shows preference aligned LLMs can be enticed to harmful behaviour. This so called jailbreaking of LLMs is typically achieved by adversarially modifying the input prompt to the LLM. Our paper provides theoretical insights into the phenomenon of preference alignment and jailbreaking from a statistical perspective. Under our framework, we first show that pretrained LLMs will mimic harmful behaviour if present in the training corpus. \\textbf{Under that same framework, we then introduce a statistical notion of alignment, and lower-bound the jailbreaking probability, showing that it is unpreventable under reasonable assumptions.} Based on our insights, we propose an alteration to the currently prevalent alignment strategy RLHF. Specifically, we introduce a simple modification to the RLHF objective, we call \\emph{E-RLHF}, that aims to increase the likelihood of safe responses. \\emph{E-RLHF} brings no additional training cost, and is compatible with other methods. Empirically, we demonstrate that \\emph{E-RLHF} outperforms RLHF on all alignment problems put forward by the AdvBench \\citep{zou2023universal} and HarmBench project \\citep{mazeika2024harmbench} without sacrificing model performance as measured by the MT-Bench project \\citep{zheng2024judging}.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94247"} +{"video_file": "eqMNwXvOqn_39027756.mp4", "openreview_id": "eqMNwXvOqn", "slideslive_id": 39027756, "venue": "nips2024", "title": "MKGL: Mastery of a Three-Word Language", "status": "Spotlight", "keywords": "Knowledge Graph;Large Language Model;Knowledge Graph Completion;Knowledge Graph Embedding;Low-Rank Adaption", "tldr": "instruct an LLM in the language of knowledge graphs", "abstract": "Large language models (LLMs) have significantly advanced performance across a spectrum of natural language processing (NLP) tasks. Yet, their application to knowledge graphs (KGs), which describe facts in the form of triplets and allow minimal hallucinations, remains an underexplored frontier. In this paper, we investigate the integration of LLMs with KGs by introducing a specialized KG Language (KGL), where a sentence precisely consists of an entity noun, a relation verb, and ends with another entity noun. 
Despite KGL's unfamiliar vocabulary to the LLM, we facilitate its learning through a tailored dictionary and illustrative sentences, and enhance context understanding via real-time KG context retrieval and KGL token embedding augmentation. Our results reveal that LLMs can achieve fluency in KGL, drastically reducing errors compared to conventional KG embedding methods on KG completion. Furthermore, our enhanced LLM shows exceptional competence in generating accurate three-word sentences from an initial entity and interpreting new unseen terms out of KGs.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94246"} +{"video_file": "erjQDJ0z9L_39025976.mp4", "openreview_id": "erjQDJ0z9L", "slideslive_id": 39025976, "venue": "nips2024", "title": "Discovering Preference Optimization Algorithms with and for Large Language Models", "status": "Poster", "keywords": "Preference optimization;RLHF;Large Language Models", "tldr": "We use LLM's to generate novel RLHF objectives, some of which achieve strong results.", "abstract": "Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs. Typically, preference optimization is approached as an offline supervised learning task using manually crafted convex loss functions. While these methods are based on theoretical insights, they are inherently constrained by human creativity, so the large search space of possible loss functions remains under-explored. We address this by performing LLM-driven objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention. Specifically, we iteratively prompt an LLM to propose and implement new preference optimization loss functions based on previously evaluated performance metrics. This process leads to the discovery of previously unknown and performant preference optimization algorithms. The best performing of these we call Discovered Preference Optimization (DiscoPOP), a novel algorithm that adaptively blends logistic and exponential losses. Experiments demonstrate the state-of-the-art performance of DiscoPOP and its successful transfer to held-out tasks.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94244"} +{"video_file": "esVleaqkRc_39027056.mp4", "openreview_id": "esVleaqkRc", "slideslive_id": 39027056, "venue": "nips2024", "title": "Neur2BiLO: Neural Bilevel Optimization", "status": "Poster", "keywords": "bilevel optimization;machine learning;discrete optimization;integer programming", "tldr": "This work proposes a learning-based approach for quickly computing high-quality solutions for several challenging classes bilevel optimization problems (linear/non-linear, integer/mixed-integer) via learning-based value function approximation.", "abstract": "Bilevel optimization deals with nested problems in which leader takes the first decision to minimize their objective function while accounting for a follower's best-response reaction. Constrained bilevel problems with integer variables are particularly notorious for their hardness. While exact solvers have been proposed for mixed-integer linear bilevel optimization, they tend to scale poorly with problem size and are hard to generalize to the non-linear case. On the other hand, problem-specific algorithms (exact and heuristic) are limited in scope. 
Under a data-driven setting in which similar instances of a bilevel problem are solved routinely, our proposed framework, Neur2BiLO, embeds a neural network approximation of the leader's or follower's value function, trained via supervised regression, into an easy-to-solve mixed-integer program. Neur2BiLO serves as a heuristic that produces high-quality solutions extremely fast for four applications with linear and non-linear objectives and pure and mixed-integer variables.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94240"} +{"video_file": "etPAH4xSUn_39025870.mp4", "openreview_id": "etPAH4xSUn", "slideslive_id": 39025870, "venue": "nips2024", "title": "In-Context Symmetries: Self-Supervised Learning through Contextual World Models", "status": "Poster", "keywords": "Self-Supervised Learning; Context; Equivariance", "tldr": "Learning general representations in self-supervised learning while having the versatility to tail down to task-wise symmetries when given a few examples as the context.", "abstract": "At the core of self-supervised learning for vision is the idea of learning invariant or equivariant representations with respect to a set of data transformations. This approach, however, introduces strong inductive biases, which can render the representations fragile in downstream tasks that do not conform to these symmetries. In this work, drawing insights from world models, we propose to instead learn a general representation that can adapt to be invariant or equivariant to different transformations by paying attention to context --- a memory module that tracks task-specific states, actions and future states. Here, the action is the transformation, while the current and future states respectively represent the input's representation before and after the transformation. Our proposed algorithm, Contextual Self Supervised Learning (ContextSSL), learns equivariance to all transformations (as opposed to invariance). In this way, the model can learn to encode all relevant features as general representations while having the versatility to tail down to task-wise symmetries when given a few examples as the context. Empirically, we demonstrate significant performance gains over existing methods on equivariance-related tasks, supported by both qualitative and quantitative evaluations.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94239"} +{"video_file": "exATQD4HSv_39026872.mp4", "openreview_id": "exATQD4HSv", "slideslive_id": 39026872, "venue": "nips2024", "title": "A scalable generative model for dynamical system reconstruction from neuroimaging data", "status": "Poster", "keywords": "Dynamical Systems Reconstruction;Recurrent Neural Networks;Nonlinear Dynamics;Neuroscience;fMRI", "tldr": "We build on the recent success of control techniques for training SSMs for dynamical systems reconstruction (DSR), and propose a scalable DSR algorithm for empirical situations in which we deal with convolved observations, such as fMRI time series.", "abstract": "Data-driven inference of the generative dynamics underlying a set of observed time series is of growing interest in machine learning and the natural sciences. In neuroscience, such methods promise to alleviate the need to handcraft models based on biophysical principles and allow to automatize the inference of inter-individual differences in brain dynamics. 
Recent breakthroughs in training techniques for state space models (SSMs) specifically geared toward dynamical systems (DS) reconstruction (DSR) enable to recover the underlying system including its geometrical (attractor) and long-term statistical invariants from even short time series. These techniques are based on control-theoretic ideas, like modern variants of teacher forcing (TF), to ensure stable loss gradient propagation while training. However, as it currently stands, these techniques are not directly applicable to data modalities where current observations depend on an entire history of previous states due to a signal\u2019s filtering properties, as common in neuroscience (and physiology more generally). Prominent examples are the blood oxygenation level dependent (BOLD) signal in functional magnetic resonance imaging (fMRI) or Ca$^{2+}$ imaging data. Such types of signals render the SSM's decoder model non-invertible, a requirement for previous TF-based methods. Here, exploiting the recent success of control techniques for training SSMs, we propose a novel algorithm that solves this problem and scales exceptionally well with model dimensionality and filter length. We demonstrate its efficiency in reconstructing dynamical systems, including their state space geometry and long-term temporal properties, from just short BOLD time series.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/94236"} +{"video_file": "eyfYC19gOd_39027702.mp4", "openreview_id": "eyfYC19gOd", "slideslive_id": 39027702, "venue": "nips2024", "title": "Grid4D: 4D Decomposed Hash Encoding for High-Fidelity Dynamic Gaussian Splatting", "status": "Poster", "keywords": "Dynamic Scene Rendering;Gaussian Splatting;Hash Encoding;Explicit Representation", "tldr": "This paper proposes Grid4D, a novel high-fidelity dynamic scene rendering model with 4D decomposed hash encoding.", "abstract": "Recently, Gaussian splatting has received more and more attention in the field of static scene rendering. Due to the low computational overhead and inherent flexibility of explicit representations, plane-based explicit methods are popular ways to predict deformations for Gaussian-based dynamic scene rendering models. However, plane-based methods rely on the inappropriate low-rank assumption and excessively decompose the space-time 4D encoding, resulting in overmuch feature overlap and unsatisfactory rendering quality. To tackle these problems, we propose Grid4D, a dynamic scene rendering model based on Gaussian splatting and employing a novel explicit encoding method for the 4D input through the hash encoding. Different from plane-based explicit representations, we decompose the 4D encoding into one spatial and three temporal 3D hash encodings without the low-rank assumption. Additionally, we design a novel attention module that generates the attention scores in a directional range to aggregate the spatial and temporal features. The directional attention enables Grid4D to more accurately fit the diverse deformations across distinct scene components based on the spatial encoded features. Moreover, to mitigate the inherent lack of smoothness in explicit representation methods, we introduce a smooth regularization term that keeps our model from the chaos of deformation prediction. 
Our experiments demonstrate that Grid4D significantly outperforms the state-of-the-art models in visual quality and rendering speed.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94235"} +{"video_file": "f4v7cmm5sC_39027199.mp4", "openreview_id": "f4v7cmm5sC", "slideslive_id": 39027199, "venue": "nips2024", "title": "Foundation Inference Models for Markov Jump Processes", "status": "Poster", "keywords": "Zero-shot inference;Markov jump process;Inference of Markov processes;Foundation models;Foundation models for time series;time series", "tldr": "We introduce a framework for zero-shot inference of Markov jump processes from time series data. Our foundation model performs on par with state-of-the-art models which are finetuned on the target datasets.", "abstract": "Markov jump processes are continuous-time stochastic processes which describe dynamical systems evolving in discrete state spaces. These processes find wide application in the natural sciences and machine learning, but their inference is known to be far from trivial. In this work we introduce a methodology for zero-shot inference of Markov jump processes (MJPs), on bounded state spaces, from noisy and sparse observations, which consists of two components. First, a broad probability distribution over families of MJPs, as well as over possible observation times and noise mechanisms, with which we simulate a synthetic dataset of hidden MJPs and their noisy observations. Second, a neural recognition model that processes subsets of the simulated observations, and that is trained to output the initial condition and rate matrix of the target MJP in a supervised way. We empirically demonstrate that one and the same (pretrained) recognition model can infer, in a zero-shot fashion, hidden MJPs evolving in state spaces of different dimensionalities. Specifically, we infer MJPs which describe (i) discrete flashing ratchet systems, which are a type of Brownian motors, and the conformational dynamics in (ii) molecular simulations, (iii) experimental ion channel data and (iv) simple protein folding models. What is more, we show that our model performs on par with state-of-the-art models which are trained on the target datasets.\nOur pretrained model is available online.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/94231"} +{"video_file": "f63DKIpx0I_39026735.mp4", "openreview_id": "f63DKIpx0I", "slideslive_id": 39026735, "venue": "nips2024", "title": "Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments", "status": "Poster", "keywords": "Model performance degradation;autonomous adaptation;large language models", "tldr": "Self-healing machine learning is a framework which diagnoses the reason for model degradation and proposes diagnosis-based corrective actions", "abstract": "Real-world machine learning systems often encounter model performance degradation due to distributional shifts in the underlying data generating process (DGP). Existing approaches to addressing shifts, such as concept drift adaptation, are limited by their reason-agnostic nature. By choosing from a pre-defined set of actions, such methods implicitly assume that the causes of model degradation are irrelevant to what actions should be taken, limiting their ability to select appropriate adaptations. In this paper, we propose an alternative paradigm to overcome these limitations, called self-healing machine learning (SHML). 
Contrary to previous approaches, SHML autonomously diagnoses the reason for degradation and proposes diagnosis-based corrective actions. We formalize SHML as an optimization problem over a space of adaptation actions to minimize the expected risk under the shifted DGP. We introduce a theoretical framework for self-healing systems and build an agentic self-healing solution\nH\n-LLM which uses large language models to perform self-diagnosis by reasoning about the structure underlying the DGP, and self-adaptation by proposing and evaluating corrective actions. Empirically, we analyze different components of\nH\n-LLM to understand why and when it works, demonstrating the potential of self-healing ML.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94230"} +{"video_file": "f70e6YYFHF_39026435.mp4", "openreview_id": "f70e6YYFHF", "slideslive_id": 39026435, "venue": "nips2024", "title": "The Factorization Curse: Which Tokens You Predict Underlie the Reversal Curse and More", "status": "Poster", "keywords": "Reversal curse;reliability;safety;language models;interpretability;learning objectives", "tldr": "We find the standard next token learning objective underlies the reversal curse, which can be overcome with any-to-any context-prediction training.", "abstract": "Today's best language models still struggle with \"hallucinations\", factually incorrect generations, which impede their ability to reliably retrieve information seen during training. The reversal curse, where models cannot recall information when probed in a different order than was encountered during training, exemplifies limitations in information retrieval. To better understand these limitations, we reframe the reversal curse as a factorization curse --- a failure of models to learn the same joint distribution under different factorizations. We more closely simulate finetuning workflows which train pretrained models on specialized knowledge by introducing WikiReversal, a realistic testbed based on Wikipedia knowledge graphs. Through a series of controlled experiments with increasing levels of realism, including non-reciprocal relations, we find that reliable information retrieval is an inherent failure of the next-token prediction objective used in popular large language models. Moreover, we demonstrate reliable information retrieval cannot be solved with scale, reversed tokens, or even naive bidirectional-attention training. Consequently, various approaches to finetuning on specialized data would necessarily provide mixed results on downstream tasks, unless the model has already seen the right sequence of tokens. 
Across five tasks of varying levels of complexity, our results uncover a promising path forward: factorization-agnostic objectives can significantly mitigate the reversal curse and hint at improved knowledge storage and planning capabilities.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94229"} +{"video_file": "f8MrWxlnRz_39028570.mp4", "openreview_id": "f8MrWxlnRz", "slideslive_id": 39028570, "venue": "nips2024", "title": "Adaptive Important Region Selection with Reinforced Hierarchical Search for Dense Object Detection", "status": "Poster", "keywords": "Dense Object Detection;Reinforced Hierarchical Search", "tldr": "We propose an Adaptive Important Region Selection framework guided by Evidential Q-learning coupled with a uniquely designed reward function for dense object detection.", "abstract": "Existing state-of-the-art dense object detection techniques tend to produce a large number of false positive detections on difficult images with complex scenes because they focus on ensuring a high recall. To improve the detection accuracy, we propose an Adaptive Important Region Selection (AIRS) framework guided by Evidential Q-learning coupled with a uniquely designed reward function. Inspired by human visual attention, our detection model conducts object search in a top-down, hierarchical fashion. It starts from the top of the hierarchy with the coarsest granularity and then identifies the potential patches likely to contain objects of interest. It then discards non-informative patches and progressively moves downward on the selected ones for a fine-grained search. The proposed evidential Q-learning systematically encodes epistemic uncertainty in its evidential-Q value to encourage the exploration of unknown patches, especially in the early phase of model training. In this way, the proposed model dynamically balances exploration-exploitation to cover both highly valuable and informative patches. Theoretical analysis and extensive experiments on multiple datasets demonstrate that our proposed framework outperforms the SOTA models.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94227"} +{"video_file": "fA3RMMl8ii_39024405.mp4", "openreview_id": "fA3RMMl8ii", "slideslive_id": 39024405, "venue": "nips2024", "title": "Tactile DreamFusion: Exploiting Tactile Sensing for 3D Generation", "status": "Poster", "keywords": "3D neural rendering;diffusion model;texture synthesis;multi-modal generation", "tldr": "We leverage tactile sensing to improve geometric details of generated 3D assets for text-to-3D and image-to-3D tasks.", "abstract": "3D generation methods have shown visually compelling results powered by diffusion image priors. However, they often fail to produce realistic geometric details, resulting in overly smooth surfaces or geometric details inaccurately baked in albedo maps. To address this, we introduce a new method that incorporates touch as an additional modality to improve the geometric details of generated 3D assets. We design a lightweight 3D texture field to synthesize visual and tactile textures, guided by 2D diffusion model priors on both visual and tactile domains. We condition the visual texture generation on high-resolution tactile normals and guide the patch-based tactile texture refinement with a customized TextureDreambooth. We further present a multi-part generation pipeline that enables us to synthesize different textures across various regions. 
To our knowledge, we are the first to leverage high-resolution tactile sensing to enhance geometric details for 3D generation tasks. We evaluate our method in both text-to-3D and image-to-3D settings. Our experiments demonstrate that our method provides customized and realistic fine geometric textures while maintaining accurate alignment between two modalities of vision and touch.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/94226"} +{"video_file": "fAlcxvrOEX_39026526.mp4", "openreview_id": "fAlcxvrOEX", "slideslive_id": 39026526, "venue": "nips2024", "title": "AdjointDEIS: Efficient Gradients for Diffusion Models", "status": "Poster", "keywords": "continuous adjoint equations;neural differential equations;neural ODEs;adjoint sensitivity method;diffusion models;guided generation", "tldr": "We propose a novel family of bespoke ODE solves for solving the continuous adjoint equations for diffusion models.", "abstract": "The optimization of the latents and parameters of diffusion models with respect to some differentiable metric defined on the output of the model is a challenging and complex problem. The sampling for diffusion models is done by solving either the probability flow ODE or diffusion SDE wherein a neural network approximates the score function allowing a numerical ODE/SDE solver to be used. However, naive backpropagation techniques are memory intensive, requiring the storage of all intermediate states, and face additional complexity in handling the injected noise from the diffusion term of the diffusion SDE. We propose a novel family of bespoke ODE solvers to the continuous adjoint equations for diffusion models, which we call AdjointDEIS. We exploit the unique construction of diffusion SDEs to further simplify the formulation of the continuous adjoint equations using exponential integrators. Moreover, we provide convergence order guarantees for our bespoke solvers. Significantly, we show that continuous adjoint equations for diffusion SDEs actually simplify to a simple ODE. Lastly, we demonstrate the effectiveness of AdjointDEIS for guided generation with an adversarial attack in the form of the face morphing problem. Our code will be released on our project page https://zblasingame.github.io/AdjointDEIS/", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94225"} +{"video_file": "fDiZJ7mmOV_39024777.mp4", "openreview_id": "fDiZJ7mmOV", "slideslive_id": 39024777, "venue": "nips2024", "title": "Non-Stationary Learning of Neural Networks with Automatic Soft Parameter Reset", "status": "Poster", "keywords": "Non-stationarity;plasticity loss;online learning;deep learning", "tldr": "Learning to reset Neural Networks parameters using Ornstein-Uhlenbeck to learn on non-stationary data", "abstract": "Neural networks are most often trained under the assumption that data come from a stationary distribution. However, settings in which this assumption is violated are of increasing importance; examples include supervised learning with distributional shifts, reinforcement learning, continual learning and non-stationary contextual bandits. Here, we introduce a novel learning approach that automatically models and adapts to non-stationarity by linking parameters through an Ornstein-Uhlenbeck process with an adaptive drift parameter. The adaptive drift draws the parameters towards the distribution used at initialisation, so the approach can be understood as a form of soft parameter reset. 
We show empirically that our approach performs well in non-stationary supervised, and off-policy reinforcement learning settings.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/94222"} +{"video_file": "fE3RqiF4Nx_39026268.mp4", "openreview_id": "fE3RqiF4Nx", "slideslive_id": 39026268, "venue": "nips2024", "title": "Metric Flow Matching for Smooth Interpolations on the Data Manifold", "status": "Poster", "keywords": "Flow Matching; Riemannian Geometry; single-cell RNA sequencing;", "tldr": "We generalize Conditional Flow Matching by learning interpolants that stay on the data manifold leading to more meaningful matching", "abstract": "Matching objectives underpin the success of modern generative models and rely on constructing conditional paths that transform a source distribution into a target distribution. Despite being a fundamental building block, conditional paths have been designed principally under the assumption of\nEuclidean geometry\n, resulting in straight interpolations. However, this can be particularly restrictive for tasks such as trajectory inference, where straight paths might lie outside the data manifold, thus failing to capture the underlying dynamics giving rise to the observed marginals. In this paper, we propose Metric Flow Matching (MFM), a novel simulation-free framework for conditional flow matching where interpolants are approximate geodesics learned by minimizing the kinetic energy of a data-induced Riemannian metric. This way, the generative model matches vector fields on the data manifold, which corresponds to lower uncertainty and more meaningful interpolations. We prescribe general metrics to instantiate MFM, independent of the task, and test it on a suite of challenging problems including LiDAR navigation, unpaired image translation, and modeling cellular dynamics. We observe that MFM outperforms the Euclidean baselines, particularly achieving SOTA on single-cell trajectory prediction.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/94221"} +{"video_file": "fHq4x2YXVv_39027505.mp4", "openreview_id": "fHq4x2YXVv", "slideslive_id": 39027505, "venue": "nips2024", "title": "AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models", "status": "Poster", "keywords": "Pruning;large language models;heavy tails", "tldr": "We use methods derived from HT-SR Theory to develop improved methods for pruning LLMs.", "abstract": "Recent work on pruning large language models (LLMs) has shown that one can eliminate a large number of parameters without compromising performance, making pruning a promising strategy to reduce LLM model size. Existing LLM pruning strategies typically assign uniform pruning ratios across layers, limiting overall pruning ability; and recent work on layerwise pruning of LLMs is often based on heuristics that can easily lead to suboptimal performance. In this paper, we leverage Heavy-Tailed Self-Regularization (HT-SR) Theory, in particular the shape of empirical spectral densities (ESDs) of weight matrices, to design improved layerwise pruning ratios for LLMs. Our analysis reveals a wide variability in how well-trained, and thus relatedly how prunable, different layers of an LLM are. Based on this, we propose AlphaPruning, which uses shape metrics to allocate layerwise sparsity ratios in a more theoretically-principled manner. 
AlphaPruning can be used in conjunction with multiple existing LLM pruning methods. Our empirical results show that AlphaPruning prunes LLaMA-7B to 80% sparsity while maintaining reasonable perplexity, marking a first in the literature on LLMs.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/94217"} +{"video_file": "fIz8K4DJ7w_39027079.mp4", "openreview_id": "fIz8K4DJ7w", "slideslive_id": 39027079, "venue": "nips2024", "title": "Rethinking the Diffusion Models for Missing Data Imputation: A Gradient Flow Perspective", "status": "Poster", "keywords": "Missing Data Imputation;Gradient Flow;Reproducing Kernel Hilbert Space;Functional Optimization", "tldr": "We propose a novel, easy-to-implement, numerical tabular data imputation approach based on joint wasserstein gradient flow.", "abstract": "Diffusion models have demonstrated competitive performance in missing data imputation (MDI) task. However, directly applying diffusion models to MDI produces suboptimal performance due to two primary defects. First, the sample diversity promoted by diffusion models hinders the accurate inference of missing values. Second, data masking reduces observable indices for model training, obstructing imputation performance. To address these challenges, we introduce $\\underline{\\text{N}}$egative $\\underline{\\text{E}}$ntropy-regularized $\\underline{\\text{W}}$asserstein gradient flow for $\\underline{\\text{Imp}}$utation (NewImp), enhancing diffusion models for MDI from a gradient flow perspective. To handle the first defect, we incorporate a negative entropy regularization term into the cost functional to suppress diversity and improve accuracy. To handle the second defect, we demonstrate that the imputation procedure of NewImp, induced by the conditional distribution-related cost functional, can equivalently be replaced by that induced by the joint distribution, thereby naturally eliminating the need for data masking. Extensive experiments validate the effectiveness of our method. Code is available at https://github.com/JustusvLiebig/NewImp.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/94215"} +{"video_file": "fMWrTAe5Iy_39025959.mp4", "openreview_id": "fMWrTAe5Iy", "slideslive_id": 39025959, "venue": "nips2024", "title": "R$^2$-Gaussian: Rectifying Radiative Gaussian Splatting for Tomographic Reconstruction", "status": "Poster", "keywords": "3D Gaussian Splatting;3D Reconstruction;CT Reconstruction;Tomographic Reconstruction", "tldr": "We discover an inherent problem in 3DGS and develop a novel 3DGS-based framework for tomographic reconstruction.", "abstract": "3D Gaussian splatting (3DGS) has shown promising results in image rendering and surface reconstruction. However, its potential in volumetric reconstruction tasks, such as X-ray computed tomography, remains under-explored. This paper introduces R\n2\n-Gaussian, the first 3DGS-based framework for sparse-view tomographic reconstruction. By carefully deriving X-ray rasterization functions, we discover a previously unknown \\emph{integration bias} in the standard 3DGS formulation, which hampers accurate volume retrieval. To address this issue, we propose a novel rectification technique via refactoring the projection from 3D to 2D Gaussians. Our new method presents three key innovations: (1) introducing tailored Gaussian kernels, (2) extending rasterization to X-ray imaging, and (3) developing a CUDA-based differentiable voxelizer. 
Experiments on synthetic and real-world datasets demonstrate that our method outperforms state-of-the-art approaches in accuracy and efficiency. Crucially, it delivers high-quality results in 4 minutes, which is 12\n\u00d7\nfaster than NeRF-based methods and on par with traditional algorithms.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94214"} +{"video_file": "fMdrBucZnj_39028153.mp4", "openreview_id": "fMdrBucZnj", "slideslive_id": 39028153, "venue": "nips2024", "title": "Expected Probabilistic Hierarchies", "status": "Poster", "keywords": "hierarchical clustering;graph clustering;clustering;unsupervised learning;probabilistic models", "tldr": "Probabilistic model learning hierarchies in data by optimizing the expected metrics via gradient descent outperforming several baselines.", "abstract": "Hierarchical clustering has usually been addressed by discrete optimization using heuristics or continuous optimization of relaxed scores for hierarchies. In this work, we propose to optimize expected scores under a probabilistic model over hierarchies. (1) We show theoretically that the global optimal values of the expected Dasgupta cost and Tree-Sampling divergence (TSD), two unsupervised metrics for hierarchical clustering, are equal to the optimal values of their discrete counterparts contrary to some relaxed scores. (2) We propose Expected Probabilistic Hierarchies (EPH), a probabilistic model to learn hierarchies in data by optimizing expected scores. EPH uses differentiable hierarchy sampling enabling end-to-end gradient descent based optimization, and an unbiased subgraph sampling approach to scale to large datasets. (3) We evaluate EPH on synthetic and real-world datasets including vector and graph datasets. EPH outperforms all other approaches quantitatively and provides meaningful hierarchies in qualitative evaluations.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/94213"} +{"video_file": "fNakQltI1N_39028069.mp4", "openreview_id": "fNakQltI1N", "slideslive_id": 39028069, "venue": "nips2024", "title": "Trajectory Flow Matching with Applications to Clinical Time Series Modelling", "status": "Spotlight", "keywords": "Flow matching;stochastic differential equations;ODE;SDE;uncertainty;time series;EHR", "tldr": "Flow Matching for continuous stochastic modelling of time series data with applications to clinical time series", "abstract": "Modeling stochastic and irregularly sampled time series is a challenging problem found in a wide range of applications, especially in medicine. Neural stochastic differential equations (Neural SDEs) are an attractive modeling technique for this problem, which parameterize the drift and diffusion terms of an SDE with neural networks. However, current algorithms for training Neural SDEs require backpropagation through the SDE dynamics, greatly limiting their scalability and stability. To address this, we propose Trajectory Flow Matching (TFM), which trains a Neural SDE in a simulation-free manner, bypassing backpropagation through the dynamics. TFM leverages the flow matching technique from generative modeling to model time series. In this work we first establish necessary conditions for TFM to learn time series data. Next, we present a reparameterization trick which improves training stability. 
Finally, we adapt TFM to the clinical time series setting, demonstrating improved performance on four clinical time series datasets both in terms of absolute performance and uncertainty prediction, a crucial parameter in this setting.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/94212"} +{"video_file": "fNoleQa9RX_39026960.mp4", "openreview_id": "fNoleQa9RX", "slideslive_id": 39026960, "venue": "nips2024", "title": "The Unmet Promise of Synthetic Training Images: Using Retrieved Real Images Performs Better", "status": "Poster", "keywords": "Synthetic training data;task adaptation;data-centric machine learning", "tldr": "Our paper asks: does training on synthetic images from a generative model provide any gain beyond training on the upstream data used to train the generative model?", "abstract": "Generative text-to-image models enable us to synthesize unlimited amounts of images in a controllable manner, spurring many recent efforts to train vision models with synthetic data. However, every synthetic image ultimately originates from the upstream data used to train the generator. Does the intermediate generator provide additional information over directly training on relevant parts of the upstream data? Grounding this question in the setting of image classification, we compare finetuning on task-relevant, targeted synthetic data generated by Stable Diffusion---a generative model trained on the LAION-2B dataset---against finetuning on targeted real images retrieved directly from LAION-2B. We show that while synthetic data can benefit some downstream tasks, it is universally matched or outperformed by real data from the simple retrieval baseline. Our analysis suggests that this underperformance is partially due to generator artifacts and inaccurate task-relevant visual details in the synthetic images. Overall, we argue that targeted retrieval is a critical baseline to consider when training with synthetic data---a baseline that current methods do not yet surpass. We release code, data, and models at https://github.com/scottgeng00/unmet-promise/.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94211"} +{"video_file": "fVRCsK4EoM_39024521.mp4", "openreview_id": "fVRCsK4EoM", "slideslive_id": 39024521, "venue": "nips2024", "title": "PrefPaint: Aligning Image Inpainting Diffusion Model with Human Preference", "status": "Poster", "keywords": "diffusion model;image inpainting;human feedback reinforcement learning", "tldr": "This paper makes the first attempt to align diffusion models for image inpainting with human aesthetic standards via a reinforcement learning framework.", "abstract": "In this paper, we make the first attempt to align diffusion models for image inpainting with human aesthetic standards via a reinforcement learning framework, significantly improving the quality and visual appeal of inpainted images. Specifically, instead of directly measuring the divergence with paired images, we train a reward model with the dataset we construct, consisting of nearly 51,000 images annotated with human preferences. Then, we adopt a reinforcement learning process to fine-tune the distribution of a pre-trained diffusion model for image inpainting in the direction of higher reward. 
Moreover, we theoretically deduce the upper bound on the error of the reward model, which illustrates the potential confidence of reward estimation throughout the reinforcement alignment process, thereby facilitating accurate regularization. Extensive experiments on inpainting comparison and downstream tasks, such as image extension and 3D reconstruction, demonstrate the effectiveness of our approach, showing significant improvements in the alignment of inpainted images with human preference compared with state-of-the-art methods. This research not only advances the field of image inpainting but also provides a framework for incorporating human preference into the iterative refinement of generative models based on modeling reward accuracy, with broad implications for the design of visually driven AI applications. Our code and dataset are publicly available at \\url{https://prefpaint.github.io}.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94203"} +{"video_file": "faBXeVBNqz_39024737.mp4", "openreview_id": "faBXeVBNqz", "slideslive_id": 39024737, "venue": "nips2024", "title": "Higher-Rank Irreducible Cartesian Tensors for Equivariant Message Passing", "status": "Poster", "keywords": "equivariance;graph neural networks;interatomic potentials;irreducible Cartesian tensors;many-body interactions;molecules;materials", "tldr": "We introduce higher-rank irreducible Cartesian tensors and their products for equivariant message passing.", "abstract": "The ability to perform fast and accurate atomistic simulations is crucial for advancing the chemical sciences. By learning from high-quality data, machine-learned interatomic potentials achieve accuracy on par with ab initio and first-principles methods at a fraction of their computational cost. The success of machine-learned interatomic potentials arises from integrating inductive biases such as equivariance to group actions on an atomic system, e.g., equivariance to rotations and reflections. In particular, the field has notably advanced with the emergence of equivariant message passing. Most of these models represent an atomic system using spherical tensors, tensor products of which require complicated numerical coefficients and can be computationally demanding. Cartesian tensors offer a promising alternative, though state-of-the-art methods lack flexibility in message-passing mechanisms, restricting their architectures and expressive power. This work explores higher-rank irreducible Cartesian tensors to address these limitations. We integrate irreducible Cartesian tensor products into message-passing neural networks and prove the equivariance and traceless property of the resulting layers. 
Through empirical evaluations on various benchmark data sets, we consistently observe on-par or better performance than that of state-of-the-art spherical and Cartesian models.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/94197"} +{"video_file": "faj2EBhdHC_39027565.mp4", "openreview_id": "faj2EBhdHC", "slideslive_id": 39027565, "venue": "nips2024", "title": "Graph Neural Networks Need Cluster-Normalize-Activate Modules", "status": "Poster", "keywords": "Graph Neural Networks;Deep Geometric Learning;Learnable Activation Functions;Oversmoothing", "tldr": "We propose Cluster-Normalize-Activate modules to improve expressivity of Graph Neural Networks.", "abstract": "Graph Neural Networks (GNNs) are non-Euclidean deep learning models for graph-structured data. Despite their successful and diverse applications, oversmoothing prohibits deep architectures due to node features converging to a single fixed point. This severely limits their potential to solve complex tasks. To counteract this tendency, we propose a plug-and-play module consisting of three steps: Cluster\u2192Normalize\u2192Activate (CNA). By applying CNA modules, GNNs search and form super nodes in each layer, which are normalized and activated individually. We demonstrate in node classification and property prediction tasks that CNA significantly improves the accuracy over the state-of-the-art. Particularly, CNA reaches 94.18% and 95.75% accuracy on Cora and CiteSeer, respectively. It further benefits GNNs in regression tasks as well, reducing the mean squared error compared to all baselines. At the same time, GNNs with CNA require substantially fewer learnable parameters than competing architectures.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/94196"} +{"video_file": "ffeUBoTcdS_39026993.mp4", "openreview_id": "ffeUBoTcdS", "slideslive_id": 39026993, "venue": "nips2024", "title": "Persistent Test-time Adaptation in Recurring Testing Scenarios", "status": "Poster", "keywords": "domain adaptation;test-time adaptation;continual adaptation;performance degradation;self-supervised learning", "tldr": "We conduct a simple-model theoretical analysis, introduce a benchmark and a baseline approach to address the gradual performance degradation of continual test-time adaptation methods.", "abstract": "Current test-time adaptation (TTA) approaches aim to adapt a machine learning model to environments that change continuously. Yet, it is unclear whether TTA methods can maintain their adaptability over prolonged periods. To answer this question, we introduce a diagnostic setting - recurring TTA where environments not only change but also recur over time, creating an extensive data stream. This setting allows us to examine the error accumulation of TTA models, in the most basic scenario, when they are regularly exposed to previous testing environments. Furthermore, we simulate a TTA process on a simple yet representative $\\epsilon$-perturbed Gaussian Mixture Model Classifier, deriving theoretical insights into the dataset- and algorithm-dependent factors contributing to gradual performance degradation. Our investigation leads us to propose persistent TTA (PeTTA), which senses when the model is diverging towards collapse and adjusts the adaptation strategy, striking a balance between the dual objectives of adaptation and model collapse prevention. 
The supreme stability of PeTTA over existing approaches, in the face of lifelong TTA scenarios, has been demonstrated over comprehensive experiments on various benchmarks. Our project page is available at https://hthieu166.github.io/petta.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94192"} +{"video_file": "fi3aKVnBQo_39025881.mp4", "openreview_id": "fi3aKVnBQo", "slideslive_id": 39025881, "venue": "nips2024", "title": "Efficient Leverage Score Sampling for Tensor Train Decomposition", "status": "Poster", "keywords": "leverage score sampling;tensor train decomposition;alternating least square", "tldr": "We propose an efficient leverage score based randomized alternating least squares algorithm for tensor train decomposition.", "abstract": "Tensor Train~(TT) decomposition is widely used in the machine learning and quantum physics communities as a popular tool to efficiently compress high-dimensional tensor data. In this paper, we propose an efficient algorithm to accelerate computing the TT decomposition with the Alternating Least Squares (ALS) algorithm relying on exact leverage scores sampling. For this purpose, we propose a data structure that allows us to efficiently sample from the tensor with time complexity logarithmic in the product of the tensor dimensions. Our contribution specifically leverages the canonical form of the TT decomposition. By maintaining the canonical form through each iteration of ALS, we can efficiently compute (and sample from) the leverage scores, thus achieving significant speed-up in solving each sketched least-square problem. Experiments on synthetic and real data on dense and sparse tensors demonstrate that our method outperforms SVD-based and ALS-based algorithms.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94191"} +{"video_file": "fogJgrozu1_39025312.mp4", "openreview_id": "fogJgrozu1", "slideslive_id": 39025312, "venue": "nips2024", "title": "Localized Adaptive Risk Control", "status": "Poster", "keywords": "online conformal risk control;uncertainty quantification;conformal predictions", "tldr": "A novel scheme for online calibration that offers both worst-case deterministic long-term risk control and statistical localized risk guarantees.", "abstract": "Adaptive Risk Control (ARC) is an online calibration strategy based on set prediction that offers worst-case deterministic long-term risk control, as well as statistical marginal coverage guarantees. ARC adjusts the size of the prediction set by varying a single scalar threshold based on feedback from past decisions. In this work, we introduce Localized Adaptive Risk Control (L-ARC), an online calibration scheme that targets statistical localized risk guarantees ranging from conditional risk to marginal risk, while preserving the worst-case performance of ARC. L-ARC updates a threshold function within a reproducing kernel Hilbert space (RKHS), with the kernel determining the level of localization of the statistical risk guarantee. The theoretical results highlight a trade-off between localization of the statistical risk and convergence speed to the long-term risk target. 
Thanks to localization, L-ARC is demonstrated via experiments to produce prediction sets with risk guarantees across different data subpopulations, significantly improving the fairness of the calibrated model for tasks such as image segmentation and beam selection in wireless networks.", "primary_area": "online_learning", "site": "https://neurips.cc/virtual/2024/poster/94186"} +{"video_file": "fpOnUMjLiO_39028713.mp4", "openreview_id": "fpOnUMjLiO", "slideslive_id": 39028713, "venue": "nips2024", "title": "Theoretical Characterisation of the Gauss Newton Conditioning in Neural Networks", "status": "Poster", "keywords": "conditioning;gradient outer product;Gauss-Newton matrix;optimization landscape", "tldr": "We analyze how different architecture choices, such as the depth or width of hidden layers or residual connections affect the conditioning of the", "abstract": "The Gauss-Newton (GN) matrix plays an important role in machine learning, most evident in its use as a preconditioning matrix for a wide family of popular adaptive methods to speed up optimization. Besides, it can also provide key insights into the optimization landscape of neural networks. In the context of deep neural networks, understanding the GN matrix involves studying the interaction between different weight matrices as well as the dependencies introduced by the data, thus rendering its analysis challenging. In this work, we take a first step towards theoretically characterizing the conditioning of the GN matrix in neural networks. We establish tight bounds on the condition number of the GN in deep linear networks of arbitrary depth and width, which we also extend to two-layer ReLU networks. We expand the analysis to further architectural components, such as residual connections and convolutional layers. Finally, we empirically validate the bounds and uncover valuable insights into the influence of the analyzed architectural components.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/94185"} +{"video_file": "fpxRpPbF1t_39028734.mp4", "openreview_id": "fpxRpPbF1t", "slideslive_id": 39028734, "venue": "nips2024", "title": "Differentiable Modal Synthesis for Physical Modeling of Planar String Sound and Motion Simulation", "status": "Poster", "keywords": "Differentiable Audio Signal Processing;Physical Modeling;Musical Sound Synthesis;Physical Simulation", "tldr": "We propose a model that simulates a musical string instrument from the physical properties.", "abstract": "While significant advancements have been made in music generation and differentiable sound synthesis within machine learning and computer audition, the simulation of instrument vibration guided by physical laws has been underexplored. To address this gap, we introduce a novel model for simulating the spatio-temporal motion of nonlinear strings, integrating modal synthesis and spectral modeling within a neural network framework. Our model leverages mechanical properties and fundamental frequencies as inputs, outputting string states across time and space that solve the partial differential equation characterizing the nonlinear string. Empirical evaluations demonstrate that the proposed architecture achieves superior accuracy in string motion simulation compared to existing baseline architectures. 
The code and demo are available online.", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/94184"} +{"video_file": "fs28jccJj5_39027745.mp4", "openreview_id": "fs28jccJj5", "slideslive_id": 39027745, "venue": "nips2024", "title": "SpikedAttention: Training-Free and Fully Spike-Driven Transformer-to-SNN Conversion with Winner-Oriented Spike Shift for Softmax Operation", "status": "Poster", "keywords": "Spiking Neural Network;ANN-to-SNN conversion;Transformer;Neuromorphic", "tldr": "We provide a direct transformer-to-SNN conversion that requires no training and no architectural change, achieving SOTA SNN accuracy of 80.0% by converting Swin-T (42% less energy) and only 0.3% accuracy loss by converting BERT (58% less energy).", "abstract": "Event-driven spiking neural networks(SNNs) are promising neural networks that reduce the energy consumption of continuously growing AI models. Recently, keeping pace with the development of transformers, transformer-based SNNs were presented. Due to the incompatibility of self-attention with spikes, however, existing transformer-based SNNs limit themselves by either restructuring self-attention architecture or conforming to non-spike computations. In this work, we propose a novel transformer-to-SNN conversion method that outputs an end-to-end spike-based transformer, named SpikedAttention. Our method directly converts the well-trained transformer without modifying its attention architecture. For the vision task, the proposed method converts Swin Transformer into an SNN without post-training or conversion-aware training, achieving state-of-the-art SNN accuracy on ImageNet dataset, i.e., 80.0% with 28.7M parameters. Considering weight accumulation, neuron potential update, and on-chip data movement, SpikedAttention reduces energy consumption by 42% compared to the baseline ANN, i.e., Swin-T. Furthermore, for the first time, we demonstrate that SpikedAttention successfully converts a BERT model to an SNN with only 0.3% accuracy loss on average consuming 58% less energy on GLUE benchmark. Our code is available at Github ( https://github.com/sangwoohwang/SpikedAttention ).", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/94181"} +{"video_file": "ftqjwZQz10_39025724.mp4", "openreview_id": "ftqjwZQz10", "slideslive_id": 39025724, "venue": "nips2024", "title": "DEX: Data Channel Extension for Efficient CNN Inference on Tiny AI Accelerators", "status": "Poster", "keywords": "TinyML;On-device ML;CNN;AI accelerator;Microcontroller;MCU", "tldr": "We propose Data Channel EXtension (DEX) to improve CNN accuracy on tiny AI accelerators by using patch-wise sampling and channel-wise stacking, boosting accuracy by 3.5% without increasing inference latency.", "abstract": "Tiny machine learning (TinyML) aims to run ML models on small devices and is increasingly favored for its enhanced privacy, reduced latency, and low cost. Recently, the advent of tiny AI accelerators has revolutionized the TinyML field by significantly enhancing hardware processing power. These accelerators, equipped with multiple parallel processors and dedicated per-processor memory instances, offer substantial performance improvements over traditional microcontroller units (MCUs). However, their limited data memory often necessitates downsampling input images, resulting in accuracy degradation. 
To address this challenge, we propose Data channel EXtension (DEX), a novel approach for efficient CNN execution on tiny AI accelerators. DEX incorporates additional spatial information from original images into input images through patch-wise even sampling and channel-wise stacking, effectively extending data across input channels. By leveraging underutilized processors and data memory for channel extension, DEX facilitates parallel execution without increasing inference latency. Our evaluation with four models and four datasets on tiny AI accelerators demonstrates that this simple idea improves accuracy on average by 3.5%p while keeping the inference latency the same on the AI accelerator. The source code is available at https://github.com/Nokia-Bell-Labs/data-channel-extension.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94180"} +{"video_file": "fu0xdh4aEJ_39027941.mp4", "openreview_id": "fu0xdh4aEJ", "slideslive_id": 39027941, "venue": "nips2024", "title": "Bigger, Regularized, Optimistic: scaling for compute and sample efficient continuous control", "status": "Spotlight", "keywords": "Machine Learning;Reinforcement Learning;Scaling", "tldr": "We show that combining increased critic model capacity with certain RL-specific improvement leads to very efficient agents", "abstract": "Sample efficiency in Reinforcement Learning (RL) has traditionally been driven by algorithmic enhancements. In this work, we demonstrate that scaling can also lead to substantial improvements. We conduct a thorough investigation into the interplay of scaling model capacity and domain-specific RL enhancements. These empirical findings inform the design choices underlying our proposed BRO (Bigger, Regularized, Optimistic) algorithm. The key innovation behind BRO is that strong regularization allows for effective scaling of the critic networks, which, paired with optimistic exploration, leads to superior performance. BRO achieves state-of-the-art results, significantly outperforming the leading model-based and model-free algorithms across 40 complex tasks from the DeepMind Control, MetaWorld, and MyoSuite benchmarks. BRO is the first model-free algorithm to achieve near-optimal policies in the notoriously challenging Dog and Humanoid tasks.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94179"} +{"video_file": "fvOCJAAYLx_39027988.mp4", "openreview_id": "fvOCJAAYLx", "slideslive_id": 39027988, "venue": "nips2024", "title": "Diffusion Twigs with Loop Guidance for Conditional Graph Generation", "status": "Poster", "keywords": "graph network;conditional generation;diffusion;generative models;molecule design;molecule optimization", "tldr": "We introduce a novel score-based diffusion framework for conditional generation that co-evolves multiple heterogeneous flows, and leverages a new strategy called loop guidance.", "abstract": "We introduce a novel score-based diffusion framework named Twigs that incorporates multiple co-evolving flows for enriching conditional generation tasks. Specifically, a central or trunk diffusion process is associated with a primary variable (e.g., graph structure), and additional offshoot or stem processes are dedicated to dependent variables (e.g., graph properties or labels). A new strategy, which we call loop guidance, effectively orchestrates the flow of information between the trunk and the stem processes during sampling. 
This approach allows us to uncover intricate interactions and dependencies, and unlock new generative capabilities. We provide extensive experiments to demonstrate strong performance gains of the proposed method over contemporary baselines in the context of conditional graph generation, underscoring the potential of Twigs in challenging generative tasks such as inverse molecular design and molecular optimization. Code is available at https://github.com/Aalto-QuML/Diffusion_twigs.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94177"} +{"video_file": "fyYrZbWtNz_39025604.mp4", "openreview_id": "fyYrZbWtNz", "slideslive_id": 39025604, "venue": "nips2024", "title": "Rethinking Imbalance in Image Super-Resolution for Efficient Inference", "status": "Poster", "keywords": "Efficient Image Super-Resolution; Dynamic Network; Imbalanced Data Learning", "tldr": "This paper explores the imbalance in the image SR task and proposes a plug-and-play Weight-Balancing framework (WBSR) based on a HES strategy and a BDLoss to achieve accurate and efficient inference.", "abstract": "Existing super-resolution (SR) methods optimize all model weights equally using\nL\n1\nor\nL\n2\nlosses by uniformly sampling image patches without considering dataset imbalances or parameter redundancy, which limits their performance. To address this, we formulate the image SR task as an imbalanced distribution transfer learning problem from a statistical probability perspective, proposing a plug-and-play Weight-Balancing framework (WBSR) to achieve balanced model learning without changing the original model structure and training data. Specifically, we develop a Hierarchical Equalization Sampling (HES) strategy to address data distribution imbalances, enabling better feature representation from texture-rich samples. To tackle model optimization imbalances, we propose a Balanced Diversity Loss (BDLoss) function, focusing on learning texture regions while disregarding redundant computations in smooth regions. After joint training of HES and BDLoss to rectify these imbalances, we present a gradient projection dynamic inference strategy to facilitate accurate and efficient inference. Extensive experiments across various models, datasets, and scale factors demonstrate that our method achieves comparable or superior performance to existing approaches with about 34% reduction in computational cost.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/94175"} +{"video_file": "fykjplMc0V_39027753.mp4", "openreview_id": "fykjplMc0V", "slideslive_id": 39027753, "venue": "nips2024", "title": "ReFT: Representation Finetuning for Language Models", "status": "Spotlight", "keywords": "Representation finetuning;Interpretability;Parameter-efficient finetuning;Activation intervention", "tldr": "We introduce representation finetuning (ReFT), which is a powerful, efficient, and interpretable finetuning method.", "abstract": "Parameter-efficient finetuning (PEFT) methods seek to adapt large neural models via updates to a small number of weights. However, much prior interpretability work has shown that representations encode rich semantic information, suggesting that editing representations might be a more powerful alternative. We pursue this hypothesis by developing a family of Representation Finetuning (ReFT) methods. ReFT methods operate on a frozen base model and learn task-specific interventions on hidden representations. 
We define a strong instance of the ReFT family, Low-rank Linear Subspace ReFT (LoReFT), and we identify an ablation of this method that trades some performance for increased efficiency. Both are drop-in replacements for existing PEFTs and learn interventions that are 15x--65x more parameter-efficient than LoRA. We showcase LoReFT on eight commonsense reasoning tasks, four arithmetic reasoning tasks, instruction-tuning, and GLUE. In all these evaluations, our ReFTs deliver the best balance of efficiency and performance, and almost always outperform state-of-the-art PEFTs. Upon publication, we will publicly release our generic ReFT training library.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94174"} +{"video_file": "fzlMza6dRZ_39027483.mp4", "openreview_id": "fzlMza6dRZ", "slideslive_id": 39027483, "venue": "nips2024", "title": "GraphTrail: Translating GNN Predictions into Human-Interpretable Logical Rules", "status": "Poster", "keywords": "Graph Neural Network;Explainability;Global Factual Explanation;Symbolic Regression;Computation Trees", "tldr": "We generate formula based global explainations of graph neural networks using symbolic regression over computation trees identified through Shapley values.", "abstract": "Instance-level explanation of graph neural networks (GNNs) is a well-studied area. These explainers, however, only explain an instance (e.g., a graph) and fail to uncover the combinatorial reasoning learned by a GNN from the training data towards making its predictions. In this work, we introduce GraphTrail, the first end-to-end, global, post-hoc GNN explainer that translates the functioning of a black-box GNN model to a boolean formula over the (sub)graph level concepts without relying on local explainers. GraphTrail is unique in automatically mining the discriminative subgraph-level concepts using Shapley values. Subsequently, the GNN predictions are mapped to a human-interpretable boolean formula over these concepts through symbolic regression. Extensive experiments across diverse datasets and GNN architectures demonstrate significant improvement over existing global explainers in mapping GNN predictions to faithful logical formulae. The robust and accurate performance of GraphTrail makes it invaluable for improving GNNs and facilitates adoption in domains with strict transparency requirements.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/94172"} +{"video_file": "g5DyqerUpX_39028107.mp4", "openreview_id": "g5DyqerUpX", "slideslive_id": 39028107, "venue": "nips2024", "title": "SPARKLE: A Unified Single-Loop Primal-Dual Framework for Decentralized Bilevel Optimization", "status": "Poster", "keywords": "non-convex optimization;decentralized bilevel optimization;transient iteration complexity", "tldr": "We develop a unified single-loop primal-dual algorithm framework for decentralized bilevel optimization with the state-of-the-art non-asymptotic convergence rate.", "abstract": "This paper studies decentralized bilevel optimization, in which multiple agents collaborate to solve problems involving nested optimization structures with neighborhood communications. Most existing literature primarily utilizes gradient tracking to mitigate the influence of data heterogeneity, without exploring other well-known heterogeneity-correction techniques such as EXTRA or Exact Diffusion. 
Additionally, these studies often employ identical decentralized strategies for both upper- and lower-level problems, neglecting to leverage distinct mechanisms across different levels. To address these limitations, this paper proposes SPARKLE, a unified single-loop primal-dual algorithm framework for decentralized bilevel optimization. SPARKLE offers the flexibility to incorporate various heterogeneity-correction strategies into the algorithm. Moreover, SPARKLE allows for different strategies to solve upper- and lower-level problems. We present a unified convergence analysis for SPARKLE, applicable to all its variants, with state-of-the-art convergence rates compared to existing decentralized bilevel algorithms. Our results further reveal that EXTRA and Exact Diffusion are more suitable for decentralized bilevel optimization, and using mixed strategies in bilevel algorithms brings more benefits than relying solely on gradient tracking.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94168"} +{"video_file": "g8kFlZDcaX_39025464.mp4", "openreview_id": "g8kFlZDcaX", "slideslive_id": 39025464, "venue": "nips2024", "title": "Decision-Focused Learning with Directional Gradients", "status": "Poster", "keywords": "decision-focused learning", "tldr": "We propose a new decision-aware loss surrogate with theoretical guarantees on its accuracy with respect to the downstream decision loss.", "abstract": "We propose a novel family of decision-aware surrogate losses, called Perturbation Gradient (PG) losses, for the predict-then-optimize framework. These losses directly approximate the downstream decision loss and can be optimized using off-the-shelf gradient-based methods. Importantly, unlike existing surrogate losses, the approximation error of our PG losses vanishes as the number of samples grows. This implies that optimizing our surrogate loss yields a best-in-class policy asymptotically, even in misspecified settings. This is the first such result in misspecified settings and we provide numerical evidence confirming our PG losses substantively outperform existing proposals when the underlying model is misspecified and the noise is not centrally symmetric. Insofar as misspecification is commonplace in practice -- especially when we might prefer a simpler, more interpretable model -- PG losses offer a novel, theoretically justified, method for computationally tractable decision-aware learning.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94165"} +{"video_file": "gAgwqHOBIg_39028460.mp4", "openreview_id": "gAgwqHOBIg", "slideslive_id": 39028460, "venue": "nips2024", "title": "DINTR: Tracking via Diffusion-based Interpolation", "status": "Poster", "keywords": "Visual Tracking;Diffusion;Unification", "tldr": "This work proposes a generative methodology to formulate the object tracking task and an interpolation operation as a faster approach to the diffusion mechanics.", "abstract": "Object tracking is a fundamental task in computer vision, requiring the localization of objects of interest across video frames. Diffusion models have shown remarkable capabilities in visual generation, making them well-suited for addressing several requirements of the tracking problem. This work proposes a novel diffusion-based methodology to formulate the tracking task. Firstly, their conditional process allows for injecting indications of the target object into the generation process. 
Secondly, diffusion mechanics can be developed to inherently model temporal correspondences, enabling the reconstruction of actual frames in video. However, existing diffusion models rely on extensive and unnecessary mapping to a Gaussian noise domain, which can be replaced by a more efficient and stable interpolation process. Our proposed interpolation mechanism draws inspiration from classic image-processing techniques, offering a more interpretable, stable, and faster approach tailored specifically for the object tracking task. By leveraging the strengths of diffusion models while circumventing their limitations, our Diffusion-based INterpolation TrackeR (DINTR) presents a promising new paradigm and achieves a superior multiplicity on seven benchmarks across five indicator representations.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94161"} +{"video_file": "gCCMzedgbo_39026422.mp4", "openreview_id": "gCCMzedgbo", "slideslive_id": 39026422, "venue": "nips2024", "title": "TrAct: Making First-layer Pre-Activations Trainable", "status": "Poster", "keywords": "computer vision;convolution;second-order;optimization", "tldr": "Making the training dynamics of the first layer of vision models similar to those of the embedding layer of language models.", "abstract": "We consider the training of the first layer of vision models and notice the clear relationship between pixel values and gradient update magnitudes: the gradients arriving at the weights of a first layer are by definition directly proportional to (normalized) input pixel values. Thus, an image with low contrast has a smaller impact on learning than an image with higher contrast, and a very bright or very dark image has a stronger impact on the weights than an image with moderate brightness. In this work, we propose performing gradient descent on the embeddings produced by the first layer of the model. However, switching to discrete inputs with an embedding layer is not a reasonable option for vision models. Thus, we propose the conceptual procedure of (i) a gradient descent step on first layer activations to construct an activation proposal, and (ii) finding the optimal weights of the first layer, i.e., those weights which minimize the squared distance to the activation proposal. We provide a closed form solution of the procedure and adjust it for robust stochastic training while computing everything efficiently. Empirically, we find that TrAct (Training Activations) speeds up training by factors between 1.25x and 4x while requiring only a small computational overhead. 
We demonstrate the utility of TrAct with different optimizers for a range of different vision models including convolutional and transformer architectures.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94159"} +{"video_file": "gGR9dJbe3r_39026343.mp4", "openreview_id": "gGR9dJbe3r", "slideslive_id": 39026343, "venue": "nips2024", "title": "Exponential Quantum Communication Advantage in Distributed Inference and Learning", "status": "Poster", "keywords": "Quantum computing;Communication complexity;distributed computation;graph neural networks", "tldr": "We prove that inference and gradient computation of distributed models that can be implemented on networked quantum computers enable exponential savings in communication.", "abstract": "Training and inference with large machine learning models that far exceed the memory capacity of individual devices necessitates the design of distributed architectures, forcing one to contend with communication constraints. We present a framework for distributed computation over a quantum network in which data is encoded into specialized quantum states. We prove that for models within this framework, inference and training using gradient descent can be performed with exponentially less communication compared to their classical analogs, and with relatively modest overhead relative to standard gradient-based methods. We show that certain graph neural networks are particularly amenable to implementation within this framework, and moreover present empirical evidence that they perform well on standard benchmarks. To our knowledge, this is the first example of exponential quantum advantage for a generic class of machine learning problems that hold regardless of the data encoding cost. Moreover, we show that models in this class can encode highly nonlinear features of their inputs, and their expressivity increases exponentially with model depth. We also delineate the space of models for which exponential communication advantages hold by showing that they cannot hold for linear classification. Communication of quantum states that potentially limit the amount of information that can be extracted from them about the data and model parameters may also lead to improved privacy guarantees for distributed computation. Taken as a whole, these findings form a promising foundation for distributed machine learning over quantum networks.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/94157"} +{"video_file": "gHCFduRo7o_39024778.mp4", "openreview_id": "gHCFduRo7o", "slideslive_id": 39024778, "venue": "nips2024", "title": "Selective Explanations", "status": "Poster", "keywords": "explainability;selective prediction;amortization;interpretability;shapley", "tldr": "We propose selective explanations to detect when amortized explainers produce low-quality explanations and introduce an optimized method to improve their quality.", "abstract": "Feature attribution methods explain black-box machine learning (ML) models by assigning importance scores to input features. These methods can be computationally expensive for large ML models. To address this challenge, there have been increasing efforts to develop amortized explainers, where a ML model is trained to efficiently approximate computationally expensive feature attribution scores. Despite their efficiency, amortized explainers can produce misleading explanations. 
In this paper, we propose selective explanations to (i) detect when amortized explainers generate inaccurate explanations and (ii) improve the approximation of the explanation using a technique we call explanations with initial guess. Selective explanations allow practitioners to specify the fraction of samples that receive explanations with initial guess, offering a principled way to bridge the gap between amortized explainers (one inference) and more computationally costly approximations (multiple inferences). Our experiments on various models and datasets demonstrate that feature attributions via selective explanations strike a favorable balance between explanation quality and computational efficiency.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/94156"} +{"video_file": "gJxEiRcnao_39025530.mp4", "openreview_id": "gJxEiRcnao", "slideslive_id": 39025530, "venue": "nips2024", "title": "Biologically Inspired Learning Model for Instructed Vision", "status": "Poster", "keywords": "Biologically Plausible Deep Networks;Neuroscience;Synaptic Modulation;Instructed Vision", "tldr": "We present a biologically inspired learning model that performs instructed vision through guiding attention. Directing attention is an essential part of human vision and is also used incorporated in recent Vision Language Models (VLMs).", "abstract": "As part of the effort to understand how the brain learns, ongoing research seeks to combine biological knowledge with current artificial intelligence (AI) modeling in an attempt to find an efficient biologically plausible learning scheme. Current models often use a cortical-like combination of bottom-up (BU) and top-down (TD) processing, where the TD part carries feedback signals for learning. However, in the visual cortex, the TD pathway plays a second major role in visual attention, by guiding the visual process toward locations and tasks of interest. A biological model should therefore integrate both learning and visual guidance. We introduce a model that uses a cortical-like combination of BU and TD processing that naturally integrates the two major functions of the TD stream. This integration is achieved through an appropriate connectivity pattern between the BU and TD streams, a novel processing cycle that uses the TD stream twice, and a 'Counter-Hebb' learning mechanism that operates across both streams. We show that the 'Counter-Hebb' mechanism can provide an exact backpropagation synaptic modification. Additionally, our model can effectively guide the visual stream to perform a task of interest, achieving competitive performance on standard multi-task learning benchmarks compared to AI models. 
The successful combination of learning and visual guidance could provide a new view on combining BU and TD processing in human vision and suggests possible directions for both biologically plausible models and artificial instructed models, such as vision-language models (VLMs).", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/94152"} +{"video_file": "gKLgY3m9zj_39028612.mp4", "openreview_id": "gKLgY3m9zj", "slideslive_id": 39028612, "venue": "nips2024", "title": "An Information Theoretic Perspective on Conformal Prediction", "status": "Poster", "keywords": "conformal prediction;information theory;uncertainty quantification", "tldr": "We link conformal prediction to information theory, and thus derive a principled way to use side information in conformal prediction, and new upper bounds on the intrinsic uncertainty of the data-generating process, which improve conformal training.", "abstract": "Conformal Prediction (CP) is a distribution-free uncertainty estimation framework that constructs prediction sets guaranteed to contain the true answer with a user-specified probability. Intuitively, the size of the prediction set encodes a general notion of uncertainty, with larger sets associated with higher degrees of uncertainty. In this work, we leverage information theory to connect conformal prediction to other notions of uncertainty. More precisely, we prove three different ways to upper bound the intrinsic uncertainty, as described by the conditional entropy of the target variable given the inputs, by combining CP with information theoretical inequalities. Moreover, we demonstrate two direct and useful applications of such connection between conformal prediction and information theory: (i) more principled and effective conformal training objectives that generalize previous approaches and enable end-to-end training of machine learning models from scratch, and (ii) a natural mechanism to incorporate side information into conformal prediction. We empirically validate both applications in centralized and federated learning settings, showing our theoretical results translate to lower inefficiency (average prediction set size) for popular CP methods.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/94151"} +{"video_file": "gL5nT4y8fn_39026439.mp4", "openreview_id": "gL5nT4y8fn", "slideslive_id": 39026439, "venue": "nips2024", "title": "Panacea: Pareto Alignment via Preference Adaptation for LLMs", "status": "Poster", "keywords": "large language models;alignment;multi-dimensional preference optimization;RLHF", "tldr": "This paper proposes Panacea, a simple yet effective method that achieves Pareto alignment with diverse human preferences with a single model.", "abstract": "Current methods for large language model alignment typically use scalar human preference labels. However, this convention tends to oversimplify the multi-dimensional and heterogeneous nature of human preferences, leading to reduced expressivity and even misalignment. This paper presents Panacea, an innovative approach that reframes alignment as a multi-dimensional preference optimization problem. Panacea trains a single model capable of adapting online and Pareto-optimally to diverse sets of preferences without the need for further tuning. A major challenge here is using a low-dimensional preference vector to guide the model's behavior, despite it being governed by an overwhelmingly large number of parameters. 
To address this, Panacea is designed to use singular value decomposition (SVD)-based low-rank adaptation, which allows the preference vector to be simply injected online as singular values. Theoretically, we prove that Panacea recovers the entire Pareto front with common loss aggregation methods under mild conditions. Moreover, our experiments demonstrate, for the first time, the feasibility of aligning a single LLM to represent an exponentially vast spectrum of human preferences through various optimization methods. Our work marks a step forward in effectively and efficiently aligning models to diverse and intricate human preferences in a controllable and Pareto-optimal manner.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94149"} +{"video_file": "gMqaKJCOCB_39027762.mp4", "openreview_id": "gMqaKJCOCB", "slideslive_id": 39027762, "venue": "nips2024", "title": "Understanding the Gains from Repeated Self-Distillation", "status": "Poster", "keywords": "Self-Distillation Theory;Linear Regression", "tldr": "Using linear regression to characterize how large the performance gain can be from repeated applications of self-distillation", "abstract": "Self-Distillation is a special type of knowledge distillation where the student model has the same architecture as the teacher model. Despite using the same architecture and the same training data, self-distillation has been empirically observed to improve performance, especially when applied repeatedly. For such a process, there is a fundamental question of interest: How much gain is possible by applying multiple steps of self-distillation? To investigate this relative gain, we propose using the simple but canonical task of linear regression. Our analysis shows that the excess risk achieved by multi-step self-distillation can significantly improve upon a single step of self-distillation, reducing the excess risk by a factor of d, where d is the input dimension. Empirical results on regression tasks from the UCI repository show a reduction in the learnt model's risk (MSE) by up to 47%.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94147"} +{"video_file": "gN1iKwxlL5_39026205.mp4", "openreview_id": "gN1iKwxlL5", "slideslive_id": 39026205, "venue": "nips2024", "title": "Dual Lagrangian Learning for Conic Optimization", "status": "Poster", "keywords": "Conic optimization;optimization proxies;duality;self-supervised learning", "tldr": "This paper presents a principled methodology for learning dual conic optimization proxies with dual feasibility guarantees.", "abstract": "This paper presents Dual Lagrangian Learning (DLL), a principled learning methodology for dual conic optimization proxies. DLL leverages conic duality and the representation power of ML models to provide high-quality, dual-feasible solutions, and therefore valid Lagrangian dual bounds, for linear and nonlinear conic optimization problems. The paper introduces a systematic dual completion procedure, differentiable conic projection layers, and a self-supervised learning framework based on Lagrangian duality. It also provides closed-form dual completion formulae for broad classes of conic problems, which eliminate the need for costly implicit layers. The effectiveness of DLL is demonstrated on linear and nonlinear conic optimization problems. 
The proposed methodology significantly outperforms a state-of-the-art learning-based method, and achieves 1000x speedups over commercial interior-point solvers with optimality gaps under 0.5% on average.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94146"} +{"video_file": "gSGLkCX9sc_39024685.mp4", "openreview_id": "gSGLkCX9sc", "slideslive_id": 39024685, "venue": "nips2024", "title": "Automated Label Unification for Multi-Dataset Semantic Segmentation with GNNs", "status": "Poster", "keywords": "Semantic Segmentation;Multi-dataset Training;Graph Neural Networks", "tldr": "We propose a novel approach that leverages graph neural networks to automatically construct a unified label space for training semantic segmentation models across multiple datasets.", "abstract": "Deep supervised models possess significant capability to assimilate extensive training data, thereby presenting an opportunity to enhance model performance through training on multiple datasets. However, conflicts arising from different label spaces among datasets may adversely affect model performance. In this paper, we propose a novel approach to automatically construct a unified label space across multiple datasets using graph neural networks. This enables semantic segmentation models to be trained simultaneously on multiple datasets, resulting in performance improvements. Unlike existing methods, our approach facilitates seamless training without the need for additional manual reannotation or taxonomy reconciliation. This significantly enhances the efficiency and effectiveness of multi-dataset segmentation model training. The results demonstrate that our method significantly outperforms other multi-dataset training methods when trained on seven datasets simultaneously, and achieves state-of-the-art performance on the WildDash 2 benchmark. Our code can be found in https://github.com/Mrhonor/AutoUniSeg.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94140"} +{"video_file": "gVTkMsaaGI_39026978.mp4", "openreview_id": "gVTkMsaaGI", "slideslive_id": 39026978, "venue": "nips2024", "title": "Amortizing intractable inference in diffusion models for vision, language, and control", "status": "Poster", "keywords": "diffusion;inverse problems;conditional generation;language models;infilling;discrete diffusion;offline RL;planning;GFlowNet", "tldr": "An asymptotically unbiased objective for sampling from the product of a diffusion prior with a constraint, applied to vision, language, and RL.", "abstract": "Diffusion models have emerged as effective distribution estimators in vision, language, and reinforcement learning, but their use as priors in downstream tasks poses an intractable posterior inference problem. This paper studies amortized sampling of the posterior over data, x \u223c p_post(x) \u221d p(x)r(x), in a model that consists of a diffusion generative model prior p(x) and a black-box constraint or likelihood function r(x). We state and prove the asymptotic correctness of a data-free learning objective, relative trajectory balance, for training a diffusion model that samples from this posterior, a problem that existing methods solve only approximately or in restricted cases. Relative trajectory balance arises from the generative flow network perspective on diffusion models, which allows the use of deep reinforcement learning techniques to improve mode coverage. 
Experiments illustrate the broad potential of unbiased inference of arbitrary posteriors under diffusion priors: in vision (classifier guidance), language (infilling under a discrete diffusion LLM), and multimodal data (text-to-image generation). Beyond generative modeling, we apply relative trajectory balance to the problem of continuous control with a score-based behavior prior, achieving state-of-the-art results on benchmarks in offline reinforcement learning. Code is available at this link.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/94137"} +{"video_file": "gW0znG5JCG_39028134.mp4", "openreview_id": "gW0znG5JCG", "slideslive_id": 39028134, "venue": "nips2024", "title": "Gene-Gene Relationship Modeling Based on Genetic Evidence for Single-Cell RNA-Seq Data Imputation", "status": "Poster", "keywords": "scRNA-seq;imputation;bioinformatics", "tldr": "We propose a novel scRNA-seq data imputation scheme based on genetic evidence.", "abstract": "Single-cell RNA sequencing (scRNA-seq) technologies enable the exploration of cellular heterogeneity and facilitate the construction of cell atlases. However, scRNA-seq data often contain a large portion of missing values (false zeros) or noisy values, hindering downstream analyses. To recover these false zeros, propagation-based imputation methods have been proposed using k-NN graphs. However, they model only associating relationships among genes within a cell, while, according to well-known genetic evidence, there are both associating and dissociating relationships among genes. To apply this genetic evidence to gene-gene relationship modeling, this paper proposes a novel imputation method that newly employs dissociating relationships in addition to associating relationships. Our method constructs a k-NN graph to additionally model dissociating relationships via the negation of a given cell-gene matrix. Moreover, our method standardizes the value distribution (mean and variance) of each gene to have standard distributions regardless of the gene. Through extensive experiments, we demonstrate that the proposed method achieves exceptional performance gains over state-of-the-art methods in both cell clustering and gene expression recovery across six scRNA-seq datasets, validating the significance of using complete gene-gene relationships in accordance with genetic evidence. The source code is available at https://github.com/daehoum1/scCR.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/94136"} +{"video_file": "gYjM1BZzdX_39024958.mp4", "openreview_id": "gYjM1BZzdX", "slideslive_id": 39024958, "venue": "nips2024", "title": "Diffeomorphic interpolation for efficient persistence-based topological optimization", "status": "Poster", "keywords": "Persistent Homology;Persistence Diagrams;Optimization;Topological Data Analysis", "tldr": "We propose a diffeomorphic interpolation of (typically sparse) gradients appearing in Topological Data Analysis, yielding substantially faster and smoother optimization schemes.", "abstract": "Topological Data Analysis (TDA) provides a pipeline to extract quantitative and powerful topological descriptors from structured objects. This enables the definition of topological loss functions, which assert to which extent a given object exhibits some topological properties. One can then use these losses to perform topological optimization via gradient descent routines. 
While theoretically sound, topological optimization faces an important challenge: gradients tend to be extremely sparse, in the sense that the loss function typically depends (locally) on only very few coordinates of the input object, yielding dramatically slow optimization schemes in practice.\nIn this work, focusing on the central case of topological optimization for point clouds, we propose to overcome this limitation using diffeomorphic interpolation, turning sparse gradients into smooth vector fields defined on the whole space. In particular, this approach combines efficiently with subsampling techniques routinely used in TDA, as the diffeomorphism derived from the gradient computed on the subsample can be used to update the coordinates of the full and possibly large input object. We then illustrate the usefulness of our approach on black-box autoencoder (AE) regularization, where we aim at applying some topological priors on the latent spaces associated to fixed, black-box AE models without modifying their (unknown) architectures and parameters. We empirically show that, while vanilla topological optimization has to be re-run every time that new data comes out of the black-box models, learning a diffeomorphic flow can be done once and then re-applied to new data in linear time. Moreover, reverting the flow allows us to generate data by sampling the topologically-optimized latent space directly, allowing for better interpretability of the model.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94133"} +{"video_file": "gZWYdJ3c26_39028248.mp4", "openreview_id": "gZWYdJ3c26", "slideslive_id": 39028248, "venue": "nips2024", "title": "TALoS: Enhancing Semantic Scene Completion via Test-time Adaptation on the Line of Sight", "status": "Poster", "keywords": "semantic scene completion;test-time adaptation;point cloud", "tldr": "We propose TALoS, a novel test-time adaptation approach for SSC that leverages situational information available in driving environments.", "abstract": "Semantic Scene Completion (SSC) aims to perform geometric completion and semantic segmentation simultaneously. Despite the promising results achieved by existing studies, the inherently ill-posed nature of the task presents significant challenges in diverse driving scenarios. This paper introduces TALoS, a novel test-time adaptation approach for SSC that excavates the information available in driving environments. Specifically, we focus on the fact that observations made at a certain moment can serve as Ground Truth (GT) for scene completion at another moment. Given the characteristics of the LiDAR sensor, an observation of an object at a certain location confirms both 1) the occupation of that location and 2) the absence of obstacles along the line of sight from the LiDAR to that point. TALoS utilizes these observations to obtain self-supervision about occupancy and emptiness, guiding the model to adapt to the scene in test time. In a similar manner, we aggregate reliable SSC predictions among multiple moments and leverage them as semantic pseudo-GT for adaptation. Further, to leverage future observations that are not accessible at the current time, we present a dual optimization scheme using the model in which the update is delayed until the future observation is available. 
Evaluations on the SemanticKITTI validation and test sets demonstrate that TALoS significantly improves the performance of the pre-trained SSC model.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94132"} +{"video_file": "gkOzoHBXUw_39027380.mp4", "openreview_id": "gkOzoHBXUw", "slideslive_id": 39027380, "venue": "nips2024", "title": "Federated Fine-tuning of Large Language Models under Heterogeneous Tasks and Client Resources", "status": "Poster", "keywords": "Federated Learning;Large Language Models;Parameter Efficient Fine-tuning", "tldr": "We propose FlexLoRA, an aggregation scheme for federated learning of LLMs that dynamically adjusts LoRA ranks to harness the full potential of diverse client resources, enhancing generalization, and validated on thousands of heterogeneous clients.", "abstract": "Federated Learning (FL) has recently been applied to the parameter-efficient fine-tuning of Large Language Models (LLMs). While promising, it raises significant challenges due to the heterogeneous resources and data distributions of clients. This study introduces FlexLoRA, a simple yet effective aggregation scheme for LLM fine-tuning, which mitigates the "buckets effect" in traditional FL that restricts the potential of clients with ample resources by tying them to the capabilities of the least-resourced participants. FlexLoRA allows for dynamic adjustment of local LoRA ranks, fostering the development of a global model imbued with broader, less task-specific knowledge. By synthesizing a full-size LoRA weight from individual client contributions and employing Singular Value Decomposition (SVD) for weight redistribution, FlexLoRA fully leverages heterogeneous client resources. Involving thousands of clients performing heterogeneous NLP tasks and client resources, our experiments validate the efficacy of FlexLoRA, with the federated global model achieving consistently better improvement over SOTA FL methods in downstream NLP task performance across various heterogeneous distributions. FlexLoRA's practicality is further underscored by our theoretical analysis and its seamless integration with existing LoRA-based FL methods, offering a path toward cross-device, privacy-preserving federated tuning for LLMs.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94124"} +{"video_file": "gktA1Qycj9_39025733.mp4", "openreview_id": "gktA1Qycj9", "slideslive_id": 39025733, "venue": "nips2024", "title": "CigTime: Corrective Instruction Generation Through Inverse Motion Editing", "status": "Poster", "keywords": "Correctional Instruction Generation", "tldr": "We created a model that generates corrective instructions to guide users from their current motion to a desired target motion, showing significant improvements over baselines. The code and trained models will be publicly available.", "abstract": "Recent advancements in models linking natural language with human motions have shown significant promise in motion generation and editing based on instructional text. Motivated by applications in sports coaching and motor skill learning, we investigate the inverse problem: generating corrective instructional text, leveraging motion editing and generation models. We introduce a novel approach that, given a user\u2019s current motion (source) and the desired motion (target), generates text instructions to guide the user towards achieving the target motion. 
We leverage large language models to generate corrective texts and utilize existing motion generation and editing frameworks to compile datasets of triplets (source motion, target motion, and corrective text). Using this data, we propose a new motion-language model for generating corrective instructions. We present both qualitative and quantitative results across a diverse range of applications that largely improve upon baselines. Our approach demonstrates its effectiveness in instructional scenarios, offering text-based guidance to correct and enhance user performance.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94123"} +{"video_file": "glGeXu1zG4_39028144.mp4", "openreview_id": "glGeXu1zG4", "slideslive_id": 39028144, "venue": "nips2024", "title": "Learning to Understand: Identifying Interactions via the M\u00f6bius Transform", "status": "Poster", "keywords": "Shapley Value;Importance Scores;Transforms;Signal Processing;Interactions;Group Testing", "tldr": "Learning explainable representations of functions efficiently using ideas from signal processing and group testing", "abstract": "One of the key challenges in machine learning is to find interpretable representations of learned functions. The M\u00f6bius transform is essential for this purpose, as its coefficients correspond to unique importance scores for sets of input variables. This transform is closely related to widely used game-theoretic notions of importance like the Shapley and Banzhaf value, but it also captures crucial higher-order interactions. Although computing the M\u00f6bius Transform of a function with n inputs involves 2^n coefficients, it becomes tractable when the function is sparse and of low degree, as we show is the case for many real-world functions. Under these conditions, the complexity of the transform computation is significantly reduced. When there are K non-zero coefficients, our algorithm recovers the M\u00f6bius transform in O(Kn) samples and O(Kn^2) time asymptotically under certain assumptions, the first non-adaptive algorithm to do so. We also uncover a surprising connection between group testing and the M\u00f6bius transform. For functions where all interactions involve at most t inputs, we use group testing results to compute the M\u00f6bius transform with O(Kt log n) sample complexity and O(K poly(n)) time. A robust version of this algorithm withstands noise and maintains this complexity. This marks the first n-sublinear query complexity, noise-tolerant algorithm for the M\u00f6bius transform. 
While our algorithms are conceptualized in an idealized setting, they indicate that the M\u00f6bius transform is a potent tool for interpreting deep learning models.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/94122"} +{"video_file": "glgZZAfssH_39028258.mp4", "openreview_id": "glgZZAfssH", "slideslive_id": 39028258, "venue": "nips2024", "title": "Metric Space Magnitude for Evaluating the Diversity of Latent Representations", "status": "Poster", "keywords": "diversity evaluation;generative model evaluation;metric space magnitude;geometric machine learning", "tldr": "We develop novel diversity measures for evaluating latent representations based on metric space magnitude, a novel geometric invariant.", "abstract": "The magnitude of a metric space is a novel invariant that provides a measure of the 'effective size' of a space across multiple scales, while also capturing numerous geometrical properties, such as curvature, density, or entropy. We develop a family of magnitude-based measures of the intrinsic diversity of latent representations, formalising a novel notion of dissimilarity between magnitude functions of finite metric spaces. Our measures are provably stable under perturbations of the data, can be efficiently calculated, and enable a rigorous multi-scale characterisation and comparison of latent representations. We show their utility and superior performance across different domains and tasks, including the automated estimation of diversity, the detection of mode collapse, and the evaluation of generative models for text, image, and graph data.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/94120"} +{"video_file": "gmf5Aj01Hz_39025800.mp4", "openreview_id": "gmf5Aj01Hz", "slideslive_id": 39025800, "venue": "nips2024", "title": "SARAD: Spatial Association-Aware Anomaly Detection and Diagnosis for Multivariate Time Series", "status": "Poster", "keywords": "multivariate time series;anomaly detection;anomaly diagnosis;spatial associations", "tldr": "An approach that leverages spatial information to improve the detection and diagnosis of time series anomalies.", "abstract": "Anomaly detection in time series data is fundamental to the design, deployment, and evaluation of industrial control systems. Temporal modeling has been the natural focus of anomaly detection approaches for time series data. However, the focus on temporal modeling can obscure or dilute the spatial information that can be used to capture complex interactions in multivariate time series. In this paper, we propose SARAD, an approach that leverages spatial information beyond data autoencoding errors to improve the detection and diagnosis of anomalies. SARAD trains a Transformer to learn the spatial associations, the pairwise inter-feature relationships which ubiquitously characterize such feedback-controlled systems. As new associations form and old ones dissolve, SARAD applies subseries division to capture their changes over time. Anomalies exhibit association descending patterns, a key phenomenon we exclusively observe and attribute to the disruptive nature of anomalies detaching anomalous features from others. To exploit the phenomenon and yet dismiss non-anomalous descent, SARAD performs anomaly detection via autoencoding in the association space. 
We present experimental results to demonstrate that SARAD achieves state-of-the-art performance, providing robust anomaly detection and a nuanced understanding of anomalous events.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94119"} +{"video_file": "gtU2eLSAmO_39024403.mp4", "openreview_id": "gtU2eLSAmO", "slideslive_id": 39024403, "venue": "nips2024", "title": "Brain-JEPA: Brain Dynamics Foundation Model with Gradient Positioning and Spatiotemporal Masking", "status": "Spotlight", "keywords": "foundation model;fMRI", "tldr": "Brain-JEPA is a state-of-the-art brain dynamics foundation model enhancing brain activity analysis, it achieves superior performance on different downstream tasks with broad applicability.", "abstract": "We introduce Brain-JEPA, a brain dynamics foundation model with the Joint-Embedding Predictive Architecture (JEPA). This pioneering model achieves state-of-the-art performance in demographic prediction, disease diagnosis/prognosis, and trait prediction through fine-tuning. Furthermore, it excels in off-the-shelf evaluations (e.g., linear probing) and demonstrates superior generalizability across different ethnic groups, surpassing the previous large model for brain activity significantly. Brain-JEPA incorporates two innovative techniques: Brain Gradient Positioning and Spatiotemporal Masking. Brain Gradient Positioning introduces a functional coordinate system for brain functional parcellation, enhancing the positional encoding of different Regions of Interest (ROIs). Spatiotemporal Masking, tailored to the unique characteristics of fMRI data, addresses the challenge of heterogeneous time-series patches. These methodologies enhance model performance and advance our understanding of the neural circuits underlying cognition. Overall, Brain-JEPA is paving the way to address pivotal questions of building brain functional coordinate system and masking brain activity at the AI-neuroscience interface, and setting a potentially new paradigm in brain activity analysis through downstream adaptation.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/94113"} +{"video_file": "gvg8pExqdd_39027540.mp4", "openreview_id": "gvg8pExqdd", "slideslive_id": 39027540, "venue": "nips2024", "title": "Diversify, Contextualize, and Adapt: Efficient Entropy Modeling for Neural Image Codec", "status": "Poster", "keywords": "Neural codec;entropy model", "tldr": "We propose a fast and effective entropy model for neural image codec by efficiently leveraging sufficient contextual information.", "abstract": "Designing a fast and effective entropy model is challenging but essential for practical application of neural codecs. Beyond spatial autoregressive entropy models, more efficient backward adaptation-based entropy models have been recently developed. They not only reduce decoding time by using smaller number of modeling steps but also maintain or even improve rate--distortion performance by leveraging more diverse contexts for backward adaptation. Despite their significant progress, we argue that their performance has been limited by the simple adoption of the design convention for forward adaptation: using only a single type of hyper latent representation, which does not provide sufficient contextual information, especially in the first modeling step. 
In this paper, we propose a simple yet effective entropy modeling framework that leverages sufficient contexts for forward adaptation without compromising on bit-rate. Specifically, we introduce a strategy of diversifying hyper latent representations for forward adaptation, i.e., using two additional types of contexts along with the existing single type of context. In addition, we present a method to effectively use the diverse contexts for contextualizing the current elements to be encoded/decoded. By addressing the limitation of the previous approach, our proposed framework leads to significant performance improvements. Experimental results on popular datasets show that our proposed framework consistently improves rate-distortion performance across various bit-rate regions, e.g., 3.73% BD-rate gain over the state-of-the-art baseline on the Kodak dataset.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94111"} +{"video_file": "gvtCR7dHJ3_39026650.mp4", "openreview_id": "gvtCR7dHJ3", "slideslive_id": 39026650, "venue": "nips2024", "title": "Dual Cone Gradient Descent for Training Physics-Informed Neural Networks", "status": "Poster", "keywords": "physics-informed neural networks;multi-objective optimization;scientific machine learning;gradient descent", "tldr": "We introduce a novel optimization algorithm, named dual cone gradient descent, for training PINNs.", "abstract": "Physics-informed neural networks (PINNs) have emerged as a prominent approach for solving partial differential equations (PDEs) by minimizing a combined loss function that incorporates both boundary loss and PDE residual loss. Despite their remarkable empirical performance in various scientific computing tasks, PINNs often fail to generate reasonable solutions, and such pathological behaviors remain difficult to explain and resolve. In this paper, we identify that PINNs can be adversely trained when gradients of each loss function exhibit a significant imbalance in their magnitudes and present a negative inner product value. To address these issues, we propose a novel optimization framework, Dual Cone Gradient Descent (DCGD), which adjusts the direction of the updated gradient to ensure it falls within a dual cone region. This region is defined as a set of vectors where the inner products with both the gradients of the PDE residual loss and the boundary loss are non-negative. Theoretically, we analyze the convergence properties of DCGD algorithms in a non-convex setting. On a variety of benchmark equations, we demonstrate that DCGD outperforms other optimization algorithms in terms of various evaluation metrics. In particular, DCGD achieves superior predictive accuracy and enhances the stability of training for failure modes of PINNs and complex PDEs, compared to existing optimally tuned models. 
Moreover, DCGD can be further improved by combining it with popular strategies for PINNs, including learning rate annealing and the Neural Tangent Kernel (NTK).", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/94109"} +{"video_file": "gzh9nTUtsY_39027377.mp4", "openreview_id": "gzh9nTUtsY", "slideslive_id": 39027377, "venue": "nips2024", "title": "Least Squares Regression Can Exhibit Under-Parameterized Double Descent", "status": "Poster", "keywords": "Learning Theory;Generalization;Random Matrix Theory;High Dimensional Statistics", "tldr": "We show that the current theory for double descent is incomplete", "abstract": "The relationship between the number of training data points, the number of parameters, and the generalization capabilities of models has been widely studied. Previous work has shown that double descent can occur in the over-parameterized regime and that the standard bias-variance trade-off holds in the under-parameterized regime. These works provide multiple reasons for the existence of the peak. We postulate that the location of the peak depends on the technical properties of both the spectrum as well as the eigenvectors of the sample covariance. We present two simple examples that provably exhibit double descent in the under-parameterized regime and do not seem to occur for reasons provided in prior work.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/94105"} +{"video_file": "h3BdT2UMWQ_39024462.mp4", "openreview_id": "h3BdT2UMWQ", "slideslive_id": 39024462, "venue": "nips2024", "title": "Breaking Determinism: Fuzzy Modeling of Sequential Recommendation Using Discrete State Space Diffusion Model", "status": "Poster", "keywords": "Sequential Recommendation;Discrete Diffusion Model", "tldr": "Our paper introduces DDSR, a novel model leveraging discrete diffusion processes to enhance the accuracy of sequential recommendations by capturing the inherent randomness of user behavior.", "abstract": "Sequential recommendation (SR) aims to predict items that users may be interested in based on their historical behavior sequences. We revisit SR from a novel information-theoretic perspective and find that conventional sequential modeling methods fail to adequately capture the randomness and unpredictability of user behavior. Inspired by fuzzy information processing theory, this paper introduces the DDSR model, which uses fuzzy sets of interaction sequences to overcome the limitations and better capture the evolution of users' real interests. Formally based on diffusion transition processes in discrete state spaces, which is unlike common diffusion models such as DDPM that operate in continuous domains. It is better suited for discrete data, using structured transitions instead of arbitrary noise introduction to avoid information loss. Additionally, to address the inefficiency of matrix transformations due to the vast discrete space, we use semantic labels derived from quantization or RQ-VAE to replace item IDs, enhancing efficiency and improving cold start issues. 
Testing on three public benchmark datasets shows that DDSR outperforms existing state-of-the-art methods in various settings, demonstrating its potential and effectiveness in handling SR tasks.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94096"} +{"video_file": "hB5NkiET32_39025967.mp4", "openreview_id": "hB5NkiET32", "slideslive_id": 39025967, "venue": "nips2024", "title": "Detecting Bugs with Substantial Monetary Consequences by LLM and Rule-based Reasoning", "status": "Poster", "keywords": "LLM;rule based reasoning;smart contract;accounting bugs", "tldr": "We develop ABAuditor, a hybrid LLM and rule-based reasoning system to detect bugs with substantial monetary consequence.", "abstract": "Financial transactions are increasingly being handled by automated programs called smart contracts. However, one challenge in the adaptation of smart contracts is the presence of vulnerabilities, which can cause significant monetary loss. In 2024, $247.88 M was lost in 20 smart contract exploits. According to a recent study, accounting bugs (i.e., incorrect implementations of domain-specific financial models) are the most prevalent type of vulnerability, and are one of the most difficult to find, requiring substantial human efforts. While Large Language Models (LLMs) have shown promise in identifying these bugs, they often suffer from lack of generalization of vulnerability types, hallucinations, and problems with representing smart contracts in limited token context space. This paper proposes a hybrid system combining LLMs and rule-based reasoning to detect accounting error vulnerabilities in smart contracts. In particular, it utilizes the understanding capabilities of LLMs to annotate the financial meaning of variables in smart contracts, and employs rule-based reasoning to propagate the information throughout a contract's logic and to validate potential vulnerabilities. To remedy hallucinations, we propose a feedback loop where validation is performed by providing the reasoning trace of vulnerabilities to the LLM for iterative self-reflection. We achieve 75.6% accuracy on the labelling of financial meanings against human annotations. Furthermore, we achieve a recall of 90.5% from running on 23 real-world smart contract projects containing 21 accounting error vulnerabilities. Finally, we apply the automated technique on 8 recent projects, finding 4 known and 2 unknown bugs.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94090"} +{"video_file": "hBCxxVQDBw_39027951.mp4", "openreview_id": "hBCxxVQDBw", "slideslive_id": 39027951, "venue": "nips2024", "title": "Towards Scalable and Stable Parallelization of Nonlinear RNNs", "status": "Poster", "keywords": "RNNs;Newton's method;Parallel algorithms;Scalability;Numerical Stability", "tldr": "We introduce methods for parallelizing the evaluation of RNNs that are scalable and numerically stable with quasi-Newton methods and trust regions.", "abstract": "Transformers and linear state space models can be evaluated in parallel on modern hardware, but evaluating nonlinear RNNs appears to be an inherently sequential problem. Recently, however, Lim et al. '24 developed an approach called DEER, which evaluates nonlinear RNNs in parallel by posing the states as the solution to a fixed-point problem. 
They derived a parallel form of Newton's method to solve the fixed-point problem and achieved significant speedups over sequential evaluation. However, the computational complexity of DEER is cubic in the state size, and the algorithm can suffer from numerical instability. We address these limitations with two novel contributions. To reduce the computational complexity, we apply quasi-Newton approximations and show they converge comparably to Newton, use less memory, and are faster. To stabilize DEER, we leverage a connection between the Levenberg-Marquardt algorithm and Kalman smoothing, which we call ELK. This connection allows us to stabilize Newton's method while using efficient parallelized Kalman smoothing algorithms to retain performance. Through several experiments, we show that these innovations allow for parallel evaluation of nonlinear RNNs at larger scales and with greater stability.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94089"} +{"video_file": "hE6ZxU0N3c_39027367.mp4", "openreview_id": "hE6ZxU0N3c", "slideslive_id": 39027367, "venue": "nips2024", "title": "Understanding Multi-Granularity for Open-Vocabulary Part Segmentation", "status": "Poster", "keywords": "part segmentation;open-vocabulary;multi-granularity", "tldr": "PartCLIPSeg is a novel framework that enhances open-vocabulary part segmentation by utilizing object-level contexts and attention control, significantly outperforming existing methods on major datasets.", "abstract": "Open-vocabulary part segmentation (OVPS) is an emerging research area focused on segmenting fine-grained entities using diverse and previously unseen vocabularies. Our study highlights the inherent complexities of part segmentation due to intricate boundaries and diverse granularity, reflecting the knowledge-based nature of part identification. To address these challenges, we propose PartCLIPSeg, a novel framework utilizing generalized parts and object-level contexts to mitigate the lack of generalization in fine-grained parts. PartCLIPSeg integrates competitive part relationships and attention control, alleviating ambiguous boundaries and underrepresented parts. Experimental results demonstrate that PartCLIPSeg outperforms existing state-of-the-art OVPS methods, offering refined segmentation and an advanced understanding of part relationships within images. Through extensive experiments, our model demonstrated a significant improvement over the state-of-the-art models on the Pascal-Part-116, ADE20K-Part-234, and PartImageNet datasets.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94085"} +{"video_file": "hFTye9Ge40_39027514.mp4", "openreview_id": "hFTye9Ge40", "slideslive_id": 39027514, "venue": "nips2024", "title": "Fixed Confidence Best Arm Identification in the Bayesian Setting", "status": "Poster", "keywords": "Multi-armed bandit;Best arm identification", "tldr": "Fixed-confidence best arm identification in the Bayesian setting, where the mean vector is drawn from a known prior.", "abstract": "We consider the fixed-confidence best arm identification (FC-BAI) problem in the Bayesian setting. This problem aims to find the arm of the largest mean with a fixed confidence level when the bandit model has been sampled from the known prior. Most studies on the FC-BAI problem have been conducted in the frequentist setting, where the bandit model is predetermined before the game starts. 
We show that the traditional FC-BAI algorithms studied in the frequentist setting, such as track-and-stop and top-two algorithms, result in arbitrarily suboptimal performances in the Bayesian setting. We also obtain a lower bound of the expected number of samples in the Bayesian setting and introduce a variant of successive elimination that has a matching performance with the lower bound up to a logarithmic factor. Simulations verify the theoretical results.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/94082"} +{"video_file": "hGgkdFF2hR_39028834.mp4", "openreview_id": "hGgkdFF2hR", "slideslive_id": 39028834, "venue": "nips2024", "title": "Low-Rank Optimal Transport through Factor Relaxation with Latent Coupling", "status": "Poster", "keywords": "Optimal Transport;Sinkhorn;Low-Rank;Matrix Factorization", "tldr": "A general framework for low rank optimal transport using a latent coupling matrix and relaxed projections.", "abstract": "Optimal transport (OT) is a general framework for finding a minimum-cost transport plan, or coupling, between probability distributions, and has many applications in machine learning. A key challenge in applying OT to massive datasets is the quadratic scaling of the coupling matrix with the size of the dataset. [Forrow et al. 2019] introduced a factored coupling for the k-Wasserstein barycenter problem, which [Scetbon et al. 2021] adapted to solve the primal low-rank OT problem. We derive an alternative parameterization of the low-rank problem based on the latent coupling (LC) factorization previously introduced by [Lin et al. 2021] generalizing [Forrow et al. 2019]. The LC factorization has multiple advantages for low-rank OT including decoupling the problem into three OT problems and greater flexibility and interpretability. We leverage these advantages to derive a new algorithm Factor Relaxation with Latent Coupling (FRLC), which uses coordinate mirror descent to compute the LC factorization. FRLC handles multiple OT objectives (Wasserstein, Gromov-Wasserstein, Fused Gromov-Wasserstein), and marginal constraints (balanced, unbalanced, and semi-relaxed) with linear space complexity. We provide theoretical results on FRLC, and demonstrate superior performance on diverse applications -- including graph clustering and spatial transcriptomics -- while demonstrating its interpretability.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/94081"} +{"video_file": "hQJksiskaa_39025288.mp4", "openreview_id": "hQJksiskaa", "slideslive_id": 39025288, "venue": "nips2024", "title": "Autobidder's Dilemma: Why More Sophisticated Autobidders Lead to Worse Auction Efficiency", "status": "Poster", "keywords": "ad auctions;autobidding;non-uniform bidding;price of anarchy", "tldr": "We show that automated first-price auctions become more inefficient with more sophisticated autobidders.", "abstract": "The recent increasing adoption of autobidding has inspired the growing interest in analyzing the performance of classic mechanism with value-maximizing autobidders both theoretically and empirically. 
It is known that optimal welfare can be obtained in first-price auctions if autobidders are restricted to uniform bid-scaling and the price of anarchy is 2 when non-uniform bid-scaling strategies are allowed.\nIn this paper, we provide a fine-grained price of anarchy analysis for non-uniform bid-scaling strategies in first-price auctions, demonstrating the reason why more powerful (individual) non-uniform bid-scaling strategies may lead to worse (aggregated) performance in social welfare. Our theoretical results match recent empirical findings that a higher level of non-uniform bid-scaling leads to lower welfare performance in first-price auctions.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/94072"} +{"video_file": "hQfcrTBHeD_39028166.mp4", "openreview_id": "hQfcrTBHeD", "slideslive_id": 39028166, "venue": "nips2024", "title": "An engine not a camera: Measuring performative power of online search", "status": "Poster", "keywords": "Performativity;Power;Digital Markets;Search Engine;Ranking;Online Experiment", "tldr": "We design and conduct an online experiment to get quantitative insights into the ability of search engines to steer web traffic", "abstract": "The power of digital platforms is at the center of major ongoing policy and regulatory efforts. To advance existing debates, we designed and executed an experiment to measure the performative power of online search providers. Instantiated in our setting, performative power quantifies the ability of a search engine to steer web traffic by rearranging results. To operationalize this definition we developed a browser extension that performs unassuming randomized experiments in the background. These randomized experiments emulate updates to the search algorithm and identify the causal effect of different content arrangements on clicks. Analyzing tens of thousands of clicks, we discuss what our robust quantitative findings say about the power of online search engines, using the Google Shopping antitrust investigation as a case study. More broadly, we envision our work to serve as a blueprint for how the recent definition of performative power can help integrate quantitative insights from online experiments with future investigations into the economic power of digital platforms.", "primary_area": "machine_learning_for_social_sciences", "site": "https://neurips.cc/virtual/2024/poster/94071"} +{"video_file": "hRqaot0NZF_39028459.mp4", "openreview_id": "hRqaot0NZF", "slideslive_id": 39028459, "venue": "nips2024", "title": "LESS: Label-Efficient and Single-Stage Referring 3D Segmentation", "status": "Poster", "keywords": "Referring 3d segmentation;label-efficient;single-stage;cross-modal", "tldr": "We design the first single-stage referring 3D segmentation network, which trains on simpler labels and performs better.", "abstract": "Referring 3D Segmentation is a visual-language task that segments all points of the specified object from a 3D point cloud described by a sentence of query. Previous works perform a two-stage paradigm, first conducting language-agnostic instance segmentation then matching with given text query. However, the semantic concepts from text query and visual cues are separately interacted during the training, and both instance and semantic labels for each object are required, which is time consuming and human-labor intensive. 
To mitigate these issues, we propose a novel Referring 3D Segmentation pipeline, Label-Efficient and Single-Stage, dubbed LESS, which is only under the supervision of efficient binary mask. Specifically, we design a Point-Word Cross-Modal Alignment module for aligning the fine-grained features of points and textual embedding. Query Mask Predictor module and Query-Sentence Alignment module are introduced for coarse-grained alignment between masks and query. Furthermore, we propose an area regularization loss, which coarsely reduces irrelevant background predictions on a large scale. Besides, a point-to-point contrastive loss is proposed concentrating on distinguishing points with subtly similar features. Through extensive experiments, we achieve state-of-the-art performance on ScanRefer dataset by surpassing the previous methods about 3.7% mIoU using only binary labels. Code is available at https://github.com/mellody11/LESS.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94069"} +{"video_file": "hW5QWiCctl_39027991.mp4", "openreview_id": "hW5QWiCctl", "slideslive_id": 39027991, "venue": "nips2024", "title": "GraphMorph: Tubular Structure Extraction by Morphing Predicted Graphs", "status": "Poster", "keywords": "Image Segmentation;Tubular Structure Extraction;Branch-level Features;Graph Representation", "tldr": "We propose GraphMorph, which enhances tubular structure extraction by focusing on branch-level features, using a Graph Decoder and Morph Module to achieve topologically accurate predictions, and demonstrating effectiveness across multiple datasets.", "abstract": "Accurately restoring topology is both challenging and crucial in tubular structure extraction tasks, such as blood vessel segmentation and road network extraction. Diverging from traditional approaches based on pixel-level classification, our proposed method, named GraphMorph, focuses on branch-level features of tubular structures to achieve more topologically accurate predictions. GraphMorph comprises two main components: a Graph Decoder and a Morph Module. Utilizing multi-scale features extracted from an image patch by the segmentation network, the Graph Decoder facilitates the learning of branch-level features and generates a graph that accurately represents the tubular structure in this patch. The Morph Module processes two primary inputs: the graph and the centerline probability map, provided by the Graph Decoder and the segmentation network, respectively. Employing a novel SkeletonDijkstra algorithm, the Morph Module produces a centerline mask that aligns with the predicted graph. Furthermore, we observe that employing centerline masks predicted by GraphMorph significantly reduces false positives in the segmentation task, which is achieved by a simple yet effective post-processing strategy. The efficacy of our method in the centerline extraction and segmentation tasks has been substantiated through experimental evaluations across various datasets. 
Source code will be released soon.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/94063"} +{"video_file": "haUnEiXgQ7_39027973.mp4", "openreview_id": "haUnEiXgQ7", "slideslive_id": 39027973, "venue": "nips2024", "title": "Vision-Language Models are Strong Noisy Label Detectors", "status": "Poster", "keywords": "label-noise learning;sample selection;semi-supervised learning", "tldr": "This paper proposes a denoising fine-tuning framework to adapt vision-language models on noisy downstream tasks.", "abstract": "Recent research on fine-tuning vision-language models has demonstrated impressive performance in various downstream tasks. However, the challenge of obtaining accurately labeled data in real-world applications poses a significant obstacle during the fine-tuning process. To address this challenge, this paper presents a Denoising Fine-Tuning framework, called DeFT, for adapting vision-language models. DeFT utilizes the robust alignment of textual and visual features pre-trained on millions of auxiliary image-text pairs to sieve out noisy labels. The proposed framework establishes a noisy label detector by learning positive and negative textual prompts for each class. The positive prompt seeks to reveal distinctive features of the class, while the negative prompt serves as a learnable threshold for separating clean and noisy samples. We employ parameter-efficient fine-tuning for the adaptation of a pre-trained visual encoder to promote its alignment with the learned textual prompts. As a general framework, DeFT can seamlessly fine-tune many pre-trained models to downstream tasks by utilizing carefully selected clean samples. Experimental results on seven synthetic and real-world noisy datasets validate the effectiveness of DeFT in both noisy label detection and image classification. Our source code can be found in the supplementary material.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/94056"} +{"video_file": "hgdh4foghu_39028870.mp4", "openreview_id": "hgdh4foghu", "slideslive_id": 39028870, "venue": "nips2024", "title": "Policy-shaped prediction: avoiding distractions in model-based reinforcement learning", "status": "Poster", "keywords": "machine learning;model based reinforcement learning;reinforcement learning;segment anything model", "tldr": "Reduce the impact of distractors on model-based RL using gradient-based interpretability with segmentation-based aggregation.", "abstract": "Model-based reinforcement learning (MBRL) is a promising route to sample-efficient policy optimization. However, a known vulnerability of reconstruction-based MBRL consists of scenarios in which detailed aspects of the world are highly predictable, but irrelevant to learning a good policy. Such scenarios can lead the model to exhaust its capacity on meaningless content, at the cost of neglecting important environment dynamics. While existing approaches attempt to solve this problem, we highlight its continuing impact on leading MBRL methods ---including DreamerV3 and DreamerPro--- with a novel environment where background distractions are intricate, predictable, and useless for planning future actions. To address this challenge we develop a method for focusing the capacity of the world model through a synergy of a pretrained segmentation model, a task-aware reconstruction loss, and adversarial learning. 
Our method outperforms a variety of other approaches designed to reduce the impact of distractors, and is an advance towards robust model-based reinforcement learning.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94051"} +{"video_file": "hilGwNabqB_39027301.mp4", "openreview_id": "hilGwNabqB", "slideslive_id": 39027301, "venue": "nips2024", "title": "A Bayesian Approach for Personalized Federated Learning in Heterogeneous Settings", "status": "Poster", "keywords": "Federated Learning;Bayesian Learning", "tldr": "This paper presents a Bayesian approach for personalized FL in heterogeneous settings to enable personalized, privacy-preserving models on decentralized devices, handling challenges like small datasets, model heterogeneity, and privacy constraints.", "abstract": "Federated learning (FL), through its privacy-preserving collaborative learning approach, has significantly empowered decentralized devices. However, constraints in either data and/or computational resources among participating clients introduce several challenges in learning, including the inability to train large model architectures, heightened risks of overfitting, and more. In this work, we present a novel FL framework grounded in Bayesian learning to address these challenges. Our approach involves training personalized Bayesian models at each client tailored to the unique complexities of the clients' datasets and efficiently collaborating across these clients. By leveraging Bayesian neural networks and their uncertainty quantification capabilities, our local training procedure robustly learns from small datasets. And the novel collaboration procedure utilizing priors in the functional (output) space of the networks facilitates collaboration across models of varying sizes, enabling the framework to adapt well in heterogeneous data and computational settings. Furthermore, we present a differentially private version of the algorithm, accompanied by formal differential privacy guarantees that apply without any assumptions on the learning algorithm. Through experiments on popular FL datasets, we demonstrate that our approach outperforms strong baselines in both homogeneous and heterogeneous settings, and under strict privacy constraints.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/94048"} +{"video_file": "hkujvAPVsg_39026223.mp4", "openreview_id": "hkujvAPVsg", "slideslive_id": 39026223, "venue": "nips2024", "title": "HippoRAG: Neurobiologically Inspired Long-Term Memory for Large Language Models", "status": "Poster", "keywords": "retrieval-augmented generation;RAG;long-term memory;neurobiological inspired;hippocampal memory indexing theory", "tldr": "In this work, we introduce a new RAG framework inspired by human long-term memory which integrates knowledge across documents in ways that current RAG methods cannot, allowing it to outperform these baselines on several multi-hop QA benchmarks.", "abstract": "In order to thrive in hostile and ever-changing natural environments, mammalian brains evolved to store large amounts of knowledge about the world and continually integrate new information while avoiding catastrophic forgetting. Despite the impressive accomplishments, large language models (LLMs), even with retrieval-augmented generation (RAG), still struggle to efficiently and effectively integrate a large amount of new experiences after pre-training. 
In this work, we introduce HippoRAG, a novel retrieval framework inspired by the hippocampal indexing theory of human long-term memory to enable deeper and more efficient knowledge integration over new experiences. HippoRAG synergistically orchestrates LLMs, knowledge graphs, and the Personalized PageRank algorithm to mimic the different roles of neocortex and hippocampus in human memory. We compare HippoRAG with existing RAG methods on multi-hop question answering (QA) and show that our method outperforms the state-of-the-art methods remarkably, by up to 20%. Single-step retrieval with HippoRAG achieves comparable or better performance than iterative retrieval like IRCoT while being 10-20 times cheaper and 6-13 times faster, and integrating HippoRAG into IRCoT brings further substantial gains. Finally, we show that our method can tackle new types of scenarios that are out of reach of existing methods.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94043"} +{"video_file": "hoVXLC8vQU_39027621.mp4", "openreview_id": "hoVXLC8vQU", "slideslive_id": 39027621, "venue": "nips2024", "title": "Convergence of $\\text{log}(1/\\epsilon)$ for Gradient-Based Algorithms in Zero-Sum Games without the Condition Number: A Smoothed Analysis", "status": "Poster", "keywords": "smoothed complexity;zero-sum games;optimistic gradient descent;linear convergence", "tldr": "We show that a certain class of first-order methods attain polynomial smoothed complexity in zero-sum games.", "abstract": "Gradient-based algorithms have shown great promise in solving large (two-player) zero-sum games. However, their success has been mostly confined to the low-precision regime since the number of iterations grows polynomially in $1/\\epsilon$, where $\\epsilon > 0$ is the duality gap. While it has been well-documented that linear convergence---an iteration complexity scaling as $\\log(1/\\epsilon)$---can be attained even with gradient-based algorithms, that comes at the cost of introducing a dependency on certain condition number-like quantities which can be exponentially large in the description of the game. To address this shortcoming, we examine the iteration complexity of several gradient-based algorithms in the celebrated framework of smoothed analysis, and we show that they have polynomial smoothed complexity, in that their number of iterations grows as a polynomial in the dimensions of the game, $\\log(1/\\epsilon)$, and $1/\\sigma$, where $\\sigma$ measures the magnitude of the smoothing perturbation. Our result applies to optimistic gradient and extra-gradient descent/ascent, as well as a certain iterative variant of Nesterov's smoothing technique. From a technical standpoint, the proof proceeds by characterizing and performing a smoothed analysis of a certain error bound, the key ingredient driving linear convergence in zero-sum games. 
En route, our characterization also makes a natural connection between the convergence rate of such algorithms and perturbation-stability properties of the equilibrium, which is of interest beyond the model of smoothed complexity.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/94042"} +{"video_file": "hpvJwmzEHX_39026028.mp4", "openreview_id": "hpvJwmzEHX", "slideslive_id": 39026028, "venue": "nips2024", "title": "RGFN: Synthesizable Molecular Generation Using GFlowNets", "status": "Poster", "keywords": "drug discovery;generative models;GFlowNets;synthesizability", "tldr": "Molecular generation with GFlowNets in the chemical reaction space, ensuring synthesizability out-of-the-box", "abstract": "Generative models hold great promise for small molecule discovery, significantly increasing the size of search space compared to traditional in silico screening libraries. However, most existing machine learning methods for small molecule generation suffer from poor synthesizability of candidate compounds, making experimental validation difficult. In this paper we propose Reaction-GFlowNet (RGFN), an extension of the GFlowNet framework that operates directly in the space of chemical reactions, thereby allowing out-of-the-box synthesizability while maintaining comparable quality of generated candidates. We demonstrate that with the proposed set of reactions and building blocks, it is possible to obtain a search space of molecules orders of magnitude larger than existing screening libraries coupled with low cost of synthesis. We also show that the approach scales to very large fragment libraries, further increasing the number of potential molecules. We demonstrate the effectiveness of the proposed approach across a range of oracle models, including pretrained proxy models and GPU-accelerated docking.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/94040"} +{"video_file": "hsgNvC5YM9_39025877.mp4", "openreview_id": "hsgNvC5YM9", "slideslive_id": 39025877, "venue": "nips2024", "title": "Constant Acceleration Flow", "status": "Poster", "keywords": "Generative model;Rectified flow;Fast generation", "tldr": "We introduce the Constant Acceleration Flow (CAF) to enhance generative models, outperforming previous methods in speed and precision with new strategies.", "abstract": "Rectified flow and reflow procedures have significantly advanced fast generation by progressively straightening ordinary differential equation (ODE) flows under the assumption that image and noise pairs, known as coupling, can be approximated by straight trajectories with constant velocity. However, we observe that the constant velocity modeling and reflow procedures have limitations in accurately learning to couple with flow crossing, leading to suboptimal few-step generation. To overcome the limitations, we introduce the Constant Acceleration Flow (CAF), a novel framework based on a simple constant acceleration equation. Additionally, we propose two techniques to improve estimation accuracy: initial velocity conditioning for the acceleration model and a reflow process for the initial velocity. 
Our comparative studies show that CAF not only outperforms rectified flow with reflow procedures in terms of speed and accuracy but also demonstrates substantial improvements in preserving coupling for fast generation.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/94039"} +{"video_file": "hw76X5uWrc_39025293.mp4", "openreview_id": "hw76X5uWrc", "slideslive_id": 39025293, "venue": "nips2024", "title": "Unlocking the Potential of Global Human Expertise", "status": "Poster", "keywords": "Human-AI collaboration;Evolution;Distillation;Neural Networks", "tldr": "Framework to address global challenges: (1) Distill diverse human solutions, (2) Recombine and elaborate upon them with evolution.", "abstract": "Solving societal problems on a global scale requires the collection and processing of ideas and methods from diverse sets of international experts. As the number and diversity of human experts increase, so does the likelihood that elements in this collective knowledge can be combined and refined to discover novel and better solutions. However, it is difficult to identify, combine, and refine complementary information in an increasingly large and diverse knowledge base. This paper argues that artificial intelligence (AI) can play a crucial role in this process. An evolutionary AI framework, termed RHEA, fills this role by distilling knowledge from diverse models created by human experts into equivalent neural networks, which are then recombined and refined in a population-based search. The framework was implemented in a formal synthetic domain, demonstrating that it is transparent and systematic. It was then applied to the results of the XPRIZE Pandemic Response Challenge, in which over 100 teams of experts across 23 countries submitted models based on diverse methodologies to predict COVID-19 cases and suggest non-pharmaceutical intervention policies for 235 nations, states, and regions across the globe. Building upon this expert knowledge, by recombining and refining the 169 resulting policy suggestion models, RHEA discovered a broader and more effective set of policies than either AI or human experts alone, as evaluated based on real-world data. The results thus suggest that AI can play a crucial role in realizing the potential of human expertise in global problem-solving.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/94038"} +{"video_file": "i2oacRDF5L_39025120.mp4", "openreview_id": "i2oacRDF5L", "slideslive_id": 39025120, "venue": "nips2024", "title": "Belief-State Query Policies for User-Aligned POMDPs", "status": "Poster", "keywords": "POMDPs;sequential decision making;user-preferences in POMDPs", "tldr": "A new framework showing feasibility results, algorithms and empirical analysis for policy representations that support user-aligned POMDP planning", "abstract": "Planning in real-world settings often entails addressing partial observability while aligning with users' requirements. We present a novel framework for expressing users' constraints and preferences about agent behavior in a partially observable setting using parameterized belief-state query (BSQ) policies in the setting of goal-oriented partially observable Markov decision processes (gPOMDPs). 
We present the first formal analysis of such constraints and prove that while the expected cost function of a parameterized BSQ policy w.r.t its parameters is not convex, it is piecewise constant and yields an implicit discrete parameter search space that is finite for finite horizons. This theoretical result leads to novel algorithms that optimize gPOMDP agent behavior with guaranteed user alignment. Analysis proves that our algorithms converge to the optimal user-aligned behavior in the limit. Empirical results show that parameterized BSQ policies provide a computationally feasible approach for user-aligned planning in partially observable settings.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94035"} +{"video_file": "i8LoWBJf7j_39024956.mp4", "openreview_id": "i8LoWBJf7j", "slideslive_id": 39024956, "venue": "nips2024", "title": "Interpretable Lightweight Transformer via Unrolling of Learned Graph Smoothness Priors", "status": "Poster", "keywords": "Algorithm Unrolling;Graph Smoothness Prior;White-box Transformer", "tldr": "We build interpretable and lightweight transformer-like neural networks by unrolling iterative optimization algorithms that minimize graph smoothness priors", "abstract": "We build interpretable and lightweight transformer-like neural networks by unrolling iterative optimization algorithms that minimize graph smoothness priors---the quadratic graph Laplacian regularizer (GLR) and the $\\ell_1$-norm graph total variation (GTV)---subject to an interpolation constraint. The crucial insight is that a normalized signal-dependent graph learning module amounts to a variant of the basic self-attention mechanism in conventional transformers. Unlike \"black-box\" transformers that require learning of large key, query and value matrices to compute scaled dot products as affinities and subsequent output embeddings, resulting in huge parameter sets, our unrolled networks employ shallow CNNs to learn low-dimensional features per node to establish pairwise Mahalanobis distances and construct sparse similarity graphs. At each layer, given a learned graph, the target interpolated signal is simply a low-pass filtered output derived from the minimization of an assumed graph smoothness prior, leading to a dramatic reduction in parameter count. Experiments for two image interpolation applications verify the restoration performance, parameter efficiency and robustness to covariate shift of our graph-based unrolled networks compared to conventional transformers.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/94026"} +{"video_file": "iD18l6prA7_39028332.mp4", "openreview_id": "iD18l6prA7", "slideslive_id": 39028332, "venue": "nips2024", "title": "$C^2M^3$: Cycle-Consistent Multi-Model Merging", "status": "Poster", "keywords": "model merging;linear mode connectivity;deep learning", "tldr": "Based on the conjecture that all modes found by SGD live in the same basin up to permutations, we propose a novel weight-matching procedure based on the Frank-Wolfe algorithm that ensures cycle consistency of the permutations.", "abstract": "In this paper, we present a novel data-free method for merging neural networks in weight space. Our method optimizes for the permutations of network neurons while ensuring global coherence across all layers, and it outperforms recent layer-local approaches in a set of challenging scenarios. 
We then generalize the formulation to the $N$-models scenario to enforce cycle consistency of the permutations with guarantees, allowing circular compositions of permutations to be computed without accumulating error along the path. We qualitatively and quantitatively motivate the need for such a constraint, showing its benefits when merging homogeneous sets of models in scenarios spanning varying architectures and datasets. We finally show that, when coupled with activation renormalization, the approach yields the best results in the task.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/94020"} +{"video_file": "iEeiZlTbts_39024959.mp4", "openreview_id": "iEeiZlTbts", "slideslive_id": 39024959, "venue": "nips2024", "title": "No Regrets: Investigating and Improving Regret Approximations for Curriculum Discovery", "status": "Poster", "keywords": "MARL;UED;Robotics", "tldr": "An improved score function for unsupervised environment design in binary outcome settings, which we use to train agents for real-world tasks, and an improved adversarial evaluation protocol that assesses policy robustness.", "abstract": "What data or environments to use for training to improve downstream performance is a longstanding and very topical question in reinforcement learning. In particular, Unsupervised Environment Design (UED) methods have gained recent attention as their adaptive curricula promise to enable agents to be robust to in- and out-of-distribution tasks. This work investigates how existing UED methods select training environments, focusing on task prioritisation metrics. Surprisingly, despite methods aiming to maximise regret in theory, the practical approximations do not correlate with regret but with success rate. As a result, a significant portion of an agent's experience comes from environments it has already mastered, offering little to no contribution toward enhancing its abilities. Put differently, current methods fail to predict intuitive measures of learnability. Specifically, they are unable to consistently identify those scenarios that the agent can sometimes solve, but not always. Based on our analysis, we develop a method that directly trains on scenarios with high learnability. This simple and intuitive approach outperforms existing UED methods in several binary-outcome environments, including the standard domain of Minigrid and a novel setting closely inspired by a real-world robotics problem. We further introduce a new adversarial evaluation procedure for directly measuring robustness, closely mirroring the conditional value at risk (CVaR). We open-source all our code and present visualisations of final policies here: https://github.com/amacrutherford/sampling-for-learnability.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/94019"} +{"video_file": "iEsyRsg6t1_39027739.mp4", "openreview_id": "iEsyRsg6t1", "slideslive_id": 39027739, "venue": "nips2024", "title": "Causal Effect Identification in a Sub-Population with Latent Variables", "status": "Poster", "keywords": "Causal Effect Identification;Selection Bias;Latent Variables", "tldr": "We consider the s-ID problem in the presence of latent variables.", "abstract": "The s-ID problem seeks to compute a causal effect in a specific sub-population from the observational data pertaining to the same sub population (Abouei et al., 2023). This problem has been addressed when all the variables in the system are observable. 
In this paper, we consider an extension of the s-ID problem that allows for the presence of latent variables. To tackle the challenges induced by the presence of latent variables in a sub-population, we first extend the classical relevant graphical definitions, such as c-components and Hedges, initially defined for the so-called ID problem (Pearl, 1995; Tian & Pearl, 2002), to their new counterparts. Subsequently, we propose a sound algorithm for the s-ID problem with latent variables.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/94018"} +{"video_file": "iMEAHXDiNP_39026252.mp4", "openreview_id": "iMEAHXDiNP", "slideslive_id": 39026252, "venue": "nips2024", "title": "Improved Algorithms for Contextual Dynamic Pricing", "status": "Poster", "keywords": "Dynamic Pricing;Bandits", "tldr": "We present a novel approach to dynamic pricing problems with covariates and prove improved regret bounds for both linear and non-parametric valuations.", "abstract": "In contextual dynamic pricing, a seller sequentially prices goods based on contextual information. Buyers will purchase products only if the prices are below their valuations. The goal of the seller is to design a pricing strategy that collects as much revenue as possible. We focus on two different valuation models. The first assumes that valuations linearly depend on the context and are further distorted by noise. Under minor regularity assumptions, our algorithm achieves an optimal regret bound of $\\tilde{O}(T^{2/3})$, improving the existing results. The second model removes the linearity assumption, requiring only that the expected buyer valuation is $\\beta$-H\"older in the context. For this model, our algorithm obtains a regret $\\tilde{O}(T^{(d+2\\beta)/(d+3\\beta)})$, where $d$ is the dimension of the context space.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/94013"} +{"video_file": "iN43sJoib7_39027182.mp4", "openreview_id": "iN43sJoib7", "slideslive_id": 39027182, "venue": "nips2024", "title": "Are Self-Attentions Effective for Time Series Forecasting?", "status": "Poster", "keywords": "time series forecasting;learnable query;parameter sharing", "tldr": "Our paper introduces a new forecasting model, which removes self-attention and utilizes cross-attention in Transformers, enhancing forecasting accuracy and efficiency while outperforming existing models across various datasets.", "abstract": "Time series forecasting is crucial for applications across multiple domains and various scenarios. Although Transformers have dramatically advanced the landscape of forecasting, their effectiveness remains debated. Recent findings have indicated that simpler linear models might outperform complex Transformer-based approaches, highlighting the potential for more streamlined architectures. In this paper, we shift the focus from evaluating the overall Transformer architecture to specifically examining the effectiveness of self-attention for time series forecasting. To this end, we introduce a new architecture, Cross-Attention-only Time Series transformer (CATS), that rethinks the traditional transformer framework by eliminating self-attention and leveraging cross-attention mechanisms instead. By establishing future horizon-dependent parameters as queries and enhanced parameter sharing, our model not only improves long-term forecasting accuracy but also reduces the number of parameters and memory usage. 
Extensive experiment across various datasets demonstrates that our model achieves superior performance with the lowest mean squared error and uses fewer parameters compared to existing models. The implementation of our model is available at: https://github.com/dongbeank/CATS.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/94012"} +{"video_file": "iNS3SC949v_39027215.mp4", "openreview_id": "iNS3SC949v", "slideslive_id": 39027215, "venue": "nips2024", "title": "Sm: enhanced localization in Multiple Instance Learning for medical imaging classification", "status": "Poster", "keywords": "Multiple Instance Learning;Transformers;Graph Neural Networks;Medical Imaging;Weakly Supervised Learning", "tldr": "We draw attention to the localization task in MIL and propose Sm, a principled operator to account for local interactions among instances that yields enhanced performance.", "abstract": "Multiple Instance Learning (MIL) is widely used in medical imaging classification to reduce the labeling effort. While only bag labels are available for training, one typically seeks predictions at both bag and instance levels (classification and localization tasks, respectively). Early MIL methods treated the instances in a bag independently. Recent methods account for global and local dependencies among instances. Although they have yielded excellent results in classification, their performance in terms of localization is comparatively limited. We argue that these models have been designed to target the classification task, while implications at the instance level have not been deeply investigated. Motivated by a simple observation -- that neighboring instances are likely to have the same label -- we propose a novel, principled, and flexible mechanism to model local dependencies. It can be used alone or combined with any mechanism to model global dependencies (e.g., transformers). A thorough empirical validation shows that our module leads to state-of-the-art performance in localization while being competitive or superior in classification. Our code is at https://github.com/Franblueee/SmMIL.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/94011"} +{"video_file": "iNUKoLU8xb_39025220.mp4", "openreview_id": "iNUKoLU8xb", "slideslive_id": 39025220, "venue": "nips2024", "title": "Your contrastive learning problem is secretly a distribution alignment problem", "status": "Poster", "keywords": "Optimal transport;Distribution alignment;Noise contrastive estimation", "tldr": "In this work, we introduce a novel framework for representation learning that recasts contrastive estimation as a distribution alignment problem.", "abstract": "Despite the success of contrastive learning (CL) in vision and language, its theoretical foundations and mechanisms for building representations remain poorly understood. In this work, we build connections between noise contrastive estimation losses widely used in CL and distribution alignment with entropic optimal transport (OT). This connection allows us to develop a family of different losses and multistep iterative variants for existing CL methods. Intuitively, by using more information from the distribution of latents, our approach allows a more distribution-aware manipulation of the relationships within augmented sample sets. We provide theoretical insights and experimental evidence demonstrating the benefits of our approach for generalized contrastive alignment. 
Through this framework, it is possible to leverage tools in OT to build unbalanced losses to handle noisy views and customize the representation space by changing the constraints on alignment. By reframing contrastive learning as an alignment problem and leveraging existing optimization tools for OT, our work provides new insights and connections between different self-supervised learning models in addition to new tools that can be more easily adapted to incorporate domain knowledge into learning.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/94010"} +{"video_file": "iSfCWhvEGA_39026264.mp4", "openreview_id": "iSfCWhvEGA", "slideslive_id": 39026264, "venue": "nips2024", "title": "Learn To be Efficient: Build Structured Sparsity in Large Language Models", "status": "Spotlight", "keywords": "LLM inference efficiency;Moefication;Contexual Sparsity", "tldr": "We propose a novel training algorithm to train efficiency-aware LLMs that have more structured contextual sparsity for fast inference.", "abstract": "Large Language Models (LLMs) have achieved remarkable success with their billion-level parameters, yet they incur high inference overheads. The emergence of activation sparsity in LLMs provides a natural approach to reduce this cost by involving only parts of the parameters for inference. However, existing methods only focus on utilizing this naturally formed activation sparsity in a post-training setting, overlooking the potential for further amplifying this inherent sparsity. In this paper, we hypothesize that LLMs can learn to be efficient by achieving more structured activation sparsity. To achieve this, we introduce a novel training algorithm, Learn-To-be-Efficient (LTE), designed to train efficiency-aware LLMs to learn to activate fewer neurons and achieve a better trade-off between sparsity and performance. Furthermore, unlike SOTA MoEfication methods, which mainly focus on ReLU-based models, LTE can also be applied to LLMs like LLaMA using non-ReLU activations. Extensive evaluation on language understanding, language generation, and instruction tuning tasks show that LTE consistently outperforms SOTA baselines. Along with our hardware-aware custom kernel implementation, LTE reduces LLaMA2-7B inference latency by 25% at 50% sparsity.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/94003"} +{"video_file": "iSjqTQ5S1f_39028627.mp4", "openreview_id": "iSjqTQ5S1f", "slideslive_id": 39028627, "venue": "nips2024", "title": "Stochastic Concept Bottleneck Models", "status": "Poster", "keywords": "Concept Bottleneck Models;Interventions;Interpretability;Concepts", "tldr": "We propose a method for modeling concept dependencies that improves intervention effectiveness.", "abstract": "Concept Bottleneck Models (CBMs) have emerged as a promising interpretable method whose final prediction is based on intermediate, human-understandable concepts rather than the raw input. Through time-consuming manual interventions, a user can correct wrongly predicted concept values to enhance the model's downstream performance. We propose Stochastic Concept Bottleneck Models (SCBMs), a novel approach that models concept dependencies. In SCBMs, a single-concept intervention affects all correlated concepts, thereby improving intervention effectiveness. 
Unlike previous approaches that model the concept relations via an autoregressive structure, we introduce an explicit, distributional parameterization that allows SCBMs to retain the CBMs' efficient training and inference procedure. Additionally, we leverage the parameterization to derive an effective intervention strategy based on the confidence region. We show empirically on synthetic tabular and natural image datasets that our approach improves intervention effectiveness significantly. Notably, we showcase the versatility and usability of SCBMs by examining a setting with CLIP-inferred concepts, alleviating the need for manual concept annotations.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/94002"} +{"video_file": "iYcY7KAkSy_39025055.mp4", "openreview_id": "iYcY7KAkSy", "slideslive_id": 39025055, "venue": "nips2024", "title": "Spiking Token Mixer: An event-driven friendly Former structure for spiking neural networks", "status": "Poster", "keywords": "spiking neural network; event-driven friendly; low energy consumption", "tldr": "We proposed the STMixer architecture, consisting exclusively of convolutional, fully connected layers, and residual path, is more advantageous for deployment in asynchronous scenarios.", "abstract": "Spiking neural networks (SNNs), inspired by biological processes, use spike signals for inter-layer communication, presenting an energy-efficient alternative to traditional neural networks. To realize the theoretical advantages of SNNs in energy efficiency, it is essential to deploy them onto neuromorphic chips. On clock-driven synchronous chips, employing shorter time steps can enhance energy efficiency but reduce SNN performance. Compared to the clock-driven synchronous chip, the event-driven asynchronous chip achieves much lower energy consumption but only supports some specific network operations. Recently, a series of SNN projects have achieved tremendous success, significantly improving the SNN's performance. However, event-driven asynchronous chips do not support some of the proposed structures, making it impossible to integrate these SNNs into asynchronous hardware. In response to these problems, we propose the Spiking Token Mixer (STMixer) architecture, which consists exclusively of operations supported by asynchronous scenarios, including convolutional, fully connected layers and residual paths. Our series of experiments also demonstrates that STMixer achieves performance on par with spiking transformers in synchronous scenarios with very low timesteps. This indicates its ability to achieve the same level of performance with lower power consumption in synchronous scenarios. 
The codes are available at \\url{https://github.com/brain-intelligence-lab/STMixer_demo}.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93999"} +{"video_file": "ibKpPabHVn_39024500.mp4", "openreview_id": "ibKpPabHVn", "slideslive_id": 39024500, "venue": "nips2024", "title": "DeepDRK: Deep Dependency Regularized Knockoff for Feature Selection", "status": "Poster", "keywords": "Feature Selection;Deep Learning;Model-X Knockoff;FDR Control;Boosting Power", "tldr": "A pipeline that consists of a transformer-based model with novel regularizations to boost power while controlling the FDR during feature selection", "abstract": "Model-X knockoff has garnered significant attention among various feature selection methods due to its guarantees for controlling the false discovery rate (FDR). Since its introduction in parametric design, knockoff techniques have evolved to handle arbitrary data distributions using deep learning-based generative models. However, we have observed limitations in the current implementations of the deep Model-X knockoff framework. Notably, the \"swap property\" that knockoffs require often faces challenges at the sample level, resulting in diminished selection power. To address these issues, we develop \"Deep Dependency Regularized Knockoff (DeepDRK),\" a distribution-free deep learning method that effectively balances FDR and power. In DeepDRK, we introduce a novel formulation of the knockoff model as a learning problem under multi-source adversarial attacks. By employing an innovative perturbation technique, we achieve lower FDR and higher power. Our model outperforms existing benchmarks across synthetic, semi-synthetic, and real-world datasets, particularly when sample sizes are small and data distributions are non-Gaussian.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93995"} +{"video_file": "iiYadgKHwo_39025385.mp4", "openreview_id": "iiYadgKHwo", "slideslive_id": 39025385, "venue": "nips2024", "title": "Variational Distillation of Diffusion Policies into Mixture of Experts", "status": "Poster", "keywords": "Diverse Behavior Learning;Model Distillation;Diffusion Models;Variational Inference", "tldr": "This work introduces Variational Diffusion Distillation (VDD), a novel method for distilling denoising diffusion policies into a Mixture of Experts (MoE).", "abstract": "This work introduces Variational Diffusion Distillation (VDD), a novel method that distills denoising diffusion policies into Mixtures of Experts (MoE) through variational inference. Diffusion Models are the current state-of-the-art in generative modeling due to their exceptional ability to accurately learn and represent complex, multi-modal distributions. This ability allows Diffusion Models to replicate the inherent diversity in human behavior, making them the preferred models in behavior learning such as Learning from Human Demonstrations (LfD). However, diffusion models come with some drawbacks, including the intractability of likelihoods and long inference times due to their iterative sampling process. The inference times, in particular, pose a significant challenge to real-time applications such as robot control. In contrast, MoEs effectively address the aforementioned issues while retaining the ability to represent complex distributions but are notoriously difficult to train. 
VDD is the first method that distills pre-trained diffusion models into MoE models, and hence, combines the expressiveness of Diffusion Models with the benefits of Mixture Models. Specifically, VDD leverages a decompositional upper bound of the variational objective that allows the training of each expert separately, resulting in a robust optimization scheme for MoEs. VDD demonstrates across nine complex behavior learning tasks, that it is able to: i) accurately distill complex distributions learned by the diffusion model, ii) outperform existing state-of-the-art distillation methods, and iii) surpass conventional methods for training MoE. The code and videos are available at https://intuitive-robots.github.io/vdd-website.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/93992"} +{"video_file": "ioe66JeCMF_39028311.mp4", "openreview_id": "ioe66JeCMF", "slideslive_id": 39028311, "venue": "nips2024", "title": "Time Makes Space: Emergence of Place Fields in Networks Encoding Temporally Continuous Sensory Experiences", "status": "Poster", "keywords": "Place Cells/Fields;Recurrent Neural Networks;Episodic Memory;Hippocampus", "tldr": "We model the CA3 region as a recurrent autoencoder that reconstructs sensory experiences from partial observations in simulated environments, and observed emergence of place fields reproducing key aspects of hippocampal phenomenology.", "abstract": "The vertebrate hippocampus is thought to use recurrent connectivity in area CA3 to support episodic memory recall from partial cues. This brain area also contains place cells, whose location-selective firing fields implement maps supporting spatial memory. Here we show that place cells emerge in networks trained to remember temporally continuous sensory episodes. We model CA3 as a recurrent autoencoder that recalls and reconstructs sensory experiences from noisy and partially occluded observations by agents traversing simulated arenas. The agents move in realistic trajectories modeled from rodents and environments are modeled as continuously varying, high-dimensional, sensory experience maps (spatially smoothed Gaussian random fields). Training our autoencoder to accurately pattern-complete and reconstruct sensory experiences with a constraint on total activity causes spatially localized firing fields, i.e., place cells, to emerge in the encoding layer. The emergent place fields reproduce key aspects of hippocampal phenomenology: a) remapping (maintenance of and reversion to distinct learned maps in different environments), implemented via repositioning of experience manifolds in the network\u2019s hidden layer, b) orthogonality of spatial representations in different arenas, c) robust place field emergence in differently shaped rooms, with single units showing multiple place fields in large or complex spaces, and (d) slow representational drift of place fields. We argue that these results arise because continuous traversal of space makes sensory experience temporally continuous. 
We make testable predictions: a) rapidly changing sensory context will disrupt place fields, b) place fields will form even if recurrent connections are blocked, but reversion to previously learned representations upon remapping will be abolished, c) the dimension of temporally smooth experience sets the dimensionality of place fields, including during virtual navigation of abstract spaces.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/93988"} +{"video_file": "j14wStqZni_39028234.mp4", "openreview_id": "j14wStqZni", "slideslive_id": 39028234, "venue": "nips2024", "title": "Public-data Assisted Private Stochastic Optimization: Power and Limitations", "status": "Poster", "keywords": "Differential Privacy;Public Data;Stochastic Optimization;Generalized Linear Model", "tldr": "We study the limits and capability of public-data assisted differentially private algorithms.", "abstract": "We study the limits and capability of public-data assisted differentially private (PA-DP) algorithms. Specifically, we focus on the problem of stochastic convex optimization (SCO) with either labeled or unlabeled public data. For complete/labeled public data, we show that any\n(\n\u03f5\n,\n\u03b4\n)\n-PA-DP has excess risk\n\u03a9\n~\n(\nmin\n(\n1\nn\npub\n,\n1\nn\n+\nd\nn\n\u03f5\n)\n)\n, where\nd\nis the dimension,\nn\npub\nis the number of public samples,\nn\npriv\nis the number of private samples, and\nn\n=\nn\npub\n+\nn\npriv\n. These lower bounds are established via our new lower bounds for PA-DP mean estimation, which are of a similar form. Up to constant factors, these lower bounds show that the simple strategy of either treating all data as private or discarding the private data, is optimal. We also study PA-DP supervised learning with \\textit{unlabeled} public samples. In contrast to our previous result, we here show novel methods for leveraging public data in private supervised learning. For generalized linear models (GLM) with unlabeled public data, we show an efficient algorithm which, given\nO\n~\n(\nn\npriv\n\u03f5\n)\nunlabeled public samples, achieves the dimension independent rate\nO\n~\n(\n1\nn\npriv\n+\n1\nn\npriv\n\u03f5\n)\n. We develop new lower bounds for this setting which shows that this rate cannot be improved with more public samples, and any fewer public samples leads to a worse rate. Finally, we provide extensions of this result to general hypothesis classes with finite \\textit{fat-shattering dimension} with applications to neural networks and non-Euclidean geometries.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/93982"} +{"video_file": "j2wCrWmgMX_39027611.mp4", "openreview_id": "j2wCrWmgMX", "slideslive_id": 39027611, "venue": "nips2024", "title": "Kernel Language Entropy: Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities", "status": "Poster", "keywords": "uncertainty quantification;LLMs", "tldr": "Kernel Language Entropy is a novel method for uncertainty quantification that utilizes semantic similarities in the form of kernels over the space of outputs.", "abstract": "Uncertainty quantification in Large Language Models (LLMs) is crucial for applications where safety and reliability are important. In particular, uncertainty can be used to improve the trustworthiness of LLMs by detecting factually incorrect model responses, commonly called hallucinations. 
Critically, one should seek to capture the model's semantic uncertainty, i.e., the uncertainty over the meanings of LLM outputs, rather than uncertainty over lexical or syntactic variations that do not affect answer correctness. To address this problem, we propose Kernel Language Entropy (KLE), a novel method for uncertainty estimation in white- and black-box LLMs. KLE defines positive semidefinite unit trace kernels to encode the semantic similarities of LLM outputs and quantifies uncertainty using the von Neumann entropy. It considers pairwise semantic dependencies between answers (or semantic clusters), providing more fine-grained uncertainty estimates than previous methods based on hard clustering of answers. We theoretically prove that KLE generalizes the previous state-of-the-art method called semantic entropy and empirically demonstrate that it improves uncertainty quantification performance across multiple natural language generation datasets and LLM architectures.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93979"} +{"video_file": "j6Zsoj544N_39027289.mp4", "openreview_id": "j6Zsoj544N", "slideslive_id": 39027289, "venue": "nips2024", "title": "Does Worst-Performing Agent Lead the Pack? Analyzing Agent Dynamics in Unified Distributed SGD", "status": "Poster", "keywords": "Distributed Optimization;Agent Dynamics;Federated Learning;Central Limit Theorem;Efficient Sampling", "tldr": "We provide an asymptotic analysis of the Unified Distributed SGD, including decentralized SGD and various Federated Learning algorithms, to study the impact of agents' sampling strategies on the overall convergence of the large-scale system.", "abstract": "Distributed learning is essential to train machine learning algorithms across heterogeneous agents while maintaining data privacy. We conduct an asymptotic analysis of Unified Distributed SGD (UD-SGD), exploring a variety of communication patterns, including decentralized SGD and local SGD within Federated Learning (FL), as well as the increasing communication interval in the FL setting. In this study, we assess how different sampling strategies, such as i.i.d. sampling, shuffling, and Markovian sampling, affect the convergence speed of UD-SGD by considering the impact of agent dynamics on the limiting covariance matrix as described in the Central Limit Theorem (CLT). Our findings not only support existing theories on linear speedup and asymptotic network independence, but also theoretically and empirically show how efficient sampling strategies employed by individual agents contribute to overall convergence in UD-SGD. 
Simulations reveal that a few agents using highly efficient sampling can achieve or surpass the performance of the majority employing moderately improved strategies, providing new insights beyond traditional analyses focusing on the worst-performing agent.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93978"} +{"video_file": "j6kJSS9O6I_39026440.mp4", "openreview_id": "j6kJSS9O6I", "slideslive_id": 39026440, "venue": "nips2024", "title": "Agent Planning with World Knowledge Model", "status": "Poster", "keywords": "world knowledge model;agent planning;large language models", "tldr": "We introduce parametric World Knowledge Model (WKM) which provides global prior task knowledge and local dynamic state knowledge to facilitate agent planning.", "abstract": "Recent endeavors towards directly using large language models (LLMs) as agent models to execute interactive planning tasks have shown commendable results. Despite their achievements, however, they still struggle with brainless trial-and-error in global planning and generating hallucinatory actions in local planning due to their poor understanding of the \"real\" physical world. Imitating humans' mental world knowledge model which provides global prior knowledge before the task and maintains local dynamic knowledge during the task, in this paper, we introduce parametric World Knowledge Model (WKM) to facilitate agent planning. Concretely, we steer the agent model to self-synthesize knowledge from both expert and sampled trajectories. Then we develop WKM, providing prior task knowledge to guide the global planning and dynamic state knowledge to assist the local planning. Experimental results on three real-world simulated datasets with Mistral-7B, Gemma-7B, and Llama-3-8B demonstrate that our method can achieve superior performance compared to various strong baselines. Besides, we analyze to illustrate that our WKM can effectively alleviate the blind trial-and-error and hallucinatory action issues, providing strong support for the agent's understanding of the world. Other interesting findings include: 1) our instance-level task knowledge can generalize better to unseen tasks, 2) weak WKM can guide strong agent model planning, and 3) unified WKM training has promising potential for further development.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93977"} +{"video_file": "jCMYIUwprx_39025368.mp4", "openreview_id": "jCMYIUwprx", "slideslive_id": 39025368, "venue": "nips2024", "title": "INDICT: Code Generation with Internal Dialogues of Critiques for Both Security and Helpfulness", "status": "Poster", "keywords": "code generation;safety;helpfulness;code security;large language model;critic;autonomous agent", "tldr": "We propose to improve code generation by both safety and helpfulness through a collaborative and autonomous system of dual tool-enhanced critics, providing knowledge-grounded critic feedbacks to support the LLM code generator.", "abstract": "Large language models (LLMs) for code are typically trained to align with natural language instructions to closely follow their intentions and requirements. However, in many practical scenarios, it becomes increasingly challenging for these models to navigate the intricate boundary between helpfulness and safety, especially against highly complex yet potentially malicious instructions. 
In this work, we introduce INDICT: a new framework that empowers LLMs with Internal Dialogues of Critiques for both safety and helpfulness guidance. The internal dialogue is a dual cooperative system between a safety-driven critic and a helpfulness-driven critic. Each critic provides analysis against the given task and corresponding generated response, equipped with external knowledge queried through relevant code snippets and tools like web search and code interpreter. We engage the dual critic system in both code generation stage as well as code execution stage, providing preemptive and post-hoc guidance respectively to LLMs. We evaluated INDICT on 8 diverse tasks across 8 programming languages from 5 benchmarks, using LLMs from 7B to 70B parameters. We observed that our approach can provide an advanced level of critiques of both safety and helpfulness analysis, significantly improving the quality of output codes (+10% absolute improvements in all models).", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93974"} +{"video_file": "jHh804fZ5l_39025774.mp4", "openreview_id": "jHh804fZ5l", "slideslive_id": 39025774, "venue": "nips2024", "title": "Generalization Bound and Learning Methods for Data-Driven Projections in Linear Programming", "status": "Poster", "keywords": "data-driven algorithm design;linear programming;dimensionality reduction;generalization bound", "tldr": "We present a generalization bound and learning methods for reducing the dimensionality of linear programs with projection matrices learned from data.", "abstract": "How to solve high-dimensional linear programs (LPs) efficiently is a fundamental question. Recently, there has been a surge of interest in reducing LP sizes using random projections, which can accelerate solving LPs independently of improving LP solvers. This paper explores a new direction of data-driven projections, which use projection matrices learned from data instead of random projection matrices. Given training data of $n$-dimensional LPs, we learn an $n \\times k$ projection matrix with $n > k$. When addressing a future LP instance, we reduce its dimensionality from $n$ to $k$ via the learned projection matrix, solve the resulting LP to obtain a $k$-dimensional solution, and apply the learned matrix to it to recover an $n$-dimensional solution.\nOn the theoretical side, a natural question is: how much data is sufficient to ensure the quality of recovered solutions? We address this question based on the framework of data-driven algorithm design, which connects the amount of data sufficient for establishing generalization bounds to the pseudo-dimension of performance metrics. We obtain an $\\tilde{O}(nk^2)$ upper bound on the pseudo-dimension, where $\\tilde{O}$ compresses logarithmic factors. We also provide an $\\Omega(nk)$ lower bound, implying our result is tight up to an $\\tilde{O}(k)$ factor.\nOn the practical side, we explore two simple methods for learning projection matrices: PCA- and gradient-based methods. While the former is relatively efficient, the latter can sometimes achieve better solution quality. 
Experiments demonstrate that learning projection matrices from data is indeed beneficial: it leads to significantly higher solution quality than the existing random projection while greatly reducing the time for solving LPs.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93970"}
{"video_file": "jIabKyXOTt_39025506.mp4", "openreview_id": "jIabKyXOTt", "slideslive_id": 39025506, "venue": "nips2024", "title": "Sparsity-Agnostic Linear Bandits with Adaptive Adversaries", "status": "Poster", "keywords": "Regret bounds;online learning;sparse linear regression;model selection", "tldr": "We prove the first sparsity-agnostic regret bounds for stochastic linear bandits without assumptions on the action set or on the sparsity structure.", "abstract": "We study stochastic linear bandits where, in each round, the learner receives a set of actions (i.e., feature vectors), from which it chooses an element and obtains a stochastic reward. The expected reward is a fixed but unknown linear function of the chosen action. We study \\emph{sparse} regret bounds, that depend on the number S of non-zero coefficients in the linear reward function. Previous works focused on the case where S is known, or the action sets satisfy additional assumptions. In this work, we obtain the first sparse regret bounds that hold when S is unknown and the action sets are adversarially generated. Our techniques combine online to confidence set conversions with a novel randomized model selection approach over a hierarchy of nested confidence sets. When S is known, our analysis recovers state-of-the-art bounds for adversarial action sets. We also show that a variant of our approach, using Exp3 to dynamically select the confidence sets, can be used to improve the empirical performance of stochastic linear bandits while enjoying a regret bound with optimal dependence on the time horizon.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/93969"}
{"video_file": "jImXgQEmX3_39026160.mp4", "openreview_id": "jImXgQEmX3", "slideslive_id": 39026160, "venue": "nips2024", "title": "AMOR: A Recipe for Building Adaptable Modular Knowledge Agents Through Process Feedback", "status": "Poster", "keywords": "Agent;Knowledge;Feedback-Driven Adaptation", "tldr": "We propose a general framework for building knowledge agents, featuring FSM-based reasoning logic and a process feedback mechanism, which is demonstrated by extensive experiments to surpass previous agents by a large margin.", "abstract": "The notable success of large language models (LLMs) has sparked an upsurge in building language agents to complete various complex tasks. We present AMOR, an agent framework based on open-source LLMs, which reasons with external knowledge bases and adapts to specific domains through human supervision to the reasoning process. AMOR builds reasoning logic over a finite state machine (FSM) that solves problems through autonomous executions and transitions over disentangled modules. This allows humans to provide direct feedback to the individual modules, and thus naturally forms process supervision. Based on this reasoning and feedback framework, we develop AMOR through two-stage fine-tuning: warm-up and adaptation. The former fine-tunes the LLM with examples automatically constructed from various public datasets, enabling AMOR to generalize across different knowledge environments, while the latter tailors AMOR to specific domains using process feedback. 
Extensive experiments across multiple domains demonstrate the advantage of AMOR over strong baselines, thanks to its FSM-based reasoning and process feedback mechanism. The code and data are publicly available at https://github.com/JianGuanTHU/AMOR.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93967"}
{"video_file": "jKLyKeZfzv_39028195.mp4", "openreview_id": "jKLyKeZfzv", "slideslive_id": 39028195, "venue": "nips2024", "title": "MOTE-NAS: Multi-Objective Training-based Estimate for Efficient Neural Architecture Search", "status": "Poster", "keywords": "neural architecture search;few-cost;training-related estimate.", "tldr": "a few cost estimate for NAS", "abstract": "Neural Architecture Search (NAS) methods seek effective optimization toward performance metrics regarding model accuracy and generalization while facing challenges regarding search costs and GPU resources. Recent Neural Tangent Kernel (NTK) NAS methods achieve remarkable search efficiency based on a training-free model estimate; however, they overlook the non-convex nature of the DNNs in the search process. In this paper, we develop Multi-Objective Training-based Estimate (MOTE) for efficient NAS, retaining search effectiveness and achieving the new state-of-the-art in the accuracy and cost trade-off. To improve NTK and inspired by the Training Speed Estimation (TSE) method, MOTE is designed to model the actual performance of DNNs from macro to micro perspective by drawing the loss landscape and convergence speed simultaneously. Using two reduction strategies, the MOTE is generated based on a reduced architecture and a reduced dataset. Inspired by evolutionary search, our iterative ranking-based, coarse-to-fine architecture search is highly effective. Experiments on NASBench-201 show MOTE-NAS achieves 94.32% accuracy on CIFAR-10, 72.81% on CIFAR-100, and 46.38% on ImageNet-16-120, outperforming NTK-based NAS approaches. An evaluation-free (EF) version of MOTE-NAS delivers high efficiency in only 5 minutes, delivering a model more accurate than KNAS.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/93966"}
{"video_file": "jL0EsbfbAV_39024772.mp4", "openreview_id": "jL0EsbfbAV", "slideslive_id": 39024772, "venue": "nips2024", "title": "Exploring Behavior-Relevant and Disentangled Neural Dynamics with Generative Diffusion Models", "status": "Poster", "keywords": "Neural Latent Discovery;Neural Behavior Analysis;Diffusion Models;Neuroscience", "tldr": "We propose BeNeDiff, a method for revealing disentangled neural dynamics associated with behaviors via generative diffusion models.", "abstract": "Understanding the neural basis of behavior is a fundamental goal in neuroscience. Current research in large-scale neuro-behavioral data analysis often relies on decoding models, which quantify behavioral information in neural data but lack details on behavior encoding. This raises an intriguing scientific question: \"how can we enable in-depth exploration of neural representations in behavioral tasks, revealing interpretable neural dynamics associated with behaviors\". However, addressing this issue is challenging due to the varied behavioral encoding across different brain regions and mixed selectivity at the population level. To tackle this limitation, our approach, named (\"BeNeDiff\"), first identifies a fine-grained and disentangled neural subspace using a behavior-informed latent variable model. 
It then employs state-of-the-art generative diffusion models to synthesize behavior videos that interpret the neural dynamics of each latent factor. We validate the method on multi-session datasets containing widefield calcium imaging recordings across the dorsal cortex. Through guiding the diffusion model to activate individual latent factors, we verify that the neural dynamics of latent factors in the disentangled neural subspace provide interpretable quantifications of the behaviors of interest. At the same time, the neural subspace in BeNeDiff demonstrates high disentanglement and neural reconstruction quality.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/93965"}
{"video_file": "jRtxzzk0a6_39027921.mp4", "openreview_id": "jRtxzzk0a6", "slideslive_id": 39027921, "venue": "nips2024", "title": "Kraken: Inherently Parallel Transformers For Efficient Multi-Device Inference", "status": "Poster", "keywords": "Neural Networks", "tldr": "New Transformer architecture that allows overlapping communication collectives with compute to speed up inference.", "abstract": "Large Transformer networks are increasingly used in settings where low inference latency is necessary to enable new applications and improve the end-user experience. However, autoregressive inference is resource intensive and requires parallelism for efficiency. Parallelism introduces collective communication that is both expensive and represents a phase when hardware resources are underutilized. Towards mitigating this, Kraken is an evolution of the standard Transformer architecture that is designed to complement existing tensor parallelism schemes for efficient inference on multi-device systems. By introducing a fixed degree of intra-layer model parallelism, the architecture allows collective operations to be overlapped with compute, decreasing latency and increasing hardware utilization. When trained on OpenWebText, Kraken models reach a similar perplexity as standard Transformers while also preserving their language modeling capabilities as evaluated on the SuperGLUE benchmark. Importantly, when tested on multi-GPU systems using TensorRT-LLM engines, Kraken speeds up Time To First Token by a mean of 35.6% across a range of model sizes, context lengths, and degrees of tensor parallelism.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93961"}
{"video_file": "jWGGEDYORs_39028422.mp4", "openreview_id": "jWGGEDYORs", "slideslive_id": 39028422, "venue": "nips2024", "title": "DARNet: Dual Attention Refinement Network with Spatiotemporal Construction for Auditory Attention Detection", "status": "Poster", "keywords": "auditory attention decoding (AAD);electroencephalography (EEG);brain-computer interface (BCI)", "tldr": "A dual attention refinement network with spatiotemporal construction for auditory attention detection.", "abstract": "At a cocktail party, humans exhibit an impressive ability to direct their attention. The auditory attention detection (AAD) approach seeks to identify the attended speaker by analyzing brain signals, such as EEG signals. However, current AAD algorithms overlook the spatial distribution information within EEG signals and lack the ability to capture long-range latent dependencies, limiting the model's ability to decode brain activity. 
To address these issues, this paper proposes a dual attention refinement network with spatiotemporal construction for AAD, named DARNet, which consists of the spatiotemporal construction module, dual attention refinement module, and feature fusion & classifier module. Specifically, the spatiotemporal construction module aims to construct more expressive spatiotemporal feature representations, by capturing the spatial distribution characteristics of EEG signals. The dual attention refinement module aims to extract different levels of temporal patterns in EEG signals and enhance the model's ability to capture long-range latent dependencies. The feature fusion & classifier module aims to aggregate temporal patterns and dependencies from different levels and obtain the final classification results. The experimental results indicate that DARNet achieved excellent classification performance, particularly under short decision windows. While maintaining excellent classification performance, DARNet significantly reduces the number of required parameters. Compared to the state-of-the-art models, DARNet reduces the parameter count by 91%. Code is available at: https://github.com/fchest/DARNet.git.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/93956"} +{"video_file": "jXgHEwtXs8_39024874.mp4", "openreview_id": "jXgHEwtXs8", "slideslive_id": 39024874, "venue": "nips2024", "title": "High-Resolution Image Harmonization with Adaptive-Interval Color Transformation", "status": "Poster", "keywords": "Image Composition;Image Harmonization;Lookup Table", "tldr": "This paper proposes an Adaptive-Interval Color Transformation method to perform pixel-wise color transformation and model local non-linearities of the color transformation for high-resolution image harmonization.", "abstract": "Existing high-resolution image harmonization methods typically rely on global color adjustments or the upsampling of parameter maps. However, these methods ignore local variations, leading to inharmonious appearances. To address this problem, we propose an Adaptive-Interval Color Transformation method (AICT), which predicts pixel-wise color transformations and adaptively adjusts the sampling interval to model local non-linearities of the color transformation at high resolution. Specifically, a parameter network is first designed to generate multiple position-dependent 3-dimensional lookup tables (3D LUTs), which use the color and position of each pixel to perform pixel-wise color transformations. Then, to enhance local variations adaptively, we separate a color transform into a cascade of sub-transformations using two 3D LUTs to achieve the non-uniform sampling intervals of the color transform. Finally, a global consistent weight learning method is proposed to predict an image-level weight for each color transform, utilizing global information to enhance the overall harmony. Extensive experiments demonstrate that our AICT achieves state-of-the-art performance with a lightweight architecture. 
The code is available at https://github.com/aipixel/AICT.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93954"}
{"video_file": "jXsxGt80sv_39027000.mp4", "openreview_id": "jXsxGt80sv", "slideslive_id": 39027000, "venue": "nips2024", "title": "Star-Agents: Automatic Data Optimization with LLM Agents for Instruction Tuning", "status": "Poster", "keywords": "Large language models;Data;Instruction-Tuning", "tldr": "We propose an automated framework for dataset optimization, which, upon employing the refined datasets for instruction-tuning of LLMs, has demonstrated a performance enhancement of approximately 12% on evaluation sets such as MT-bench.", "abstract": "The efficacy of large language models (LLMs) on downstream tasks usually hinges on instruction tuning, which relies critically on the quality of training data. Unfortunately, collecting high-quality and diverse data is both expensive and time-consuming. To mitigate this issue, we propose a novel Star-Agents framework, which automates the enhancement of data quality across datasets through multi-agent collaboration and assessment. The framework adopts a three-pronged strategy. It initially generates diverse instruction data with multiple LLM agents through a bespoke sampling method. Subsequently, the generated data undergo a rigorous evaluation using a dual-model method that assesses both difficulty and quality. Finally, the above process evolves in a dynamic refinement phase, where more effective LLMs are prioritized, enhancing the overall data quality. Our empirical studies, including instruction tuning experiments with models such as Pythia and LLaMA, demonstrate the effectiveness of the proposed framework. Optimized datasets have achieved substantial improvements, with an average increase of 12% and notable gains in specific metrics, such as a 40% improvement in Fermi, as evidenced by benchmarks like MT-bench, Vicuna bench, and WizardLM testset. Codes will be released soon.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93952"}
{"video_file": "jXxvSkb9HD_39026538.mp4", "openreview_id": "jXxvSkb9HD", "slideslive_id": 39026538, "venue": "nips2024", "title": "Statistical Multicriteria Benchmarking via the GSD-Front", "status": "Spotlight", "keywords": "multicriteria benchmarking;robust statistics;statistical test;imprecise probabilities;reliability;non-standard scales of measurement;decision theory", "tldr": "We propose the GSD-front for reliable multicriteria benchmarking of classifiers, give conditions for its consistent estimability, propose (robust) statistical tests for checking if a classifier is contained, and illustrate it on two benchmark suites.", "abstract": "Given the vast number of classifiers that have been (and continue to be) proposed, reliable methods for comparing them are becoming increasingly important. The desire for reliability is broken down into three main aspects: (1) Comparisons should allow for different quality metrics simultaneously. (2) Comparisons should take into account the statistical uncertainty induced by the choice of benchmark suite. (3) The robustness of the comparisons under small deviations in the underlying assumptions should be verifiable. To address (1), we propose to compare classifiers using a generalized stochastic dominance ordering (GSD) and present the GSD-front as an information-efficient alternative to the classical Pareto-front. 
For (2), we propose a consistent statistical estimator for the GSD-front and construct a statistical test for whether a (potentially new) classifier lies in the GSD-front of a set of state-of-the-art classifiers. For (3), we relax our proposed test using techniques from robust statistics and imprecise probabilities. We illustrate our concepts on the benchmark suite PMLB and on the platform OpenML.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/93951"} +{"video_file": "jd3msHMtTL_39025332.mp4", "openreview_id": "jd3msHMtTL", "slideslive_id": 39025332, "venue": "nips2024", "title": "Small coresets via negative dependence: DPPs, linear statistics, and concentration", "status": "Spotlight", "keywords": "Coresets;determinantal point processes;concentration inequalities", "tldr": "Determinantal point processes provably yield coresets with better accuracy guarantees than independent sampling.", "abstract": "Determinantal point processes (DPPs) are random configurations of points with tunable negative dependence. Because sampling is tractable, DPPs are natural candidates for subsampling tasks, such as minibatch selection or coreset construction. A \\emph{coreset} is a subset of a (large) training set, such that minimizing an empirical loss averaged over the coreset is a controlled replacement for the intractable minimization of the original empirical loss. Typically, the control takes the form of a guarantee that the average loss over the coreset approximates the total loss uniformly across the parameter space. Recent work has provided significant empirical support in favor of using DPPs to build randomized coresets, coupled with interesting theoretical results that are suggestive but leave some key questions unanswered. In particular, the central question of whether the cardinality of a DPP-based coreset is fundamentally smaller than one based on independent sampling remained open. In this paper, we answer this question in the affirmative, demonstrating that \\emph{DPPs can provably outperform independently drawn coresets}. In this vein, we contribute a conceptual understanding of coreset loss as a \\emph{linear statistic} of the (random) coreset. We leverage this structural observation to connect the coresets problem to a more general problem of concentration phenomena for linear statistics of DPPs, wherein we obtain \\emph{effective concentration inequalities that extend well-beyond the state-of-the-art}, encompassing general non-projection, even non-symmetric kernels. The latter have been recently shown to be of interest in machine learning beyond coresets, but come with a limited theoretical toolbox, to the extension of which our result contributes. Finally, we are also able to address the coresets problem for vector-valued objective functions, a novelty in the coresets literature.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93945"} +{"video_file": "jfHkAEgKwH_39026983.mp4", "openreview_id": "jfHkAEgKwH", "slideslive_id": 39026983, "venue": "nips2024", "title": "LocCa: Visual Pretraining with Location-aware Captioners", "status": "Poster", "keywords": "Vision Language Models;Visual Pretraining;Location-aware Generation", "tldr": "We explore location-aware tasks as proxies for generative visual pretraining (as opposed to transfer/instruction tuning in prior works).", "abstract": "Image captioning was recently found to be an effective pretraining method similar to contrastive pretraining. 
This opens up the largely-unexplored potential of using natural language as a flexible and powerful interface for handling diverse pretraining tasks. In this paper, we demonstrate this with a novel visual pretraining paradigm, LocCa, that incorporates location-aware tasks into captioners to teach models to extract rich information from images. Specifically, LocCa employs two tasks, bounding box prediction and location-dependent captioning, conditioned on the image pixel input. Thanks to the multitask capabilities of an encoder-decoder architecture, we show that an image captioner can effortlessly handle multiple tasks during pretraining. LocCa significantly outperforms standard captioners on downstream localization tasks, achieving state-of-the-art results on RefCOCO/+/g, while maintaining comparable performance on holistic tasks. Our work paves the way for further exploration of natural language interfaces in visual pretraining.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93941"} +{"video_file": "jfkid2HwNr_39025944.mp4", "openreview_id": "jfkid2HwNr", "slideslive_id": 39025944, "venue": "nips2024", "title": "Medformer: A Multi-Granularity Patching Transformer for Medical Time-Series Classification", "status": "Poster", "keywords": "Transformer;Time Series;Healthcare", "tldr": "A novel transformer model designed for medical time series classification that utilizes cross-channel multi-granularity patching, and intra-inter granularity self-attention within and among granularities.", "abstract": "Medical time series (MedTS) data, such as Electroencephalography (EEG) and Electrocardiography (ECG), play a crucial role in healthcare, such as diagnosing brain and heart diseases. Existing methods for MedTS classification primarily rely on handcrafted biomarkers extraction and CNN-based models, with limited exploration of transformer-based models. In this paper, we introduce Medformer, a multi-granularity patching transformer tailored specifically for MedTS classification. Our method incorporates three novel mechanisms to leverage the unique characteristics of MedTS: cross-channel patching to leverage inter-channel correlations, multi-granularity embedding for capturing features at different scales, and two-stage (intra- and inter-granularity) multi-granularity self-attention for learning features and correlations within and among granularities. We conduct extensive experiments on five public datasets under both subject-dependent and challenging subject-independent setups. Results demonstrate Medformer's superiority over 10 baselines, achieving top averaged ranking across five datasets on all six evaluation metrics. These findings underscore the significant impact of our method on healthcare applications, such as diagnosing Myocardial Infarction, Alzheimer's, and Parkinson's disease. 
We release the source code at https://github.com/DL4mHealth/Medformer.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/93940"} +{"video_file": "jgpWXnXdME_39028682.mp4", "openreview_id": "jgpWXnXdME", "slideslive_id": 39028682, "venue": "nips2024", "title": "Advection Augmented Convolutional Neural Networks", "status": "Poster", "keywords": "Reaction-Advection-Diffusion System;Partial Differential Equation;Semi-Lagrangian Scheme;Spatio-temporal Prediction", "tldr": "We introduce an advection augmented CNN and show its effectiveness on several real-world datasets.", "abstract": "Many problems in physical sciences are characterized by the prediction of space-time sequences. Such problems range from weather prediction to the analysis of disease propagation and video prediction. Modern techniques for the solution of these problems typically combine Convolution Neural Networks (CNN) architecture with a time prediction mechanism. However, oftentimes, such approaches underperform in the long-range propagation of information and lack explainability. In this work, we introduce a physically inspired architecture for the solution of such problems. Namely, we propose to augment CNNs with advection by designing a novel semi-Lagrangian push operator. We show that the proposed operator allows for the non-local transformation of information compared with standard convolutional kernels. We then complement it with Reaction and Diffusion neural components to form a network that mimics the Reaction-Advection-Diffusion equation, in high dimensions. We demonstrate the effectiveness of our network on a number of spatio-temporal datasets that show their merit. Our code is available at https://github.com/Siddharth-Rout/deepADRnet.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/93938"} +{"video_file": "joNPMCzVIi_39025995.mp4", "openreview_id": "joNPMCzVIi", "slideslive_id": 39025995, "venue": "nips2024", "title": "Improved Bayes Regret Bounds for Multi-Task Hierarchical Bayesian Bandit Algorithms", "status": "Poster", "keywords": "hierarchical Bayesian bandi/semi-bandit;multi-task bandit;Bayes regret bound;Thompson sampling algorithm;BayesUCB algorithm", "tldr": "We propose novel algorithms and provide improved Bayes regret bounds for multi-task hierarchical Bayesian bandi/semi-bandit setting.", "abstract": "Hierarchical Bayesian bandit refers to the multi-task bandit problem in which bandit tasks are assumed to be drawn from the same distribution. In this work, we provide improved Bayes regret bounds for hierarchical Bayesian bandit algorithms in the multi-task linear bandit and semi-bandit settings. For the multi-task linear bandit, we first analyze the preexisting hierarchical Thompson sampling (HierTS) algorithm, and improve its gap-independent Bayes regret bound from\nO\n(\nm\nn\nlog\n\u2061\nn\nlog\n\u2061\n(\nm\nn\n)\n)\nto\nO\n(\nm\nn\nlog\n\u2061\nn\n)\nin the case of infinite action set, with\nm\nbeing the number of tasks and\nn\nthe number of iterations per task. In the case of finite action set, we propose a novel hierarchical Bayesian bandit algorithm, named hierarchical BayesUCB (HierBayesUCB), that achieves the logarithmic but gap-dependent regret bound\nO\n(\nm\nlog\n\u2061\n(\nm\nn\n)\nlog\n\u2061\nn\n)\nunder mild assumptions. 
All of the above regret bounds hold in many variants of hierarchical Bayesian linear bandit problem, including when the tasks are solved sequentially or concurrently. Furthermore, we extend the aforementioned HierTS and HierBayesUCB algorithms to the multi-task combinatorial semi-bandit setting. Concretely, our combinatorial HierTS algorithm attains comparable Bayes regret bound\nO\n(\nm\nn\nlog\n\u2061\nn\n)\nwith respect to the latest one. Moreover, our combinatorial HierBayesUCB yields a sharper Bayes regret bound\nO\n(\nm\nlog\n\u2061\n(\nm\nn\n)\nlog\n\u2061\nn\n)\n. Experiments are conducted to validate the soundness of our theoretical results for multi-task bandit algorithms.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/93935"} +{"video_file": "jrNlWfor7q_39026947.mp4", "openreview_id": "jrNlWfor7q", "slideslive_id": 39026947, "venue": "nips2024", "title": "Kronecker-Factored Approximate Curvature for Physics-Informed Neural Networks", "status": "Poster", "keywords": "KFAC;PINNs;Gauss-Newton;PDEs;Taylor mode automatic differentiation;Forward Laplacian;Second-order optimization;Higher-order derivatives", "tldr": "We derive a KFAC approximation for PINN losses which scales to high-dimensional NNs and PDEs and consistently outperforms first-order methods for training PINNs.", "abstract": "Physics-Informed Neural Networks (PINNs) are infamous for being hard to train. Recently, second-order methods based on natural gradient and Gauss-Newton methods have shown promising performance, improving the accuracy achieved by first-order methods by several orders of magnitude. While promising, the proposed methods only scale to networks with a few thousand parameters due to the high computational cost to evaluate, store, and invert the curvature matrix. We propose Kronecker-factored approximate curvature (KFAC) for PINN losses that greatly reduces the computational cost and allows scaling to much larger networks. Our approach goes beyond the popular KFAC for traditional deep learning problems as it captures contributions from a PDE's differential operator that are crucial for optimization. To establish KFAC for such losses, we use Taylor-mode automatic differentiation to describe the differential operator's computation graph as a forward network with shared weights which allows us to apply a variant of KFAC for networks with weight-sharing. Empirically, we find that our KFAC-based optimizers are competitive with expensive second-order methods on small problems, scale more favorably to higher-dimensional neural networks and PDEs, and consistently outperform first-order methods.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/93933"} +{"video_file": "jsgYYXaSiS_39025338.mp4", "openreview_id": "jsgYYXaSiS", "slideslive_id": 39025338, "venue": "nips2024", "title": "Dual Prototype Evolving for Test-Time Generalization of Vision-Language Models", "status": "Poster", "keywords": "Test-Time Adaptation;Vision-Language Models;CLIP;Transfer Learning", "tldr": "We introduce Dual Prototype Evolving (DPE), a novel test-time adaptation approach for VLMs that effectively accumulates task-specific knowledge from multi-modalities.", "abstract": "Test-time adaptation, which enables models to generalize to diverse data with unlabeled test samples, holds significant value in real-world scenarios. 
Recently, researchers have applied this setting to advanced pre-trained vision-language models (VLMs), developing approaches such as test-time prompt tuning to further extend their practical applicability. However, these methods typically focus solely on adapting VLMs from a single modality and fail to accumulate task-specific knowledge as more samples are processed. To address this, we introduce Dual Prototype Evolving (DPE), a novel test-time adaptation approach for VLMs that effectively accumulates task-specific knowledge from multi-modalities. Specifically, we create and evolve two sets of prototypes\u2014textual and visual\u2014to progressively capture more accurate multi-modal representations for target classes during test time. Moreover, to promote consistent multi-modal representations, we introduce and optimize learnable residuals for each test sample to align the prototypes from both modalities. Extensive experimental results on 15 benchmark datasets demonstrate that our proposed DPE consistently outperforms previous state-of-the-art methods while also exhibiting competitive computational efficiency.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93929"} +{"video_file": "jzkpwcj200_39027903.mp4", "openreview_id": "jzkpwcj200", "slideslive_id": 39027903, "venue": "nips2024", "title": "Efficient multi-prompt evaluation of LLMs", "status": "Poster", "keywords": "llm;multi-prompt evaluation;efficient evaluation;evaluation", "tldr": "We propose an efficient method for multi-prompt evaluation of LLMs.", "abstract": "Most popular benchmarks for comparing LLMs rely on a limited set of prompt templates, which may not fully capture the LLMs\u2019 abilities and can affect the reproducibility of results on leaderboards. Many recent works empirically verify prompt sensitivity and advocate for changes in LLM evaluation. In this paper, we consider the problem of estimating the performance distribution across many prompt variants instead of finding a single prompt to evaluate with. We introduce PromptEval, a method for estimating performance across a large set of prompts borrowing strength across prompts and examples to produce accurate estimates under practical evaluation budgets. The resulting distribution can be used to obtain performance quantiles to construct various robust performance metrics (e.g., top 95% quantile or median). We prove that PromptEval consistently estimates the performance distribution and demonstrate its efficacy empirically on three prominent LLM benchmarks: MMLU, BIG-bench Hard, and LMentry; for example, PromptEval can accurately estimate performance quantiles across 100 prompt templates on MMLU with a budget equivalent to two single-prompt evaluations. 
Moreover, we show how PromptEval can be useful in LLM-as-a-judge and best prompt identification applications.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/93925"} +{"video_file": "jzngdJQ2lY_39028736.mp4", "openreview_id": "jzngdJQ2lY", "slideslive_id": 39028736, "venue": "nips2024", "title": "Solving Minimum-Cost Reach Avoid using Reinforcement Learning", "status": "Poster", "keywords": "Reinforcement Learning;Optimal Control;Reachability Analysis", "tldr": "We propose a new RL method for solving the minimum-cost reach-avoid problem, inspired by reachability analysis.", "abstract": "Current reinforcement-learning methods are unable to directly learn policies that solve the minimum cost reach-avoid problem to minimize cumulative costs subject to the constraints of reaching the goal and avoiding unsafe states, as the structure of this new optimization problem is incompatible with current methods. Instead, a surrogate problem is solved where all objectives are combined with a weighted sum. However, this surrogate objective results in suboptimal policies that do not directly minimize the cumulative cost. In this work, we propose RC-PPO, a reinforcement-learning-based method for solving the minimum-cost reach-avoid problem by using connections to Hamilton-Jacobi reachability. Empirical results demonstrate that RC-PPO learns policies with comparable goal-reaching rates to while achieving up to 57% lower cumulative costs compared to existing methods on a suite of minimum-cost reach-avoid benchmarks on the Mujoco simulator. The project page can be found at https://oswinso.xyz/rcppo.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93924"} +{"video_file": "k6ZHvF1vkg_39025076.mp4", "openreview_id": "k6ZHvF1vkg", "slideslive_id": 39025076, "venue": "nips2024", "title": "Beyond Optimism: Exploration With Partially Observable Rewards", "status": "Poster", "keywords": "reinforcement learning;partial observability;exploration;successor representations", "tldr": "Directed exploration with the successor representation for MDPs with partially observable rewards", "abstract": "Exploration in reinforcement learning (RL) remains an open challenge. RL algorithms rely on observing rewards to train the agent, and if informative rewards are sparse the agent learns slowly or may not learn at all. To improve exploration and reward discovery, popular algorithms rely on optimism. But what if sometimes rewards are unobservable, e.g., situations of partial monitoring in bandits and the recent formalism of monitored Markov decision process? In this case, optimism can lead to suboptimal behavior that does not explore further to collapse uncertainty. With this paper, we present a novel exploration strategy that overcomes the limitations of existing methods and guarantees convergence to an optimal policy even when rewards are not always observable. 
We further propose a collection of tabular environments for benchmarking exploration in RL (with and without unobservable rewards) and show that our method outperforms existing ones.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93919"} +{"video_file": "k8AYft5ED1_39025613.mp4", "openreview_id": "k8AYft5ED1", "slideslive_id": 39025613, "venue": "nips2024", "title": "Understanding and Improving Adversarial Collaborative Filtering for Robust Recommendation", "status": "Poster", "keywords": "Adversarial Collaborative Filtering;Robust Recommender System;Poisoning Attacks", "tldr": "We theoretically analyze Adversarial Collaborative Filtering (ACF) and further propose a method to improve ACF for robust recommendations.", "abstract": "Adversarial Collaborative Filtering (ACF), which typically applies adversarial perturbations at user and item embeddings through adversarial training, is widely recognized as an effective strategy for enhancing the robustness of Collaborative Filtering (CF) recommender systems against poisoning attacks. Besides, numerous studies have empirically shown that ACF can also improve recommendation performance compared to traditional CF. Despite these empirical successes, the theoretical understanding of ACF's effectiveness in terms of both performance and robustness remains unclear. To bridge this gap, in this paper, we first theoretically show that ACF can achieve a lower recommendation error compared to traditional CF with the same training epochs in both clean and poisoned data contexts. Furthermore, by establishing bounds for reductions in recommendation error during ACF's optimization process, we find that applying personalized magnitudes of perturbation for different users based on their embedding scales can further improve ACF's effectiveness. Building on these theoretical understandings, we propose Personalized Magnitude Adversarial Collaborative Filtering (PamaCF). Extensive experiments demonstrate that PamaCF effectively defends against various types of poisoning attacks while significantly enhancing recommendation performance.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93916"} +{"video_file": "k9SH68MvJs_39024442.mp4", "openreview_id": "k9SH68MvJs", "slideslive_id": 39024442, "venue": "nips2024", "title": "Diffusion-Reward Adversarial Imitation Learning", "status": "Poster", "keywords": "Imitation Learning;Adversarial Imitation Learning;Diffusion Model", "tldr": "This work proposes a novel adversarial imitation learning framework that integrates a diffusion model into generative adversarial imitation learning.", "abstract": "Imitation learning aims to learn a policy from observing expert demonstrations without access to reward signals from environments. Generative adversarial imitation learning (GAIL) formulates imitation learning as adversarial learning, employing a generator policy learning to imitate expert behaviors and discriminator learning to distinguish the expert demonstrations from agent trajectories. Despite its encouraging results, GAIL training is often brittle and unstable. Inspired by the recent dominance of diffusion models in generative modeling, we propose Diffusion-Reward Adversarial Imitation Learning (DRAIL), which integrates a diffusion model into GAIL, aiming to yield more robust and smoother rewards for policy learning. 
Specifically, we propose a diffusion discriminative classifier to construct an enhanced discriminator, and design diffusion rewards based on the classifier\u2019s output for policy learning. Extensive experiments are conducted in navigation, manipulation, and locomotion, verifying DRAIL\u2019s effectiveness compared to prior imitation learning methods. Moreover, additional experimental results demonstrate the generalizability and data efficiency of DRAIL. Visualized learned reward functions of GAIL and DRAIL suggest that DRAIL can produce more robust and smoother rewards. Project page: https://nturobotlearninglab.github.io/DRAIL/", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93914"} +{"video_file": "kCabCEhQWv_39025331.mp4", "openreview_id": "kCabCEhQWv", "slideslive_id": 39025331, "venue": "nips2024", "title": "Neural Isometries: Taming Transformations for Equivariant ML", "status": "Poster", "keywords": "Equivariance;Geometric Deep Learning;Representation Learning", "tldr": "Neural Isometries find latent spaces where complicated transformations become tractable for downstream tasks.", "abstract": "Real-world geometry and 3D vision tasks are replete with challenging symmetries that defy tractable analytical expression. In this paper, we introduce Neural Isometries, an autoencoder framework which learns to map the observation space to a general-purpose latent space wherein encodings are related by isometries whenever their corresponding observations are geometrically related in world space. Specifically, we regularize the latent space such that maps between encodings preserve a learned inner product and commute with a learned functional operator, in the same manner as rigid-body transformations commute with the Laplacian. This approach forms an effective backbone for self-supervised representation learning, and we demonstrate that a simple off-the-shelf equivariant network operating in the pre-trained latent space can achieve results on par with meticulously-engineered, handcrafted networks designed to handle complex, nonlinear symmetries. Furthermore, isometric maps capture information about the respective transformations in world space, and we show that this allows us to regress camera poses directly from the coefficients of the maps between encodings of adjacent views of a scene.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93912"} +{"video_file": "kLiWXUdCEw_39026412.mp4", "openreview_id": "kLiWXUdCEw", "slideslive_id": 39026412, "venue": "nips2024", "title": "An Analysis of Elo Rating Systems via Markov Chains", "status": "Poster", "keywords": "Elo ratings;Bradley\u2013Terry\u2013Luce model;tournament design;concentration", "tldr": "We present an analysis of Elo rating systems under the Bradley\u2013Terry\u2013Luce model", "abstract": "We present a theoretical analysis of the Elo rating system, a popular method for ranking skills of players in an online setting. In particular, we study Elo under the Bradley-Terry-Luce model and, using techniques from Markov chain theory, show that Elo learns the model parameters at a rate competitive with the state-of-the-art. 
We apply our results to the problem of efficient tournament design and discuss a connection with the fastest-mixing Markov chain problem.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93904"} +{"video_file": "kOMrm4ZJ3m_39024672.mp4", "openreview_id": "kOMrm4ZJ3m", "slideslive_id": 39024672, "venue": "nips2024", "title": "Global Lyapunov functions: a long-standing open problem in mathematics, with symbolic transformers", "status": "Poster", "keywords": "mathematics;Lyapunov;transformers;control;AI for science;AI for maths;reasoning", "tldr": "Transformers can be trained from synthetic data to find Lyapunov functions, a long-standing open problem in mathematics", "abstract": "Despite their spectacular progress, language models still struggle on complex reasoning tasks, such as advanced mathematics. We consider a long-standing open problem in mathematics: discovering a Lyapunov function that ensures the global stability of a dynamical system. This problem has no known general solution, and algorithmic solvers only exist for some small polynomial systems. We propose a new method for generating synthetic training samples from random solutions, and show that sequence-to-sequence transformers trained on such datasets perform better than algorithmic solvers and humans on polynomial systems, and can discover new Lyapunov functions for non-polynomial systems.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/93899"} +{"video_file": "kPBEAZU5Nm_39028052.mp4", "openreview_id": "kPBEAZU5Nm", "slideslive_id": 39028052, "venue": "nips2024", "title": "Chain of Thoughtlessness? An Analysis of CoT in Planning", "status": "Poster", "keywords": "LLMs;Planning;Reasoning;Chain of Thought", "tldr": "We carefully examined the performance of Chain of Thought techniques on classical planning problems and found that, contrary to previous claims, they do not lead to generalizable improvement..", "abstract": "Large language model (LLM) performance on reasoning problems typically does not generalize out of distribution. Previous work has claimed that this can be mitigated with chain of thought prompting--a method of demonstrating solution procedures--with the intuition that it is possible to in-context teach an LLM an algorithm for solving the problem. This paper presents a case study of chain of thought on problems from Blocksworld, a classical planning domain, and examines the performance of two state-of-the-art LLMs across two axes: generality of examples given in prompt, and complexity of problems queried with each prompt. While our problems are very simple, we only find meaningful performance improvements from chain of thought prompts when those prompts are exceedingly specific to their problem class, and that those improvements quickly deteriorate as the size n of the query-specified stack grows past the size of stacks shown in the examples. We also create scalable variants of three domains commonly studied in previous CoT papers and demonstrate the existence of similar failure modes. Our results hint that, contrary to previous claims in the literature, CoT's performance improvements do not stem from the model learning general algorithmic procedures via demonstrations but depend on carefully engineering highly problem specific prompts. 
This spotlights drawbacks of chain of thought, especially the sharp tradeoff between possible performance gains and the amount of human labor necessary to generate examples with correct reasoning traces.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/93898"}
{"video_file": "kPmSfhCM5s_39025249.mp4", "openreview_id": "kPmSfhCM5s", "slideslive_id": 39025249, "venue": "nips2024", "title": "Vitron: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing", "status": "Poster", "keywords": "Multimodal Large Language Model;Unified Large Language Model", "tldr": "We present a universal pixel-level vision LLM designed for comprehensive understanding, generating, segmenting, and editing of both static images and dynamic videos.", "abstract": "Recent developments of vision large language models (LLMs) have seen remarkable progress, yet still encounter challenges towards multimodal generalists, such as coarse-grained instance-level understanding, lack of unified support for both images and videos, and insufficient coverage across various vision tasks. In this paper we present Vitron, a universal pixel-level vision LLM designed for comprehensive understanding, generating, segmenting, and editing of both static images and dynamic videos. Building on top of an LLM backbone, Vitron incorporates encoders for images, videos, and pixel-level regional visuals within its frontend modules, while employing state-of-the-art visual specialists as its backend, via which Vitron supports a spectrum of vision end tasks, spanning visual comprehension to visual generation, from low level to high level. To ensure an effective and precise message passing from LLM to backend modules for function invocation, we propose a novel hybrid method by simultaneously integrating discrete textual instructions and continuous signal embeddings. Further, we design various pixel-level spatiotemporal vision-language alignment learning for Vitron to reach the best fine-grained visual capability. Finally, a cross-task synergy module is advised to learn to maximize the task-invariant fine-grained visual features, enhancing the synergy between different visual tasks. Demonstrated over 12 visual tasks and evaluated across 22 datasets, Vitron showcases its extensive capabilities in the four main vision task clusters. Overall, this work illuminates the great potential of developing a more unified multimodal generalist.", "primary_area": "human-AI_interaction", "site": "https://neurips.cc/virtual/2024/poster/93896"}
{"video_file": "kQ9LgM2JQT_39026596.mp4", "openreview_id": "kQ9LgM2JQT", "slideslive_id": 39026596, "venue": "nips2024", "title": "QGFN: Controllable Greediness with Action Values", "status": "Poster", "keywords": "GFlowNets;generative models;molecule design", "tldr": "We combine a GFlowNet policy and an action-value estimate, Q, into mixture policies that get better rewards without losing diversity", "abstract": "Generative Flow Networks (GFlowNets; GFNs) are a family of energy-based generative methods for combinatorial objects, capable of generating diverse and high-utility samples. However, consistently biasing GFNs towards producing high-utility samples is non-trivial. In this work, we leverage connections between GFNs and reinforcement learning (RL) and propose to combine the GFN policy with an action-value estimate, Q, to create greedier sampling policies which can be controlled by a mixing parameter. 
We show that several variants of the proposed method, QGFN, are able to improve on the number of high-reward samples generated in a variety of tasks without sacrificing diversity.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93895"} +{"video_file": "kQMyiDWbOG_39025666.mp4", "openreview_id": "kQMyiDWbOG", "slideslive_id": 39025666, "venue": "nips2024", "title": "Advancing Spiking Neural Networks for Sequential Modeling with Central Pattern Generators", "status": "Spotlight", "keywords": "Spiking Neural Networks;Central Pattern Generators;Positional Encoding", "tldr": "Inspired by central pattern generators, we propose a novel positional encoding technique tailored for spiking neural networks.", "abstract": "Spiking neural networks (SNNs) represent a promising approach to developing artificial neural networks that are both energy-efficient and biologically plausible. However, applying SNNs to sequential tasks, such as text classification and time-series forecasting, has been hindered by the challenge of creating an effective and hardware-friendly spike-form positional encoding (PE) strategy. Drawing inspiration from the central pattern generators (CPGs) in the human brain, which produce rhythmic patterned outputs without requiring rhythmic inputs, we propose a novel PE technique for SNNs, termed CPG-PE. We demonstrate that the commonly used sinusoidal PE is mathematically a specific solution to the membrane potential dynamics of a particular CPG. Moreover, extensive experiments across various domains, including time-series forecasting, natural language processing, and image classification, show that SNNs with CPG-PE outperform their conventional counterparts. Additionally, we perform analysis experiments to elucidate the mechanism through which SNNs encode positional information and to explore the function of CPGs in the human brain. This investigation may offer valuable insights into the fundamental principles of neural computation.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/93894"} +{"video_file": "kQPzFiwVIu_39026897.mp4", "openreview_id": "kQPzFiwVIu", "slideslive_id": 39026897, "venue": "nips2024", "title": "Synthetic Programming Elicitation for Text-to-Code in Very Low-Resource Programming and Formal Languages", "status": "Poster", "keywords": "Text-to-Code;Low-Resource Programming Languages;MAX-SAT;Parsing;Program Repair", "tldr": "Design an intermediate language and use a MAX-SAT solver to improve LLM-based text-to-code for very low resource programming langauges.", "abstract": "Recent advances in large language models (LLMs) for code applications have demonstrated remarkable zero-shot fluency and instruction following on challenging code related tasks ranging from test case generation to self-repair. Unsurprisingly, however, models struggle to compose syntactically valid programs in programming languages unrepresented in pre-training, referred to as very low-resource Programming Languages (VLPLs). VLPLs appear in crucial settings, including domain-specific languages for internal tools, tool-chains for legacy languages, and formal verification frameworks. Inspired by a technique called natural programming elicitation, we propose designing an intermediate language that LLMs ``naturally'' know how to use and which can be automatically compiled to a target VLPL. 
When LLMs generate code that lies outside of this intermediate language, we use compiler techniques to repair the code into programs in the intermediate language. Overall, we introduce synthetic programming elicitation and compilation (SPEAC), an approach that enables LLMs to generate syntactically valid code even for VLPLs. We empirically evaluate the performance of SPEAC in a case study for the UCLID5 formal verification language and find that, compared to existing retrieval and fine-tuning baselines, SPEAC produces syntactically correct programs more frequently and without sacrificing semantic correctness.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/93893"}
{"video_file": "kRwQCAIA7z_39028678.mp4", "openreview_id": "kRwQCAIA7z", "slideslive_id": 39028678, "venue": "nips2024", "title": "Dimension-free Private Mean Estimation for Anisotropic Distributions", "status": "Poster", "keywords": "differential privacy;mean estimation;anisotropic;covariance-adaptive error", "tldr": "We present private mean estimators for anisotropic distributions with dimension-free sample complexity, which we prove is optimal. We also give an estimator under unknown covariance, with a dimension-dependence that is milder than in prior work.", "abstract": "We present differentially private algorithms for high-dimensional mean estimation. Previous private estimators on distributions over R^d suffer from a curse of dimensionality, as they require \u03a9(d^{1/2}) samples to achieve non-trivial error, even in cases where O(1) samples suffice without privacy. This rate is unavoidable when the distribution is isotropic, namely, when the covariance is a multiple of the identity matrix. Yet, real-world data is often highly anisotropic, with signals concentrated on a small number of principal components. We develop estimators that are appropriate for such signals---our estimators are (\u03b5, \u03b4)-differentially private and have sample complexity that is dimension-independent for anisotropic subgaussian distributions. Given n samples from a distribution with known covariance-proxy \u03a3 and unknown mean \u03bc, we present an estimator \u03bc^ that achieves error |\u03bc^ \u2212 \u03bc|_2 \u2264 \u03b1, as long as n \u2273 tr(\u03a3)/\u03b1^2 + tr(\u03a3^{1/2})/(\u03b1\u03b5). We show that this is the optimal sample complexity for this task up to logarithmic factors. Moreover, for the case of unknown covariance, we present an algorithm whose sample complexity has improved dependence on the dimension, from d^{1/2} to d^{1/4}.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/93891"}
{"video_file": "kTtK65vKvD_39028212.mp4", "openreview_id": "kTtK65vKvD", "slideslive_id": 39028212, "venue": "nips2024", "title": "ODGEN: Domain-specific Object Detection Data Generation with Diffusion Models", "status": "Poster", "keywords": "Object Detection Dataset Generation;Complex Scene Synthesis;Domain-Specific;Diffusion Models", "tldr": "We propose a novel method to control diffusion models with bounding box labels, exhibiting robustness in handling complex scenes and specific domains and enabling detector enhancement with synthetic data.", "abstract": "Modern diffusion-based image generative models have made significant progress and become promising to enrich training data for the object detection task. 
However, the generation quality and the controllability for complex scenes containing multi-class objects and dense objects with occlusions remain limited. This paper presents ODGEN, a novel method to generate high-quality images conditioned on bounding boxes, thereby facilitating data synthesis for object detection. Given a domain-specific object detection dataset, we first fine-tune a pre-trained diffusion model on both cropped foreground objects and entire images to fit target distributions. Then we propose to control the diffusion model using synthesized visual prompts with spatial constraints and object-wise textual descriptions. ODGEN exhibits robustness in handling complex scenes and specific domains. Further, we design a dataset synthesis pipeline to evaluate ODGEN on 7 domain-specific benchmarks to demonstrate its effectiveness. Adding training data generated by ODGEN improves up to 25.3% mAP@.50:.95 with object detectors like YOLOv5 and YOLOv7, outperforming prior controllable generative methods. In addition, we design an evaluation protocol based on COCO-2014 to validate ODGEN in general domains and observe an advantage up to 5.6% in mAP@.50:.95 against existing methods.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/93889"} +{"video_file": "kVL5rvkqGG_39026692.mp4", "openreview_id": "kVL5rvkqGG", "slideslive_id": 39026692, "venue": "nips2024", "title": "Repurposing Language Models into Embedding Models: Finding the Compute-Optimal Recipe", "status": "Poster", "keywords": "text embedding;embedding models;scaling laws", "tldr": "Providing a compute-optimal recipe for efficient contrastive fine-tuning of pretrained language models into embedding models.", "abstract": "Text embeddings are essential for tasks such as document retrieval, clustering, and semantic similarity assessment. In this paper, we study how to contrastively train text embedding models in a compute-optimal fashion, given a suite of pretrained decoder-only language models. Our innovation is an algorithm that produces optimal configurations of model sizes, data quantities, and fine-tuning methods for text-embedding models at different computational budget levels. The resulting recipe, which we obtain through extensive experiments, can be used by practitioners to make informed design choices for their embedding models. Specifically, our findings suggest that full fine-tuning and Low-Rank Adaptation fine-tuning produce optimal models at lower and higher computational budgets respectively.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93887"} +{"video_file": "kXKrLsR4aJ_39027047.mp4", "openreview_id": "kXKrLsR4aJ", "slideslive_id": 39027047, "venue": "nips2024", "title": "Input-to-State Stable Coupled Oscillator Networks for Closed-form Model-based Control in Latent Space", "status": "Spotlight", "keywords": "Dynamical Systems;Control Theory;Robotics;Decision and Control;Deep Autoencoders", "tldr": "We leverage input-to-state stable coupled oscillator networks for conducting model-based control in latent space.", "abstract": "Even though a variety of methods have been proposed in the literature, efficient and effective latent-space control (i.e., control in a learned low-dimensional space) of physical systems remains an open challenge. 
We argue that a promising avenue is to leverage powerful and well-understood closed-form strategies from control theory literature in combination with learned dynamics, such as potential-energy shaping. We identify three fundamental shortcomings in existing latent-space models that have so far prevented this powerful combination: (i) they lack the mathematical structure of a physical system, (ii) they do not inherently conserve the stability properties of the real systems, (iii) these methods do not have an invertible mapping between input and latent-space forcing. This work proposes a novel Coupled Oscillator Network (CON) model that simultaneously tackles all these issues. More specifically, (i) we show analytically that CON is a Lagrangian system - i.e., it possesses well-defined potential and kinetic energy terms. Then, (ii) we provide formal proof of global Input-to-State stability using Lyapunov arguments. Moving to the experimental side, we demonstrate that CON reaches SoA performance when learning complex nonlinear dynamics of mechanical systems directly from images. An additional methodological innovation contributing to achieving this third goal is an approximated closed-form solution for efficient integration of network dynamics, which eases efficient training. We tackle (iii) by approximating the forcing-to-input mapping with a decoder that is trained to reconstruct the input based on the encoded latent space force. Finally, we leverage these three properties and show that they enable latent-space control. We use an integral-saturated PID with potential force compensation and demonstrate high-quality performance on a soft robot using raw pixels as the only feedback information.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/93881"} +{"video_file": "kamAXSJxGV_39025200.mp4", "openreview_id": "kamAXSJxGV", "slideslive_id": 39025200, "venue": "nips2024", "title": "Prior-itizing Privacy: A Bayesian Approach to Setting the Privacy Budget in Differential Privacy", "status": "Poster", "keywords": "confidentiality;disclosure;risk;semantics;utility", "tldr": "We propose a framework for setting epsilon for data releases satisfying differential privacy.", "abstract": "When releasing outputs from confidential data, agencies need to balance the analytical usefulness of the released data with the obligation to protect data subjects' confidentiality. For releases satisfying differential privacy, this balance is reflected by the privacy budget,\n\u03b5\n. We provide a framework for setting\n\u03b5\nbased on its relationship with Bayesian posterior probabilities of disclosure. The agency responsible for the data release decides how much posterior risk it is willing to accept at various levels of prior risk, which implies a unique\n\u03b5\n. 
Agencies can evaluate different risk profiles to determine one that leads to an acceptable trade-off in risk and utility.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/93876"} +{"video_file": "kfdEXQu6MC_39028039.mp4", "openreview_id": "kfdEXQu6MC", "slideslive_id": 39028039, "venue": "nips2024", "title": "A generalized neural tangent kernel for surrogate gradient learning", "status": "Spotlight", "keywords": "Neural Tangent Kernel;Surrogate Gradient Descent;Binary Neural Networks;Infinite Width", "tldr": "We derive a generalized neural tangent kernel that describes surrogate gradient learning.", "abstract": "State-of-the-art neural network training methods depend on the gradient of the network function. Therefore, they cannot be applied to networks whose activation functions do not have useful derivatives, such as binary and discrete-time spiking neural networks. To overcome this problem, the activation function's derivative is commonly substituted with a surrogate derivative, giving rise to surrogate gradient learning (SGL). This method works well in practice but lacks theoretical foundation.\nThe neural tangent kernel (NTK) has proven successful in the analysis of gradient descent. Here, we provide a generalization of the NTK, which we call the surrogate gradient NTK, that enables the analysis of SGL. First, we study a naive extension of the NTK to activation functions with jumps, demonstrating that gradient descent for such activation functions is also ill-posed in the infinite-width limit. To address this problem, we generalize the NTK to gradient descent with surrogate derivatives, i.e., SGL. We carefully define this generalization and expand the existing key theorems on the NTK with mathematical rigor. Further, we illustrate our findings with numerical experiments. Finally, we numerically compare SGL in networks with sign activation function and finite width to kernel regression with the surrogate gradient NTK; the results confirm that the surrogate gradient NTK provides a good characterization of SGL.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/93872"} +{"video_file": "kk0Eaunc58_39026807.mp4", "openreview_id": "kk0Eaunc58", "slideslive_id": 39026807, "venue": "nips2024", "title": "HydraViT: Stacking Heads for a Scalable ViT", "status": "Poster", "keywords": "Deep Learning;Transformers;Vision Transformers;Scalable Transformers", "tldr": "By sorting attention heads during training, we enable flexible inference that adapts to diverse hardware constraints by dropping the least important heads.", "abstract": "The architecture of Vision Transformers (ViTs), particularly the Multi-head Attention (MHA) mechanism, imposes substantial hardware demands. Deploying ViTs on devices with varying constraints, such as mobile phones, requires multiple models of different sizes. However, this approach has limitations, such as training and storing each required model separately. This paper introduces HydraViT, a novel approach that addresses these limitations by stacking attention heads to achieve a scalable ViT. By repeatedly changing the size of the embedded dimensions throughout each layer and their corresponding number of attention heads in MHA during training, HydraViT induces multiple subnetworks. Thereby, HydraViT achieves adaptability across a wide spectrum of hardware environments while maintaining performance. 
Our experimental results demonstrate the efficacy of HydraViT in achieving a scalable ViT with up to 10 subnetworks, covering a wide range of resource constraints. HydraViT achieves up to 5 p.p. more accuracy with the same GMACs and up to 7 p.p. more accuracy with the same throughput on ImageNet-1K compared to the baselines, making it an effective solution for scenarios where hardware availability is diverse or varies over time. The source code is available at https://github.com/ds-kiel/HydraViT.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93871"} +{"video_file": "klsyhjLlX5_39024515.mp4", "openreview_id": "klsyhjLlX5", "slideslive_id": 39024515, "venue": "nips2024", "title": "Group-wise oracle-efficient algorithms for online multi-group learning", "status": "Poster", "keywords": "multi-group learning;online learning;oracle-efficient", "tldr": "We develop algorithms for achieving sublinear regret in online multi-group learning when the collection of groups is exponentially large or infinite.", "abstract": "We study the problem of online multi-group learning, a learning model in which an online learner must simultaneously achieve small prediction regret on a large collection of (possibly overlapping) subsequences corresponding to a family of groups. Groups are subsets of the context space, and in fairness applications, they may correspond to subpopulations defined by expressive functions of demographic attributes. In this paper, we design such oracle-efficient algorithms with sublinear regret under a variety of settings, including: (i) the i.i.d. setting, (ii) the adversarial setting with smoothed context distributions, and (iii) the adversarial transductive setting.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93868"} +{"video_file": "kngLs5H6l1_39027016.mp4", "openreview_id": "kngLs5H6l1", "slideslive_id": 39027016, "venue": "nips2024", "title": "Normal-GS: 3D Gaussian Splatting with Normal-Involved Rendering", "status": "Poster", "keywords": "neural rendering;3D Gaussian Splatting;neural radiance field;computer vision;computer graphics", "tldr": "We propose a normal-invovled rendering strategy for 3DGS, termed Normal-GS, which help enhance both the rendering quality and the normal estimation accuracy.", "abstract": "Rendering and reconstruction are long-standing topics in computer vision and graphics. Achieving both high rendering quality and accurate geometry is a challenge. Recent advancements in 3D Gaussian Splatting (3DGS) have enabled high-fidelity novel view synthesis at real-time speeds. However, the noisy and discrete nature of 3D Gaussian primitives hinders accurate surface estimation. Previous attempts to regularize 3D Gaussian normals often degrade rendering quality due to the fundamental disconnect between normal vectors and the rendering pipeline in 3DGS-based methods. Therefore, we introduce Normal-GS, a novel approach that integrates normal vectors into the 3DGS rendering pipeline. The core idea is to model the interaction between normals and incident lighting using the physically-based rendering equation. Our approach re-parameterizes surface colors as the product of normals and a designed Integrated Directional Illumination Vector (IDIV). To optimize memory usage and simplify optimization, we employ an anchor-based 3DGS to implicitly encode locally-shared IDIVs. 
Additionally, Normal-GS leverages optimized normals and Integrated Directional Encoding (IDE) to accurately model specular effects, enhancing both rendering quality and surface normal precision. Extensive experiments demonstrate that Normal-GS achieves near state-of-the-art visual quality while obtaining accurate surface normals and preserving real-time rendering performance.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93867"} +{"video_file": "kpo6ZCgVZH_39028190.mp4", "openreview_id": "kpo6ZCgVZH", "slideslive_id": 39028190, "venue": "nips2024", "title": "Functional Gradient Flows for Constrained Sampling", "status": "Poster", "keywords": "particle-based variational inference;constrained sampling;functional gradient flow;boundary integral", "tldr": "A new functional gradient particle-based variational inference method for sampling on constrained domains.", "abstract": "Recently, through a unified gradient flow perspective of Markov chain Monte Carlo (MCMC) and variational inference (VI), particle-based variational inference methods (ParVIs) have been proposed that tend to combine the best of both worlds. While typical ParVIs such as Stein Variational Gradient Descent (SVGD) approximate the gradient flow within a reproducing kernel Hilbert space (RKHS), many attempts have been made recently to replace RKHS with more expressive function spaces, such as neural networks. While successful, these methods are mainly designed for sampling from unconstrained domains. In this paper, we offer a general solution to constrained sampling by introducing a boundary condition for the gradient flow which would confine the particles within the specific domain. This allows us to propose a new functional gradient ParVI method for constrained sampling, called constrained functional gradient flow (CFG), with provable continuous-time convergence in total variation (TV). We also present novel numerical strategies to handle the boundary integral term arising from the domain constraints. Our theory and experiments demonstrate the effectiveness of the proposed framework.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93866"} +{"video_file": "kr7eN85mIT_39027957.mp4", "openreview_id": "kr7eN85mIT", "slideslive_id": 39027957, "venue": "nips2024", "title": "Tell What You Hear From What You See - Video to Audio Generation Through Text", "status": "Poster", "keywords": "multi-modal learning;audio-visual learning;multi-modal large-language-model;text-guided video-to-audio generation;video-to-audio captioning", "tldr": "A novel multi-modal generation framework for text guided video-to-audio generation and video-to-audio captioning.", "abstract": "The content of visual and audio scenes is multi-faceted such that a video stream can be paired with various audio streams and vice-versa. Thereby, in video-to-audio generation task, it is imperative to introduce steering approaches for controlling the generated audio. While Video-to-Audio generation is a well-established generative task, existing methods lack such controllability. In this work, we propose VATT, a multi-modal generative framework that takes a video and an optional text prompt as input, and generates audio and optional textual description (caption) of the audio. 
Such a framework has two unique advantages: i) Video-to-Audio generation process can be refined and controlled via text which complements the context of the visual information, and ii) The model can suggest what audio to generate for the video by generating audio captions. VATT consists of two key modules: VATT Converter, which is an LLM that has been fine-tuned for instructions and includes a projection layer that maps video features to the LLM vector space, and VATT Audio, a bi-directional transformer that generates audio tokens from visual frames and from optional text prompt using iterative parallel decoding. The audio tokens and the text prompt are used by a pretrained neural codec to convert them into a waveform. Our experiments show that when VATT is compared to existing video-to-audio generation methods in objective metrics, such as VGGSound audiovisual dataset, it achieves competitive performance when the audio caption is not provided. When the audio caption is provided as a prompt, VATT achieves even more refined performance (with lowest KLD score of 1.41). Furthermore, subjective studies asking participants to choose the most compatible generated audio for a given silent video, show that VATT Audio has been chosen on average as a preferred generated audio than the audio generated by existing methods. VATT enables controllable video-to-audio generation through text as well as suggesting text prompts for videos through audio captions, unlocking novel applications such as text-guided video-to-audio generation and video-to-audio captioning.", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/93863"} +{"video_file": "kzJ9P7VPnS_39024686.mp4", "openreview_id": "kzJ9P7VPnS", "slideslive_id": 39024686, "venue": "nips2024", "title": "LP-3DGS: Learning to Prune 3D Gaussian Splatting", "status": "Poster", "keywords": "Novel view synthesis;Gaussian splatting;Learn to prune", "tldr": "We propose a learning method to prune the points in 3D Gaussian Splatting by applying trainable mask to the importance score of points and minimize the model size with only one-time training while maintaining the rendering quality.", "abstract": "Recently, 3D Gaussian Splatting (3DGS) has become one of the mainstream methodologies for novel view synthesis (NVS) due to its high quality and fast rendering speed. However, as a point-based scene representation, 3DGS potentially generates a large number of Gaussians to fit the scene, leading to high memory usage. Improvements that have been proposed require either an empirical pre-set pruning ratio or importance score threshold to prune the point cloud. Such hyperparameters require multiple rounds of training to optimize and achieve the maximum pruning ratio while maintaining the rendering quality for each scene. In this work, we propose learning-to-prune 3DGS (LP-3DGS), where a trainable binary mask is applied to the importance score to automatically find a favorable pruning ratio. Instead of using the traditional straight-through estimator (STE) method to approximate the binary mask gradient, we redesign the masking function to leverage the Gumbel-Sigmoid method, making it differentiable and compatible with the existing training process of 3DGS. 
Extensive experiments have shown that LP-3DGS consistently achieves a good balance between efficiency and high quality.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93859"} +{"video_file": "l04i6dPMxK_39025277.mp4", "openreview_id": "l04i6dPMxK", "slideslive_id": 39025277, "venue": "nips2024", "title": "Bandits with Abstention under Expert Advice", "status": "Poster", "keywords": "Multi-armed bandits;Expert advice;Abstention;Contextual bandits", "tldr": "We study bandits with expert advice when given an option to abstain from making any action, and achieve novel reward bounds for confidence rated predictors.", "abstract": "We study the classic problem of prediction with expert advice under bandit feedback. Our model assumes that one action, corresponding to the learner's abstention from play, has no reward or loss on every trial. We propose the CBA (Confidence-rated Bandits with Abstentions) algorithm, which exploits this assumption to obtain reward bounds that can significantly improve those of the classical Exp4 algorithm. Our problem can be construed as the aggregation of confidence-rated predictors, with the learner having the option to abstain from play. We are the first to achieve bounds on the expected cumulative reward for general confidence-rated predictors. In the special case of specialists, we achieve a novel reward bound, significantly improving previous bounds of SpecialistExp (treating abstention as another action). We discuss how CBA can be applied to the problem of adversarial contextual bandits with the option of abstaining from selecting any action. We are able to leverage a wide range of inductive biases, outperforming previous approaches both theoretically and in preliminary experimental analysis. Additionally, we achieve a reduction in runtime from quadratic to almost linear in the number of contexts for the specific case of metric space contexts.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/93858"} +{"video_file": "l2yvtrz3On_39028024.mp4", "openreview_id": "l2yvtrz3On", "slideslive_id": 39028024, "venue": "nips2024", "title": "Improved Sample Complexity for Multiclass PAC Learning", "status": "Poster", "keywords": "Multiclass learning;PAC learning;Statistical learning;List learning", "tldr": "We improve the upper bound of the sample complexity of multiclass PAC learning.", "abstract": "We aim to understand the optimal PAC sample complexity in multiclass learning. While finiteness of the Daniely-Shalev-Shwartz (DS) dimension has been shown to characterize the PAC learnability of a concept class [Brukhim, Carmon, Dinur, Moran, and Yehudayoff, 2022], there exist polylog factor gaps in the leading term of the sample complexity. In this paper, we reduce the gap in terms of the dependence on the error parameter to a single log factor and also propose two possible routes towards completely resolving the optimal sample complexity, each based on a key open question we formulate: one concerning list learning with bounded list size, the other concerning a new type of shifting for multiclass concept classes. 
We prove that a positive answer to either of the two questions would completely resolve the optimal sample complexity up to log factors of the DS dimension.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93856"} +{"video_file": "l6iICoILGB_39028815.mp4", "openreview_id": "l6iICoILGB", "slideslive_id": 39028815, "venue": "nips2024", "title": "Practical $0.385$-Approximation for Submodular Maximization Subject to a Cardinality Constraint", "status": "Poster", "keywords": "Submodular maximization;Discrete optimization;Machine learning", "tldr": "In this work, we present a novel algorithm for submodular maximization subject to a cardinality constraint that combines a guarantee of \n0.385\n-approximation with a low and practical query complexity of \nO\n(\nn\n+\nk\n2\n)\n.", "abstract": "Non-monotone constrained submodular maximization plays a crucial role in various machine learning applications. However, existing algorithms often struggle with a trade-off between approximation guarantees and practical efficiency. The current state-of-the-art is a recent\n0.401\n-approximation algorithm, but its computational complexity makes it highly impractical. The best practical algorithms for the problem only guarantee\n1\n/\ne\n-approximation. In this work, we present a novel algorithm for submodular maximization subject to a cardinality constraint that combines a guarantee of\n0.385\n-approximation with a low and practical query complexity of\nO\n(\nn\n+\nk\n2\n)\n. Furthermore, we evaluate our algorithm's performance through extensive machine learning applications, including Movie Recommendation, Image Summarization, and more. These evaluations demonstrate the efficacy of our approach.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93853"} +{"video_file": "lBh5kuuY1L_39027979.mp4", "openreview_id": "lBh5kuuY1L", "slideslive_id": 39027979, "venue": "nips2024", "title": "TurboHopp: Accelerated Molecule Scaffold Hopping with Consistency Models", "status": "Poster", "keywords": "Scaffold Hopping;Consistency Models;Diffusion Models;3D Structure-Based Drug Design;Reinforcement Learning;Drug Discovery;Generative Models", "tldr": "Fast, and efficient E(3)-equivariant scaffold-hopping model utilizing consistency models for rapid generation additionally powered by RL", "abstract": "Navigating the vast chemical space of druggable compounds is a formidable challenge in drug discovery, where generative models are increasingly employed to identify viable candidates. Conditional 3D structure-based drug design (3D-SBDD) models, which take into account complex three-dimensional interactions and molecular geometries, are particularly promising. Scaffold hopping is an efficient strategy that facilitates the identification of similar active compounds by strategically modifying the core structure of molecules, effectively narrowing the wide chemical space and enhancing the discovery of drug-like products. However, the practical application of 3D-SBDD generative models is hampered by their slow processing speeds. To address this bottleneck, we introduce TurboHopp, an accelerated pocket-conditioned 3D scaffold hopping model that merges the strategic effectiveness of traditional scaffold hopping with rapid generation capabilities of consistency models. 
This synergy not only enhances efficiency but also significantly boosts generation speeds, achieving up to 30 times faster inference speed as well as superior generation quality compared to existing diffusion-based models, establishing TurboHopp as a powerful tool in drug discovery. Supported by faster inference speed, we further optimize our model, using Reinforcement Learning for Consistency Models (RLCM), to output desirable molecules. We demonstrate the broad applicability of TurboHopp across multiple drug discovery scenarios, underscoring its potential in diverse molecular settings.The code is provided at https://github.com/orgw/TurboHopp", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/93849"} +{"video_file": "lBp2cda7sp_39026981.mp4", "openreview_id": "lBp2cda7sp", "slideslive_id": 39026981, "venue": "nips2024", "title": "RMLR: Extending Multinomial Logistic Regression into General Geometries", "status": "Poster", "keywords": "Riemannian neural networks;Matrix manifolds;SPD manifolds;Special orthogonal groups", "tldr": "We propose a general framework of building intrinsic Riemannian classifiers for general geometries , and showcase our framework on the SPD manifold and special orthogonal group.", "abstract": "Riemannian neural networks, which extend deep learning techniques to Riemannian spaces, have gained significant attention in machine learning. To better classify the manifold-valued features, researchers have started extending Euclidean multinomial logistic regression (MLR) into Riemannian manifolds. However, existing approaches suffer from limited applicability due to their strong reliance on specific geometric properties. This paper proposes a framework for designing Riemannian MLR over general geometries, referred to as RMLR. Our framework only requires minimal geometric properties, thus exhibiting broad applicability and enabling its use with a wide range of geometries. Specifically, we showcase our framework on the Symmetric Positive Definite (SPD) manifold and special orthogonal group, i.e., the set of rotation matrices. On the SPD manifold, we develop five families of SPD MLRs under five types of power-deformed metrics. On rotation matrices we propose Lie MLR based on the popular bi-invariant metric. Extensive experiments on different Riemannian backbone networks validate the effectiveness of our framework.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93848"} +{"video_file": "lIH6oCdppg_39026848.mp4", "openreview_id": "lIH6oCdppg", "slideslive_id": 39026848, "venue": "nips2024", "title": "On the Role of Attention Masks and LayerNorm in Transformers", "status": "Poster", "keywords": "attention mechanism;transformers;layer normalization;deep learning theory;dynamical systems", "tldr": "We rigorously show that sparse or local masked attention can mitigate rank collapse of tokens, while LayerNorm can prevent it in transformers, both enhancing model expressivity.", "abstract": "Self-attention is the key mechanism of transformers, which are the essential building blocks of modern foundation models. Recent studies have shown that pure self-attention suffers from an increasing degree of rank collapse as depth increases, limiting model expressivity and further utilization of model depth. The existing literature on rank collapse, however, has mostly overlooked other critical components in transformers that may alleviate the rank collapse issue. 
In this paper, we provide a general analysis of rank collapse under self-attention, taking into account the effects of attention masks and layer normalization (LayerNorm). In particular, we find that although pure masked attention still suffers from exponential collapse to a rank one subspace, sparse or local masked attention can provably slow down the collapse rate. In the case of self-attention with LayerNorm, we first show that for certain classes of value matrices, collapse to a rank one subspace still happens exponentially. However, through construction of nontrivial counterexamples, we then establish that with proper choice of value matrices, a general class of sequences may not converge to a rank one subspace, and the self-attention dynamics with LayerNorm can simultaneously possess a rich set of equilibria with any possible rank between one and full. Our result refutes the previous hypothesis that LayerNorm plays no role in the rank collapse of self-attention and suggests that self-attention with LayerNorm constitutes a much more expressive, versatile nonlinear dynamical system than what was originally thought.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93840"} +{"video_file": "lKnl4CLhhS_39026382.mp4", "openreview_id": "lKnl4CLhhS", "slideslive_id": 39026382, "venue": "nips2024", "title": "Efficient and Private Marginal Reconstruction with Local Non-Negativity", "status": "Poster", "keywords": "differential privacy;query release;synthetic data", "tldr": "We propose a novel, scalable method for reconstructing answers to marginal queries. It makes existing mechanisms more scalable, accurate, or both.", "abstract": "Differential privacy is the dominant standard for formal and quantifiable privacy and has been used in major deployments that impact millions of people. Many differentially private algorithms for query release and synthetic data contain steps that reconstruct answers to queries from answers to other queries that have been measured privately. Reconstruction is an important subproblem for such mechanisms to economize the privacy budget, minimize error on reconstructed answers, and allow for scalability to high-dimensional datasets. In this paper, we introduce a principled and efficient postprocessing method ReM (Residuals-to-Marginals) for reconstructing answers to marginal queries. Our method builds on recent work on efficient mechanisms for marginal query release, based on making measurements using a residual query basis that admits efficient pseudoinversion, which is an important primitive used in reconstruction. An extension GReM-LNN (Gaussian Residuals-to-Marginals with Local Non-negativity) reconstructs marginals under Gaussian noise satisfying consistency and non-negativity, which often reduces error on reconstructed answers. We demonstrate the utility of ReM and GReM-LNN by applying them to improve existing private query answering mechanisms.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/93838"} +{"video_file": "lNCsyA5uS1_39026029.mp4", "openreview_id": "lNCsyA5uS1", "slideslive_id": 39026029, "venue": "nips2024", "title": "Thought of Search: Planning with Language Models Through The Lens of Efficiency", "status": "Poster", "keywords": "planning;large language models;search", "tldr": "We find that recent trends in using LLMs for planning are profoundly uneconomical, unsound and incomplete. 
We propose a significantly more efficient approach that is sound and complete and argue for a responsible use of compute resources.", "abstract": "Among the most important properties of algorithms investigated in computer science are soundness, completeness, and complexity. These properties, however, are rarely analyzed for the vast collection of recently proposed methods for planning with large language models. In this work, we alleviate this gap. We analyse these properties of using LLMs for planning and highlight that recent trends abandon both soundness and completeness for the sake of inefficiency. We propose a significantly more efficient approach that can, at the same time, maintain both soundness and completeness. We exemplify on four representative search problems, comparing to the LLM-based solutions from the literature that attempt to solve these problems. We show that by using LLMs to produce the code for the search components we can solve the entire datasets with 100% accuracy with only a few calls to the LLM. In contrast, the compared approaches require hundreds of thousands of calls and achieve significantly lower accuracy. We argue for a responsible use of compute resources; urging research community to investigate sound and complete LLM-based approaches that uphold efficiency.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93837"} +{"video_file": "lOMHt16T8R_39025641.mp4", "openreview_id": "lOMHt16T8R", "slideslive_id": 39025641, "venue": "nips2024", "title": "PaCE: Parsimonious Concept Engineering for Large Language Models", "status": "Poster", "keywords": "Large Language Model;Sparse Coding;Trustworthy Machine Learning", "tldr": "Parsimonious Concept Engineering (PaCE) uses sparse coding on a large-scale concept dictionary to precisely control and modify a language model's neural activations, effectively improving the model trustworthiness.", "abstract": "Large Language Models (LLMs) are being used for a wide variety of tasks. While they are capable of generating human-like responses, they can also produce undesirable output including potentially harmful information, racist or sexist language, and hallucinations. Alignment methods are designed to reduce such undesirable output, via techniques such as fine-tuning, prompt engineering, and representation engineering. However, existing methods face several challenges: some require costly fine-tuning for every alignment task; some do not adequately remove undesirable concepts, failing alignment; some remove benign concepts, lowering the linguistic capabilities of LLMs. To address these issues, we propose Parsimonious Concept Engineering (PaCE), a novel activation engineering framework for alignment. First, to sufficiently model the concepts, we construct a large-scale concept dictionary in the activation space, in which each atom corresponds to a semantic concept. Given any alignment task, we instruct a concept partitioner to efficiently annotate the concepts as benign or undesirable. Then, at inference time, we decompose the LLM activations along the concept dictionary via sparse coding, to accurately represent the activations as linear combinations of benign and undesirable components. By removing the latter ones from the activations, we reorient the behavior of the LLM towards the alignment goal. 
We conduct experiments on tasks such as response detoxification, faithfulness enhancement, and sentiment revising, and show that PaCE achieves state-of-the-art alignment performance while maintaining linguistic capabilities.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93836"} +{"video_file": "lPDxPVS6ix_39027279.mp4", "openreview_id": "lPDxPVS6ix", "slideslive_id": 39027279, "venue": "nips2024", "title": "SPEAR: Exact Gradient Inversion of Batches in Federated Learning", "status": "Poster", "keywords": "Federated Learning;Exact Gradient Inversion;Gradient Leakage;Privacy;Attack", "tldr": "We present the first algorithm for exact gradient inversion on batch sizes \n>\n1\n to reconstruct inputs in the honest-but-curious federated learning setting.", "abstract": "Federated learning is a framework for collaborative machine learning where clients only share gradient updates and not their private data with a server. However, it was recently shown that gradient inversion attacks can reconstruct this data from the shared gradients. In the important honest-but-curious setting, existing attacks enable exact reconstruction only for batch size of\nb\n=\n1\n, with larger batches permitting only approximate reconstruction. In this work, we propose SPEAR, the first algorithm reconstructing whole batches with\nb\n>\n1\nexactly. SPEAR combines insights into the explicit low-rank structure of gradients with a sampling-based algorithm. Crucially, we leverage ReLU-induced gradient sparsity to precisely filter out large numbers of incorrect samples, making a final reconstruction step tractable. We provide an efficient GPU implementation for fully connected networks and show that it recovers high-dimensional ImageNet inputs in batches of up to\nb\n\u2272\n25\nexactly while scaling to large networks. Finally, we show theoretically that much larger batches can be reconstructed with high probability given exponential time.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/93833"} +{"video_file": "lPTWdyIY4O_39025310.mp4", "openreview_id": "lPTWdyIY4O", "slideslive_id": 39025310, "venue": "nips2024", "title": "The Selective $G$-Bispectrum and its Inversion: Applications to $G$-Invariant Networks", "status": "Poster", "keywords": "CNN;group invariance;bispectrum;Neural Network;AI", "tldr": "We propose a new layer in the architecture of Group-Equivariant Neural Networks to achieve invariance to group action on the input.", "abstract": "An important problem in signal processing and deep learning is to achieve invariance to nuisance factors not relevant for the task. Since many of these factors are describable as the action of a group\nG\n(e.g. rotations, translations, scalings), we want methods to be\nG\n-invariant. The\nG\n-Bispectrum extracts every characteristic of a given signal up to group action: for example, the shape of an object in an image, but not its orientation. Consequently, the\nG\n-Bispectrum has been incorporated into deep neural network architectures as a computational primitive for\nG\n-invariance\\textemdash akin to a pooling mechanism, but with greater selectivity and robustness. However, the computational cost of the\nG\n-Bispectrum (\nO\n(\n|\nG\n|\n2\n)\n, with\n|\nG\n|\nthe size of the group) has limited its widespread adoption. Here, we show that the\nG\n-Bispectrum computation contains redundancies that can be reduced into a selective\nG\n-Bispectrum with\nO\n(\n|\nG\n|\n)\ncomplexity. 
We prove desirable mathematical properties of the selective\nG\n-Bispectrum and demonstrate how its integration in neural networks enhances accuracy and robustness compared to traditional approaches, while enjoying considerable speeds-up compared to the full\nG\n-Bispectrum.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93832"} +{"video_file": "lQ45aR8L7D_39025780.mp4", "openreview_id": "lQ45aR8L7D", "slideslive_id": 39025780, "venue": "nips2024", "title": "Order-Independence Without Fine Tuning", "status": "Poster", "keywords": "LLMs;Multiple Choice Questions;Transformers;Positional Encodings;Modified Attention Mask", "tldr": "We present the Set-Based Prompting method which *guarantees* any LLM's outputs will be unaffected by reordering.", "abstract": "The development of generative language models that can create long and coherent textual outputs via autoregression has lead to a proliferation of uses and a corresponding sweep of analyses as researches work to determine the limitations of this new paradigm. Unlike humans, these 'Large Language Models' (LLMs) are highly sensitive to small changes in their inputs, leading to unwanted inconsistency in their behavior. One problematic inconsistency when LLMs are used to answer multiple-choice questions or analyze multiple inputs is order dependency: the output of an LLM can (and often does) change significantly when sub-sequences are swapped, despite both orderings being semantically identical. In this paper we present , a technique that guarantees the output of an LLM will not have order dependence on a specified set of sub-sequences. We show that this method provably eliminates order dependency, and that it can be applied to any transformer-based LLM to enable text generation that is unaffected by re-orderings. Delving into the implications of our method, we show that, despite our inputs being out of distribution, the impact on expected accuracy is small, where the expectation is over the order of uniformly chosen shuffling of the candidate responses, and usually significantly less in practice. Thus, can be used as a 'dropped-in' method on fully trained models. Finally, we discuss how our method's success suggests that other strong guarantees can be obtained on LLM performance via modifying the input representations.\nCode is available at github.com/reidmcy/set-based-prompting.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93831"} +{"video_file": "lV1wGHKd5x_39027684.mp4", "openreview_id": "lV1wGHKd5x", "slideslive_id": 39027684, "venue": "nips2024", "title": "Listenable Maps for Zero-Shot Audio Classifiers", "status": "Poster", "keywords": "Zero shot audio classifiers;Posthoc explanations", "tldr": "We propose a posthoc explanation method for zero shot audio classifiers.", "abstract": "Interpreting the decisions of deep learning models, including audio classifiers, is crucial for ensuring the transparency and trustworthiness of this technology. In this paper, we introduce LMAC-ZS (Listenable Maps for Zero-Shot Audio Classifiers), which, to the best of our knowledge, is the first decoder-based post-hoc explanation method for explaining the decisions of zero-shot audio classifiers. The proposed method utilizes a novel loss function that aims to closely reproduce the original similarity patterns between text-and-audio pairs in the generated explanations. 
We provide an extensive evaluation using the Contrastive Language-Audio Pretraining (CLAP) model to showcase that our interpreter remains faithful to the decisions in a zero-shot classification context. Moreover, we qualitatively show that our method produces meaningful explanations that correlate well with different text prompts.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/93828"} +{"video_file": "lYdjzx3DYu_39025130.mp4", "openreview_id": "lYdjzx3DYu", "slideslive_id": 39025130, "venue": "nips2024", "title": "EMR-Merging: Tuning-Free High-Performance Model Merging", "status": "Spotlight", "keywords": "Model Merging;Model Compression;Multi-task Learning;Supervised Finetuning", "tldr": "Existing model merging methods usually suffer from significant performance degradation or requring additional tuning. We realize tuning-free model merging, which shows impressive performance under various experimental settings.", "abstract": "The success of pretrain-finetune paradigm brings about the release of numerous model weights. In this case, merging models finetuned on different tasks to enable a single model with multi-task capabilities is gaining increasing attention for its practicability. Existing model merging methods usually suffer from (1) significant performance degradation or (2) requiring tuning by additional data or training. In this paper, we rethink and analyze the existing model merging paradigm. We discover that using a single model's weights can hardly simulate all the models' performance. To tackle this issue, we propose Elect, Mask & Rescale-Merging (EMR-Merging). We first (a) elect a unified model from all the model weights and then (b) generate extremely lightweight task-specific modulators, including masks and rescalers, to align the direction and magnitude between the unified model and each specific model, respectively. EMR-Merging is tuning-free, thus requiring no data availability or any additional training while showing impressive performance. We find that EMR-Merging shows outstanding performance compared to existing merging methods under different classical and newly-established settings, including merging different numbers of vision models (up to 30), NLP models, PEFT models, and multi-modal models.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/93822"} +{"video_file": "lZJ0WYI5YC_39026498.mp4", "openreview_id": "lZJ0WYI5YC", "slideslive_id": 39026498, "venue": "nips2024", "title": "Deep Learning in Medical Image Registration: Magic or Mirage?", "status": "Poster", "keywords": "image registration;image alignment;medical image registration;T1-weighed MRI;image alignment;deformable image registration;diffeomorphism;optimization;fairness;evaluation", "tldr": "This paper establishes the assumptions and conditions under which either classical and deep-learning image registration algorithms surpass each other", "abstract": "Classical optimization and learning-based methods are the two reigning paradigms in deformable image registration. While optimization-based methods boast generalizability across modalities and robust performance, learning-based methods promise peak performance, incorporating weak supervision and amortized optimization. However, the exact conditions for either paradigm to perform well over the other are shrouded and not explicitly outlined in the existing literature. 
In this paper, we make an explicit correspondence between the mutual information of the distribution of per-pixel intensity and labels, and the performance of classical registration methods. This strong correlation hints to the fact that architectural designs in learning-based methods is unlikely to affect this correlation, and therefore, the performance of learning-based methods. This hypothesis is thoroughly validated with state-of-the-art classical and learning-based methods. However, learning-based methods with weak supervision can perform high-fidelity intensity and label registration, which is not possible with classical methods. Next, we show that this high-fidelity feature learning does not translate to invariance to domain shift, and learning-based methods are sensitive to such changes in the data distribution. We reassess and recalibrate performance expectations from classical and DLIR methods under access to label supervision, training time, and its generalization capabilities under minor domain shifts.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/93821"} +{"video_file": "lbLC5OV9GY_39026455.mp4", "openreview_id": "lbLC5OV9GY", "slideslive_id": 39026455, "venue": "nips2024", "title": "VISA: Variational Inference with Sequential Sample-Average Approximations", "status": "Poster", "keywords": "Variational Inference;Sample Average Approximations;Importance Sampling", "tldr": "Forward-KL Variational Inference with Sequential Sample-Average Approximations", "abstract": "We present variational inference with sequential sample-average approximations (VISA), a method for approximate inference in computationally intensive models, such as those based on numerical simulations. VISA extends importance-weighted forward-KL variational inference by employing a sequence of sample-average approximations, which are considered valid inside a trust region. This makes it possible to reuse model evaluations across multiple gradient steps, thereby reducing computational cost. We perform experiments on high-dimensional Gaussians, Lotka-Volterra dynamics, and a Pickover attractor, which demonstrate that VISA can achieve comparable approximation accuracy to standard importance-weighted forward-KL variational inference with computational savings of a factor two or more for conservatively chosen learning rates.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93819"} +{"video_file": "lcALCNF2qe_39025678.mp4", "openreview_id": "lcALCNF2qe", "slideslive_id": 39025678, "venue": "nips2024", "title": "Towards Universal Mesh Movement Networks", "status": "Spotlight", "keywords": "PDE;Physical Simulation;Mesh Adaptation;Physical Science", "tldr": "We propose Universal Mesh Movement Networks (UM2N), which once trained, can be applied in a non-intrusive, zero-shot manner to move meshes with different sizes and structure, for solvers applicable to different PDE types and boundary geometries.", "abstract": "Solving complex Partial Differential Equations (PDEs) accurately and efficiently is an essential and challenging problem in all scientific and engineering disciplines. Mesh movement methods provide the capability to improve the accuracy of the numerical solution without increasing the overall mesh degree of freedom count. Conventional sophisticated mesh movement methods are extremely expensive and struggle to handle scenarios with complex boundary geometries. 
However, existing learning-based methods require re-training from scratch given a different PDE type or boundary geometry, which limits their applicability, and also often suffer from robustness issues in the form of inverted elements. In this paper, we introduce the Universal Mesh Movement Network (UM2N), which -- once trained -- can be applied in a non-intrusive, zero-shot manner to move meshes with different size distributions and structures, for solvers applicable to different PDE types and boundary geometries. UM2N consists of a Graph Transformer (GT) encoder for extracting features and a Graph Attention Network (GAT) based decoder for moving the mesh. We evaluate our method on advection and Navier-Stokes based examples, as well as a real-world tsunami simulation case. Our method out-performs existing learning-based mesh movement methods in terms of the benchmarks described above. In comparison to the conventional sophisticated Monge-Amp\u00e8re PDE-solver based method, our approach not only significantly accelerates mesh movement, but also proves effective in scenarios where the conventional method fails. Our project page can be found at https://erizmr.github.io/UM2N/.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/93817"} +{"video_file": "lckAdnVzsT_39024607.mp4", "openreview_id": "lckAdnVzsT", "slideslive_id": 39024607, "venue": "nips2024", "title": "Coherent 3D Scene Diffusion From a Single RGB Image", "status": "Poster", "keywords": "Single RGB Image 3D Scene Reconstruction;Diffusion Models;Scene Understanding;3D Scene Prior", "tldr": "A novel diffusion-based 3D scene diffusion model for 3D scene reconstruction from a single RGB image", "abstract": "We present a novel diffusion-based approach for coherent 3D scene reconstruction from a single RGB image. Our method utilizes an image-conditioned 3D scene diffusion model to simultaneously denoise the 3D poses and geometries of all objects within the scene.\nMotivated by the ill-posed nature of the task and to obtain consistent scene reconstruction results, we learn a generative scene prior by conditioning on all scene objects simultaneously to capture scene context and by allowing the model to learn inter-object relationships throughout the diffusion process.\nWe further propose an efficient surface alignment loss to facilitate training even in the absence of full ground-truth annotation, which is common in publicly available datasets. 
This loss leverages an expressive shape representation, which enables direct point sampling from intermediate shape predictions.\nBy framing the task of single RGB image 3D scene reconstruction as a conditional diffusion process, our approach surpasses current state-of-the-art methods, achieving a 12.04% improvement in AP3D on SUN RGB-D and a 13.43% increase in F-Score on Pix3D.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93816"} +{"video_file": "ldvfaYzG35_39028096.mp4", "openreview_id": "ldvfaYzG35", "slideslive_id": 39028096, "venue": "nips2024", "title": "Pedestrian-Centric 3D Pre-collision Pose and Shape Estimation from Dashcam Perspective", "status": "Poster", "keywords": "Pedestrian Pre-collision pose;Human pose and shape estimation;Dashcam Perspective;Pedestrian-Vehicle Collision Pose dataset", "tldr": "We construct the first Pedestrian-Vehicle Collision Pose (PVCP) dataset from the perspective of dashcam, and propose a pedestrian Pre-collision Pose and Shape Estimation network (PPSENet).", "abstract": "Pedestrian pre-collision pose is one of the key factors to determine the degree of pedestrian-vehicle injury in collision. Human pose estimation algorithm is an effective method to estimate pedestrian emergency pose from accident video. However, the pose estimation model trained by the existing daily human pose datasets has poor robustness under specific poses such as pedestrian pre-collision pose, and it is difficult to obtain human pose datasets in the wild scenes, especially lacking scarce data such as pedestrian pre-collision pose in traffic scenes. In this paper, we collect pedestrian-vehicle collision pose from the dashcam perspective of dashcam and construct the first Pedestrian-Vehicle Collision Pose dataset (PVCP) in a semi-automatic way, including 40k+ accident frames and 20K+ pedestrian pre-collision pose annotation (2D, 3D, Mesh). Further, we construct a Pedestrian Pre-collision Pose Estimation Network (PPSENet) to estimate the collision pose and shape sequence of pedestrians from pedestrian-vehicle accident videos. The PPSENet first estimates the 2D pose from the image (Image to Pose, ITP) and then lifts the 2D pose to 3D mesh (Pose to Mesh, PTM). Due to the small size of the dataset, we introduce a pre-training model that learns the human pose prior on a large number of pose datasets, and use iterative regression to estimate the pre-collision pose and shape of pedestrians. Further, we classify the pre-collision pose sequence and introduce pose class loss, which achieves the best accuracy compared with the existing relevant \\textit{state-of-the-art} methods. Code and data are available for research at https://github.com/wmj142326/PVCP.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93814"} +{"video_file": "leqD3bJ4Ly_39026630.mp4", "openreview_id": "leqD3bJ4Ly", "slideslive_id": 39026630, "venue": "nips2024", "title": "OPEL: Optimal Transport Guided ProcedurE Learning", "status": "Poster", "keywords": "Procedure learning;Egocentric vision;EgoProceL;Optimal Transport", "tldr": "An unsupervised optimal transport guided method for procedure learning, which achieves state-of-the-art results", "abstract": "Procedure learning refers to the task of identifying the key-steps and determining their logical order, given several videos of the same task. 
For both third-person and first-person (egocentric) videos, state-of-the-art (SOTA) methods aim at finding correspondences across videos in time to accomplish procedure learning. However, to establish temporal relationships within the sequences, these methods often rely on frame-to-frame mapping, or assume monotonic alignment of video pairs, leading to sub-optimal results. To this end, we propose to treat the video frames as samples from an unknown distribution, enabling us to frame their distance calculation as an optimal transport (OT) problem. Notably, the OT-based formulation allows us to relax the previously mentioned assumptions. To further improve performance, we enhance the OT formulation by introducing two regularization terms. The first, inverse difference moment regularization, promotes transportation between instances that are homogeneous in the embedding space as well as being temporally closer. The second, regularization based on the KL-divergence with an exponentially decaying prior smooths the alignment while enforcing conformity to the optimality (alignment obtained from vanilla OT optimization) and temporal priors. The resultant optimal transport guided procedure learning framework (`OPEL') significantly outperforms the SOTA on benchmark datasets. Specifically, we achieve 22.4% (IoU) and 26.9% (F1) average improvement compared to the current SOTA on large scale egocentric benchmark, EgoProceL. Furthermore, for the third person benchmarks (ProCeL and CrossTask), the proposed approach obtains 46.2% (F1) average enhancement over SOTA.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93812"} +{"video_file": "lfY0SUT3m9_39028272.mp4", "openreview_id": "lfY0SUT3m9", "slideslive_id": 39028272, "venue": "nips2024", "title": "Shuffling Gradient-Based Methods for Nonconvex-Concave Minimax Optimization", "status": "Poster", "keywords": "Shuffling gradient method; nonconvex-concave minimax problem; oracle complexity; sample without replacement", "tldr": "This paper develops two novel shuffling-based algorithms to solve two classes of nonconvex-concave minimax problems that have provable convergence guarantees.", "abstract": "This paper aims at developing novel shuffling gradient-based methods for tackling two classes of minimax problems: nonconvex-linear and nonconvex-strongly concave settings. The first algorithm addresses the nonconvex-linear minimax model and achieves the state-of-the-art oracle complexity typically observed in nonconvex optimization. It also employs a new shuffling estimator for the ``hyper-gradient'', departing from standard shuffling techniques in optimization. The second method consists of two variants: semi-shuffling and full-shuffling schemes. These variants tackle the nonconvex-strongly concave minimax setting. We establish their oracle complexity bounds under standard assumptions, which, to our best knowledge, are the best-known for this specific setting. Numerical examples demonstrate the performance of our algorithms and compare them with two other methods. 
Our results show that the new methods achieve comparable performance with SGD, supporting the potential of incorporating shuffling strategies into minimax algorithms.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93811"} +{"video_file": "lflwtGE6Vf_39024791.mp4", "openreview_id": "lflwtGE6Vf", "slideslive_id": 39024791, "venue": "nips2024", "title": "(FL)$^2$: Overcoming Few Labels in Federated Semi-Supervised Learning", "status": "Poster", "keywords": "Federated Learning;Semi-Supervised Learning;Federated Semi-Supervised Learning", "tldr": "Overcoming Few Labels in Federated Semi-Supervised Learning", "abstract": "Federated Learning (FL) is a distributed machine learning framework that trains accurate global models while preserving clients' privacy-sensitive data. However, most FL approaches assume that clients possess labeled data, which is often not the case in practice. Federated Semi-Supervised Learning (FSSL) addresses this label deficiency problem, targeting situations where only the server has a small amount of labeled data while clients do not. However, a significant performance gap exists between Centralized Semi-Supervised Learning (SSL) and FSSL. This gap arises from confirmation bias, which is more pronounced in FSSL due to multiple local training epochs and the separation of labeled and unlabeled data. We propose\n(\nF\nL\n)\n2\n, a robust training method for unlabeled clients using sharpness-aware consistency regularization. We show that regularizing the original pseudo-labeling loss is suboptimal, and hence we carefully select unlabeled samples for regularization. We further introduce client-specific adaptive thresholding and learning status-aware aggregation to adjust the training process based on the learning progress of each client. Our experiments on three benchmark datasets demonstrate that our approach significantly improves performance and bridges the gap with SSL, particularly in scenarios with scarce labeled data.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/93810"} +{"video_file": "lgtsXxk4dF_39024498.mp4", "openreview_id": "lgtsXxk4dF", "slideslive_id": 39024498, "venue": "nips2024", "title": "Clustering with Non-adaptive Subset Queries", "status": "Poster", "keywords": "Clustering;query algorithms", "tldr": "We provide efficient non-adaptive algorithms that query a near-linear (and near-optimal) number of subset-queries to infer clustering of a set of elements exactly.", "abstract": "Recovering the underlying clustering of a set\nU\nof\nn\npoints by asking pair-wise same-cluster queries has garnered significant interest in the last decade. Given a query\nS\n\u2282\nU\n,\n|\nS\n|\n=\n2\n, the oracle returns \"yes\" if the points are in the same cluster and \"no\" otherwise. We study a natural generalization of this problem to subset queries for\n|\nS\n|\n>\n2\n, where the oracle returns the number of clusters intersecting\nS\n. Our aim is to determine the minimum number of queries needed for exactly recovering an arbitrary\nk\n-clustering. We focus on non-adaptive schemes, where all the queries are asked in one round, thus allowing for the querying process to be parallelized, which is a highly desirable property.\nFor adaptive algorithms with pair-wise queries, the complexity is known to be\n\u0398\n(\nn\nk\n)\n, where\nk\nis the number of clusters. 
In contrast, non-adaptive pair-wise query algorithms are extremely limited: even for k = 3, such algorithms require \u03a9(n^2) queries, which matches the trivial O(n^2) upper bound attained by querying every pair of points. Allowing for subset queries of unbounded size, O(n) queries is possible with an adaptive scheme. However, the realm of non-adaptive algorithms remains completely unknown. Is it possible to attain algorithms that are non-adaptive while still making a near-linear number of queries?
In this paper, we give the first non-adaptive algorithms for clustering with subset queries. We provide, (i) a non-adaptive algorithm making O(n log^2 n log k) queries which improves to O(n log k) when the cluster sizes are within any constant factor of each other, (ii) for constant k, a non-adaptive algorithm making O(n log log n) queries. In addition to non-adaptivity, we take into account other practical considerations, such as enforcing a bound on query size. For constant k, we give an algorithm making O(n^2/s^2) queries on subsets of size at most s \u2264 n, which is optimal among all non-adaptive algorithms within a log n-factor. For arbitrary k, the dependence varies as O~(n^2/s).", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93808"}
{"video_file": "liHe9iumIi_39026322.mp4", "openreview_id": "liHe9iumIi", "slideslive_id": 39026322, "venue": "nips2024", "title": "FewViewGS: Gaussian Splatting with Few View Matching and Multi-stage Training", "status": "Poster", "keywords": "Few-shot view synthesis;gaussian splatting", "tldr": "A novel method with multi-stage training scheme, novel view consistency constraints, and local regularization losses for few-shot view synthesis", "abstract": "The field of novel view synthesis from images has seen rapid advancements with the introduction of Neural Radiance Fields (NeRF) and more recently with 3D Gaussian Splatting. Gaussian Splatting became widely adopted due to its efficiency and ability to render novel views accurately. While Gaussian Splatting performs well when a sufficient amount of training images are available, its unstructured explicit representation tends to overfit in scenarios with sparse input images, resulting in poor rendering performance. To address this, we present a 3D Gaussian-based novel view synthesis method using sparse input images that can accurately render the scene from the viewpoints not covered by the training images. We propose a multi-stage training scheme with matching-based consistency constraints imposed on the novel views without relying on pre-trained depth estimation or diffusion models. This is achieved by using the matches of the available training images to supervise the generation of the novel views sampled between the training frames with color, geometry, and semantic losses. In addition, we introduce a locality preserving regularization for 3D Gaussians which removes rendering artifacts by preserving the local color structure of the scene. 
Evaluation on synthetic and real-world datasets demonstrates competitive or superior performance of our method in few-shot novel view synthesis compared to existing state-of-the-art methods.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93806"}
{"video_file": "lkx3OpcqSZ_39026557.mp4", "openreview_id": "lkx3OpcqSZ", "slideslive_id": 39026557, "venue": "nips2024", "title": "Compressing Large Language Models using Low Rank and Low Precision Decomposition", "status": "Poster", "keywords": "Large Language Models (LLMs);Model Compression;Post-training Quantization;Low-Rank Decomposition;Low-Precision Formats;Quantization Error Analysis;Rank-Constrained Regression;Randomized Linear Algebra;Sketching", "tldr": "We propose a post-training compression algorithm for Large Language Models (LLMs), that harnesses the inherent low-rank structure of LLM weight matrices, that effectively combines low-rank and low-precision matrix decompositions,", "abstract": "The prohibitive sizes of Large Language Models (LLMs) today make it difficult to deploy them on memory-constrained edge devices. This work introduces CALDERA -- a new post-training LLM compression algorithm that harnesses the inherent low-rank structure of a weight matrix W by approximating it via a low-rank, low-precision decomposition as W \u2248 Q + LR. Here, L and R are low rank factors, and the entries of Q, L and R are quantized. The model is compressed by substituting each layer with its Q + LR decomposition, and the zero-shot performance of the compressed model is evaluated. Additionally, L and R are readily amenable to low-rank adaptation, consequently enhancing the zero-shot performance. CALDERA obtains this decomposition by formulating it as an optimization problem min_{Q,L,R} \u2016(Q + LR \u2212 W)X\u22a4\u2016_F^2, where X is the calibration data, and Q, L, R are constrained to be representable using low-precision formats. Theoretical upper bounds on the approximation error of CALDERA are established using a rank-constrained regression framework, and the tradeoff between compression ratio and model performance is studied by analyzing the impact of target rank and quantization bit budget. Results illustrate that compressing LlaMa-2 7B/13B/70B and LlaMa-3 8B models obtained using CALDERA outperforms existing post-training LLM compression techniques in the regime of less than 2.5 bits per parameter.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93805"}
{"video_file": "llTroju97T_39027030.mp4", "openreview_id": "llTroju97T", "slideslive_id": 39027030, "venue": "nips2024", "title": "Personalized Adapter for Large Meteorology Model on Devices: Towards Weather Foundation Models", "status": "Poster", "keywords": "Meteorological Variable Modeling;Federared Learning;On-device Intelligence;Foundation Model", "tldr": "Taming pre-trained language models as foundation models for personalized on-device meteorological variables modeling.", "abstract": "This paper demonstrates that pre-trained language models (PLMs) are strong foundation models for on-device meteorological variable modeling. 
We present LM-Weather, a generic approach to taming PLMs, that have learned massive sequential knowledge from the universe of natural language databases, to acquire an immediate capability to obtain highly customized models for heterogeneous meteorological data on devices while keeping high efficiency. Concretely, we introduce a lightweight personalized adapter into PLMs and endows it with weather pattern awareness. During communication between clients and the server, low-rank-based transmission is performed to effectively fuse the global knowledge among devices while maintaining high communication efficiency and ensuring privacy. Experiments on real-wold dataset show that LM-Weather outperforms the state-of-the-art results by a large margin across various tasks (e.g., forecasting and imputation at different scales). We provide extensive and in-depth analyses experiments, which verify that LM-Weather can (1) indeed leverage sequential knowledge from natural language to accurately handle meteorological sequence, (2) allows each devices obtain highly customized models under significant heterogeneity, and (3) generalize under data-limited and out-of-distribution (OOD) scenarios.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/93804"} +{"video_file": "lpFDhC91Oj_39025459.mp4", "openreview_id": "lpFDhC91Oj", "slideslive_id": 39025459, "venue": "nips2024", "title": "Black-Box Forgetting", "status": "Poster", "keywords": "Black-Box Tuning;Vision-Language Models", "tldr": "We introduce a novel problem of selective forgetting for black-box models and propose a novel method for this problem.", "abstract": "Large-scale pre-trained models (PTMs) provide remarkable zero-shot classification capability covering a wide variety of object classes. However, practical applications do not always require the classification of all kinds of objects, and leaving the model capable of recognizing unnecessary classes not only degrades overall accuracy but also leads to operational disadvantages. To mitigate this issue, we explore the selective forgetting problem for PTMs, where the task is to make the model unable to recognize only the specified classes, while maintaining accuracy for the rest. All the existing methods assume ''white-box'' settings, where model information such as architectures, parameters, and gradients is available for training. However, PTMs are often ''black-box,'' where information on such models is unavailable for commercial reasons or social responsibilities. In this paper, we address a novel problem of selective forgetting for black-box models, named Black-Box Forgetting, and propose an approach to the problem. Given that information on the model is unavailable, we optimize the input prompt to decrease the accuracy of specified classes through derivative-free optimization. To avoid difficult high-dimensional optimization while ensuring high forgetting performance, we propose Latent Context Sharing, which introduces common low-dimensional latent components among multiple tokens for the prompt. Experiments on four standard benchmark datasets demonstrate the superiority of our method with reasonable baselines. 
The code is available at https://github.com/yusukekwn/Black-Box-Forgetting.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93800"} +{"video_file": "lpXDZKiAnt_39024720.mp4", "openreview_id": "lpXDZKiAnt", "slideslive_id": 39024720, "venue": "nips2024", "title": "Vaccine: Perturbation-aware Alignment for Large Language Models against Harmful Fine-tuning Attack", "status": "Poster", "keywords": "Larger language model;safety alignment;perturbation-aware alignment;harmful finetuning attack", "tldr": "We propose Vaccine, a perturbation-aware alignment solution for large language model against harmful fine-tuning attack.", "abstract": "The new paradigm of fine-tuning-as-a-service introduces a new attack surface for Large Language Models (LLMs): a few harmful data uploaded by users can easily trick the fine-tuning to produce an alignment-broken model. We conduct an empirical analysis and uncover a \\textit{harmful embedding drift} phenomenon, showing a probable cause of the alignment-broken effect. Inspired by our findings, we propose Vaccine, a perturbation-aware alignment technique to mitigate the security risk of users fine-tuning. The core idea of Vaccine is to produce invariant hidden embeddings by progressively adding crafted perturbation to them in the alignment phase. This enables the embeddings to withstand harmful perturbation from un-sanitized user data in the fine-tuning phase. Our results on open source mainstream LLMs (e.g., Llama2, Opt, Vicuna) demonstrate that Vaccine can boost the robustness of alignment against harmful prompts induced embedding drift while reserving reasoning ability towards benign prompts. Our code is available at https://github.com/git-disl/Vaccine.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93799"} +{"video_file": "lwpfH9wVkO_39028607.mp4", "openreview_id": "lwpfH9wVkO", "slideslive_id": 39028607, "venue": "nips2024", "title": "Controlling Multiple Errors Simultaneously with a PAC-Bayes Bound", "status": "Poster", "keywords": "PAC-Bayes;Generalization;Statistical Learning Theory", "tldr": "We prove a novel PAC-Bayes bound which provides rich information on the types of errors likely to be made at inference time.", "abstract": "Current PAC-Bayes generalisation bounds are restricted to scalar metrics of performance, such as the loss or error rate. However, one ideally wants more information-rich certificates that control the entire distribution of possible outcomes, such as the distribution of the test loss in regression, or the probabilities of different mis-classifications. We provide the first PAC-Bayes bound capable of providing such rich information by bounding the Kullback-Leibler divergence between the empirical and true probabilities of a set of\nM\nerror types, which can either be discretized loss values for regression, or the elements of the confusion matrix (or a partition thereof) for classification. We transform our bound into a differentiable training objective. Our bound is especially useful in cases where the severity of different mis-classifications may change over time; existing PAC-Bayes bounds can only bound a particular pre-decided weighting of the error types. 
In contrast our bound implicitly controls all uncountably many weightings simultaneously.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93790"} +{"video_file": "lxhoVDf1Sw_39027654.mp4", "openreview_id": "lxhoVDf1Sw", "slideslive_id": 39027654, "venue": "nips2024", "title": "Predictive Attractor Models", "status": "Poster", "keywords": "Sequential Memory;Predictive Models;Fixed-point Attractors;Associative Memory Models;State Space Models;Biologically plausible;Hebbian Plasticity Rules;Local Computations;Hierarchical Temporal Memory;Continual Learning;Multiple Possibilities Generation;Noise Tolerance", "tldr": "A biologically plausible sequential memory model that (1) represents and generates multiple possibilities by learning fixed-point attractors, (2) learns continually while avoiding catastrophic forgetting, and (3) is robust to noise.", "abstract": "Sequential memory, the ability to form and accurately recall a sequence of events or stimuli in the correct order, is a fundamental prerequisite for biological and artificial intelligence as it underpins numerous cognitive functions (e.g., language comprehension, planning, episodic memory formation, etc.) However, existing methods of sequential memory suffer from catastrophic forgetting, limited capacity, slow iterative learning procedures, low-order Markov memory, and, most importantly, the inability to represent and generate multiple valid future possibilities stemming from the same context. Inspired by biologically plausible neuroscience theories of cognition, we propose Predictive Attractor Models (PAM), a novel sequence memory architecture with desirable generative properties. PAM is a streaming model that learns a sequence in an online, continuous manner by observing each input only once. Additionally, we find that PAM avoids catastrophic forgetting by uniquely representing past context through lateral inhibition in cortical minicolumns, which prevents new memories from overwriting previously learned knowledge. PAM generates future predictions by sampling from a union set of predicted possibilities; this generative ability is realized through an attractor model trained alongside the predictor. We show that PAM is trained with local computations through Hebbian plasticity rules in a biologically plausible framework. Other desirable traits (e.g., noise tolerance, CPU-based learning, capacity scaling) are discussed throughout the paper. Our findings suggest that PAM represents a significant step forward in the pursuit of biologically plausible and computationally efficient sequential memory models, with broad implications for cognitive science and artificial intelligence research. 
Illustration videos and code are available on our project page: https://ramymounir.com/publications/pam.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/93788"}
{"video_file": "lxuXvJSOcP_39025072.mp4", "openreview_id": "lxuXvJSOcP", "slideslive_id": 39025072, "venue": "nips2024", "title": "Unified Domain Generalization and Adaptation for Multi-View 3D Object Detection", "status": "Poster", "keywords": "Domain Generalization.+Domain Adaptation.+Multi-view 3D Object Detection.+Autonomous driving.+Domain Generalization.", "tldr": "Label-Efficient Domain Adaptation for Multi-view 3D Object Detection", "abstract": "Recent advances in 3D object detection leveraging multi-view cameras have demonstrated their practical and economical value in various challenging vision tasks. However, typical supervised learning approaches face challenges in achieving satisfactory adaptation toward unseen and unlabeled target datasets (i.e., direct transfer) due to the inevitable geometric misalignment between the source and target domains. In practice, we also encounter constraints on resources for training models and collecting annotations for the successful deployment of 3D object detectors. In this paper, we propose Unified Domain Generalization and Adaptation (UDGA), a practical solution to mitigate those drawbacks. We first propose Multi-view Overlap Depth Constraint that leverages the strong association between multi-view, significantly alleviating geometric gaps due to perspective view changes. Then, we present a Label-Efficient Domain Adaptation approach to handle unfamiliar targets with significantly fewer amounts of labels (i.e., 1% and 5%), while preserving well-defined source knowledge for training efficiency. Overall, UDGA framework enables stable detection performance in both source and target domains, effectively bridging inevitable domain gaps, while demanding fewer annotations. We demonstrate the robustness of UDGA with large-scale benchmarks: nuScenes, Lyft, and Waymo, where our framework outperforms the current state-of-the-art methods.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93787"}
{"video_file": "lzfzjYuWgY_39027866.mp4", "openreview_id": "lzfzjYuWgY", "slideslive_id": 39027866, "venue": "nips2024", "title": "Do LLMs Build World Representations? Probing Through the Lens of State Abstraction", "status": "Poster", "keywords": "Large Language Models;World Models;World Representation;Probing;Reinforcement Learning;State Abstraction", "tldr": "We introduce a new framework for probing world abstraction within LLM-built representations, and our experiments with a text-based planning task demonstrate LLMs prefer maintaining goal-oriented abstractions during decoding.", "abstract": "How do large language models (LLMs) encode the state of the world, including the status of entities and their relations, as described by a text? While existing work directly probes for a complete state of the world, our research explores whether and how LLMs abstract this world state in their internal representations. We propose a new framework for probing for world representations through the lens of state abstraction theory from reinforcement learning, which emphasizes different levels of abstraction, distinguishing between general abstractions that facilitate predicting future states and goal-oriented abstractions that guide the subsequent actions to accomplish tasks. 
To instantiate this framework, we design a text-based planning task, where an LLM acts as an agent in an environment and interacts with objects in containers to achieve a specified goal state. Our experiments reveal that fine-tuning as well as advanced pre-training strengthens LLM-built representations' tendency of maintaining goal-oriented abstractions during decoding, prioritizing task completion over recovery of the world's state and dynamics.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93786"} +{"video_file": "m1PVjNHvtP_39028512.mp4", "openreview_id": "m1PVjNHvtP", "slideslive_id": 39028512, "venue": "nips2024", "title": "GLinSAT: The General Linear Satisfiability Neural Network Layer By Accelerated Gradient Descent", "status": "Poster", "keywords": "Differentiable general linear satisfiability neural network layer;Constraint Satisfaction;Accelerated gradient descent", "tldr": "We design GLinSAT based on accelerated gradient descent method, which is the first general linear satisfiability neural network layer where all the operations are differentiable and matrix-factorization-free.", "abstract": "Ensuring that the outputs of neural networks satisfy specific constraints is crucial for applying neural networks to real-life decision-making problems. In this paper, we consider making a batch of neural network outputs satisfy bounded and general linear constraints. We first reformulate the neural network output projection problem as an entropy-regularized linear programming problem. We show that such a problem can be equivalently transformed into an unconstrained convex optimization problem with Lipschitz continuous gradient according to the duality theorem. Then, based on an accelerated gradient descent algorithm with numerical performance enhancement, we present our architecture, GLinSAT, to solve the problem. To the best of our knowledge, this is the first general linear satisfiability layer in which all the operations are differentiable and matrix-factorization-free. Despite the fact that we can explicitly perform backpropagation based on automatic differentiation mechanism, we also provide an alternative approach in GLinSAT to calculate the derivatives based on implicit differentiation of the optimality condition. Experimental results on constrained traveling salesman problems, partial graph matching with outliers, predictive portfolio allocation and power system unit commitment demonstrate the advantages of GLinSAT over existing satisfiability layers. Our implementation is available at https://github.com/HunterTracer/GLinSAT.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93783"} +{"video_file": "m296WJXyzQ_39026241.mp4", "openreview_id": "m296WJXyzQ", "slideslive_id": 39026241, "venue": "nips2024", "title": "Scanning Trojaned Models Using Out-of-Distribution Samples", "status": "Poster", "keywords": "Trojan Scanning Method;Trojan Post-Training Defense;Backdoor Attacks;Out-of-Distribution Samples;Adversarially Perturbed Out-of-Distribution Samples", "tldr": "In this work, we designed a trojan scanning method which is robust in various aspects, including trojan attack type, label mapping, and adversarial robustness of the classifier.", "abstract": "Scanning for trojan (backdoor) in deep neural networks is crucial due to their significant real-world applications. There has been an increasing focus on developing effective general trojan scanning methods across various trojan attacks. 
Despite advancements, there remains a shortage of methods that perform effectively without preconceived assumptions about the backdoor attack method. Additionally, we have observed that current methods struggle to identify classifiers trojaned using adversarial training. Motivated by these challenges, our study introduces a novel scanning method named TRODO (TROjan scanning by Detection of adversarial shifts in Out-of-distribution samples). TRODO leverages the concept of \"blind spots\"\u2014regions where trojaned classifiers erroneously identify out-of-distribution (OOD) samples as in-distribution (ID). We scan for these blind spots by adversarially shifting OOD samples towards in-distribution. The increased likelihood of perturbed OOD samples being classified as ID serves as a signature for trojan detection. TRODO is both trojan and label mapping agnostic, effective even against adversarially trained trojaned classifiers. It is applicable even in scenarios where training data is absent, demonstrating high accuracy and adaptability across various scenarios and datasets, highlighting its potential as a robust trojan scanning strategy.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93781"} +{"video_file": "m4ZcDrVvid_39028368.mp4", "openreview_id": "m4ZcDrVvid", "slideslive_id": 39028368, "venue": "nips2024", "title": "Practical Bayesian Algorithm Execution via Posterior Sampling", "status": "Poster", "keywords": "Bayesian algorithm execution;Bayesian optimization;posterior sampling;probabilistic numerics", "tldr": "We propose a posterior sampling-based algorithm to efficiently estimate a target set of input points defined in terms of a function with expensive evaluations.", "abstract": "We consider Bayesian algorithm execution (BAX), a framework for efficiently selecting evaluation points of an expensive function to infer a property of interest encoded as the output of a base algorithm. Since the base algorithm typically requires more evaluations than are feasible, it cannot be directly applied. Instead, BAX methods sequentially select evaluation points using a probabilistic numerical approach. Current BAX methods use expected information gain to guide this selection. However, this approach is computationally intensive. Observing that, in many tasks, the property of interest corresponds to a target set of points defined by the function, we introduce PS-BAX, a simple, effective, and scalable BAX method based on posterior sampling. PS-BAX is applicable to a wide range of problems, including many optimization variants and level set estimation. Experiments across diverse tasks demonstrate that PS-BAX performs competitively with existing baselines while being significantly faster, simpler to implement, and easily parallelizable, setting a strong baseline for future research. Additionally, we establish conditions under which PS-BAX is asymptotically convergent, offering new insights into posterior sampling as an algorithm design paradigm.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93778"} +{"video_file": "m5106RRLgx_39026321.mp4", "openreview_id": "m5106RRLgx", "slideslive_id": 39026321, "venue": "nips2024", "title": "Are More LLM Calls All You Need? 
Towards the Scaling Properties of Compound AI Systems", "status": "Poster", "keywords": "Scaling Laws; Compound AI systems; language models", "tldr": "We study the scaling properties of compound inference systems both theoretically and empirically", "abstract": "Many recent state-of-the-art results in language tasks were achieved using compound systems that perform multiple Language Model (LM) calls and aggregate their responses. However, there is little understanding of how the number of LM calls -- e.g., when asking the LM to answer each question multiple times and taking a majority vote -- affects such a compound system's performance. In this paper, we initiate the study of scaling properties of compound inference systems. We analyze, theoretically and empirically, how the number of LM calls affects the performance of Vote and Filter-Vote, two of the simplest compound system designs, which aggregate LM responses via majority voting, optionally applying LM filters. We find, surprisingly, that across multiple language tasks, the performance of both Vote and Filter-Vote can first increase but then decrease as a function of the number of LM calls. Our theoretical results suggest that this non-monotonicity is due to the diversity of query difficulties within a task: more LM calls lead to higher performance on \"easy\" queries, but lower performance on \"hard\" queries, and non-monotone behavior can emerge when a task contains both types of queries. This insight then allows us to compute, from a small number of samples, the number of LM calls that maximizes system performance, and define an analytical scaling model for both systems. Experiments show that our scaling model can accurately predict the performance of Vote and Filter-Vote systems and thus find the optimal number of LM calls to make.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93777"} +{"video_file": "m5dyKArVn8_39027880.mp4", "openreview_id": "m5dyKArVn8", "slideslive_id": 39027880, "venue": "nips2024", "title": "How many classifiers do we need?", "status": "Poster", "keywords": "ensemble;model aggregation;machine learning;computer vision", "tldr": "We develop bounds on the majority vote error that are tight enough to predict ensemble performance.", "abstract": "As performance gains through scaling data and/or model size experience diminishing returns, it is becoming increasingly popular to turn to ensembling, where the predictions of multiple models are combined to improve accuracy. In this paper, we provide a detailed analysis of how the disagreement and the polarization (a notion we introduce and define in this paper) among classifiers relate to the performance gain achieved by aggregating individual classifiers, for majority vote strategies in classification tasks. We address these questions in the following ways. (1) An upper bound for polarization is derived, and we propose what we call a neural polarization law: most interpolating neural network models are 4/3-polarized. Our empirical results not only support this conjecture but also show that polarization is nearly constant for a dataset, regardless of hyperparameters or architectures of classifiers. (2) The error rate of the majority vote classifier is considered under restricted entropy conditions, and we present a tight upper bound that indicates that the disagreement is linearly correlated with the error rate, and that the slope is linear in the polarization. 
(3) We prove results for the asymptotic behavior of the disagreement in terms of the number of classifiers, which we show can help in predicting the performance for a larger number of classifiers from that of a smaller number. Our theoretical findings are supported by empirical results on several image classification tasks with various types of neural networks.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93775"} +{"video_file": "m906PS5G9x_39024597.mp4", "openreview_id": "m906PS5G9x", "slideslive_id": 39024597, "venue": "nips2024", "title": "Bayesian Adaptive Calibration and Optimal Design", "status": "Poster", "keywords": "Gaussian processes;Bayesian inference;variational inference;experimental design;active learning;calibration", "tldr": "We propose a method which jointly estimates posteriors and informative designs to calibrate computer simulations of physical processes.", "abstract": "The process of calibrating computer models of natural phenomena is essential for applications in the physical sciences, where plenty of domain knowledge can be embedded into simulations and then calibrated against real observations. Current machine learning approaches, however, mostly rely on rerunning simulations over a fixed set of designs available in the observed data, potentially neglecting informative correlations across the design space and requiring a large amount of simulations. Instead, we consider the calibration process from the perspective of Bayesian adaptive experimental design and propose a data-efficient algorithm to run maximally informative simulations within a batch-sequential process. At each round, the algorithm jointly estimates the parameters posterior distribution and optimal designs by maximising a variational lower bound of the expected information gain. The simulator is modelled as a sample from a Gaussian process, which allows us to correlate simulations and real data with the unknown calibration parameters. We show the benefits of our method when compared to related approaches across synthetic and real-data problems.", "primary_area": "active_learning", "site": "https://neurips.cc/virtual/2024/poster/93772"} +{"video_file": "m9WZrEXWl5_39026774.mp4", "openreview_id": "m9WZrEXWl5", "slideslive_id": 39026774, "venue": "nips2024", "title": "Directional Smoothness and Gradient Methods: Convergence and Adaptivity", "status": "Poster", "keywords": "directional smoothness;gradient descent;exponential search;polyak stepsize;normalized gradient descent", "tldr": "We derive new convergence rates for gradient descent which depend only on local properties of the objective using directional smoothness.", "abstract": "We develop new sub-optimality bounds for gradient descent (GD) that depend on the conditioning of the objective along the path of optimization, rather than on global, worst-case constants. Key to our proofs is directional smoothness, a measure of gradient variation that we use to develop upper-bounds on the objective. Minimizing these upper-bounds requires solving implicit equations to obtain a sequence of strongly adapted step-sizes; we show that these equations are straightforward to solve for convex quadratics and lead to new guarantees for two classical step-sizes. For general functions, we prove that the Polyak step-size and normalized GD obtain fast, path-dependent rates despite using no knowledge of the directional smoothness. 
Experiments on logistic regression show our convergence guarantees are tighter than the classical theory based on\nL\n-smoothness.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93771"} +{"video_file": "mH1xtt2bJE_39027229.mp4", "openreview_id": "mH1xtt2bJE", "slideslive_id": 39027229, "venue": "nips2024", "title": "MaNo: Exploiting Matrix Norm for Unsupervised Accuracy Estimation Under Distribution Shifts", "status": "Poster", "keywords": "Unsupervised Learning;Distribution Shifts;Unsupervised Accuracy Estimation;Generalization;Deep Learning", "tldr": "Exploiting Matrix Norm for Unsupervised Accuracy Estimation Under Distribution Shifts", "abstract": "Leveraging the model\u2019s outputs, specifically the logits, is a common approach to estimating the test accuracy of a pre-trained neural network on out-of-distribution (OOD) samples without requiring access to the corresponding ground-truth labels. Despite their ease of implementation and computational efficiency, current logit-based methods are vulnerable to overconfidence issues, leading to prediction bias, especially under the natural shift. In this work, we first study the relationship between logits and generalization performance from the view of low-density separation assumption. Our findings motivate our proposed method \\method{} that \\textbf{(1)}~applies a data-dependent normalization on the logits to reduce prediction bias, and \\textbf{(2)} takes the $L_p$ norm of the matrix of normalized logits as the estimation score. Our theoretical analysis highlights the connection between the provided score and the model's uncertainty. We conduct an extensive empirical study on common unsupervised accuracy estimation benchmarks and demonstrate that \\method{} achieves state-of-the-art performance across various architectures in the presence of synthetic, natural, or subpopulation shifts. The code is available at https://github.com/Renchunzi-Xie/MaNo.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93765"} +{"video_file": "mHVmsy9len_39027736.mp4", "openreview_id": "mHVmsy9len", "slideslive_id": 39027736, "venue": "nips2024", "title": "Bounds for the smallest eigenvalue of the NTK for arbitrary spherical data of arbitrary dimension", "status": "Poster", "keywords": "neural tangent kernel;initialization;minimum eigenvalue;smallest eigenvalue;low-dimensional;hemisphere transform;spherical harmonics;separated", "tldr": "We bound the smallest eigenvalue of the NTK without distributional assumptions on the data.", "abstract": "Bounds on the smallest eigenvalue of the neural tangent kernel (NTK) are a key ingredient in the analysis of neural network optimization and memorization. However, existing results require distributional assumptions on the data and are limited to a high-dimensional setting, where the input dimension $d_0$ scales at least logarithmically in the number of samples $n$. In this work we remove both of these requirements and instead provide bounds in terms of a measure of distance between data points: notably these bounds hold with high probability even when $d_0$ is held constant versus $n$. 
We prove our results through a novel application of the hemisphere transform.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93764"} +{"video_file": "mOK4yD8JFd_39028689.mp4", "openreview_id": "mOK4yD8JFd", "slideslive_id": 39028689, "venue": "nips2024", "title": "Quality-Improved and Property-Preserved Polarimetric Imaging via Complementarily Fusing", "status": "Poster", "keywords": "Polarimetric Imaging;Exposure fusion;Deep Learning", "tldr": "We propose a polarimetric imaging framework that can produce clean and clear polarized snapshots by complementarily fusing a degraded pair of noisy and blurry ones.", "abstract": "Polarimetric imaging is a challenging problem in the field of polarization-based vision, since setting a short exposure time reduces the signal-to-noise ratio, making the degree of polarization (DoP) and the angle of polarization (AoP) severely degenerated, while if setting a relatively long exposure time, the DoP and AoP would tend to be over-smoothed due to the frequently-occurring motion blur. This work proposes a polarimetric imaging framework that can produce clean and clear polarized snapshots by complementarily fusing a degraded pair of noisy and blurry ones. By adopting a neural network-based three-phase fusing scheme with specially-designed modules tailored to each phase, our framework can not only improve the image quality but also preserve the polarization properties. Experimental results show that our framework achieves state-of-the-art performance.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93761"} +{"video_file": "mRIQz8Zd6O_39025063.mp4", "openreview_id": "mRIQz8Zd6O", "slideslive_id": 39025063, "venue": "nips2024", "title": "AutoGuide: Automated Generation and Selection of Context-Aware Guidelines for Large Language Model Agents", "status": "Poster", "keywords": "large language model agents;sequential decision-making", "tldr": "AutoGuide extracts a comprehensive set of context-aware guidelines to improve the sequential decision-making ability of large language model agents.", "abstract": "Recent advances in large language models (LLMs) have empowered AI agents capable of performing various sequential decision-making tasks. However, effectively guiding LLMs to perform well in unfamiliar domains like web navigation, where they lack sufficient knowledge, has proven to be difficult with the demonstration-based in-context learning paradigm. In this paper, we introduce a novel framework, called AutoGuide, which addresses this limitation by automatically generating context-aware guidelines from offline experiences. Importantly, each context-aware guideline is expressed in concise natural language and follows a conditional structure, clearly describing the context where it is applicable. As a result, our guidelines facilitate the provision of relevant knowledge for the agent's current decision-making process, overcoming the limitations of the conventional demonstration-based learning paradigm. 
Our evaluation demonstrates that AutoGuide significantly outperforms competitive baselines in complex benchmark domains, including real-world web navigation.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93759"}
{"video_file": "mSHs6C7Nfa_39026080.mp4", "openreview_id": "mSHs6C7Nfa", "slideslive_id": 39026080, "venue": "nips2024", "title": "Improving the Training of Rectified Flows", "status": "Poster", "keywords": "generative modeling;rectified flow;diffusion model", "tldr": "We propose improved techniques for training rectified flows, allowing them to compete with knowledge distillation methods even in the low NFE setting", "abstract": "Diffusion models have shown great promise for image and video generation, but sampling from state-of-the-art models requires expensive numerical integration of a generative ODE. One approach for tackling this problem is rectified flows, which iteratively learn smooth ODE paths that are less susceptible to truncation error. However, rectified flows still require a relatively large number of function evaluations (NFEs). In this work, we propose improved techniques for training rectified flows, allowing them to compete with knowledge distillation methods even in the low NFE setting. Our main insight is that under realistic settings, a single iteration of the Reflow algorithm for training rectified flows is sufficient to learn nearly straight trajectories; hence, the current practice of using multiple Reflow iterations is unnecessary. We thus propose techniques to improve one-round training of rectified flows, including a U-shaped timestep distribution and LPIPS-Huber premetric. With these techniques, we improve the FID of the previous 2-rectified flow by up to 75% in the 1 NFE setting on CIFAR-10. On ImageNet 64\u00d764, our improved rectified flow outperforms the state-of-the-art distillation methods such as consistency distillation and progressive distillation in both one-step and two-step settings and rivals the performance of improved consistency training (iCT) in FID. Code is available at https://github.com/sangyun884/rfpp.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93758"}
{"video_file": "mSaqxZVZW8_39025296.mp4", "openreview_id": "mSaqxZVZW8", "slideslive_id": 39025296, "venue": "nips2024", "title": "SeeA*: Efficient Exploration-Enhanced A* Search by Selective Sampling", "status": "Oral", "keywords": "search algorithm;reinforcement learning;exploration", "tldr": "SeeA* is proposed to incorporate exploration into A* search by introducing an dynamic candidate set.", "abstract": "Monte-Carlo tree search (MCTS) and reinforcement learning contributed crucially to the success of AlphaGo and AlphaZero, and A\u2217 is a tree search algorithm among the most well-known ones in the classical AI literature. MCTS and A\u2217 both perform heuristic search and are mutually beneficial. Efforts have been made to the renaissance of A\u2217 from three possible aspects, two of which have been confirmed by studies in recent years, while the third is about the OPEN list that consists of open nodes of A\u2217 search, but still lacks deep investigation. This paper aims at the third, i.e., developing the Sampling-exploration enhanced A\u2217 (SeeA\u2217) search by constructing a dynamic subset of OPEN through a selective sampling process, such that the node with the best heuristic value in this subset instead of in the OPEN is expanded. 
Nodes with the best heuristic values in OPEN are most probably picked into this subset, but sometimes may not be included, which enables SeeA\u2217 to explore other promising branches. Three sampling techniques are presented for comparative investigations. Moreover, under the assumption about the distribution of prediction errors, we have theoretically shown the superior efficiency of SeeA\u2217 over A\u2217 search, particularly when the accuracy of the guiding heuristic function is insufficient. Experimental results on retrosynthetic planning in organic chemistry, logic synthesis in integrated circuit design, and the classical Sokoban game empirically demonstrate the efficiency of SeeA\u2217, in comparison with the state-of-the-art heuristic search algorithms.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93757"}
{"video_file": "mXlR1FLFDc_39024779.mp4", "openreview_id": "mXlR1FLFDc", "slideslive_id": 39024779, "venue": "nips2024", "title": "A Compositional Atlas for Algebraic Circuits", "status": "Poster", "keywords": "semiring;probabilistic circuits;logic circuits;probabilistic inference;algebraic", "tldr": "We introduce a unifying framework for deriving algorithms and tractability conditions for complex compositional inference queries, such as marginal MAP, logic programming inference and causal inference.", "abstract": "Circuits based on sum-product structure have become a ubiquitous representation to compactly encode knowledge, from Boolean functions to probability distributions. By imposing constraints on the structure of such circuits, certain inference queries become tractable, such as model counting and most probable configuration. Recent works have explored analyzing probabilistic and causal inference queries as compositions of basic operators to derive tractability conditions. In this paper, we take an algebraic perspective for compositional inference, and show that a large class of queries\u2014including marginal MAP, probabilistic answer set programming inference, and causal backdoor adjustment\u2014correspond to a combination of basic operators over semirings: aggregation, product, and elementwise mapping. Using this framework, we uncover simple and general sufficient conditions for tractable composition of these operators, in terms of circuit properties (e.g., marginal determinism, compatibility) and conditions on the elementwise mappings. Applying our analysis, we derive novel tractability conditions for many such compositional queries. Our results unify tractability conditions for existing problems on circuits, while providing a blueprint for analysing novel compositional inference queries.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93754"}
{"video_file": "mY0ZnS2s9u_39028200.mp4", "openreview_id": "mY0ZnS2s9u", "slideslive_id": 39028200, "venue": "nips2024", "title": "DDGS-CT: Direction-Disentangled Gaussian Splatting for Realistic Volume Rendering", "status": "Poster", "keywords": "3D Gaussian splatting;image registration;pose estimation", "tldr": "A novel approach that balances realistic physics-inspired X-ray simulation with efficient, differentiable DRR generation using 3D Gaussian splatting (3DGS)", "abstract": "Digitally reconstructed radiographs (DRRs) are simulated 2D X-ray images generated from 3D CT volumes, widely used in preoperative settings but limited in intraoperative applications due to computational bottlenecks. 
Physics-based Monte Carlo simulations provide accurate representations but are extremely computationally intensity. Analytical DRR renderers are much more efficient, but at the price of ignoring anisotropic X-ray image formation phenomena such as Compton scattering. We propose a novel approach that balances realistic physics-inspired X-ray simulation with efficient, differentiable DRR generation using 3D Gaussian splatting (3DGS). Our direction-disentangled 3DGS (DDGS) method decomposes the radiosity contribution into isotropic and direction-dependent components, able to approximate complex anisotropic interactions without complex runtime simulations. Additionally, we adapt the 3DGS initialization to account for tomography data properties, enhancing accuracy and efficiency. Our method outperforms state-of-the-art techniques in image accuracy and inference speed, demonstrating its potential for intraoperative applications and inverse problems like pose registration.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/93752"} +{"video_file": "mZHbkbYWTp_39028253.mp4", "openreview_id": "mZHbkbYWTp", "slideslive_id": 39028253, "venue": "nips2024", "title": "Stochastic Optimal Control and Estimation with Multiplicative and Internal Noise", "status": "Poster", "keywords": "control theory;stochastic optimal control;sensorimotor system;multiplicative and internal noise;motor control", "tldr": "We propose a novel iterative algorithm to solve a stochastic optimal control problem under multiplicative and internal noise, outperforming state-of-the-art solutions in the presence of internal noise and before algorithmic convergence.", "abstract": "A pivotal brain computation relies on the ability to sustain perception-action loops. Stochastic optimal control theory offers a mathematical framework to explain these processes at the algorithmic level through optimality principles. However, incorporating a realistic noise model of the sensorimotor system \u2014 accounting for multiplicative noise in feedback and motor output, as well as internal noise in estimation \u2014 makes the problem challenging. Currently, the algorithm that is commonly used is the one proposed in the seminal study in (Todorov, 2005). After discovering some pitfalls in the original derivation, i.e., unbiased estimation does not hold, we improve the algorithm by proposing an efficient gradient descent-based optimization that minimizes the cost-to-go while only imposing linearity of the control law. The optimal solution is obtained by iteratively propagating in closed form the sufficient statistics to compute the expected cost and then minimizing this cost with respect to the filter and control gains. We demonstrate that this approach results in a significantly lower overall cost than current state-of-the-art solutions, particularly in the presence of internal noise, though the improvement is present in other circumstances as well, with theoretical explanations for this enhanced performance. 
Providing the optimal control law is key for inverse control inference, especially in explaining behavioral data under rationality assumptions.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/93750"} +{"video_file": "manHbkpIW6_39026319.mp4", "openreview_id": "manHbkpIW6", "slideslive_id": 39026319, "venue": "nips2024", "title": "Once Read is Enough: Domain-specific Pretraining-free Language Models with Cluster-guided Sparse Experts for Long-tail Domain Knowledge", "status": "Poster", "keywords": "language model;long-tail;clustering", "tldr": "The long-tail data in language model suffer from its gradient inconsistency with overall data, causing model struggle to capture domain knowledge during pretrain. We use a clustering-based sparse expert network, yields better performance.", "abstract": "Language models (LMs) only pretrained on a general and massive corpus usually cannot attain satisfying performance on domain-specific downstream tasks, and hence, applying domain-specific pretraining to LMs is a common and indispensable practice. However, domain-specific pretraining can be costly and time-consuming, hindering LMs' deployment in real-world applications. In this work, we consider the incapability to memorize domain-specific knowledge embedded in the general corpus with rare occurrences and long-tail distributions as the leading cause for pretrained LMs' inferior downstream performance. Analysis of Neural Tangent Kernels (NTKs) reveals that those long-tail data are commonly overlooked in the model's gradient updates and, consequently, are not effectively memorized, leading to poor domain-specific downstream performance. Based on the intuition that data with similar semantic meaning are closer in the embedding space, we devise a Cluster-guided Sparse Expert (CSE) layer to actively learn long-tail domain knowledge typically neglected in previous pretrained LMs. During pretraining, a CSE layer efficiently clusters domain knowledge together and assigns long-tail knowledge to designate extra experts. CSE is also a lightweight structure that only needs to be incorporated in several deep layers. With our training strategy, we found that during pretraining, data of long-tail knowledge gradually formulate isolated, outlier clusters in an LM's representation spaces, especially in deeper layers. Our experimental results show that only pretraining CSE-based LMs is enough to achieve superior performance than regularly pretrained-finetuned LMs on various downstream tasks, implying the prospects of domain-specific-pretraining-free language models.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93747"} +{"video_file": "mfTvNzhsht_39025819.mp4", "openreview_id": "mfTvNzhsht", "slideslive_id": 39025819, "venue": "nips2024", "title": "Dueling over Dessert, Mastering the Art of Repeated Cake Cutting", "status": "Poster", "keywords": "fair division;online learning;fictitious play;repeated games", "tldr": "We analyze repeated cake cutting between two players", "abstract": "We consider the setting of repeated fair division between two players, denoted Alice and Bob, with private valuations over a cake. In each round, a new cake arrives, which is identical to the ones in previous rounds. Alice cuts the cake at a point of her choice, while Bob chooses the left piece or the right piece, leaving the remainder for Alice. 
We consider two versions: sequential, where Bob observes Alice's cut point before choosing left/right, and simultaneous, where he only observes her cut point after making his choice. The simultaneous version was first considered by Aumann and Maschler.
We observe that if Bob is almost myopic and chooses his favorite piece too often, then he can be systematically exploited by Alice through a strategy akin to a binary search. This strategy allows Alice to approximate Bob's preferences with increasing precision, thereby securing a disproportionate share of the resource over time.
We analyze the limits of how much a player can exploit the other one and show that fair utility profiles are in fact achievable. Specifically, the players can enforce the equitable utility profile of (1/2, 1/2) in the limit on every trajectory of play, by keeping the other player's utility to approximately 1/2 on average while guaranteeing they themselves get at least approximately 1/2 on average. We show this theorem using a connection with Blackwell approachability.
Finally, we analyze a natural dynamic known as fictitious play, where players best respond to the empirical distribution of the other player. We show that fictitious play converges to the equitable utility profile of (1/2, 1/2) at a rate of O(1/T).", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/93742"}
{"video_file": "mhhlZeAr67_39025905.mp4", "openreview_id": "mhhlZeAr67", "slideslive_id": 39025905, "venue": "nips2024", "title": "Reciprocal Learning", "status": "Poster", "keywords": "Convergence;Decision Theory;Bandits;Active Learning;Self-Training;Semi-Supervised Learning;Online Learning", "tldr": "We generalize active learning, self-training, multi-armed bandits, superset learning, and Bayesian optimization to reciprocal learning and give sufficient conditions for convergence.", "abstract": "We demonstrate that numerous machine learning algorithms are specific instances of one single paradigm: reciprocal learning. These instances range from active learning over multi-armed bandits to self-training. We show that all these algorithms not only learn parameters from data but also vice versa: They iteratively alter training data in a way that depends on the current model fit. We introduce reciprocal learning as a generalization of these algorithms using the language of decision theory. This allows us to study under what conditions they converge. The key is to guarantee that reciprocal learning contracts such that the Banach fixed-point theorem applies. In this way, we find that reciprocal learning converges at linear rates to an approximately optimal model under some assumptions on the loss function, if their predictions are probabilistic and the sample adaption is both non-greedy and either randomized or regularized. 
We interpret these findings and provide corollaries that relate them to active learning, self-training, and bandits.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93740"} +{"video_file": "mirkQqx6po_39025329.mp4", "openreview_id": "mirkQqx6po", "slideslive_id": 39025329, "venue": "nips2024", "title": "Learning-Augmented Approximation Algorithms for Maximum Cut and Related Problems", "status": "Poster", "keywords": "learning-augmented algorithm;approximation algorithm;maximization;CSPs;learning with advice", "tldr": "This paper presents learning-augmented algorithms for maximum cut and other max-CSPs.", "abstract": "In recent years, there has been a surge of interest in the use of machine-learned predictions to bypass worst-case lower bounds for classical problems in combinatorial optimization. So far, the focus has mostly been on online algorithms, where information-theoretic barriers are overcome using predictions about the unknown future. In this paper, we consider the complementary question of using learned information to overcome computational barriers in the form of approximation hardness of polynomial-time algorithms for NP-hard (offline) problems. We show that noisy predictions about the optimal solution can be used to break classical hardness results for maximization problems such as the max-cut problem and more generally, maximization versions of constraint satisfaction problems (CSPs).", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93738"} +{"video_file": "mkw6x0OExg_39026085.mp4", "openreview_id": "mkw6x0OExg", "slideslive_id": 39026085, "venue": "nips2024", "title": "Explanations that reveal all through the de\ufb01nition of encoding", "status": "Poster", "keywords": "feature attributions;model explanations;evaluating explanations;encoding the prediction;interpretability", "tldr": "We formalize the definition of encoding in explanation methods and provide two methods to detect encoding.", "abstract": "Feature attributions attempt to highlight what inputs drive predictive power. Good attributions or explanations are thus those that produce inputs that retain this predictive power; accordingly, evaluations of explanations score their quality of prediction. However, evaluations produce scores better than what appears possible from the values in the explanation for a class of explanations, called encoding explanations. Probing for encoding remains a challenge because there is no general characterization of what gives the extra predictive power. We develop a de\ufb01nition of encoding that identi\ufb01es this extra predictive power via conditional dependence and show that the de\ufb01nition \ufb01ts existing examples of encoding. This de\ufb01nition implies, in contrast to encoding explanations, that non-encoding explanations contain all the informative inputs used to produce the explanation, giving them a \u201cwhat you see is what you get\u201d property, which makes them transparent and simple to use. Next, we prove that existing scores (ROAR, FRESH, EVAL-X) do not rank non-encoding explanations above encoding ones, and develop STRIPE-X which ranks them correctly. 
After empirically demonstrating the theoretical insights, we use STRIPE-X to show that despite prompting an LLM to produce non-encoding explanations for a sentiment analysis task, the LLM-generated explanations encode.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/93736"}\n{"video_file": "ml01XyP698_39024611.mp4", "openreview_id": "ml01XyP698", "slideslive_id": 39024611, "venue": "nips2024", "title": "FreeSplat: Generalizable 3D Gaussian Splatting Towards Free View Synthesis of Indoor Scenes", "status": "Poster", "keywords": "3D Gaussian Splatting;Generalization;3D from multi-view sensors;Novel View Synthesis;3D Computer Vision", "tldr": "We propose a generalizable 3D Gaussian Splatting framework to progressively fuse multi-view pixel-aligned 3D Gaussians for large-scale scene photorealistic reconstruction.", "abstract": "Empowering 3D Gaussian Splatting with generalization ability is appealing. However, existing generalizable 3D Gaussian Splatting methods are largely confined to narrow-range interpolation between stereo images due to their heavy backbones, thus lacking the ability to accurately localize 3D Gaussian and support free-view synthesis across wide view range. In this paper, we present a novel framework FreeSplat that is capable of reconstructing geometrically consistent 3D scenes from long sequence input towards free-view synthesis. Specifically, we firstly introduce Low-cost Cross-View Aggregation achieved by constructing adaptive cost volumes among nearby views and aggregating features using a multi-scale structure. Subsequently, we present the Pixel-wise Triplet Fusion to eliminate redundancy of 3D Gaussians in overlapping view regions and to aggregate features observed across multiple views. Additionally, we propose a simple but effective free-view training strategy that ensures robust view synthesis across broader view range regardless of the number of views. Our empirical results demonstrate state-of-the-art novel view synthesis performances in both novel view rendered color maps quality and depth maps accuracy across different numbers of input views. We also show that FreeSplat performs inference more efficiently and can effectively reduce redundant Gaussians, offering the possibility of feed-forward large scene reconstruction without depth priors. Our code will be made open-source upon paper acceptance.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93734"}\n{"video_file": "mljDUaQpln_39028517.mp4", "openreview_id": "mljDUaQpln", "slideslive_id": 39028517, "venue": "nips2024", "title": "Enhancing Reasoning Capabilities of LLMs via Principled Synthetic Logic Corpus", "status": "Poster", "keywords": "large language model;artificial intelligence;reasoning;logical reasoning;math;coding;synthetic corpus", "tldr": "We enhanced LLM's reasoning capabilities by principled synthetic corpus.", "abstract": "Large language models (LLMs) are capable of solving a wide range of tasks, yet they have struggled with reasoning. To address this, we propose Additional Logic Training (ALT), which aims to enhance LLMs' reasoning capabilities by program-generated logical reasoning samples. We first establish principles for designing high-quality samples by integrating symbolic logic theory and previous empirical insights. 
Then, based on these principles, we construct a synthetic corpus named Formal Logic Deduction Diverse (FLD\u00d72), comprising numerous samples of multi-step deduction with unknown facts, diverse reasoning rules, diverse linguistic expressions, and challenging distractors. Finally, we empirically show that ALT on FLD\u00d72 substantially enhances the reasoning capabilities of state-of-the-art LLMs, including LLaMA-3.1-70B. Improvements include gains of up to 30 points on logical reasoning benchmarks, up to 10 points on math and coding benchmarks, and 5 points on the benchmark suite BBH.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93733"}\n{"video_file": "mmSFfib6pI_39024884.mp4", "openreview_id": "mmSFfib6pI", "slideslive_id": 39024884, "venue": "nips2024", "title": "Validating Climate Models with Spherical Convolutional Wasserstein Distance", "status": "Spotlight", "keywords": "Climate Models;Wasserstein Distance;Convolution;Functional Data", "tldr": "We create a Wasserstein distance variant based on spherical convolutions of functional data and apply the method to climate model validation.", "abstract": "The validation of global climate models is crucial to ensure the accuracy and efficacy of model output. We introduce the spherical convolutional Wasserstein distance to more comprehensively measure differences between climate models and reanalysis data. This new similarity measure accounts for spatial variability using convolutional projections and quantifies local differences in the distribution of climate variables. We apply this method to evaluate the historical model outputs of the Coupled Model Intercomparison Project (CMIP) members by comparing them to observational and reanalysis data products. Additionally, we investigate the progression from CMIP phase 5 to phase 6 and find modest improvements in the phase 6 models regarding their ability to produce realistic climatologies.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/93730"}\n{"video_file": "motImXq3B1_39025744.mp4", "openreview_id": "motImXq3B1", "slideslive_id": 39025744, "venue": "nips2024", "title": "P$^2$C$^2$Net: PDE-Preserved Coarse Correction Network for efficient prediction of spatiotemporal dynamics", "status": "Poster", "keywords": "physics-informed learning;coarse model;spatiotemporal dynamics prediction", "tldr": "Introduced a PDE-Preserved Coarse Correction Network for efficient prediction of spatiotemporal dynamics.", "abstract": "When solving partial differential equations (PDEs), classical numerical methods often require fine mesh grids and small time stepping to meet stability, consistency, and convergence conditions, leading to high computational cost. Recently, machine learning has been increasingly utilized to solve PDE problems, but they often encounter challenges related to interpretability, generalizability, and strong dependency on rich labeled data. Hence, we introduce a new PDE-Preserved Coarse Correction Network (P$^2$C$^2$Net) to efficiently solve spatiotemporal PDE problems on coarse mesh grids in small data regimes. The model consists of two synergistic modules: (1) a trainable PDE block that learns to update the coarse solution (i.e., the system state), based on a high-order numerical scheme with boundary condition encoding, and (2) a neural network block that consistently corrects the solution on the fly. 
In particular, we propose a learnable symmetric Conv filter, with weights shared over the entire model, to accurately estimate the spatial derivatives of PDE based on the neural-corrected system state. The resulting physics-encoded model is capable of handling limited training data (e.g., 3--5 trajectories) and accelerates the prediction of PDE solutions on coarse spatiotemporal grids while maintaining a high accuracy. P$^2$C$^2$Net achieves consistent state-of-the-art performance with over 50% gain (e.g., in terms of relative prediction error) across four datasets covering complex reaction-diffusion processes and turbulent flows.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/93729"}\n{"video_file": "mp6OWpDIJC_39027931.mp4", "openreview_id": "mp6OWpDIJC", "slideslive_id": 39027931, "venue": "nips2024", "title": "Autonomous Agents for Collaborative Task under Information Asymmetry", "status": "Poster", "keywords": "autonomous agent;social network;large language model", "tldr": "This paper propose iAgents, a new LLM Multi-Agent framework where agents collaborate on behalf of human in the mirrored agent network and deal with information asymmetry problems.", "abstract": "Large Language Model Multi-Agent Systems (LLM-MAS) have greatly progressed in solving complex tasks. It communicates among agents within the system to collaboratively solve tasks, under the premise of shared information. However, when agents' collaborations are leveraged to perform multi-person tasks, a new challenge arises due to information asymmetry, since each agent can only access the information of its human user. Previous MAS struggle to complete tasks under this condition. To address this, we propose a new MAS paradigm termed iAgents, which denotes Informative Multi-Agent Systems. In iAgents, the human social network is mirrored in the agent network, where agents proactively exchange human information necessary for task resolution, thereby overcoming information asymmetry. iAgents employs a novel agent reasoning mechanism, InfoNav, to navigate agents' communication towards effective information exchange. Together with InfoNav, iAgents organizes human information in a mixed memory to provide agents with accurate and comprehensive information for exchange. Additionally, we introduce InformativeBench, the first benchmark tailored for evaluating LLM agents' task-solving ability under information asymmetry. 
Experimental results show that iAgents can collaborate within a social network of 140 individuals and 588 relationships, autonomously communicate over 30 turns, and retrieve information from nearly 70,000 messages to complete tasks within 3 minutes.", "primary_area": "machine_learning_for_social_sciences", "site": "https://neurips.cc/virtual/2024/poster/93728"} +{"video_file": "mp8u2Pcmqz_39026575.mp4", "openreview_id": "mp8u2Pcmqz", "slideslive_id": 39026575, "venue": "nips2024", "title": "DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs", "status": "Oral", "keywords": "Model compression;Post-training Quantization;PTQ of Large Language Models", "tldr": "We identify massive outliers in the down-projection layer of the FFN module and introduce DuQuant, which uses rotation and permutation transformations to effectively mitigate both massive and normal outliers.", "abstract": "Quantization of large language models (LLMs) faces significant challenges, particularly due to the presence of outlier activations that impede efficient low-bit representation. Traditional approaches predominantly address Normal Outliers, which are activations across all tokens with relatively large magnitudes. However, these methods struggle with smoothing Massive Outliers that display significantly larger values, which leads to significant performance degradation in low-bit quantization. In this paper, we introduce DuQuant, a novel approach that utilizes rotation and permutation transformations to more effectively mitigate both massive and normal outliers. First, DuQuant starts by constructing the rotation matrix, using specific outlier dimensions as prior knowledge, to redistribute outliers to adjacent channels by block-wise rotation. Second, We further employ a zigzag permutation to balance the distribution of outliers across blocks, thereby reducing block-wise variance. A subsequent rotation further smooths the activation landscape, enhancing model performance. DuQuant simplifies the quantization process and excels in managing outliers, outperforming the state-of-the-art baselines across various sizes and types of LLMs on multiple tasks, even with 4-bit weight-activation quantization. Our code is available at https://github.com/Hsu1023/DuQuant.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93727"} +{"video_file": "mpDbWjLzfT_39025584.mp4", "openreview_id": "mpDbWjLzfT", "slideslive_id": 39025584, "venue": "nips2024", "title": "CONTRAST: Continual Multi-source Adaptation to Dynamic Distributions", "status": "Poster", "keywords": "Multi-source Adaptation;Online learning;Test Time Adaptation", "tldr": "We propose the first work to consider dynamically evolving multi-source adaptation at test time.", "abstract": "Adapting to dynamic data distributions is a practical yet challenging task. One effective strategy is to use a model ensemble, which leverages the diverse expertise of different models to transfer knowledge to evolving data distributions. However, this approach faces difficulties when the dynamic test distribution is available only in small batches and without access to the original source data. To address the challenge of adapting to dynamic distributions in such practical settings, we propose continual multi-source adaptation to dynamic distributions (CONTRAST), a novel method that optimally combines multiple source models to adapt to the dynamic test data. CONTRAST has two distinguishing features. 
First, it efficiently computes the optimal combination weights to combine the source models to adapt to the test data distribution continuously as a function of time. Second, it identifies which of the source model parameters to update so that only the model which is most correlated to the target data is adapted, leaving the less correlated ones untouched; this mitigates the issue of ``forgetting\" the source model parameters by focusing only on the source model that exhibits the strongest correlation with the test batch distribution. Through theoretical analysis we show that the proposed method is able to optimally combine the source models and prioritize updates to the model least prone to forgetting. Experimental analysis on diverse datasets demonstrates that the combination of multiple source models does at least as well as the best source (with hindsight knowledge), and performance does not degrade as the test data distribution changes over time (robust to forgetting).", "primary_area": "online_learning", "site": "https://neurips.cc/virtual/2024/poster/93726"} +{"video_file": "mtBmKqyqGS_39027814.mp4", "openreview_id": "mtBmKqyqGS", "slideslive_id": 39027814, "venue": "nips2024", "title": "Phased Consistency Models", "status": "Poster", "keywords": "Consistency Models;Diffusion Models;Distillation", "tldr": "Consistency model analysis and training design for advanced performance on text-to-image generation and text-to-video generation", "abstract": "Consistency Models (CMs) have made significant progress in accelerating the generation of diffusion models. However, their application to high-resolution, text-conditioned image generation in the latent space remains unsatisfactory. In this paper, we identify three key flaws in the current design of Latent Consistency Models~(LCMs). We investigate the reasons behind these limitations and propose Phased Consistency Models (PCMs), which generalize the design space and address the identified limitations. Our evaluations demonstrate that PCMs outperform LCMs across 1--16 step generation settings. While PCMs are specifically designed for multi-step refinement, they achieve comparable 1-step generation results to previously state-of-the-art specifically designed 1-step methods. Furthermore, we show the methodology of PCMs is versatile and applicable to video generation, enabling us to train the state-of-the-art few-step text-to-video generator. Our code is available at https://github.com/G-U-N/Phased-Consistency-Model.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93725"} +{"video_file": "mwN1bbD5DQ_39027129.mp4", "openreview_id": "mwN1bbD5DQ", "slideslive_id": 39027129, "venue": "nips2024", "title": "Learning De-Biased Representations for Remote-Sensing Imagery", "status": "Poster", "keywords": "Adaptation;Long-tailed learning;Remote Sensing", "tldr": "We propose debLoRA to adapt foundation models to data-scarce remote sensing domains with long-tailed distributions, by efficiently and unsupervisedly augmenting minor class features using major class features to mitigate representation bias.", "abstract": "Remote sensing (RS) imagery, which requires specialized satellites to collect and is difficult to annotate, suffers from data scarcity and class imbalance in certain spectrums. Due to their data scarcity, training large-scale RS models from scratch is unrealistic, and the alternative is to transfer pre-trained models by fine-tuning or a more data-efficient method LoRA. 
Due to class imbalance, transferred models exhibit strong bias, where features of the major class dominate over those of the minor class. In this paper, we propose debLoRA, a generic training approach that works with any LoRA variants to yield debiased features. It is an unsupervised learning approach that can diversify minor class features based on the shared attributes with major classes, where the attributes are obtained by a simple step of clustering. To evaluate it, we conduct extensive experiments in two transfer learning scenarios in the RS domain: from natural to optical RS images, and from optical RS to multi-spectrum RS images. We perform object classification and oriented object detection tasks on the optical RS dataset DOTA and the SAR dataset FUSRS. Results show that our debLoRA consistently surpasses prior arts across these RS adaptation settings, yielding up to 3.3 and 4.7 percentage points gains on the tail classes for natural $\\to$ optical RS and optical RS $\\to$ multi-spectrum RS adaptations, respectively, while preserving the performance on head classes, substantiating its efficacy and adaptability", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93721"} +{"video_file": "n0arS0DDot_39025651.mp4", "openreview_id": "n0arS0DDot", "slideslive_id": 39025651, "venue": "nips2024", "title": "BLAST: Block-Level Adaptive Structured Matrices for Efficient Deep Neural Network Inference", "status": "Poster", "keywords": "Efficiency;Compression;Low Rank;Pruning;Matrix Factorization;Structured Matrix;Acceleration;Optimization;Transformer;Large Language Model;Diffusion Model;Vision Model;Preconditioned Gradient Descent", "tldr": "We improve DNN inference efficiency by learning low-dimensional weight structures through BLAST.", "abstract": "Large-scale foundation models have demonstrated exceptional performance in language and vision tasks. However, the numerous dense matrix-vector operations involved in these large networks pose significant computational challenges during inference. To address these challenges, we introduce the Block-Level Adaptive STructured (BLAST) matrix, designed to learn and leverage efficient structures prevalent in the weight matrices of linear layers within deep learning models. Compared to existing structured matrices, the BLAST matrix offers substantial flexibility, as it can represent various types of structures that are either learned from data or computed from pre-existing weight matrices. We demonstrate the efficiency of using the BLAST matrix for compressing both language and vision tasks, showing that (i) for medium-sized models such as ViT and GPT-2, training with BLAST weights boosts performance while reducing complexity by 70% and 40%, respectively; and (ii) for large foundation models such as Llama-7B and DiT-XL, the BLAST matrix achieves a 2x compression while exhibiting the lowest performance degradation among all tested structured matrices. 
Our code is available at https://github.com/changwoolee/BLAST.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/93718"}\n{"video_file": "n60xBFZWrk_39026673.mp4", "openreview_id": "n60xBFZWrk", "slideslive_id": 39026673, "venue": "nips2024", "title": "Hyperbolic Embeddings of Supervised Models", "status": "Poster", "keywords": "Hyperbolic geometry;supervised model embedding;decision trees;boosting", "tldr": "A full-fledged solution to embed supervised *models* in hyperbolic geometry, and more", "abstract": "Models of hyperbolic geometry have been successfully used in ML for two main tasks: embedding models in unsupervised learning (e.g. hierarchies) and embedding data. To our knowledge, there are no approaches that provide embeddings for supervised models; even when hyperbolic geometry provides convenient properties for expressing popular hypothesis classes, such as decision trees (and ensembles). In this paper, we propose a full-fledged solution to the problem in three independent contributions. The first linking the theory of losses for class probability estimation to hyperbolic embeddings in Poincar\u00e9 disk model. The second resolving an issue for a clean, unambiguous embedding of (ensembles of) decision trees in this model. The third showing how to smoothly tweak the Poincar\u00e9 hyperbolic distance to improve its encoding and visualization properties near the border of the disk, a crucial region for our application, while keeping hyperbolicity. This last step has substantial independent interest as it is grounded in a generalization of Leibniz-Newton's fundamental Theorem of calculus.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93714"}\n{"video_file": "nAIhvNy15T_39026339.mp4", "openreview_id": "nAIhvNy15T", "slideslive_id": 39026339, "venue": "nips2024", "title": "Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models", "status": "Poster", "keywords": "generative models;diffusion models;classifier-free guidance", "tldr": "We improve quantitative and qualitative results of image-generating diffusion models by applying classifier-free guidance in a limited interval.", "abstract": "Guidance is a crucial technique for extracting the best performance out of image-generating diffusion models. Traditionally, a constant guidance weight has been applied throughout the sampling chain of an image. We show that guidance is clearly harmful toward the beginning of the chain (high noise levels), largely unnecessary toward the end (low noise levels), and only beneficial in the middle. We thus restrict it to a specific range of noise levels, improving both the inference speed and result quality. This limited guidance interval improves the record FID in ImageNet-512 significantly, from 1.81 to 1.40. We show that it is quantitatively and qualitatively beneficial across different sampler parameters, network architectures, and datasets, including the large-scale setting of Stable Diffusion XL. 
We thus suggest exposing the guidance interval as a hyperparameter in all diffusion models that use guidance.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/93711"} +{"video_file": "nAnEStxyfy_39027390.mp4", "openreview_id": "nAnEStxyfy", "slideslive_id": 39027390, "venue": "nips2024", "title": "Generating Highly Designable Proteins with Geometric Algebra Flow Matching", "status": "Poster", "keywords": "Proteins; Flow Matching; Geometric Algebra; Generative models; Equivariant models; Protein design; AlphaFold; Local frames", "tldr": "Geometric Algebra based architecture for protein structure that achieves high designability and structural diversity as flow matching model for protein generation.", "abstract": "We introduce a generative model for protein backbone design utilizing geometric products and higher order message passing. In particular, we propose Clifford Frame Attention (CFA), an extension of the invariant point attention (IPA) architecture from AlphaFold2, in which the backbone residue frames and geometric features are represented in the projective geometric algebra. This enables to construct geometrically expressive messages between residues, including higher order terms, using the bilinear operations of the algebra. We evaluate our architecture by incorporating it into the framework of FrameFlow, a state-of-the-art flow matching model for protein backbone generation. The proposed model achieves high designability, diversity and novelty, while also sampling protein backbones that follow the statistical distribution of secondary structure elements found in naturally occurring proteins, a property so far only insufficiently achieved by many state-of-the-art generative models.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93710"} +{"video_file": "nBOdYBptWW_39026622.mp4", "openreview_id": "nBOdYBptWW", "slideslive_id": 39026622, "venue": "nips2024", "title": "UniTS: A Unified Multi-Task Time Series Model", "status": "Poster", "keywords": "Time Series Forecasting;Time Series Classification;Time Series Imputation;Time Series Anomaly Detection;Prompt Learning;Pretraining;Few-Shot Learning;Unified Model;Multi-task;Task Tokenization", "tldr": "UniTS is a unified multi-task time series model that can process predictive and generative tasks across time series domains.", "abstract": "Although pre-trained transformers and reprogrammed text-based LLMs have shown strong performance on time series tasks, the best-performing architectures vary widely across tasks, with most models narrowly focused on specific areas, such as time series forecasting. Unifying predictive and generative time series tasks within a single model remains challenging. We introduce UniTS, a unified multi-task time series model that utilizes task tokenization to integrate predictive and generative tasks into a single framework. UniTS employs a modified transformer block to capture universal time series representations, enabling transferability from a heterogeneous, multi-domain pre-training dataset\u2014characterized by diverse dynamic patterns, sampling rates, and temporal scales\u2014to a wide range of downstream datasets with varied task specifications and data domains. 
Tested on 38 datasets across human activity sensors, healthcare, engineering, and finance, UniTS achieves superior performance compared to 12 forecasting models, 20 classification models, 18 anomaly detection models, and 16 imputation models, including adapted text-based LLMs. UniTS also demonstrates strong few-shot and prompt capabilities when applied to new domains and tasks. In single-task settings, UniTS outperforms competitive task-specialized time series models. Code and datasets are available at https://github.com/mims-harvard/UniTS.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93709"}\n{"video_file": "nBhfIcDnRP_39025103.mp4", "openreview_id": "nBhfIcDnRP", "slideslive_id": 39025103, "venue": "nips2024", "title": "Efficient Graph Matching for Correlated Stochastic Block Models", "status": "Poster", "keywords": "Graph matching;correlated random graphs;stochastic block model;community recovery;subgraph counting", "tldr": "We give the first efficient algorithm for graph matching for correlated stochastic block models with two balanced communities.", "abstract": "We study learning problems on correlated stochastic block models with two balanced communities. Our main result gives the first efficient algorithm for graph matching in this setting. In the most interesting regime where the average degree is logarithmic in the number of vertices, this algorithm correctly matches all but a vanishing fraction of vertices with high probability, whenever the edge correlation parameter s satisfies s^2 > \u03b1 \u2248 0.338, where \u03b1 is Otter's tree-counting constant. Moreover, we extend this to an efficient algorithm for exact graph matching whenever this is information-theoretically possible, positively resolving an open problem of R\u00e1cz and Sridhar (NeurIPS 2021). Our algorithm generalizes the recent breakthrough work of Mao, Wu, Xu, and Yu (STOC 2023), which is based on centered subgraph counts of a large family of trees termed chandeliers. A major technical challenge that we overcome is dealing with the additional estimation errors that are necessarily present due to the fact that, in relevant parameter regimes, the latent community partition cannot be exactly recovered from a single graph. As an application of our results, we give an efficient algorithm for exact community recovery using multiple correlated graphs in parameter regimes where it is information-theoretically impossible to do so using just a single graph.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93707"}\n{"video_file": "nBjmMF2IZU_39028668.mp4", "openreview_id": "nBjmMF2IZU", "slideslive_id": 39028668, "venue": "nips2024", "title": "Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning", "status": "Poster", "keywords": "large vision language model;reinforcement learning", "tldr": "We directly use reinforcement learning to fine-tune large vision-language model using task-specific reward function", "abstract": "Large vision-language models (VLMs) fine-tuned on specialized visual instruction-following data have exhibited impressive language reasoning capabilities across various scenarios. However, this fine-tuning paradigm may not be able to efficiently learn optimal decision-making agents in multi-step goal-directed tasks from interactive environments. To address this challenge, we propose an algorithmic framework that fine-tunes VLMs with reinforcement learning (RL). 
Specifically, our framework provides a task description and then prompts the VLM to generate chain-of-thought (CoT) reasoning, enabling the VLM to efficiently explore intermediate reasoning steps that lead to the final text-based action. Next, the open-ended text output is parsed into an executable action to interact with the environment to obtain goal-directed task rewards. Finally, our framework uses these task rewards to fine-tune the entire VLM with RL. Empirically, we demonstrate that our proposed framework enhances the decision-making capabilities of VLM agents across various tasks, enabling 7b models to outperform commercial models such as GPT4-V or Gemini. Furthermore, we find that CoT reasoning is a crucial component for performance improvement, as removing the CoT reasoning results in a significant decrease in the overall performance of our method.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93706"} +{"video_file": "nF34qXcY0b_39025912.mp4", "openreview_id": "nF34qXcY0b", "slideslive_id": 39025912, "venue": "nips2024", "title": "Identification of Analytic Nonlinear Dynamical Systems with Non-asymptotic Guarantees", "status": "Poster", "keywords": "set-membership;least-squares;nonlinear systems;non-asymptotic guarantees", "tldr": "This paper generalizes the system estimation conditions for nonlinear control systems under i.i.d. inputs and provides non-asymptotic analysis.", "abstract": "This paper focuses on the system identification of an important class of nonlinear systems: nonlinear systems that are linearly parameterized, which enjoy wide applications in robotics and other mechanical systems. We consider two system identification methods: least-squares estimation (LSE), which is a point estimation method; and set-membership estimation (SME), which estimates an uncertainty set that contains the true parameters. We provide non-asymptotic convergence rates for LSE and SME under i.i.d. control inputs and control policies with i.i.d. random perturbations, both of which are considered as non-active-exploration inputs. Compared with the counter-example based on piecewise-affine systems in the literature, the success of non-active exploration in our setting relies on a key assumption about the system dynamics: we require the system functions to be real-analytic. Our results, together with the piecewise-affine counter-example, reveal the importance of differentiability in nonlinear system identification through non-active exploration. Lastly, we numerically compare our theoretical bounds with the empirical performance of LSE and SME on a pendulum example and a quadrotor example.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93702"} +{"video_file": "nJvkQSu9Z5_39028677.mp4", "openreview_id": "nJvkQSu9Z5", "slideslive_id": 39028677, "venue": "nips2024", "title": "Shared Autonomy with IDA: Interventional Diffusion Assistance", "status": "Poster", "keywords": "Shared Autonomy;Diffusion Models;copilots;intervention reinforcement learning;reinforcement learning;lunar lander;Mujoco", "tldr": "We develop an intervention function that dynamically shares control between a human operator and assistive AI agent by choosing the control that maximizes expected future returns.", "abstract": "The rapid development of artificial intelligence (AI) has unearthed the potential to assist humans in controlling advanced technologies. 
Shared autonomy (SA) facilitates control by combining inputs from a human pilot and an AI copilot. In prior SA studies, the copilot is constantly active in determining the action played at each time step. This limits human autonomy that may have deleterious effects on performance. In general, the amount of helpful copilot assistance varies greatly depending on the task dynamics. We therefore hypothesized that human autonomy and SA performance improves through dynamic and selective copilot intervention. To address this, we develop a goal-agnostic intervention assistance (IA) that dynamically shares control by having the copilot intervene only when the expected value of the copilot\u2019s action exceeds that of the human\u2019s action. We implement IA with a diffusion copilot (termed IDA) trained on expert demonstrations with goal masking. We prove that IDA performance is lower bounded by human performance, so that IDA does not negatively impact human control. In experiments with simulated human pilots, we show that IDA achieves higher performance than both pilot-only and traditional SA control in variants of the Reacher environment and Lunar Lander. We then demonstrate with human-in the-loop experiments that IDA achieves better control in Lunar Lander and that human participants experience greater autonomy and prefer IDA over pilot-only and traditional SA control. We attribute the success of IDA to preserving human autonomy while simultaneously offering assistance to prevent the human from entering universally bad states.", "primary_area": "human-AI_interaction", "site": "https://neurips.cc/virtual/2024/poster/93699"} +{"video_file": "nK6OnCpd3n_39027059.mp4", "openreview_id": "nK6OnCpd3n", "slideslive_id": 39027059, "venue": "nips2024", "title": "Text-Aware Diffusion for Policy Learning", "status": "Poster", "keywords": "diffusion models;reinforcement learning", "tldr": "Pre-trained, frozen diffusion models generate dense zero-shot reward signals for text-conditioned policy learning.", "abstract": "Training an agent to achieve particular goals or perform desired behaviors is often accomplished through reinforcement learning, especially in the absence of expert demonstrations. However, supporting novel goals or behaviors through reinforcement learning requires the ad-hoc design of appropriate reward functions, which quickly becomes intractable. To address this challenge, we propose Text-Aware Diffusion for Policy Learning (TADPoLe), which uses a pretrained, frozen text-conditioned diffusion model to compute dense zero-shot reward signals for text-aligned policy learning. We hypothesize that large-scale pretrained generative models encode rich priors that can supervise a policy to behave not only in a text-aligned manner, but also in alignment with a notion of naturalness summarized from internet-scale training data. In our experiments, we demonstrate that TADPoLe is able to learn policies for novel goal-achievement and continuous locomotion behaviors specified by natural language, in both Humanoid and Dog environments. The behaviors are learned zero-shot without ground-truth rewards or expert demonstrations, and are qualitatively more natural according to human evaluation. 
We further show that TADPoLe performs competitively when applied to robotic manipulation tasks in the Meta-World environment, without having access to any in-domain demonstrations.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93698"} +{"video_file": "nLQeE8QGGe_39025086.mp4", "openreview_id": "nLQeE8QGGe", "slideslive_id": 39025086, "venue": "nips2024", "title": "Active learning of neural population dynamics using two-photon holographic optogenetics", "status": "Poster", "keywords": "active learning;experiment design;neural system identification;neural behavior", "tldr": "We develop active learning methods to guide two-photon photostimulation for the purpose of reducing the amount of data needed to estimate an accurate mode of the neural population dynamics.", "abstract": "Recent advances in techniques for monitoring and perturbing neural populations have greatly enhanced our ability to study circuits in the brain. In particular, two-photon holographic optogenetics now enables precise photostimulation of experimenter-specified groups of individual neurons, while simultaneous two-photon calcium imaging enables the measurement of ongoing and induced activity across the neural population. Despite the enormous space of potential photostimulation patterns and the time-consuming nature of photostimulation experiments, very little algorithmic work has been done to determine the most effective photostimulation patterns for identifying the neural population dynamics. Here, we develop methods to efficiently select which neurons to stimulate such that the resulting neural responses will best inform a dynamical model of the neural population activity. Using neural population responses to photostimulation in mouse motor cortex, we demonstrate the efficacy of a low-rank linear dynamical systems model, and develop an active learning procedure which takes advantage of low-rank structure to determine informative photostimulation patterns. We demonstrate our approach on both real and synthetic data, obtaining in some cases as much as a two-fold reduction in the amount of data required to reach a given predictive power. Our active stimulation design method is based on a novel active learning procedure for low-rank regression, which may be of independent interest.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/93697"} +{"video_file": "nN6NSd1Qds_39028615.mp4", "openreview_id": "nN6NSd1Qds", "slideslive_id": 39028615, "venue": "nips2024", "title": "UGC: Universal Graph Coarsening", "status": "Poster", "keywords": "Graph Coarsening;Graph Neural Networks;Locality sensitive hashing;Heterophilic Graph;Scaling Graph Learning", "tldr": "UGC is a graph coarsening framework which is extremely fast, has lower eigen-error, and yields superior performance on downstream processing tasks.", "abstract": "In the era of big data, graphs have emerged as a natural representation of intricate relationships. However, graph sizes often become unwieldy, leading to storage, computation, and analysis challenges. A crucial demand arises for methods that can effectively downsize large graphs while retaining vital insights. Graph coarsening seeks to simplify large graphs while maintaining the basic statistics of the graphs, such as spectral properties and $\\epsilon$-similarity in the coarsened graph. This ensures that downstream processes are more efficient and effective. 
Most published methods are suitable for homophilic datasets, limiting their universal use. We propose Universal Graph Coarsening (UGC), a framework equally suitable for homophilic and heterophilic datasets. UGC integrates node attributes and adjacency information, leveraging the dataset's heterophily factor. Results on benchmark datasets demonstrate that UGC preserves spectral similarity while coarsening. In comparison to existing methods, UGC is 4x to 15x faster, has lower eigen-error, and yields superior performance on downstream processing tasks even at 70% coarsening ratios.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/93695"} +{"video_file": "nQl8EjyMzh_39026907.mp4", "openreview_id": "nQl8EjyMzh", "slideslive_id": 39026907, "venue": "nips2024", "title": "On conditional diffusion models for PDE simulations", "status": "Poster", "keywords": "neural PDE solver;PDE;partial differential equation;forecasting;data-assimilation;diffusion;denoising;autoregressive;neural surrogate;reconstruction guidance;conditional score", "tldr": "This paper provides a comprehensive analysis and extension of the current state of score-based diffusion models trained on short segments from PDE trajectories, and evaluated on forecasting and data assimilation tasks.", "abstract": "Modelling partial differential equations (PDEs) is of crucial importance in science and engineering, and it includes tasks ranging from forecasting to inverse problems, such as data assimilation. However, most previous numerical and machine learning approaches that target forecasting cannot be applied out-of-the-box for data assimilation. Recently, diffusion models have emerged as a powerful tool for conditional generation, being able to flexibly incorporate observations without retraining. In this work, we perform a comparative study of score-based diffusion models for forecasting and assimilation of sparse observations. In particular, we focus on diffusion models that are either trained in a conditional manner, or conditioned after unconditional training. We address the shortcomings of existing models by proposing 1) an autoregressive sampling approach, that significantly improves performance in forecasting, 2) a new training strategy for conditional score-based models that achieves stable performance over a range of history lengths, and 3) a hybrid model which employs flexible pre-training conditioning on initial conditions and flexible post-training conditioning to handle data assimilation. We empirically show that these modifications are crucial for successfully tackling the combination of forecasting and data assimilation, a task commonly encountered in real-world scenarios.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/93694"} +{"video_file": "nRdST1qifJ_39027318.mp4", "openreview_id": "nRdST1qifJ", "slideslive_id": 39027318, "venue": "nips2024", "title": "Fight Back Against Jailbreaking via Prompt Adversarial Tuning", "status": "Poster", "keywords": "Large Language Model;Jailbreak Defense;Prompt Tuning", "tldr": "We propose an approach, termed as Prompt Adversarial Tuning (PAT), to defend the jailbreak attacks for LLMs.", "abstract": "While Large Language Models (LLMs) have achieved tremendous success in various applications, they are also susceptible to jailbreaking attacks. 
Several primary defense strategies have been proposed to protect LLMs from producing harmful information, mostly focusing on model fine-tuning or heuristical defense designs. However, how to achieve intrinsic robustness through prompt optimization remains an open problem. In this paper, motivated by adversarial training paradigms for achieving reliable robustness, we propose an approach named Prompt Adversarial Tuning (PAT) that trains a prompt control attached to the user prompt as a guard prefix. To achieve our defense goal whilst maintaining natural performance, we optimize the control prompt with both adversarial and benign prompts. Comprehensive experiments show that our method is effective against both grey-box and black-box attacks, reducing the success rate of advanced attacks to nearly 0, while maintaining the model's utility on the benign task and incurring only negligible computational overhead, charting a new perspective for future explorations in LLM security. Our code is available at https://github.com/PKU-ML/PAT.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93692"}\n{"video_file": "nRp0XhTf61_39025579.mp4", "openreview_id": "nRp0XhTf61", "slideslive_id": 39025579, "venue": "nips2024", "title": "InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD", "status": "Poster", "keywords": "Large Vision Language Model (LVLM)", "tldr": "This paper represents InternLM-XComposer2-4KHD, a groundbreaking exploration into elevating LVLM resolution capabilities up to 4K HD (3840 \u00d7 1600) and beyond.", "abstract": "The Large Vision-Language Model (LVLM) field has seen significant advancements, yet its progression has been hindered by challenges in comprehending fine-grained visual content due to limited resolution. Recent efforts have aimed to enhance the high-resolution understanding capabilities of LVLMs, yet they remain capped at approximately 1500\u00d71500 pixels and constrained to a relatively narrow resolution range. This paper represents InternLM-XComposer2-4KHD, a groundbreaking exploration into elevating LVLM resolution capabilities up to 4K HD (3840 \u00d7 1600) and beyond. Concurrently, considering the ultra-high resolution may not be necessary in all scenarios, it supports a wide range of diverse resolutions from 336 pixels to 4K standard, significantly broadening its scope of applicability. Specifically, this research advances the patch division paradigm by introducing a novel extension: dynamic resolution with automatic patch configuration. It maintains the training image aspect ratios while automatically varying patch counts and configuring layouts based on a pre-trained Vision Transformer (ViT) (336\u00d7336), leading to dynamic training resolution from 336 pixels to 4K standard. Our research demonstrates that scaling training resolution up to 4K HD leads to consistent performance enhancements without hitting the ceiling of potential improvements. 
InternLM-XComposer2-4KHD shows superb capability that matches or even surpasses GPT-4V and Gemini Pro in 10 of the 16 benchmarks.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93691"} +{"video_file": "nTJeOXlWyV_39026256.mp4", "openreview_id": "nTJeOXlWyV", "slideslive_id": 39026256, "venue": "nips2024", "title": "RTify: Aligning Deep Neural Networks with Human Behavioral Decisions", "status": "Poster", "keywords": "Alignment; Recurrent neural networks; Reaction times; Visual decision making; Speed-accuracy tradeoff", "tldr": "We present a novel differentiable framework to effectively align current vision models with human reaction times and behavioral choices", "abstract": "Current neural network models of primate vision focus on replicating overall levels of behavioral accuracy, often neglecting perceptual decisions' rich, dynamic nature. Here, we introduce a novel computational framework to model the dynamics of human behavioral choices by learning to align the temporal dynamics of a recurrent neural network (RNN) to human reaction times (RTs). We describe an approximation that allows us to constrain the number of time steps an RNN takes to solve a task with human RTs. The approach is extensively evaluated against various psychophysics experiments. We also show that the approximation can be used to optimize an ``ideal-observer'' RNN model to achieve an optimal tradeoff between speed and accuracy without human data. The resulting model is found to account well for human RT data. Finally, we use the approximation to train a deep learning implementation of the popular Wong-Wang decision-making model. The model is integrated with a convolutional neural network (CNN) model of visual processing and evaluated using both artificial and natural image stimuli. Overall, we present a novel framework that helps align current vision models with human behavior, bringing us closer to an integrated model of human vision.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/93690"} +{"video_file": "nWMqQHzI3W_39025318.mp4", "openreview_id": "nWMqQHzI3W", "slideslive_id": 39025318, "venue": "nips2024", "title": "SEEV: Synthesis with Efficient Exact Verification for ReLU Neural Barrier Functions", "status": "Poster", "keywords": "Safe Control;Barrier Functions;Control Barrier Functions;Neural Networks", "tldr": "This paper proposes a ReLU NCBF synthesis framework with efficient exact verification for robotic safety. Key insights: ReLU NCBF can be verified in linear pieces, boundary pieces are safety-critical, and limiting them improves efficiency.", "abstract": "Neural Control Barrier Functions (NCBFs) have shown significant promise in enforcing safety constraints on nonlinear autonomous systems. State-of-the-art exact approaches to verifying safety of NCBF-based controllers exploit the piecewise-linear structure of ReLU neural networks, however, such approaches still rely on enumerating all of the activation regions of the network near the safety boundary, thus incurring high computation cost. In this paper, we propose a framework for Synthesis with Efficient Exact Verification (SEEV). 
Our framework consists of two components, namely (i) an NCBF synthesis algorithm that introduces a novel regularizer to reduce the number of activation regions at the safety boundary, and (ii) a verification algorithm that exploits tight over-approximations of the safety conditions to reduce the cost of verifying each piecewise-linear segment. Our simulations show that SEEV significantly improves verification efficiency while maintaining the CBF quality across various benchmark systems and neural network structures. Our code is available at https://github.com/HongchaoZhang-HZ/SEEV.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93688"}\n{"video_file": "nXXwYsARXB_39027464.mp4", "openreview_id": "nXXwYsARXB", "slideslive_id": 39027464, "venue": "nips2024", "title": "A hierarchical decomposition for explaining ML performance discrepancies", "status": "Poster", "keywords": "explainability;distribution shift;double machine learning", "tldr": "we propose a method to explain performance difference of a model between populations by highlighting influential variables", "abstract": "Machine learning (ML) algorithms can often differ in performance across domains. Understanding why their performance differs is crucial for determining what types of interventions (e.g., algorithmic or operational) are most effective at closing the performance gaps. Aggregate decompositions express the total performance gap as the gap due to a shift in the feature distribution p(X) plus the gap due to a shift in the outcome's conditional distribution p(Y|X). While this coarse explanation is helpful for guiding root cause analyses, it provides limited details and can only suggest coarse fixes involving all variables in an ML system. Detailed decompositions quantify the importance of each variable to each term in the aggregate decomposition, which can provide a deeper understanding and suggest more targeted interventions. Although parametric methods exist for conducting a full hierarchical decomposition of an algorithm's performance gap at the aggregate and detailed levels, current nonparametric methods only cover parts of the hierarchy; many also require knowledge of the entire causal graph. We introduce a nonparametric hierarchical framework for explaining why the performance of an ML algorithm differs across domains, without requiring causal knowledge. Furthermore, we derive debiased, computationally-efficient estimators and statistical inference procedures to construct confidence intervals for the explanations.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/93686"}\n{"video_file": "nXYedmTf1T_39026861.mp4", "openreview_id": "nXYedmTf1T", "slideslive_id": 39026861, "venue": "nips2024", "title": "Calibrated Self-Rewarding Vision Language Models", "status": "Poster", "keywords": "Calibrated self-rewarding;large Vision-language models;Modality alignment", "tldr": "Improving modality alignment in large vision-language models with calibrated self-rewarding", "abstract": "Large Vision-Language Models (LVLMs) have made substantial progress by integrating pre-trained large language models (LLMs) and vision models through instruction tuning. Despite these advancements, LVLMs often exhibit the hallucination phenomenon, where generated text responses appear linguistically plausible but contradict the input image, indicating a misalignment between image and text pairs. 
This misalignment arises because the model tends to prioritize textual information over visual input, even when both the language model and visual representations are of high quality. Existing methods leverage additional models or human annotations to curate preference data and enhance modality alignment through preference optimization. These approaches are resource-intensive and may not effectively reflect the target LVLM's preferences, making the curated preferences easily distinguishable. Our work addresses these challenges by proposing the Calibrated Self-Rewarding (CSR) approach, which enables the model to self-improve by iteratively generating candidate responses, evaluating the reward for each response, and curating preference data for fine-tuning. In the reward modeling, we employ a step-wise strategy and incorporate visual constraints into the self-rewarding process to place greater emphasis on visual input. Empirical results demonstrate that CSR significantly enhances performance and reduces hallucinations across twelve benchmarks and tasks, achieving substantial improvements over existing methods by 7.62%. Our empirical results are further supported by rigorous theoretical analysis, under mild assumptions, verifying the effectiveness of introducing visual constraints into the self-rewarding paradigm. Additionally, CSR shows compatibility with different vision-language models and the ability to incrementally improve performance through iterative fine-tuning.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93685"} +{"video_file": "nY0BrZdqLt_39026386.mp4", "openreview_id": "nY0BrZdqLt", "slideslive_id": 39026386, "venue": "nips2024", "title": "Time-Reversal Provides Unsupervised Feedback to LLMs", "status": "Spotlight", "keywords": "LLMs;Reranking;reverse LLMs;reverse scoring;defenses;generative models;sequence reversal", "tldr": "Reverse Scoring and generation provides unsupervised feedback to LLMs", "abstract": "Large Language Models (LLMs) are typically trained to predict in the forward direction of time. However, recent works have shown that prompting these models to look back and critique their own generations can produce useful feedback. Motivated by this, we explore the question of whether LLMs can be empowered to think (predict and score) backwards to provide unsupervised feedback that complements forward LLMs. Towards this, we introduce Time Reversed Language Models (TRLMs), which can score and generate queries when conditioned on responses, effectively functioning in the reverse direction of time. Further, to effectively infer in the response to query direction, we pre-train and fine-tune a language model (TRLM-Ba) in the reverse token order from scratch. We show empirically (and theoretically in a stylized setting) that time-reversed models can indeed complement forward model predictions when used to score the query given response for re-ranking multiple forward generations. We obtain up to 5% improvement on the widely used AlpacaEval Leaderboard over the competent baseline of best-of-N re-ranking using self log-perplexity scores. We further show that TRLM scoring outperforms conventional forward scoring of response given query, resulting in significant gains in applications such as citation generation and passage retrieval. 
We next leverage the generative ability of TRLM to augment or provide unsupervised feedback to input safety filters of LLMs, demonstrating a drastic reduction in false negative rate with negligible impact on false positive rates against several attacks published on the popular JailbreakBench leaderboard.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93684"} +{"video_file": "nY7fGtsspU_39024880.mp4", "openreview_id": "nY7fGtsspU", "slideslive_id": 39024880, "venue": "nips2024", "title": "Graph Neural Networks Do Not Always Oversmooth", "status": "Poster", "keywords": "graph neural networks;oversmoothing;Gaussian processes;chaos", "tldr": "We adapt the chaos analysis from deep feedforward neural networks to graph neural networks and reveal a parameter regime in which graph neural networks do not oversmooth.", "abstract": "Graph neural networks (GNNs) have emerged as powerful tools for processing relational data in applications. However, GNNs suffer from the problem of oversmoothing, the property that features of all nodes exponentially converge to the same vector over layers, prohibiting the design of deep GNNs. In this work we study oversmoothing in graph convolutional networks (GCNs) by using their Gaussian process (GP) equivalence in the limit of infinitely many hidden features. By generalizing methods from conventional deep neural networks (DNNs), we can describe the distribution of features at the output layer of deep GCNs in terms of a GP: as expected, we find that typical parameter choices from the literature lead to oversmoothing. The theory, however, allows us to identify a new, non-oversmoothing phase: if the initial weights of the network have sufficiently large variance, GCNs do not oversmooth, and node features remain informative even at large depth. We demonstrate the validity of this prediction in finite-size GCNs by training a linear classifier on their output. Moreover, using the linearization of the GCN GP, we generalize the concept of propagation depth of information from DNNs to GCNs. This propagation depth diverges at the transition between the oversmoothing and non-oversmoothing phase. We test the predictions of our approach and find good agreement with finite-size GCNs. Initializing GCNs near the transition to the non-oversmoothing phase, we obtain networks which are both deep and expressive.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/93683"} +{"video_file": "nbqvjkOs6S_39027830.mp4", "openreview_id": "nbqvjkOs6S", "slideslive_id": 39027830, "venue": "nips2024", "title": "Gradient-free Decoder Inversion in Latent Diffusion Models", "status": "Poster", "keywords": "Latent diffusion model;Inversion;Gradient-free inversion;Resource-efficient inversion", "tldr": "We propose an efficient gradient-free decoder inversion for LDMs for ensuring invertible latent diffusion model, which significantly reduced runtime and memory usage compared to gradient-based methods in various recent LDMs.", "abstract": "In latent diffusion models (LDMs), denoising diffusion process efficiently takes place on latent space whose dimension is lower than that of pixel space. Decoder is typically used to transform the representation in latent space to that in pixel space. While a decoder is assumed to have an encoder as an accurate inverse, exact encoder-decoder pair rarely exists in practice even though applications often require precise inversion of decoder. 
In other words, encoder is not the left-inverse but the right-inverse of the decoder; decoder inversion seeks the left-inverse. Prior works for decoder inversion in LDMs employed gradient descent inspired by inversions of generative adversarial networks. However, gradient-based methods require larger GPU memory and longer computation time for larger latent space. For example, recent video LDMs can generate more than 16 frames, but GPUs with 24 GB memory can only perform gradient-based decoder inversion for 4 frames. Here, we propose an efficient gradient-free decoder inversion for LDMs, which can be applied to diverse latent models. Theoretical convergence property of our proposed inversion has been investigated not only for the forward step method, but also for the inertial Krasnoselskii-Mann (KM) iterations under mild assumption on cocoercivity that is satisfied by recent LDMs. Our proposed gradient-free method with Adam optimizer and learning rate scheduling significantly reduced computation time and memory usage over prior gradient-based methods and enabled efficient computation in applications such as noise-space watermarking and background-preserving image editing while achieving comparable error levels.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/93681"}
+{"video_file": "ncqauwSyl5_39028774.mp4", "openreview_id": "ncqauwSyl5", "slideslive_id": 39028774, "venue": "nips2024", "title": "Neural P$^3$M: A Long-Range Interaction Modeling Enhancer for Geometric GNNs", "status": "Poster", "keywords": "Molecule geometry modeling;Geometric GNNs;Long-range interactions", "tldr": "We introduce Neural P$^3$M, which enhances Geometric GNNs by integrating meshes with atoms and reimaging traditional mathematical operations in a trainable manner.", "abstract": "Geometric graph neural networks (GNNs) have emerged as powerful tools for modeling molecular geometry. However, they encounter limitations in effectively capturing long-range interactions in large molecular systems. To address this challenge, we introduce Neural P$^3$M, a versatile enhancer of geometric GNNs to expand the scope of their capabilities by incorporating mesh points alongside atoms and reimaging traditional mathematical operations in a trainable manner. Neural P$^3$M exhibits flexibility across a wide range of molecular systems and demonstrates remarkable accuracy in predicting energies and forces, outperforming on benchmarks such as the MD22 dataset. It also achieves an average improvement of 22% on the OE62 dataset while integrating with various architectures. Codes are available at https://github.com/OnlyLoveKFC/Neural_P3M.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/93679"}
+{"video_file": "nd8Q4a8aWl_39028090.mp4", "openreview_id": "nd8Q4a8aWl", "slideslive_id": 39028090, "venue": "nips2024", "title": "A Geometric View of Data Complexity: Efficient Local Intrinsic Dimension Estimation with Diffusion Models", "status": "Spotlight", "keywords": "diffusion models;deep generative modelling;manifold hypothesis;intrinsic dimension", "tldr": "We provide an efficient local intrinsic dimension estimator using diffusion models, outperforming traditional estimators. It aligns closely with qualitative complexity in images and scales to stable diffusion.", "abstract": "High-dimensional data commonly lies on low-dimensional submanifolds, and estimating the local intrinsic dimension (LID) of a datum -- i.e.
the dimension of the submanifold it belongs to -- is a longstanding problem. LID can be understood as the number of local factors of variation: the more factors of variation a datum has, the more complex it tends to be. Estimating this quantity has proven useful in contexts ranging from generalization in neural networks to detection of out-of-distribution data, adversarial examples, and AI-generated text. The recent successes of deep generative models present an opportunity to leverage them for LID estimation, but current methods based on generative models produce inaccurate estimates, require more than a single pre-trained model, are computationally intensive, or do not exploit the best available deep generative models: diffusion models (DMs). In this work, we show that the Fokker-Planck equation associated with a DM can provide an LID estimator which addresses the aforementioned deficiencies. Our estimator, called FLIPD, is easy to implement and compatible with all popular DMs. Applying FLIPD to synthetic LID estimation benchmarks, we find that DMs implemented as fully-connected networks are highly effective LID estimators that outperform existing baselines. We also apply FLIPD to natural images where the true LID is unknown. Despite being sensitive to the choice of network architecture, FLIPD estimates remain a useful measure of relative complexity; compared to competing estimators, FLIPD exhibits a consistently higher correlation with image PNG compression rate and better aligns with qualitative assessments of complexity. Notably, FLIPD is orders of magnitude faster than other LID estimators, and the first to be tractable at the scale of Stable Diffusion.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/93678"} +{"video_file": "nfq3GKfb4h_39025100.mp4", "openreview_id": "nfq3GKfb4h", "slideslive_id": 39025100, "venue": "nips2024", "title": "Preference Learning of Latent Decision Utilities with a Human-like Model of Preferential Choice", "status": "Poster", "keywords": "preference learning;human-in-the-loop;AI-assistance for decision making;user modeling;cogntitive science;retrosynthesis planning", "tldr": "We improve learning of latent utilities from preferences for decision tasks, by using a cognitive model of preferential choice which models various context effects.", "abstract": "Preference learning methods make use of models of human choice in order to infer the latent utilities that underlie human behavior. However, accurate modeling of human choice behavior is challenging due to a range of context effects that arise from how humans contrast and evaluate options. Cognitive science has proposed several models that capture these intricacies but, due to their intractable nature, work on preference learning has, in practice, had to rely on tractable but simplified variants of the well-known Bradley-Terry model. In this paper, we take one state-of-the-art intractable cognitive model and propose a tractable surrogate that is suitable for deployment in preference learning. We then introduce a mechanism for fitting the surrogate to human data and extend it to account for data that cannot be explained by the original cognitive model. We demonstrate on large-scale human data that this model produces significantly better inferences on static and actively elicited data than existing Bradley-Terry variants. 
We further show in simulation that when using this model for preference learning, we can significantly improve utility in a range of real-world tasks.", "primary_area": "human-AI_interaction", "site": "https://neurips.cc/virtual/2024/poster/93675"} +{"video_file": "njvPjG0BfK_39026042.mp4", "openreview_id": "njvPjG0BfK", "slideslive_id": 39026042, "venue": "nips2024", "title": "HardCore Generation: Generating Hard UNSAT Problems for Data Augmentation", "status": "Poster", "keywords": "Graph Learning;Boolean Satisfiability;Circuit Design", "tldr": "We leverage intrinsic properties of SAT problems and GNNs to efficiently generate new SAT problems for data augmentation in Deep Learning settings.", "abstract": "Efficiently determining the satisfiability of a boolean equation --- known as the SAT problem for brevity --- is crucial in various industrial problems. Recently, the advent of deep learning methods has introduced significant potential for enhancing SAT solving. However, a major barrier to the advancement of this field has been the scarcity of large, realistic datasets. The majority of current public datasets are either randomly generated or extremely limited, containing only a few examples from unrelated problem families. These datasets are inadequate for meaningful training of deep learning methods. In light of this, researchers have started exploring generative techniques to create data that more accurately reflect SAT problems encountered in practical situations. These methods have so far suffered from either the inability to produce challenging SAT problems or time-scalability obstacles. In this paper we address both by identifying and manipulating the key contributors to a problem's ``hardness'', known as cores. Although some previous work has addressed cores, the time costs are unacceptably high due to the expense of traditional heuristic core detection techniques. We introduce a fast core detection procedure that uses a graph neural network. Our empirical results demonstrate that we can efficiently generate problems that remain hard to solve and retain key attributes of the original example problems. We show via experiment that the generated synthetic SAT problems can be used in a data augmentation setting to provide improved prediction of solver runtimes.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/93671"} +{"video_file": "njwYBFau8E_39028706.mp4", "openreview_id": "njwYBFau8E", "slideslive_id": 39028706, "venue": "nips2024", "title": "DistrictNet: Decision-aware learning for geographical districting", "status": "Poster", "keywords": "routing;combinatorial optimization;decision-focused learning", "tldr": "We solve real-world districting problems in a few minutes using a structured learning pipeline.", "abstract": "Districting is a complex combinatorial problem that consists in partitioning a geographical area into small districts. In logistics, it is a major strategic decision determining operating costs for several years. Solving districting problems using traditional methods is intractable even for small geographical areas and existing heuristics often provide sub-optimal results. We present a structured learning approach to find high-quality solutions to real-world districting problems in a few minutes. It is based on integrating a combinatorial optimization layer, the capacitated minimum spanning tree problem, into a graph neural network architecture. 
To train this pipeline in a decision-aware fashion, we show how to construct target solutions embedded in a suitable space and learn from target solutions. Experiments show that our approach outperforms existing methods as it can significantly reduce costs on real-world cities.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93670"} +{"video_file": "nrgyOGU7ZP_39024552.mp4", "openreview_id": "nrgyOGU7ZP", "slideslive_id": 39024552, "venue": "nips2024", "title": "SS1: Accelerating Inference with Fast and Expressive Sketch Structured Transform", "status": "Poster", "keywords": "fast linear transform;parameter sharing;latency improvement;deployment", "tldr": "Linear transform with latency benefits ,better parameter-quality tradeoff", "abstract": "Tensor multiplication with learned weight matrices is the fundamental building block in deep learning models. These matrices can often be sparsified, decomposed, quantized, or subjected to random parameter sharing without losing accuracy, suggesting the possibility of more efficient transforms. Although many variants of weight matrices exist, unstructured ones are incompatible with modern hardware, slowing inference and training. On the other hand, structured variants often limit expressivity or fail to deliver the promised latency benefits. We present Sketch Structured Transform (SS1), an expressive and GPU-friendly operator that accelerates inference. SS1 leverages parameter sharing in a random yet structured manner to reduce computation while retraining the rich expressive nature of parameter sharing. We confirm empirically that SS1 offers better quality-efficiency tradeoffs than competing variants. Interestingly SS1 can be combined with Quantization to achieve gains unattainable by either method alone, a finding we justify via theoretical analysis. The analysis may be of independent interest. Moreover, existing pre-trained models can be projected onto SS1 and finetuned for efficient deployment. Surprisingly, these projected models can perform reasonably well even without finetuning. Our experiments highlight various applications of the SS1: (a) Training GPT2 and DLRM models from scratch for faster inference. (b) Finetuning projected BERT models for 1.31\u00d7 faster inference while maintaining GLUE scores. (c) Proof of concept with Llama-3-8b, showing 1.11\u00d7 faster wall clock inference using projected SS1 layers without finetuning. We open source our code :https://github.com/apd10/Sketch-Structured-Linear/", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/93662"} +{"video_file": "ntF7D8tAlQ_39027434.mp4", "openreview_id": "ntF7D8tAlQ", "slideslive_id": 39027434, "venue": "nips2024", "title": "Estimating Generalization Performance Along the Trajectory of Proximal SGD in Robust Regression", "status": "Poster", "keywords": "Robust regression;generalization error;stochastic gradient descent;early stopping;Stein's formula", "tldr": "This paper provides generalization error estimates for each iteration of the SGD and proximal SGD algorithms in the context of robust regression.", "abstract": "This paper studies the generalization performance of iterates obtained by Gradient Descent (GD), Stochastic Gradient Descent (SGD) and their proximal variants in high-dimensional robust regression problems. The number of features is comparable to the sample size and errors may be heavy-tailed. 
We introduce estimators that precisely track the generalization error of the iterates along the trajectory of the iterative algorithm. These estimators are provably consistent under suitable conditions. The results are illustrated through several examples, including Huber regression, pseudo-Huber regression, and their penalized variants with non-smooth regularizer. We provide explicit generalization error estimates for iterates generated from GD and SGD, or from proximal SGD in the presence of a non-smooth regularizer. The proposed risk estimates serve as effective proxies for the actual generalization error, allowing us to determine the optimal stopping iteration that minimizes the generalization error. Extensive simulations confirm the effectiveness of the proposed generalization error estimates.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93661"}
+{"video_file": "nv7ox1vd3q_39024655.mp4", "openreview_id": "nv7ox1vd3q", "slideslive_id": 39024655, "venue": "nips2024", "title": "Precise asymptotics of reweighted least-squares algorithms for linear diagonal networks", "status": "Poster", "keywords": "high-dimensional asymptotics;linear diagonal neural networks;feature learning;iteratively reweighted least-squares;sparse recovery", "tldr": "We provide a precise asymptotic characterization of the iterates for a family of reweighted least-squares algorithms that learn linear diagonal neural networks.", "abstract": "The classical iteratively reweighted least-squares (IRLS) algorithm aims to recover an unknown signal from linear measurements by performing a sequence of weighted least squares problems, where the weights are recursively updated at each step. Varieties of this algorithm have been shown to achieve favorable empirical performance and theoretical guarantees for sparse recovery and $\ell_p$-norm minimization. Recently, some preliminary connections have also been made between IRLS and certain types of non-convex linear neural network architectures that are observed to exploit low-dimensional structure in high-dimensional linear models. In this work, we provide a unified asymptotic analysis for a family of algorithms that encompasses IRLS, the recently proposed lin-RFM algorithm (which was motivated by feature learning in neural networks), and the alternating minimization algorithm on linear diagonal neural networks. Our analysis operates in a \"batched\" setting with i.i.d. Gaussian covariates and shows that, with appropriately chosen reweighting policy, the algorithm can achieve favorable performance in only a handful of iterations.
We also extend our results to the case of group-sparse recovery and show that leveraging this structure in the reweighting scheme provably improves test error compared to coordinate-wise reweighting.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93656"} +{"video_file": "nw4TWuEPGx_39028563.mp4", "openreview_id": "nw4TWuEPGx", "slideslive_id": 39028563, "venue": "nips2024", "title": "Discovering plasticity rules that organize and maintain neural circuits", "status": "Poster", "keywords": "biologically plausible learning rules plasticity self-organization RNNs homeostasis meta-learning", "tldr": "A supervised approach yields biologically plausible learning rules that self-organize and maintain robust representations of time within RNNs.", "abstract": "Intrinsic dynamics within the brain can accelerate learning by providing a prior scaffolding for dynamics aligned with task objectives. Such intrinsic dynamics would ideally self-organize and self-sustain in the face of biological noise including synaptic turnover and cell death. An example of such dynamics is the formation of sequences, a ubiquitous motif in neural activity. The sequence-generating circuit in zebra finch HVC provides a reliable timing scaffold for motor output in song and demonstrates a remarkable capacity for unsupervised recovery following perturbation. Inspired by HVC, we seek a local plasticity rule capable of organizing and maintaining sequence-generating dynamics despite continual network perturbations. We adopt a meta-learning approach introduced by Confavreux et al, which parameterizes a learning rule using basis functions constructed from pre- and postsynaptic activity and synapse size, with tunable time constants. Candidate rules are simulated within initially random networks, and their fitness is evaluated according to a loss function that measures the fidelity with which the resulting dynamics encode time. We use this approach to introduce biological noise, forcing meta-learning to find robust solutions. We first show that, in the absence of perturbations, meta-learning identifies a temporally asymmetric generalization of Oja's rule that reliably organizes sparse sequential activity. When synaptic turnover is introduced, the learned rule incorporates a form of homeostasis, better maintaining robust sequential dynamics relative to other previously proposed rules. Additionally, inspired by recent findings demonstrating that the strength of projections from inhibitory interneurons in HVC also dynamically responds to perturbations, we explore the role of inhibitory plasticity in sequence-generating circuits. We find that learned plasticity adjusts both excitation and inhibition in response to manipulations, outperforming rules applied only to excitatory connections. 
We demonstrate how plasticity acting on both excitatory and inhibitory synapses can better shape excitatory cell dynamics to scaffold timing representations.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/93653"} +{"video_file": "nw6ANsC66G_39026304.mp4", "openreview_id": "nw6ANsC66G", "slideslive_id": 39026304, "venue": "nips2024", "title": "Probabilistic Federated Prompt-Tuning with Non-IID and Imbalanced Data", "status": "Poster", "keywords": "Probabilistic Learning", "tldr": "We propose a new probabilistic prompt aggregation method for fine-tuning on decentralized and imbalance data settings.", "abstract": "Fine-tuning pre-trained models is a popular approach in machine learning for solving complex tasks with moderate data. However, fine-tuning the entire pre-trained model is ineffective in federated data scenarios where local data distributions are diversely skewed. To address this, we explore integrating federated learning with a more effective prompt-tuning method, optimizing for a small set of input prefixes to reprogram the pre-trained model's behavior. Our approach transforms federated learning into a distributed set modeling task, aggregating diverse sets of prompts to globally fine-tune the pre-trained model. We benchmark various baselines based on direct adaptations of existing federated model aggregation techniques and introduce a new probabilistic prompt aggregation method that substantially outperforms these baselines. Our reported results on a variety of computer vision datasets confirm that the proposed method is most effective to combat extreme data heterogeneity in federated learning.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93652"} +{"video_file": "nw8cXoNvep_39026729.mp4", "openreview_id": "nw8cXoNvep", "slideslive_id": 39026729, "venue": "nips2024", "title": "3D Equivariant Pose Regression via Direct Wigner-D Harmonics Prediction", "status": "Poster", "keywords": "SO(3) pose estimation;3D rotation representation;SO(3)-equivariance;3D equivariant networks;spherical harmonics;Wigner-D Matrix;spherical CNNs;Wigner-D coefficients prediction;uncertainty modeling;data sampling efficiency", "tldr": "We address single-image 3D pose estimation by predicting Wigner-D coefficients in the frequency domain using SO(3)-equivariant networks, to improve pose estimation accuracy and data sampling efficiency.", "abstract": "Determining the 3D orientations of an object in an image, known as single-image pose estimation, is a crucial task in 3D vision applications. Existing methods typically learn 3D rotations parametrized in the spatial domain using Euler angles or quaternions, but these representations often introduce discontinuities and singularities. SO(3)-equivariant networks enable the structured capture of pose patterns with data-efficient learning, but the parametrizations in spatial domain are incompatible with their architecture, particularly spherical CNNs, which operate in the frequency domain to enhance computational efficiency. To overcome these issues, we propose a frequency-domain approach that directly predicts Wigner-D coefficients for 3D rotation regression, aligning with the operations of spherical CNNs. Our SO(3)-equivariant pose harmonics predictor overcomes the limitations of spatial parameterizations, ensuring consistent pose estimation under arbitrary rotations. 
Trained with a frequency-domain regression loss, our method achieves state-of-the-art results on benchmarks such as ModelNet10-SO(3) and PASCAL3D+, with significant improvements in accuracy, robustness, and data efficiency.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93651"} +{"video_file": "nxumYwxJPB_39027547.mp4", "openreview_id": "nxumYwxJPB", "slideslive_id": 39027547, "venue": "nips2024", "title": "If You Want to Be Robust, Be Wary of Initialization", "status": "Poster", "keywords": "Adversarial Robustness;Graph Neural Networks", "tldr": "We theoretically study the direct relationship between initial weights, number of training epochs and the model\u2019s adversarial vulnerability, offering new insights into adversarial robustness beyond conventional defense mechanisms.", "abstract": "Graph Neural Networks (GNNs) have demonstrated remarkable performance across a spectrum of graph-related tasks, however concerns persist regarding their vulnerability to adversarial perturbations. While prevailing defense strategies focus primarily on pre-processing techniques and adaptive message-passing schemes, this study delves into an under-explored dimension: the impact of weight initialization and associated hyper-parameters, such as training epochs, on a model\u2019s robustness. We introduce a theoretical framework bridging the connection between initialization strategies and a network's resilience to adversarial perturbations. Our analysis reveals a direct relationship between initial weights, number of training epochs and the model\u2019s vulnerability, offering new insights into adversarial robustness beyond conventional defense mechanisms. While our primary focus is on GNNs, we extend our theoretical framework, providing a general upper-bound applicable to Deep Neural Networks. Extensive experiments, spanning diverse models and real-world datasets subjected to various adversarial attacks, validate our findings. We illustrate that selecting appropriate initialization not only ensures performance on clean datasets but also enhances model robustness against adversarial perturbations, with observed gaps of up to 50% compared to alternative initialization approaches.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/93648"} +{"video_file": "o4coDIby7e_39028243.mp4", "openreview_id": "o4coDIby7e", "slideslive_id": 39028243, "venue": "nips2024", "title": "Measuring Goal-Directedness", "status": "Spotlight", "keywords": "Causality;Graphical Models;Maximum Causal Entropy;Agency", "tldr": "We propose a formal measure of goal-directedness in probabalistic graphical models, by adapting and generalising the maximum causal entropy framework.", "abstract": "We define maximum entropy goal-directedness (MEG), a formal measure of goal- directedness in causal models and Markov decision processes, and give algorithms for computing it. Measuring goal-directedness is important, as it is a critical element of many concerns about harm from AI. It is also of philosophical interest, as goal-directedness is a key aspect of agency. MEG is based on an adaptation of the maximum causal entropy framework used in inverse reinforcement learning. It can measure goal-directedness with respect to a known utility function, a hypothesis class of utility functions, or a set of random variables. 
We prove that MEG satisfies several desiderata and demonstrate our algorithms with small-scale experiments.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93645"}
+{"video_file": "o7DOGbZeyP_39026629.mp4", "openreview_id": "o7DOGbZeyP", "slideslive_id": 39026629, "venue": "nips2024", "title": "LookHere: Vision Transformers with Directed Attention Generalize and Extrapolate", "status": "Poster", "keywords": "vision transformers;position encoding;computer vision", "tldr": "We introduce a position encoding method for plain ViTs, called LookHere, that improves performance at the training resolution and beyond.", "abstract": "High-resolution images offer more information about scenes that can improve model accuracy. However, the dominant model architecture in computer vision, the vision transformer (ViT), cannot effectively leverage larger images without finetuning \u2014 ViTs poorly extrapolate to more patches at test time, although transformers offer sequence length flexibility. We attribute this shortcoming to the current patch position encoding methods, which create a distribution shift when extrapolating.\nWe propose a drop-in replacement for the position encoding of plain ViTs that restricts attention heads to fixed fields of view, pointed in different directions, using 2D attention masks. Our novel method, called LookHere, provides translation-equivariance, ensures attention head diversity, and limits the distribution shift that attention heads face when extrapolating. We demonstrate that LookHere improves performance on classification (avg. 1.6%), against adversarial attack (avg. 5.4%), and decreases calibration error (avg. 1.5%) \u2014 on ImageNet without extrapolation. With extrapolation, LookHere outperforms the current SoTA position encoding method, 2D-RoPE, by 21.7% on ImageNet when trained at $224^2$ px and tested at $1024^2$ px. Additionally, we release a high-resolution test set to improve the evaluation of high-resolution image classifiers, called ImageNet-HR.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93643"}
+{"video_file": "o8m4RM5mBk_39028779.mp4", "openreview_id": "o8m4RM5mBk", "slideslive_id": 39028779, "venue": "nips2024", "title": "Attention Temperature Matters in ViT-Based Cross-Domain Few-Shot Learning", "status": "Poster", "keywords": "Cross-domain few-shot learning;Vision transformer;attention temperature", "tldr": "We find a phenomenon for ViT-based CDFSL: multiplying a small temperature (even 0) to ViT's attention map can consistently improve performance. We delve into this phenomenon for an interpretation and propose an effective method for CDFSL.", "abstract": "Cross-domain few-shot learning (CDFSL) is proposed to transfer knowledge from large-scale source-domain datasets to downstream target-domain datasets with only a few training samples. However, Vision Transformer (ViT), as a strong backbone network to achieve many top performances, is still under-explored in the CDFSL task in its transferability against large domain gaps. In this paper, we find an interesting phenomenon of ViT in the CDFSL task: by simply multiplying a temperature (even as small as 0) to the attention in ViT blocks, the target-domain performance consistently increases, even though the attention map is downgraded to a uniform map. In this paper, we delve into this phenomenon for an interpretation.
Through experiments, we interpret this phenomenon as a remedy for the ineffective target-domain attention caused by the query-key attention mechanism under large domain gaps. Based on it, we further propose a simple but effective method for the CDFSL task to boost ViT's transferability by resisting the learning of query-key parameters and encouraging that of non-query-key ones. Experiments on four CDFSL datasets validate the rationale of our interpretation and method, showing we can consistently outperform state-of-the-art methods. Our codes are available at https://github.com/Zoilsen/Attn_Temp_CDFSL.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93641"}
+{"video_file": "oBvaZJ1C71_39025336.mp4", "openreview_id": "oBvaZJ1C71", "slideslive_id": 39025336, "venue": "nips2024", "title": "GAVEL: Generating Games via Evolution and Language Models", "status": "Poster", "keywords": "games;llms;language models;evolution;quality diversity;pcg", "tldr": "We use evolutionary computation and large language models to generate new and interesting board games", "abstract": "Automatically generating novel and interesting games is a complex task. Challenges include representing game rules in a computationally workable form, searching through the large space of potential games under most such representations, and accurately evaluating the originality and quality of previously unseen games. Prior work in automated game generation has largely focused on relatively restricted rule representations and relied on domain-specific heuristics. In this work, we explore the generation of novel games in the comparatively expansive Ludii game description language, which encodes the rules of over 1000 board games in a variety of styles and modes of play. We draw inspiration from recent advances in large language models and evolutionary computation in order to train a model that intelligently mutates and recombines games and mechanics expressed as code. We demonstrate both quantitatively and qualitatively that our approach is capable of generating new and interesting games, including in regions of the potential rules space not covered by existing games in the Ludii dataset.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93639"}
+{"video_file": "oFgTScAsBr_39028529.mp4", "openreview_id": "oFgTScAsBr", "slideslive_id": 39028529, "venue": "nips2024", "title": "Masked Pre-training Enables Universal Zero-shot Denoiser", "status": "Poster", "keywords": "image restoration;image denoising\uff0cself-supervised learning", "tldr": "An efficient yet novel approach for zero-shot denoising", "abstract": "In this work, we observe that model trained on vast general images via masking strategy, has been naturally embedded with their distribution knowledge, thus spontaneously attains the underlying potential for strong image denoising. Based on this observation, we propose a novel zero-shot denoising paradigm, i.e., Masked Pre-train then Iterative fill (MPI). MPI first trains model via masking and then employs pre-trained weight for high-quality zero-shot image denoising on a single noisy image.
Concretely, MPI comprises two key procedures:\n1) Masked Pre-training involves training model to reconstruct massive natural images with random masking for generalizable representations, gathering the potential for valid zero-shot denoising on images with varying noise degradation and even in distinct image types.\n2) Iterative filling exploits pre-trained knowledge for effective zero-shot denoising. It iteratively optimizes the image by leveraging pre-trained weights, focusing on alternate reconstruction of different image parts, and gradually assembles fully denoised image within limited number of iterations. Comprehensive experiments across various noisy scenarios underscore the notable advances of MPI over previous approaches with a marked reduction in inference time.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/93634"}
+{"video_file": "oLcPadFrY3_39024762.mp4", "openreview_id": "oLcPadFrY3", "slideslive_id": 39024762, "venue": "nips2024", "title": "AdaPKC: PeakConv with Adaptive Peak Receptive Field for Radar Semantic Segmentation", "status": "Poster", "keywords": "Radar Semantic Segmentation;Adaptive Peak Convolution", "tldr": "A more robust novel convolution operator tailored for radar signals.", "abstract": "Deep learning-based radar detection technology is receiving increasing attention in areas such as autonomous driving, UAV surveillance, and marine monitoring. Among recent efforts, PeakConv (PKC) provides a solution that can retain the peak response characteristics of radar signals and play the characteristics of deep convolution, thereby improving the effect of radar semantic segmentation (RSS). However, due to the use of a pre-set fixed peak receptive field sampling rule, PKC still has limitations in dealing with problems such as inconsistency of target frequency domain response broadening, non-homogeneous and time-varying characteristic of noise/clutter distribution. Therefore, this paper proposes an idea of adaptive peak receptive field, and upgrades PKC to AdaPKC based on this idea. Beyond that, a novel fine-tuning technology to further boost the performance of AdaPKC-based RSS networks is presented. Through experimental verification using various real-measured radar data (including publicly available low-cost millimeter-wave radar dataset for autonomous driving and self-collected Ku-band surveillance radar dataset), we found that the performance of AdaPKC-based models surpasses other SoTA methods in RSS tasks. The code is available at https://github.com/lihua199710/AdaPKC.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93633"}
+{"video_file": "oMHpejyGdx_39026233.mp4", "openreview_id": "oMHpejyGdx", "slideslive_id": 39026233, "venue": "nips2024", "title": "Prompt-Agnostic Adversarial Perturbation for Customized Diffusion Models", "status": "Poster", "keywords": "Adversarial perturbations;customized diffusion models;privacy protection;prompt distribution modeling", "tldr": "This paper proposes a Prompt-agnostic Adversarial Perturbation method for customized text-to-image diffusion models by modeling the prompt distribution and generating perturbations to maximize expected disturbance over sampled prompts.", "abstract": "Diffusion models have revolutionized customized text-to-image generation, allowing for efficient synthesis of photos from personal data with textual descriptions. However, these advancements bring forth risks including privacy breaches and unauthorized replication of artworks.
Previous researches primarily center around using \u201cprompt-specific methods\u201d to generate adversarial examples to protect personal images, yet the effectiveness of existing methods is hindered by constrained adaptability to different prompts. In this paper, we introduce a Prompt-Agnostic Adversarial Perturbation (PAP) method for customized diffusion models. PAP first models the prompt distribution using a Laplace Approximation, and then produces prompt-agnostic perturbations by maximizing a disturbance expectation based on the modeled distribution. This approach effectively tackles the prompt-agnostic attacks, leading to improved defense stability. Extensive experiments in face privacy and artistic style protection, demonstrate the superior generalization of our method in comparison to existing techniques.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/93631"} +{"video_file": "oNMnR0NJ2e_39027041.mp4", "openreview_id": "oNMnR0NJ2e", "slideslive_id": 39027041, "venue": "nips2024", "title": "A Label is Worth A Thousand Images in Dataset Distillation", "status": "Poster", "keywords": "Dataset distillation;Data condensation;Synthetic data generation;Data-efficient learning", "tldr": "Informative soft labels is the key behind the success of dataset distillation methods.", "abstract": "Data quality is a crucial factor in the performance of machine learning models, a principle that dataset distillation methods exploit by compressing training datasets into much smaller counterparts that maintain similar downstream performance. Understanding how and why data distillation methods work is vital not only for improving these methods but also for revealing fundamental characteristics of \"good\u201d training data. However, a major challenge in achieving this goal is the observation that distillation approaches, which rely on sophisticated but mostly disparate methods to generate synthetic data, have little in common with each other. In this work, we highlight a largely overlooked aspect common to most of these methods: the use of soft (probabilistic) labels. Through a series of ablation experiments, we study the role of soft labels in depth. Our results reveal that the main factor explaining the performance of state-of-the-art distillation methods is not the specific techniques used to generate synthetic data but rather the use of soft labels. Furthermore, we demonstrate that not all soft labels are created equal; they must contain structured information to be beneficial. We also provide empirical scaling laws that characterize the effectiveness of soft labels as a function of images-per-class in the distilled dataset and establish an empirical Pareto frontier for data-efficient learning. Combined, our findings challenge conventional wisdom in dataset distillation, underscore the importance of soft labels in learning, and suggest new directions for improving distillation methods. 
Code for all experiments is available at https://github.com/sunnytqin/no-distillation.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93630"}
+{"video_file": "oQ1Zj9iH88_39026407.mp4", "openreview_id": "oQ1Zj9iH88", "slideslive_id": 39026407, "venue": "nips2024", "title": "Penalty-based Methods for Simple Bilevel Optimization under H\u00f6lderian Error Bounds", "status": "Poster", "keywords": "Simple Bilevel Optimization;H\u00f6lderian Error Bound;Penalization;Complexity", "tldr": "This paper proposes a novel penalty-based formulation to solve simple bilevel problems with lower complexity results.", "abstract": "This paper investigates simple bilevel optimization problems where we minimize a convex upper-level objective over the optimal solution set of a convex lower-level objective. Existing methods for such problems either only guarantee asymptotic convergence, have slow sublinear rates, or require strong assumptions. To address these challenges, we propose a penalization framework that delineates the relationship between approximate solutions of the original problem and its reformulated counterparts. This framework accommodates varying assumptions regarding smoothness and convexity, enabling the application of specific methods with different complexity results. Specifically, when both upper- and lower-level objectives are composite convex functions, under an $\alpha$-H\u00f6lderian error bound condition and certain mild assumptions, our algorithm attains an $(\epsilon, \epsilon^{\beta})$-optimal solution of the original problem for any $\beta > 0$ within $O(1/\epsilon^{\max\{\alpha, \beta\}})$ iterations. The result can be improved further if the smooth part of the upper-level objective is strongly convex. We also establish complexity results when the upper- and lower-level objectives are general nonsmooth functions. Numerical experiments demonstrate the effectiveness of our algorithms.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93627"}
+{"video_file": "oSOVME9kl2_39026852.mp4", "openreview_id": "oSOVME9kl2", "slideslive_id": 39026852, "venue": "nips2024", "title": "Implicit Regularization of Sharpness-Aware Minimization for Scale-Invariant Problems", "status": "Poster", "keywords": "sharpness-aware minimization;LoRA;implicit regularization;finetuning;language models", "tldr": "We leverage implicit regularization to enhance the computational efficiency of sharpness-aware minimization (SAM).", "abstract": "Sharpness-aware minimization (SAM) improves generalization of various deep learning tasks. Motivated by popular architectures such as LoRA, we explore the implicit regularization of SAM for scale-invariant problems involving two groups of variables. Instead of focusing on commonly used sharpness, this work introduces a concept termed balancedness, defined as the difference between the squared norm of two variables. This allows us to depict richer global behaviors of SAM. In particular, our theoretical and empirical findings reveal that i) SAM promotes balancedness; and ii) the regularization on balancedness is data-responsive -- outliers have stronger impact. The latter coincides with empirical observations that SAM outperforms SGD in the presence of outliers. Leveraging the implicit regularization, we develop a resource-efficient SAM variant, balancedness-aware regularization (BAR), tailored for scale-invariant problems such as finetuning language models with LoRA.
BAR saves 95% computational overhead of SAM, with enhanced test performance across various tasks on RoBERTa, GPT2, and OPT-1.3B.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/93626"} +{"video_file": "oTzydUKWpq_39026067.mp4", "openreview_id": "oTzydUKWpq", "slideslive_id": 39026067, "venue": "nips2024", "title": "Intruding with Words: Towards Understanding Graph Injection Attacks at the Text Level", "status": "Poster", "keywords": "Graph Neural Networks;Graph Adversarial Attack;Graph Injection Attack", "tldr": "We study text-level Graph Injection Attacks beyond the embedding level, revealing new challenges and insights about Graph Injection Attack designs.", "abstract": "Graph Neural Networks (GNNs) excel across various applications but remain vulnerable to adversarial attacks, particularly Graph Injection Attacks (GIAs), which inject malicious nodes into the original graph and pose realistic threats. Text-attributed graphs (TAGs), where nodes are associated with textual features, are crucial due to their prevalence in real-world applications and are commonly used to evaluate these vulnerabilities. However, existing research only focuses on embedding-level GIAs, which inject node embeddings rather than actual textual content, limiting their applicability and simplifying detection. In this paper, we pioneer the exploration of GIAs at the text level, presenting three novel attack designs that inject textual content into the graph. Through theoretical and empirical analysis, we demonstrate that text interpretability, a factor previously overlooked at the embedding level, plays a crucial role in attack strength. Among the designs we investigate, the Word-frequency-based Text-level GIA (WTGIA) is particularly notable for its balance between performance and interpretability. Despite the success of WTGIA, we discover that defenders can easily enhance their defenses with customized text embedding methods or large language model (LLM)--based predictors. These insights underscore the necessity for further research into the potential and practical significance of text-level GIAs.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/93622"} +{"video_file": "oUXiNX5KRm_39026523.mp4", "openreview_id": "oUXiNX5KRm", "slideslive_id": 39026523, "venue": "nips2024", "title": "Universal Physics Transformers: A Framework For Efficiently Scaling Neural Operators", "status": "Poster", "keywords": "neural operator;computational fluid dynamics;Lagrangian simulations;transformers;latent space modeling", "tldr": "We introduce Universal Physics Transformers, an efficiently scalable neural operator framework to model a wide range of spatio-temporal problems \u2013 for Lagrangian and Eulerian discretization schemes.", "abstract": "Neural operators, serving as physics surrogate models, have recently gained increased interest. With ever increasing problem complexity, the natural question arises: what is an efficient way to scale neural operators to larger and more complex simulations - most importantly by taking into account different types of simulation datasets. This is of special interest since, akin to their numerical counterparts, different techniques are used across applications, even if the underlying dynamics of the systems are similar. 
Whereas the flexibility of transformers has enabled unified architectures across domains, neural operators mostly follow a problem specific design, where GNNs are commonly used for Lagrangian simulations and grid-based models predominate Eulerian simulations.\nWe introduce Universal Physics Transformers (UPTs), an efficient and unified learning paradigm for a wide range of spatio-temporal problems. UPTs operate without grid- or particle-based latent structures, enabling flexibility and scalability across meshes and particles. UPTs efficiently propagate dynamics in the latent space, emphasized by inverse encoding and decoding techniques. Finally, UPTs allow for queries of the latent space representation at any point in space-time. We demonstrate diverse applicability and efficacy of UPTs in mesh-based fluid simulations, and steady-state Reynolds averaged Navier-Stokes simulations, and Lagrangian-based dynamics.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/93621"}
+{"video_file": "oX6aIl9f0Y_39024951.mp4", "openreview_id": "oX6aIl9f0Y", "slideslive_id": 39024951, "venue": "nips2024", "title": "Private Stochastic Convex Optimization with Heavy Tails: Near-Optimality from Simple Reductions", "status": "Poster", "keywords": "Stochastic Convex Optimization;Heavy-Tailed Distributions;Differential Privacy", "tldr": "We give the first algorithm attaining near-optimal error rates for DP-SCO assuming heavy-tailed gradients, and several improvements in structured cases.", "abstract": "We study the problem of differentially private stochastic convex optimization (DP-SCO) with heavy-tailed gradients, where we assume a $k$th-moment bound on the Lipschitz constants of sample functions, rather than a uniform bound. We propose a new reduction-based approach that enables us to obtain the first optimal rates (up to logarithmic factors) in the heavy-tailed setting, achieving error $G_2 \cdot \frac{1}{\sqrt{n}} + G_k \cdot (\frac{\sqrt{d}}{n\epsilon})^{1 - \frac{1}{k}}$ under $(\epsilon, \delta)$-approximate differential privacy, up to a mild $\mathrm{polylog}(1/\delta)$ factor, where $G_2^2$ and $G_k^k$ are the 2nd and $k$th moment bounds on sample Lipschitz constants, nearly-matching a lower bound of [LR23]. We then give a suite of private algorithms for DP-SCO with heavy-tailed gradients improving our basic result under additional assumptions, including an optimal algorithm under a known-Lipschitz constant assumption, a near-linear time algorithm for smooth functions, and an optimal linear time algorithm for smooth generalized linear models.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/93619"}
+{"video_file": "oXHyYHp4Zb_39027433.mp4", "openreview_id": "oXHyYHp4Zb", "slideslive_id": 39027433, "venue": "nips2024", "title": "SparseLLM: Towards Global Pruning of Pre-trained Language Models", "status": "Poster", "keywords": "Large Language Model;Pruning", "tldr": "This paper proposed a novel framework for global pruning of large language models, via considering the LLMs as a chain of modules and introducing auxiliary variables.", "abstract": "The transformative impact of large language models (LLMs) like LLaMA and GPT on natural language processing is countered by their prohibitive computational demands. Pruning has emerged as a pivotal compression strategy, introducing sparsity to enhance both memory and computational efficiency.
Yet, traditional global pruning is impractical for LLMs due to scalability issues, while local pruning, despite its efficiency, leads to suboptimal solutions. Addressing these challenges, we propose SparseLLM, a novel framework that redefines the global pruning process into manageable, coordinated subproblems, allowing for resource-efficient optimization with global optimality. SparseLLM's approach, which conceptualizes LLMs as a chain of modular functions and leverages auxiliary variables for problem decomposition, not only facilitates a pragmatic application on LLMs but also demonstrates significant performance improvements, particularly in high-sparsity regimes where it surpasses current state-of-the-art methods. Our source code is publicly available at https://github.com/BaiTheBest/SparseLLM.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93617"} +{"video_file": "oe7MfqFK1M_39028023.mp4", "openreview_id": "oe7MfqFK1M", "slideslive_id": 39028023, "venue": "nips2024", "title": "Recovering Complete Actions for Cross-dataset Skeleton Action Recognition", "status": "Poster", "keywords": "skeleton action recognition;domain generalization;data augmentation", "tldr": "We present a novel recover-and-resample augmentation framework based on complete action prior for skeleton action generalization task.", "abstract": "Despite huge progress in skeleton-based action recognition, its generalizability to different domains remains a challenging issue. In this paper, to solve the skeleton action generalization problem, we present a recover-and-resample augmentation framework based on a novel complete action prior. We observe that human daily actions are confronted with temporal mismatch across different datasets, as they are usually partial observations of their complete action sequences. By recovering complete actions and resampling from these full sequences, we can generate strong augmentations for unseen domains. At the same time, we discover the nature of general action completeness within large datasets, indicated by the per-frame diversity over time. This allows us to exploit two assets of transferable knowledge that can be shared across action samples and be helpful for action completion: boundary poses for determining the action start, and linear temporal transforms for capturing global action patterns. Therefore, we formulate the recovering stage as a two-step stochastic action completion with boundary pose-conditioned extrapolation followed by smooth linear transforms. Both the boundary poses and linear transforms can be efficiently learned from the whole dataset via clustering. 
We validate our approach on a cross-dataset setting with three skeleton action datasets, outperforming other domain generalization approaches by a considerable margin.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93611"} +{"video_file": "ogk236hsJM_39024536.mp4", "openreview_id": "ogk236hsJM", "slideslive_id": 39024536, "venue": "nips2024", "title": "One-Step Diffusion Distillation through Score Implicit Matching", "status": "Poster", "keywords": "Diffusion mode;Diffusion Distillation;Text-to-Image Generation;Generative Adversarial Network", "tldr": "The submission propose an general one-step diffusion distillation framework that induces efficient and stable instances with very strong performances.", "abstract": "Despite their strong performances on many generative tasks, diffusion models require a large number of sampling steps in order to generate realistic samples. This has motivated the community to develop effective methods to distill pre-trained diffusion models into more efficient models, but these methods still typically require few-step inference or perform substantially worse than the underlying model. In this paper, we present Score Implicit Matching (SIM) a new approach to distilling pre-trained diffusion models into single-step generator models, while maintaining almost the same sample generation ability as the original model as well as being data-free with no need of training samples for distillation. The method rests upon the fact that, although the traditional score-based loss is intractable to minimize for generator models, under certain conditions we \\emph{can} efficiently compute the \\emph{gradients} for a wide class of score-based divergences between a diffusion model and a generator. SIM shows strong empirical performances for one-step generators: on the CIFAR10 dataset, it achieves an FID of 2.06 for unconditional generation and 1.96 for class-conditional generation. Moreover, by applying SIM to a leading transformer-based diffusion model, we distill a single-step generator for text-to-image (T2I) generation that attains an aesthetic score of 6.42 with no performance decline over the original multi-step counterpart, clearly outperforming the other one-step generators including SDXL-TURBO of 5.33, SDXL-LIGHTNING of 5.34 and HYPER-SDXL of 5.85. We will release this industry-ready one-step transformer-based T2I generator along with this paper.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/93608"} +{"video_file": "ohvXBIPV7e_39025695.mp4", "openreview_id": "ohvXBIPV7e", "slideslive_id": 39025695, "venue": "nips2024", "title": "CSPG: Crossing Sparse Proximity Graphs for Approximate Nearest Neighbor Search", "status": "Poster", "keywords": "similarity search;approximate nearest neighbor search;high-dimensional space;graph index", "tldr": "A graph-based schema for ANNS", "abstract": "The state-of-the-art approximate nearest neighbor search (ANNS) algorithm builds a large proximity graph on the dataset and performs a greedy beam search, which may bring many unnecessary explorations. We develop a novel framework, namely corssing sparse proximity graph (CSPG), based on random partitioning of the dataset. It produces a smaller sparse proximity graph for each partition and routing vectors that bind all the partitions. An efficient two-staged approach is designed for exploring CSPG, with fast approaching and cross-partition expansion. 
We theoretically prove that CSPG can accelerate the existing graph-based ANNS algorithms by reducing unnecessary explorations. In addition, we conduct extensive experiments on benchmark datasets. The experimental results confirm that the existing graph-based methods can be significantly outperformed by incorporating CSPG, achieving 1.5x to 2x speedups of QPS in almost all recalls.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/93606"} +{"video_file": "opaRhDvQRD_39025395.mp4", "openreview_id": "opaRhDvQRD", "slideslive_id": 39025395, "venue": "nips2024", "title": "Forgetting, Ignorance or Myopia: Revisiting Key Challenges in Online Continual Learning", "status": "Poster", "keywords": "online continual learning;data stream;model throughput", "tldr": "Forgetting, Ignorance or Myopia: Revisiting Key Challenges in Online Continual Learning", "abstract": "Online continual learning (OCL) requires the models to learn from constant, endless streams of data. While significant efforts have been made in this field, most were focused on mitigating the \\textit{catastrophic forgetting} issue to achieve better classification ability, at the cost of a much heavier training workload. They overlooked that in real-world scenarios, e.g., in high-speed data stream environments, data do not pause to accommodate slow models. In this paper, we emphasize that \\textit{model throughput}-- defined as the maximum number of training samples that a model can process within a unit of time -- is equally important. It directly limits how much data a model can utilize and presents a challenging dilemma for current methods. With this understanding, we revisit key challenges in OCL from both empirical and theoretical perspectives, highlighting two critical issues beyond the well-documented catastrophic forgetting: (\\romannumeral1) Model's ignorance: the single-pass nature of OCL challenges models to learn effective features within constrained training time and storage capacity, leading to a trade-off between effective learning and model throughput; (\\romannumeral2) Model's myopia: the local learning nature of OCL on the current task leads the model to adopt overly simplified, task-specific features and \\textit{excessively sparse classifier}, resulting in the gap between the optimal solution for the current task and the global objective. To tackle these issues, we propose the Non-sparse Classifier Evolution framework (NsCE) to facilitate effective global discriminative feature learning with minimal time cost. NsCE integrates non-sparse maximum separation regularization and targeted experience replay techniques with the help of pre-trained models, enabling rapid acquisition of new globally discriminative features. Extensive experiments demonstrate the substantial improvements of our framework in performance, throughput and real-world practicality.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93601"} +{"video_file": "orxQccN8Fm_39025467.mp4", "openreview_id": "orxQccN8Fm", "slideslive_id": 39025467, "venue": "nips2024", "title": "Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment", "status": "Poster", "keywords": "Large language models;Fine-tune;alignment;Reinforcement learning", "tldr": "Reward Learning from Human Demonstration Improves SFT for LLM Alignment", "abstract": "Aligning human preference and value is an important requirement for contemporary foundation models. 
State-of-the-art techniques such as Reinforcement Learning from Human Feedback (RLHF) often consist of two stages: 1) supervised fine-tuning (SFT), where the model is fine-tuned by learning from human demonstration data; 2) Preference learning, where preference data is used to learn a reward model, which is in turn used by a reinforcement learning (RL) step to fine-tune the model. Such a reward model serves as a proxy for human preference, and it is critical to guide the RL step towards improving the model quality. In this work, we argue that the SFT stage significantly benefits from learning a reward model as well. Instead of using the human demonstration data directly via supervised learning, we propose to leverage an Inverse Reinforcement Learning (IRL) technique to {\\it simultaneously} build a reward model and a policy model. This approach leads to new SFT algorithms that are not only efficient to implement, but are robust to the presence of low-quality supervised learning data. Moreover, we discover a connection between the proposed IRL-based approach and a recent line of work called Self-Play Fine-tune (SPIN, \\cite{chen2024self}). Theoretically, we show that the proposed algorithms converge to the stationary solutions of the IRL problem. Empirically, we align 1B and 7B models using the proposed methods and evaluate them on a reward benchmark model and the HuggingFace Open LLM Leaderboard. The proposed methods show significant performance improvement over existing SFT approaches. Our results indicate that it is beneficial to leverage reward learning throughout the entire alignment process. Our code is available at \\url{https://github.com/JasonJiaxiangLi/Reward_learning_SFT}.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93598"} +{"video_file": "ouoBW2PXFQ_39026942.mp4", "openreview_id": "ouoBW2PXFQ", "slideslive_id": 39026942, "venue": "nips2024", "title": "CALANet: Cheap All-Layer Aggregation for Human Activity Recognition", "status": "Poster", "keywords": "Human activity recognition;Wearable sensors;Neural networks;Real-time systems", "tldr": "We propose CALANet, which allows the classifier to aggregate features for all layers while maintaining the efficiency of existing real-time HAR models.", "abstract": "With the steady growth of sensing technology and wearable devices, sensor-based human activity recognition has become essential in widespread applications, such as healthcare monitoring and fitness tracking, where accurate and real-time systems are required. To achieve real-time response, recent studies have focused on lightweight neural network models. Specifically, they designed the network architectures by keeping the number of layers shallow or restricting the connections of each layer. However, these approaches suffer from limited accuracy because the classifier only uses the features at the last layer. In this study, we propose a cheap all-layer aggregation network, CALANet, for accuracy improvement while maintaining the efficiency of existing real-time HAR models. Specifically, CALANet allows the classifier to aggregate the features for all layers, resulting in a performance gain. In addition, this work proves that the theoretical computation cost of CALANet is equivalent to that of conventional networks. Evaluated on seven publicly available datasets, CALANet outperformed existing methods, achieving state-of-the-art performance. &#13;
The source code of CALANet is publicly available at https://github.com/jgpark92/CALANet.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93594"} +{"video_file": "owuEcT6BTl_39027240.mp4", "openreview_id": "owuEcT6BTl", "slideslive_id": 39027240, "venue": "nips2024", "title": "Emergence of Hidden Capabilities: Exploring Learning Dynamics in Concept Space", "status": "Spotlight", "keywords": "Learning Dynamics;Compositional Generalization;Emergent Abilities;Diffusion Models;Mechanistic Interpretability", "tldr": "We find that compositional generalization abilities of diffusion models emerge suddenly and robustly, while models might not actively exhibit this ability.", "abstract": "Modern generative models demonstrate impressive capabilities, likely stemming from an ability to identify and manipulate abstract concepts underlying their training data. However, fundamental questions remain: what determines the concepts a model learns, the order in which it learns them, and its ability to manipulate those concepts? To address these questions, we propose analyzing a model\u2019s learning dynamics via a framework we call the concept space, where each axis represents an independent concept underlying the data generating process. By characterizing learning dynamics in this space, we identify how the speed at which a concept is learned, and hence the order of concept learning, is controlled by properties of the data we term concept signal. Further, we observe moments of sudden turns in the direction of a model\u2019s learning dynamics in concept space. Surprisingly, these points precisely correspond to the emergence of hidden capabilities, i.e., where latent interventions show the model possesses the capability to manipulate a concept, but these capabilities cannot yet be elicited via naive input prompting. While our results focus on synthetically defined toy datasets, we hypothesize a general claim on emergence of hidden capabilities may hold: generative models possess latent capabilities that emerge suddenly and consistently during training, though a model might not exhibit these capabilities under naive input prompting.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/93592"} +{"video_file": "p3hNrpeWMe_39028336.mp4", "openreview_id": "p3hNrpeWMe", "slideslive_id": 39028336, "venue": "nips2024", "title": "A Walsh Hadamard Derived Linear Vector Symbolic Architecture", "status": "Poster", "keywords": "Vector Symbolic Architectures;Holographic Reduced Representations;Hadamard Transformation;HRR;VTB;MAP;HLB", "tldr": "Starting from the Hadamard transform we develop a simple method for neuro-symbolic manipulation of vectors that has desirable properties for deep learning.", "abstract": "Vector Symbolic Architectures (VSAs) are one approach to developing Neuro-symbolic AI, where two vectors in $\\mathbb{R}^d$ are 'bound' together to produce a new vector in the same space. VSAs support the commutativity and associativity of this binding operation, along with an inverse operation, allowing one to construct symbolic-style manipulations over real-valued vectors. Most VSAs were developed before deep learning and automatic differentiation became popular and instead focused on efficacy in hand-designed systems. &#13;
In this work, we introduce the Hadamard-derived linear Binding (HLB), which is designed to have favorable computational efficiency and efficacy in classic VSA tasks, and to perform well in differentiable systems.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93583"} +{"video_file": "p3nPHMpx04_39024801.mp4", "openreview_id": "p3nPHMpx04", "slideslive_id": 39024801, "venue": "nips2024", "title": "A Surprisingly Simple Approach to Generalized Few-Shot Semantic Segmentation", "status": "Poster", "keywords": "few-shot learning;semantic segmentation;catastrophic forgetting", "tldr": "A simple approach, without resorting to, e.g., complicated modules or meta-learning, improved GFSS performance.", "abstract": "The goal of generalized few-shot semantic segmentation (GFSS) is to recognize novel-class objects through training with a few annotated examples and the base-class model that learned the knowledge about the base classes. Unlike the classic few-shot semantic segmentation, GFSS aims to classify pixels into both base and novel classes, meaning it is a more practical setting. Current GFSS methods rely on several techniques such as using combinations of customized modules, carefully designed loss functions, meta-learning, and transductive learning. However, we found that a simple rule and standard supervised learning substantially improve the GFSS performance. In this paper, we propose a simple yet effective method for GFSS that does not use the techniques mentioned above. Also, we theoretically show that our method perfectly maintains the segmentation performance of the base-class model over most of the base classes. Through numerical experiments, we demonstrated the effectiveness of our method. It improved novel-class segmentation performance in the 1-shot scenario by 6.1% on the PASCAL-$5^i$ dataset, 4.7% on the PASCAL-$10^i$ dataset, and 1.0% on the COCO-$20^i$ dataset. Our code is publicly available at https://github.com/IBM/BCM.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93582"} +{"video_file": "p3tSEFMwpG_39024694.mp4", "openreview_id": "p3tSEFMwpG", "slideslive_id": 39024694, "venue": "nips2024", "title": "Drift-Resilient TabPFN: In-Context Learning Temporal Distribution Shifts on Tabular Data", "status": "Poster", "keywords": "Temporal Distribution Shifts;In-Context Learning;Bayesian Inference;Prior-Data Fitted Networks;Temporal Domain Generalization;Structural Causal Models;TabPFN;Tabular Data Modeling;Out-Of-Distribution Generalization;Domain Generalization;Meta-Learning;Concept Drift", "tldr": "We present Drift-Resilient TabPFN, an approach using In-Context Learning via a Prior-Data Fitted Network, to address distribution shifts in tabular data, outperforming existing methods in terms of performance and calibration.", "abstract": "While most ML models expect independent and identically distributed data, this assumption is often violated in real-world scenarios due to distribution shifts, resulting in the degradation of machine learning model performance. Until now, no tabular method has consistently outperformed classical supervised learning, which ignores these shifts. To address temporal distribution shifts, we present Drift-Resilient TabPFN, a fresh approach based on In-Context Learning with a Prior-Data Fitted Network that learns the learning algorithm itself: it accepts the entire training dataset as input and makes predictions on the test set in a single forward pass. &#13;
Specifically, it learns to approximate Bayesian inference on synthetic datasets drawn from a prior that specifies the model's inductive bias. This prior is based on structural causal models (SCM), which gradually shift over time. To model shifts of these causal models, we use a secondary SCM that specifies changes in the primary model parameters. The resulting Drift-Resilient TabPFN can be applied to unseen data, runs in seconds on small to moderately sized datasets and needs no hyperparameter tuning. Comprehensive evaluations across 18 synthetic and real-world datasets demonstrate large performance improvements over a wide range of baselines, such as XGB, CatBoost, TabPFN, and applicable methods featured in the Wild-Time benchmark. Compared to the strongest baselines, it improves accuracy from 0.688 to 0.744 and ROC AUC from 0.786 to 0.832 while maintaining stronger calibration. This approach could serve as significant groundwork for further research on out-of-distribution prediction.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/93581"} +{"video_file": "p43ObIwJFW_39025150.mp4", "openreview_id": "p43ObIwJFW", "slideslive_id": 39025150, "venue": "nips2024", "title": "Learning to Solve Quadratic Unconstrained Binary Optimization in a Classification Way", "status": "Spotlight", "keywords": "quadratic unconstrained binary optimization;combinatorial optimization;machine learning;classification;neural solver", "tldr": "We propose a neural solver to solve quadratic unconstrained binary optimization in a classification way, which can achieve near-optimal solutions in milliseconds, even for instances comprising thousands of variables.", "abstract": "The quadratic unconstrained binary optimization (QUBO) is a well-known NP-hard problem that takes an $n \\times n$ matrix $Q$ as input and decides an $n$-dimensional 0-1 vector $x$ to optimize a quadratic function. Existing learning-based models that always formulate the solution process as sequential decisions suffer from high computational overload. To overcome this issue, we propose a neural solver called the Value Classification Model (VCM) that formulates the solution process from a classification perspective. It applies a Depth Value Network (DVN) based on graph convolution that exploits the symmetry property in $Q$ to auto-grasp value features. These features are then fed into a Value Classification Network (VCN) which directly generates classification solutions. Trained by a highly efficient model-tailored Greedy-guided Self Trainer (GST) which does not require any a priori optimal labels, VCM significantly outperforms competitors in both computational efficiency and solution quality with a remarkable generalization ability. It can achieve near-optimal solutions in milliseconds with an average optimality gap of just 0.362% on benchmarks with up to 2500 variables. Notably, a VCM trained at a specific DVN depth can steadily find better solutions by simply extending the testing depth, which narrows the gap to 0.034% on benchmarks. &#13;
To our knowledge, this is the first learning-based model to reach such a performance.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93580"} +{"video_file": "p50Dyqk0GX_39024862.mp4", "openreview_id": "p50Dyqk0GX", "slideslive_id": 39024862, "venue": "nips2024", "title": "Dual Risk Minimization: Towards Next-Level Robustness in Fine-tuning Zero-Shot Models", "status": "Poster", "keywords": "robustness;fine-tuning zero-shot models;CLIP;concept descriptions", "tldr": "Combining empirical and worst-case risk minimization enhances the robustness of fine-tuned CLIP models.", "abstract": "Fine-tuning foundation models often compromises their robustness to distribution shifts. To remedy this, most robust fine-tuning methods aim to preserve the pre-trained features. However, not all pre-trained features are robust and those methods are largely indifferent to which ones to preserve. We propose dual risk minimization (DRM), which combines empirical risk minimization with worst-case risk minimization, to better preserve the core features of downstream tasks. In particular, we utilize core-feature descriptions generated by LLMs to induce core-based zero-shot predictions which then serve as proxies to estimate the worst-case risk. DRM balances two crucial aspects of model robustness: expected performance and worst-case performance, establishing a new state of the art on various real-world benchmarks. DRM significantly improves the out-of-distribution performance of CLIP ViT-L/14@336 on ImageNet (75.9$\\to$77.1), WILDS-iWildCam (47.1$\\to$51.8), and WILDS-FMoW (50.7$\\to$53.1); opening up new avenues for robust fine-tuning. Our code is available at https://github.com/vaynexie/DRM.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/93578"} +{"video_file": "p54CYwdjVP_39025316.mp4", "openreview_id": "p54CYwdjVP", "slideslive_id": 39025316, "venue": "nips2024", "title": "A Globally Optimal Portfolio for m-Sparse Sharpe Ratio Maximization", "status": "Poster", "keywords": "Sharpe ratio;$\\ell_0$ constraint;proximal gradient algorithm;global optimality", "tldr": "We exploit the Kurdyka-Lojasiewicz property to develop an efficient proximal gradient algorithm that leads to a portfolio which achieves the globally optimal m-sparse Sharpe ratio.", "abstract": "The Sharpe ratio is an important and widely-used risk-adjusted return in financial engineering. In modern portfolio management, one may require an m-sparse (no more than m active assets) portfolio to save managerial and financial costs. However, few existing methods can optimize the Sharpe ratio with the m-sparse constraint, due to the nonconvexity and the complexity of this constraint. We propose to convert the m-sparse fractional optimization problem into an equivalent m-sparse quadratic programming problem. The semi-algebraic property of the resulting objective function allows us to exploit the Kurdyka-Lojasiewicz property to develop an efficient Proximal Gradient Algorithm (PGA) that leads to a portfolio which achieves the globally optimal m-sparse Sharpe ratio under certain conditions. The convergence rates of PGA are also provided. 
To the best of our knowledge, this is the first proposal that achieves a globally optimal m-sparse Sharpe ratio with a theoretically-sound guarantee.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93577"} +{"video_file": "pASJxzMJb7_39027098.mp4", "openreview_id": "pASJxzMJb7", "slideslive_id": 39027098, "venue": "nips2024", "title": "Zipfian Whitening", "status": "Poster", "keywords": "representation learning;word embeddings;isotropy;natural language processing", "tldr": "We propose a method to correct anisotropy in the word embedding space by accounting for word frequency; then justify our approach from the perspectives of generative models, word embedding norms, and errors in long-tail distributions.", "abstract": "The word embedding space in neural models is skewed, and correcting this can improve task performance. We point out that most approaches for modeling, correcting, and measuring the symmetry of an embedding space implicitly assume that the word frequencies are uniform; in reality, word frequencies follow a highly non-uniform distribution, known as Zipf's law. Surprisingly, simply performing PCA whitening weighted by the empirical word frequency that follows Zipf's law significantly improves task performance, surpassing established baselines. From a theoretical perspective, both our approach and existing methods can be clearly categorized: word representations are distributed according to an exponential family with either uniform or Zipfian base measures. By adopting the latter approach, we can naturally emphasize informative low-frequency words in terms of their vector norm, which becomes evident from the information-geometric perspective (Oyama et al., EMNLP 2023), and in terms of the loss functions for imbalanced classification (Menon et al. ICLR 2021). Additionally, our theory corroborates that popular natural language processing methods, such as skip-gram negative sampling (Mikolov et al., NIPS 2013), WhiteningBERT (Huang et al., Findings of EMNLP 2021), and headless language models (Godey et al., ICLR 2024), work well just because their word embeddings encode the empirical word frequency into the underlying probabilistic model.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93576"} +{"video_file": "pG380vLYRU_39028731.mp4", "openreview_id": "pG380vLYRU", "slideslive_id": 39028731, "venue": "nips2024", "title": "Faster Accelerated First-order Methods for Convex Optimization with Strongly Convex Function Constraints", "status": "Poster", "keywords": "Convex optimization; Accelerated primal dual algorithm; Sparse Optimization", "tldr": "We propose faster accelerated primal-dual algorithms for minimizing a convex function subject to strongly convex function constraints.", "abstract": "In this paper, we introduce faster accelerated primal-dual algorithms for minimizing a convex function subject to strongly convex function constraints. Prior to our work, the best complexity bound was $O(1/\\varepsilon)$, regardless of the strong convexity of the constraint function. It is unclear whether the strong convexity assumption can enable even better convergence results. To address this issue, we have developed novel techniques to progressively estimate the strong convexity of the Lagrangian function. Our approach, for the first time, effectively leverages the constraint strong convexity, obtaining an improved complexity of $O(1/\\sqrt{\\varepsilon})$. &#13;
This rate matches the complexity lower bound for strongly-convex-concave saddle point optimization and is therefore order-optimal. We show the superior performance of our methods in sparsity-inducing constrained optimization, notably Google's personalized PageRank problem. Furthermore, we show that a restarted version of the proposed methods can effectively identify the optimal solution's sparsity pattern within a finite number of steps, a result that appears to have independent significance.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93571"} +{"video_file": "pGEY8JQ3qx_39025413.mp4", "openreview_id": "pGEY8JQ3qx", "slideslive_id": 39025413, "venue": "nips2024", "title": "Span-Based Optimal Sample Complexity for Weakly Communicating and General Average Reward MDPs", "status": "Oral", "keywords": "reinforcement learning theory;average reward;sample complexity", "tldr": "We resolve the span-based sample complexity of weakly communicating average reward MDPs and initiate the study of general multichain MDPs, obtaining minimax optimal bounds and uncovering improved horizon dependence for fixed discounted MDP instances.", "abstract": "We study the sample complexity of learning an $\\varepsilon$-optimal policy in an average-reward Markov decision process (MDP) under a generative model. For weakly communicating MDPs, we establish the complexity bound $\\widetilde{O}\\left(\\frac{SAH}{\\varepsilon^{2}}\\right)$, where $H$ is the span of the bias function of the optimal policy and $SA$ is the cardinality of the state-action space. Our result is the first that is minimax optimal (up to log factors) in all parameters $S$, $A$, $H$, and $\\varepsilon$, improving on existing work that either assumes uniformly bounded mixing times for all policies or has suboptimal dependence on the parameters. We also initiate the study of sample complexity in general (multichain) average-reward MDPs. We argue a new transient time parameter $B$ is necessary, establish an $\\widetilde{O}\\left(\\frac{SA(B+H)}{\\varepsilon^{2}}\\right)$ complexity bound, and prove a matching (up to log factors) minimax lower bound. Both results are based on reducing the average-reward MDP to a discounted MDP, which requires new ideas in the general setting. To optimally analyze this reduction, we develop improved bounds for $\\gamma$-discounted MDPs, showing that $\\widetilde{O}\\left(\\frac{SAH}{(1-\\gamma)^{2}\\varepsilon^{2}}\\right)$ and $\\widetilde{O}\\left(\\frac{SA(B+H)}{(1-\\gamma)^{2}\\varepsilon^{2}}\\right)$ samples suffice to learn $\\varepsilon$-optimal policies in weakly communicating and in general MDPs, respectively. &#13;
Both these results circumvent the well-known minimax lower bound of $\\widetilde{\\Omega}\\left(\\frac{SA}{(1-\\gamma)^{3}\\varepsilon^{2}}\\right)$ for $\\gamma$-discounted MDPs, and establish a quadratic rather than cubic horizon dependence for a fixed MDP instance.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93570"} +{"video_file": "pGOBEYcXzs_39025208.mp4", "openreview_id": "pGOBEYcXzs", "slideslive_id": 39025208, "venue": "nips2024", "title": "Mixture of Scales: Memory-Efficient Token-Adaptive Binarization for Large Language Models", "status": "Poster", "keywords": "Large Language Models;Binarization;Quantization;Model Compression;Efficient LLM", "tldr": "BinaryMoS introduces memory-efficient token-adaptive binarization for LLMs, reducing model size and enhancing representation ability of binarized weights", "abstract": "Binarization, which converts weight parameters to binary values, has emerged as an effective strategy to reduce the size of large language models (LLMs). However, typical binarization techniques significantly diminish linguistic effectiveness of LLMs. To address this issue, we introduce a novel binarization technique called Mixture of Scales (BinaryMoS). Unlike conventional methods, BinaryMoS employs multiple scaling experts for binary weights, dynamically merging these experts for each token to adaptively generate scaling factors. This token-adaptive approach boosts the representational power of binarized LLMs by enabling contextual adjustments to the values of binary weights. Moreover, because this adaptive process only involves the scaling factors rather than the entire weight matrix, BinaryMoS maintains compression efficiency similar to traditional static binarization methods. Our experimental results reveal that BinaryMoS surpasses conventional binarization techniques in various natural language processing tasks and even outperforms 2-bit quantization methods, all while maintaining similar model size to static binarization techniques.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93569"} +{"video_file": "pJlFURyTG5_39026691.mp4", "openreview_id": "pJlFURyTG5", "slideslive_id": 39026691, "venue": "nips2024", "title": "Scalable Constrained Policy Optimization for Safe Multi-agent Reinforcement Learning", "status": "Poster", "keywords": "Multi-agent reinforcement learning;policy optimization;safe learning;scalable method", "tldr": "We develop a novel scalable multi-agent constrained policy optimization method and prove that the safety constraints and the joint policy improvement can be met when each agent adopts a sequential update scheme to optimize a $\\kappa$-hop policy.", "abstract": "A challenging problem in seeking to bring multi-agent reinforcement learning (MARL) techniques into real-world applications, such as autonomous driving and drone swarms, is how to control multiple agents safely and cooperatively to accomplish tasks. Most existing safe MARL methods learn the centralized value function by introducing a global state to guide safety cooperation. However, the global coupling arising from agents\u2019 safety constraints and the exponential growth of the state-action space size limit their applicability in instant communication or computing resource-constrained systems and larger multi-agent systems. In this paper, we develop a novel scalable and theoretically-justified multi-agent constrained policy optimization method. &#13;
This method utilizes the rigorous bounds of the trust region method and the bounds of the truncated advantage function to provide a new local policy optimization objective for each agent. Also, we prove that the safety constraints and the joint policy improvement can be met when each agent adopts a sequential update scheme to optimize a $\\kappa$-hop policy. Then, we propose a practical algorithm called Scalable MAPPO-Lagrangian (Scal-MAPPO-L). The proposed method\u2019s effectiveness is verified on a collection of benchmark tasks, and the results support our theory that decentralized training with local interactions can still improve reward performance and satisfy safety constraints.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93564"} +{"video_file": "pLoX8Og3bH_39026483.mp4", "openreview_id": "pLoX8Og3bH", "slideslive_id": 39026483, "venue": "nips2024", "title": "Unleashing Multispectral Video's Potential in Semantic Segmentation: A Semi-supervised Viewpoint and New UAV-View Benchmark", "status": "Poster", "keywords": "Computer Vision;Deep Learning;Semantic Segmentation", "tldr": "In this work, we propose the SemiMV baseline, the first approach to apply semi-supervised learning specifically for multispectral video semantic segmentation, and introduce a new UAV-view dataset to advance research in this field.", "abstract": "Thanks to the rapid progress in RGB & thermal imaging, also known as multispectral imaging, the task of multispectral video semantic segmentation, or MVSS in short, has recently drawn significant attention. Noticeably, it offers new opportunities in improving segmentation performance under unfavorable visual conditions such as poor light or overexposure. Unfortunately, there are currently very few datasets available, including, for example, the MVSeg dataset, which focuses purely on the eye-level view; it also features sparse annotations due to the intensive demands of the labeling process. To address these key challenges of the MVSS task, this paper presents two major contributions: the introduction of MVUAV, a new MVSS benchmark dataset, and the development of a dedicated semi-supervised MVSS baseline - SemiMV. Our MVUAV dataset is captured via Unmanned Aerial Vehicles (UAV), which offers a unique oblique bird\u2019s-eye view complementary to the existing MVSS datasets; it also encompasses a broad range of day/night lighting conditions and over 30 semantic categories. In the meantime, to better leverage the sparse annotations and extra unlabeled RGB-Thermal videos, a semi-supervised learning baseline, SemiMV, is proposed to enforce consistency regularization through a dedicated Cross-collaborative Consistency Learning (C3L) module and a denoised temporal aggregation strategy. &#13;
Comprehensive empirical evaluations on both MVSeg and MVUAV benchmark datasets have showcased the efficacy of our SemiMV baseline.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93562"} +{"video_file": "pMaCRgu8GV_39024901.mp4", "openreview_id": "pMaCRgu8GV", "slideslive_id": 39024901, "venue": "nips2024", "title": "Artificial Generational Intelligence: Cultural Accumulation in Reinforcement Learning", "status": "Poster", "keywords": "social learning;cultural accumulation;in-context learning;reinforcement learning", "tldr": "We investigate cultural accumulation among reinforcement learning agents and demonstrate that it can improve upon \"single-lifetime\" learning.", "abstract": "Cultural accumulation drives the open-ended and diverse progress in capabilities spanning human history. It builds an expanding body of knowledge and skills by combining individual exploration with inter-generational information transmission. Despite its widespread success among humans, the capacity for artificial learning agents to accumulate culture remains under-explored. In particular, approaches to reinforcement learning typically strive for improvements over only a single lifetime. Generational algorithms that do exist fail to capture the open-ended, emergent nature of cultural accumulation, which allows individuals to trade-off innovation and imitation. Building on the previously demonstrated ability for reinforcement learning agents to perform social learning, we find that training setups which balance this with independent learning give rise to cultural accumulation. These accumulating agents outperform those trained for a single lifetime with the same cumulative experience. We explore this accumulation by constructing two models under two distinct notions of a generation: episodic generations, in which accumulation occurs via in-context learning and train-time generations, in which accumulation occurs via in-weights learning. In-context and in-weights cultural accumulation can be interpreted as analogous to knowledge and skill accumulation, respectively. To the best of our knowledge, this work is the first to present general models that achieve emergent cultural accumulation in reinforcement learning, opening up new avenues towards more open-ended learning systems, as well as presenting new opportunities for modelling human culture.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/93559"} +{"video_file": "pOXgdFEB7q_39027460.mp4", "openreview_id": "pOXgdFEB7q", "slideslive_id": 39027460, "venue": "nips2024", "title": "What Variables Affect Out-of-Distribution Generalization in Pretrained Models?", "status": "Poster", "keywords": "Image Embeddings;Out-of-Distribution Generalization;Tunnel Effect;Neural Collapse", "tldr": "We identify what variables matter most in out-of-distribution generalization of embeddings and we show that the tunnel effect hypothesis proposed in NeurIPS-2023 is not universal.", "abstract": "Embeddings produced by pre-trained deep neural networks (DNNs) are widely used; however, their efficacy for downstream tasks can vary widely. We study the factors influencing transferability and out-of-distribution (OOD) generalization of pre-trained DNN embeddings through the lens of the tunnel effect hypothesis, which is closely related to intermediate neural collapse. This hypothesis suggests that deeper DNN layers compress representations and hinder OOD generalization. 
Contrary to earlier work, our experiments show this is not a universal phenomenon. We comprehensively investigate the impact of DNN architecture, training data, image resolution, and augmentations on transferability. We identify that training with high-resolution datasets containing many classes greatly reduces representation compression and improves transferability. Our results emphasize the danger of generalizing findings from toy datasets to broader contexts.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/93557"} +{"video_file": "pPSWHsgqRp_39026708.mp4", "openreview_id": "pPSWHsgqRp", "slideslive_id": 39026708, "venue": "nips2024", "title": "Smoothie: Label Free Language Model Routing", "status": "Poster", "keywords": "large language models;weak supervision;graphical models;routing", "tldr": "We propose an algorithm for learning LLM routers without labeled data.", "abstract": "Large language models (LLMs) are increasingly used in applications where LLM inputs may span many different tasks. Recent work has found that the choice of LLM is consequential, and different LLMs may be good for different input samples. Prior approaches have thus explored how engineers might select an LLM to use for each sample (i.e. routing). While existing routing methods mostly require training auxiliary models on human-annotated data, our work explores whether it is possible to perform unsupervised routing. We propose Smoothie, a weak supervision-inspired routing approach that requires no labeled data. Given a set of outputs from different LLMs, Smoothie constructs a latent variable graphical model over embedding representations of observable LLM outputs and unknown \u201ctrue\u201d outputs. Using this graphical model, we estimate sample-dependent quality scores for each LLM, and route each sample to the LLM with the highest corresponding score. We find that Smoothie's LLM quality-scores correlate with ground-truth model quality (correctly identifying the optimal model on 9/14 tasks), and that Smoothie outperforms baselines for routing by up to 10 points accuracy.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93556"} +{"video_file": "pPeXYByHNd_39025957.mp4", "openreview_id": "pPeXYByHNd", "slideslive_id": 39025957, "venue": "nips2024", "title": "MSAGPT: Neural Prompting Protein Structure Prediction via MSA Generative Pre-Training", "status": "Poster", "keywords": "Computational Biology;Protein Language Model;Protein Structure Prediction;MSA Generative Pre-Training", "tldr": "We propose a novel MSA generative pre-training framework to yield faithful and informative MSA to promote structure prediction accuracy in a low-MSA regime. Studies of transfer learning also show its great potential to benefit other protein tasks.", "abstract": "Multiple Sequence Alignment (MSA) plays a pivotal role in unveiling the evolutionary trajectories of protein families. The accuracy of protein structure predictions is often compromised for protein sequences that lack sufficient homologous information to construct high-quality MSA. Although various methods have been proposed to generate high-quality MSA under these conditions, they fall short in comprehensively capturing the intricate co-evolutionary patterns within MSA or require guidance from external oracle models. Here we introduce MSAGPT, a novel approach to prompt protein structure predictions via MSA generative pre-training in a low-MSA regime. 
MSAGPT employs a simple yet effective 2D evolutionary positional encoding scheme to model the complex evolutionary patterns. Endowed by this, the flexible 1D MSA decoding framework facilitates zero- or few-shot learning. Moreover, we demonstrate leveraging the feedback from AlphaFold2 (AF2) can further enhance the model\u2019s capacity via Rejective Fine-tuning (RFT) and Reinforcement Learning from AF2 Feedback (RLAF). Extensive experiments confirm the efficacy of MSAGPT in generating faithful and informative MSA (up to +8.5% TM-Score on few-shot scenarios). The transfer learning also demonstrates its great potential for the wide range of tasks resorting to the quality of MSA.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/93555"} +{"video_file": "pRQmRaonxf_39028663.mp4", "openreview_id": "pRQmRaonxf", "slideslive_id": 39028663, "venue": "nips2024", "title": "Transformers as Game Players: Provable In-context Game-playing Capabilities of Pre-trained Models", "status": "Poster", "keywords": "In-context Learning; Multi-agent Competitive Games; Transformers; Decision-making", "tldr": "This work provides a theoretical understanding of the in-context game-playing capabilities of pre-trained transformers, broadening the research scope of in-context RL from the single-agent scenario to the multi-agent competitive games.", "abstract": "The in-context learning (ICL) capability of pre-trained models based on the transformer architecture has received growing interest in recent years. While theoretical understanding has been obtained for ICL in reinforcement learning (RL), the previous results are largely confined to the single-agent setting. This work proposes to further explore the in-context learning capabilities of pre-trained transformer models in competitive multi-agent games, i.e., in-context game-playing (ICGP). Focusing on the classical two-player zero-sum games, theoretical guarantees are provided to demonstrate that pre-trained transformers can provably learn to approximate Nash equilibrium in an in-context manner for both decentralized and centralized learning settings. As a key part of the proof, constructional results are established to demonstrate that the transformer architecture is sufficiently rich to realize celebrated multi-agent game-playing algorithms, in particular, decentralized V-learning and centralized VI-ULCB.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93552"} +{"video_file": "pU0z2sNM1M_39026362.mp4", "openreview_id": "pU0z2sNM1M", "slideslive_id": 39026362, "venue": "nips2024", "title": "Causal Dependence Plots", "status": "Poster", "keywords": "Interpretable machine learning;interpretability;explainable AI;explainability;causality;partial dependence plots;total dependence plots;model agnostic explanations", "tldr": "We introduce a framework for creating model explanation plots based explicitly on causal relationships and illustrate several types including the popular existing method of partial dependence plots as a special case", "abstract": "To use artificial intelligence and machine learning models wisely we must understand how they interact with the world, including how they depend causally on data inputs. In this work we develop Causal Dependence Plots (CDPs) to visualize how a model's predicted outcome depends on changes in a given predictor along with consequent causal changes in other predictor variables. 
Crucially, this differs from standard methods based on independence or holding other predictors constant, such as regression coefficients or Partial Dependence Plots (PDPs). Our explanatory framework generalizes PDPs, including them as a special case, as well as a variety of other interpretive plots that show, for example, the total, direct, and indirect effects of causal mediation. We demonstrate with simulations and real data experiments how CDPs can be combined in a modular way with methods for causal learning or sensitivity analysis. Since people often think causally about input-output dependence, CDPs can be powerful tools in the xAI or interpretable machine learning toolkit and contribute to applications like scientific machine learning and algorithmic fairness.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/93550"} +{"video_file": "pW9Jwim918_39026716.mp4", "openreview_id": "pW9Jwim918", "slideslive_id": 39026716, "venue": "nips2024", "title": "ReMoDetect: Reward Models Recognize Aligned LLM's Generations", "status": "Poster", "keywords": "Large Language Model;LLM Generated Text Detection;Reward Model", "tldr": "We propose an effective aligned LLM-generated text detection method using a reward model.", "abstract": "The remarkable capabilities and easy accessibility of large language models (LLMs) have significantly increased societal risks (e.g., fake news generation), necessitating the development of LLM-generated text (LGT) detection methods for safe usage. However, detecting LGTs is challenging due to the vast number of LLMs, making it impractical to account for each LLM individually; hence, it is crucial to identify the common characteristics shared by these models. In this paper, we draw attention to a common feature of recent powerful LLMs, namely the alignment training, i.e., training LLMs to generate human-preferable texts. Our key finding is that as these aligned LLMs are trained to maximize human preferences, they generate texts with higher estimated preferences even than human-written texts; thus, such texts are easily detected by using the reward model (i.e., an LLM trained to model human preference distribution). Based on this finding, we propose two training schemes to further improve the detection ability of the reward model, namely (i) continual preference fine-tuning to make the reward model prefer aligned LGTs even further and (ii) reward modeling of Human/LLM mixed texts (texts rephrased from human-written texts using aligned LLMs), which serves as a median preference text corpus between LGTs and human-written texts to learn the decision boundary better. We provide an extensive evaluation by considering six text domains across twelve aligned LLMs, where our method demonstrates state-of-the-art results.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93548"} +{"video_file": "pWowK7jqok_39028636.mp4", "openreview_id": "pWowK7jqok", "slideslive_id": 39028636, "venue": "nips2024", "title": "E-Motion: Future Motion Simulation via Event Sequence Diffusion", "status": "Poster", "keywords": "Event-based vision;video diffusion model", "tldr": "We propose to integrate the strong learning capacity of the video diffusion model with the rich motion information of an event camera as a motion prediction framework.", "abstract": "Forecasting a typical object's future motion is a critical task for interpreting and interacting with dynamic environments in computer vision. &#13;
Event-based sensors, which could capture changes in the scene with exceptional temporal granularity, may potentially offer a unique opportunity to predict future motion with a level of detail and precision previously unachievable. Inspired by that, we propose to integrate the strong learning capacity of the video diffusion model with the rich motion information of an event camera as a motion simulation framework. Specifically, we initially employ pre-trained stable video diffusion models to adapt the event sequence dataset. This process facilitates the transfer of extensive knowledge from RGB videos to an event-centric domain. Moreover, we introduce an alignment mechanism that utilizes reinforcement learning techniques to enhance the reverse generation trajectory of the diffusion model, ensuring improved performance and accuracy. Through extensive testing and validation, we demonstrate the effectiveness of our method in various complex scenarios, showcasing its potential to revolutionize motion flow prediction in computer vision applications such as autonomous vehicle guidance, robotic navigation, and interactive media. Our findings suggest a promising direction for future research in enhancing the interpretative power and predictive accuracy of computer vision systems. The source code is publicly available at https://github.com/p4r4mount/E-Motion.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93547"} +{"video_file": "paYwtPBpyZ_39025655.mp4", "openreview_id": "paYwtPBpyZ", "slideslive_id": 39025655, "venue": "nips2024", "title": "Sequence-Augmented SE(3)-Flow Matching For Conditional Protein Generation", "status": "Poster", "keywords": "Proteins;Flow Matching;Generative Models", "tldr": "We propose FoldFlow++ a new sequence conditioned protein structure generative model using flow-matching which can be finetuned for Motif Scaffolding.", "abstract": "Proteins are essential for almost all biological processes and derive their diverse functions from complex $3 \\rm D$ structures, which are in turn determined by their amino acid sequences. In this paper, we exploit the rich biological inductive bias of amino acid sequences and introduce FoldFlow++, a novel sequence-conditioned $\\text{SE}(3)$-equivariant flow matching model for protein structure generation. FoldFlow++ presents substantial new architectural features over the previous FoldFlow family of models including a protein large language model to encode sequence, a new multi-modal fusion trunk that combines structure and sequence representations, and a geometric transformer based decoder. To increase diversity and novelty of generated samples -- crucial for de-novo drug design -- we train FoldFlow++ at scale on a new dataset that is an order of magnitude larger than PDB datasets of prior works, containing both known proteins in PDB and high-quality synthetic structures achieved through filtering. We further demonstrate the ability to align FoldFlow++ to arbitrary rewards, e.g. increasing secondary structures diversity, by introducing a Reinforced Finetuning (ReFT) objective. We empirically observe that FoldFlow++ outperforms previous state-of-the-art protein structure-based generative models, improving over RFDiffusion in terms of unconditional generation across all metrics including designability, diversity, and novelty across all protein lengths, as well as exhibiting generalization on the task of equilibrium conformation sampling. 
Finally, we demonstrate that a fine-tuned FoldFlow++ makes progress on challenging conditional design tasks such as designing scaffolds for the VHH nanobody.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/93544"} +{"video_file": "pebP89l4v6_39026186.mp4", "openreview_id": "pebP89l4v6", "slideslive_id": 39026186, "venue": "nips2024", "title": "Sharing Key Semantics in Transformer Makes Efficient Image Restoration", "status": "Poster", "keywords": "Low-level Vision;Image Restoration;Vision Transformer", "tldr": "We propose SemanIR, a Transformer-based approach for image restoration that optimizes attention computation by focusing on semantically relevant regions, achieving linear complexity and state-of-the-art results across multiple tasks.", "abstract": "Image Restoration (IR), a classic low-level vision task, has witnessed significant advancements through deep models that effectively model global information. Notably, the emergence of Vision Transformers (ViTs) has further propelled these advancements. When computing, the self-attention mechanism, a cornerstone of ViTs, tends to encompass all global cues, even those from semantically unrelated objects or regions. This inclusivity introduces computational inefficiencies, particularly noticeable with high input resolution, as it requires processing irrelevant information, thereby impeding efficiency. Additionally, for IR, it is commonly noted that small segments of a degraded image, particularly those closely aligned semantically, provide particularly relevant information to aid in the restoration process, as they contribute essential contextual cues crucial for accurate reconstruction. To address these challenges, we propose boosting IR's performance by sharing the key semantics via Transformer for IR (i.e., SemanIR) in this paper. Specifically, SemanIR initially constructs a sparse yet comprehensive key-semantic dictionary within each transformer stage by establishing essential semantic connections for every degraded patch. Subsequently, this dictionary is shared across all subsequent transformer blocks within the same stage. This strategy optimizes attention calculation within each block by focusing exclusively on semantically related components stored in the key-semantic dictionary. As a result, attention calculation achieves linear computational complexity within each window. Extensive experiments across 6 IR tasks confirm the proposed SemanIR's state-of-the-art performance, quantitatively and qualitatively showcasing advancements. The visual results, code, and trained models are available at: https://github.com/Amazingren/SemanIR.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93541"} +{"video_file": "pjD08dtAh0_39026601.mp4", "openreview_id": "pjD08dtAh0", "slideslive_id": 39026601, "venue": "nips2024", "title": "HumanVLA: Towards Vision-Language Directed Object Rearrangement by Physical Humanoid", "status": "Poster", "keywords": "Human-Scene Interaction; Object Rearrangement; Vision-Language-Action Model; Physical Humanoid", "tldr": "We introduce HumanVLA, which performs a varity of object rearrangement tasks directed by vision and language by physical humanoid.", "abstract": "Physical Human-Scene Interaction (HSI) plays a crucial role in numerous applications. However, existing HSI techniques are limited to specific object dynamics and privileged information, which prevents the development of more comprehensive applications. 
To address this limitation, we introduce HumanVLA for general object rearrangement directed by practical vision and language. A teacher-student framework is utilized to develop HumanVLA. A state-based teacher policy is trained first using goal-conditioned reinforcement learning and adversarial motion prior. Then, it is distilled into a vision-language-action model via behavior cloning. We propose several key insights to facilitate the large-scale learning process. To support general object rearrangement by physical humanoid, we introduce a novel Human-in-the-Room dataset encompassing various rearrangement tasks. Through extensive experiments and analysis, we demonstrate the effectiveness of our approach.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93535"} +{"video_file": "plH8gW7tPQ_39025253.mp4", "openreview_id": "plH8gW7tPQ", "slideslive_id": 39025253, "venue": "nips2024", "title": "Algorithmic Capabilities of Random Transformers", "status": "Poster", "keywords": "transformer;deep learning;interpretability;capability;emergence;randomness;language models", "tldr": "Randomly initialized transformers may be more powerful than you think!", "abstract": "Trained transformer models have been found to implement interpretable procedures for tasks like arithmetic and associative recall, but little is understood about how the circuits that implement these procedures originate during training. To what extent do they depend on the supervisory signal provided to models, and to what extent are they attributable to behavior already present in models at the beginning of training? To investigate these questions, we investigate what functions can be learned by randomly initialized transformers in which only the embedding layers are optimized, so that the only input--output mappings learnable from data are those already implemented (up to a choice of encoding scheme) by the randomly initialized model. We find that these random transformers can perform a wide range of meaningful algorithmic tasks, including modular arithmetic, in-weights and in-context associative recall, decimal addition, parenthesis balancing, and even some aspects of natural language text generation. Our results indicate that some algorithmic capabilities are present in transformers (and accessible via appropriately structured inputs) even before these models are trained.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/93533"} +{"video_file": "pnmUiVAGnv_39025687.mp4", "openreview_id": "pnmUiVAGnv", "slideslive_id": 39025687, "venue": "nips2024", "title": "CAT: Coordinating Anatomical-Textual Prompts for Multi-Organ and Tumor Segmentation", "status": "Poster", "keywords": "Promptable model;Visual-Textual prompt;Multi-organ and tumor segmentation", "tldr": "We intro a novel automatic model that coordinates anatomical-textual prompts for multi-organ and tumor segmentation.", "abstract": "Existing promptable segmentation methods in the medical imaging field primarily consider either textual or visual prompts to segment relevant objects, yet they often fall short when addressing anomalies in medical images, like tumors, which may vary greatly in shape, size, and appearance. Recognizing the complexity of medical scenarios and the limitations of textual or visual prompts, we propose a novel dual-prompt schema that leverages the complementary strengths of visual and textual prompts for segmenting various organs and tumors. 
Specifically, we introduce $\\textbf{\\textit{CAT}}$, an innovative model that $\\textbf{C}$oordinates $\\textbf{A}$natomical prompts derived from 3D cropped images with $\\textbf{T}$extual prompts enriched by medical domain knowledge. The model architecture adopts a general query-based design, where prompt queries facilitate segmentation queries for mask prediction. To synergize two types of prompts within a unified framework, we implement a ShareRefiner, which refines both segmentation and prompt queries while disentangling the two types of prompts. Trained on a consortium of 10 public CT datasets, $\\textbf{\\textit{CAT}}$ demonstrates superior performance in multiple segmentation tasks. Further validation on a specialized in-house dataset reveals the remarkable capacity of segmenting tumors across multiple cancer stages. This approach confirms that coordinating multimodal prompts is a promising avenue for addressing complex scenarios in the medical domain.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/93532"} +{"video_file": "pqD7ckR8AF_39027721.mp4", "openreview_id": "pqD7ckR8AF", "slideslive_id": 39027721, "venue": "nips2024", "title": "SuperDeepFool: a new fast and accurate minimal adversarial attack", "status": "Poster", "keywords": "Deep Learning;Adversarial Attacks;Robustness;Interpretable AI;ML Security", "tldr": "We have introduced a family of parameter-free, fast, and parallelizable algorithms for crafting optimal adversarial perturbations.", "abstract": "Deep neural networks have been known to be vulnerable to adversarial examples, which are inputs that are modified slightly to fool the network into making incorrect predictions. This has led to a significant amount of research on evaluating the robustness of these networks against such perturbations. One particularly important robustness metric is the robustness to minimal $\\ell_2$ adversarial perturbations. However, existing methods for evaluating this robustness metric are either computationally expensive or not very accurate. In this paper, we introduce a new family of adversarial attacks that strike a balance between effectiveness and computational efficiency. Our proposed attacks are generalizations of the well-known DeepFool (DF) attack, while they remain simple to understand and implement. We demonstrate that our attacks outperform existing methods in terms of both effectiveness and computational efficiency. Our proposed attacks are also suitable for evaluating the robustness of large models and can be used to perform adversarial training (AT) to achieve state-of-the-art robustness to minimal $\\ell_2$ adversarial perturbations.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/93530"} +{"video_file": "prXfM5X2Db_39026095.mp4", "openreview_id": "prXfM5X2Db", "slideslive_id": 39026095, "venue": "nips2024", "title": "Frieren: Efficient Video-to-Audio Generation Network with Rectified Flow Matching", "status": "Poster", "keywords": "video-to-audio generation;rectified flow model;efficient generation", "tldr": "We design a video-to-audio generation model with higher quality and fewer sampling steps.", "abstract": "Video-to-audio (V2A) generation aims to synthesize content-matching audio from silent video, and it remains challenging to build V2A models with high generation quality, efficiency, and visual-audio temporal synchrony. We propose Frieren, a V2A model based on rectified flow matching. &#13;
Frieren regresses the conditional transport vector field from noise to spectrogram latent with straight paths and conducts sampling by solving ODE, outperforming autoregressive and score-based models in terms of audio quality. By employing a non-autoregressive vector field estimator based on a feed-forward transformer and channel-level cross-modal feature fusion with strong temporal alignment, our model generates audio that is highly synchronized with the input video. Furthermore, through reflow and one-step distillation with guided vector field, our model can generate decent audio in a few, or even only one sampling step. Experiments indicate that Frieren achieves state-of-the-art performance in both generation quality and temporal alignment on VGGSound, with alignment accuracy reaching 97.22%, and 6.2% improvement in inception score over the strong diffusion-based baseline. Audio samples and code are available at http://frieren-v2a.github.io.", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/93527"} +{"video_file": "prgxz9fYbf_39026726.mp4", "openreview_id": "prgxz9fYbf", "slideslive_id": 39026726, "venue": "nips2024", "title": "Stochastic Kernel Regularisation Improves Generalisation in Deep Kernel Machines", "status": "Poster", "keywords": "gaussian process;deep gaussian process;kernel methods;representation learning", "tldr": "We improve an existing kernel method to achieve 94.5% test accuracy on CIFAR-10, a significant increase over the current SOTA for kernel methods.", "abstract": "Recent work developed convolutional deep kernel machines, achieving 92.7% test accuracy on CIFAR-10 using a ResNet-inspired architecture, which is SOTA for kernel methods. However, this still lags behind neural networks, which easily achieve over 94% test accuracy with similar architectures. In this work we introduce several modifications to improve the convolutional deep kernel machine\u2019s generalisation, including stochastic kernel regularisation, which adds noise to the learned Gram matrices during training. The resulting model achieves 94.5% test accuracy on CIFAR-10. This finding has important theoretical and practical implications, as it demonstrates that the ability to perform well on complex tasks like image classification is not unique to neural networks. Instead, other approaches including deep kernel methods can achieve excellent performance on such tasks, as long as they have the capacity to learn representations from data.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93525"} +{"video_file": "pwKkNSuuEs_39027353.mp4", "openreview_id": "pwKkNSuuEs", "slideslive_id": 39027353, "venue": "nips2024", "title": "Abstracted Shapes as Tokens - A Generalizable and Interpretable Model for Time-series Classification", "status": "Poster", "keywords": "Time-series;Interpretability;Self-supervised Learning;Pre-trained Model", "tldr": "A generalizable, interpretable, pre-trained model for time-series modeling and classification.", "abstract": "In time-series analysis, many recent works seek to provide a unified view and representation for time-series across multiple domains, leading to the development of foundation models for time-series data. Despite diverse modeling techniques, existing models are black boxes and fail to provide insights and explanations about their representations. 
In this paper, we present VQShape, a pre-trained, generalizable, and interpretable model for time-series representation learning and classification. By introducing a novel representation for time-series data, we forge a connection between the latent space of VQShape and shape-level features. Using vector quantization, we show that time-series from different domains can be described using a unified set of low-dimensional codes, where each code can be represented as an abstracted shape in the time domain. On classification tasks, we show that the representations of VQShape can be utilized to build interpretable classifiers, achieving comparable performance to specialist models. Additionally, in zero-shot learning, VQShape and its codebook can generalize to previously unseen datasets and domains that are not included in the pre-training process. The code and pre-trained weights are available at https://github.com/YunshiWen/VQShape.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/93522"} +{"video_file": "pwLdvYIMrF_39027401.mp4", "openreview_id": "pwLdvYIMrF", "slideslive_id": 39027401, "venue": "nips2024", "title": "Train-Attention: Meta-Learning Where to Focus in Continual Knowledge Learning", "status": "Poster", "keywords": "continual learning;continual knowledge learning;large language models;meta-learning;train-attention;token weight", "tldr": "enhancing continual knowledge learning performance through meta-learning based token weighted learning method", "abstract": "Previous studies on continual knowledge learning (CKL) in large language models (LLMs) have predominantly focused on approaches such as regularization, architectural modifications, and rehearsal techniques to mitigate catastrophic forgetting. However, these methods naively inherit the inefficiencies of standard training procedures, indiscriminately applying uniform weight across all tokens, which can lead to unnecessary parameter updates and increased forgetting. To address these shortcomings, we propose a novel CKL approach termed Train-Attention-Augmented Language Model (TAALM), which enhances learning efficiency by dynamically predicting and applying weights to tokens based on their usefulness. This method employs a meta-learning framework that optimizes token importance predictions, facilitating targeted knowledge updates and minimizing forgetting. Also, we observe that existing benchmarks do not clearly exhibit the trade-off between learning and retaining, therefore we propose a new benchmark, LAMA-ckl, to address this issue. Through experiments conducted on both newly introduced and established CKL benchmarks, TAALM proves the state-of-the-art performance upon the baselines, and also shows synergistic compatibility when integrated with previous CKL approaches. The code and the dataset are available online.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93521"} +{"video_file": "pwRVGRWtGg_39026294.mp4", "openreview_id": "pwRVGRWtGg", "slideslive_id": 39026294, "venue": "nips2024", "title": "Apathetic or Empathetic? 
Evaluating LLMs' Emotional Alignments with Humans", "status": "Poster", "keywords": "LLM;Evaluation;Emotions", "tldr": "We introduce EmotionBench (which includes eight negative emotions) to evaluate LLMs' emotional alignments with human norms collected from >1200 human responses.", "abstract": "Evaluating Large Language Models\u2019 (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes seven LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4, Mixtral-8x22B, and LLaMA-3.1. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, i.e., EmotionBench, are publicly available at https://github.com/CUHK-ARISE/EmotionBench.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93520"}
{"video_file": "q7TxGUWlhD_39028047.mp4", "openreview_id": "q7TxGUWlhD", "slideslive_id": 39028047, "venue": "nips2024", "title": "N-agent Ad Hoc Teamwork", "status": "Poster", "keywords": "ad hoc teamwork;reinforcement learning;multi-agent systems;multi-agent reinforcement learning", "tldr": "Proposes a generalization of ad hoc teamwork to the N-agent setting.", "abstract": "Current approaches to learning cooperative multi-agent behaviors assume relatively restrictive settings. In standard fully cooperative multi-agent reinforcement learning, the learning algorithm controls all agents in the scenario, while in ad hoc teamwork, the learning algorithm usually assumes control over only a single agent in the scenario. However, many cooperative settings in the real world are much less restrictive. For example, in an autonomous driving scenario, a company might train its cars with the same learning algorithm, yet once on the road, these cars must cooperate with cars from another company. Towards expanding the class of scenarios that cooperative learning methods may optimally address, we introduce $N$-agent ad hoc teamwork (NAHT), where a set of autonomous agents must interact and cooperate with dynamically varying numbers and types of teammates. This paper formalizes the problem, and proposes the Policy Optimization with Agent Modelling (POAM) algorithm. POAM is a policy gradient, multi-agent reinforcement learning approach to the NAHT problem, that enables adaptation to diverse teammate behaviors by learning representations of teammate behaviors. 
Empirical evaluation on tasks from the multi-agent particle environment and StarCraft II shows that POAM improves cooperative task returns compared to baseline approaches, and enables out-of-distribution generalization to unseen teammates.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93515"} +{"video_file": "q9dKv1AK6l_39025932.mp4", "openreview_id": "q9dKv1AK6l", "slideslive_id": 39025932, "venue": "nips2024", "title": "Small steps no more: Global convergence of stochastic gradient bandits for arbitrary learning rates", "status": "Poster", "keywords": "stochastic gradient bandit;arbitrary stepsize;global convergence", "tldr": "stochastic gradient bandit algorithm converges to a globally optimal policy almost surely using any constant learning rate", "abstract": "We provide a new understanding of the stochastic gradient bandit algorithm by showing that it converges to a globally optimal policy almost surely using \\emph{any} constant learning rate. This result demonstrates that the stochastic gradient algorithm continues to balance exploration and exploitation appropriately even in scenarios where standard smoothness and noise control assumptions break down. The proofs are based on novel findings about action sampling rates and the relationship between cumulative progress and noise, and extend the current understanding of how simple stochastic gradient methods behave in bandit settings.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/93513"} +{"video_file": "qAP6RyYIJc_39027296.mp4", "openreview_id": "qAP6RyYIJc", "slideslive_id": 39027296, "venue": "nips2024", "title": "Stealth edits to large language models", "status": "Poster", "keywords": "large language models;stealth attacks;memory editing", "tldr": "We reveal the theoretical foundations of techniques for editing large language models, and present new methods which can do so without requiring retraining.", "abstract": "We reveal the theoretical foundations of techniques for editing large language models, and present new methods which can do so without requiring retraining. Our theoretical insights show that a single metric (a measure of the intrinsic dimension of the model's features) can be used to assess a model's editability and reveals its previously unrecognised susceptibility to malicious stealth attacks. This metric is fundamental to predicting the success of a variety of editing approaches, and reveals new bridges between disparate families of editing methods. We collectively refer to these as stealth editing methods, because they directly update a model's weights to specify its response to specific known hallucinating prompts without affecting other model behaviour. By carefully applying our theoretical insights, we are able to introduce a new jet-pack network block which is optimised for highly selective model editing, uses only standard network operations, and can be inserted into existing networks. We also reveal the vulnerability of language models to stealth attacks: a small change to a model's weights which fixes its response to a single attacker-chosen prompt. Stealth attacks are computationally simple, do not require access to or knowledge of the model's training data, and therefore represent a potent yet previously unrecognised threat to redistributed foundation models. Extensive experimental results illustrate and support our methods and their theoretical underpinnings. 
Demos and source code are available at https://github.com/qinghua-zhou/stealth-edits.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93512"} +{"video_file": "qCpCy0EQAJ_39026921.mp4", "openreview_id": "qCpCy0EQAJ", "slideslive_id": 39026921, "venue": "nips2024", "title": "Dynamic Neural Regeneration: Enhancing Deep Learning Generalization on Small Datasets", "status": "Poster", "keywords": "Small datasets Generalization;Overfitting;Iterative training;Neurogenesis", "tldr": "A novel iterative learning paradigm with data-aware dynamic masking removes redundant connections, increases DNNs' capacity for learning, and improves generalization on small datasets.", "abstract": "The efficacy of deep learning techniques is contingent upon access to large volumes of data (labeled or unlabeled). However, in practical domains such as medical applications, data availability is often limited. This presents a significant challenge: How can we effectively train deep neural networks on relatively small datasets while improving generalization? Recent works have explored evolutionary or iterative training paradigms, which reinitialize a subset of parameters to enhance generalization performance for small datasets. However, these methods typically rely on randomly selected parameter subsets and maintain fixed masks throughout training, potentially leading to suboptimal outcomes. Inspired by neurogenesis in the brain, we propose a novel iterative training framework, Dynamic Neural Regeneration (DNR), that employs a data-aware dynamic masking scheme to eliminate redundant connections by estimating their significance. This approach increases the model's capacity for further learning through random weight reinitialization. Experimental results demonstrate that our approach outperforms existing methods in accuracy and robustness, highlighting its potential for real-world applications where data collection is challenging.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93510"} +{"video_file": "qDuqp1nZZ6_39027480.mp4", "openreview_id": "qDuqp1nZZ6", "slideslive_id": 39027480, "venue": "nips2024", "title": "Differentially Private Equivalence Testing for Continuous Distributions and Applications", "status": "Poster", "keywords": "Differential Privacy;Equivalence Tester;Continuous Distributions", "tldr": "The first paper to give a DP equivalence tester for continuous distrbiutions", "abstract": "We present the first algorithm for testing equivalence between two continuous distributions using differential privacy (DP). Our algorithm is a private version of the algorithm of Diakonikolas et al. The algorithm of Diakonikolas et al uses the data itself to repeatedly discretize the real line so that --- when the two distributions are far apart in ${\\cal A}_k$-norm --- one of the discretized distributions exhibits large $L_2$-norm difference; and upon repeated sampling such large gap would be detected. Designing its private analogue poses two difficulties. First, our DP algorithm can not resample new datapoints as a change to a single datapoint may lead to a very large change in the descretization of the real line. In contrast, the (sorted) index of the discretization point changes only by $1$ between neighboring instances, and so we use a novel algorithm that set the discretization points using random Bernoulli noise, resulting in only a few buckets being affected under the right coupling. 
Second, our algorithm, which doesn't resample data, requires we also revisit the utility analysis of the original algorithm and prove its correctness w.r.t. the original sorted data; a problem we tackle using sampling a subset of Poisson-drawn size from each discretized bin. Lastly, since any distribution can be reduced to a continuous distribution, our algorithm is successfully carried to multiple other families of distributions and thus has numerous applications.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/93508"}
{"video_file": "qGiZQb1Khm_39028221.mp4", "openreview_id": "qGiZQb1Khm", "slideslive_id": 39028221, "venue": "nips2024", "title": "Watermarking Makes Language Models Radioactive", "status": "Spotlight", "keywords": "Watermarking;Large Language Models;Membership Inference", "tldr": "LLM watermarking, intended for generated text detection, has the secondary effect of revealing when synthetic data are used to fine-tune another model.", "abstract": "We investigate the radioactivity of text generated by large language models (LLM), \\ie whether it is possible to detect that such synthetic input was used to train a subsequent LLM. Current methods like membership inference or active IP protection either work only in settings where the suspected text is known or do not provide reliable statistical guarantees. We discover that, on the contrary, it is possible to reliably determine if a language model was trained on synthetic data if that data is output by a watermarked LLM. Our new methods, specialized for radioactivity, detects with a provable confidence weak residuals of the watermark signal in the fine-tuned LLM. We link the radioactivity contamination level to the following properties: the watermark robustness, its proportion in the training set, and the fine-tuning process. For instance, if the suspect model is open-weight, we demonstrate that training on watermarked instructions can be detected with high confidence ($p$-value $< 10^{-5}$) even when as little as 5% of training text is watermarked.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93506"}
{"video_file": "qLnXPVvwLx_39026243.mp4", "openreview_id": "qLnXPVvwLx", "slideslive_id": 39026243, "venue": "nips2024", "title": "Prism: A Framework for Decoupling and Assessing the Capabilities of VLMs", "status": "Poster", "keywords": "Multi-modality;VLM;evaluation", "tldr": "This paper presents Prism, a framework that can be used for: 1) analyzing the perception and reasoning capabilities of VLMs; 2) solving general visual questions efficiently", "abstract": "Vision Language Models (VLMs) demonstrate remarkable proficiency in addressing a wide array of visual questions, which requires strong perception and reasoning faculties. Assessing these two competencies independently is crucial for model refinement, despite the inherent difficulty due to the intertwined nature of seeing and reasoning in existing VLMs. To tackle this issue, we present Prism, an innovative framework designed to disentangle the perception and reasoning processes involved in visual question solving. Prism comprises two distinct stages: a perception stage that utilizes a VLM to extract and articulate visual information in textual form, and a reasoning stage that formulates responses based on the extracted visual information using a Large Language Model (LLM). 
This modular design enables the systematic comparison and assessment of both proprietary and open-source VLM for their perception and reasoning strengths. Our analytical framework provides several valuable insights, underscoring Prism's potential as a cost-effective solution for vision-language tasks. By combining a streamlined VLM focused on perception with a powerful LLM tailored for reasoning, Prism achieves superior results in general vision-language tasks while substantially cutting down on training and operational expenses. Quantitative evaluations show that Prism, when configured with a vanilla 2B LLaVA and freely accessible GPT-3.5, delivers performance on par with VLMs $10 \\times$ larger on the rigorous multimodal benchmark MMStar.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/93501"} +{"video_file": "qNXRXUC90b_39025779.mp4", "openreview_id": "qNXRXUC90b", "slideslive_id": 39025779, "venue": "nips2024", "title": "Uncertainty-aware Fine-tuning of Segmentation Foundation Models", "status": "Poster", "keywords": "Segmentation foundation model", "tldr": "We introduce the Segmentation with Uncertainty Model (SUM), which enhances the accuracy of segmentation foundation models by incorporating an uncertainty-aware training loss and prompt sampling based on the estimated uncertainty of pseudo-labels.", "abstract": "The Segment Anything Model (SAM) is a large-scale foundation model that has revolutionized segmentation methodology. Despite its impressive generalization ability, the segmentation accuracy of SAM on images with intricate structures is often unsatisfactory. Recent works have proposed lightweight fine-tuning using high-quality annotated data to improve accuracy on such images. However, here we provide extensive empirical evidence that this strategy leads to forgetting how to \"segment anything\": these models lose the original generalization abilities of SAM, in the sense that they perform worse for segmentation tasks not represented in the annotated fine-tuning set. To improve performance without forgetting, we introduce a novel framework that combines high-quality annotated data with a large unlabeled dataset. The framework relies on two methodological innovations. First, we quantify the uncertainty in the SAM pseudo labels associated with the unlabeled data and leverage it to perform uncertainty-aware fine-tuning. Second, we encode the type of segmentation task associated with each training example using a\ntask prompt\nto reduce ambiguity. We evaluated the proposed Segmentation with Uncertainty Model (SUM) on a diverse test set consisting of 14 public benchmarks, where it achieves state-of-the-art results. Notably, our method consistently surpasses SAM by 3-6 points in mean IoU and 4-7 in mean boundary IoU across point-prompt interactive segmentation rounds. Code is available at https://github.com/Kangningthu/SUM", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93500"} +{"video_file": "qOSFiJdVkZ_39027208.mp4", "openreview_id": "qOSFiJdVkZ", "slideslive_id": 39027208, "venue": "nips2024", "title": "Continual learning with the neural tangent ensemble", "status": "Spotlight", "keywords": "continual learning;catastrophic forgetting;Bayesian ensembles;Boosting and Ensemble Methods;mixture of experts", "tldr": "All network classifiers are ensembles; each edge provides a classifier. 
If you weigh them by their posterior probability you (almost) get SGD.", "abstract": "A natural strategy for continual learning is to weigh a Bayesian ensemble of fixed functions. This suggests that if a (single) neural network could be interpreted as an ensemble, one could design effective algorithms that learn without forgetting. To realize this possibility, we observe that a neural network classifier with N parameters can be interpreted as a weighted ensemble of N classifiers, and that in the lazy regime limit these classifiers are fixed throughout learning. We call these classifiers the neural tangent experts and show they output valid probability distributions over the labels. We then derive the likelihood and posterior probability of each expert given past data. Surprisingly, the posterior updates for these experts are equivalent to a scaled and projected form of stochastic gradient descent (SGD) over the network weights. Away from the lazy regime, networks can be seen as ensembles of adaptive experts which improve over time. These results offer a new interpretation of neural networks as Bayesian ensembles of experts, providing a principled framework for understanding and mitigating catastrophic forgetting in continual learning settings.", "primary_area": "online_learning", "site": "https://neurips.cc/virtual/2024/poster/93499"} +{"video_file": "qTypwXvNJa_39028251.mp4", "openreview_id": "qTypwXvNJa", "slideslive_id": 39028251, "venue": "nips2024", "title": "Geodesic Optimization for Predictive Shift Adaptation on EEG data", "status": "Spotlight", "keywords": "EEG;brain age;Neurosciences;Riemannian geometry;Domain Adaptation;Mixed-effects models", "tldr": "This paper proposes Geodesic Optimization for Predictive Shift Adaptation to address multi-source domain adaptation where source domains have distinct y distributions in the context of brain age prediction from EEG covariance matrices.", "abstract": "Electroencephalography (EEG) data is often collected from diverse contexts involving different populations and EEG devices. This variability can induce distribution shifts in the data $X$ and in the biomedical variables of interest $y$, thus limiting the application of supervised machine learning (ML) algorithms. While domain adaptation (DA) methods have been developed to mitigate the impact of these shifts, such methods struggle when distribution shifts occur simultaneously in $X$ and $y$. As state-of-the-art ML models for EEG represent the data by spatial covariance matrices, which lie on the Riemannian manifold of Symmetric Positive Definite (SPD) matrices, it is appealing to study DA techniques operating on the SPD manifold. This paper proposes a novel method termed Geodesic Optimization for Predictive Shift Adaptation (GOPSA) to address test-time multi-source DA for situations in which source domains have distinct $y$ distributions. GOPSA exploits the geodesic structure of the Riemannian manifold to jointly learn a domain-specific re-centering operator representing site-specific intercepts and the regression model. We performed empirical benchmarks on the cross-site generalization of age-prediction models with resting-state EEG data from a large multi-national dataset (HarMNqEEG), which included $14$ recording sites and more than $1500$ human participants. 
Compared to state-of-the-art methods, our results showed that GOPSA achieved significantly higher performance on three regression metrics ($R^2$, MAE, and Spearman's $\\rho$) for several source-target site combinations, highlighting its effectiveness in tackling multi-source DA with predictive shifts in EEG data analysis. Our method has the potential to combine the advantages of mixed-effects modeling with machine learning for biomedical applications of EEG, such as multicenter clinical trials.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/93495"} +{"video_file": "qamfjyhPeg_39028333.mp4", "openreview_id": "qamfjyhPeg", "slideslive_id": 39028333, "venue": "nips2024", "title": "Protected Test-Time Adaptation via Online Entropy Matching: A Betting Approach", "status": "Poster", "keywords": "Test Time Domain Adaptation;Online Learning;Testing by Betting;Martingale;Distribution Shift Detection", "tldr": "A novel self-training approach for adapting ML models to test-time distribution shifts by monitoring the model's output and aligning it with the source domain's statistics.", "abstract": "We present a novel approach for test-time adaptation via online self-training, consisting of two components. First, we introduce a statistical framework that detects distribution shifts in the classifier's entropy values obtained on a stream of unlabeled samples. Second, we devise an online adaptation mechanism that utilizes the evidence of distribution shifts captured by the detection tool to dynamically update the classifier's parameters. The resulting adaptation process drives the distribution of test entropy values obtained from the self-trained classifier to match those of the source domain, building invariance to distribution shifts. This approach departs from the conventional self-training method, which focuses on minimizing the classifier's entropy. Our approach combines concepts in betting martingales and online learning to form a detection tool capable of quickly reacting to distribution shifts. We then reveal a tight relation between our adaptation scheme and optimal transport, which forms the basis of our novel self-supervised loss. Experimental results demonstrate that our approach improves test-time accuracy under distribution shifts while maintaining accuracy and calibration in their absence, outperforming leading entropy minimization methods across various scenarios.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93486"} +{"video_file": "qbvt3ocQxB_39028744.mp4", "openreview_id": "qbvt3ocQxB", "slideslive_id": 39028744, "venue": "nips2024", "title": "IODA: Instance-Guided One-shot Domain Adaptation for Super-Resolution", "status": "Poster", "keywords": "one-shot domain adaptation;super resolution;domain adaptation", "tldr": "We propose an Instance-guided One-shot Domain Adaptation for Super-Resolution (IODA) to enable efficient domain adaptation with only a single unlabeled target domain LR image.", "abstract": "The domain adaptation method effectively mitigates the negative impact of domain gaps on the performance of super-resolution (SR) networks through the guidance of numerous target domain low-resolution (LR) images. However, in real-world scenarios, the availability of target domain LR images is often limited, sometimes even to just one, which inevitably impairs the domain adaptation performance of SR networks. 
We propose Instance-guided One-shot Domain Adaptation for Super-Resolution (IODA) to enable efficient domain adaptation with only a single unlabeled target domain LR image. To address the limited diversity of the target domain distribution caused by a single target domain LR image, we propose an instance-guided target domain distribution expansion strategy. This strategy effectively expands the diversity of the target domain distribution by generating instance-specific features focused on different instances within the image. For SR tasks emphasizing texture details, we propose an image-guided domain adaptation method. Compared to existing methods that use text representation for domain difference, this method utilizes pixel-level representation with higher granularity, enabling efficient domain adaptation guidance for SR networks. Finally, we validate the effectiveness of IODA on multiple datasets and various network architectures, achieving satisfactory one-shot domain adaptation for SR networks. Our code is available at https://github.com/ZaizuoTang/IODA.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93485"} +{"video_file": "qd8blc0o0F_39026250.mp4", "openreview_id": "qd8blc0o0F", "slideslive_id": 39026250, "venue": "nips2024", "title": "GRANOLA: Adaptive Normalization for Graph Neural Networks", "status": "Poster", "keywords": "Graph Neural Networks;Normalization Layer in GNNs", "tldr": "We study the effect of normalization layers in GNNs and propose a novel layer, called GRANOLA, that is expressive and graph adaptive.", "abstract": "Despite the widespread adoption of Graph Neural Networks (GNNs), these models often incorporate off-the-shelf normalization layers like BatchNorm or InstanceNorm, which were not originally designed for GNNs. Consequently, these normalization layers may not effectively capture the unique characteristics of graph-structured data, potentially even weakening the expressive power of the overall architecture. While existing graph-specific normalization layers have been proposed, they often struggle to offer substantial and consistent benefits. In this paper, we propose GRANOLA, a novel graph-adaptive normalization layer. Unlike existing normalization layers, GRANOLA normalizes node features by adapting to the specific characteristics of the graph, particularly by generating expressive representations of its nodes, obtained by leveraging the propagation of Random Node Features (RNF) in the graph. We provide theoretical results that support our design choices as well as an extensive empirical evaluation demonstrating the superior performance of GRANOLA over existing normalization techniques. Furthermore, GRANOLA emerges as the top-performing method among all baselines in the same time complexity class of Message Passing Neural Networks (MPNNs).", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/93483"} +{"video_file": "qdV1vp1AtL_39027150.mp4", "openreview_id": "qdV1vp1AtL", "slideslive_id": 39027150, "venue": "nips2024", "title": "Universal Sample Coding", "status": "Poster", "keywords": "source coding;compression;sampling;channel simulation;federated learning;generative AI", "tldr": "Communication of multiple samples from an unknown probability distribution and its applications to federated learning and generative inference.", "abstract": "In this work, we study the problem of communicating multiple samples from an unknown probability distribution using as few bits as possible. 
This is a generalization of the channel simulation problem, which has recently found applications and achieved state of the art results in realistic image compression, neural network compression, and communication-efficient federated learning. In this problem, the transmitter wants the receiver to generate multiple independent and identically distributed (i.i.d.) samples from a target distribution $P$, while the transmitter and the receiver have access to independent samples from a reference distribution $Q$. The core idea is to employ channel simulation in multiple rounds while updating the reference distribution $Q$ after each round in order to reduce the KL-divergence between $P$ and $Q$, thereby reducing the communication cost in subsequent rounds. We derive a lower bound on the expected communication cost and construct a practical algorithm that achieves the lower bound up to a multiplicative constant. We then employ this algorithm in communication-efficient federated learning, in which model updates correspond to samples from a distribution, and achieve a 37% reduction in the communication load. To further highlight the potential of sample communication for generative models, we show that the number of bits needed to communicate samples from a large language model can be reduced by up to 16 times, compared to entropy-based data compression.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/93482"}
{"video_file": "qf1ncViBr5_39026737.mp4", "openreview_id": "qf1ncViBr5", "slideslive_id": 39026737, "venue": "nips2024", "title": "einspace: Searching for Neural Architectures from Fundamental Operations", "status": "Poster", "keywords": "neural architecture search;nas;deep learning architectures;context-free grammars;cfg;pcfg;neural networks;search space", "tldr": "We introduce an expressive NAS search space, containing diverse SOTA architectures. When searching in this space we find new SOTA and improvements on existing architectures.", "abstract": "Neural architecture search (NAS) finds high performing networks for a given task. Yet the results of NAS are fairly prosaic; they did not e.g. create a shift from convolutional structures to transformers. This is not least because the search spaces in NAS often aren\u2019t diverse enough to include such transformations a priori. Instead, for NAS to provide greater potential for fundamental design shifts, we need a novel expressive search space design which is built from more fundamental operations. To this end, we introduce einspace, a search space based on a parameterised probabilistic context-free grammar. Our space is versatile, supporting architectures of various sizes and complexities, while also containing diverse network operations which allow it to model convolutions, attention components and more. It contains many existing competitive architectures, and provides flexibility for discovering new ones. Using this search space, we perform experiments to find novel architectures as well as improvements on existing ones on the diverse Unseen NAS datasets. We show that competitive architectures can be obtained by searching from scratch, and we consistently find large improvements when initialising the search with strong baselines. 
We believe that this work is an important advancement towards a transformative NAS paradigm where search space expressivity and strategic search initialisation play key roles.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93480"} +{"video_file": "qlH21Ig1IC_39024866.mp4", "openreview_id": "qlH21Ig1IC", "slideslive_id": 39024866, "venue": "nips2024", "title": "Adaptive Proximal Gradient Method for Convex Optimization", "status": "Poster", "keywords": "adaptive methods;gradient descent;proximal gradient method", "tldr": "Adaptive versions of GD and ProxGD with large steps.", "abstract": "In this paper, we explore two fundamental first-order algorithms in convex optimization, namely, gradient descent (GD) and proximal gradient method (ProxGD). Our focus is on making these algorithms entirely adaptive by leveraging local curvature information of smooth functions. We propose adaptive versions of GD and ProxGD that are based on observed gradient differences and, thus, have no added computational costs. Moreover, we prove convergence of our methods assuming only local Lipschitzness of the gradient. In addition, the proposed versions allow for even larger stepsizes than those initially suggested in [MM20].", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93476"} +{"video_file": "qpeAtfUWOQ_39026112.mp4", "openreview_id": "qpeAtfUWOQ", "slideslive_id": 39026112, "venue": "nips2024", "title": "Variational Multi-scale Representation for Estimating Uncertainty in 3D Gaussian Splatting", "status": "Poster", "keywords": "Neural Rendering;Uncertainty Quantification", "tldr": "We quantify the uncertainty in 3D Gaussian Splatting by deviating Gaussians to construction model space samples and learn with variational inference. .", "abstract": "Recently, 3D Gaussian Splatting (3DGS) has become popular in reconstructing dense 3D representations of appearance and geometry. However, the learning pipeline in 3DGS inherently lacks the ability to quantify uncertainty, which is an important factor in applications like robotics mapping and navigation. In this paper, we propose an uncertainty estimation method built upon the Bayesian inference framework. Specifically, we propose a method to build variational multi-scale 3D Gaussians, where we leverage explicit scale information in 3DGS parameters to construct diversified parameter space samples. We develop an offset table technique to draw local multi-scale samples efficiently by offsetting selected attributes and sharing other base attributes. Then, the offset table is learned by variational inference with multi-scale prior. The learned offset posterior can quantify the uncertainty of each individual Gaussian component, and be used in the forward pass to infer the predictive uncertainty. Extensive experimental results on various benchmark datasets show that the proposed method provides well-aligned calibration performance on estimated uncertainty and better rendering quality compared with the previous methods that enable uncertainty quantification with view synthesis. 
Besides, by leveraging the model parameter uncertainty estimated by our method, we can remove noisy Gaussians automatically, thereby obtaining a high-fidelity part of the reconstructed scene, which is of great help in improving the visual quality.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93472"} +{"video_file": "qrfp4eeZ47_39028791.mp4", "openreview_id": "qrfp4eeZ47", "slideslive_id": 39028791, "venue": "nips2024", "title": "FactorizePhys: Matrix Factorization for Multidimensional Attention in Remote Physiological Sensing", "status": "Poster", "keywords": "Time-series estimation;remote photo-plethysmography;spatial-temporal attention;non-negative matrix factorization", "tldr": "This work introduces the Factorized Self-Attention Module, which computes multidimensional attention through nonnegative matrix factorization, and integrate it into FactorizePhys, a proposed 3D-CNN model for robust rPPG signal extraction.", "abstract": "Remote photoplethysmography (rPPG) enables non-invasive extraction of blood volume pulse signals through imaging, transforming spatial-temporal data into time series signals. Advances in end-to-end rPPG approaches have focused on this transformation where attention mechanisms are crucial for feature extraction. However, existing methods compute attention disjointly across spatial, temporal, and channel dimensions. Here, we propose the Factorized Self-Attention Module (FSAM), which jointly computes multidimensional attention from voxel embeddings using nonnegative matrix factorization. To demonstrate FSAM's effectiveness, we developed FactorizePhys, an end-to-end 3D-CNN architecture for estimating blood volume pulse signals from raw video frames. Our approach adeptly factorizes voxel embeddings to achieve comprehensive spatial, temporal, and channel attention, enhancing performance of generic signal extraction tasks. Furthermore, we deploy FSAM within an existing 2D-CNN-based rPPG architecture to illustrate its versatility. FSAM and FactorizePhys are thoroughly evaluated against state-of-the-art rPPG methods, each representing different types of architecture and attention mechanism. We perform ablation studies to investigate the architectural decisions and hyperparameters of FSAM. Experiments on four publicly available datasets and intuitive visualization of learned spatial-temporal features substantiate the effectiveness of FSAM and enhanced cross-dataset generalization in estimating rPPG signals, suggesting its broader potential as a multidimensional attention mechanism. The code is accessible at https://github.com/PhysiologicAILab/FactorizePhys.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93470"} +{"video_file": "qwl3EiDi9r_39025858.mp4", "openreview_id": "qwl3EiDi9r", "slideslive_id": 39025858, "venue": "nips2024", "title": "Integrating GNN and Neural ODEs for Estimating Non-Reciprocal Two-Body Interactions in Mixed-Species Collective Motion", "status": "Poster", "keywords": "Deep Learning;Neural Differential Equations;Graph Neural Networks;System Identification;Active Matter;Collective Motion;Non-Reciprocal", "tldr": "We present a framework to estimate non-reciprocal two-body interactions from trajectories in mixed-species collective motion. 
Our method accurately replicates interactions and collective behaviors, demonstrated through numerical experiments.", "abstract": "Analyzing the motion of multiple biological agents, be it cells or individual animals, is pivotal for the understanding of complex collective behaviors. With the advent of advanced microscopy, detailed images of complex tissue formations involving multiple cell types have become more accessible in recent years. However, deciphering the underlying rules that govern cell movements is far from trivial. Here, we present a novel deep learning framework for estimating the underlying equations of motion from observed trajectories, a pivotal step in decoding such complex dynamics. Our framework integrates graph neural networks with neural differential equations, enabling effective prediction of two-body interactions based on the states of the interacting entities. We demonstrate the efficacy of our approach through two numerical experiments. First, we used simulated data from a toy model to tune the hyperparameters. Based on the obtained hyperparameters, we then applied this approach to a more complex model with non-reciprocal forces that mimic the collective dynamics of the cells of slime molds. Our results show that the proposed method can accurately estimate the functional forms of two-body interactions -- even when they are nonreciprocal -- thereby precisely replicating both individual and collective behaviors within these systems.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/93465"} +{"video_file": "r0eSCJ6qsL_39025204.mp4", "openreview_id": "r0eSCJ6qsL", "slideslive_id": 39025204, "venue": "nips2024", "title": "AsCAN: Asymmetric Convolution-Attention Networks for Efficient Recognition and Generation", "status": "Poster", "keywords": "Text-to-Image Generation;Hybrid Architectures", "tldr": "We propose a hybrid architecture with asymmetric distribution of convolution and attention blocks in different network stages to achieve superior latency-vs-performance trade-off in image recognition and generation tasks.", "abstract": "Neural network architecture design requires making many crucial decisions. The common desiderata is that similar decisions, with little modifications, can be reused in a variety of tasks and applications. To satisfy that, architectures must provide promising latency and performance trade-offs, support a variety of tasks, scale efficiently with respect to the amounts of data and compute, leverage available data from other tasks, and efficiently support various hardware. To this end, we introduce AsCAN---a hybrid architecture, combining both convolutional and transformer blocks. We revisit the key design principles of hybrid architectures and propose a simple and effective \\emph{asymmetric} architecture, where the distribution of convolutional and transformer blocks is \\emph{asymmetric}, containing more convolutional blocks in the earlier stages, followed by more transformer blocks in later stages. AsCAN supports a variety of tasks: recognition, segmentation, class-conditional image generation, and features a superior trade-off between performance and latency. We then scale the same architecture to solve a large-scale text-to-image task and show state-of-the-art performance compared to the most recent public and commercial models. 
Notably, without performing any optimization of inference time our model shows faster execution, even when compared to works that do such optimization, highlighting the advantages and the value of our approach.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/93461"} +{"video_file": "r5nev2SHtJ_39025982.mp4", "openreview_id": "r5nev2SHtJ", "slideslive_id": 39025982, "venue": "nips2024", "title": "From Causal to Concept-Based Representation Learning", "status": "Poster", "keywords": "concept learning;causal representation learning;interpretable representation learning", "tldr": "We formally study how to extract concepts from data, by utilizing ideas from the causal representation learning and interpretability literatures.", "abstract": "To build intelligent machine learning systems, modern representation learning attempts to recover latent generative factors from data, such as in causal representation learning. A key question in this growing field is to provide rigorous conditions under which latent factors can be identified and thus, potentially learned. Motivated by extensive empirical literature on linear representations and concept learning, we propose to relax causal notions with a geometric notion of concepts. We formally define a notion of concepts and show rigorously that they can be provably recovered from diverse data. Instead of imposing assumptions on the \"true\" generative latent space, we assume that concepts can be represented linearly in this latent space. The tradeoff is that instead of identifying the \"true\" generative factors, we identify a subset of desired human-interpretable concepts that are relevant for a given application. Experiments on synthetic data, multimodal CLIP models and large language models supplement our results and show the utility of our approach. In this way, we provide a foundation for moving from causal representations to interpretable, concept-based representations by bringing together ideas from these two neighboring disciplines.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/93459"} +{"video_file": "r6V7EjANUK_39024570.mp4", "openreview_id": "r6V7EjANUK", "slideslive_id": 39024570, "venue": "nips2024", "title": "GSDF: 3DGS Meets SDF for Improved Neural Rendering and Reconstruction", "status": "Poster", "keywords": "Neural Rendering; 3D Reconstruction;3D Gaussian Splatting; Signed Distance Field", "tldr": "We propose GSDF, a dual-branch system that enhances rendering and reconstruction at the same time, leveraging the mutual geometry regularization and guidance between Gaussain primitives and neural surface.", "abstract": "Representing 3D scenes from multiview images remains a core challenge in computer vision and graphics, requiring both reliable rendering and reconstruction, which often conflicts due to the mismatched prioritization of image quality over precise underlying scene geometry. Although both neural implicit surfaces and explicit Gaussian primitives have advanced with neural rendering techniques, current methods impose strict constraints on density fields or primitive shapes, which enhances the affinity for geometric reconstruction at the sacrifice of rendering quality. To address this dilemma, we introduce GSDF, a dual-branch architecture combining 3D Gaussian Splatting (3DGS) and neural Signed Distance Fields (SDF). 
Our approach leverages mutual guidance and joint supervision during the training process to mutually enhance reconstruction and rendering. Specifically, our method guides the Gaussian primitives to locate near potential surfaces and accelerates the SDF convergence. This implicit mutual guidance ensures robustness and accuracy in both synthetic and real-world scenarios. Experimental results demonstrate that our method boosts the SDF optimization process to reconstruct more detailed geometry, while reducing floaters and blurry edge artifacts in rendering by aligning Gaussian primitives with the underlying geometry.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93457"} +{"video_file": "rCnZrFikX6_39027964.mp4", "openreview_id": "rCnZrFikX6", "slideslive_id": 39027964, "venue": "nips2024", "title": "Neural Persistence Dynamics", "status": "Poster", "keywords": "Persistent Homology;Multi-Time Attention;Latent ODE;Collective Behavior;Physical Sciences", "tldr": "We consider the problem of learning the dynamics in the topology of time-evolving point clouds, observed in systems that exhibit collective behavior such as swarms of birds, insects, or fish.", "abstract": "We consider the problem of learning the dynamics in the topology of time-evolving point clouds, the prevalent spatiotemporal model for systems exhibiting collective behavior, such as swarms of insects and birds or particles in physics. In such systems, patterns emerge from (local) interactions among self-propelled entities. While several well-understood governing equations for motion and interaction exist, they are notoriously difficult to fit to data, as most prior work requires knowledge about individual motion trajectories, i.e., a requirement that is challenging to satisfy with an increasing number of entities. To evade such confounding factors, we investigate collective behavior from a topological perspective, but instead of summarizing entire observation sequences (as done previously), we propose learning a latent dynamical model from topological features per time point. The latter is then used to formulate a downstream regression task to predict the parametrization of some a priori specified governing equation. We implement this idea based on a latent ODE learned from vectorized (static) persistence diagrams and show that a combination of recent stability results for persistent homology justifies this modeling choice. Various (ablation) experiments not only demonstrate the relevance of each model component but provide compelling empirical evidence that our proposed model -- Neural Persistence Dynamics -- substantially outperforms the state-of-the-art across a diverse set of parameter regression tasks.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/93451"} +{"video_file": "rF1YRtZfoJ_39027891.mp4", "openreview_id": "rF1YRtZfoJ", "slideslive_id": 39027891, "venue": "nips2024", "title": "CLAP4CLIP: Continual Learning with Probabilistic Finetuning for Vision-Language Models", "status": "Poster", "keywords": "Continual Learning;Vision-language models;Finetuning", "tldr": "We propose a probabilistic finetuning method for continual learning with pre-trained CLIP model to enable better in-domain knowledge acquisition, generalization, and model calibration.", "abstract": "Continual learning (CL) aims to help deep neural networks to learn new knowledge while retaining what has been learned. 
Owing to their powerful generalizability, pre-trained vision-language models such as Contrastive Language-Image Pre-training (CLIP) have lately gained traction as practical CL candidates. However, the domain mismatch between the pre-training and the downstream CL tasks calls for finetuning of the CLIP on the latter. The deterministic nature of the existing finetuning methods makes them overlook the many possible interactions across the modalities and deems them unsafe for high-risk tasks requiring reliable uncertainty estimation. To address these, our work proposes Continual LeArning with Probabilistic finetuning (CLAP) - a probabilistic modeling framework over visual-guided text features per task, thus providing more calibrated CL finetuning. Unlike recent data-hungry anti-forgetting CL techniques, CLAP alleviates forgetting by exploiting the rich pre-trained knowledge of CLIP for weight initialization and distribution regularization of task-specific parameters. Cooperating with the diverse range of existing prompting methods, CLAP can surpass the predominant deterministic finetuning approaches for CL with CLIP. We conclude with out-of-the-box applications of superior uncertainty estimation abilities of CLAP including novel data detection and exemplar selection within the existing CL setups. Our code is available at https://github.com/srvCodes/clap4clip.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93449"} +{"video_file": "rI7oZj1WMc_39026744.mp4", "openreview_id": "rI7oZj1WMc", "slideslive_id": 39026744, "venue": "nips2024", "title": "Learning Successor Features the Simple Way", "status": "Poster", "keywords": "deep reinforcement learning;representation learning;continual reinforcement learning;successor feature;successor representation", "tldr": "A simple approach for learning Successor Features from pixel-level observations for Continual Reinforcement Learning", "abstract": "In Deep Reinforcement Learning (RL), it is a challenge to learn representations that do not exhibit catastrophic forgetting or interference in non-stationary environments. Successor Features (SFs) offer a potential solution to this challenge. However, canonical techniques for learning SFs from pixel-level observations often lead to representation collapse, wherein representations degenerate and fail to capture meaningful variations in the data. More recent methods for learning SFs can avoid representation collapse, but they often involve complex losses and multiple learning phases, reducing their efficiency. We introduce a novel, simple method for learning SFs directly from pixels. Our approach uses a combination of a Temporal-difference (TD) loss and a reward prediction loss, which together capture the basic mathematical definition of SFs. We show that our approach matches or outperforms existing SF learning techniques in both 2D (Minigrid) and 3D (Miniworld) mazes, for both single and continual learning scenarios. As well, our technique is efficient, and can reach higher levels of performance in less time than other approaches. 
Our work provides a new, streamlined technique for learning SFs directly from pixel observations, with no pretraining required.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93447"} +{"video_file": "rI80PHlnFm_39024578.mp4", "openreview_id": "rI80PHlnFm", "slideslive_id": 39024578, "venue": "nips2024", "title": "Model Based Inference of Synaptic Plasticity Rules", "status": "Poster", "keywords": "computational neuroscience;plasticity rules;synaptic plasticity;biologically plausible learning", "tldr": "We developed a computational method to infer complex synaptic plasticity rules from neural and behavioral data, revealing new insights like active forgetting in reward learning in flies.", "abstract": "Inferring the synaptic plasticity rules that govern learning in the brain is a key challenge in neuroscience. We present a novel computational method to infer these rules from experimental data, applicable to both neural and behavioral data. Our approach approximates plasticity rules using a parameterized function, employing either truncated Taylor series for theoretical interpretability or multilayer perceptrons. These plasticity parameters are optimized via gradient descent over entire trajectories to align closely with observed neural activity or behavioral learning dynamics. This method can uncover complex rules that induce long nonlinear time dependencies, particularly involving factors like postsynaptic activity and current synaptic weights. We validate our approach through simulations, successfully recovering established rules such as Oja's, as well as more intricate plasticity rules with reward-modulated terms. We assess the robustness of our technique to noise and apply it to behavioral data from \\textit{Drosophila} in a probabilistic reward-learning experiment. Notably, our findings reveal an active forgetting component in reward learning in flies, improving predictive accuracy over previous models. This modeling framework offers a promising new avenue for elucidating the computational principles of synaptic plasticity and learning in the brain.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/93446"} +{"video_file": "rIOTceoNc8_39026393.mp4", "openreview_id": "rIOTceoNc8", "slideslive_id": 39026393, "venue": "nips2024", "title": "Graph Coarsening with Message-Passing Guarantees", "status": "Poster", "keywords": "graph coarsening;message passing;graph neural network", "tldr": "We propose a new message-passing paradigm specific to coarsened graphs, with theoretical guarantees from the original graph.", "abstract": "Graph coarsening aims to reduce the size of a large graph while preserving some of its key properties, which has been used in many applications to reduce computational load and memory footprint. For instance, in graph machine learning, training Graph Neural Networks (GNNs) on coarsened graphs leads to drastic savings in time and memory. However, GNNs rely on the Message-Passing (MP) paradigm, and classical spectral preservation guarantees for graph coarsening do not directly lead to theoretical guarantees when performing naive message-passing on the coarsened graph.\nIn this work, we propose a new message-passing operation specific to coarsened graphs, which exhibit theoretical guarantees on the preservation of the propagated signal. 
Interestingly, and in a sharp departure from previous proposals, this operation on coarsened graphs is oriented, even when the original graph is undirected. We conduct node classification tasks on synthetic and real data and observe improved results compared to performing naive message-passing on the coarsened graph.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/93445"} +{"video_file": "rIOl7KbSkv_39025872.mp4", "openreview_id": "rIOl7KbSkv", "slideslive_id": 39025872, "venue": "nips2024", "title": "No Free Lunch in LLM Watermarking: Trade-offs in Watermarking Design Choices", "status": "Poster", "keywords": "watermarking;large language models;security;privacy", "tldr": "We reveal and evaluate new attack vectors that exploit the common design choices of LLM watermarks.", "abstract": "Advances in generative models have made it possible for AI-generated text, code, and images to mirror human-generated content in many applications. Watermarking, a technique that aims to embed information in the output of a model to verify its source, is useful for mitigating the misuse of such AI-generated content. However, we show that common design choices in LLM watermarking schemes make the resulting systems surprisingly susceptible to attack---leading to fundamental trade-offs in robustness, utility, and usability. To navigate these trade-offs, we rigorously study a set of simple yet effective attacks on common watermarking systems, and propose guidelines and defenses for LLM watermarking in practice.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/93444"} +{"video_file": "rM24UUgZg8_39024998.mp4", "openreview_id": "rM24UUgZg8", "slideslive_id": 39024998, "venue": "nips2024", "title": "Activating Self-Attention for Multi-Scene Absolute Pose Regression", "status": "Poster", "keywords": "Multi-Scene Absolute Pose Regression;Transformer;Attention Collapse", "tldr": "Unleashing the potential of self-attention by rectifying the distortion of query-key embedding space", "abstract": "Multi-scene absolute pose regression addresses the demand for fast and memory-efficient camera pose estimation across various real-world environments. Nowadays, transformer-based model has been devised to regress the camera pose directly in multi-scenes. Despite its potential, transformer encoders are underutilized due to the collapsed self-attention map, having low representation capacity. This work highlights the problem and investigates it from a new perspective: distortion of query-key embedding space. Based on the statistical analysis, we reveal that queries and keys are mapped in completely different spaces while only a few keys are blended into the query region. This leads to the collapse of the self-attention map as all queries are considered similar to those few keys. Therefore, we propose simple but effective solutions to activate self-attention. Concretely, we present an auxiliary loss that aligns queries and keys, preventing the distortion of query-key space and encouraging the model to find global relations by self-attention. In addition, the fixed sinusoidal positional encoding is adopted instead of undertrained learnable one to reflect appropriate positional clues into the inputs of self-attention. 
As a result, our approach resolves the aforementioned problem effectively, thus outperforming existing methods in both outdoor and indoor scenes.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93441"} +{"video_file": "rM3FFH1mqk_39025620.mp4", "openreview_id": "rM3FFH1mqk", "slideslive_id": 39025620, "venue": "nips2024", "title": "Semidefinite Relaxations of the Gromov-Wasserstein Distance", "status": "Poster", "keywords": "optimal transport;gromov-wasserstein;semidefinite programming;optimization", "tldr": "We propose a semi-definite programming (SDP) relaxation of the GW distance.", "abstract": "The Gromov-Wasserstein (GW) distance is an extension of the optimal transport problem that allows one to match objects between incomparable spaces. At its core, the GW distance is specified as the solution of a non-convex quadratic program and is not known to be tractable to solve. In particular, existing solvers for the GW distance are only able to find locally optimal solutions. In this work, we propose a semi-definite programming (SDP) relaxation of the GW distance. The relaxation can be viewed as the Lagrangian dual of the GW distance augmented with constraints that relate to the linear and quadratic terms of transportation plans. In particular, our relaxation provides a tractable (polynomial-time) algorithm to compute globally optimal transportation plans (in some instances) together with an accompanying proof of global optimality. Our numerical experiments suggest that the proposed relaxation is strong in that it frequently computes the globally optimal solution. Our Python implementation is available at https://github.com/tbng/gwsdp.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93440"} +{"video_file": "rPgc5brxmT_39026908.mp4", "openreview_id": "rPgc5brxmT", "slideslive_id": 39026908, "venue": "nips2024", "title": "Interaction-Force Transport Gradient Flows", "status": "Poster", "keywords": "kernel methods;gradient flow;optimal transport;Wasserstein;Fisher-Rao;Hellinger;unbalanced optimal transport;partial differential equation;optimization;calculus of variations;variational inference;MCMC;MMD", "tldr": "We propose the spherical interaction-force transport (IFT) gradient flows with the global exponential convergence analysis for both the MMD and the KL-energy gradient flows. Numerical experiments show stable inference performance.", "abstract": "This paper presents a new gradient flow dissipation geometry over non-negative and probability measures. This is motivated by a principled construction that combines the unbalanced optimal transport and interaction forces modeled by reproducing kernels. Using a precise connection between the Hellinger geometry and the maximum mean discrepancy (MMD), we propose the interaction-force transport (IFT) gradient flows and its spherical variant via an infimal convolution of the Wasserstein and spherical MMD tensors. We then develop a particle-based optimization algorithm based on the JKO-splitting scheme of the mass-preserving spherical IFT gradient flows. Finally, we provide both theoretical global exponential convergence guarantees and improved empirical simulation results for applying the IFT gradient flows to the sampling task of MMD-minimization. 
Furthermore, we prove that the spherical IFT gradient flow enjoys the best of both worlds by providing the global exponential convergence guarantee for both the MMD and KL energy.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93438"}
{"video_file": "rQYyWGYuzK_39026408.mp4", "openreview_id": "rQYyWGYuzK", "slideslive_id": 39026408, "venue": "nips2024", "title": "Monomial Matrix Group Equivariant Neural Functional Networks", "status": "Poster", "keywords": "neural functional networks;equivariant networks;monomial matrices;symmetry", "tldr": "We extend the study of the group action on the network weights from the group of permutation matrices to the group of monomial matrices by incorporating scaling symmetries.", "abstract": "Neural functional networks (NFNs) have recently gained significant attention due to their diverse applications, ranging from predicting network generalization and network editing to classifying implicit neural representation. Previous NFN designs often depend on permutation symmetries in neural networks' weights, which traditionally arise from the unordered arrangement of neurons in hidden layers. However, these designs do not take into account the weight scaling symmetries of ReLU networks, and the weight sign flipping symmetries of sin or Tanh networks. In this paper, we extend the study of the group action on the network weights from the group of permutation matrices to the group of monomial matrices by incorporating scaling/sign-flipping symmetries. Particularly, we encode these scaling/sign-flipping symmetries by designing our corresponding equivariant and invariant layers. We name our new family of NFNs the Monomial Matrix Group Equivariant Neural Functional Networks (Monomial-NFN). Because of the expansion of the symmetries, Monomial-NFN has much fewer independent trainable parameters compared to the baseline NFNs in the literature, thus enhancing the model's efficiency. Moreover, for fully connected and convolutional neural networks, we theoretically prove that all groups that leave these networks invariant while acting on their weight spaces are some subgroups of the monomial matrix group. We provide empirical evidences to demonstrate the advantages of our model over existing baselines, achieving competitive performance and efficiency. The code is publicly available at https://github.com/MathematicalAI-NUS/Monomial-NFN.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93437"}
{"video_file": "rYjYwuM6yH_39024943.mp4", "openreview_id": "rYjYwuM6yH", "slideslive_id": 39024943, "venue": "nips2024", "title": "3-in-1: 2D Rotary Adaptation for Efficient Finetuning, Efficient Batching and Composability", "status": "Poster", "keywords": "parameter-efficient finetuning;orthogonal finetuning;batching;interpretability", "tldr": "Our proposed method, RoAd, is extremely parameter-efficient, batching-efficient and composable.", "abstract": "Parameter-efficient finetuning (PEFT) methods effectively adapt large language models (LLMs) to diverse downstream tasks, reducing storage and GPU memory demands. Despite these advantages, several applications pose new challenges to PEFT beyond mere parameter efficiency. One notable challenge involves the efficient deployment of LLMs equipped with multiple task- or user-specific adapters, particularly when different adapters are needed for distinct requests within the same batch.
Another challenge is the interpretability of LLMs, which is crucial for understanding how LLMs function. Previous studies introduced various approaches to address different challenges. In this paper, we introduce a novel method, RoAd, which employs a straightforward 2D rotation to adapt LLMs and addresses all the above challenges: (1) RoAd is remarkably parameter-efficient, delivering optimal performance on GLUE, eight commonsense reasoning tasks and four arithmetic reasoning tasks with <0.1% trainable parameters; (2) RoAd facilitates the efficient serving of requests requiring different adapters within a batch, with an overhead comparable to element-wise multiplication instead of batch matrix multiplication; (3) RoAd enhances LLM's interpretability through integration within a framework of distributed interchange intervention, demonstrated via composition experiments.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93432"} +{"video_file": "rYs2Dmn9tD_39028832.mp4", "openreview_id": "rYs2Dmn9tD", "slideslive_id": 39028832, "venue": "nips2024", "title": "Trace is the Next AutoDiff: Generative Optimization with Rich Feedback, Execution Traces, and LLMs", "status": "Poster", "keywords": "Optimization;Back-Propagation;Automatic Differentiation;LLM;Language Feedback;Execution Trace", "tldr": "Framework for efficient optimization of heterogenous parameters in general computational workflows", "abstract": "We study a class of optimization problems motivated by automating the design and update of AI systems like coding assistants, robots, and copilots. AutoDiff frameworks, like PyTorch, enable efficient end-to-end optimization of differentiable systems. However, general computational workflows can be non-differentiable and involve rich feedback (e.g. console output or user\u2019s responses), heterogeneous parameters (e.g. prompts, codes), and intricate objectives (beyond maximizing a score). We investigate end-to-end generative optimization \u2013 using generative models such as LLMs within the optimizer for automatic updating of general computational workflows. We discover that workflow execution traces are akin to back-propagated gradients in AutoDiff and can provide key information to interpret feedback for efficient optimization. Formally, we frame a new mathematical setup, Optimization with Trace Oracle (OPTO). In OPTO, an optimizer receives an execution trace along with feedback on the computed output and updates parameters iteratively. We provide a Python library, Trace, that efficiently converts a workflow optimization problem into an OPTO instance using PyTorch-like syntax. Using Trace, we develop a general LLM-based generative optimizer called OptoPrime. In empirical studies, we find that OptoPrime is capable of first-order numerical optimization, prompt optimization, hyper-parameter tuning, robot controller design, code debugging, etc., and is often competitive with specialized optimizers for each domain. We envision Trace as an open research platform for devising novel generative optimizers and developing the next generation of interactive learning agents. 
Website: https://microsoft.github.io/Trace/.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/93431"} +{"video_file": "rafVvthuxD_39028886.mp4", "openreview_id": "rafVvthuxD", "slideslive_id": 39028886, "venue": "nips2024", "title": "EM Distillation for One-step Diffusion Models", "status": "Poster", "keywords": "Generative models", "tldr": "We propose EM Distillation (EMD), a maximum likelihood-based approach that distills a diffusion model to a one-step generator model with minimal loss of perceptual quality.", "abstract": "While diffusion models can learn complex distributions, sampling requires a computationally expensive iterative process. Existing distillation methods enable efficient sampling, but have notable limitations, such as performance degradation with very few sampling steps, reliance on training data access, or mode-seeking optimization that may fail to capture the full distribution. We propose EM Distillation (EMD), a maximum likelihood-based approach that distills a diffusion model to a one-step generator model with minimal loss of perceptual quality. Our approach is derived through the lens of Expectation-Maximization (EM), where the generator parameters are updated using samples from the joint distribution of the diffusion teacher prior and inferred generator latents. We develop a reparametrized sampling scheme and a noise cancellation technique that together stabilizes the distillation process. We further reveal an interesting connection of our method with existing methods that minimize mode-seeking KL. EMD outperforms existing one-step generative methods in terms of FID scores on ImageNet-64 and ImageNet-128, and compares favorably with prior work on distilling text-to-image diffusion models.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/93429"} +{"video_file": "rajRJ6WKj2_39025551.mp4", "openreview_id": "rajRJ6WKj2", "slideslive_id": 39025551, "venue": "nips2024", "title": "DeBaRA: Denoising-Based 3D Room Arrangement Generation", "status": "Poster", "keywords": "Indoor 3D Scene Synthesis;Layout Generation;Score-based Generative Models;Diffusion Models;Conditional Generation", "tldr": "We propose DeBaRA, a conditional score-based generative model that performs state-of-the art arrangement generation in bounded indoor scenes and several downstream applications by solely learning 3D object spatial features.", "abstract": "Generating realistic and diverse layouts of furnished indoor 3D scenes unlocks multiple interactive applications impacting a wide range of industries. The inherent complexity of object interactions, the limited amount of available data and the requirement to fulfill spatial constraints all make generative modeling for 3D scene synthesis and arrangement challenging. Current methods address these challenges autoregressively or by using off-the-shelf diffusion objectives by simultaneously predicting all attributes without 3D reasoning considerations. In this paper, we introduce DeBaRA, a score-based model specifically tailored for precise, controllable and flexible arrangement generation in a bounded environment. We argue that the most critical component of a scene synthesis system is to accurately establish the size and position of various objects within a restricted area. Based on this insight, we propose a lightweight conditional score-based model designed with 3D spatial awareness at its core. 
We demonstrate that by focusing on spatial attributes of objects, a single trained DeBaRA model can be leveraged at test time to perform several downstream applications such as scene synthesis, completion and re-arrangement. Further, we introduce a novel Self Score Evaluation procedure so it can be optimally employed alongside external LLM models. We evaluate our approach through extensive experiments and demonstrate significant improvement upon state-of-the-art approaches in a range of scenarios.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93428"} +{"video_file": "rbtnRsiXSN_39028184.mp4", "openreview_id": "rbtnRsiXSN", "slideslive_id": 39028184, "venue": "nips2024", "title": "DeMo: Decoupling Motion Forecasting into Directional Intentions and Dynamic States", "status": "Poster", "keywords": "Motion Forecasting;Autonomous Driving;Mamba;Attention", "tldr": "A framework that decouples multi-modal trajectory queries into mode queries for directional intentions and state queries for dynamic states, utilizing Attention and Mamba.", "abstract": "Accurate motion forecasting for traffic agents is crucial for ensuring the safety and efficiency of autonomous driving systems in dynamically changing environments. Mainstream methods adopt a one-query-one-trajectory paradigm, where each query corresponds to a unique trajectory for predicting multi-modal trajectories. While straightforward and effective, the absence of detailed representation of future trajectories may yield suboptimal outcomes, given that the agent states dynamically evolve over time. To address this problem, we introduce DeMo, a framework that decouples multi-modal trajectory queries into two types: mode queries capturing distinct directional intentions and state queries tracking the agent's dynamic states over time. By leveraging this format, we separately optimize the multi-modality and dynamic evolutionary properties of trajectories. Subsequently, the mode and state queries are integrated to obtain a comprehensive and detailed representation of the trajectories. To achieve these operations, we additionally introduce combined Attention and Mamba techniques for global information aggregation and state sequence modeling, leveraging their respective strengths. Extensive experiments on both the Argoverse 2 and nuScenes benchmarks demonstrate that our DeMo achieves state-of-the-art performance in motion forecasting. In addition, we will make our code and models publicly available.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/93426"} +{"video_file": "re2jPCnzkA_39025634.mp4", "openreview_id": "re2jPCnzkA", "slideslive_id": 39025634, "venue": "nips2024", "title": "MIDGArD: Modular Interpretable Diffusion over Graphs for Articulated Designs", "status": "Poster", "keywords": "3D articulated objects;diffusion models;generative models", "tldr": "We present MIDGArD, a new generative framework for creating 3D articulated objects, separating structure and shape generation.", "abstract": "Providing functionality through articulation and interaction with objects is a key objective in 3D generation. We introduce MIDGArD (Modular Interpretable Diffusion over Graphs for Articulated Designs), a novel diffusion-based framework for articulated 3D asset generation. MIDGArD improves over foundational work in the field by enhancing quality, consistency, and controllability in the generation process. 
This is achieved through MIDGArD's modular approach that separates the problem into two primary components: structure generation and shape generation. The structure generation module of MIDGArD aims at producing coherent articulation features from noisy or incomplete inputs. It acts on the object's structural and kinematic attributes, represented as features of a graph that are being progressively denoised to issue coherent and interpretable articulation solutions. This denoised graph then serves as an advanced conditioning mechanism for the shape generation module, a 3D generative model that populates each link of the articulated structure with consistent 3D meshes. Experiments show the superiority of MIDGArD on the quality, consistency, and interpretability of the generated assets. Importantly, the generated models are fully simulatable, i.e., can be seamlessly integrated into standard physics engines such as MuJoCo, broadening MIDGArD's applicability to fields such as digital content creation, meta realities, and robotics.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93424"} +{"video_file": "rjSPDVdUaw_39027066.mp4", "openreview_id": "rjSPDVdUaw", "slideslive_id": 39027066, "venue": "nips2024", "title": "Moving Off-the-Grid: Scene-Grounded Video Representations", "status": "Spotlight", "keywords": "Self supervised learning;point tracking;representation learning", "tldr": "We propose an \"off-the-grid\" representation model which learns from video and binds tokens to scene elements and tracks them over time consistently via self-supervised learning.", "abstract": "Current vision models typically maintain a fixed correspondence between their representation structure and image space. Each layer comprises a set of tokens arranged \u201con-the-grid,\u201d which biases patches or tokens to encode information at a specific spatio(-temporal) location. In this work we present Moving Off-the-Grid (MooG), a self-supervised video representation model that offers an alternative approach, allowing tokens to move \u201coff-the-grid\u201d to better enable them to represent scene elements consistently, even as they move across the image plane through time. By using a combination of cross-attention and positional embeddings we disentangle the representation structure and image structure. We find that a simple self-supervised objective\u2014next frame prediction\u2014trained on video data, results in a set of latent tokens which bind to specific scene structures and track them as they move. We demonstrate the usefulness of MooG\u2019s learned representation both qualitatively and quantitatively by training readouts on top of the learned representation on a variety of downstream tasks. 
We show that MooG can provide a strong foundation for different vision tasks when compared to \u201con-the-grid\u201d baselines.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93419"} +{"video_file": "rkuVYosT2c_39024915.mp4", "openreview_id": "rkuVYosT2c", "slideslive_id": 39024915, "venue": "nips2024", "title": "Distributed Least Squares in Small Space via Sketching and Bias Reduction", "status": "Poster", "keywords": "Matrix Sketching;Least squares;Randomized Linear Algebra;Random Matrix Theory", "tldr": "We give a sparse sketching method running in optimal space and current matrix multiplication time, recovering a nearly-unbiased least squares estimator using two passes over the data.", "abstract": "Matrix sketching is a powerful tool for reducing the size of large data matrices. Yet there are fundamental limitations to this size reduction when we want to recover an accurate estimator for a task such as least square regression. We show that these limitations can be circumvented in the distributed setting by designing sketching methods that minimize the bias of the estimator, rather than its error. In particular, we give a sparse sketching method running in optimal space and current matrix multiplication time, which recovers a nearly-unbiased least squares estimator using two passes over the data. This leads to new communication-efficient distributed averaging algorithms for least squares and related tasks, which directly improve on several prior approaches. Our key novelty is a new bias analysis for sketched least squares, giving a sharp characterization of its dependence on the sketch sparsity. The techniques include new higher moment restricted Bai-Silverstein inequalities, which are of independent interest to the non-asymptotic analysis of deterministic equivalents for random matrices that arise from sketching.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93417"} +{"video_file": "rle9X7DQuH_39026616.mp4", "openreview_id": "rle9X7DQuH", "slideslive_id": 39026616, "venue": "nips2024", "title": "OwMatch: Conditional Self-Labeling with Consistency for Open-World Semi-Supervised Learning", "status": "Poster", "keywords": "Open-world Semi-Supervised Learning;self-labeling;consistency loss", "tldr": "Boosting open-world semi-supervised learning with conditional self-labeling and open-world hierarchical thresholding", "abstract": "Semi-supervised learning (SSL) offers a robust framework for harnessing the potential of unannotated data. Traditionally, SSL mandates that all classes possess labeled instances. However, the emergence of open-world SSL (OwSSL) introduces a more practical challenge, wherein unlabeled data may encompass samples from unseen classes. This scenario leads to misclassification of unseen classes as known ones, consequently undermining classification accuracy. To overcome this challenge, this study revisits two methodologies from self-supervised and semi-supervised learning, self-labeling and consistency, tailoring them to address the OwSSL problem. Specifically, we propose an effective framework called OwMatch, combining conditional self-labeling and open-world hierarchical thresholding. Theoretically, we analyze the estimation of class distribution on unlabeled data through rigorous statistical analysis, thus demonstrating that OwMatch can ensure the unbiasedness of the label assignment estimator with reliability. 
Comprehensive empirical analyses demonstrate that our method yields substantial performance enhancements across both known and unknown classes in comparison to previous studies. Code is available at https://github.com/niusj03/OwMatch.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93416"} +{"video_file": "s1MoH2pACa_39026519.mp4", "openreview_id": "s1MoH2pACa", "slideslive_id": 39026519, "venue": "nips2024", "title": "EnsIR: An Ensemble Algorithm for Image Restoration via Gaussian Mixture Models", "status": "Poster", "keywords": "Model Ensemble;Image Restoration;Gaussian Mixture Models;Expectation Maximization", "tldr": "In this work, a training-free ensemble algorithm is developed to boost the performance of image restoration by combining multiple pre-trained base models at the inference stage.", "abstract": "Image restoration has experienced significant advancements due to the development of deep learning. Nevertheless, it encounters challenges related to ill-posed problems, resulting in deviations between single model predictions and ground-truths. Ensemble learning, as a powerful machine learning technique, aims to address these deviations by combining the predictions of multiple base models. Most existing works adopt ensemble learning during the design of restoration models, while only limited research focuses on the inference-stage ensemble of pre-trained restoration models. Regression-based methods fail to enable efficient inference, leading researchers in academia and industry to prefer averaging as their choice for post-training ensemble. To address this, we reformulate the ensemble problem of image restoration into Gaussian mixture models (GMMs) and employ an expectation maximization (EM)-based algorithm to estimate ensemble weights for aggregating prediction candidates. We estimate the range-wise ensemble weights on a reference set and store them in a lookup table (LUT) for efficient ensemble inference on the test set. Our algorithm is model-agnostic and training-free, allowing seamless integration and enhancement of various pre-trained image restoration models. It consistently outperforms regression-based methods and averaging ensemble approaches on 14 benchmarks across 3 image restoration tasks, including super-resolution, deblurring and deraining. The codes and all estimated weights have been released in Github.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93407"} +{"video_file": "s2hA6Bz3LE_39024514.mp4", "openreview_id": "s2hA6Bz3LE", "slideslive_id": 39024514, "venue": "nips2024", "title": "Enhancing Diversity in Bayesian Deep Learning via Hyperspherical Energy Minimization of CKA", "status": "Poster", "keywords": "bayesian inference;variational inference;uncertainty quantification;deep learning;hypernetworks", "tldr": "New kernel to increase feature diversity of ensembles, train hypernetworks, and improve uncertainty estimation of deep ensembles.", "abstract": "Particle-based Bayesian deep learning often requires a similarity metric to compare two networks. However, naive similarity metrics lack permutation invariance and are inappropriate for comparing networks. Centered Kernel Alignment (CKA) on feature kernels has been proposed to compare deep networks but has not been used as an optimization objective in Bayesian deep learning. In this paper, we explore the use of CKA in Bayesian deep learning to generate diverse ensembles and hypernetworks that output a network posterior. 
Noting that CKA projects kernels onto a unit hypersphere and that directly optimizing the CKA objective leads to diminishing gradients when two networks are very similar, we propose adopting the approach of hyperspherical energy (HE) on top of CKA kernels to address this drawback and improve training stability. Additionally, by leveraging CKA-based feature kernels, we derive feature repulsive terms applied to synthetically generated outlier examples. Experiments on both diverse ensembles and hypernetworks show that our approach significantly outperforms baselines in terms of uncertainty quantification in both synthetic and realistic outlier detection tasks.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93406"}
{"video_file": "sEpSxteEKJ_39027820.mp4", "openreview_id": "sEpSxteEKJ", "slideslive_id": 39027820, "venue": "nips2024", "title": "Almost-Linear RNNs Yield Highly Interpretable Symbolic Codes in Dynamical Systems Reconstruction", "status": "Poster", "keywords": "recurrent neural networks;dynamical systems;chaos;attractors;interpretability", "tldr": "We introduce Almost-Linear Recurrent Neural Networks (AL-RNNs) to derive highly interpretable piecewise-linear models of dynamical systems from time-series data.", "abstract": "Dynamical systems theory (DST) is fundamental for many areas of science and engineering. It can provide deep insights into the behavior of systems evolving in time, as typically described by differential or recursive equations. A common approach to facilitate mathematical tractability and interpretability of DS models involves decomposing nonlinear DS into multiple linear DS combined by switching manifolds, i.e. piecewise linear (PWL) systems. PWL models are popular in engineering and a frequent choice in mathematics for analyzing the topological properties of DS. However, hand-crafting such models is tedious and only possible for very low-dimensional scenarios, while inferring them from data usually gives rise to unnecessarily complex representations with very many linear subregions. Here we introduce Almost-Linear Recurrent Neural Networks (AL-RNNs) which automatically and robustly produce most parsimonious PWL representations of DS from time series data, using as few PWL nonlinearities as possible. AL-RNNs can be efficiently trained with any SOTA algorithm for dynamical systems reconstruction (DSR), and naturally give rise to a symbolic encoding of the underlying DS that provably preserves important topological properties. We show that for the Lorenz and R\u00f6ssler systems, AL-RNNs derive, in a purely data-driven way, the known topologically minimal PWL representations of the corresponding chaotic attractors.
We further illustrate on two challenging empirical datasets that interpretable symbolic encodings of the dynamics can be achieved, tremendously facilitating mathematical and computational analysis of the underlying systems.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/93399"} +{"video_file": "sGvZyV2iqN_39025098.mp4", "openreview_id": "sGvZyV2iqN", "slideslive_id": 39025098, "venue": "nips2024", "title": "HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach", "status": "Poster", "keywords": "Generative Model;StyleGAN;HairSwap", "tldr": "Our paper introduces the HairFast model, which uses a novel architecture in the FS latent space of StyleGAN to achieve high-resolution, near real-time hairstyle transfer with superior results, even when source and target poses differ significantly.", "abstract": "Our paper addresses the complex task of transferring a hairstyle from a reference image to an input photo for virtual hair try-on. This task is challenging due to the need to adapt to various photo poses, the sensitivity of hairstyles, and the lack of objective metrics. The current state of the art hairstyle transfer methods use an optimization process for different parts of the approach, making them inexcusably slow. At the same time, faster encoder-based models are of very low quality because they either operate in StyleGAN's W+ space or use other low-dimensional image generators. Additionally, both approaches have a problem with hairstyle transfer when the source pose is very different from the target pose, because they either don't consider the pose at all or deal with it inefficiently. In our paper, we present the HairFast model, which uniquely solves these problems and achieves high resolution, near real-time performance, and superior reconstruction compared to optimization problem-based methods. Our solution includes a new architecture operating in the FS latent space of StyleGAN, an enhanced inpainting approach, and improved encoders for better alignment, color transfer, and a new encoder for post-processing. The effectiveness of our approach is demonstrated on realism metrics after random hairstyle transfer and reconstruction when the original hairstyle is transferred. In the most difficult scenario of transferring both shape and color of a hairstyle from different images, our method performs in less than a second on the Nvidia V100.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93397"} +{"video_file": "sRILMnkkQd_39025341.mp4", "openreview_id": "sRILMnkkQd", "slideslive_id": 39025341, "venue": "nips2024", "title": "UniGAD: Unifying Multi-level Graph Anomaly Detection", "status": "Poster", "keywords": "Graph Anomaly Detection;Graph Neural Networks", "tldr": "We propose the first unified framework for detecting anomalies at node, edge, and graph levels.", "abstract": "Graph Anomaly Detection (GAD) aims to identify uncommon, deviated, or suspicious objects within graph-structured data. Existing methods generally focus on a single graph object type (node, edge, graph, etc.) and often overlook the inherent connections among different object types of graph anomalies. For instance, a money laundering transaction might involve an abnormal account and the broader community it interacts with. To address this, we present UniGAD, the first unified framework for detecting anomalies at node, edge, and graph levels jointly. 
Specifically, we develop the Maximum Rayleigh Quotient Subgraph Sampler (MRQSampler) that unifies multi-level formats by transferring objects at each level into graph-level tasks on subgraphs. We theoretically prove that MRQSampler maximizes the accumulated spectral energy of subgraphs (i.e., the Rayleigh quotient) to preserve the most significant anomaly information. To further unify multi-level training, we introduce a novel GraphStitch Network to integrate information across different levels, adjust the amount of sharing required at each level, and harmonize conflicting training goals. Comprehensive experiments show that UniGAD outperforms both existing GAD methods specialized for a single task and graph prompt-based approaches for multiple tasks, while also providing robust zero-shot task transferability.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/93390"} +{"video_file": "sRSjr9SDKR_39024918.mp4", "openreview_id": "sRSjr9SDKR", "slideslive_id": 39024918, "venue": "nips2024", "title": "Preferential Normalizing Flows", "status": "Poster", "keywords": "normalizing flow;elicitation;random utility models;prior distribution", "tldr": "We show how normalising flows can be fitted for preferential data representing expert's choices between a set of alternatives, as a function-space maximum a posteriori estimate with a novel functional prior.", "abstract": "Eliciting a high-dimensional probability distribution from an expert via noisy judgments is notoriously challenging, yet useful for many applications, such as prior elicitation and reward modeling. We introduce a method for eliciting the expert's belief density as a normalizing flow based solely on preferential questions such as comparing or ranking alternatives. This allows eliciting in principle arbitrarily flexible densities, but flow estimation is susceptible to the challenge of collapsing or diverging probability mass that makes it difficult in practice. We tackle this problem by introducing a novel functional prior for the flow, motivated by a decision-theoretic argument, and show empirically that the belief density can be inferred as the function-space maximum a posteriori estimate. We demonstrate our method by eliciting multivariate belief densities of simulated experts, including the prior belief of a general-purpose large language model over a real-world dataset.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93389"} +{"video_file": "satH8Evs2y_39024404.mp4", "openreview_id": "satH8Evs2y", "slideslive_id": 39024404, "venue": "nips2024", "title": "Beware of Road Markings: A New Adversarial Patch Attack to Monocular Depth Estimation", "status": "Poster", "keywords": "monocular depth estimation;adversarial patch;road dependence", "tldr": "We propose a new road adversarial patch against MDE models based on our groundbreaking findings, which is completely different from previous obstacle patches and better adapts to complex traffic scenarios.", "abstract": "Monocular Depth Estimation (MDE) enables the prediction of scene depths from a single RGB image, having been widely integrated into production-grade autonomous driving systems, e.g., Tesla Autopilot. Current adversarial attacks to MDE models focus on attaching an optimized adversarial patch to a designated obstacle. Although effective, this approach presents two inherent limitations: its reliance on specific obstacles and its limited malicious impact. 
In contrast, we propose a pioneering attack to MDE models that \\textit{decouples obstacles from patches physically and deploys optimized patches on roads}, thereby extending the attack scope to arbitrary traffic participants. This approach is inspired by our groundbreaking discovery: \\textit{various MDE models with different architectures, trained for autonomous driving, heavily rely on road regions} when predicting depths for different obstacles. Based on this discovery, we design the Adversarial Road Marking (AdvRM) attack, which camouflages patches as ordinary road markings and deploys them on roads, thereby posing a continuous threat within the environment. Experimental results from both dataset simulations and real-world scenarios demonstrate that AdvRM is effective, stealthy, and robust against various MDE models, achieving about 1.507 of Mean Relative Shift Ratio (MRSR) over 8 MDE models. The code is available at \\url{https://github.com/a-c-a-c/AdvRM.git}", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93386"} +{"video_file": "sbsaRj475E_39028525.mp4", "openreview_id": "sbsaRj475E", "slideslive_id": 39028525, "venue": "nips2024", "title": "DiP-GO: A Diffusion Pruner via Few-step Gradient Optimization", "status": "Poster", "keywords": "Diffusion;Pruning;Speedup;Gradient optimization;SuperNet", "tldr": "A diffusion pruner via few-step gradient optimization without retraining the diffusion model.", "abstract": "Diffusion models have achieved remarkable progress in the field of image generation due to their outstanding capabilities. However, these models require substantial computing resources because of the multi-step denoising process during inference. While traditional pruning methods have been employed to optimize these models, the retraining process necessitates large-scale training datasets and extensive computational costs to maintain generalization ability, making it neither convenient nor efficient. Recent studies attempt to utilize the similarity of features across adjacent denoising stages to reduce computational costs through simple and static strategies. However, these strategies cannot fully harness the potential of the similar feature patterns across adjacent timesteps. In this work, we propose a novel pruning method that derives an efficient diffusion model via a more intelligent and differentiable pruner. At the core of our approach is casting the model pruning process into a SubNet search process. Specifically, we first introduce a SuperNet based on standard diffusion via adding some backup connections built upon the similar features. We then construct a plugin pruner network and design optimization losses to identify redundant computation. Finally, our method can identify an optimal SubNet through few-step gradient optimization and a simple post-processing procedure. We conduct extensive experiments on various diffusion models including Stable Diffusion series and DiTs. 
Our DiP-GO approach achieves 4.4 x speedup for SD-1.5 without any loss of accuracy, significantly outperforming the previous state-of-the-art methods.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93385"} +{"video_file": "scw6Et4pEr_39026918.mp4", "openreview_id": "scw6Et4pEr", "slideslive_id": 39026918, "venue": "nips2024", "title": "DeepLag: Discovering Deep Lagrangian Dynamics for Intuitive Fluid Prediction", "status": "Poster", "keywords": "Deep learning;Fluid prediction;Lagrangian perspective", "tldr": "We propose DeepLag to tackle the intricate fluid dynamics by guiding the Eulerian fluid prediction with learned dynamics of tracked Lagrangian particles.", "abstract": "Accurately predicting the future fluid is vital to extensive areas such as meteorology, oceanology, and aerodynamics. However, since the fluid is usually observed from the Eulerian perspective, its moving and intricate dynamics are seriously obscured and confounded in static grids, bringing thorny challenges to the prediction. This paper introduces a new Lagrangian-Eulerian combined paradigm to tackle the tanglesome fluid dynamics. Instead of solely predicting the future based on Eulerian observations, we propose DeepLag to discover hidden Lagrangian dynamics within the fluid by tracking the movements of adaptively sampled key particles. Further, DeepLag presents a new paradigm for fluid prediction, where the Lagrangian movement of the tracked particles is inferred from Eulerian observations, and their accumulated Lagrangian dynamics information is incorporated into global Eulerian evolving features to guide future prediction respectively. Tracking key particles not only provides a transparent and interpretable clue for fluid dynamics but also makes our model free from modeling complex correlations among massive grids for better efficiency. Experimentally, DeepLag excels in three challenging fluid prediction tasks covering 2D and 3D, simulated and real-world fluids. Code is available at this repository: https://github.com/thuml/DeepLag.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/93384"} +{"video_file": "sgVOjDqUMT_39025454.mp4", "openreview_id": "sgVOjDqUMT", "slideslive_id": 39025454, "venue": "nips2024", "title": "MiniCache: KV Cache Compression in Depth Dimension for Large Language Models", "status": "Poster", "keywords": "KV Cache;Large Language Models;Efficiency AI", "tldr": "We introduce MiniCache, a novel method to compress the KV cache between adjacent layers from a depth perspective, achieving superior compression ratios, high throughput, and near-lossless performance.", "abstract": "A critical approach for efficiently deploying computationally demanding large language models (LLMs) is Key-Value (KV) caching. The KV cache stores key-value states of previously generated tokens, significantly reducing the need for repetitive computations and thereby lowering latency in autoregressive generation. However, the size of the KV cache grows linearly with sequence length, posing challenges for applications requiring long context input and extensive sequence generation. In this paper, we present a simple yet effective approach, called MiniCache, to compress the KV cache across layers from a novel depth perspective, significantly reducing the memory footprint for LLM inference. 
Our approach is based on the observation that KV cache states exhibit high similarity between the adjacent layers in the middle-to-deep portion of LLMs. To facilitate merging, we propose disentangling the states into the magnitude and direction components, interpolating the directions of the state vectors while preserving their lengths unchanged. Furthermore, we introduce a token retention strategy to keep highly distinct state pairs unmerged, thus preserving the information with minimal additional storage overhead. Our MiniCache is training-free and general, complementing existing KV cache compression strategies, such as quantization and sparsity. We conduct a comprehensive evaluation of MiniCache utilizing various models including LLaMA-2, LLaMA-3, Phi-3, Mistral, and Mixtral across multiple benchmarks, demonstrating its exceptional performance in achieving superior compression ratios and high throughput. On the ShareGPT dataset, LLaMA-2-7B with cross-layer merging achieves a compression ratio of 1.53\u00d7. Additionally, since MiniCache is orthogonal to existing quantization techniques, it can achieve a compression ratio of up to 5.02\u00d7 when combined with the 4-bit quantization technique, enhancing inference throughput by approximately 5\u00d7 and reducing the memory footprint by 41% compared to the FP16 full cache baseline, all while maintaining near-lossless performance. Project is available at https://minicache.vmv.re .", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93380"}
{"video_file": "shYQXpnBLB_39025593.mp4", "openreview_id": "shYQXpnBLB", "slideslive_id": 39025593, "venue": "nips2024", "title": "Association of Objects May Engender Stereotypes: Mitigating Association-Engendered Stereotypes in Text-to-Image Generation", "status": "Spotlight", "keywords": "Stereotypes;Diffusion Model;Text-to-Image", "tldr": "A novel approach to mitigate association-engendered stereotypes in T2I diffusion models.", "abstract": "Text-to-Image (T2I) has witnessed significant advancements, demonstrating superior performance for various generative tasks. However, the presence of stereotypes in T2I introduces harmful biases that require urgent attention as the T2I technology becomes more prominent. Previous work for stereotype mitigation mainly concentrated on mitigating stereotypes engendered with individual objects within images, which failed to address stereotypes engendered by the association of multiple objects, referred to as Association-Engendered Stereotypes. For example, mentioning ''black people'' and ''houses'' separately in prompts may not exhibit stereotypes. Nevertheless, when these two objects are associated in prompts, the association of ''black people'' with ''poorer houses'' becomes more pronounced. To tackle this issue, we propose a novel framework, MAS, to Mitigate Association-engendered Stereotypes. This framework models the stereotype problem as a probability distribution alignment problem, aiming to align the stereotype probability distribution of the generated image with the stereotype-free distribution. The MAS framework primarily consists of the Prompt-Image-Stereotype CLIP (PIS CLIP) and Sensitive Transformer. The PIS CLIP learns the association between prompts, images, and stereotypes, which can establish the mapping of prompts to stereotypes.
The Sensitive Transformer produces the sensitive constraints, which guide the stereotyped image distribution to align with the stereotype-free probability distribution. Moreover, recognizing that existing metrics are insufficient for accurately evaluating association-engendered stereotypes, we propose a novel metric, Stereotype-Distribution-Total-Variation(SDTV), to evaluate stereotypes in T2I. Comprehensive experiments demonstrate that our framework effectively mitigates association-engendered stereotypes.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/93379"} +{"video_file": "skeopn3q5Y_39025085.mp4", "openreview_id": "skeopn3q5Y", "slideslive_id": 39025085, "venue": "nips2024", "title": "SfPUEL: Shape from Polarization under Unknown Environment Light", "status": "Poster", "keywords": "shape-from-polarization;photometric 3D reconstruction;physics-based vision", "tldr": "Single-view shape from polarization by integrating pretrained model knowledge under unknown environment illumination.", "abstract": "Shape from polarization (SfP) benefits from advancements like polarization cameras for single-shot normal estimation, but its performance heavily relies on light conditions. This paper proposes SfPUEL, an end-to-end SfP method to jointly estimate surface normal and material under unknown environment light. To handle this challenging light condition, we design a transformer-based framework for enhancing the perception of global context features. We further propose to integrate photometric stereo (PS) priors from pretrained models to enrich extracted features for high-quality normal predictions. As metallic and dielectric materials exhibit different BRDFs, SfPUEL additionally predicts dielectric and metallic material segmentation to further boost performance. Experimental results on synthetic and our collected real-world dataset demonstrate that SfPUEL significantly outperforms existing SfP and single-shot normal estimation methods. The code and dataset is available at https://github.com/YouweiLyu/SfPUEL.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93377"} +{"video_file": "sntv8Ac3U2_39025974.mp4", "openreview_id": "sntv8Ac3U2", "slideslive_id": 39025974, "venue": "nips2024", "title": "Adapting Diffusion Models for Improved Prompt Compliance and Controllable Image Synthesis", "status": "Poster", "keywords": "Image Synthesis; Controllable 2D/3D Synthesis; Diffusion", "tldr": "We introduce a new framework for modeling the joint distribution of images and conditioning variables by adapting Stable Diffusion to enhance prompt compliance, controllability and editing of images.", "abstract": "Recent advances in generative modeling with diffusion processes (DPs) enabled breakthroughs in image synthesis. Despite impressive image quality, these models have various prompt compliance problems, including low recall in generating multiple objects, difficulty in generating text in images, and meeting constraints like object locations and pose. For fine-grained editing and manipulation, they also require fine-grained semantic or instance maps that are tedious to produce manually. While prompt compliance can be enhanced by addition of loss functions at inference, this is time consuming and does not scale to complex scenes. 
To overcome these limitations, this work introduces a new family of Factor Graph Diffusion Models (FG-DMs) that models the joint distribution of images and conditioning variables, such as semantic, sketch, depth or normal maps via a factor graph decomposition. This joint structure has several advantages, including support for efficient sampling based prompt compliance schemes, which produce images of high object recall, semi-automated fine-grained editing, explainability at intermediate levels, ability to produce labeled datasets for the training of downstream models such as segmentation or depth, training with missing data, and continual learning where new conditioning variables can be added with minimal or no modifications to the existing structure. We propose an implementation of FG-DMs by adapting a pre-trained Stable Diffusion (SD) model to implement all FG-DM factors, using only COCO dataset, and show that it is effective in generating images with 15% higher recall than SD while retaining its generalization ability. We introduce an attention distillation loss that encourages consistency among the attention maps of all factors, improving the fidelity of the generated conditions and image. We also show that training FG-DMs from scratch on MM-CelebA-HQ, Cityscapes, ADE20K, and COCO produce images of high quality (FID) and diversity (LPIPS).", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/93374"}
{"video_file": "snxWD0Q4EI_39028349.mp4", "openreview_id": "snxWD0Q4EI", "slideslive_id": 39028349, "venue": "nips2024", "title": "The Iterative Optimal Brain Surgeon: Faster Sparse Recovery by Leveraging Second-Order Information", "status": "Poster", "keywords": "Optimal Brain Surgeon;Sparse Recovery;Pruning;Second-Order Optimization", "tldr": "We consider iterative version of OBS framework of finding sparse solutions providing practical evidence and theoretical justifications for convergence.", "abstract": "The rising footprint of machine learning has led to a focus on imposing model sparsity as a means of reducing computational and memory costs. For deep neural networks (DNNs), the state-of-the-art accuracy-vs-sparsity is achieved by heuristics inspired by the classical Optimal Brain Surgeon (OBS) framework [LeCun et al., 1989, Hassibi and Stork, 1992, Hassibi et al., 1993], which leverages loss curvature information to make better pruning decisions. Yet, these results still lack a solid theoretical understanding, and it is unclear whether they can be improved by leveraging connections to the wealth of work on sparse recovery algorithms. In this paper, we draw new connections between these two areas and present new sparse recovery algorithms inspired by the OBS framework that come with theoretical guarantees under reasonable assumptions and have strong practical performance. Specifically, our work starts from the observation that we can leverage curvature information in OBS-like fashion upon the projection step of classic iterative sparse recovery algorithms such as IHT. We show for the first time that this leads both to improved convergence bounds in well-behaved settings and to stronger practical convergence.
Furthermore, we present extensions of this approach to training accurate sparse DNNs, and validate it experimentally at scale.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93373"} +{"video_file": "soUXmwL5aK_39026261.mp4", "openreview_id": "soUXmwL5aK", "slideslive_id": 39026261, "venue": "nips2024", "title": "Interpretable Generalized Additive Models for Datasets with Missing Values", "status": "Poster", "keywords": "Interpretability;Missing Data;Generalized Additive Models;Sparsity", "tldr": "We introduce an interpretable GAM approach for missing data which improves accuracy under synthetic missingness while globally improving sparsity, all with no significant cost to real-world accuracy or runtime.", "abstract": "Many important datasets contain samples that are missing one or more feature values. Maintaining the interpretability of machine learning models in the presence of such missing data is challenging. Singly or multiply imputing missing values complicates the model\u2019s mapping from features to labels. On the other hand, reasoning on indicator variables that represent missingness introduces a potentially large number of additional terms, sacrificing sparsity. We solve these problems with M-GAM, a sparse, generalized, additive modeling approach that incorporates missingness indicators and their interaction terms while maintaining sparsity through\n\u2113\n0\nregularization. We show that M-GAM provides similar or superior accuracy to prior methods while significantly improving sparsity relative to either imputation or na\u00efve inclusion of indicator variables.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/93372"} +{"video_file": "suYAAOI5bd_39028432.mp4", "openreview_id": "suYAAOI5bd", "slideslive_id": 39028432, "venue": "nips2024", "title": "On the Expressive Power of Tree-Structured Probabilistic Circuits", "status": "Poster", "keywords": "Probabilistic circuits;Circuit complexities;Network polynomials;Probabilistic models", "tldr": "Our paper proves a universal upper bound and a conditional lower bound for the expressive power of tree-structured probabilistic circuits.", "abstract": "Probabilistic circuits (PCs) have emerged as a powerful framework compactly representing probability distributions for efficient and exact probabilistic inference. It has been shown that PCs with general directed acyclic graph (DAG) structure can be understood as a mixture of exponentially (in its height) many components, each of which is a product distributions over univariate marginals. However, existing structure learning algorithms for PCs often generate tree-structured circuits, or using tree-structured circuits as intermediate steps to compress them into DAG-structured circuits. This leads to an intriguing question on whether there exists an exponential gap between DAGs and trees for the PC structure.\nIn this paper, we provide a negative answer to this conjecture by proving that, for\nn\nvariables, there is a quasi-polynomial upper bound\nn\nO\n(\nlog\n\u2061\nn\n)\non the size of an equivalent tree computing the same probability distribution. On the other hand, we will also show that given a depth restriction on the tree, there is a super-polynomial separation between tree and DAG-structured PCs. 
Our work takes an important step towards understanding the expressive power of tree-structured PCs, and our techniques may be of independent interest in the study of structure learning algorithms for PCs.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93366"} +{"video_file": "t8iosEWoyd_39024706.mp4", "openreview_id": "t8iosEWoyd", "slideslive_id": 39024706, "venue": "nips2024", "title": "Stochastic contextual bandits with graph feedback: from independence number to MAS number", "status": "Poster", "keywords": "contextual bandits;graph feedback;minimax rate", "tldr": "We (1) prove a regret lower bound through a novel graph quantity that increases with the number of contexts and (2) propose algorithms that achieve tight upper bound under reasonable assumptions and a weaker one in general.", "abstract": "We consider contextual bandits with graph feedback, a class of interactive learning problems with richer structures than vanilla contextual bandits, where taking an action reveals the rewards for all neighboring actions in the feedback graph under all contexts. Unlike the multi-armed bandits setting where a growing literature has painted a near-complete understanding of graph feedback, much remains unexplored in the contextual bandits counterpart. In this paper, we make inroads into this inquiry by establishing a regret lower bound\n\u03a9\n(\n\u03b2\nM\n(\nG\n)\nT\n)\n, where\nM\nis the number of contexts,\nG\nis the feedback graph, and\n\u03b2\nM\n(\nG\n)\nis our proposed graph-theoretic quantity that characterizes the fundamental learning limit for this class of problems. Interestingly,\n\u03b2\nM\n(\nG\n)\ninterpolates between\n\u03b1\n(\nG\n)\n(the independence number of the graph) and\nm\n(\nG\n)\n(the maximum acyclic subgraph (MAS) number of the graph) as the number of contexts\nM\nvaries. We also provide algorithms that achieve near-optimal regret for important classes of context sequences and/or feedback graphs, such as transitively closed graphs that find applications in auctions and inventory control. In particular, with many contexts, our results show that the MAS number essentially characterizes the statistical complexity for contextual bandits, as opposed to the independence number in multi-armed bandits.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/93357"} +{"video_file": "tAOg1HdvGy_39026489.mp4", "openreview_id": "tAOg1HdvGy", "slideslive_id": 39026489, "venue": "nips2024", "title": "Interpolating Item and User Fairness in Multi-Sided Recommendations", "status": "Poster", "keywords": "fair recommendation;multi-sided platform;multi-stakeholder fairness;recommendation system;online learning algorithms", "tldr": "We present a fair recommendation framework that balances platform revenue and item/user fairness in multi-sided platforms, along with a low-regret algorithm that ensures fair recommendations in an online setting where user data must be learned.", "abstract": "Today's online platforms heavily lean on algorithmic recommendations for bolstering user engagement and driving revenue. However, these recommendations can impact multiple stakeholders simultaneously---the platform, items (sellers), and users (customers)---each with their unique objectives, making it difficult to find the right middle ground that accommodates all stakeholders. 
To address this, we introduce a novel fair recommendation framework, Problem (FAIR), that flexibly balances multi-stakeholder interests via a constrained optimization formulation. We next explore Problem (FAIR) in a dynamic online setting where data uncertainty further adds complexity, and propose a low-regret algorithm FORM that concurrently performs real-time learning and fair recommendations, two tasks that are often at odds. Via both theoretical analysis and a numerical case study on real-world data, we demonstrate the efficacy of our framework and method in maintaining platform revenue while ensuring desired levels of fairness for both items and users.", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/93355"} +{"video_file": "tAlMAcqK9s_39026193.mp4", "openreview_id": "tAlMAcqK9s", "slideslive_id": 39026193, "venue": "nips2024", "title": "Optimal Algorithms for Augmented Testing of Discrete Distributions", "status": "Poster", "keywords": "distribution testing;learning-augmented algorithms;data driven algorithm;hypothesis testing;hypothesis selection;distribution learning", "tldr": "We study hypothesis testing of distributions in an augmented setting where learned information about the underlying distribution is available.", "abstract": "We consider the problem of hypothesis testing for discrete distributions. In the standard model, where we have sample access to an underlying distribution\np\n, extensive research has established optimal bounds for uniformity testing, identity testing (goodness of fit), and closeness testing (equivalence or two-sample testing). We explore these problems in a setting where a predicted data distribution, possibly derived from historical data or predictive machine learning models, is available. We demonstrate that such a predictor can indeed reduce the number of samples required for all three property testing tasks. The reduction in sample complexity depends directly on the predictor\u2019s quality, measured by its total variation distance from\np\n. A key advantage of our algorithms is their adaptability to the precision of the prediction. Specifically, our algorithms can self-adjust their sample complexity based on the accuracy of the available prediction, operating without any prior knowledge of the estimation\u2019s accuracy (i.e. they are consistent). Additionally, we never use more samples than the standard approaches require, even if the predictions provide no meaningful information (i.e. they are also robust). We provide lower bounds to indicate that the improvements in sample complexity achieved by our algorithms are information-theoretically optimal. 
Furthermore, experimental results show that the performance of our algorithms on real data significantly exceeds our worst-case guarantees for sample complexity, demonstrating the practicality of our approach.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93354"} +{"video_file": "tBRNC6YemY_39024887.mp4", "openreview_id": "tBRNC6YemY", "slideslive_id": 39024887, "venue": "nips2024", "title": "Gorilla: Large Language Model Connected with Massive APIs", "status": "Poster", "keywords": "LLM;Tool Use;APIs;Function Calling", "tldr": "Teaching LLMs to use tools at scale with innvoations in finetuning (RAT) and a novel way to mesasure hallucination using AST.", "abstract": "Large Language Models (LLMs) have seen an impressive wave of advances, with models now excelling in a variety of tasks, such as mathematical reasoning and program synthesis. However, their potential to effectively use tools via API calls remains unfulfilled. This is a challenging task even for today\u2019s state-of-the-art LLMs such as GPT-4 largely due to their unawareness of what APIs are available and how to use them in a frequently updated tool set. We develop Gorilla, a finetuned LLaMA model that surpasses the performance of GPT-4 on writing API calls. Trained with the novel Retriever Aware Training (RAT), when combined with a document retriever, Gorilla demonstrates a strong capability to adapt to test-time document changes, allowing flexible user updates or version changes. It also substantially mitigates the issue of hallucination, commonly encountered when prompting LLMs directly. To evaluate the model\u2019s ability, we introduce APIBench, a comprehensive dataset consisting of HuggingFace, TorchHub, and TensorHub APIs. The successful integration of the retrieval system with Gorilla demonstrates the potential for LLMs to use tools more accurately, keep up with frequently updated documentation, and consequently increase the reliability and applicability of their outputs. Gorilla\u2019s code, model, data, and demo are available at: https://gorilla.cs.berkeley.edu", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93353"} +{"video_file": "tDvFa5OJyS_39026349.mp4", "openreview_id": "tDvFa5OJyS", "slideslive_id": 39026349, "venue": "nips2024", "title": "Computation-Aware Gaussian Processes: Model Selection And Linear-Time Inference", "status": "Poster", "keywords": "Gaussian Processes;Model Selection;Approximate Inference;Variational Inference;Probabilistic Numerics", "tldr": "We demonstrate how to perform model selection for computation-aware Gaussian processes enabling training on over a million data points on a single GPU.", "abstract": "Model selection in Gaussian processes scales prohibitively with the size of the training dataset, both in time and memory. While many approximations exist, all incur inevitable approximation error. Recent work accounts for this error in the form of computational uncertainty, which enables---at the cost of quadratic complexity---an explicit tradeoff between computational efficiency and precision. Here we extend this development to model selection, which requires significant enhancements to the existing approach, including linear-time scaling in the size of the dataset. We propose a novel training loss for hyperparameter optimization and demonstrate empirically that the resulting method can outperform SGPR, CGGP and SVGP, state-of-the-art methods for GP model selection, on medium to large-scale datasets. 
Our experiments show that model selection for computation-aware GPs trained on 1.8 million data points can be done within a few hours on a single GPU. As a result of this work, Gaussian processes can be trained on large-scale datasets without significantly compromising their ability to quantify uncertainty---a fundamental prerequisite for optimal decision-making.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93350"} +{"video_file": "tEEpVPDaRf_39027356.mp4", "openreview_id": "tEEpVPDaRf", "slideslive_id": 39027356, "venue": "nips2024", "title": "Identity Decoupling for Multi-Subject Personalization of Text-to-Image Models", "status": "Poster", "keywords": "Text-to-Image Diffusion Models;Multi-subject personalization", "tldr": "MuDI enables multi-subject personalization by effectively decoupling identities from multiple subjects.", "abstract": "Text-to-image diffusion models have shown remarkable success in generating personalized subjects based on a few reference images. However, current methods often fail when generating multiple subjects simultaneously, resulting in mixed identities with combined attributes from different subjects. In this work, we present MuDI, a novel framework that enables multi-subject personalization by effectively decoupling identities from multiple subjects. Our main idea is to utilize segmented subjects generated by a foundation model for segmentation (Segment Anything) for both training and inference, as a form of data augmentation for training and initialization for the generation process. Moreover, we further introduce a new metric to better evaluate the performance of our method on multi-subject personalization. Experimental results show that our MuDI can produce high-quality personalized images without identity mixing, even for highly similar subjects as shown in Figure 1. Specifically, in human evaluation, MuDI obtains twice the success rate for personalizing multiple subjects without identity mixing over existing baselines and is preferred over 70% against the strongest baseline.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93349"} +{"video_file": "tFB5SsabVb_39025690.mp4", "openreview_id": "tFB5SsabVb", "slideslive_id": 39025690, "venue": "nips2024", "title": "Graph Neural Flows for Unveiling Systemic Interactions Among Irregularly Sampled Time Series", "status": "Poster", "keywords": "Graph learning;neural flows;time series", "tldr": "A graph-based continuous-time model is proposed to unveil systemic interactions and improve time series tasks such as classification and forecasting.", "abstract": "Interacting systems are prevalent in nature. It is challenging to accurately predict the dynamics of the system if its constituent components are analyzed independently. We develop a graph-based model that unveils the systemic interactions of time series observed at irregular time points, by using a directed acyclic graph to model the conditional dependencies (a form of causal notation) of the system components and learning this graph in tandem with a continuous-time model that parameterizes the solution curves of ordinary differential equations (ODEs). Our technique, a graph neural flow, leads to substantial enhancements over non-graph-based methods, as well as graph-based methods without the modeling of conditional dependencies. 
We validate our approach on several tasks, including time series classification and forecasting, to demonstrate its efficacy.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/93348"} +{"video_file": "tKuLgnDWWN_39025302.mp4", "openreview_id": "tKuLgnDWWN", "slideslive_id": 39025302, "venue": "nips2024", "title": "SILENCE: Protecting privacy in offloaded speech understanding on resource-constrained devices", "status": "Poster", "keywords": "spoken language understanding;resource-constrained devices;privacy-preserving", "tldr": "We provide a lightweight, privacy-preserving encoder that can be efficiently embedded into low-power audio devices.", "abstract": "Speech serves as a ubiquitous input interface for embedded mobile devices. Cloud-based solutions, while offering powerful speech understanding services, raise significant concerns regarding user privacy. To address this, disentanglement-based encoders have been proposed to remove sensitive information from speech signals without compromising the speech understanding functionality. However, these encoders demand high memory usage and computation complexity, making them impractical for resource-constrained wimpy devices. Our solution is based on a key observation that speech understanding hinges on long-term dependency knowledge of the entire utterance, in contrast to privacy-sensitive elements that are short-term dependent. Exploiting this observation, we propose SILENCE, a lightweight system that selectively obscuring short-term details, without damaging the long-term dependent speech understanding performance. The crucial part of SILENCE is a differential mask generator derived from interpretable learning to automatically configure the masking process. We have implemented SILENCE on the STM32H7 microcontroller and evaluate its efficacy under different attacking scenarios. Our results demonstrate that SILENCE offers speech understanding performance and privacy protection capacity comparable to existing encoders, while achieving up to 53.3\n\u00d7\nspeedup and 134.1\n\u00d7\nreduction in memory footprint.", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/93343"} +{"video_file": "tPgagXpvcV_39027306.mp4", "openreview_id": "tPgagXpvcV", "slideslive_id": 39027306, "venue": "nips2024", "title": "Any2Graph: Deep End-To-End Supervised Graph Prediction With An Optimal Transport Loss", "status": "Spotlight", "keywords": "Optimal Transport;Graph Prediction;Structured Prediction;Graph;Deep Learning", "tldr": "We introduce Any2Graph a framework for deep end-to-end supervised graph prediction. The key components of the framework is PMFGW, an Optimal Transport Loss.", "abstract": "We propose Any2graph, a generic framework for end-to-end Supervised Graph Prediction (SGP) i.e. a deep learning model that predicts an entire graph for any kind of input. The framework is built on a novel Optimal Transport loss, the Partially-Masked Fused Gromov-Wasserstein, that exhibits all necessary properties (permutation invariance, differentiability and scalability) and is designed to handle any-sized graphs. 
Numerical experiments showcase the versatility of the approach that outperform existing competitors on a novel challenging synthetic dataset and a variety of real-world tasks such as map construction from satellite image (Sat2Graph) or molecule prediction from fingerprint (Fingerprint2Graph).", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/93336"} +{"video_file": "tQukGCDaNT_39027277.mp4", "openreview_id": "tQukGCDaNT", "slideslive_id": 39027277, "venue": "nips2024", "title": "Improved Distribution Matching Distillation for Fast Image Synthesis", "status": "Oral", "keywords": "Image Generation;diffusion based models;model distillation", "tldr": "We distill diffusion models into few-step generators that produce images with superior quality.", "abstract": "Recent approaches have shown promises distilling expensive diffusion models into efficient one-step generators. Amongst them, Distribution Matching Distillation (DMD) produces one-step generators that match their teacher in distribution, i.e., the distillation process does not enforce a one-to-one correspondence with the sampling trajectories of their teachers. However, to ensure stable training in practice, DMD requires an additional regression loss computed using a large set of noise--image pairs, generated by the teacher with many steps of a deterministic sampler. This is not only computationally expensive for large-scale text-to-image synthesis, but it also limits the student's quality, tying it too closely to the teacher's original sampling paths. We introduce DMD2, a set of techniques that lift this limitation and improve DMD training. First, we eliminate the regression loss and the need for expensive dataset construction. We show that the resulting instability is due to the \"fake\" critic not estimating the distribution of generated samples with sufficient accuracy and propose a two time-scale update rule as a remedy. Second, we integrate a GAN loss into the distillation procedure, discriminating between generated samples and real images. This lets us train the student model on real data, thus mitigating the imperfect \"real\" score estimation from the teacher model, and thereby enhancing quality. Third, we introduce a new training procedure that enables multi-step sampling in the student, and addresses the training--inference input mismatch of previous work, by simulating inference-time generator samples during training. Taken together, our improvements set new benchmarks in one-step image generation, with FID scores of 1.28 on ImageNet-64\u00d764 and 8.35 on zero-shot COCO 2014, surpassing the original teacher despite a 500X reduction in inference cost. Further, we show our approach can generate megapixel images by distilling SDXL, demonstrating exceptional visual quality among few-step methods, and surpassing the teacher. 
We release our code and pretrained models.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/93335"} +{"video_file": "tTnFH7D1h4_39028710.mp4", "openreview_id": "tTnFH7D1h4", "slideslive_id": 39028710, "venue": "nips2024", "title": "Out-of-Distribution Detection with a Single Unconditional Diffusion Model", "status": "Poster", "keywords": "out-of-distribution detection;anomaly detection;diffusion model", "tldr": "We propose to perform unsupervised out-of-distribution detection using a single unconditional diffusion model by characterizing properties of the diffusion path.", "abstract": "Out-of-distribution (OOD) detection is a critical task in machine learning that seeks to identify abnormal samples. Traditionally, unsupervised methods utilize a deep generative model for OOD detection. However, such approaches require a new model to be trained for each inlier dataset. This paper explores whether a single model can perform OOD detection across diverse tasks. To that end, we introduce Diffusion Paths (DiffPath), which uses a single diffusion model originally trained to perform unconditional generation for OOD detection. We introduce a novel technique of measuring the rate-of-change and curvature of the diffusion paths connecting samples to the standard normal. Extensive experiments show that with a single model, DiffPath is competitive with prior work using individual models on a variety of OOD tasks involving different distributions. Our code is publicly available at https://github.com/clear-nus/diffpath.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/93332"} +{"video_file": "tUpcRQNvVM_39027726.mp4", "openreview_id": "tUpcRQNvVM", "slideslive_id": 39027726, "venue": "nips2024", "title": "Deep Submodular Peripteral Networks", "status": "Spotlight", "keywords": "Submodular Optimization;Learning Set Functions;Experimental Design;Streaming Summarization;Data subset selection;Knowledge Distillation", "tldr": "This work proposes a new expressive parametric family of submodular functions and new graded-pairwise comparison (GPC) loss functions for their learning from expensive teachers.", "abstract": "Submodular functions, crucial for various applications, often lack practical learning methods for their acquisition. Seemingly unrelated, learning a scaling from oracles offering graded pairwise preferences (GPC) is underexplored, despite a rich history in psychometrics. In this paper, we introduce deep submodular peripteral networks (DSPNs), a novel parametric family of submodular functions, and methods for their training using a GPC-based strategy to connect and then tackle both of the above challenges. We introduce newly devised GPC-style ``peripteral'' loss which leverages numerically graded relationships between pairs of objects (sets in our case). Unlike traditional contrastive learning, or RHLF preference ranking, our method utilizes graded comparisons, extracting more nuanced information than just binary-outcome comparisons, and contrasts sets of any size (not just two). We also define a novel suite of automatic sampling strategies for training, including active-learning inspired submodular feedback. 
We demonstrate DSPNs' efficacy in learning submodularity from a costly target submodular function and demonstrate its superiority both for experimental design and online streaming applications.", "primary_area": "active_learning", "site": "https://neurips.cc/virtual/2024/poster/93329"} +{"video_file": "tVConYid20_39025082.mp4", "openreview_id": "tVConYid20", "slideslive_id": 39025082, "venue": "nips2024", "title": "FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision", "status": "Spotlight", "keywords": "attention;hardware-aware algorithms;H100", "tldr": "We speed up FlashAttention on modern GPUs (Hopper) with asynchrony and low-precision", "abstract": "Attention, as a core layer of the ubiquitous Transformer architecture, is the bottleneck for large language models and long-context applications. elaborated an approach to speed up attention on GPUs through minimizing memory reads/writes. However, it has yet to take advantage of new capabilities present in recent hardware, with FlashAttention-2 achieving only 35% utilization on the H100 GPU. We develop three main techniques to speed up attention on Hopper GPUs: exploiting asynchrony of the Tensor Cores and TMA to (1) overlap overall computation and data movement via warp-specialization and (2) interleave block-wise matmul and softmax operations, and (3) block quantization and incoherent processing that leverages hardware support for FP8 low-precision. We demonstrate that our method, FlashAttention-3, achieves speedup on H100 GPUs by 1.5-2.0\n\u00d7\nwith BF16 reaching up to 840 TFLOPs/s (85% utilization), and with FP8 reaching 1.3 PFLOPs/s. We validate that FP8 FlashAttention-3 achieves 2.6\n\u00d7\nlower numerical error than a baseline FP8 attention.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/93328"} +{"video_file": "tWkL7k1u5v_39025408.mp4", "openreview_id": "tWkL7k1u5v", "slideslive_id": 39025408, "venue": "nips2024", "title": "Improving Equivariant Model Training via Constraint Relaxation", "status": "Poster", "keywords": "Equivariant Neural Networks;Symmetries;Approximate Equivariance;Optimization", "tldr": "Improving the optimization of equivariant neural networks by relaxing the equivariant constraint during training", "abstract": "Equivariant neural networks have been widely used in a variety of applications due to their ability to generalize well in tasks where the underlying data symmetries are known. Despite their successes, such networks can be difficult to optimize and require careful hyperparameter tuning to train successfully. In this work, we propose a novel framework for improving the optimization of such models by relaxing the hard equivariance constraint during training: We relax the equivariance constraint of the network's intermediate layers by introducing an additional non-equivariant term that we progressively constrain until we arrive at an equivariant solution. By controlling the magnitude of the activation of the additional relaxation term, we allow the model to optimize over a larger hypothesis space containing approximate equivariant networks and converge back to an equivariant solution at the end of training. We provide experimental results on different state-of-the-art network architectures, demonstrating how this training framework can result in equivariant models with improved generalization performance. 
Our code is available at https://github.com/StefanosPert/Equivariant_Optimization_CR", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/93327"} +{"video_file": "tZtepJBtHg_39024816.mp4", "openreview_id": "tZtepJBtHg", "slideslive_id": 39024816, "venue": "nips2024", "title": "Transductive Active Learning: Theory and Applications", "status": "Poster", "keywords": "active learning;experimental design;bandits;Bayesian optimization;neural networks;deep learning;fine-tuning;transfer learning;transductive learning;generalization;extrapolation", "tldr": "We develop a theory for automatic data selection when you know what you want to learn. We show that knowing what you want a model to learn can be leveraged to learn much more efficiently than just trying to learn \"everything\".", "abstract": "We study a generalization of classical active learning to real-world settings with concrete prediction targets where sampling is restricted to an accessible region of the domain, while prediction targets may lie outside this region. We analyze a family of decision rules that sample adaptively to minimize uncertainty about prediction targets. We are the first to show, under general regularity assumptions, that such decision rules converge uniformly to the smallest possible uncertainty obtainable from the accessible data. We demonstrate their strong sample efficiency in two key applications: active fine-tuning of large neural networks and safe Bayesian optimization, where they achieve state-of-the-art performance.", "primary_area": "active_learning", "site": "https://neurips.cc/virtual/2024/poster/93324"} +{"video_file": "taI8M5DiXj_39026378.mp4", "openreview_id": "taI8M5DiXj", "slideslive_id": 39026378, "venue": "nips2024", "title": "When to Act and When to Ask: Policy Learning With Deferral Under Hidden Confounding", "status": "Poster", "keywords": "policy learning;causal inference;sensitivity analysis;human-algorithm collaboration", "tldr": "Learning a treatment recommendation policy under hidden confounding, with the option of deferring the decision to an expert", "abstract": "We consider the task of learning how to act in collaboration with a human expert based on observational data. The task is motivated by high-stake scenarios such as healthcare and welfare where algorithmic action recommendations are made to a human expert, opening the option of deferring making a recommendation in cases where the human might act better on their own. This task is especially challenging when dealing with observational data, as using such data runs the risk of hidden confounders whose existence can lead to biased and harmful policies. However, unlike standard policy learning, the presence of a human expert can mitigate some of these risks. We build on the work of Mozannar and Sontag (2020) on consistent surrogate loss for learning with the option of deferral to an expert, where they solve a cost-sensitive supervised classification problem. Since we are solving a causal problem, where labels don\u2019t exist, we use a causal model to learn costs which are robust to a bounded degree of hidden confounding. We prove that our approach can take advantage of the strengths of both the model and the expert to obtain a better policy than either. 
We demonstrate our results by conducting experiments on synthetic and semi-synthetic data and show the advantages of our method compared to baselines.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/93323"} +{"video_file": "tb1MlJCY5g_39026424.mp4", "openreview_id": "tb1MlJCY5g", "slideslive_id": 39026424, "venue": "nips2024", "title": "KALM: Knowledgeable Agents by Offline Reinforcement Learning from Large Language Model Rollouts", "status": "Poster", "keywords": "reinforcement learning;large language models;knowledgeable agents", "tldr": "This study investigates developing knowledgeable agents with RL and LLMs, which achieve low-level control and adapt to novel situations.", "abstract": "Reinforcement learning (RL) traditionally trains agents using interaction data, which limits their capabilities to the scope of the training data. To create more knowledgeable agents, leveraging knowledge from large language models (LLMs) has shown a promising way. Despite various attempts to combine LLMs with RL, there is commonly a semantic gap between action signals and LLM tokens, which hinders their integration. This paper introduces a novel approach, KALM (Knowledgeable Agents from Language Model Rollouts), to learn knowledgeable agents by bridging this gap. KALM extracts knowledge from LLMs in the form of imaginary rollouts, which agents can learn through offline RL. To overcome the limitation that LLMs are inherently text-based and may be incompatible with numerical environmental data, KALM fine-tunes the LLM to perform bidirectional translation between textual goals and rollouts. This process enables the LLM to understand the environment better, facilitating the generation of meaningful rollouts. Experiments on robotic manipulation tasks demonstrate that KALM allows agents to rephrase complex goals and tackle novel tasks requiring new optimal behaviors. KALM achieves a 46% success rate in completing 1400 various novel goals, significantly outperforming the 26% success rate of baseline methods. Project homepage: https://kalmneurips2024.github.io.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93321"} +{"video_file": "tnh4LK72yj_39027802.mp4", "openreview_id": "tnh4LK72yj", "slideslive_id": 39027802, "venue": "nips2024", "title": "Get Rid of Isolation: A Continuous Multi-task Spatio-Temporal Learning Framework", "status": "Oral", "keywords": "continuous multi-task learning;spatio-temporal forecasting;urban intelligence", "tldr": "Breaking free from isolation, this work presents an innovative multi-task spatio-temporal modeling approach, fostering interconnectedness among diverse data sources for enhanced prediction accuracy and adaptability in urban forecasting", "abstract": "Spatiotemporal learning has become a pivotal technique to enable urban intelligence. Traditional spatiotemporal models mostly focus on a specific task by assuming a same distribution between training and testing sets. However, given that urban systems are usually dynamic, multi-sourced with imbalanced data distributions, current specific task-specific models fail to generalize to new urban conditions and adapt to new domains without explicitly modeling interdependencies across various dimensions and types of urban data. 
To this end, we argue that there is an essential to propose a Continuous Multi-task Spatio-Temporal learning framework (CMuST) to empower collective urban intelligence, which reforms the urban spatiotemporal learning from single-domain to cooperatively multi-dimensional and multi-task learning. Specifically, CMuST proposes a new multi-dimensional spatiotemporal interaction network (MSTI) to allow cross-interactions between context and main observations as well as self-interactions within spatial and temporal aspects to be exposed, which is also the core for capturing task-level commonality and personalization. To ensure continuous task learning, a novel Rolling Adaptation training scheme (RoAda) is devised, which not only preserves task uniqueness by constructing data summarization-driven task prompts, but also harnesses correlated patterns among tasks by iterative model behavior modeling. We further establish a benchmark of three cities for multi-task spatiotemporal learning, and empirically demonstrate the superiority of CMuST via extensive evaluations on these datasets. The impressive improvements on both few-shot streaming data and new domain tasks against existing SOAT methods are achieved. Code is available at https://github.com/DILab-USTCSZ/CMuST.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/93311"} +{"video_file": "ttUXtV2YrA_39027834.mp4", "openreview_id": "ttUXtV2YrA", "slideslive_id": 39027834, "venue": "nips2024", "title": "Revisiting the Integration of Convolution and Attention for Vision Backbone", "status": "Poster", "keywords": "convolution;attention;vision backbone", "tldr": "Apply attention and convolution at different granularity levels for more efficient representation learning.", "abstract": "Convolutions (Convs) and multi-head self-attentions (MHSAs) are typically considered alternatives to each other for building vision backbones. Although some works try to integrate both, they apply the two operators simultaneously at the finest pixel granularity. With Convs responsible for per-pixel feature extraction already, the question is whether we still need to include the heavy MHSAs at such a fine-grained level. In fact, this is the root cause of the scalability issue w.r.t. the input resolution for vision transformers. To address this important problem, we propose in this work to use MSHAs and Convs in parallel \\textbf{at different granularity levels} instead. Specifically, in each layer, we use two different ways to represent an image: a fine-grained regular grid and a coarse-grained set of semantic slots. We apply different operations to these two representations: Convs to the grid for local features, and MHSAs to the slots for global features. A pair of fully differentiable soft clustering and dispatching modules is introduced to bridge the grid and set representations, thus enabling local-global fusion. Through extensive experiments on various vision tasks, we empirically verify the potential of the proposed integration scheme, named \\textit{GLMix}: by offloading the burden of fine-grained features to light-weight Convs, it is sufficient to use MHSAs in a few (e.g., 64) semantic slots to match the performance of recent state-of-the-art backbones, while being more efficient. 
Our visualization results also demonstrate that the soft clustering module produces a meaningful semantic grouping effect with only IN1k classification supervision, which may induce better interpretability and inspire new weakly-supervised semantic segmentation approaches. Code will be available at \\url{https://github.com/rayleizhu/GLMix}.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93308"} +{"video_file": "tu1oC7zHGW_39026278.mp4", "openreview_id": "tu1oC7zHGW", "slideslive_id": 39026278, "venue": "nips2024", "title": "Unveiling the Tapestry of Consistency in Large Vision-Language Models", "status": "Poster", "keywords": "Consistency;ConBench;Large Vision-Language Models;Analysis", "tldr": "We propose a Consistency benchmark, get an in-depth analysis and design a simple method to improve VLMs.", "abstract": "Large vision-language models (LVLMs) have recently achieved rapid progress, exhibiting great perception and reasoning abilities concerning visual information. However, when faced with prompts in different sizes of solution spaces, LVLMs fail to always give consistent answers regarding the same knowledge point. This inconsistency of answers between different solution spaces is prevalent in LVLMs and erodes trust. To this end, we provide a multi-modal benchmark ConBench, to intuitively analyze how LVLMs perform when the solution space of a prompt revolves around a knowledge point. Based on the ConBench tool, we are the first to reveal the tapestry and get the following findings: (1) In the discriminate realm, the larger the solution space of the prompt, the lower the accuracy of the answers. (2) Establish the relationship between the discriminative and generative realms: the accuracy of the discriminative question type exhibits a strong positive correlation with its Consistency with the caption. (3) Compared to open-source models, closed-source models exhibit a pronounced bias advantage in terms of Consistency. Eventually, we ameliorate the consistency of LVLMs by trigger-based diagnostic refinement, indirectly improving the performance of their caption. We hope this paper will accelerate the research community in better evaluating their models and encourage future advancements in the consistency domain.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/93307"} +{"video_file": "tuiqq1G8I5_39026976.mp4", "openreview_id": "tuiqq1G8I5", "slideslive_id": 39026976, "venue": "nips2024", "title": "DisCEdit: Model Editing by Identifying Discriminative Components", "status": "Poster", "keywords": "model editing;selective forgetting;structured pruning;total variation distance", "tldr": "We use new lower bounds on the TV distance to identify discriminative network components for structured pruning and selective forgetting.", "abstract": "Model editing is a growing area of research that is particularly valuable in contexts where modifying key model components, like neurons or filters, can significantly impact the model\u2019s performance. The key challenge lies in identifying important components useful to the model\u2019s predictions. We apply model editing to address two active areas of research, Structured Pruning, and Selective Class Forgetting. 
In this work, we adopt a distributional approach to the problem of identifying important components, leveraging the recently proposed discriminative filters hypothesis, which states that well-trained (convolutional) models possess discriminative filters that are essential to prediction. To do so, we define discriminative ability in terms of the Bayes error rate associated with the feature distributions, which is equivalent to computing the Total Variation (TV) distance between the distributions. However, computing the TV distance is intractable, motivating us to derive novel witness function-based lower bounds on the TV distance that require no assumptions on the underlying distributions; using this bound generalizes prior work such as Murti et al. [39] that relied on unrealistic Gaussianity assumptions on the feature distributions. With these bounds, we are able to discover critical subnetworks responsible for classwise predictions, and derive DISCEDIT-SP and DISCEDIT-U , algorithms for structured pruning requiring no access to the training data and loss function, and selective forgetting respectively. We apply DISCEDIT-U to selective class forgetting on models trained on CIFAR10 and CIFAR100, and we show that on average, we can reduce accuracy on a single class by over 80% with a minimal reduction in test accuracy on the remaining classes. Similarly, on Structured pruning problems, we obtain 40.8% sparsity on ResNet50 on Imagenet, with only a 2.6% drop in accuracy with minimal fine-tuning.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93306"} +{"video_file": "twYE75Mnkt_39025441.mp4", "openreview_id": "twYE75Mnkt", "slideslive_id": 39025441, "venue": "nips2024", "title": "Derandomizing Multi-Distribution Learning", "status": "Poster", "keywords": "pac learning;multi-distribution;derandomization;computational efficiency;discrepancy minimization", "tldr": "We show that it is computationally hard to derandomize multi-distribution learning algorithms. We also show this hardness can be alleviated with a structural condition that we identify.", "abstract": "Multi-distribution or collaborative learning involves learning a single predictor that works well across multiple data distributions, using samples from each during training. Recent research on multi-distribution learning, focusing on binary loss and finite VC dimension classes, has shown near-optimal sample complexity that is achieved with oracle efficient algorithms. That is, these algorithms are computationally efficient given an efficient ERM for the class. Unlike in classical PAC learning, where the optimal sample complexity is achieved with deterministic predictors, current multi-distribution learning algorithms output randomized predictors. This raises the question: can these algorithms be derandomized to produce a deterministic predictor for multiple distributions? Through a reduction to discrepancy minimization, we show that derandomizing multi-distribution learning is computationally hard, even when ERM is computationally efficient. 
On the positive side, we identify a structural condition enabling an efficient black-box reduction, converting existing randomized multi-distribution predictors into deterministic ones.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93304"} +{"video_file": "twpPD9UMUN_39028150.mp4", "openreview_id": "twpPD9UMUN", "slideslive_id": 39028150, "venue": "nips2024", "title": "Look, Listen, and Answer: Overcoming Biases for Audio-Visual Question Answering", "status": "Poster", "keywords": "audio-visual question answering;bias elimination;debiasing;multimodality learning", "tldr": "We systematically explore the bias in the AVQA task from the persepctive of model designs and model evaluations.", "abstract": "Audio-Visual Question Answering (AVQA) is a complex multi-modal reasoning task, demanding intelligent systems to accurately respond to natural language queries based on audio-video input pairs. Nevertheless, prevalent AVQA approaches are prone to overlearning dataset biases, resulting in poor robustness. Furthermore, current datasets may not provide a precise diagnostic for these methods. To tackle these challenges, firstly, we propose a novel dataset, MUSIC-AVQA-R, crafted in two steps: rephrasing questions within the test split of a public dataset (MUSIC-AVQA) and subsequently introducing distribution shifts to split questions. The former leads to a large, diverse test space, while the latter results in a comprehensive robustness evaluation on rare, frequent, and overall questions. Secondly, we propose a robust architecture that utilizes a multifaceted cycle collaborative debiasing strategy to overcome bias learning. Experimental results show that this architecture achieves state-of-the-art performance on MUSIC-AVQA-R, notably obtaining a significant improvement of 9.32%. Extensive ablation experiments are conducted on the two datasets mentioned to analyze the component effectiveness within the debiasing strategy. Additionally, we highlight the limited robustness of existing multi-modal QA methods through the evaluation on our dataset. We also conduct experiments combining various baselines with our proposed strategy on two datasets to verify its plug-and-play capability. Our dataset and code are available at https://github.com/reml-group/MUSIC-AVQA-R.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93303"} +{"video_file": "tyPcIETPWM_39028058.mp4", "openreview_id": "tyPcIETPWM", "slideslive_id": 39028058, "venue": "nips2024", "title": "Conditional Outcome Equivalence: A Quantile Alternative to CATE", "status": "Poster", "keywords": "Heteregenous Treatment Effect;Conditional Average Treatment Effect;Conditional Quantile Treatment Effect;Quantile Regression", "tldr": "We introduce a new estimand, the conditional quantile comparator to compete with both the CATE and the CQTE.", "abstract": "The conditional quantile treatment effect (CQTE) can provide insight into the effect of a treatment beyond the conditional average treatment effect (CATE). This ability to provide information over multiple quantiles of the response makes the CQTE especially valuable in cases where the effect of a treatment is not well-modelled by a location shift, even conditionally on the covariates. Nevertheless, the estimation of the CQTE is challenging and often depends upon the smoothness of the individual quantiles as a function of the covariates rather than smoothness of the CQTE itself. 
This is in stark contrast to the CATE where it is possible to obtain high-quality estimates which have less dependency upon the smoothness of the nuisance parameters when the CATE itself is smooth. Moreover, relative smoothness of the CQTE lacks the interpretability of smoothness of the CATE making it less clear whether it is a reasonable assumption to make. We combine the desirable properties of the CATE and CQTE by considering a new estimand, the conditional quantile comparator (CQC). The CQC not only retains information about the whole treatment distribution, similar to the CQTE, but also having more natural examples of smoothness and is able to leverage simplicity in an auxiliary estimand. We provide finite sample bounds on the error of our estimator, demonstrating its ability to exploit simplicity. We validate our theory in numerical simulations which show that our method produces more accurate estimates than baselines. Finally, we apply our methodology to a study on the effect of employment incentives on earnings across different age groups. We see that our method is able to reveal heterogeneity of the effect across different quantiles.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/93302"} +{"video_file": "tz83Nyb71l_39027021.mp4", "openreview_id": "tz83Nyb71l", "slideslive_id": 39027021, "venue": "nips2024", "title": "YOLOv10: Real-Time End-to-End Object Detection", "status": "Poster", "keywords": "YOLO;object detection;computer vision", "tldr": "A YOLO model for object detection", "abstract": "Over the past years, YOLOs have emerged as the predominant paradigm in the field of real-time object detection owing to their effective balance between computational cost and detection performance. Researchers have explored the architectural designs, optimization objectives, data augmentation strategies, and others for YOLOs, achieving notable progress. However, the reliance on the non-maximum suppression (NMS) for post-processing hampers the end-to-end deployment of YOLOs and adversely impacts the inference latency. Besides, the design of various components in YOLOs lacks the comprehensive and thorough inspection, resulting in noticeable computational redundancy and limiting the model's capability. It renders the suboptimal efficiency, along with considerable potential for performance improvements. In this work, we aim to further advance the performance-efficiency boundary of YOLOs from both the post-processing and the model architecture. To this end, we first present the consistent dual assignments for NMS-free training of YOLOs, which brings the competitive performance and low inference latency simultaneously. Moreover, we introduce the holistic efficiency-accuracy driven model design strategy for YOLOs. We comprehensively optimize various components of YOLOs from both the efficiency and accuracy perspectives, which greatly reduces the computational overhead and enhances the capability. The outcome of our effort is a new generation of YOLO series for real-time end-to-end object detection, dubbed YOLOv10. Extensive experiments show that YOLOv10 achieves the state-of-the-art performance and efficiency across various model scales. For example, our YOLOv10-S is 1.8\n\u00d7\nfaster than RT-DETR-R18 under the similar AP on COCO, meanwhile enjoying 2.8\n\u00d7\nsmaller number of parameters and FLOPs. Compared with YOLOv9-C, YOLOv10-B has 46% less latency and 25% fewer parameters for the same performance. 
Code and models are available at https://github.com/THU-MIG/yolov10.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93301"} +{"video_file": "u1Z3HWz4VJ_39026072.mp4", "openreview_id": "u1Z3HWz4VJ", "slideslive_id": 39026072, "venue": "nips2024", "title": "RAMP: Boosting Adversarial Robustness Against Multiple $l_p$ Perturbations for Universal Robustness", "status": "Poster", "keywords": "Adversarial Robustness;Pre-training and Fine-tuning;Distribution Shifts", "tldr": "We design a logit pairing loss and connect natural training with adversarial training via gradient projection to improve the multi-norm robustness while maintaining good clean accuracy.", "abstract": "Most existing works focus on improving robustness against adversarial attacks bounded by a single\nl\np\nnorm using adversarial training (AT). However, these AT models' multiple-norm robustness (union accuracy) is still low, which is crucial since in the real-world an adversary is not necessarily bounded by a single norm. The tradeoffs among robustness against multiple\nl\np\nperturbations and accuracy/robustness make obtaining good union and clean accuracy challenging. We design a logit pairing loss to improve the union accuracy by analyzing the tradeoffs from the lens of distribution shifts. We connect natural training (NT) with AT via gradient projection, to incorporate useful information from NT into AT, where we empirically and theoretically show it moderates the accuracy/robustness tradeoff. We propose a novel training framework \\textbf{RAMP}, to boost the robustness against multiple\nl\np\nperturbations. \\textbf{RAMP} can be easily adapted for robust fine-tuning and full AT. For robust fine-tuning, \\textbf{RAMP} obtains a union accuracy up to\n53.3\non CIFAR-10, and\n29.1\non ImageNet. For training from scratch, \\textbf{RAMP} achieves a union accuracy of\n44.6\nand good clean accuracy of\n81.2\non ResNet-18 against AutoAttack on CIFAR-10. Beyond multi-norm robustness \\textbf{RAMP}-trained models achieve superior \\textit{universal robustness}, effectively generalizing against a range of unseen adversaries and natural corruptions.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93300"} +{"video_file": "HfSJlBRkKJ_39024428.mp4", "openreview_id": "HfSJlBRkKJ", "slideslive_id": 39024428, "venue": "nips2024", "title": "Blind Image Restoration via Fast Diffusion Inversion", "status": "Poster", "keywords": "blind image restoration;diffusion models;unsupervised learning", "tldr": "", "abstract": "Image Restoration (IR) methods based on a pre-trained diffusion model have demonstrated state-of-the-art performance. However, they have two fundamental limitations: 1) they often assume that the degradation operator is completely known and 2) they alter the diffusion sampling process, which may result in restored images that do not lie onto the data manifold. To address these issues, we propose Blind Image Restoration via fast Diffusion inversion (BIRD) a blind IR method that jointly optimizes for the degradation model parameters and the restored image. To ensure that the restored images lie onto the data manifold, we propose a novel sampling technique on a pre-trained diffusion model. A key idea in our method is not to modify the reverse sampling, i.e., not to alter all the intermediate latents, once an initial noise is sampled. 
This is ultimately equivalent to casting the IR task as an optimization problem in the space of the input noise. Moreover, to mitigate the computational cost associated with inverting a fully unrolled diffusion model, we leverage the inherent capability of these models to skip ahead in the forward diffusion process using large time steps. We experimentally validate BIRD on several image restoration tasks and show that it achieves state of the art performance.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/95812"} +{"video_file": "X6rqEpbnj3_39026111.mp4", "openreview_id": "X6rqEpbnj3", "slideslive_id": 39026111, "venue": "nips2024", "title": "Why Transformers Need Adam: A Hessian Perspective", "status": "Poster", "keywords": "Transformers;Adam;Optimization", "tldr": "", "abstract": "SGD performs worse than Adam by a significant margin on Transformers, but the reason remains unclear. In this work, we provide an explanation through the lens of Hessian: (i) Transformers are \"heterogeneous'': the Hessian spectrum across parameter blocks vary dramatically, a phenomenon we call \"block heterogeneity\"; (ii) Heterogeneity hampers SGD: SGD performs worse than Adam on problems with block heterogeneity. To validate (i) and (ii), we check various Transformers, CNNs, MLPs, and quadratic problems, and find that SGD can perform on par with Adam on problems without block heterogeneity, but performs worse than Adam when the heterogeneity exists. Our initial theoretical analysis indicates that SGD performs worse because it applies one single learning rate to all blocks, which cannot handle the heterogeneity among blocks. This limitation could be ameliorated if we use coordinate-wise learning rates, as designed in Adam.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/94790"} +{"video_file": "u3mZzd0Pdx_39025237.mp4", "openreview_id": "u3mZzd0Pdx", "slideslive_id": 39025237, "venue": "nips2024", "title": "Lower Bounds of Uniform Stability in Gradient-Based Bilevel Algorithms for Hyperparameter Optimization", "status": "Poster", "keywords": "Uniform stability;Lower bound;Hyperparameter optimization;Bilevel programming", "tldr": "We establish uniform stability lower bounds for representative gradient-based bilevel hyperparameter optimization algorithms.", "abstract": "Gradient-based bilevel programming leverages unrolling differentiation (UD) or implicit function theorem (IFT) to solve hyperparameter optimization (HO) problems, and is proven effective and scalable in practice. To understand their generalization behavior, existing works establish upper bounds on the uniform stability of these algorithms, while their tightness is still unclear. To this end, this paper attempts to establish stability lower bounds for UD-based and IFT-based algorithms. A central technical challenge arises from the dependency of each outer-level update on the concurrent stage of inner optimization in bilevel programming. To address this problem, we introduce lower-bounded expansion properties to characterize the instability in update rules which can serve as general tools for lower-bound analysis. These properties guarantee the hyperparameter divergence at the outer level and the Lipschitz constant of inner output at the inner level in the context of HO. Guided by these insights, we construct a quadratic example that yields tight lower bounds for the UD-based algorithm and meaningful bounds for a representative IFT-based algorithm. 
Our tight result indicates that uniform stability has reached its limit in stability analysis for the UD-based algorithm.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93297"} +{"video_file": "u6FuiKzT1K_39027332.mp4", "openreview_id": "u6FuiKzT1K", "slideslive_id": 39027332, "venue": "nips2024", "title": "Leveraging Contrastive Learning for Enhanced Node Representations in Tokenized Graph Transformers", "status": "Poster", "keywords": "Node classification;Graph Transformer;Positive Token Sequence;Negative Token Sequence;Contrastive Learning", "tldr": "This paper enhances node classification in graph Transformers by leveraging contrastive learning and a hybrid token generator to capture diverse graph information.", "abstract": "While tokenized graph Transformers have demonstrated strong performance in node classification tasks, their reliance on a limited subset of nodes with high similarity scores for constructing token sequences overlooks valuable information from other nodes, hindering their ability to fully harness graph information for learning optimal node representations. To address this limitation, we propose a novel graph Transformer called GCFormer. Unlike previous approaches, GCFormer develops a hybrid token generator to create two types of token sequences, positive and negative, to capture diverse graph information. And a tailored Transformer-based backbone is adopted to learn meaningful node representations from these generated token sequences. Additionally, GCFormer introduces contrastive learning to extract valuable information from both positive and negative token sequences, enhancing the quality of learned node representations. Extensive experimental results across various datasets, including homophily and heterophily graphs, demonstrate the superiority of GCFormer in node classification, when compared to representative graph neural networks (GNNs) and graph Transformers.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/93294"} +{"video_file": "u7JRmrGutT_39028835.mp4", "openreview_id": "u7JRmrGutT", "slideslive_id": 39028835, "venue": "nips2024", "title": "Graph Edit Distance with General Costs Using Neural Set Divergence", "status": "Poster", "keywords": "graph neural network;graph edit distance", "tldr": "Neural graph edit distance network with uniform and non-uniform costs", "abstract": "Graph Edit Distance (GED) measures the (dis-)similarity between two given graphs in terms of the minimum-cost edit sequence, which transforms one graph to the other. GED is related to other notions of graph similarity, such as graph and subgraph isomorphism, maximum common subgraph, etc. However, the computation of exact GED is NP-Hard, which has recently motivated the design of neural models for GED estimation. However, they do not explicitly account for edit operations with different costs. In response, we propose\nGraphEdX\n, a neural GED estimator that can work with general costs specified for the four edit operations, viz., edge deletion, edge addition, node deletion, and node addition. We first present GED as a quadratic assignment problem (QAP) that incorporates these four costs. Then, we represent each graph as a set of node and edge embeddings and use them to design a family of neural set divergence surrogates. We replace the QAP terms corresponding to each operation with their surrogates. Computing such neural set divergence requires aligning nodes and edges of the two graphs. 
We learn these alignments using a Gumbel-Sinkhorn permutation generator, additionally ensuring that the node and edge alignments are consistent with each other. Moreover, these alignments are cognizant of both the presence and absence of edges between node pairs. Through extensive experiments on several datasets, along with a variety of edit cost settings, we show that\nGraphEdX\nconsistently outperforms state-of-the-art methods and heuristics in terms of prediction error. The code is available at https://github.com/structlearning/GraphEdX.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/93292"} +{"video_file": "u9ShP64FJV_39026121.mp4", "openreview_id": "u9ShP64FJV", "slideslive_id": 39026121, "venue": "nips2024", "title": "Protecting Your LLMs with Information Bottleneck", "status": "Poster", "keywords": "Defense;Information Bottleneck;Jailbreaking;Large Language Models", "tldr": "Our protector efficiently defends against adversarial prompts without losing key information", "abstract": "The advent of large language models (LLMs) has revolutionized the field of natural language processing, yet they might be attacked to produce harmful content. Despite efforts to ethically align LLMs, these are often fragile and can be circumvented by jailbreaking attacks through optimized or manual adversarial prompts. To address this, we introduce the Information Bottleneck Protector (IBProtector), a defense mechanism grounded in the information bottleneck principle, and we modify the objective to avoid trivial solutions. The IBProtector selectively compresses and perturbs prompts, facilitated by a lightweight and trainable extractor, preserving only essential information for the target LLMs to respond with the expected answer. Moreover, we further consider a situation where the gradient is not visible to be compatible with any LLM. Our empirical evaluations show that IBProtector outperforms current defense methods in mitigating jailbreak attempts, without overly affecting response quality or inference speed. Its effectiveness and adaptability across various attack methods and target LLMs underscore the potential of IBProtector as a novel, transferable defense that bolsters the security of LLMs without requiring modifications to the underlying models.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93290"} +{"video_file": "uCZI8gSfD4_39028110.mp4", "openreview_id": "uCZI8gSfD4", "slideslive_id": 39028110, "venue": "nips2024", "title": "Training Compute-Optimal Protein Language Models", "status": "Spotlight", "keywords": "Protein Language Model;Scaling Law", "tldr": "We explore optimally training protein language models and propose the scaling law, an area of significant interest in biological research with limited guidance on best practices.", "abstract": "We explore optimally training protein language models, an area of significant interest in biological research where guidance on best practices is limited. Most models are trained with extensive compute resources until performance gains plateau, focusing primarily on increasing model sizes rather than optimizing the efficient compute frontier that balances performance and compute budgets. Our investigation is grounded in a massive dataset consisting of 939 million protein sequences. 
We trained over 300 models ranging from 3.5 million to 10.7 billion parameters on 5 to 200 billion unique tokens, to investigate the relations between model sizes, training token numbers, and objectives. First, we observed the effect of diminishing returns for the Causal Language Model (CLM) and that of overfitting for Masked Language Model (MLM) when repeating the commonly used Uniref database. To address this, we included metagenomic protein sequences in the training set to increase the diversity and avoid the plateau or overfitting effects. Second, we obtained the scaling laws of CLM and MLM on Transformer, tailored to the specific characteristics of protein sequence data. Third, we observe a transfer scaling phenomenon from CLM to MLM, further demonstrating the effectiveness of transfer through scaling behaviors based on estimated Effectively Transferred Tokens. Finally, to validate our scaling laws, we compare the large-scale versions of ESM-2 and PROGEN2 on downstream tasks, encompassing evaluations of protein generation as well as structure- and function-related tasks, all within less or equivalent pre-training compute budgets.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/93287"} +{"video_file": "uCvdw0IOuU_39027149.mp4", "openreview_id": "uCvdw0IOuU", "slideslive_id": 39027149, "venue": "nips2024", "title": "Addressing Asynchronicity in Clinical Multimodal Fusion via Individualized Chest X-ray Generation", "status": "Poster", "keywords": "Multi-modal clinical data;latent diffusion model;chest X-ray image;eletronic health records.", "tldr": "We propose DDL-CXR to address the asynchronicity in clinical multimodal fusion by generating individualized up-to-date CXR images. Cross-modal interactions are captured by the generation process, leading to improved prediction performance.", "abstract": "Integrating multi-modal clinical data, such as electronic health records (EHR) and chest X-ray images (CXR), is particularly beneficial for clinical prediction tasks. However, in a temporal setting, multi-modal data are often inherently asynchronous. EHR can be continuously collected but CXR is generally taken with a much longer interval due to its high cost and radiation dose. When clinical prediction is needed, the last available CXR image might have been outdated, leading to suboptimal predictions. To address this challenge, we propose DDL-CXR, a method that dynamically generates an up-to-date latent representation of the individualized CXR images. Our approach leverages latent diffusion models for patient-specific generation strategically conditioned on a previous CXR image and EHR time series, providing information regarding anatomical structures and disease progressions, respectively. In this way, the interaction across modalities could be better captured by the latent CXR generation process, ultimately improving the prediction performance. 
Experiments using MIMIC datasets show that the proposed model could effectively address asynchronicity in multimodal fusion and consistently outperform existing methods.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/93285"} +{"video_file": "uDD44NROOt_39024783.mp4", "openreview_id": "uDD44NROOt", "slideslive_id": 39024783, "venue": "nips2024", "title": "SPRINQL: Sub-optimal Demonstrations driven Offline Imitation Learning", "status": "Poster", "keywords": "imitation learning;offline imitation learning;reference reward;supplementary data;ranked dataset", "tldr": "We develop a novel inverse soft-Q learning for offline imitation learning with expert and non-expert demonstrations.", "abstract": "We focus on offline imitation learning (IL), which aims to mimic an expert's behavior using demonstrations without any interaction with the environment. One of the main challenges in offline IL is the limited support of expert demonstrations, which typically cover only a small fraction of the state-action space. While it may not be feasible to obtain numerous expert demonstrations, it is often possible to gather a larger set of sub-optimal demonstrations. For example, in treatment optimization problems, there are varying levels of doctor treatments available for different chronic conditions. These range from treatment specialists and experienced general practitioners to less experienced general practitioners. Similarly, when robots are trained to imitate humans in routine tasks, they might learn from individuals with different levels of expertise and efficiency.\nIn this paper, we propose an offline IL approach that leverages the larger set of sub-optimal demonstrations while effectively mimicking expert trajectories. Existing offline IL methods based on behavior cloning or distribution matching often face issues such as overfitting to the limited set of expert demonstrations or inadvertently imitating sub-optimal trajectories from the larger dataset. Our approach, which is based on inverse soft-Q learning, learns from both expert and sub-optimal demonstrations. It assigns higher importance (through learned weights) to aligning with expert demonstrations and lower importance to aligning with sub-optimal ones. A key contribution of our approach, called SPRINQL, is transforming the offline IL problem into a convex optimization over the space of Q functions. Through comprehensive experimental evaluations, we demonstrate that the SPRINQL algorithm achieves state-of-the-art (SOTA) performance on offline IL benchmarks. Code is available at https://github.com/hmhuy0/SPRINQL .", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93284"} +{"video_file": "uDxhMgjVJB_39024504.mp4", "openreview_id": "uDxhMgjVJB", "slideslive_id": 39024504, "venue": "nips2024", "title": "Automatic Outlier Rectification via Optimal Transport", "status": "Poster", "keywords": "Outlier Rectification; Optimal Transport; Statistically Robust", "tldr": "This paper introduces a novel framework for outlier detection using optimal transport with a concave cost function, integrating outlier rectification and estimation into a single optimization process.", "abstract": "In this paper, we propose a novel conceptual framework to detect outliers using optimal transport with a concave cost function. 
Conventional outlier detection approaches typically use a two-stage procedure: first, outliers are detected and removed, and then estimation is performed on the cleaned data. However, this approach does not inform outlier removal with the estimation task, leaving room for improvement. To address this limitation, we propose an automatic outlier rectification mechanism that integrates rectification and estimation within a joint optimization framework. We take the first step to utilize the optimal transport distance with a concave cost function to construct a rectification set in the space of probability distributions. Then, we select the best distribution within the rectification set to perform the estimation task. Notably, the concave cost function we introduced in this paper is the key to making our estimator effectively identify the outlier during the optimization process. We demonstrate the effectiveness of our approach over conventional approaches in simulations and empirical analyses for mean estimation, least absolute regression, and the fitting of option implied volatility surfaces.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/93283"} +{"video_file": "uFXGsiYkkX_39026300.mp4", "openreview_id": "uFXGsiYkkX", "slideslive_id": 39026300, "venue": "nips2024", "title": "BAKU: An Efficient Transformer for Multi-Task Policy Learning", "status": "Poster", "keywords": "Robot learning;Imitation Learning;Multitask Learning", "tldr": "We present BAKU, a simple architecture for multi-task policy learning that provides highly efficient training, particularly in data-scarce problems such as robotics.", "abstract": "Training generalist agents capable of solving diverse tasks is challenging, often requiring large datasets of expert demonstrations. This is particularly problematic in robotics, where each data point requires physical execution of actions in the real world. Thus, there is a pressing need for architectures that can effectively leverage the available training data. In this work, we present BAKU, a simple transformer architecture that enables efficient learning of multi-task robot policies. BAKU builds upon recent advancements in offline imitation learning and meticulously combines observation trunks, action chunking, multi-sensory observations, and action heads to substantially improve upon prior work. Our experiments on 129 simulated tasks across LIBERO, Meta-World suite, and the Deepmind Control suite exhibit an overall 18% absolute improvement over RT-1 and MT-ACT, with a 36% improvement on the harder LIBERO benchmark. On 30 real-world manipulation tasks, given an average of just 17 demonstrations per task, BAKU achieves a 91% success rate. Videos of the robot are best viewed at baku-robot.github.io.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/93282"} +{"video_file": "uHml6eyoVF_39027824.mp4", "openreview_id": "uHml6eyoVF", "slideslive_id": 39027824, "venue": "nips2024", "title": "Learning from higher-order correlations, efficiently: hypothesis tests, random features, and neural networks", "status": "Poster", "keywords": "higher-order cumulant;hypothesis test;neural network;random features;low-degree method", "tldr": "We analyse the statistical-to-computational gap in learning from higher-order data correlations and show that neural networks learn these correlations more efficiently than kernel methods.", "abstract": "Neural networks excel at discovering statistical patterns in high-dimensional data sets. 
In practice, higher-order cumulants, which quantify the non-Gaussian correlations between three or more variables, are particularly important for the performance of neural networks. But how efficient are neural networks at extracting features from higher-order cumulants? We study this question in the spiked cumulant model, where the statistician needs to recover a privileged direction or \"spike'' from the order-$p \u2265 4$ cumulants of $d$-dimensional inputs. We first discuss the fundamental statistical and computational limits of recovering the spike by analysing the number of samples $n$ required to strongly distinguish between inputs from the spiked cumulant model and isotropic Gaussian inputs. Existing literature established the presence of a wide statistical-to-computational gap in this problem. We deepen this line of work by finding an exact formula for the likelihood ratio norm which proves that statistical distinguishability requires $n \u2273 d$ samples, while distinguishing the two distributions in polynomial time requires $n \u2273 d^2$ samples for a wide class of algorithms, i.e. those covered by the low-degree conjecture. Numerical experiments show that neural networks do indeed learn to distinguish the two distributions with quadratic sample complexity, while ``lazy'' methods like random features are not better than random guessing in this regime. Our results show that neural networks extract information from higher-order correlations in the spiked cumulant model efficiently, and reveal a large gap in the amount of data required by neural networks and random features to learn from higher-order cumulants.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93280"} +{"video_file": "uM3rQ14iex_39026308.mp4", "openreview_id": "uM3rQ14iex", "slideslive_id": 39026308, "venue": "nips2024", "title": "Partial Structure Discovery is Sufficient for No-regret Learning in Causal Bandits", "status": "Poster", "keywords": "Causal Bandits;No-regret Learning;Causal Discovery", "tldr": "We propose a two-phase algorithm for causal bandits with unknown causal graphs: Phase one learns the subgraph on reward's ancestors to identify all possibly optimal arms, followed by a standard bandit algorithm with analysis of the cumulative regret.", "abstract": "Causal knowledge about the relationships among decision variables and a reward variable in a bandit setting can accelerate the learning of an optimal decision. Current works often assume the causal graph is known, which may not always be available a priori. Motivated by this challenge, we focus on the causal bandit problem in scenarios where the underlying causal graph is unknown and may include latent confounders. While intervention on the parents of the reward node is optimal in the absence of latent confounders, this is not necessarily the case in general. Instead, one must consider a set of possibly optimal arms/interventions, each being a special subset of the ancestors of the reward node, making causal discovery beyond the parents of the reward node essential. For regret minimization, we identify that discovering the full causal structure is unnecessary; however, no existing work provides the necessary and sufficient components of the causal graph. We formally characterize the set of necessary and sufficient latent confounders one needs to detect or learn to ensure that all possibly optimal arms are identified correctly.
We also propose a randomized algorithm for learning the causal graph with a limited number of samples, providing a sample complexity guarantee for any desired confidence level. In the causal bandit setup, we propose a two-stage approach. In the first stage, we learn the induced subgraph on ancestors of the reward, along with a necessary and sufficient subset of latent confounders, to construct the set of possibly optimal arms. We show that for our proposed algorithm, the number of intervention samples required to learn the set of possibly optimal arms scales polynomially with respect to the number of nodes. The second phase involves the application of a standard bandit algorithm, such as the UCB algorithm. We also establish a regret bound for our two-phase approach, which is sublinear in the number of rounds.", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/93277"} +{"video_file": "uNKlTQ8mBD_39027856.mp4", "openreview_id": "uNKlTQ8mBD", "slideslive_id": 39027856, "venue": "nips2024", "title": "Learning Formal Mathematics From Intrinsic Motivation", "status": "Oral", "keywords": "reasoning;reinforcement learning;formal mathematics;logic", "tldr": "We jointly learn to prove formal mathematical theorems and to propose harder provable conjectures in a self-improving loop", "abstract": "How did humanity coax mathematics from the aether? We explore the Platonic view that mathematics can be discovered from its axioms---a game of conjecture and proof. We describe an agent that jointly learns to pose challenging problems for itself (conjecturing) and solve them (theorem proving). Given a mathematical domain axiomatized in dependent type theory, we first combine methods for constrained decoding and type-directed synthesis to sample valid conjectures from a language model. Our method guarantees well-formed conjectures by construction, even as we start with a randomly initialized model. We use the same model to represent a policy and value function for guiding proof search. Our agent targets generating hard but provable conjectures --- a moving target, since its own theorem proving ability also improves as it trains. We propose novel methods for hindsight relabeling on proof search trees to significantly improve the agent's sample efficiency in both tasks. Experiments on 3 axiomatic domains (propositional logic, arithmetic and group theory) demonstrate that our agent can bootstrap from only the axioms, self-improving in generating true and challenging conjectures and in finding proofs.", "primary_area": "machine_learning_for_other_sciences_and_fields", "site": "https://neurips.cc/virtual/2024/poster/93276"} +{"video_file": "uO53206oLJ_39025405.mp4", "openreview_id": "uO53206oLJ", "slideslive_id": 39025405, "venue": "nips2024", "title": "Nonconvex Federated Learning on Compact Smooth Submanifolds With Heterogeneous Data", "status": "Poster", "keywords": "Federated learning;manifold optimization;heterogeneous data", "tldr": "This paper proposes a computation- and communication-efficient algorithm for federated learning over manifolds with heterogeneous data.", "abstract": "Many machine learning tasks, such as principal component analysis and low-rank matrix completion, give rise to manifold optimization problems. Although there is a large body of work studying the design and analysis of algorithms for manifold optimization in the centralized setting, there are currently very few works addressing the federated setting. 
In this paper, we consider nonconvex federated learning over a compact smooth submanifold in the setting of heterogeneous client data. We propose an algorithm that leverages stochastic Riemannian gradients and a manifold projection operator to improve computational efficiency, uses local updates to improve communication efficiency, and avoids client drift. Theoretically, we show that our proposed algorithm converges sub-linearly to a neighborhood of a first-order optimal solution by using a novel analysis that jointly exploits the manifold structure and properties of the loss functions. Numerical experiments demonstrate that our algorithm has significantly smaller computational and communication overhead than existing methods.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93275"} +{"video_file": "uRnTYPkF3V_39027698.mp4", "openreview_id": "uRnTYPkF3V", "slideslive_id": 39027698, "venue": "nips2024", "title": "Sequential Probability Assignment with Contexts: Minimax Regret, Contextual Shtarkov Sums, and Contextual Normalized Maximum Likelihood", "status": "Poster", "keywords": "online learning;log loss;probabilistic forecasting", "tldr": "We define a notion of complexity for a class of probability kernels and show it is the minimax regret of sequential probability assignment, and allows us to describe the minimax algorithm.", "abstract": "We study the fundamental problem of sequential probability assignment, also known as online learning with logarithmic loss, with respect to an arbitrary, possibly nonparametric hypothesis class. Our goal is to obtain a complexity measure for the hypothesis class that characterizes the minimax regret and to determine a general, minimax optimal algorithm. Notably, the sequential $\u2113_\u221e$ entropy, extensively studied in the literature (Rakhlin and Sridharan, 2015, Bilodeau et al., 2020, Wu et al., 2023), was shown to not characterize minimax regret in general. Inspired by the seminal work of Shtarkov (1987) and Rakhlin, Sridharan, and Tewari (2010), we introduce a novel complexity measure, the \emph{contextual Shtarkov sum}, corresponding to the Shtarkov sum after projection onto a multiary context tree, and show that the worst case log contextual Shtarkov sum equals the minimax regret. Using the contextual Shtarkov sum, we derive the minimax optimal strategy, dubbed \emph{contextual Normalized Maximum Likelihood} (cNML). Our results hold for sequential experts, beyond binary labels, which are settings rarely considered in prior work. To illustrate the utility of this characterization, we provide a short proof of a new regret upper bound in terms of sequential $\u2113_\u221e$ entropy, unifying and sharpening state-of-the-art bounds by Bilodeau et al. (2020) and Wu et al. (2023).", "primary_area": "online_learning", "site": "https://neurips.cc/virtual/2024/poster/93273"} +{"video_file": "uatPOPWzzU_39024700.mp4", "openreview_id": "uatPOPWzzU", "slideslive_id": 39024700, "venue": "nips2024", "title": "Unifying Homophily and Heterophily for Spectral Graph Neural Networks via Triple Filter Ensembles", "status": "Poster", "keywords": "graph neural networks; filter ensemble; homophily and heterophily", "tldr": "Unifying Homophily and Heterophily for Spectral Graph Neural Networks via Triple Filter Ensembles", "abstract": "Polynomial-based learnable spectral graph neural networks (GNNs) utilize polynomial to approximate graph convolutions and have achieved impressive performance on graphs.
Nevertheless, there are three progressive problems to be solved. Some models use polynomials with better approximation for approximating filters, yet perform worse on real-world graphs. Carefully crafted graph learning methods, sophisticated polynomial approximations, and refined coefficient constraints leaded to overfitting, which diminishes the generalization of the models. How to design a model that retains the ability of polynomial-based spectral GNNs to approximate filters while it possesses higher generalization and performance? In this paper, we propose a spectral GNN with triple filter ensemble (TFE-GNN), which extracts homophily and heterophily from graphs with different levels of homophily adaptively while utilizing the initial features. Specifically, the first and second ensembles are combinations of a set of base low-pass and high-pass filters, respectively, after which the third ensemble combines them with two learnable coefficients and yield a graph convolution (TFE-Conv). Theoretical analysis shows that the approximation ability of TFE-GNN is consistent with that of ChebNet under certain conditions, namely it can learn arbitrary filters. TFE-GNN can be viewed as a reasonable combination of two unfolded and integrated excellent spectral GNNs, which motivates it to perform well. Experiments show that TFE-GNN achieves high generalization and new state-of-the-art performance on various real-world datasets.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/93265"} +{"video_file": "ud0RBkdBfE_39026222.mp4", "openreview_id": "ud0RBkdBfE", "slideslive_id": 39026222, "venue": "nips2024", "title": "Convergence Analysis of Split Federated Learning on Heterogeneous Data", "status": "Poster", "keywords": "split federated learning;distributed learning;convergence analysis;machine learning", "tldr": "Convergence analysis of split federated learning", "abstract": "Split federated learning (SFL) is a recent distributed approach for collaborative model training among multiple clients. In SFL, a global model is typically split into two parts, where clients train one part in a parallel federated manner, and a main server trains the other. Despite the recent research on SFL algorithm development, the convergence analysis of SFL is missing in the literature, and this paper aims to fill this gap. The analysis of SFL can be more challenging than that of federated learning (FL), due to the potential dual-paced updates at the clients and the main server. We provide convergence analysis of SFL for strongly convex and general convex objectives on heterogeneous data. The convergence rates are $O(1/T)$ and $O(1/T^{1/3})$, respectively, where $T$ denotes the total number of rounds for SFL training. We further extend the analysis to non-convex objectives and where some clients may be unavailable during training.
Numerical experiments validate our theoretical results and show that SFL outperforms FL and split learning (SL) when data is highly heterogeneous across a large number of clients.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93262"} +{"video_file": "udTwwF7tks_39024419.mp4", "openreview_id": "udTwwF7tks", "slideslive_id": 39024419, "venue": "nips2024", "title": "Iteratively Refined Early Interaction Alignment for Subgraph Matching based Graph Retrieval", "status": "Poster", "keywords": "Graph Neural Networks;Graph Retrieval", "tldr": "Improved neural interaction modelling of graph pairs for subgraph matching based retrieval.", "abstract": "Graph retrieval based on subgraph isomorphism has several real-world applications such as scene graph retrieval, molecular fingerprint detection and circuit design. Roy et al. [35] proposed IsoNet, a late interaction model for subgraph matching, which first computes the node and edge embeddings of each graph independently of paired graph and then computes a trainable alignment map. Here, we present\nIsoNet++\n, an early interaction graph neural network (GNN), based on several technical innovations. First, we compute embeddings of all nodes by passing messages within and across the two input graphs, guided by an injective alignment between their nodes. Second, we update this alignment in a lazy fashion over multiple rounds. Within each round, we run a layerwise GNN from scratch, based on the current state of the alignment. After the completion of one round of GNN, we use the last-layer embeddings to update the alignments, and proceed to the next round. Third,\nIsoNet++\nincorporates a novel notion of node-pair partner interaction. Traditional early interaction computes attention between a node and its potential partners in the other graph, the attention then controlling messages passed across graphs. We consider node pairs (not single nodes) as potential partners. Existence of an edge between the nodes in one graph and non-existence in the other provide vital signals for refining the alignment. Our experiments on several datasets show that the alignments get progressively refined with successive rounds, resulting in significantly better retrieval performance than existing methods. We demonstrate that all three innovations contribute to the enhanced accuracy. Our code and datasets are publicly available at https://github.com/structlearning/isonetpp.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/93261"} +{"video_file": "ufKBRvYxtp_39026569.mp4", "openreview_id": "ufKBRvYxtp", "slideslive_id": 39026569, "venue": "nips2024", "title": "Sample-Efficient Agnostic Boosting", "status": "Poster", "keywords": "boosting; sample complexity; learning theory; reinforcement learning", "tldr": "Improved sample complexity for agnostic boosting", "abstract": "The theory of boosting provides a computational framework for aggregating approximate weak learning algorithms, which perform marginally better than a random predictor, into an accurate strong learner. In the realizable case, the success of the boosting approach is underscored by a remarkable fact that the resultant sample complexity matches that of a computationally demanding alternative, namely Empirical Risk Minimization (ERM). 
This in particular implies that the realizable boosting methodology has the potential to offer computational relief without compromising on sample efficiency.\nDespite recent progress, in agnostic boosting, where assumptions on the conditional distribution of labels given feature descriptions are absent, ERM outstrips the agnostic boosting methodology in being quadratically more sample efficient than all known agnostic boosting algorithms. In this paper, we make progress on closing this gap, and give a substantially more sample efficient agnostic boosting algorithm than those known, without compromising on the computational (or oracle) complexity. A key feature of our algorithm is that it leverages the ability to reuse samples across multiple rounds of boosting, while guaranteeing a generalization error strictly better than those obtained by blackbox applications of uniform convergence arguments. We also apply our approach to other previously studied learning problems, including boosting for reinforcement learning, and demonstrate improved results.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93259"} +{"video_file": "ufPPf9ghzP_39025950.mp4", "openreview_id": "ufPPf9ghzP", "slideslive_id": 39025950, "venue": "nips2024", "title": "A Neural Network Approach for Efficiently Answering Most Probable Explanation Queries in Probabilistic Models", "status": "Spotlight", "keywords": "Most Probable Explanation;Probabilistic Graphical Models;Probabilistic Circuits;Neural Autoregressive Model", "tldr": "A novel neural-based method for efficiently answering arbitrary Most Probable Explanation (any-MPE) queries in large probabilistic models.", "abstract": "We propose a novel neural networks based approach to efficiently answer arbitrary Most Probable Explanation (MPE) queries\u2014a well-known NP-hard task\u2014in large probabilistic models such as Bayesian and Markov networks, probabilistic circuits, and neural auto-regressive models. By arbitrary MPE queries, we mean that there is no predefined partition of variables into evidence and non-evidence variables. The key idea is to distill all MPE queries over a given probabilistic model into a neural network and then use the latter for answering queries, eliminating the need for time-consuming inference algorithms that operate directly on the probabilistic model. We improve upon this idea by incorporating inference-time optimization with self-supervised loss to iteratively improve the solutions and employ a teacher-student framework that provides a better initial network, which in turn, helps reduce the number of inference-time optimization steps. The teacher network utilizes a self-supervised loss function optimized for getting the exact MPE solution, while the student network learns from the teacher's near-optimal outputs through supervised loss. 
We demonstrate the efficacy and scalability of our approach on various datasets and a broad class of probabilistic models, showcasing its practical effectiveness.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93258"} +{"video_file": "uikhNa4wam_39027278.mp4", "openreview_id": "uikhNa4wam", "slideslive_id": 39027278, "venue": "nips2024", "title": "FIFO-Diffusion: Generating Infinite Videos from Text without Training", "status": "Poster", "keywords": "generative models;diffusion;long video generation;tuning-free", "tldr": "Infinitely long video generation technique without training based on pretrained diffusion models", "abstract": "We propose a novel inference technique based on a pretrained diffusion model for text-conditional video generation. Our approach, called FIFO-Diffusion, is conceptually capable of generating infinitely long videos without additional training. This is achieved by iteratively performing diagonal denoising, which simultaneously processes a series of consecutive frames with increasing noise levels in a queue; our method dequeues a fully denoised frame at the head while enqueuing a new random noise frame at the tail. However, diagonal denoising is a double-edged sword as the frames near the tail can take advantage of cleaner frames by forward reference but such a strategy induces the discrepancy between training and inference. Hence, we introduce latent partitioning to reduce the training-inference gap and lookahead denoising to leverage the benefit of forward referencing. Practically, FIFO-Diffusion consumes a constant amount of memory regardless of the target video length given a baseline model, while well-suited for parallel inference on multiple GPUs. We have demonstrated the promising results and effectiveness of the proposed methods on existing text-to-video generation baselines. Generated video examples and source codes are available at our project page.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/93253"} +{"video_file": "ujk0XrNTQZ_39028376.mp4", "openreview_id": "ujk0XrNTQZ", "slideslive_id": 39028376, "venue": "nips2024", "title": "Drago: Primal-Dual Coupled Variance Reduction for Faster Distributionally Robust Optimization", "status": "Poster", "keywords": "Distributionally Robust Optimization;Stochastic Optimization;Convex Optimization;Saddle Point", "tldr": "A stochastic primal-dual algorithm for solving distributionally robust optimization problems. It achieves a state-of-the-art linear convergence rate and combines randomized and cyclic components.", "abstract": "We consider the penalized distributionally robust optimization (DRO) problem with a closed, convex uncertainty set, a setting that encompasses learning using $f$-DRO and spectral/$L$-risk minimization. We present Drago, a stochastic primal-dual algorithm which combines cyclic and randomized components with a carefully regularized primal update to achieve dual variance reduction. Owing to its design, Drago enjoys a state-of-the-art linear convergence rate on strongly convex-strongly concave DRO problems witha fine-grained dependency on primal and dual condition numbers.
The theoretical results are supported with numerical benchmarks on regression and classification tasks.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93250"} +{"video_file": "umukvCdGI6_39025146.mp4", "openreview_id": "umukvCdGI6", "slideslive_id": 39025146, "venue": "nips2024", "title": "DOFEN: Deep Oblivious Forest ENsemble", "status": "Poster", "keywords": "Tabular Data;Structured Data;Deep Neural Network;Architecture Design", "tldr": "DOFEN (Deep Oblivious Forest ENsemble): a novel deep neural network architecture for tabular data, achieving sota performance compared to deep nerual network baselines and comparable performance with tree-based models.", "abstract": "Deep Neural Networks (DNNs) have revolutionized artificial intelligence, achieving impressive results on diverse data types, including images, videos, and texts. However, DNNs still lag behind Gradient Boosting Decision Trees (GBDT) on tabular data, a format extensively utilized across various domains. This paper introduces DOFEN, which stands for Deep Oblivious Forest ENsemble. DOFEN is a novel DNN architecture inspired by oblivious decision trees and achieves on-off sparse selection of columns. DOFEN surpasses other DNNs on tabular data, achieving state-of-the-art performance on the well-recognized benchmark: Tabular Benchmark, which includes 73 total datasets spanning a wide array of domains. The code of DOFEN is available at: https://github.com/Sinopac-Digital-Technology-Division/DOFEN", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93248"} +{"video_file": "uoJQ9qadjY_39027122.mp4", "openreview_id": "uoJQ9qadjY", "slideslive_id": 39027122, "venue": "nips2024", "title": "Learning to Reason Iteratively and Parallelly for Complex Visual Reasoning Scenarios", "status": "Poster", "keywords": "iterative and parallel computation; complex visual reasoning and question answering; neural network based reasoning architectures", "tldr": "We introduce a fully neural reasoning mechanism comprising iterative & parallel computation to address complex image & video reasoning tasks such as AGQA, STAR, CLEVR-Humans and CLEVRER-Humans.", "abstract": "Complex visual reasoning and question answering (VQA) is a challenging task that requires compositional multi-step processing and higher-level reasoning capabilities beyond the immediate recognition and localization of objects and events. Here, we introduce a fully neural Iterative and Parallel Reasoning Mechanism (IPRM) that combines two distinct forms of computation -- iterative and parallel -- to better address complex VQA scenarios. Specifically, IPRM's \"iterative\" computation facilitates compositional step-by-step reasoning for scenarios wherein individual operations need to be computed, stored, and recalled dynamically (e.g. when computing the query \u201cdetermine the color of pen to the left of the child in red t-shirt sitting at the white table\u201d). Meanwhile, its \"parallel'' computation allows for the simultaneous exploration of different reasoning paths and benefits more robust and efficient execution of operations that are mutually independent (e.g. when counting individual colors for the query: \"determine the maximum occurring color amongst all t-shirts'\"). We design IPRM as a lightweight and fully-differentiable neural module that can be conveniently applied to both transformer and non-transformer vision-language backbones. 
It notably outperforms prior task-specific methods and transformer-based attention modules across various image and video VQA benchmarks testing distinct complex reasoning capabilities such as compositional spatiotemporal reasoning (AGQA), situational reasoning (STAR), multi-hop reasoning generalization (CLEVR-Humans) and causal event linking (CLEVRER-Humans). Further, IPRM's internal computations can be visualized across reasoning steps, aiding interpretability and diagnosis of its errors.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93246"} +{"video_file": "up4tWnwRol_39027613.mp4", "openreview_id": "up4tWnwRol", "slideslive_id": 39027613, "venue": "nips2024", "title": "The Fine-Grained Complexity of Gradient Computation for Training Large Language Models", "status": "Poster", "keywords": "Strong Exponential Time Hypothesis;Fine-grained Complexity;Polynomial methods;Gradient Complexity", "tldr": "A theoretical study of gradient computation, from both algorithm and hardness perspective", "abstract": "Large language models (LLMs) have made fundamental contributions over the last a few years. To train an LLM, one needs to alternatingly run `forward' computations and backward computations. The forward computation can be viewed as attention function evaluation, and the backward computation can be viewed as a gradient computation. In previous work by [Alman and Song, NeurIPS 2023], it was proved that the forward step can be performed in almost-linear time in certain parameter regimes, but that there is no truly sub-quadratic time algorithm in the remaining parameter regimes unless the popular hypothesis SETH is false. In this work, we show nearly identical results for the harder-seeming problem of computing the gradient of loss function of one layer attention network, and thus for the entire process of LLM training. This completely characterizes the fine-grained complexity of every step of LLM training.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93245"} +{"video_file": "uqWfLgZpV1_39026863.mp4", "openreview_id": "uqWfLgZpV1", "slideslive_id": 39026863, "venue": "nips2024", "title": "On the Necessity of Collaboration for Online Model Selection with Decentralized Data", "status": "Poster", "keywords": "online learning;model selection;federated learning;kernel methods", "tldr": "We clarify the unnecessary nature of collaboration in previous federated online model selection algorithms, and give conditions under which collaboration is necessary.", "abstract": "We consider online model selection with decentralized data over $M$ clients, and study the necessity of collaboration among clients. Previous work proposed various federated algorithms without demonstrating their necessity, while we answer the question from a novel perspective of computational constraints. We prove lower bounds on the regret, and propose a federated algorithm and analyze the upper bound. Our results show (i) collaboration is unnecessary in the absence of computational constraints on clients; (ii) collaboration is necessary if the computational cost on each client is limited to $o(K)$, where $K$ is the number of candidate hypothesis spaces. We clarify the unnecessary nature of collaboration in previous federated algorithms for distributed online multi-kernel learning, and improve the regret bounds at a smaller computational and communication cost.
Our algorithm relies on three new techniques including an improved Bernstein's inequality for martingale, a federated online mirror descent framework, and decoupling model selection and prediction, which might be of independent interest.", "primary_area": "online_learning", "site": "https://neurips.cc/virtual/2024/poster/93244"} +{"video_file": "uuQQwrjMzb_39028015.mp4", "openreview_id": "uuQQwrjMzb", "slideslive_id": 39028015, "venue": "nips2024", "title": "Adaptive Labeling for Efficient Out-of-distribution Model Evaluation", "status": "Poster", "keywords": "Model Evaluation;Uncertainty Quantification;Markov Decision Process;Policy Gradient;Auto-differentiation", "tldr": "Supervised data suffers severe selection bias when labels are expensive. We formulate a MDP over posterior beliefs on model performance and solve it with pathwise policy gradients computed through an auto-differentiable pipeline.", "abstract": "Datasets often suffer severe selection bias; clinical labels are only available on patients for whom doctors ordered medical exams. To assess model performance outside the support of available data, we present a computational framework for adaptive labeling, providing cost-efficient model evaluations under severe distribution shifts. We formulate the problem as a Markov Decision Process over states defined by posterior beliefs on model performance. Each batch of new labels incurs a \u201cstate transition\u201d to sharper beliefs, and we choose batches to minimize uncertainty on model performance at the end of the label collection process. Instead of relying on high-variance REINFORCE policy gradient estimators that do not scale, our adaptive labeling policy is optimized using path-wise policy gradients computed by auto-differentiating through simulated roll-outs. Our framework is agnostic to different uncertainty quantification approaches and highlights the virtue of planning in adaptive labeling. On synthetic and real datasets, we empirically demonstrate even a one-step lookahead policy substantially outperforms active learning-inspired heuristics.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/93241"} +{"video_file": "uyqjpycMbU_39026187.mp4", "openreview_id": "uyqjpycMbU", "slideslive_id": 39026187, "venue": "nips2024", "title": "Integrating Deep Metric Learning with Coreset for Active Learning in 3D Segmentation", "status": "Poster", "keywords": "Active Learning;Medical Imaging;Segmentation;Deep Metric Learning", "tldr": "We propose a novel metric learning framework for Coreset for active learning in 3D medical image segmentation.", "abstract": "Deep learning has seen remarkable advancements in machine learning, yet it often demands extensive annotated data. Tasks like 3D semantic segmentation impose a substantial annotation burden, especially in domains like medicine, where expert annotations drive up the cost. Active learning (AL) holds great potential to alleviate this annotation burden in 3D medical segmentation. The majority of existing AL methods, however, are not tailored to the medical domain. While weakly-supervised methods have been explored to reduce annotation burden, the fusion of AL with weak supervision remains unexplored, despite its potential to significantly reduce annotation costs. Additionally, there is little focus on slice-based AL for 3D segmentation, which can also significantly reduce costs in comparison to conventional volume-based AL. 
This paper introduces a novel metric learning method for Coreset to perform slice-based active learning in 3D medical segmentation. By merging contrastive learning with inherent data groupings in medical imaging, we learn a metric that emphasizes the relevant differences in samples for training 3D medical segmentation models. We perform comprehensive evaluations using both weak and full annotations across four datasets (medical and non-medical). Our findings demonstrate that our approach surpasses existing active learning techniques on both weak and full annotations and obtains superior performance with low-annotation budgets which is crucial in medical imaging. Source code for this project is available in the supplementary materials and on GitHub: https://github.com/arvindmvepa/al-seg.", "primary_area": "active_learning", "site": "https://neurips.cc/virtual/2024/poster/93237"} +{"video_file": "uzIWqRzjEP_39025215.mp4", "openreview_id": "uzIWqRzjEP", "slideslive_id": 39025215, "venue": "nips2024", "title": "Learning to Edit Visual Programs with Self-Supervision", "status": "Poster", "keywords": "visual program induction;program synthesis;visual programs;inverse graphics", "tldr": "For the task of visual program induction, we design a network that learns how to edit visual programs and is trained in a self-supervised bootstrapping paradigm.", "abstract": "We design a system that learns how to edit visual programs. Our edit network consumes a complete input program and a visual target. From this input, we task our network with predicting a local edit operation that could be applied to the input program to improve its similarity to the target. In order to apply this scheme for domains that lack program annotations, we develop a self-supervised learning approach that integrates this edit network into a bootstrapped finetuning loop along with a network that predicts entire programs in one-shot. Our joint finetuning scheme, when coupled with an inference procedure that initializes a population from the one-shot model and evolves members of this population with the edit network, helps to infer more accurate visual programs. Over multiple domains, we experimentally compare our method against the alternative of using only the one-shot model, and find that even under equal search-time budgets, our editing-based paradigm provides significant advantages.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93236"} +{"video_file": "v07KRLYxDX_39025573.mp4", "openreview_id": "v07KRLYxDX", "slideslive_id": 39025573, "venue": "nips2024", "title": "Achieving Domain-Independent Certified Robustness via Knowledge Continuity", "status": "Poster", "keywords": "Lipschitz continuity;robustness;certified robustness;adversarial robustness", "tldr": "We present knowledge continuity, a novel definition which aims to certify robustness of neural networks across continuous/discrete domains.", "abstract": "We present knowledge continuity, a novel definition inspired by Lipschitz continuity which aims to certify the robustness of neural networks across input domains (such as continuous and discrete domains in vision and language, respectively). Most existing approaches that seek to certify robustness, especially Lipschitz continuity, lie within the continuous domain with norm and distribution-dependent guarantees. In contrast, our proposed definition yields certification guarantees that depend only on the loss function and the intermediate learned metric spaces of the neural network. 
These bounds are independent of domain modality, norms, and distribution. We further demonstrate that the expressiveness of a model class is not at odds with its knowledge continuity. This implies that achieving robustness by maximizing knowledge continuity should not theoretically hinder inferential performance. Finally, to complement our theoretical results, we present several applications of knowledge continuity such as regularization, a certification algorithm, and show that knowledge continuity can be used to localize vulnerable components of a neural network.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93235"} +{"video_file": "v1BIm8wESL_39027750.mp4", "openreview_id": "v1BIm8wESL", "slideslive_id": 39027750, "venue": "nips2024", "title": "Skinned Motion Retargeting with Dense Geometric Interaction Perception", "status": "Spotlight", "keywords": "Neural Motion Processing;Motion Retargeting", "tldr": "Geometry-aware motion retargeting in a single stage with no contradiction introduced by geometry correction.", "abstract": "Capturing and maintaining geometric interactions among different body parts is crucial for successful motion retargeting in skinned characters. Existing approaches often overlook body geometries or add a geometry correction stage after skeletal motion retargeting. This results in conflicts between skeleton interaction and geometry correction, leading to issues such as jittery, interpenetration, and contact mismatches. To address these challenges, we introduce a new retargeting framework, MeshRet, which directly models the dense geometric interactions in motion retargeting. Initially, we establish dense mesh correspondences between characters using semantically consistent sensors (SCS), effective across diverse mesh topologies. Subsequently, we develop a novel spatio-temporal representation called the dense mesh interaction (DMI) field. This field, a collection of interacting SCS feature vectors, skillfully captures both contact and non-contact interactions between body geometries. By aligning the DMI field during retargeting, MeshRet not only preserves motion semantics but also prevents self-interpenetration and ensures contact preservation. Extensive experiments on the public Mixamo dataset and our newly-collected ScanRet dataset demonstrate that MeshRet achieves state-of-the-art performance. Code available at https://github.com/abcyzj/MeshRet.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93234"} +{"video_file": "v4dXL3LsGX_39025552.mp4", "openreview_id": "v4dXL3LsGX", "slideslive_id": 39025552, "venue": "nips2024", "title": "Learning to Cooperate with Humans using Generative Agents", "status": "Poster", "keywords": "multi-agent reinforcement learning;human-AI cooperation", "tldr": "We use generative model to sample partner agents to train a coordinator agent. These agents cooperate well with real human players.", "abstract": "Training agents that can coordinate zero-shot with humans is a key mission in multi-agent reinforcement learning (MARL). Current algorithms focus on training simulated human partner policies which are then used to train a Cooperator agent. The simulated human is produced either through behavior cloning over a dataset of human cooperation behavior, or by using MARL to create a population of simulated agents. 
However, these approaches often struggle to produce a Cooperator that can coordinate well with real humans, since the simulated humans fail to cover the diverse strategies and styles employed by people in the real world. We show \\emph{learning a generative model of human partners} can effectively address this issue. Our model learns a latent variable representation of the human that can be regarded as encoding the human's unique strategy, intention, experience, or style. This generative model can be flexibly trained from any (human or neural policy) agent interaction data. By sampling from the latent space, we can use the generative model to produce different partners to train Cooperator agents. We evaluate our method---Generative Agent Modeling for Multi-agent Adaptation (GAMMA)---on Overcooked, a challenging cooperative cooking game that has become a standard benchmark for zero-shot coordination. We conduct an evaluation with real human teammates, and the results show that GAMMA consistently improves performance, whether the generative model is trained on simulated populations or human datasets. Further, we propose a method for posterior sampling from the generative model that is biased towards the human data, enabling us to efficiently improve performance with only a small amount of expensive human interaction data.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93229"}
{"video_file": "v7vYVvmfru_39028483.mp4", "openreview_id": "v7vYVvmfru", "slideslive_id": 39028483, "venue": "nips2024", "title": "An Accelerated Algorithm for Stochastic Bilevel Optimization under Unbounded Smoothness", "status": "Poster", "keywords": "Bilevel Optimization;Acceleration;Unbounded Smoothness;Nonconvex Optimization", "tldr": "This paper introduces a new algorithm and analysis to accelerate bilevel optimization under unbounded smoothness.", "abstract": "This paper investigates a class of stochastic bilevel optimization problems where the upper-level function is nonconvex with potentially unbounded smoothness and the lower-level problem is strongly convex. These problems have significant applications in sequential data learning, such as text classification using recurrent neural networks. The unbounded smoothness is characterized by the smoothness constant of the upper-level function scaling linearly with the gradient norm, lacking a uniform upper bound. Existing state-of-the-art algorithms require $\\tilde{O}(\\epsilon^{-4})$ oracle calls of stochastic gradient or Hessian/Jacobian-vector product to find an $\\epsilon$-stationary point. However, it remains unclear if we can further improve the convergence rate when the assumptions for the function in the population level also hold for each random realization almost surely (e.g., Lipschitzness of each realization of the stochastic gradient). To address this issue, we propose a new Accelerated Bilevel Optimization algorithm named AccBO. The algorithm updates the upper-level variable by normalized stochastic gradient descent with recursive momentum and the lower-level variable by the stochastic Nesterov accelerated gradient descent algorithm with averaging. We prove that our algorithm achieves an oracle complexity of $\\tilde{O}(\\epsilon^{-3})$ to find an $\\epsilon$-stationary point, when the lower-level stochastic gradient has a small variance $O(\\epsilon)$.
Our proof relies on a novel lemma characterizing the dynamics of stochastic Nesterov accelerated gradient descent algorithm under distribution drift with high probability for the lower-level variable, which is of independent interest and also plays a crucial role in analyzing the hypergradient estimation error over time. Experimental results on various tasks confirm that our proposed algorithm achieves the predicted theoretical acceleration and significantly outperforms baselines in bilevel optimization.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93226"}
{"video_file": "v8RRFNbJ43_39024891.mp4", "openreview_id": "v8RRFNbJ43", "slideslive_id": 39024891, "venue": "nips2024", "title": "Measuring Dejavu Memorization Efficiently", "status": "Poster", "keywords": "memorization;privacy", "tldr": "The deja vu memorization test measures training data memorization, but is inefficient because it involves training another similar model. This work provides a simpler, more efficient way to carry out the test.", "abstract": "Recent research has shown that representation learning models may accidentally memorize their training data. For example, the d\u00e9j\u00e0 vu method shows that for certain representation learning models and training images, it is sometimes possible to correctly predict the foreground label given only the representation of the background \u2013 better than through dataset-level correlations. However, their measurement method requires training two models \u2013 one to estimate dataset-level correlations and the other to estimate memorization. This multiple model setup becomes infeasible for large open-source models. In this work, we propose alternative simple methods to estimate dataset-level correlations, and show that these can be used to approximate an off-the-shelf model\u2019s memorization ability without any retraining. This enables, for the first time, the measurement of memorization in pre-trained open-source image representation and vision-language models. Our results show that different ways of measuring memorization yield very similar aggregate results. We also find that open-source models typically have lower aggregate memorization than similar models trained on a subset of the data. The code is available both for vision (https://github.com/facebookresearch/DejaVuOSS) and vision language (https://github.com/facebookresearch/VLMDejaVu) models.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/93225"}
{"video_file": "v8X70gTodR_39026359.mp4", "openreview_id": "v8X70gTodR", "slideslive_id": 39026359, "venue": "nips2024", "title": "Analysing the Generalisation and Reliability of Steering Vectors", "status": "Poster", "keywords": "Interpretability;Causal Abstractions;Steering Vectors;Representation Engineering;Linear Representation Hypothesis;Contrastive Activation Addition", "tldr": "We evaluate steering vectors on over 100 datasets, finding that they work unreliably in-distribution and sometimes misgeneralise out-of-distribution.", "abstract": "Steering vectors (SVs) are a new approach to efficiently adjust language model behaviour at inference time by intervening on intermediate model activations. They have shown promise in terms of improving both capabilities and model alignment. However, the reliability and generalisation properties of this approach are unknown.
In this work, we rigorously investigate these properties, and show that steering vectors have substantial limitations both in- and out-of-distribution. In-distribution, steerability is highly variable across different inputs. Depending on the concept, spurious biases can substantially contribute to how effective steering is for each input, presenting a challenge for the widespread use of steering vectors. Out-of-distribution, while steering vectors often generalise well, for several concepts they are brittle to reasonable changes in the prompt, resulting in them failing to generalise well. Overall, our findings show that while steering can work well in the right circumstances, there remain many technical difficulties of applying steering vectors to guide models' behaviour at scale.", "primary_area": "interpretability_and_explainability", "site": "https://neurips.cc/virtual/2024/poster/93224"} +{"video_file": "vA4s3kN4QE_39026421.mp4", "openreview_id": "vA4s3kN4QE", "slideslive_id": 39026421, "venue": "nips2024", "title": "LG-VQ: Language-Guided Codebook Learning", "status": "Poster", "keywords": "Codebook Learing;VQ-GAN;Vector-quantized Image Modeling", "tldr": "We utilize pre-trained text semantics to guide the codebook to learn rich multi-modal knowledge to improve the performance of multi-modal downstream tasks", "abstract": "Vector quantization (VQ) is a key technique in high-resolution and high-fidelity image synthesis, which aims to learn a codebook to encode an image with a sequence of discrete codes and then generate an image in an auto-regression manner. Although existing methods have shown superior performance, most methods prefer to learn a single-modal codebook (\\emph{e.g.}, image), resulting in suboptimal performance when the codebook is applied to multi-modal downstream tasks (\\emph{e.g.}, text-to-image, image captioning) due to the existence of modal gaps. In this paper, we propose a novel language-guided codebook learning framework, called LG-VQ, which aims to learn a codebook that can be aligned with the text to improve the performance of multi-modal downstream tasks. Specifically, we first introduce pre-trained text semantics as prior knowledge, then design two novel alignment modules (\\emph{i.e.}, Semantic Alignment Module, and Relationship Alignment Module) to transfer such prior knowledge into codes for achieving codebook text alignment.\nIn particular, our LG-VQ method is model-agnostic, which can be easily integrated into existing VQ models. Experimental results show that our method achieves superior performance on reconstruction and various multi-modal downstream tasks.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93222"} +{"video_file": "vAOgaPvgYr_39025251.mp4", "openreview_id": "vAOgaPvgYr", "slideslive_id": 39025251, "venue": "nips2024", "title": "OccamLLM: Fast and Exact Language Model Arithmetic in a Single Step", "status": "Poster", "keywords": "LLM;Language Model;Arithmetic;OccamNet;Llama", "tldr": "We propose a framework that enables exact arithmetic in a single autoregressive step, providing faster, more secure, and more interpretable LLM systems with arithmetic capabilities.", "abstract": "Despite significant advancements in text generation and reasoning, Large Language Models (LLMs) still face challenges in accurately performing complex arithmetic operations. Language model systems often enable LLMs to generate code for arithmetic operations to achieve accurate calculations. 
However, this approach compromises speed and security, and fine-tuning risks the language model losing prior capabilities. We propose a framework that enables exact arithmetic in a single autoregressive step, providing faster, more secure, and more interpretable LLM systems with arithmetic capabilities. We use the hidden states of a LLM to control a symbolic architecture that performs arithmetic. Our implementation using Llama 3 with OccamNet as a symbolic model (OccamLlama) achieves 100% accuracy on single arithmetic operations (+, \u2212, \u00d7, \u00f7, sin, cos, log, exp), outperforming GPT 4o with and without a code interpreter. Furthermore, OccamLlama outperforms GPT 4o with and without a code interpreter on average across a range of mathematical problem solving benchmarks, demonstrating that OccamLLMs can excel in arithmetic tasks, even surpassing much larger models. Code is available at https://github.com/druidowm/OccamLLM.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93221"}
{"video_file": "vBGMbFgvsX_39026894.mp4", "openreview_id": "vBGMbFgvsX", "slideslive_id": 39026894, "venue": "nips2024", "title": "Going Beyond Heuristics by Imposing Policy Improvement as a Constraint", "status": "Poster", "keywords": "Deep reinforcement learning", "tldr": "We propose a modification to existing RL algorithms to improve the performance when trained with heuristic rewards.", "abstract": "In many reinforcement learning (RL) applications, incorporating heuristic rewards alongside the task reward is crucial for achieving desirable performance. Heuristics encode prior human knowledge about how a task should be done, providing valuable hints for RL algorithms. However, such hints may not be optimal, limiting the performance of learned policies. The currently established way of using heuristics is to modify the heuristic reward in a manner that ensures that the optimal policy learned with it remains the same as the optimal policy for the task reward (i.e., optimal policy invariance). However, these methods often fail in practical scenarios with limited training data. We found that while optimal policy invariance ensures convergence to the best policy based on task rewards, it doesn't guarantee better performance than policies trained with biased heuristics under a finite data regime, which is impractical. In this paper, we introduce a new principle tailored for finite data settings. Instead of enforcing optimal policy invariance, we train a policy that combines task and heuristic rewards and ensures it outperforms the heuristic-trained policy. As such, we prevent policies from merely exploiting heuristic rewards without improving the task reward. Our experiments on robotic locomotion, helicopter control, and manipulation tasks demonstrate that our method consistently outperforms the heuristic policy, regardless of the heuristic rewards' quality.
Code is available at https://github.com/Improbable-AI/hepo.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93220"}
{"video_file": "vBah12uVbD_39024438.mp4", "openreview_id": "vBah12uVbD", "slideslive_id": 39024438, "venue": "nips2024", "title": "Conformalized Credal Set Predictors", "status": "Poster", "keywords": "Conformal Prediction;Credal Sets;Imprecise Probabilities;Uncertainty Representation;Uncertainty Quantification", "tldr": "A novel conformal prediction method is introduced to construct credal sets that are able to represent both the aleatoric and epistemic uncertainty in a prediction.", "abstract": "Credal sets are sets of probability distributions that are considered as candidates for an imprecisely known ground-truth distribution. In machine learning, they have recently attracted attention as an appealing formalism for uncertainty representation, in particular, due to their ability to represent both the aleatoric and epistemic uncertainty in a prediction. However, the design of methods for learning credal set predictors remains a challenging problem. In this paper, we make use of conformal prediction for this purpose. More specifically, we propose a method for predicting credal sets in the classification task, given training data labeled by probability distributions. Since our method inherits the coverage guarantees of conformal prediction, our conformal credal sets are guaranteed to be valid with high probability (without any assumptions on model or distribution). We demonstrate the applicability of our method on ambiguous classification tasks for uncertainty quantification.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93218"}
{"video_file": "vBlzen37i0_39026034.mp4", "openreview_id": "vBlzen37i0", "slideslive_id": 39026034, "venue": "nips2024", "title": "Optimal deep learning of holomorphic operators between Banach spaces", "status": "Spotlight", "keywords": "Deep learning;operator learning;parametric PDEs;deep neural networks;generalization error;optimal algorithms", "tldr": "We show that deep learning with fully-connected deep neural networks is optimal for learning holomorphic operators", "abstract": "Operator learning problems arise in many key areas of scientific computing where Partial Differential Equations (PDEs) are used to model physical systems. In such scenarios, the operators map between Banach or Hilbert spaces. In this work, we tackle the problem of learning operators between Banach spaces, in contrast to the vast majority of past works considering only Hilbert spaces. We focus on learning holomorphic operators -- an important class of problems with many applications. We combine arbitrary approximate encoders and decoders with standard feedforward Deep Neural Network (DNN) architectures -- specifically, those with constant width exceeding the depth -- under standard $\\ell_2$-loss minimization. We first identify a family of DNNs such that the resulting Deep Learning (DL) procedure achieves optimal generalization bounds for such operators. For standard fully-connected architectures, we then show that there are uncountably many minimizers of the training problem that yield equivalent optimal performance. The DNN architectures we consider are `problem agnostic', with width and depth only depending on the amount of training data $m$ and not on regularity assumptions of the target operator.
Next, we show that DL is optimal for this problem: no recovery procedure can surpass these generalization bounds up to log terms. Finally, we present numerical results demonstrating the practical performance on challenging problems including the parametric diffusion, Navier-Stokes-Brinkman and Boussinesq PDEs.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93217"} +{"video_file": "vCOgjBIZuL_39026505.mp4", "openreview_id": "vCOgjBIZuL", "slideslive_id": 39026505, "venue": "nips2024", "title": "Direct3D: Scalable Image-to-3D Generation via 3D Latent Diffusion Transformer", "status": "Poster", "keywords": "3D Generation;Diffsion Model", "tldr": "We propse a novel approach for direct 3D shape generation from a single image, bypassing the need for multi-view reconstruction.", "abstract": "Generating high-quality 3D assets from text and images has long been challenging, primarily due to the absence of scalable 3D representations capable of capturing intricate geometry distributions. In this work, we introduce Direct3D, a native 3D generative model scalable to in-the-wild input images, without requiring a multi-view diffusion model or SDS optimization. Our approach comprises two primary components: a Direct 3D Variational Auto-Encoder (D3D-VAE) and a Direct 3D Diffusion Transformer (D3D-DiT). D3D-VAE efficiently encodes high-resolution 3D shapes into a compact and continuous latent triplane space. Notably, our method directly supervises the decoded geometry using a semi-continuous surface sampling strategy, diverging from previous methods relying on rendered images as supervision signals. D3D-DiT models the distribution of encoded 3D latents and is specifically designed to fuse positional information from the three feature maps of the triplane latent, enabling a native 3D generative model scalable to large-scale 3D datasets. Additionally, we introduce an innovative image-to-3D generation pipeline incorporating semantic and pixel-level image conditions, allowing the model to produce 3D shapes consistent with the provided conditional image input. Extensive experiments demonstrate the superiority of our large-scale pre-trained Direct3D over previous image-to-3D approaches, achieving significantly better generation quality and generalization ability, thus establishing a new state-of-the-art for 3D content creation. Project page: https://www.neural4d.com/research/direct3d.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93214"} +{"video_file": "vH7GcaDhAo_39024912.mp4", "openreview_id": "vH7GcaDhAo", "slideslive_id": 39024912, "venue": "nips2024", "title": "RSA: Resolving Scale Ambiguities in Monocular Depth Estimators through Language Descriptions", "status": "Poster", "keywords": "Monocular Depth Estimation;Vision-Language Model;Multimodal Learning", "tldr": "We use language descriptions to transform relative depth to metric depth.", "abstract": "We propose a method for metric-scale monocular depth estimation. Inferring depth from a single image is an ill-posed problem due to the loss of scale from perspective projection during the image formation process. Any scale chosen is a bias, typically stemming from training on a dataset; hence, existing works have instead opted to use relative (normalized, inverse) depth. Our goal is to recover metric-scaled depth maps through a linear transformation. 
The crux of our method lies in the observation that certain objects (e.g., cars, trees, street signs) are typically found or associated with certain types of scenes (e.g., outdoor). We explore whether language descriptions can be used to transform relative depth predictions to those in metric scale. Our method, RSA , takes as input a text caption describing objects present in an image and outputs the parameters of a linear transformation which can be applied globally to a relative depth map to yield metric-scaled depth predictions. We demonstrate our method on recent general-purpose monocular depth models on indoors (NYUv2, VOID) and outdoors (KITTI). When trained on multiple datasets, RSA can serve as a general alignment module in zero-shot settings. Our method improves over common practices in aligning relative to metric depth and results in predictions that are comparable to an upper bound of fitting relative depth to ground truth via a linear transformation. Code is available at: https://github.com/Adonis-galaxy/RSA.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93212"} +{"video_file": "vJMMdFfL0A_39026163.mp4", "openreview_id": "vJMMdFfL0A", "slideslive_id": 39026163, "venue": "nips2024", "title": "The Benefits of Balance: From Information Projections to Variance Reduction", "status": "Poster", "keywords": "regularized optimal transport;self-supervised learning;variance reduction;alternating projection", "tldr": "We present a data balancing approach to distribution estimation that provides theoretical interpretations of the various self-supervised training schemes.", "abstract": "Data balancing across multiple modalities and sources appears in various forms in foundation models in machine learning and AI, e.g., in CLIP and DINO. We show that data balancing across modalities and sources actually offers an unsuspected benefit: variance reduction. We present a non-asymptotic statistical bound that quantifies this variance reduction effect and relates it to the eigenvalue decay of Markov operators. Furthermore, we describe how various forms of data balancing in contrastive multimodal learning and self-supervised clustering can be better understood, and even improved upon, owing to our variance reduction viewpoint.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93207"} +{"video_file": "vMMzjCr5Zj_39026185.mp4", "openreview_id": "vMMzjCr5Zj", "slideslive_id": 39026185, "venue": "nips2024", "title": "Large Pre-trained time series models for cross-domain Time series analysis tasks", "status": "Poster", "keywords": "Time-series;Self-supervised Learning", "tldr": "Novel time-series segmentation and self-supervised training for building general pre-trained time-series models capable of time-series analysis across multiple domains.", "abstract": "Large pre-trained models have been vital in recent advancements in domains like language and vision, making model training for individual downstream tasks more efficient and provide superior performance. However, tackling time-series analysis tasks usually involves designing and training a separate model from scratch leveraging training data and domain expertise specific to the task. We tackle a significant challenge for pre-training a foundational time-series model from multi-domain time-series datasets: extracting semantically useful tokenized inputs to the model across heterogeneous time-series from different domains. 
We propose Large Pre-trained Time-series Models (LPTM) that introduces a novel method of adaptive segmentation that automatically identifies the optimal dataset-specific segmentation strategy during pre-training. This enables LPTM to perform similarly to or better than domain-specific state-of-the-art models when fine-tuned to different downstream time-series analysis tasks and under zero-shot settings. LPTM achieves superior forecasting and time-series classification results taking up to 40% less data and 50% less training time compared to state-of-the-art baselines.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/93205"}
{"video_file": "vP9qAzr2Gw_39026618.mp4", "openreview_id": "vP9qAzr2Gw", "slideslive_id": 39026618, "venue": "nips2024", "title": "Supra-Laplacian Encoding for Transformer on Dynamic Graphs", "status": "Poster", "keywords": "Dynamic graphs;Link prediction;Transformer;supra-Lapacian encoding", "tldr": "New spectral spatio-temporal encoding for fully connected Dynamic Graph Transformer in dynamic link prediction", "abstract": "Fully connected Graph Transformers (GT) have rapidly become prominent in the static graph community as an alternative to Message-Passing models, which suffer from a lack of expressivity, oversquashing, and under-reaching. However, in a dynamic context, by interconnecting all nodes at multiple snapshots with self-attention, GT lose both structural and temporal information. In this work, we introduce Supra-LAplacian encoding for spatio-temporal TransformErs (SLATE), a new spatio-temporal encoding to leverage the GT architecture while keeping spatio-temporal information. Specifically, we transform Discrete Time Dynamic Graphs into multi-layer graphs and take advantage of the spectral properties of their associated supra-Laplacian matrix. Our second contribution explicitly models nodes' pairwise relationships with a cross-attention mechanism, providing an accurate edge representation for dynamic link prediction. SLATE outperforms numerous state-of-the-art methods based on Message-Passing Graph Neural Networks combined with recurrent models (e.g., LSTM), and Dynamic Graph Transformers, on 9 datasets. Code is open-source and available at this link https://github.com/ykrmm/SLATE.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/93204"}
{"video_file": "vU1SiBb57j_39026325.mp4", "openreview_id": "vU1SiBb57j", "slideslive_id": 39026325, "venue": "nips2024", "title": "Learning Multimodal Behaviors from Scratch with Diffusion Policy Gradient", "status": "Poster", "keywords": "Reinforcement Learning;Diffusion Model;Multimodal Learning;Unsupervised Skill Discovery", "tldr": "Deep Diffusion Policy Gradient is a novel RL algorithm that successfully trains diffusion policies online, discovering and maintaining multimodal behaviors in complex environments.", "abstract": "Deep reinforcement learning (RL) algorithms typically parameterize the policy as a deep network that outputs either a deterministic action or a stochastic one modeled as a Gaussian distribution, hence restricting learning to a single behavioral mode. Meanwhile, diffusion models emerged as a powerful framework for multimodal learning. However, the use of diffusion policies in online RL is hindered by the intractability of policy likelihood approximation, as well as the greedy objective of RL methods that can easily skew the policy to a single mode.
This paper presents Deep Diffusion Policy Gradient (DDiffPG), a novel actor-critic algorithm that learns from scratch multimodal policies parameterized as diffusion models while discovering and maintaining versatile behaviors. DDiffPG explores and discovers multiple modes through off-the-shelf unsupervised clustering combined with novelty-based intrinsic motivation. DDiffPG forms a multimodal training batch and utilizes mode-specific Q-learning to mitigate the inherent greediness of the RL objective, ensuring the improvement of the diffusion policy across all modes. Our approach further allows the policy to be conditioned on mode-specific embeddings to explicitly control the learned modes. Empirical studies validate DDiffPG's capability to master multimodal behaviors in complex, high-dimensional continuous control tasks with sparse rewards, also showcasing proof-of-concept dynamic online replanning when navigating mazes with unseen obstacles. Our project page is available at https://supersglzc.github.io/projects/ddiffpg/.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93202"} +{"video_file": "vUrOuc6NR3_39027503.mp4", "openreview_id": "vUrOuc6NR3", "slideslive_id": 39027503, "venue": "nips2024", "title": "DynaMo: In-Domain Dynamics Pretraining for Visuo-Motor Control", "status": "Poster", "keywords": "Robot learning;representation learning;self-supervised learning", "tldr": "DynaMo, a new self-supervised method for pretraining visual encoders for downstream visuomotor control by explicitly modeling dynamics in the demonstration observations.", "abstract": "Imitation learning has proven to be a powerful tool for training complex visuo-motor policies. However, current methods often require hundreds to thousands of expert demonstrations to handle high-dimensional visual observations. A key reason for this poor data efficiency is that visual representations are predominantly either pretrained on out-of-domain data or trained directly through a behavior cloning objective. In this work, we present DynaMo, a new in-domain, self-supervised method for learning visual representations. Given a set of expert demonstrations, we jointly learn a latent inverse dynamics model and a forward dynamics model over a sequence of image embeddings, predicting the next frame in latent space, without augmentations, contrastive sampling, or access to ground truth actions. Importantly, DynaMo does not require any out-of-domain data such as Internet datasets or cross-embodied datasets. On a suite of six simulated and real environments, we show that representations learned with DynaMo significantly improve downstream imitation learning performance over prior self-supervised learning objectives, and pretrained representations. Gains from using DynaMo hold across policy classes such as Behavior Transformer, Diffusion Policy, MLP, and nearest neighbors. Finally, we ablate over key components of DynaMo and measure its impact on downstream policy performance. 
Robot videos are best viewed at https://dynamo-ssl.github.io.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/93200"} +{"video_file": "vWSll6M9pj_39026865.mp4", "openreview_id": "vWSll6M9pj", "slideslive_id": 39026865, "venue": "nips2024", "title": "Unified Speech Recognition: A Single Model for Auditory, Visual, and Audiovisual Inputs", "status": "Poster", "keywords": "Speech recognition;lipreading;self-supervised learning;semi-supervised learning", "tldr": "We propose a unified speech recognition method for ASR, VSR, and AVSR.", "abstract": "Research in auditory, visual, and audiovisual speech recognition (ASR, VSR, and AVSR, respectively) has traditionally been conducted independently. Even recent self-supervised studies addressing two or all three tasks simultaneously tend to yield separate models, leading to disjoint inference pipelines with increased memory requirements and redundancies. This paper proposes unified training strategies for these systems. We demonstrate that training a single model for all three tasks enhances VSR and AVSR performance, overcoming typical optimisation challenges when training from scratch. Moreover, we introduce a greedy pseudo-labelling approach to more effectively leverage unlabelled samples, addressing shortcomings in related self-supervised methods. Finally, we develop a self-supervised pre-training method within our framework, proving its effectiveness alongside our semi-supervised approach. Despite using a single model for all tasks, our unified approach achieves state-of-the-art performance on LRS3 for ASR, VSR, and AVSR compared to recent methods. Code will be made publicly available.", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/93199"} +{"video_file": "vYUx8j5KK2_39027176.mp4", "openreview_id": "vYUx8j5KK2", "slideslive_id": 39027176, "venue": "nips2024", "title": "Curriculum Fine-tuning of Vision Foundation Model for Medical Image Classification Under Label Noise", "status": "Poster", "keywords": "Learning with noisy label;medical image classification", "tldr": "We propose CUFIT, a robust fine-tuning method for vision foundation models under noisy label conditions, based on the advantages of linear probing and adapters.", "abstract": "Deep neural networks have demonstrated remarkable performance in various vision tasks, but their success heavily depends on the quality of the training data. Noisy labels are a critical issue in medical datasets and can significantly degrade model performance. Previous clean sample selection methods have not utilized the well pre-trained features of vision foundation models (VFMs) and assumed that training begins from scratch. In this paper, we propose CUFIT, a curriculum fine-tuning paradigm of VFMs for medical image classification under label noise. Our method is motivated by the fact that linear probing of VFMs is relatively unaffected by noisy samples, as it does not update the feature extractor of the VFM, thus robustly classifying the training samples. Subsequently, curriculum fine-tuning of two adapters is conducted, starting with clean sample selection from the linear probing phase. Our experimental results demonstrate that CUFIT outperforms previous methods across various medical image benchmarks. Specifically, our method surpasses previous baselines by 5.0%, 2.1%, 4.6%, and 5.8% at a 40% noise rate on the HAM10000, APTOS-2019, BloodMnist, and OrgancMnist datasets, respectively. 
Furthermore, we provide extensive analyses to demonstrate the impact of our method on noisy label detection. For instance, our method shows higher label precision and recall compared to previous approaches. Our work highlights the potential of leveraging VFMs in medical image classification under challenging conditions of noisy labels.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/93198"}
{"video_file": "vjsd8Bcipv_39026818.mp4", "openreview_id": "vjsd8Bcipv", "slideslive_id": 39026818, "venue": "nips2024", "title": "$\\epsilon$-Softmax: Approximating One-Hot Vectors for Mitigating Label Noise", "status": "Poster", "keywords": "Learning with Noisy Labels;Robust Loss Function;Excess Risk Bound", "tldr": "We propose a simple yet effective method for mitigating label noise, which can be implemented with just two lines of code.", "abstract": "Noisy labels pose a common challenge for training accurate deep neural networks. To mitigate label noise, prior studies have proposed various robust loss functions to achieve noise tolerance in the presence of label noise, particularly symmetric losses. However, they usually suffer from the underfitting issue due to the overly strict symmetric condition. In this work, we propose a simple yet effective approach for relaxing the symmetric condition, namely $\\epsilon$-softmax, which simply modifies the outputs of the softmax layer to approximate one-hot vectors with a controllable error $\\epsilon$. Essentially, $\\epsilon$-softmax not only acts as an alternative for the softmax layer, but also implicitly plays the crucial role in modifying the loss function. We prove theoretically that $\\epsilon$-softmax can achieve noise-tolerant learning with controllable excess risk bound for almost any loss function. Recognizing that $\\epsilon$-softmax-enhanced losses may slightly reduce fitting ability on clean datasets, we further incorporate them with one symmetric loss, thereby achieving a better trade-off between robustness and effective learning. Extensive experiments demonstrate the superiority of our method in mitigating synthetic and real-world label noise.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/93191"}
{"video_file": "vtRotUd539_39025370.mp4", "openreview_id": "vtRotUd539", "slideslive_id": 39025370, "venue": "nips2024", "title": "Average gradient outer product as a mechanism for deep neural collapse", "status": "Poster", "keywords": "Theory of deep learning;neural collapse;average gradient outer product;kernel methods;feature learning", "tldr": "We demonstrate that feature learning through the average gradient outer product is a setting for deep neural collapse.", "abstract": "Deep Neural Collapse (DNC) refers to the surprisingly rigid structure of the data representations in the final layers of Deep Neural Networks (DNNs). Though the phenomenon has been measured in a variety of settings, its emergence is typically explained via data-agnostic approaches, such as the unconstrained features model. In this work, we introduce a data-dependent setting where DNC forms due to feature learning through the average gradient outer product (AGOP). The AGOP is defined with respect to a learned predictor and is equal to the uncentered covariance matrix of its input-output gradients averaged over the training dataset.
Deep Recursive Feature Machines are a method that constructs a neural network by iteratively mapping the data with the AGOP and applying an untrained random feature map. We demonstrate theoretically and empirically that DNC occurs in Deep Recursive Feature Machines as a consequence of the projection with the AGOP matrix computed at each layer. We then provide evidence that this mechanism holds for neural networks more generally. We show that the right singular vectors and values of the weights can be responsible for the majority of within-class variability collapse for DNNs trained in the feature learning regime. As observed in recent work, this singular structure is highly correlated with that of the AGOP.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93185"} +{"video_file": "vwgWbCxeAQ_39028817.mp4", "openreview_id": "vwgWbCxeAQ", "slideslive_id": 39028817, "venue": "nips2024", "title": "Rethinking Misalignment in Vision-Language Model Adaptation from a Causal Perspective", "status": "Poster", "keywords": "causal;adaptation;foundational models", "tldr": "Guided by the theory of causation, we propose semantic decoupling and uncertainty modeling to conduct prompt tuning on CLIP for downstream tasks.", "abstract": "Foundational Vision-Language models such as CLIP have exhibited impressive generalization in downstream tasks. However, CLIP suffers from a two-level misalignment issue, i.e., task misalignment and data misalignment, when adapting to specific tasks. Soft prompt tuning has mitigated the task misalignment, yet the data misalignment remains a challenge. To analyze the impacts of the data misalignment, we revisit the pre-training and adaptation processes of CLIP and develop a structural causal model. We discover that while we expect to capture task-relevant information for downstream tasks accurately, the task-irrelevant knowledge impacts the prediction results and hampers the modeling of the true relationships between the images and the predicted classes. As task-irrelevant knowledge is unobservable, we leverage the front-door adjustment and propose Causality-Guided Semantic Decoupling and Classification (CDC) to mitigate the interference of task-irrelevant knowledge. Specifically, we decouple semantics contained in the data of downstream tasks and perform classification based on each semantic. Furthermore, we employ the Dempster-Shafer evidence theory to evaluate the uncertainty of each prediction generated by diverse semantics. Experiments conducted in multiple different settings have consistently demonstrated the effectiveness of CDC.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93182"} +{"video_file": "vymkuBMLlh_39026347.mp4", "openreview_id": "vymkuBMLlh", "slideslive_id": 39026347, "venue": "nips2024", "title": "Conditional Generative Models are Sufficient to Sample from Any Causal Effect Estimand", "status": "Poster", "keywords": "causal inference;causal graphs;deep generative models", "tldr": "We propose a conditional generative model based approach to sample from any identifiable interventional or conditional interventional distribution given an arbitrary causal graph containing latent confounders.", "abstract": "Causal inference from observational data plays critical role in many applications in trustworthy machine learning. 
While sound and complete algorithms exist to compute causal effects, many of them assume access to conditional likelihoods, which is difficult to estimate for high-dimensional (particularly image) data. Researchers have alleviated this issue by simulating causal relations with neural models. However, when we have high-dimensional variables in the causal graph along with some unobserved confounders, no existing work can effectively sample from the un/conditional interventional distributions. In this work, we show how to sample from any identifiable interventional distribution given an arbitrary causal graph through a sequence of push-forward computations of conditional generative models, such as diffusion models. Our proposed algorithm follows the recursive steps of the existing likelihood-based identification algorithms to train a set of feed-forward models, and connect them in a specific way to sample from the desired distribution. We conduct experiments on a Colored MNIST dataset having both the treatment ($X$) and the target variables ($Y$) as images and sample from $P(y|do(x))$. Our algorithm also enables us to conduct a causal analysis to evaluate spurious correlations among input features of generative models pre-trained on the CelebA dataset. Finally, we generate high-dimensional interventional samples from the MIMIC-CXR dataset involving text and image variables.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/93180"}
{"video_file": "w28i9oe9Xr_39024875.mp4", "openreview_id": "w28i9oe9Xr", "slideslive_id": 39024875, "venue": "nips2024", "title": "High Rank Path Development: an approach to learning the filtration of stochastic processes", "status": "Poster", "keywords": "adapted weak topology; stochastic process; synthetic time series generation; path development", "tldr": "This paper introduces the High Rank PCF Distance (HRPCFD) for metrizing extended weak convergence of stochastic processes, demonstrating its efficiency and effectiveness in numerical implementations such as conditional time series generation.", "abstract": "Since the weak convergence for stochastic processes does not account for the growth of information over time which is represented by the underlying filtration, a slightly erroneous stochastic model in weak topology may cause huge loss in multi-periods decision making problems. To address such discontinuities, Aldous introduced the extended weak convergence, which can fully characterise all essential properties, including the filtration, of stochastic processes; however, it was considered to be hard to find efficient numerical implementations. In this paper, we introduce a novel metric called High Rank PCF Distance (HRPCFD) for extended weak convergence based on the high rank path development method from rough path theory, which also defines the characteristic function for measure-valued processes. We then show that such HRPCFD admits many favourable analytic properties which allows us to design an efficient algorithm for training HRPCFD from data and construct the HRPCF-GAN by using HRPCFD as the discriminator for conditional time series generation.
Our numerical experiments on both hypothesis testing and generative modelling validate the out-performance of our approach compared with several state-of-the-art methods, highlighting its potential in broad applications of synthetic time series generation and in addressing classic financial and economic challenges, such as optimal stopping or utility maximisation problems. Code is available at https://github.com/DeepIntoStreams/High-Rank-PCF-GAN.git.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93179"} +{"video_file": "w2L3Ll1jbV_39026331.mp4", "openreview_id": "w2L3Ll1jbV", "slideslive_id": 39026331, "venue": "nips2024", "title": "Adversarially Robust Multi-task Representation Learning", "status": "Poster", "keywords": "Learning Theory;Multi-task and Transfer Learning;Adversarial Robustness", "tldr": "We give bounds for adversarially robust transfer learning.", "abstract": "We study adversarially robust transfer learning, wherein, given labeled data on multiple (source) tasks, the goal is to train a model with small robust error on a previously unseen (target) task. In particular, we consider a multi-task representation learning (MTRL) setting, i.e., we assume that the source and target tasks admit a simple (linear) predictor on top of a shared representation (e.g., the final hidden layer of a deep neural network). In this general setting, we provide rates on~the excess adversarial (transfer) risk for Lipschitz losses and smooth nonnegative losses. These rates show that learning a representation using adversarial training on diverse tasks helps protect against inference-time attacks in data-scarce environments. Additionally, we provide novel rates for the single-task setting.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93178"} +{"video_file": "w3JCTBRduf_39027376.mp4", "openreview_id": "w3JCTBRduf", "slideslive_id": 39027376, "venue": "nips2024", "title": "Optimization Can Learn Johnson Lindenstrauss Embeddings", "status": "Poster", "keywords": "Optimization;Non-Convex Optimization;Embeddings;Projections;Derandomization;Gradient Descent;Dimensionality Reduction", "tldr": "We give a novel derandomization of JL via optimization, that avoids all bad local minima in the non-convex landscape by a diffusion-like process where we move through the space of randomized solution samplers, sequentially reducing the variance.", "abstract": "Embeddings play a pivotal role across various disciplines, offering compact representations of complex data structures. Randomized methods like Johnson-Lindenstrauss (JL) provide state-of-the-art and essentially unimprovable theoretical guarantees for achieving such representations. These guarantees are worst-case and in particular, neither the analysis,\nnor the algorithm\n, takes into account any potential structural information of the data. The natural question is: must we randomize? Could we instead use an optimization-based approach, working directly with the data? A first answer is no: as we show, the distance-preserving objective of JL has a non-convex landscape over the space of projection matrices, with many bad stationary points. But this is not the final answer.\nWe present a novel method motivated by diffusion models, that circumvents this fundamental challenge: rather than performing optimization directly over the space of projection matrices, we use optimization over the larger space of\nrandom solution samplers\n, gradually reducing the variance of the sampler. 
We show that by moving through this larger space, our objective converges to a deterministic (zero variance) solution, avoiding bad stationary points.\nThis method can also be seen as an optimization-based derandomization approach, and is an idea and method that we believe can be applied to many other problems.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93177"} +{"video_file": "w50ICQC6QJ_39027349.mp4", "openreview_id": "w50ICQC6QJ", "slideslive_id": 39027349, "venue": "nips2024", "title": "Discovery of the Hidden World with Large Language Models", "status": "Poster", "keywords": "Causal Discovery;Large Language Models;Causal Representation Learning", "tldr": "A new framework leveraging large language models to extend the scope of causal discovery to unstructured data.", "abstract": "Revealing the underlying causal mechanisms in the real world is the key to the development of science. Despite the progress in the past decades, traditional causal discovery approaches (CDs) mainly rely on high-quality measured variables, usually given by human experts, to find causal relations. The lack of well-defined high-level variables in many real-world applications has already been a longstanding roadblock to a broader application of CDs. To this end, this paper presents Causal representatiOn AssistanT (COAT) that introduces large language models (LLMs) to bridge the gap. LLMs are trained on massive observations of the world and have demonstrated great capability in extracting key information from unstructured data. Therefore, it is natural to employ LLMs to assist with proposing useful high-level factors and crafting their measurements. Meanwhile, COAT also adopts CDs to find causal relations among the identified variables as well as to provide feedback to LLMs to iteratively refine the proposed factors. We show that LLMs and CDs are mutually beneficial and the constructed feedback provably also helps with the factor proposal. We construct and curate several synthetic and real-world benchmarks including analysis of human reviews and diagnosis of neuropathic and brain tumors, to comprehensively evaluate COAT. Extensive empirical results confirm the effectiveness and reliability of COAT with significant improvements.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/93175"} +{"video_file": "w67vRHZF13_39025292.mp4", "openreview_id": "w67vRHZF13", "slideslive_id": 39025292, "venue": "nips2024", "title": "Unified Generative and Discriminative Training for Multi-modal Large Language Models", "status": "Poster", "keywords": "vision-language;multi-modal understanding", "tldr": "We propose an structure-induced approach to unify generative and discriminative paradigms.", "abstract": "In recent times, Vision-Language Models (VLMs) have been trained under two predominant paradigms. Generative training has enabled Multimodal Large Language Models (MLLMs) to tackle various complex tasks, yet issues such as hallucinations and weak object discrimination persist. Discriminative training, exemplified by models like CLIP, excels in zero-shot image-text classification and retrieval, yet struggles with complex scenarios requiring fine-grained semantic differentiation. This paper addresses these challenges by proposing a unified approach that integrates the strengths of both paradigms. 
Considering interleaved image-text sequences as the general format of input samples, we introduce a structure-induced training strategy that imposes semantic relationships between input samples and the MLLM\u2019s hidden state. This approach enhances the MLLM\u2019s ability to capture global semantics and distinguish fine-grained semantics. By leveraging dynamic sequence alignment within the Dynamic Time Warping framework and integrating a novel kernel for fine-grained semantic differentiation, our method effectively balances generative and discriminative tasks. Extensive experiments demonstrate the effectiveness of our approach, achieving state-of-the-art results in multiple generative tasks, especially those requiring cognitive and discrimination abilities. Additionally, our method surpasses discriminative benchmarks in interleaved and fine-grained retrieval tasks. By employing a retrieval-augmented generation strategy, our approach further enhances performance in some generative tasks within one model, offering a promising direction for future research in vision-language modeling.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93174"} +{"video_file": "w6vbfSC1y0_39025969.mp4", "openreview_id": "w6vbfSC1y0", "slideslive_id": 39025969, "venue": "nips2024", "title": "Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection", "status": "Poster", "keywords": "out-of-distribution detection;vision-language model;prompt-tuning", "tldr": "We propose a framework, named SCT, to mitigate the problem of spurious OOD features mined from ID data in prompt-tuning based OOD detection methods.", "abstract": "Out-of-distribution (OOD) detection is crucial for deploying reliable machine learning models in open-world applications. Recent advances in CLIP-based OOD detection have shown promising results via regularizing prompt tuning with OOD features extracted from ID data. However, the irrelevant context mined from ID data can be spurious due to the inaccurate foreground-background decomposition, thus limiting the OOD detection performance. In this work, we propose a novel framework, namely, \\textit{Self-Calibrated Tuning (SCT)}, to mitigate this problem for effective OOD detection with only the given few-shot ID data. Specifically, SCT introduces modulating factors respectively on the two components of the original learning objective. It adaptively directs the optimization process between the two tasks during training on data with different prediction uncertainty to calibrate the influence of OOD regularization, which is compatible with many prompt tuning based OOD detection methods. Extensive experiments and analyses have been conducted to characterize and demonstrate the effectiveness of the proposed SCT. 
The code is publicly available at: https://github.com/tmlr-group/SCT.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93172"} +{"video_file": "wAqdvcK1Fv_39025084.mp4", "openreview_id": "wAqdvcK1Fv", "slideslive_id": 39025084, "venue": "nips2024", "title": "Energy-Based Modelling for Discrete and Mixed Data via Heat Equations on Structured Spaces", "status": "Poster", "keywords": "Energy-based models;discrete probabilistic modelling;tabular data", "tldr": "A MCMC-free method based on the discrete heat diffusion for the training of energy-based models on discrete and mixed data with applications in generative modelling for tabular data.", "abstract": "Energy-based models (EBMs) offer a flexible framework for probabilistic modelling across various data domains. However, training EBMs on data in discrete or mixed state spaces poses significant challenges due to the lack of robust and fast sampling methods. In this work, we propose to train discrete EBMs with Energy Discrepancy, a loss function which only requires the evaluation of the energy function at data points and their perturbed counterparts, thus eliminating the need for Markov chain Monte Carlo. We introduce perturbations of the data distribution by simulating a diffusion process on the discrete state space endowed with a graph structure. This allows us to inform the choice of perturbation from the structure of the modelled discrete variable, while the continuous time parameter enables fine-grained control of the perturbation. Empirically, we demonstrate the efficacy of the proposed approaches in a wide range of applications, including the estimation of discrete densities with non-binary vocabulary and binary image modelling. We also introduce the first application of EBMs to tabular data sets with applications in synthetic data generation and calibrated classification.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93171"} +{"video_file": "wBtmN8SZ2B_39025561.mp4", "openreview_id": "wBtmN8SZ2B", "slideslive_id": 39025561, "venue": "nips2024", "title": "Learning Structured Representations with Hyperbolic Embeddings", "status": "Poster", "keywords": "hierarchical representation;representation learning;hyperbolic geometry", "tldr": "We propose a structured regularization method for learning label-hierarchy informed representations using hyperbolic geometry.", "abstract": "Most real-world datasets consist of a natural hierarchy between classes or an inherent label structure that is either already available or can be constructed cheaply. However, most existing representation learning methods ignore this hierarchy, treating labels as permutation invariant. Recent work [Zeng et al., 2022] proposes using this structured information explicitly, but the use of Euclidean distance may distort the underlying semantic context [Chen et al., 2013]. In this work, motivated by the advantage of hyperbolic spaces in modeling hierarchical relationships, we propose a novel approach HypStructure: a Hyperbolic Structured regularization approach to accurately embed the label hierarchy into the learned representations. HypStructure is a simple-yet-effective regularizer that consists of a hyperbolic tree-based representation loss along with a centering loss, and can be combined with any standard task loss to learn hierarchy-informed features. 
Extensive experiments on several large-scale vision benchmarks demonstrate the efficacy of HypStructure in reducing distortion and boosting generalization performance especially under low dimensional scenarios. For a better understanding of structured representation, we perform eigenvalue analysis that links the representation geometry to improved Out-of-Distribution (OOD) detection performance seen empirically.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/93170"} +{"video_file": "wBzvYh3PRA_39028836.mp4", "openreview_id": "wBzvYh3PRA", "slideslive_id": 39028836, "venue": "nips2024", "title": "FactorSim: Generative Simulation via Factorized Representation", "status": "Poster", "keywords": "generative simulation;POMDP;Large Language Models", "tldr": "We propose a framework for generating simulations in code to train RL agents and introduce a new benchmark to showcase its efficacy.", "abstract": "Generating simulations to train intelligent agents in game-playing and robotics from natural language input, user input, or task documentation remains an open-ended challenge. Existing approaches focus on parts of this challenge, such as generating reward functions or task hyperparameters. Unlike previous work, we introduce FACTORSIM that generates full simulations in code from language input that can be used to train agents. Exploiting the structural modularity specific to coded simulations, we propose to use a factored partially observable Markov decision process representation that allows us to reduce context dependence during each step of the generation. For evaluation, we introduce a generative simulation benchmark that assesses the generated simulation code\u2019s accuracy and effectiveness in facilitating zero-shot transfers in reinforcement learning settings. We show that FACTORSIM outperforms existing methods in generating simulations regarding prompt alignment (i.e., accuracy), zero-shot transfer abilities, and human evaluation. We also demonstrate its effectiveness in generating robotic tasks.", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/93169"} +{"video_file": "wDDvJzvvBR_39028760.mp4", "openreview_id": "wDDvJzvvBR", "slideslive_id": 39028760, "venue": "nips2024", "title": "Learning Spatially-Aware Language and Audio Embeddings", "status": "Poster", "keywords": "multimodal embeddings;spatial audio;contrastive learning", "tldr": "We train a model that aligns 3D Spatial Audio with open vocabulary captions.", "abstract": "Humans can picture a sound scene given an imprecise natural language description. For example, it is easy to imagine an acoustic environment given a phrase like \"the lion roar came from right behind me!\". For a machine to have the same degree of comprehension, the machine must know what a lion is (semantic attribute), what the concept of \"behind\" is (spatial attribute) and how these pieces of linguistic information align with the semantic and spatial attributes of the sound (what a roar sounds like when its coming from behind). State-of-the-art audio foundation models, such as CLAP, which learn to map between audio scenes and natural textual descriptions, are trained on non-spatial audio and text pairs, and hence lack spatial awareness. In contrast, sound event localization and detection models are limited to recognizing sounds from a fixed number of classes, and they localize the source to absolute position (e.g., 0.2m) rather than a position described using natural language (e.g., \"next to me\"). 
To address these gaps, we present ELSA (Embeddings for Language and Spatial Audio), a spatially aware-audio and text embedding model trained using multimodal contrastive learning. ELSA supports non-spatial audio, spatial audio, and open vocabulary text captions describing both the spatial and semantic components of sound. To train ELSA: (a) we spatially augment the audio and captions of three open-source audio datasets totaling 4,738 hours and 890,038 samples of audio comprised from 8,972 simulated spatial configurations, and (b) we design an encoder to capture the semantics of non-spatial audio, and the semantics and spatial attributes of spatial audio using contrastive learning. ELSA is a single model that is competitive with state-of-the-art for both semantic retrieval and 3D source localization. In particular, ELSA achieves +2.8% mean audio-to-text and text-to-audio R@1 above the LAION-CLAP baseline, and outperforms by -11.6\u00b0 mean-absolute-error in 3D source localization over the SeldNET baseline on the TUT Sound Events 2018 benchmark. Moreover, we show that the representation-space of ELSA is structured, enabling swapping of direction of audio via vector arithmetic of two directional text embeddings.", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/93168"} +{"video_file": "wDirCeTIoz_39028447.mp4", "openreview_id": "wDirCeTIoz", "slideslive_id": 39028447, "venue": "nips2024", "title": "Communication Efficient Distributed Training with Distributed Lion", "status": "Poster", "keywords": "Distributed Optimization", "tldr": "We introduce the distributed version of the Lion optimizer with efficient binary/low-precision communication. We provide both theoretical and empirical evidence to demonstrate it is a simple yet strong method.", "abstract": "The Lion optimizer has been a promising competitor with the AdamW for training large AI models, with advantages in memory, computation, and sample efficiency. In this paper, we introduce Distributed Lion, an innovative adaptation of Lion for distributed training environments. Leveraging the sign operator in Lion, our Distributed Lion only requires to communicate binary or lower-precision vectors between workers to the center server, significantly reducing the communication cost.\nOur theoretical analysis confirms Distributed Lion's convergence properties. Empirical results demonstrate its robustness across a range of tasks, worker counts, and batch sizes, on both vision and language problems. Notably, Distributed Lion attains comparable performance to standard Lion or AdamW optimizers applied on aggregated gradients, but with significantly reduced communication bandwidth. This feature is particularly advantageous for training large models. In addition, we also demonstrate that \\mavolion{} presents a more favorable performance-bandwidth balance compared to existing efficient distributed methods such as deep gradient compression and ternary gradients.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/93167"} +{"video_file": "wGP1tBCP1E_39026631.mp4", "openreview_id": "wGP1tBCP1E", "slideslive_id": 39026631, "venue": "nips2024", "title": "Diffusion Models are Certifiably Robust Classifiers", "status": "Poster", "keywords": "certified robustness;diffusion classifier;adversarial robustness", "tldr": "We derive the certified radius and Lipschitz constant for diffusion classifiers. 
We also generalize diffusion classifiers to classify noisy data.", "abstract": "Generative learning, recognized for its effective modeling of data distributions, offers inherent advantages in handling out-of-distribution instances, especially for enhancing robustness to adversarial attacks. Among these, diffusion classifiers, utilizing powerful diffusion models, have demonstrated superior empirical robustness. However, a comprehensive theoretical understanding of their robustness is still lacking, raising concerns about their vulnerability to stronger future attacks. In this study, we prove that diffusion classifiers possess\nO\n(\n1\n)\nLipschitzness, and establish their certified robustness, demonstrating their inherent resilience. To achieve non-constant Lipschitzness, thereby obtaining much tighter certified robustness, we generalize diffusion classifiers to classify Gaussian-corrupted data. This involves deriving the evidence lower bounds (ELBOs) for these distributions, approximating the likelihood using the ELBO, and calculating classification probabilities via Bayes' theorem. Experimental results show the superior certified robustness of these Noised Diffusion Classifiers (NDCs). Notably, we achieve over 80% and 70% certified robustness on CIFAR-10 under adversarial perturbations with (\\ell_2) norms less than 0.25 and 0.5, respectively, using a single off-the-shelf diffusion model without any additional data.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93165"} +{"video_file": "wGjSbaMsop_39027588.mp4", "openreview_id": "wGjSbaMsop", "slideslive_id": 39027588, "venue": "nips2024", "title": "Algorithmic Collective Action in Recommender Systems: Promoting Songs by Reordering Playlists", "status": "Poster", "keywords": "collective action;platform power;sequential recommender systems;transformer models;music recommendation", "tldr": "Small user collectives can effectively promote artists through simple reordering of playlists, leveraging the sequential nature of transformer-based recommendation systems.", "abstract": "We investigate algorithmic collective action in transformer-based recommender systems. Our use case is a collective of fans aiming to promote the visibility of an underrepresented artist by strategically placing one of their songs in the existing playlists they control. We introduce two easily implementable strategies to select the position at which to insert the song and boost recommendations at test time. The strategies exploit statistical properties of the learner to leverage discontinuities in the recommendations, and the long-tail nature of song distributions. We evaluate the efficacy of our strategies using a publicly available recommender system model released by a major music streaming platform. Our findings reveal that even small collectives (controlling less than 0.01% of the training data) can achieve up to\n40\n\u00d7\nmore test time recommendations than songs with similar training set occurrences, on average. Focusing on the externalities of the strategy, we find that the recommendations of other songs are largely preserved, and the newly gained recommendations are distributed across various artists. 
Together, our findings demonstrate how carefully designed collective action strategies can be effective while not necessarily being adversarial.", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/93164"} +{"video_file": "wJAF8TGVUG_39025187.mp4", "openreview_id": "wJAF8TGVUG", "slideslive_id": 39025187, "venue": "nips2024", "title": "S-MolSearch: 3D Semi-supervised Contrastive Learning for Bioactive Molecule Search", "status": "Poster", "keywords": "semi-supervised learning; 3D molecule search; contrastive learning", "tldr": "S-MolSearch, a semi-supervised framework for ligand-based virtual screening that leverages molecular 3D information. S-MolSearch efficiently processes labeled and unlabeled data with inverse optimal transport, achieving SOTA on LIT-PCBA and DUD-E", "abstract": "Virtual Screening is an essential technique in the early phases of drug discovery, aimed at identifying promising drug candidates from vast molecular libraries. Recently, ligand-based virtual screening has garnered significant attention due to its efficacy in conducting extensive database screenings without relying on specific protein-binding site information. Obtaining binding affinity data for complexes is highly expensive, resulting in a limited amount of available data that covers a relatively small chemical space. Moreover, these datasets contain a significant amount of inconsistent noise. It is challenging to identify an inductive bias that consistently maintains the integrity of molecular activity during data augmentation. To tackle these challenges, we propose S-MolSearch, the first framework to our knowledge, that leverages molecular 3D information and affinity information in semi-supervised contrastive learning for ligand-based virtual screening. % S-MolSearch processes both labeled and unlabeled data, trains molecular structural encoders, and generates soft labels for unlabeled data, drawing on the principles of inverse optimal transport. Drawing on the principles of inverse optimal transport, S-MolSearch efficiently processes both labeled and unlabeled data, training molecular structural encoders while generating soft labels for the unlabeled data. This design allows S-MolSearch to adaptively utilize unlabeled data within the learning process. Empirically, S-MolSearch demonstrates superior performance on widely-used benchmarks LIT-PCBA and DUD-E. It surpasses both structure-based and ligand-based virtual screening methods for AUROC, BEDROC and EF.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/93161"} +{"video_file": "wJaCsnT9UE_39027767.mp4", "openreview_id": "wJaCsnT9UE", "slideslive_id": 39027767, "venue": "nips2024", "title": "Sharpness-diversity tradeoff: improving flat ensembles with SharpBalance", "status": "Poster", "keywords": "Diversity;loss landscape;deep ensemble", "tldr": "This paper reveals a trade-off between sharpness and diversity in deep ensembles, both empirically and theoretically, and proposes SharpBalance, a novel ensemble algorithm that achieves an optimal balance between these two crucial metrics.", "abstract": "Recent studies on deep ensembles have identified the sharpness of the local minima of individual learners and the diversity of the ensemble members as key factors in improving test-time performance. 
Building on this, our study investigates the interplay between sharpness and diversity within deep ensembles, illustrating their crucial role in robust generalization to both in-distribution (ID) and out-of-distribution (OOD) data. We discover a trade-off between sharpness and diversity: minimizing the sharpness in the loss landscape tends to diminish the diversity of individual members within the ensemble, adversely affecting the ensemble's improvement. The trade-off is justified through our rigorous theoretical analysis and verified empirically through extensive experiments. To address the issue of reduced diversity, we introduce SharpBalance, a novel training approach that balances sharpness and diversity within ensembles. Theoretically, we show that our training strategy achieves a better sharpness-diversity trade-off. Empirically, we conducted comprehensive evaluations in various data sets (CIFAR-10, CIFAR-100, TinyImageNet) and showed that SharpBalance not only effectively improves the sharpness-diversity trade-off but also significantly improves ensemble performance in ID and OOD scenarios.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/93160"} +{"video_file": "wN5AgP0DJ0_39026251.mp4", "openreview_id": "wN5AgP0DJ0", "slideslive_id": 39026251, "venue": "nips2024", "title": "Space-Time Continuous PDE Forecasting using Equivariant Neural Fields", "status": "Poster", "keywords": "pde solving;neural fields;equivariance;attention", "tldr": "We introduce an equivariant continuous PDE solving method based on Equivariant Neural Fields that preserves boundary conditions and known symmetries of the PDE.", "abstract": "Recently, Conditional Neural Fields (NeFs) have emerged as a powerful modelling paradigm for PDEs, by learning solutions as flows in the latent space of the Conditional NeF. Although benefiting from favourable properties of NeFs such as grid-agnosticity and space-time-continuous dynamics modelling, this approach limits the ability to impose known constraints of the PDE on the solutions -- such as symmetries or boundary conditions -- in favour of modelling flexibility. Instead, we propose a space-time continuous NeF-based solving framework that - by preserving geometric information in the latent space of the Conditional NeF - preserves known symmetries of the PDE. We show that modelling solutions as flows of pointclouds over the group of interest $G$ improves generalization and data-efficiency. Furthermore, we validate that our framework readily generalizes to unseen spatial and temporal locations, as well as geometric transformations of the initial conditions - where other NeF-based PDE forecasting methods fail -, and improve over baselines in a number of challenging geometries.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/93158"} +{"video_file": "wT5AgMVkaJ_39028088.mp4", "openreview_id": "wT5AgMVkaJ", "slideslive_id": 39028088, "venue": "nips2024", "title": "Aligning Vision Models with Human Aesthetics in Retrieval: Benchmarks and Algorithms", "status": "Poster", "keywords": "Image retrieval;Alignment;Aesthetics;Vision-Language Models", "tldr": "Method for aligning visual retrieval model with human aesthetics", "abstract": "Modern vision models are trained on very large noisy datasets. 
While these models acquire strong capabilities, they may not follow the user's intent to output the desired results in certain aspects, e.g., visual aesthetic, preferred style, and responsibility. In this paper, we target the realm of visual aesthetics and aim to align vision models with human aesthetic standards in a retrieval system. Advanced retrieval systems usually adopt a cascade of aesthetic models as re-rankers or filters, which are limited to low-level features like saturation and perform poorly when stylistic, cultural or knowledge contexts are involved. We find that utilizing the reasoning ability of large language models (LLMs) to rephrase the search query and extend the aesthetic expectations can make up for this shortcoming. Based on the above findings, we propose a preference-based reinforcement learning method that fine-tunes the vision models to distill the knowledge from both LLMs reasoning and the aesthetic models to better align the vision models with human aesthetics. Meanwhile, with rare benchmarks designed for evaluating retrieval systems, we leverage large multi-modality model (LMM) to evaluate the aesthetic performance with their strong abilities. As aesthetic assessment is one of the most subjective tasks, to validate the robustness of LMM, we further propose a novel dataset named HPIR to benchmark the alignment with human aesthetics. Experiments demonstrate that our method significantly enhances the aesthetic behaviors of the vision models, under several metrics. We believe the proposed algorithm can be a general practice for aligning vision models with human values.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93151"} +{"video_file": "wT6GHk5ShC_39026477.mp4", "openreview_id": "wT6GHk5ShC", "slideslive_id": 39026477, "venue": "nips2024", "title": "Enhancing In-Context Learning Performance with just SVD-Based Weight Pruning: A Theoretical Perspective", "status": "Poster", "keywords": "Large language models;In-Context Learning;SVD;Theoretical generalization bounds", "tldr": "We show SVD-based weight pruning boosts in-context learning in large language models, proved by theoretical analysis, and propose an effective, intuitive algorithm for downstream tasks.", "abstract": "Pre-trained large language models (LLMs) based on Transformer have demonstrated striking in-context learning (ICL) abilities. With a few demonstration input-label pairs, they can predict the label for an unseen input without any parameter updates. In this paper, we show an exciting phenomenon that SVD-based weight pruning can enhance ICL performance, and more surprising, pruning weights in deep layers often results in more stable performance improvements than in shallow layers. However, the underlying mechanism of those findings still remains an open question. To reveal those findings, we conduct an in-depth theoretical analysis by presenting the implicit gradient descent (GD) trajectories of ICL and giving the mutual information based generalization bounds of ICL via full implicit GD trajectories. This helps us reasonably explain the surprising experimental findings. Besides, based on all our experimental and theoretical insights, we intuitively propose a simple, model-compression and derivative-free algorithm for downstream tasks in enhancing ICL inference. 
Experiments on benchmark datasets and open source LLMs display the method effectiveness.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93150"} +{"video_file": "wTIzpqX121_39024864.mp4", "openreview_id": "wTIzpqX121", "slideslive_id": 39024864, "venue": "nips2024", "title": "Probabilistic Weather Forecasting with Hierarchical Graph Neural Networks", "status": "Spotlight", "keywords": "weather forecasting;graph neural network;probabilistic;ensemble forecasting;latent variable model;earth system modeling", "tldr": "We introduce a probabilistic graph neural network model for weather forecasting, capturing uncertainty by generating ensemble forecasts.", "abstract": "In recent years, machine learning has established itself as a powerful tool for high-resolution weather forecasting. While most current machine learning models focus on deterministic forecasts, accurately capturing the uncertainty in the chaotic weather system calls for probabilistic modeling. We propose a probabilistic weather forecasting model called Graph-EFM, combining a flexible latent-variable formulation with the successful graph-based forecasting framework. The use of a hierarchical graph construction allows for efficient sampling of spatially coherent forecasts. Requiring only a single forward pass per time step, Graph-EFM allows for fast generation of arbitrarily large ensembles. We experiment with the model on both global and limited area forecasting. Ensemble forecasts from Graph-EFM achieve equivalent or lower errors than comparable deterministic models, with the added benefit of accurately capturing forecast uncertainty.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/93149"} +{"video_file": "wWguwYhpAY_39026506.mp4", "openreview_id": "wWguwYhpAY", "slideslive_id": 39026506, "venue": "nips2024", "title": "Neural Experts: Mixture of Experts for Implicit Neural Representations", "status": "Poster", "keywords": "Implicit Neural Representation;Surface Reconstruction;Mixture of Experts", "tldr": "We propose a mixture of experts architecture for reconstructing various signals (3D surfaces, images, audio) using a neural representation.", "abstract": "Implicit neural representations (INRs) have proven effective in various tasks including image, shape, audio, and video reconstruction. These INRs typically learn the implicit field from sampled input points. This is often done using a single network for the entire domain, imposing many global constraints on a single function. In this paper, we propose a mixture of experts (MoE) implicit neural representation approach that enables learning local piece-wise continuous functions that simultaneously learns to subdivide the domain and fit it locally. We show that incorporating a mixture of experts architecture into existing INR formulations provides a boost in speed, accuracy, and memory requirements. Additionally, we introduce novel conditioning and pretraining methods for the gating network that improves convergence to the desired solution. We evaluate the effectiveness of our approach on multiple reconstruction tasks, including surface reconstruction, image reconstruction, and audio signal reconstruction and show improved performance compared to non-MoE methods. 
Code is available at our project page https://sitzikbs.github.io/neural-experts-projectpage/ .", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93148"} +{"video_file": "wZgw4CrxwK_39027622.mp4", "openreview_id": "wZgw4CrxwK", "slideslive_id": 39027622, "venue": "nips2024", "title": "Incentivizing Quality Text Generation via Statistical Contracts", "status": "Poster", "keywords": "Contract Theory;Contract Design;Moral Hazard;Natural Language Generation;LLM evaluation;Hypothesis Testing", "tldr": "We design pay-for-performance contracts that incentivize the use of high-quality LLMs for text generation.", "abstract": "While the success of large language models (LLMs) increases demand for machine-generated text, current pay-per-token pricing schemes create a misalignment of incentives known in economics as moral hazard: Text-generating agents have strong incentive to cut costs by preferring a cheaper model over the cutting-edge one, and this can be done \u201cbehind the scenes\u201d since the agent performs inference internally. In this work, we approach this issue from an economic perspective, by proposing a pay-for-performance, contract-based framework for incentivizing quality. We study a principal-agent game where the agent generates text using costly inference, and the contract determines the principal\u2019s payment for the text according to an automated quality evaluation. Since standard contract theory is inapplicable when internal inference costs are unknown, we introduce cost-robust contracts. As our main theoretical contribution, we characterize optimal cost-robust contracts through a direct correspondence to optimal composite hypothesis tests from statistics, generalizing a result of Saig et al. (NeurIPS\u201923). We evaluate our framework empirically by deriving contracts for a range of objectives and LLM evaluation benchmarks, and find that cost-robust contracts sacrifice only a marginal increase in objective value compared to their cost-aware counterparts.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/93145"} +{"video_file": "wZigMVFURk_39026137.mp4", "openreview_id": "wZigMVFURk", "slideslive_id": 39026137, "venue": "nips2024", "title": "RoPINN: Region Optimized Physics-Informed Neural Networks", "status": "Poster", "keywords": "Physics-informed Neural Networks;PINN Training;Deep Learning", "tldr": "This paper proposes and theoretically studies a new training paradigm as region optimization. RoPINN is derived from the theory as a practical training algorithm, which can consistently benefit diverse PINN backbones on extensive PDEs.", "abstract": "Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs) by enforcing outputs and gradients of deep models to satisfy target equations. Due to the limitation of numerical computation, PINNs are conventionally optimized on finite selected points. However, since PDEs are usually defined on continuous domains, solely optimizing models on scattered points may be insufficient to obtain an accurate solution for the whole domain. To mitigate this inherent deficiency of the default scatter-point optimization, this paper proposes and theoretically studies a new training paradigm as region optimization. 
Concretely, we propose to extend the optimization process of PINNs from isolated points to their continuous neighborhood regions, which can theoretically decrease the generalization error, especially for hidden high-order constraints of PDEs. A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm, which is implemented by a straightforward but effective Monte Carlo sampling method. By calibrating the sampling process into trust regions, RoPINN finely balances optimization and generalization error. Experimentally, RoPINN consistently boosts the performance of diverse PINNs on a wide range of PDEs without extra backpropagation or gradient calculation. Code is available at this repository: https://github.com/thuml/RoPINN.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/93144"} +{"video_file": "wblxm5zdkE_39028670.mp4", "openreview_id": "wblxm5zdkE", "slideslive_id": 39028670, "venue": "nips2024", "title": "Real-Time Selection Under General Constraints via Predictive Inference", "status": "Poster", "keywords": "Online multiple testing; Predictive inference; False selection rate; Individual and interactive constraints; Local false discovery rate.", "tldr": "We address online sample selection, introducing II-COS, a decision rule that efficiently identifies preferable samples meeting practical requirements by managing individual and interactive constraints.", "abstract": "Real-time decision-making gets more attention in the big data era. Here, we consider the problem of sample selection in the online setting, where one encounters a possibly infinite sequence of individuals collected over time with covariate information available. The goal is to select samples of interest that are characterized by their unobserved responses until the user-specified stopping time. We derive a new decision rule that enables us to find more preferable samples that meet practical requirements by simultaneously controlling two types of general constraints: individual and interactive constraints, which include the widely utilized False Selection Rate (FSR), cost limitations, and diversity of selected samples. The key elements of our approach involve quantifying the uncertainty of response predictions via predictive inference and addressing individual and interactive constraints in a sequential manner. Theoretical and numerical results demonstrate the effectiveness of the proposed method in controlling both individual and interactive constraints.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/93141"} +{"video_file": "weemASPtzg_39024572.mp4", "openreview_id": "weemASPtzg", "slideslive_id": 39024572, "venue": "nips2024", "title": "Linear Causal Representation Learning from Unknown Multi-node Interventions", "status": "Poster", "keywords": "Causal representation learning;interventions;score-based methods;identifiability", "tldr": "We prove identifiability results and design algorithms for linear causal representation learning from unknown multi-node stochastic interventions.", "abstract": "Despite the multifaceted recent advances in interventional causal representation learning (CRL), they primarily focus on the stylized assumption of single-node interventions. This assumption is not valid in a wide range of applications, and generally, the subset of nodes intervened in an interventional environment is fully unknown. 
This paper focuses on interventional CRL under unknown multi-node (UMN) interventional environments and establishes the first identifiability results for general latent causal models (parametric or nonparametric) under stochastic interventions (soft or hard) and linear transformation from the latent to observed space. Specifically, it is established that given sufficiently diverse interventional environments, (i) identifiability up to ancestors is possible using only soft interventions, and (ii) perfect identifiability is possible using hard interventions. Remarkably, these guarantees match the best-known results for more restrictive single-node interventions. Furthermore, CRL algorithms are also provided that achieve the identifiability guarantees. A central step in designing these algorithms is establishing the relationships between UMN interventional CRL and score functions associated with the statistical models of different interventional environments. Establishing these relationships also serves as constructive proof of the identifiability guarantees.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/93136"} +{"video_file": "wfU2CdgmWt_39025419.mp4", "openreview_id": "wfU2CdgmWt", "slideslive_id": 39025419, "venue": "nips2024", "title": "Stochastic Optimal Control Matching", "status": "Poster", "keywords": "Stochastic Optimal Control;Diffusion", "tldr": "We propose the first least squares loss to solve Stochastic Optimal Control problems, and show that it outperforms existing losses experimentally.", "abstract": "Stochastic optimal control, which has the goal of driving the behavior of noisy systems, is broadly applicable in science, engineering and artificial intelligence. Our work introduces Stochastic Optimal Control Matching (SOCM), a novel Iterative Diffusion Optimization (IDO) technique for stochastic optimal control that stems from the same philosophy as the conditional score matching loss for diffusion models. That is, the control is learned via a least squares problem by trying to fit a matching vector field. The training loss, which is closely connected to the cross-entropy loss, is optimized with respect to both the control function and a family of reparameterization matrices which appear in the matching vector field. The optimization with respect to the reparameterization matrices aims at minimizing the variance of the matching vector field. Experimentally, our algorithm achieves lower error than all the existing IDO techniques for stochastic optimal control for three out of four control problems, in some cases by an order of magnitude. The key idea underlying SOCM is the path-wise reparameterization trick, a novel technique that may be of independent interest.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/93135"} +{"video_file": "wiK6bwuxjE_39028867.mp4", "openreview_id": "wiK6bwuxjE", "slideslive_id": 39028867, "venue": "nips2024", "title": "MonoMAE: Enhancing Monocular 3D Detection through Depth-Aware Masked Autoencoders", "status": "Poster", "keywords": "Monocular 3D Object Detection;Masked Autoencoders", "tldr": "This paper presents MonoMAE, a monocular 3D detector inspired by Masked Autoencoders that addresses the object occlusion issue in monocular 3D object detection by masking and reconstructing objects in the feature space.", "abstract": "Monocular 3D object detection aims for precise 3D localization and identification of objects from a single-view image. 
Despite its recent progress, it often struggles while handling pervasive object occlusions that tend to complicate and degrade the prediction of object dimensions, depths, and orientations. We design MonoMAE, a monocular 3D detector inspired by Masked Autoencoders that addresses the object occlusion issue by masking and reconstructing objects in the feature space. MonoMAE consists of two novel designs. The first is depth-aware masking that selectively masks certain parts of non-occluded object queries in the feature space for simulating occluded object queries for network training. It masks non-occluded object queries by balancing the masked and preserved query portions adaptively according to the depth information. The second is lightweight query completion that works with the depth-aware masking to learn to reconstruct and complete the masked object queries. With the proposed feature-space occlusion and completion, MonoMAE learns enriched 3D representations that achieve superior monocular 3D detection performance qualitatively and quantitatively for both occluded and non-occluded objects. Additionally, MonoMAE learns generalizable representations that can work well in new domains.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93132"} +{"video_file": "wiMaws0FWB_39025485.mp4", "openreview_id": "wiMaws0FWB", "slideslive_id": 39025485, "venue": "nips2024", "title": "Implicit Bias of Mirror Flow on Separable Data", "status": "Poster", "keywords": "Implicit bias;Mirror descent;Classification", "tldr": "We provide the implicit bias of mirror flow in the classification setting.", "abstract": "We examine the continuous-time counterpart of mirror descent, namely mirror flow, on classification problems which are linearly separable. Such problems are minimised \u2018at infinity\u2019 and have many possible solutions; we study which solution is preferred by the algorithm depending on the mirror potential. For exponential tailed losses and under mild assumptions on the potential, we show that the iterates converge in direction towards a\n\u03d5\n\u221e\n-maximum margin classifier. The function\n\u03d5\n\u221e\nis the horizon function of the mirror potential and characterises its shape \u2018at infinity\u2019. When the potential is separable, a simple formula allows to compute this function. We analyse several examples of potentials and provide numerical experiments highlighting our results.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93131"} +{"video_file": "wjbTHLUSzU_39027517.mp4", "openreview_id": "wjbTHLUSzU", "slideslive_id": 39027517, "venue": "nips2024", "title": "TSDS: Data Selection for Task-Specific Model Finetuning", "status": "Poster", "keywords": "data selection;finetuning;foundation model;large language model;optimal transport", "tldr": "We present a framework to select data for task-specific model finetuning, guided by a small but representative set of examples from the target task.", "abstract": "Finetuning foundation models for specific tasks is an emerging paradigm in modern machine learning. The efficacy of task-specific finetuning largely depends on the selection of appropriate training data. We present TSDS (Task-Specific Data Selection), a framework to select data for task-specific model finetuning, guided by a small but representative set of examples from the target task. 
To do so, we formulate data selection for task-specific finetuning as an optimization problem with a distribution alignment loss based on optimal transport to capture the discrepancy between the selected data and the target distribution. In addition, we add a regularizer to encourage the diversity of the selected data and incorporate kernel density estimation into the regularizer to reduce the negative effects of near-duplicates among the candidate data. We connect our optimization problem to nearest neighbor search and design efficient algorithms to compute the optimal solution based on approximate nearest neighbor search techniques. We evaluate our method on data selection for both continued pretraining and instruction tuning of language models. We show that instruction tuning using data selected by our method with a 1% selection ratio often outperforms using the full dataset and beats the baseline selection methods by 1.5 points in F1 score on average.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93130"} +{"video_file": "wlqfOvlTQz_39025123.mp4", "openreview_id": "wlqfOvlTQz", "slideslive_id": 39025123, "venue": "nips2024", "title": "Reinforcement Learning with Lookahead Information", "status": "Poster", "keywords": "Reinforcement Learning;Regret Minimization;Lookahead", "tldr": "We study RL settings where either immediate rewards or transitions are observed before acting and show how to achieve tight regret compared to stronger baselines with similar information.", "abstract": "We study reinforcement learning (RL) problems in which agents observe the reward or transition realizations at their current state before deciding which action to take. Such observations are available in many applications, including transactions, navigation and more. When the environment is known, previous work shows that this lookahead information can drastically increase the collected reward. However, outside of specific applications, existing approaches for interacting with unknown environments are not well-adapted to these observations. In this work, we close this gap and design provably-efficient learning algorithms able to incorporate lookahead information. To achieve this, we perform planning using the empirical distribution of the reward and transition observations, in contrast to vanilla approaches that only rely on estimated expectations. We prove that our algorithms achieve tight regret versus a baseline that also has access to lookahead information -- linearly increasing the amount of collected reward compared to agents that cannot handle lookahead information.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93125"} +{"video_file": "wqs2RMq4CW_39025835.mp4", "openreview_id": "wqs2RMq4CW", "slideslive_id": 39025835, "venue": "nips2024", "title": "Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent Misspecification", "status": "Poster", "keywords": "corruption-robust;linear bandits;misspecification;reinforcement learning", "tldr": "We obtain minimax optimal bounds for corruption-robust linear bandits, and show that they can be used to obtain novel gap-dependent misspecification bounds in bandits and RL.", "abstract": "In linear bandits, how can a learner effectively learn when facing corrupted rewards? 
While significant work has explored this question, a holistic understanding across different adversarial models and corruption measures is lacking, as is a full characterization of the minimax regret bounds. In this work, we compare two types of corruptions commonly considered: strong corruption, where the corruption level depends on the learner\u2019s chosen action, and weak corruption, where the corruption level does not depend on the learner\u2019s chosen action. We provide a unified framework to analyze these corruptions. For stochastic linear bandits, we fully characterize the gap between the minimax regret under strong and weak corruptions. We also initiate the study of corrupted adversarial linear bandits, obtaining upper and lower bounds with matching dependencies on the corruption level. Next, we reveal a connection between corruption-robust learning and learning with gap-dependent misspecification\u2014a setting first studied by Liu et al. (2023a), where the misspecification level of an action or policy is proportional to its suboptimality. We present a general reduction that enables any corruption-robust algorithm to handle gap-dependent misspecification. This allows us to recover the results of Liu et al. (2023a) in a black-box manner and significantly generalize them to settings like linear MDPs, yielding the first results for gap-dependent misspecification in reinforcement learning. However, this general reduction does not attain the optimal rate for gap-dependent misspecification. Motivated by this, we develop a specialized algorithm that achieves optimal bounds for gap-dependent misspecification in linear bandits, thus answering an open question posed by Liu et al. (2023a).", "primary_area": "bandits", "site": "https://neurips.cc/virtual/2024/poster/93118"} +{"video_file": "wsHMb4J2o9_39028725.mp4", "openreview_id": "wsHMb4J2o9", "slideslive_id": 39028725, "venue": "nips2024", "title": "The Feature Speed Formula: a flexible approach to scale hyper-parameters of deep neural networks", "status": "Poster", "keywords": "Feature Learning;Deep Neural Networks;One SGD step;Hyperparameter scaling;dynamical isometry", "tldr": "We propose an elementary approach to quantifying feature learning in wide and deep neural networks and to derive hyperparameter scalings.", "abstract": "Deep learning succeeds by doing hierarchical feature learning, yet tuning hyper-parameters (HP) such as initialization scales, learning rates etc., only give indirect control over this behavior. In this paper, we introduce a key notion to predict and control feature learning: the angle\n\u03b8\n\u2113\nbetween the feature updates and the backward pass (at layer index\n\u2113\n). We show that the magnitude of feature updates after one GD step, at any training time, can be expressed via a simple and general feature speed formula in terms of this angle\n\u03b8\n\u2113\n, the loss decay, and the magnitude of the backward pass. This angle\n\u03b8\n\u2113\nis controlled by the conditioning of the layer-to-layer Jacobians and at random initialization, it is determined by the spectrum of a certain kernel, which coincides with the Neural Tangent Kernel when\n\u2113\n=\ndepth\n. Given\n\u03b8\n\u2113\n, the feature speed formula provides us with rules to adjust HPs (scales and learning rates) so as to satisfy certain dynamical properties, such as feature learning and loss decay. We investigate the implications of our approach for ReLU MLPs and ResNets in the large width-then-depth limit. 
Relying on prior work, we show that in ReLU MLPs with iid initialization, the angle degenerates with depth as\ncos\n\u2061\n(\n\u03b8\n\u2113\n)\n=\n\u0398\n(\n1\n/\n\u2113\n)\n. In contrast, ResNets with branch scale\nO\n(\n1\n/\ndepth\n)\nmaintain a non-degenerate angle\ncos\n\u2061\n(\n\u03b8\n\u2113\n)\n=\n\u0398\n(\n1\n)\n. We use these insights to recover key properties of known HP scalings (such as\n\u03bc\nP), and also introduce a new HP scaling for large depth ReLU MLPs with favorable theoretical properties.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/93116"} +{"video_file": "wsqDJHPUHN_39027738.mp4", "openreview_id": "wsqDJHPUHN", "slideslive_id": 39027738, "venue": "nips2024", "title": "On the Ability of Developers' Training Data Preservation of Learnware", "status": "Poster", "keywords": "Learnware;Model Specification;Reduced Kernel Mean Embedding;Data Preservation;Synthetic Data;Learnware Dock System", "tldr": "We conducted a theoretical analysis of the data protection capabilities of the Reduced Kernel Mean Embeding (RKME) specification in learnware.", "abstract": "The learnware paradigm aims to enable users to leverage numerous existing well-trained models instead of building machine learning models from scratch. In this paradigm, developers worldwide can submit their well-trained models spontaneously into a learnware dock system, and the system helps developers generate specification for each model to form a learnware. As the key component, a specification should characterize the capabilities of the model, enabling it to be adequately identified and reused, while preserving the developer's original data. Recently, the RKME (Reduced Kernel Mean Embedding) specification was proposed and most commonly utilized. This paper provides a theoretical analysis of RKME specification about its preservation ability for developer's training data. By modeling it as a geometric problem on manifolds and utilizing tools from geometric analysis, we prove that the RKME specification is able to disclose none of the developer's original data and possesses robust defense against common inference attacks, while preserving sufficient information for effective learnware identification.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93115"} +{"video_file": "wz2KvvEk44_39025169.mp4", "openreview_id": "wz2KvvEk44", "slideslive_id": 39025169, "venue": "nips2024", "title": "Focus On What Matters: Separated Models For Visual-Based RL Generalization", "status": "Poster", "keywords": "Reinforcement Learning;Visual-based RL;Generalization", "tldr": "We propose SMG, which utilizes a reconstruction-based auxiliary task to extract task-relevant representations from visual observations and further strengths the generalization ability of RL agents with the help of two consistency losses.", "abstract": "A primary challenge for visual-based Reinforcement Learning (RL) is to generalize effectively across unseen environments. Although previous studies have explored different auxiliary tasks to enhance generalization, few adopt image reconstruction due to concerns about exacerbating overfitting to task-irrelevant features during training. Perceiving the pre-eminence of image reconstruction in representation learning, we propose SMG (\\blue{S}eparated \\blue{M}odels for \\blue{G}eneralization), a novel approach that exploits image reconstruction for generalization. 
SMG introduces two model branches to extract task-relevant and task-irrelevant representations separately from visual observations via cooperatively reconstruction. Built upon this architecture, we further emphasize the importance of task-relevant features for generalization. Specifically, SMG incorporates two additional consistency losses to guide the agent's focus toward task-relevant areas across different scenarios, thereby achieving free from overfitting. Extensive experiments in DMC demonstrate the SOTA performance of SMG in generalization, particularly excelling in video-background settings. Evaluations on robotic manipulation tasks further confirm the robustness of SMG in real-world applications. Source code is available at \\url{https://anonymous.4open.science/r/SMG/}.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93111"} +{"video_file": "wzof7Y66xs_39024705.mp4", "openreview_id": "wzof7Y66xs", "slideslive_id": 39024705, "venue": "nips2024", "title": "Hierarchical Selective Classification", "status": "Poster", "keywords": "Hierarchical Selective Classification;Hierarchical Uncertainty;Selective Classification;Non-Bayesian Uncertainty Estimation;CLIP", "tldr": "We extend selective classification to a hierarchical setting, showing outstanding results for various models.", "abstract": "Deploying deep neural networks for risk-sensitive tasks necessitates an uncertainty estimation mechanism. This paper introduces hierarchical selective classification, extending selective classification to a hierarchical setting. Our approach leverages the inherent structure of class relationships, enabling models to reduce the specificity of their predictions when faced with uncertainty. In this paper, we first formalize hierarchical risk and coverage, and introduce hierarchical risk-coverage curves. Next, we develop algorithms for hierarchical selective classification (which we refer to as \"inference rules\"), and propose an efficient algorithm that guarantees a target accuracy constraint with high probability. Lastly, we conduct extensive empirical studies on over a thousand ImageNet classifiers, revealing that training regimes such as CLIP, pretraining on ImageNet21k and knowledge distillation boost hierarchical selective performance.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/93110"} +{"video_file": "x2780VcMOI_39026657.mp4", "openreview_id": "x2780VcMOI", "slideslive_id": 39026657, "venue": "nips2024", "title": "A Polar coordinate system represents syntax in large language models", "status": "Poster", "keywords": "Natural Language Processing;Large Language Models;Interpretability;Syntax;Linguistics;Cognitive Science", "tldr": "We show that the presence and type of syntactic relations in a sentence can be inferred respectively from distances and orientations in the activation space of language models", "abstract": "Originally formalized with symbolic representations, syntactic trees may also be effectively represented in the activations of large language models (LLMs). Indeed, a ''Structural Probe'' can find a subspace of neural activations, where syntactically-related words are relatively close to one-another. However, this syntactic code remains incomplete: the distance between the Structural Probe word embeddings can represent the \\emph{existence} but not the type and direction of syntactic relations. 
Here, we hypothesize that syntactic relations are, in fact, coded by the relative direction between nearby embeddings. To test this hypothesis, we introduce a ''Polar Probe'' trained to read syntactic relations from both the distance and the direction between word embeddings. Our approach reveals three main findings. First, our Polar Probe successfully recovers the type and direction of syntactic relations, and substantially outperforms the Structural Probe by nearly two folds. Second, we confirm that this polar coordinate system exists in a low-dimensional subspace of the intermediate layers of many LLMs and becomes increasingly precise in the latest frontier models. Third, we demonstrate with a new benchmark that similar syntactic relations are coded similarly across the nested levels of syntactic trees. Overall, this work shows that LLMs spontaneously learn a geometry of neural activations that explicitly represents the main symbolic structures of linguistic theory.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93109"} +{"video_file": "x2zY4hZcmg_39026162.mp4", "openreview_id": "x2zY4hZcmg", "slideslive_id": 39026162, "venue": "nips2024", "title": "Dynamic Model Predictive Shielding for Provably Safe Reinforcement Learning", "status": "Poster", "keywords": "Safe Reinforcement Learning;Model Predictive Shielding;Planning;MCTS", "tldr": "The paper proposes a novel method for integrating a dynamic planner within a safe reinforcement learning framework, enabling progress towards the goal while recovering from unsafe situations.", "abstract": "Among approaches for provably safe reinforcement learning, Model Predictive Shielding (MPS) has proven effective at complex tasks in continuous, high-dimensional state spaces, by leveraging a backup policy to ensure safety when the learned policy attempts to take risky actions. However, while MPS can ensure safety both during and after training, it often hinders task progress due to the conservative and task-oblivious nature of backup policies. This paper introduces Dynamic Model Predictive Shielding (DMPS), which optimizes reinforcement learning objectives while maintaining provable safety. DMPS employs a local planner to dynamically select safe recovery actions that maximize both short-term progress as well as long-term rewards. Crucially, the planner and the neural policy play a synergistic role in DMPS. When planning recovery actions for ensuring safety, the planner utilizes the neural policy to estimate long-term rewards, allowing it to observe beyond its short-term planning horizon. Conversely, the neural policy under training learns from the recovery plans proposed by the planner, converging to policies that are both high-performing and safe in practice. This approach guarantees safety during and after training, with bounded recovery regret that decreases exponentially with planning horizon depth. 
Experimental results demonstrate that DMPS converges to policies that rarely require shield interventions after training and achieve higher rewards compared to several state-of-the-art baselines.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93108"} +{"video_file": "x33oWJQyH0_39026982.mp4", "openreview_id": "x33oWJQyH0", "slideslive_id": 39026982, "venue": "nips2024", "title": "Unsupervised Object Detection with Theoretical Guarantees", "status": "Poster", "keywords": "unsupervised object detection;object detection;unsupervised learning;representation learning", "tldr": "We propose the first unsupervised object detection method that we prove has theoretical guarantees of recovering the true object positions up to small shifts.", "abstract": "Unsupervised object detection using deep neural networks is typically a difficult problem with few to no guarantees about the learned representation. In this work we present the first unsupervised object detection method that is theoretically guaranteed to recover the true object positions up to quantifiable small shifts. We develop an unsupervised object detection architecture and prove that the learned variables correspond to the true object positions up to small shifts related to the encoder and decoder receptive field sizes, the object sizes, and the widths of the Gaussians used in the rendering process. We perform detailed analysis of how the error depends on each of these variables and perform synthetic experiments validating our theoretical predictions up to a precision of individual pixels. We also perform experiments on CLEVR-based data and show that, unlike current SOTA object detection methods (SAM, CutLER), our method's prediction errors always lie within our theoretical bounds. We hope that this work helps open up an avenue of research into object detection methods with theoretical guarantees.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93107"} +{"video_file": "x4EoTQW7ka_39028586.mp4", "openreview_id": "x4EoTQW7ka", "slideslive_id": 39028586, "venue": "nips2024", "title": "DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation", "status": "Poster", "keywords": "Training Acceleration;Memory Efficient Fine-Tuning;Large Language Models;Backpropagation Optimization.", "tldr": "DropBP, randomly dropping backward propagation based on layer sensitivity, significantly accelerates fine-tuning in Large Language Models (LLMs) with considerable memory reduction.", "abstract": "Large language models (LLMs) have achieved significant success across various domains. However, training these LLMs typically involves substantial memory and computational costs during both forward and backward propagation. While parameter-efficient fine-tuning (PEFT) considerably reduces the training memory associated with parameters, it does not address the significant computational costs and activation memory. In this paper, we propose Dropping Backward Propagation (DropBP), a novel approach designed to reduce computational costs and activation memory while maintaining accuracy. DropBP randomly drops layers during backward propagation, which is essentially equivalent to training shallow submodules generated by undropped layers and residual connections. Additionally, DropBP calculates the sensitivity of each layer to assign an appropriate drop rate, thereby stabilizing the training process. 
DropBP is not only applicable to full fine-tuning but can also be orthogonally integrated with all types of PEFT by dropping layers during backward propagation. Specifically, DropBP can reduce training time by 44% with comparable accuracy to the baseline, accelerate convergence to the same perplexity by 1.5\n\u00d7\n, and enable training with a sequence length 6.2\n\u00d7\nlarger on a single NVIDIA-A100 GPU. Furthermore, our DropBP enabled a throughput increase of 79% on a NVIDIA A100 GPU and 117% on an Intel Gaudi2 HPU. The code is available at https://github.com/WooSunghyeon/dropbp.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93106"} +{"video_file": "x4Kk4FxLs3_39026591.mp4", "openreview_id": "x4Kk4FxLs3", "slideslive_id": 39026591, "venue": "nips2024", "title": "Pard: Permutation-Invariant Autoregressive Diffusion for Graph Generation", "status": "Poster", "keywords": "diffusion;generative model;graph generation;graph generative model", "tldr": "Combining autoregressive method with diffusion for modeling graph distribution with SOTA graph generation performance.", "abstract": "Graph generation has been dominated by autoregressive models due to their simplicity and effectiveness, despite their sensitivity to ordering. Yet diffusion models have garnered increasing attention, as they offer comparable performance while being permutation-invariant. Current graph diffusion models generate graphs in a one-shot fashion, but they require extra features and thousands of denoising steps to achieve optimal performance. We introduce PARD, a Permutation-invariant Auto Regressive Diffusion model that integrates diffusion models with autoregressive methods. PARD harnesses the effectiveness and efficiency of the autoregressive model while maintaining permutation invariance without ordering sensitivity. Specifically, we show that contrary to sets, elements in a graph are not entirely un-ordered and there is a unique partial order for nodes and edges. With this partial order, PARD generates a graph in a block-by-block, autoregressive fashion, where each block\u2019s probability is conditionally modeled by a shared diffusion model with an equivariant network. To ensure efficiency while being expressive, we further propose a higher-order graph transformer, which integrates transformer with PPGN (Maronet al., 2019). Like GPT, we extend the higher-order graph transformer to support parallel training of all blocks. Without any extra features, PARD achieves state-of-the-art performance on molecular and non-molecular datasets, and scales to large datasets like MOSES containing 1.9M molecules.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93104"} +{"video_file": "x7AD0343Jz_39026935.mp4", "openreview_id": "x7AD0343Jz", "slideslive_id": 39026935, "venue": "nips2024", "title": "Limits of Transformer Language Models on Learning to Compose Algorithms", "status": "Poster", "keywords": "Few-shot Compositional Learning;Compositionality;Sample Efficiency;Algorithmic Learning;Large Language Models;Transformers", "tldr": "We analyze the capabilities of Transformer language models in learning compositional discrete tasks and observe that compositional learning is very sample inefficient.", "abstract": "We analyze the capabilities of Transformer language models in learning compositional discrete tasks. 
To this end, we evaluate training LLaMA models and prompting GPT-4 and Gemini on four tasks demanding to learn a composition of several discrete sub-tasks. In particular, we measure how well these models can reuse primitives observable in the sub-tasks to learn the composition task. Our results indicate that compositional learning in state-of-the-art Transformer language models is highly sample inefficient: LLaMA requires more data samples than relearning all sub-tasks from scratch to learn the compositional task; in-context prompting with few samples is unreliable and fails at executing the sub-tasks or correcting the errors in multi-round code generation. Further, by leveraging complexity theory, we support these findings with a theoretical analysis focused on the sample inefficiency of gradient descent in memorizing feedforward models. We open source our code at https://github.com/IBM/limitations-lm-algorithmic-compositional-learning.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/93102"} +{"video_file": "x7pjdDod6Z_39027013.mp4", "openreview_id": "x7pjdDod6Z", "slideslive_id": 39027013, "venue": "nips2024", "title": "MeshFormer : High-Quality Mesh Generation with 3D-Guided Reconstruction Model", "status": "Oral", "keywords": "sparse view 3D reconstruction;3D generation;3D AIGC;reconstruction model", "tldr": "We introduce MeshFormer, a sparse-view reconstruction model that can deliver high-quality meshes and be trained efficiently.", "abstract": "Open-world 3D reconstruction models have recently garnered significant attention. However, without sufficient 3D inductive bias, existing methods typically entail expensive training costs and struggle to extract high-quality 3D meshes. In this work, we introduce MeshFormer, a sparse-view reconstruction model that explicitly leverages 3D native structure, input guidance, and training supervision. Specifically, instead of using a triplane representation, we store features in 3D sparse voxels and combine transformers with 3D convolutions to leverage an explicit 3D structure and projective bias. In addition to sparse-view RGB input, we require the network to take input and generate corresponding normal maps. The input normal maps can be predicted by 2D diffusion models, significantly aiding in the guidance and refinement of the geometry's learning. Moreover, by combining Signed Distance Function (SDF) supervision with surface rendering, we directly learn to generate high-quality meshes without the need for complex multi-stage training processes. By incorporating these explicit 3D biases, MeshFormer can be trained efficiently and deliver high-quality textured meshes with fine-grained geometric details. It can also be integrated with 2D diffusion models to enable fast single-image-to-3D and text-to-3D tasks. 
Videos are available at https://meshformer3d.github.io/", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93101"} +{"video_file": "x9eFgahVBI_39024759.mp4", "openreview_id": "x9eFgahVBI", "slideslive_id": 39024759, "venue": "nips2024", "title": "From Unstructured Data to In-Context Learning: Exploring What Tasks Can Be Learned and When", "status": "Poster", "keywords": "in-context learning;large language models;unstructured data;continuous bag of words;co-occurrence", "tldr": "We demonstrate the significance of co-occurrence, positional information, noise, and data structures for in-context learning from training on unstructured data.", "abstract": "Large language models (LLMs) like transformers demonstrate impressive in-context learning (ICL) capabilities, allowing them to make predictions for new tasks based on prompt exemplars without parameter updates. While existing ICL theories often assume structured training data resembling ICL tasks (e.g., x-y pairs for linear regression), LLMs are typically trained unsupervised on unstructured text, such as web content, which lacks clear parallels to tasks like word analogy. To address this gap, we examine what enables ICL in models trained on unstructured data, focusing on critical sequence model requirements and training data structure. We find that many ICL capabilities can emerge simply from co-occurrence of semantically related word pairs in unstructured data; word analogy completion, for example, can provably arise purely through co-occurrence modeling, using classical language models like continuous bag of words (CBOW), without needing positional information or attention mechanisms. However, positional information becomes crucial for logic reasoning tasks requiring generalization to unseen tokens. Finally, we identify two cases where ICL fails: one in logic reasoning tasks that require generalizing to new, unseen patterns, and another in analogy completion where relevant word pairs appear only in fixed training positions. These findings suggest that LLMs' ICL abilities depend heavily on the structural elements within their training data.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93099"} +{"video_file": "xCIbVuXwPM_39028758.mp4", "openreview_id": "xCIbVuXwPM", "slideslive_id": 39028758, "venue": "nips2024", "title": "Trading off Consistency and Dimensionality of Convex Surrogates for Multiclass Classification", "status": "Poster", "keywords": "Loss Functions;Consistency;Property Elicitation", "tldr": "This work explores theoretical guarantees for partial consistency when violating known prediction dimension bounds.", "abstract": "In multiclass classification over n outcomes, we typically optimize some surrogate loss L : R^d \u00d7 Y \u2192 R assigning real-valued error to predictions in R^d. In this paradigm, outcomes must be embedded into the reals with dimension d \u2248 n in order to design a consistent surrogate loss. Consistent losses are well-motivated theoretically, yet for large n, such as in information retrieval and structured prediction tasks, their optimization may be computationally infeasible. In practice, outcomes are typically embedded into some R^d for d \u226a n, with little known about their suitability for multiclass classification. We investigate two approaches for trading off consistency and dimensionality in multiclass classification while using a convex surrogate loss.
We first formalize partial consistency when the optimized surrogate has dimension d \u226a n. We then check if partial consistency holds under a given embedding and low-noise assumption, providing insight into when to use a particular embedding into R^d. Finally, we present a new method to construct (fully) consistent losses with d \u226a n out of multiple problem instances. Our practical approach leverages parallelism to sidestep lower bounds on d.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93098"} +{"video_file": "xL7Ve14AHA_39027288.mp4", "openreview_id": "xL7Ve14AHA", "slideslive_id": 39027288, "venue": "nips2024", "title": "Regularized Adaptive Momentum Dual Averaging with an Efficient Inexact Subproblem Solver for Training Structured Neural Network", "status": "Poster", "keywords": "structured neural networks;variance reduction;manifold identification;proximal methods;adaptive methods;inexact subproblem solution", "tldr": "A regularized adaptive momentum dual averaging algorithm for training structured neural networks with guarantees for finding the locally optimal structure, and exhibits outstanding performance in language, speech and image classification tasks.", "abstract": "We propose a Regularized Adaptive Momentum Dual Averaging (RAMDA) algorithm for training structured neural networks. Similar to existing regularized adaptive methods, the subproblem for computing the update direction of RAMDA involves a nonsmooth regularizer and a diagonal preconditioner, and therefore does not possess a closed-form solution in general. We thus also carefully devise an implementable inexactness condition that retains convergence guarantees similar to the exact versions, and propose a companion efficient solver for the subproblems of both RAMDA and existing methods to make them practically feasible. We leverage the theory of manifold identification in variational analysis to show that, even in the presence of such inexactness, the iterates of RAMDA attain the ideal structure induced by the regularizer at the stationary point of asymptotic convergence. This structure is locally optimal near the point of convergence, so RAMDA is guaranteed to obtain the best structure possible among all methods converging to the same point, making it the first regularized adaptive method outputting models that possess outstanding predictive performance while being (locally) optimally structured. Extensive numerical experiments in large-scale modern computer vision, language modeling, and speech tasks show that the proposed RAMDA is efficient and consistently outperforms state of the art for training structured neural network. Implementation of our algorithm is available at https://www.github.com/ismoptgroup/RAMDA.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/93094"} +{"video_file": "xM5m7J6Lbl_39028053.mp4", "openreview_id": "xM5m7J6Lbl", "slideslive_id": 39028053, "venue": "nips2024", "title": "Can an AI Agent Safely Run a Government? 
Existence of Probably Approximately Aligned Policies", "status": "Poster", "keywords": "Alignment;Planning;Social Choice;AI Safety", "tldr": "This paper introduces probably approximately aligned (PAA) and safe policies in the context of social decision processes.", "abstract": "While autonomous agents often surpass humans in their ability to handle vast and complex data, their potential misalignment (i.e., lack of transparency regarding their true objective) has thus far hindered their use in critical applications such as social decision processes. More importantly, existing alignment methods provide no formal guarantees on the safety of such models. Drawing from utility and social choice theory, we provide a novel quantitative definition of alignment in the context of social decision-making. Building on this definition, we introduce probably approximately aligned (i.e., near-optimal) policies, and we derive a sufficient condition for their existence. Lastly, recognizing the practical difficulty of satisfying this condition, we introduce the relaxed concept of safe (i.e., nondestructive) policies, and we propose a simple yet robust method to safeguard the black-box policy of any autonomous agent, ensuring all its actions are verifiably safe for the society.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93093"} +{"video_file": "xRdpCOdghl_39028858.mp4", "openreview_id": "xRdpCOdghl", "slideslive_id": 39028858, "venue": "nips2024", "title": "Enhancing Semi-Supervised Learning via Representative and Diverse Sample Selection", "status": "Poster", "keywords": "Semi-supervised learning;Sample selection;Low-budget learning", "tldr": "Proposed a representative and diversified sample selection method to select data for annotation from the unlabeled data to improve the performance of semi-supervised learning approaches.", "abstract": "Semi-Supervised Learning (SSL) has become a preferred paradigm in many deep learning tasks, which reduces the need for human labor. Previous studies primarily focus on effectively utilising the labelled and unlabeled data to improve performance. However, we observe that how to select samples for labelling also significantly impacts performance, particularly under extremely low-budget settings. The sample selection task in SSL has been under-explored for a long time. To fill in this gap, we propose a Representative and Diverse Sample Selection approach (RDSS). By adopting a modified Frank-Wolfe algorithm to minimise a novel criterion \u03b1-Maximum Mean Discrepancy (\u03b1-MMD), RDSS samples a representative and diverse subset for annotation from the unlabeled data. We demonstrate that minimizing \u03b1-MMD enhances the generalization ability of low-budget learning. Experimental results show that RDSS consistently improves the performance of several popular SSL frameworks and outperforms the state-of-the-art sample selection approaches used in Active Learning (AL) and Semi-Supervised Active Learning (SSAL), even with constrained annotation budgets. Our code is available at RDSS.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/93085"} +{"video_file": "xZKXGvLB0c_39027337.mp4", "openreview_id": "xZKXGvLB0c", "slideslive_id": 39027337, "venue": "nips2024", "title": "Causal vs. 
Anticausal merging of predictors", "status": "Poster", "keywords": "Causality;Merging of predictors;Causal vs Anticausal;Maximum Entropy", "tldr": "We study the asymmetries produced in the merging of predictors whenever we have causal information.", "abstract": "We study the differences arising from merging predictors in the causal and anticausal directions using the same data. In particular we study the asymmetries that arise in a simple model where we merge the predictors using one binary variable as target and two continuous variables as predictors. We use Causal Maximum Entropy (CMAXENT) as inductive bias to merge the predictors, however, we expect similar differences to hold also when we use other merging methods that take into account asymmetries between cause and effect. We show that if we observe all bivariate distributions, the CMAXENT solution reduces to a logistic regression in the causal direction and Linear Discriminant Analysis (LDA) in the anticausal direction. Furthermore, we study how the decision boundaries of these two solutions differ whenever we observe only some of the bivariate distributions, with implications for Out-Of-Variable (OOV) generalisation.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/93078"} +{"video_file": "xZxXNhndXU_39028235.mp4", "openreview_id": "xZxXNhndXU", "slideslive_id": 39028235, "venue": "nips2024", "title": "Dynamic 3D Gaussian Fields for Urban Areas", "status": "Spotlight", "keywords": "Neural Rendering;Gaussian Splatting;Dynamic Urban Areas", "tldr": "Given a set of heterogeneous input sequences of a common geographic area, we optimize a single dynamic scene representation that permits rendering of arbitrary viewpoints and scene configurations at interactive speeds.", "abstract": "We present an efficient neural 3D scene representation for novel-view synthesis (NVS) in large-scale, dynamic urban areas. Existing works are not well suited for applications like mixed-reality or closed-loop simulation due to their limited visual quality and non-interactive rendering speeds. Recently, rasterization-based approaches have achieved high-quality NVS at impressive speeds. However, these methods are limited to small-scale, homogeneous data, i.e. they cannot handle severe appearance and geometry variations due to weather, season, and lighting and do not scale to larger, dynamic areas with thousands of images. We propose 4DGF, a neural scene representation that scales to large-scale dynamic urban areas, handles heterogeneous input data, and substantially improves rendering speeds. We use 3D Gaussians as an efficient geometry scaffold while relying on neural fields as a compact and flexible appearance model. We integrate scene dynamics via a scene graph at global scale while modeling articulated motions on a local level via deformations. This decomposed approach enables flexible scene composition suitable for real-world applications.
In experiments, we surpass the state-of-the-art by over 3 dB in PSNR and more than 200x in rendering speed.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93077"} +{"video_file": "xavWvnJTST_39028149.mp4", "openreview_id": "xavWvnJTST", "slideslive_id": 39028149, "venue": "nips2024", "title": "Feedback control guides credit assignment in recurrent neural networks", "status": "Poster", "keywords": "biologically-plausible learning;RNNs;motor control;feedback control", "tldr": "Feedback control may enable biological recurrent neural networks to achieve accurate and efficient credit assignment, facilitating real-time learning and adaptation in behavior generation.", "abstract": "How do brain circuits learn to generate behaviour? While significant strides have been made in understanding learning in artificial neural networks, applying this knowledge to biological networks remains challenging. For instance, while backpropagation is known to perform accurate credit assignment of error in artificial neural networks, how a similarly powerful process can be realized within the constraints of biological circuits remains largely unclear. One of the major challenges is that the brain's extensive recurrent connectivity requires the propagation of error through both space and time, a problem that is notoriously difficult to solve in vanilla recurrent neural networks. Moreover, the extensive feedback connections in the brain are known to influence forward network activity, but the interaction between feedback-driven activity changes and local, synaptic plasticity-based learning is not fully understood. Building on our previous work modelling motor learning, this work investigates the mechanistic properties of pre-trained networks with feedback control on a standard motor task. We show that feedback control of the ongoing recurrent network dynamics approximates the optimal first-order gradient with respect to the network activities, allowing for rapid, ongoing movement correction. Moreover, we show that trial-by-trial adaptation to a persistent perturbation using a local, biologically plausible learning rule that integrates recent activity and error feedback is both more accurate and more efficient with feedback control during learning, due to the decoupling of the recurrent network dynamics and the injection of an adaptive, second-order gradient into the network dynamics. Thus, our results suggest that feedback control may guide credit assignment in biological recurrent neural networks, enabling both rapid and efficient learning in the brain.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/93074"} +{"video_file": "xcF2VbyZts_39027815.mp4", "openreview_id": "xcF2VbyZts", "slideslive_id": 39027815, "venue": "nips2024", "title": "SocialGPT: Prompting LLMs for Social Relation Reasoning via Greedy Segment Optimization", "status": "Poster", "keywords": "Social Relation Reasoning;Large Language Models;Foundation Models;Prompt Optimization", "tldr": "We present SocialGPT, a modular framework with greedy segment prompt optimization for social relation reasoning, which attains competitive results while also providing interpretable explanations.", "abstract": "Social relation reasoning aims to identify relation categories such as friends, spouses, and colleagues from images. 
While current methods adopt the paradigm of training a dedicated network end-to-end using labeled image data, they are limited in terms of generalizability and interpretability. To address these issues, we first present a simple yet well-crafted framework named SocialGPT, which combines the perception capability of Vision Foundation Models (VFMs) and the reasoning capability of Large Language Models (LLMs) within a modular framework, providing a strong baseline for social relation recognition. Specifically, we instruct VFMs to translate image content into a textual social story, and then utilize LLMs for text-based reasoning. SocialGPT introduces systematic design principles to adapt VFMs and LLMs separately and bridge their gaps. Without additional model training, it achieves competitive zero-shot results on two databases while offering interpretable answers, as LLMs can generate language-based explanations for the decisions. The manual prompt design process for LLMs at the reasoning phase is tedious and an automated prompt optimization method is desired. As we essentially convert a visual classification task into a generative task of LLMs, automatic prompt optimization encounters a unique long prompt optimization issue. To address this issue, we further propose the Greedy Segment Prompt Optimization (GSPO), which performs a greedy search by utilizing gradient information at the segment level. Experimental results show that GSPO significantly improves performance, and our method also generalizes to different image styles. The code is available at https://github.com/Mengzibin/SocialGPT.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93072"} +{"video_file": "xcqSOfHt4g_39024647.mp4", "openreview_id": "xcqSOfHt4g", "slideslive_id": 39024647, "venue": "nips2024", "title": "Simplified and Generalized Masked Diffusion for Discrete Data", "status": "Poster", "keywords": "diffusion;discrete;masked diffusion;absorbing diffusion;diffusion model", "tldr": "A simplified and generalized framework for training masked discrete diffusion models", "abstract": "Masked (or absorbing) diffusion is actively explored as an alternative to autoregressive models for generative modeling of discrete data. However, existing work in this area has been hindered by unnecessarily complex model formulations and unclear relationships between different perspectives, leading to suboptimal parameterization, training objectives, and ad hoc adjustments to counteract these issues. In this work, we aim to provide a simple and general framework that unlocks the full potential of masked diffusion models. We show that the continuous-time variational objective of masked diffusion models is a simple weighted integral of cross-entropy losses. Our framework also enables training generalized masked diffusion models with state-dependent masking schedules. When evaluated by perplexity, our models trained on OpenWebText surpass prior diffusion language models at GPT-2 scale and demonstrate superior performance on 4 out of 5 zero-shot language modeling tasks. 
Furthermore, our models vastly outperform previous discrete diffusion models on pixel-level image modeling, achieving 2.75 (CIFAR-10) and 3.40 (ImageNet 64x64) bits per dimension that are better than autoregressive models of similar sizes.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/93071"} +{"video_file": "xnmm1jThkv_39024893.mp4", "openreview_id": "xnmm1jThkv", "slideslive_id": 39024893, "venue": "nips2024", "title": "Hybrid Top-Down Global Causal Discovery with Local Search for Linear and Nonlinear Additive Noise Models", "status": "Poster", "keywords": "global causal discovery;additive noise model;local structure", "tldr": "We present a novel hybrid method for causal discovery that leverages local causal relationships in SEMs to construct a compact hierarchical topological sort, followed by a novel nonparametric constraint-based method for efficient edge discovery.", "abstract": "Learning the unique directed acyclic graph corresponding to an unknown causal model is a challenging task. Methods based on functional causal models can identify a unique graph, but either suffer from the curse of dimensionality or impose strong parametric assumptions. To address these challenges, we propose a novel hybrid approach for global causal discovery in observational data that leverages local causal substructures. We first present a topological sorting algorithm that leverages ancestral relationships in linear structural causal models to establish a compact top-down hierarchical ordering, encoding more causal information than linear orderings produced by existing methods. We demonstrate that this approach generalizes to nonlinear settings with arbitrary noise. We then introduce a nonparametric constraint-based algorithm that prunes spurious edges by searching for local conditioning sets, achieving greater accuracy than current methods. We provide theoretical guarantees for correctness and worst-case polynomial time complexities, with empirical validation on synthetic data.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/93064"} +{"video_file": "xoCFd1WKpf_39024859.mp4", "openreview_id": "xoCFd1WKpf", "slideslive_id": 39024859, "venue": "nips2024", "title": "Unified Lexical Representation for Interpretable Visual-Language Alignment", "status": "Poster", "keywords": "Multi-modal;Alignment;Retrieval;Sparse Retrieval;Lexical Representation", "tldr": "We introduce LexVLA, a interpretable Visual-Language Alignment framework by learning a unified lexical representation for both modalities without complex design.", "abstract": "Visual-Language Alignment (VLA) has gained a lot of attention since CLIP's groundbreaking work. Although CLIP performs well, the typical direct latent feature alignment lacks clarity in its representation and similarity scores. On the other hand, lexical representation, a vector whose element represents the similarity between the sample and a word from the vocabulary, is a natural sparse representation and interpretable, providing exact matches for individual words. However, lexical representations are difficult to learn due to no ground-truth supervision and false-discovery issues, and thus requires complex design to train effectively. In this paper, we introduce LexVLA, a more interpretable VLA framework by learning a unified lexical representation for both modalities without complex design. 
We use DINOv2 as our visual model for its local-inclined features and Llama 2, a generative language model, to leverage its in-context lexical prediction ability. To avoid the false discovery, we propose an overuse penalty to refrain the lexical representation from falsely frequently activating meaningless words. We demonstrate that these two pre-trained uni-modal models can be well-aligned by fine-tuning on the modest multi-modal dataset and avoid intricate training configurations. On cross-modal retrieval benchmarks, LexVLA, trained on the CC-12M multi-modal dataset, outperforms baselines fine-tuned on larger datasets (e.g., YFCC15M) and those trained from scratch on even bigger datasets (e.g., 1.1B data, including CC-12M). We conduct extensive experiments to analyze LexVLA. Codes are available at https://github.com/Clementine24/LexVLA.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93063"} +{"video_file": "xqc8yyhScL_39026803.mp4", "openreview_id": "xqc8yyhScL", "slideslive_id": 39026803, "venue": "nips2024", "title": "Is Programming by Example Solved by LLMs?", "status": "Poster", "keywords": "programming by example;program synthesis;LLM;code generation", "tldr": "We explore methods for doing PBE with LLMs", "abstract": "Programming-by-Examples (PBE) aims to generate an algorithm from input-output examples. Such systems are practically and theoretically important: from an end-user perspective, they are deployed to millions of people, and from an AI perspective, PBE corresponds to a very general form of few-shot inductive inference. Given the success of Large Language Models (LLMs) in code-generation tasks, we investigate here the extent to which LLMs can be said to have \"solved\" PBE. We experiment on classic domains such as lists and strings, and an uncommon graphics programming domain not well represented in typical pretraining data. We find that pretrained models are not effective at PBE, but that they can be fine-tuned for much higher performance, provided the test problems are in-distribution. We analyze empirically what causes these models to succeed and fail, and take steps toward understanding how to achieve better out-of-distribution generalization. Collectively these results suggest that LLMs make strong progress toward solving the typical suite of PBE tasks, potentially increasing the flexibility and applicability of PBE systems, while also identifying ways in which LLMs still fall short.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/93059"} +{"video_file": "xrbgXJomJp_39027527.mp4", "openreview_id": "xrbgXJomJp", "slideslive_id": 39027527, "venue": "nips2024", "title": "Inverse Factorized Soft Q-Learning for Cooperative Multi-agent Imitation Learning", "status": "Poster", "keywords": "Multi-agent Imitation Learning;Inverse Q Learning;Centralized Learning", "tldr": "An Inverse Q-Learning Algorithm for Multi-Agent Imitation Learning", "abstract": "This paper concerns imitation learning (IL) in cooperative multi-agent systems. The learning problem under consideration poses several challenges, characterized by high-dimensional state and action spaces and intricate inter-agent dependencies. In a single-agent setting, IL was shown to be done efficiently via an inverse soft-Q learning process. 
However, extending this framework to a multi-agent context introduces the need to simultaneously learn both local value functions to capture local observations and individual actions, and a joint value function for exploiting centralized learning. In this work, we introduce a new multi-agent IL algorithm designed to address these challenges. Our approach enables the centralized learning by leveraging mixing networks to aggregate decentralized Q functions. We further establish conditions for the mixing networks under which the multi-agent IL objective function exhibits convexity within the Q function space. We present extensive experiments conducted on some challenging multi-agent game environments, including an advanced version of the Star-Craft multi-agent challenge (SMACv2), which demonstrates the effectiveness of our algorithm.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93057"} +{"video_file": "xtK3gZjQDC_39025014.mp4", "openreview_id": "xtK3gZjQDC", "slideslive_id": 39025014, "venue": "nips2024", "title": "Towards Human-AI Complementarity with Prediction Sets", "status": "Poster", "keywords": "conformal prediction;decision support systems;human-ai complementarity", "tldr": "Prediction sets based on conformal prediction can be suboptimal in achieving human-ai complementarity. We show that finding optimal prediction sets is NP-hard and we give a greedy algorithm with provably improved performance than conformal prediction", "abstract": "Decision support systems based on prediction sets have proven to be effective at helping human experts solve classification tasks. Rather than providing single-label predictions, these systems provide sets of label predictions constructed using conformal prediction, namely prediction sets, and ask human experts to predict label values from these sets. In this paper, we first show that the prediction sets constructed using conformal prediction are, in general, suboptimal in terms of average accuracy. Then, we show that the problem of finding the optimal prediction sets under which the human experts achieve the highest average accuracy is NP-hard. More strongly, unless P = NP, we show that the problem is hard to approximate to any factor less than the size of the label set. However, we introduce a simple and efficient greedy algorithm that, for a large class of expert models and non-conformity scores, is guaranteed to find prediction sets that provably offer equal or greater performance than those constructed using conformal prediction. 
Further, using a simulation study with both synthetic and real expert predictions, we demonstrate that, in practice, our greedy algorithm finds near-optimal prediction sets offering greater performance than conformal prediction.", "primary_area": "human-AI_interaction", "site": "https://neurips.cc/virtual/2024/poster/93055"} +{"video_file": "xutrKezbPF_39027292.mp4", "openreview_id": "xutrKezbPF", "slideslive_id": 39027292, "venue": "nips2024", "title": "CIFD: Controlled Information Flow to Enhance Knowledge Distillation", "status": "Poster", "keywords": "Knowledge Distillation;Information Bottleneck;Rate-Distortion;Teaching Assistant;CLIP", "tldr": "This paper proposes a Rate-Distortion theory based module that mimics teaching assistants for knowledge distillation, while being computationally far cheaper to train than conventional TAs.", "abstract": "Knowledge Distillation is the mechanism by which the insights gained from a larger teacher model are transferred to a smaller student model. However, the transfer suffers when the teacher model is significantly larger than the student. To overcome this, prior works have proposed training intermediately sized models, Teacher Assistants (TAs) to help the transfer process. However, training TAs is expensive, as training these models is a knowledge transfer task in itself. Further, these TAs are larger than the student model and training them especially in large data settings can be computationally intensive. In this paper, we propose a novel framework called Controlled Information Flow for Knowledge Distillation (CIFD) consisting of two components. First, we propose a significantly smaller alternatives to TAs, the Rate-Distortion Module (RDM) which uses the teacher's penultimate layer embedding and a information rate-constrained bottleneck layer to replace the Teacher Assistant model. RDMs are smaller and easier to train than TAs, especially in large data regimes, since they operate on the teacher embeddings and do not need to relearn low level input feature extractors. Also, by varying the information rate across the bottleneck, RDMs can replace TAs of different sizes. Secondly, we propose the use of Information Bottleneck Module in the student model, which is crucial for regularization in the presence of a large number of RDMs. We show comprehensive state-of-the-art results of the proposed method over large datasets like Imagenet. Further, we show the significant improvement in distilling CLIP like models over a huge 12M image-text dataset. It outperforms CLIP specialized distillation methods across five zero-shot classification datasets and two zero-shot image-text retrieval datasets.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93054"} +{"video_file": "xvYI7TCiU6_39024598.mp4", "openreview_id": "xvYI7TCiU6", "slideslive_id": 39024598, "venue": "nips2024", "title": "Measuring Mutual Policy Divergence for Multi-Agent Sequential Exploration", "status": "Poster", "keywords": "multi-agent reinforcement learning;sequential updating;exploration;Cauchy-Schwarz divergence", "tldr": "A method that maximizes conditional Cauchy-Shwarz policy divergence between agents and between episodes to enhance exploration and heterogeneity for MARL.", "abstract": "Despite the success of Multi-Agent Reinforcement Learning (MARL) algorithms in cooperative tasks, previous works, unfortunately, face challenges in heterogeneous scenarios since they simply disable parameter sharing for agent specialization. 
Sequential updating scheme was thus proposed, naturally diversifying agents by encouraging agents to learn from preceding ones. However, the exploration strategy in sequential scheme has not been investigated. Benefiting from updating one-by-one, agents have the access to the information from preceding agents. Thus, in this work, we propose to exploit the preceding information to enhance exploration and heterogeneity sequentially. We present Multi-Agent Divergence Policy Optimization (MADPO), equipped with mutual policy divergence maximization framework. We quantify the policy discrepancies between episodes to enhance exploration and between agents to heterogenize agents, termed intra-agent and inter-agent policy divergence. To address the issue that traditional divergence measurements lack stability and directionality, we propose to employ the conditional Cauchy-Schwarz divergence to provide entropy-guided exploration incentives. Extensive experiments show that the proposed method outperforms state-of-the-art sequential updating approaches in two challenging multi-agent tasks with various heterogeneous scenarios. Source code is available at \\url{https://github.com/hwdou6677/MADPO}.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93051"} +{"video_file": "xxY8d4rnSb_39026214.mp4", "openreview_id": "xxY8d4rnSb", "slideslive_id": 39026214, "venue": "nips2024", "title": "ManiPose: Manifold-Constrained Multi-Hypothesis 3D Human Pose Estimation", "status": "Poster", "keywords": "human pose estimation;depth ambiguity;multiple choice learning", "tldr": "We prove previous 2D-to-3D human pose lifting methods suffer from topology inconsistencies invisible to standard evaluation metrics, and propose both new metrics and a new method circumventing these issues via constrained multiple hypotheses.", "abstract": "We propose ManiPose, a manifold-constrained multi-hypothesis model for human-pose 2D-to-3D lifting. We provide theoretical and empirical evidence that, due to the depth ambiguity inherent to monocular 3D human pose estimation, traditional regression models suffer from pose-topology consistency issues, which standard evaluation metrics (MPJPE, P-MPJPE and PCK) fail to assess. ManiPose addresses depth ambiguity by proposing multiple candidate 3D poses for each 2D input, each with its estimated plausibility. Unlike previous multi-hypothesis approaches, ManiPose forgoes generative models, greatly facilitating its training and usage. By constraining the outputs to lie on the human pose manifold, ManiPose guarantees the consistency of all hypothetical poses, in contrast to previous works. We showcase the performance of ManiPose on real-world datasets, where it outperforms state-of-the-art models in pose consistency by a large margin while being very competitive on the MPJPE metric.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93050"} +{"video_file": "xzCuBjHQbS_39026001.mp4", "openreview_id": "xzCuBjHQbS", "slideslive_id": 39026001, "venue": "nips2024", "title": "Random Function Descent", "status": "Poster", "keywords": "optimization;hyperparameter tuning;Gaussian processes;random functions;random fields;average case analysis;bayesian optimization", "tldr": "Grounding step size heuristics in average case analysis.", "abstract": "Classical worst-case optimization theory neither explains the success of optimization in machine learning, nor does it help with step size selection. 
In this paper we demonstrate the viability and advantages of replacing the classical 'convex function' framework with a 'random function' framework. With complexity O(n^3 d^3), where n is the number of steps and d the number of dimensions, Bayesian optimization with gradients has not been viable in large dimension so far. By bridging the gap between Bayesian optimization (i.e. random function optimization theory) and classical optimization we establish viability. Specifically, we use a 'stochastic Taylor approximation' to rediscover gradient descent, which is scalable in high dimension due to O(nd) complexity. This rediscovery yields a specific step size schedule we call Random Function Descent (RFD). The advantage of this random function framework is that RFD is scale invariant and that it provides a theoretical foundation for common step size heuristics such as gradient clipping and gradual learning rate warmup.", "primary_area": "optimization", "site": "https://neurips.cc/virtual/2024/poster/93048"} +{"video_file": "y2fAmldTIf_39025334.mp4", "openreview_id": "y2fAmldTIf", "slideslive_id": 39025334, "venue": "nips2024", "title": "HEPrune: Fast Private Training of Deep Neural Networks With Encrypted Data Pruning", "status": "Poster", "keywords": "Confidential Training;Privacy;Pruning;Cryptographic Computing", "tldr": "Efficient confidential deep neural network training with encrypted data pruning", "abstract": "Non-interactive cryptographic computing, Fully Homomorphic Encryption (FHE), provides a promising solution for private neural network training on encrypted data. One challenge of FHE-based private training is its large computational overhead, especially the multiple rounds of forward and backward execution on each encrypted data sample. Considering the existence of largely redundant data samples, pruning them will significantly speed up the training, as proven in plain non-FHE training. Executing the data pruning of encrypted data on the server side is not trivial since the knowledge calculation of data pruning needs complex and expensive executions on encrypted data. There is a lack of FHE-based data pruning protocol for efficient, private training. In this paper, we propose, \\textit{HEPrune}, to construct a FHE data-pruning protocol and then design an FHE-friendly data-pruning algorithm under client-aided or non-client-aided settings, respectively. We also observed that data sample pruning may not always remove ciphertexts, leaving large empty slots and limiting the effects of data pruning. Thus, in HEPrune, we further propose ciphertext-wise pruning to reduce ciphertext computation numbers without hurting accuracy. Experimental results show that our work can achieve a 16\u00d7 speedup with only a 0.6 accuracy drop over prior work.
The code is publicly available at \\href{https://github.com/UCF-Lou-Lab-PET/Private-Data-Prune}.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/93046"} +{"video_file": "y6qhVtFG77_39028497.mp4", "openreview_id": "y6qhVtFG77", "slideslive_id": 39028497, "venue": "nips2024", "title": "NeuroBOLT: Resting-state EEG-to-fMRI Synthesis with Multi-dimensional Feature Mapping", "status": "Poster", "keywords": "EEG-to-fMRI synthesis;EEG;fMRI", "tldr": "We propose NeuroBOLT, a versatile deep-learning solution for projecting scalp EEG to BOLD fMRI signals.", "abstract": "Functional magnetic resonance imaging (fMRI) is an indispensable tool in modern neuroscience, providing a non-invasive window into whole-brain dynamics at millimeter-scale spatial resolution. However, fMRI is constrained by issues such as high operation costs and immobility. With the rapid advancements in cross-modality synthesis and brain decoding, the use of deep neural networks has emerged as a promising solution for inferring whole-brain, high-resolution fMRI features directly from electroencephalography (EEG), a more widely accessible and portable neuroimaging modality. Nonetheless, the complex projection from neural activity to fMRI hemodynamic responses and the spatial ambiguity of EEG pose substantial challenges both in modeling and interpretability. Relatively few studies to date have developed approaches for EEG-fMRI translation, and although they have made significant strides, the inference of fMRI signals in a given study has been limited to a small set of brain areas and to a single condition (i.e., either resting-state or a specific task). The capability to predict fMRI signals in other brain areas, as well as to generalize across conditions, remain critical gaps in the field. To tackle these challenges, we introduce a novel and generalizable framework: NeuroBOLT, i.e., Neuro-to-BOLD Transformer, which leverages multi-dimensional representation learning from temporal, spatial, and spectral domains to translate raw EEG data to the corresponding fMRI activity signals across the brain. Our experiments demonstrate that NeuroBOLT effectively reconstructs unseen resting-state fMRI signals from primary sensory, high-level cognitive areas, and deep subcortical brain regions, achieving state-of-the-art accuracy with the potential to generalize across varying conditions and sites, which significantly advances the integration of these two modalities.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/93044"} +{"video_file": "y8P633E5HQ_39026336.mp4", "openreview_id": "y8P633E5HQ", "slideslive_id": 39026336, "venue": "nips2024", "title": "Equivariant Machine Learning on Graphs with Nonlinear Spectral Filters", "status": "Poster", "keywords": "graph machine learning;graph signal processing;equivariant machine learning;geometric deep learning;spectral method;nonlinear method", "tldr": "We present a novel equivariant graph neural network based on nonlinear operations in the spectral domain.", "abstract": "Equivariant machine learning is an approach for designing deep learning models that respect the symmetries of the problem, with the aim of reducing model complexity and improving generalization. In this paper, we focus on an extension of shift equivariance, which is the basis of convolution networks on images, to general graphs. Unlike images, graphs do not have a natural notion of domain translation. 
Therefore, we consider the graph functional shifts as the symmetry group: the unitary operators that commute with the graph shift operator. Notably, such symmetries operate in the signal space rather than directly in the spatial space. We remark that each linear filter layer of a standard spectral graph neural network (GNN) commutes with graph functional shifts, but the activation function breaks this symmetry. Instead, we propose nonlinear spectral filters (NLSFs) that are fully equivariant to graph functional shifts and show that they have universal approximation properties. The proposed NLSFs are based on a new form of spectral domain that is transferable between graphs. We demonstrate the superior performance of NLSFs over existing spectral GNNs in node and graph classification benchmarks.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/93041"} +{"video_file": "y929esCZNJ_39027105.mp4", "openreview_id": "y929esCZNJ", "slideslive_id": 39027105, "venue": "nips2024", "title": "MomentumSMoE: Integrating Momentum into Sparse Mixture of Experts", "status": "Poster", "keywords": "Sparse Mixture of Experts;optimization;gradient descent;momentum;adam", "tldr": "We first establish a connection between the dynamics of the expert representations in SMoEs and gradient descent on a multi-objective optimization problem and then integrate momentum into SMoE to develop a new family of SMoEs, named MomentumSMoE.", "abstract": "Sparse Mixture of Experts (SMoE) has become the key to unlocking unparalleled scalability in deep learning. SMoE has the potential to exponentially increase in parameter count while maintaining the efficiency of the model by only activating a small subset of these parameters for a given sample. However, it has been observed that SMoE suffers from unstable training and has difficulty adapting to new distributions, leading to the model's lack of robustness to data contamination. To overcome these limitations, we first establish a connection between the dynamics of the expert representations in SMoEs and gradient descent on a multi-objective optimization problem. Leveraging our framework, we then integrate momentum into SMoE and propose a new family of SMoEs, named MomentumSMoE. We theoretically prove and numerically validate that MomentumSMoE is more stable and robust than SMoE. In particular, we verify the advantages of MomentumSMoE over SMoE on a variety of practical tasks including ImageNet-1K object recognition and WikiText-103 language modeling. We demonstrate the applicability of MomentumSMoE to many types of SMoE models, including those in the Sparse MoE model for vision (V-MoE) and the Generalist Language Model (GLaM). 
We also show that other advanced momentum-based optimization methods, such as Adam, can be easily incorporated into the MomentumSMoE framework for designing new SMoE models with even better performance, almost negligible additional computation cost, and simple implementations.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93039"} +{"video_file": "y9huwsnGRJ_39027641.mp4", "openreview_id": "y9huwsnGRJ", "slideslive_id": 39027641, "venue": "nips2024", "title": "Continuously Learning, Adapting, and Improving: A Dual-Process Approach to Autonomous Driving", "status": "Poster", "keywords": "Autonomous Driving;Dual-process System;Knowledge-Driven;Vision Language Model", "tldr": "LeapAD, a new autonomous driving paradigm inspired by human cognition, improves adaptability and interpretability in complex scenarios through dual-process decision-making and continuous learning from past experiences.", "abstract": "Autonomous driving has advanced significantly due to sensors, machine learning, and artificial intelligence improvements. However, prevailing methods struggle with intricate scenarios and causal relationships, hindering adaptability and interpretability in varied environments. To address the above problems, we introduce LeapAD, a novel paradigm for autonomous driving inspired by the human cognitive process. Specifically, LeapAD emulates human attention by selecting critical objects relevant to driving decisions, simplifying environmental interpretation, and mitigating decision-making complexities. Additionally, LeapAD incorporates an innovative dual-process decision-making module, which consists of an Analytic Process (System-II) for thorough analysis and reasoning, along with a Heuristic Process (System-I) for swift and empirical processing. The Analytic Process leverages its logical reasoning to accumulate linguistic driving experience, which is then transferred to the Heuristic Process by supervised fine-tuning. Through reflection mechanisms and a growing memory bank, LeapAD continuously improves itself from past mistakes in a closed-loop environment. Closed-loop testing in CARLA shows that LeapAD outperforms all methods relying solely on camera input, requiring 1-2 orders of magnitude less labeled data. Experiments also demonstrate that as the memory bank expands, the Heuristic Process with only 1.8B parameters can inherit the knowledge from a GPT-4 powered Analytic Process and achieve continuous performance improvement. Project page: https://pjlab-adg.github.io/LeapAD", "primary_area": "robotics", "site": "https://neurips.cc/virtual/2024/poster/93038"} +{"video_file": "y9zIRxshzj_39025847.mp4", "openreview_id": "y9zIRxshzj", "slideslive_id": 39025847, "venue": "nips2024", "title": "Causal Discovery from Event Sequences by Local Cause-Effect Attribution", "status": "Poster", "keywords": "causality;causal discovery;event sequences", "tldr": "We introduce a causal discovery method for event sequences, that matches individual events to individual causing events.", "abstract": "Sequences of events, such as crashes in the stock market or outages in a network, contain strong temporal dependencies, whose understanding is crucial to react to and influence future events. In this paper, we study the problem of discovering the underlying causal structure from event sequences. To this end, we introduce a new causal model, where individual events of the cause trigger events of the effect with dynamic delays. 
We show that in contrast to existing methods based on Granger causality, our model is identifiable for both instant and delayed effects.\nWe base our approach on the Algorithmic Markov Condition, by which we identify the true causal network as the one that minimizes the Kolmogorov complexity. As the Kolmogorov complexity is not computable, we instantiate our model using Minimum Description Length and show that the resulting score identifies the causal direction. To discover causal graphs, we introduce the Cascade algorithm, which adds edges in topological order. Extensive evaluation shows that Cascade outperforms existing methods in settings with instantaneous effects, noise, and multiple colliders, and discovers insightful causal graphs on real-world data.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/93036"} +{"video_file": "yAAQWBMGiT_39024640.mp4", "openreview_id": "yAAQWBMGiT", "slideslive_id": 39024640, "venue": "nips2024", "title": "Sketchy Moment Matching: Toward Fast and Provable Data Selection for Finetuning", "status": "Poster", "keywords": "Data selection;Finetuning;Sketching;Johnson-Lindenstrauss transform", "tldr": "We present a theory on data selection for high-dimensional ridge regression that inspires a fast and effective data selection algorithm for finetuning.", "abstract": "We revisit data selection in a modern context of finetuning from a fundamental perspective. Extending the classical wisdom of variance minimization in low dimensions to high-dimensional finetuning, our generalization analysis unveils the importance of additionally reducing bias induced by low-rank approximation. Inspired by the variance-bias tradeoff in high dimensions from the theory, we introduce Sketchy Moment Matching (SkMM), a scalable data selection scheme with two stages. (i) First, the bias is controlled using gradient sketching that explores the finetuning parameter space for an informative low-dimensional subspace S; (ii) then the variance is reduced over S via moment matching between the original and selected datasets. Theoretically, we show that gradient sketching is fast and provably accurate: selecting n samples by reducing variance over S preserves the fast-rate generalization O(dim(S)/n), independent of the parameter dimension. Empirically, we concretize the variance-bias balance via synthetic experiments and demonstrate the effectiveness of SkMM for finetuning in real vision tasks.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93035"} +{"video_file": "yBHbeSpwYS_39026023.mp4", "openreview_id": "yBHbeSpwYS", "slideslive_id": 39026023, "venue": "nips2024", "title": "In Pursuit of Causal Label Correlations for Multi-label Image Recognition", "status": "Poster", "keywords": "Multi-label;Label correlation;Causal intervention", "tldr": "Utilizing causal intervention theory to capture causal label correlations.", "abstract": "Multi-label image recognition aims to predict all objects present in an input image. A common belief is that modeling the correlations between objects is beneficial for multi-label recognition. However, this belief has been recently challenged as label correlations may mislead the classifier in testing, due to the possible contextual bias in training. Accordingly, a few of recent works not only discarded label correlation modeling, but also advocated to remove contextual information for multi-label image recognition.
This work explicitly explores label correlations for multi-label image recognition based on a principled causal intervention approach. With causal intervention, we pursue causal label correlations and suppress spurious label correlations, as the former tend to convey useful contextual cues while the later may mislead the classifier. Specifically, we decouple label-specific features with a Transformer decoder attached to the backbone network, and model the confounders which may give rise to spurious correlations by clustering spatial features of all training images. Based on label-specific features and confounders, we employ a cross-attention module to implement causal intervention, quantifying the causal correlations from all object categories to each predicted object category. Finally, we obtain image labels by combining the predictions from decoupled features and causal label correlations. Extensive experiments clearly validate the effectiveness of our approach for multi-label image recognition in both common and cross-dataset settings.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/93033"} +{"video_file": "yBrxziByeG_39028852.mp4", "openreview_id": "yBrxziByeG", "slideslive_id": 39028852, "venue": "nips2024", "title": "Text-DiFuse: An Interactive Multi-Modal Image Fusion Framework based on Text-modulated Diffusion Model", "status": "Spotlight", "keywords": "Image fusion;multi-modal fusion;text;diffusion", "tldr": "Interactive Multi-modal Image Fusion", "abstract": "Existing multi-modal image fusion methods fail to address the compound degradations presented in source images, resulting in fusion images plagued by noise, color bias, improper exposure, etc. Additionally, these methods often overlook the specificity of foreground objects, weakening the salience of the objects of interest within the fused images. To address these challenges, this study proposes a novel interactive multi-modal image fusion framework based on the text-modulated diffusion model, called Text-DiFuse. First, this framework integrates feature-level information integration into the diffusion process, allowing adaptive degradation removal and multi-modal information fusion. This is the first attempt to deeply and explicitly embed information fusion within the diffusion process, effectively addressing compound degradation in image fusion. Second, by embedding the combination of the text and zero-shot location model into the diffusion fusion process, a text-controlled fusion re-modulation strategy is developed. This enables user-customized text control to improve fusion performance and highlight foreground objects in the fused images. Extensive experiments on diverse public datasets show that our Text-DiFuse achieves state-of-the-art fusion performance across various scenarios with complex degradation. Moreover, the semantic segmentation experiment validates the significant enhancement in semantic performance achieved by our text-controlled fusion re-modulation strategy. 
The code is publicly available at https://github.com/Leiii-Cao/Text-DiFuse.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93032"} +{"video_file": "yCh1z6Dcto_39027087.mp4", "openreview_id": "yCh1z6Dcto", "slideslive_id": 39027087, "venue": "nips2024", "title": "Stepping Forward on the Last Mile", "status": "Poster", "keywords": "On-device model adaptation;Fixed-point forward gradient learning;Low memory;Edge devices", "tldr": "On-device model adaptation with fixed-point forward gradient learning", "abstract": "Continuously adapting pre-trained models to local data on resource constrained edge devices is the \\emph{last mile} for model deployment. However, as models increase in size and depth, backpropagation requires a large amount of memory, which becomes prohibitive for edge devices. In addition, most existing low power neural processing engines (e.g., NPUs, DSPs, MCUs, etc.) are designed as fixed-point inference accelerators, without training capabilities. Forward gradients, solely based on directional derivatives computed from two forward calls, have been recently used for model training, with substantial savings in computation and memory. However, the performance of quantized training with fixed-point forward gradients remains unclear. In this paper, we investigate the feasibility of on-device training using fixed-point forward gradients, by conducting comprehensive experiments across a variety of deep learning benchmark tasks in both vision and audio domains. We propose a series of algorithm enhancements that further reduce the memory footprint, and the accuracy gap compared to backpropagation. An empirical study on how training with forward gradients navigates in the loss landscape is further explored. Our results demonstrate that on the last mile of model customization on edge devices, training with fixed-point forward gradients is a feasible and practical approach.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/93031"} +{"video_file": "yOe6ajdslI_39028624.mp4", "openreview_id": "yOe6ajdslI", "slideslive_id": 39028624, "venue": "nips2024", "title": "AUC Maximization under Positive Distribution Shift", "status": "Poster", "keywords": "AUC maximization;distribution shift;domain adaptation;PU learning;imbalanced data", "tldr": "In this paper, we propose a method for maximizing the AUC under the positive distribution shift by using labeled positive and unlabeled data in the training distribution and unlabeled data in the test distribution.", "abstract": "Maximizing the area under the receiver operating characteristic curve (AUC) is a popular approach to imbalanced binary classification problems. Existing AUC maximization methods usually assume that training and test distributions are identical. However, this assumption is often violated in practice due to {\\it a positive distribution shift}, where the negative-conditional density does not change but the positive-conditional density can vary. This shift often occurs in imbalanced classification since positive data are often more diverse and time-varying than negative data. To deal with this shift, we theoretically show that the AUC on the test distribution can be expressed by using the positive and marginal training densities and the marginal test density. Based on this result, we can maximize the AUC on the test distribution by using positive and unlabeled data in the training distribution and unlabeled data in the test distribution. 
The proposed method requires only positive labels in the training distribution as supervision. Moreover, the derived AUC has a simple form and thus is easy to implement. The effectiveness of the proposed method is shown with four real-world datasets.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/93025"} +{"video_file": "yQL5tutdaH_39024973.mp4", "openreview_id": "yQL5tutdaH", "slideslive_id": 39024973, "venue": "nips2024", "title": "Toward a Stable, Fair, and Comprehensive Evaluation of Object Hallucination in Large Vision-Language Models", "status": "Poster", "keywords": "Large Vision-Language Models;Multimodal large language models;Multimodal;Object hallucination;Evaluation;Image caption", "tldr": "The paper reveals the correlation between the object hallucination degree and the length of image descriptions, and proposes a stable and comprehensive framework for evaluating object hallucination in large vision-language models..", "abstract": "Given different instructions, large vision-language models (LVLMs) exhibit different degrees of object hallucinations, posing a significant challenge to the evaluation of object hallucinations. Overcoming this challenge, existing object hallucination evaluation methods average the results obtained from a set of instructions. However, these methods fail to provide consistent evaluation across instruction sets that generate image descriptions of significantly different lengths. In this paper, we present the first systematic investigation of the effect of instructions on object hallucinations in LVLMs, with a specific focus on the role played by image description lengths. A valuable finding is that instructions indirectly affect hallucinations through the length of image descriptions. The longer the image description, the higher the object hallucination degree. Accordingly, we fit an informative length-hallucination curve, upon which a fine-grained evaluation framework named LeHaCE is introduced for evaluating object hallucinations at any given image description length. LeHaCE evaluates the object hallucination degree at a uniform image description length to mitigate the effect of description lengths, promoting stability and fairness. Moreover, LeHaCE incorporates the curve slope as an innovative hallucination evaluation metric, reflecting the extent to which the object hallucination degree is affected by the image description length, achieving a more comprehensive evaluation. Experimental results demonstrate that LeHaCE provides a more stable, fair, and comprehensive evaluation of object hallucinations in LVLMs compared to existing methods.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/93023"} +{"video_file": "yRhrVaDOWE_39027026.mp4", "openreview_id": "yRhrVaDOWE", "slideslive_id": 39027026, "venue": "nips2024", "title": "Diffusion-based Curriculum Reinforcement Learning", "status": "Poster", "keywords": "curriculum reinforcement learning;reinforcement learning;diffusion models", "tldr": "A novel diffusion based curriculum reinforcement learning", "abstract": "Curriculum Reinforcement Learning (CRL) is an approach to facilitate the learning process of agents by structuring tasks in a sequence of increasing complexity. Despite its potential, many existing CRL methods struggle to efficiently guide agents toward desired outcomes, particularly in the absence of domain knowledge. 
This paper introduces DiCuRL (Diffusion Curriculum Reinforcement Learning), a novel method that leverages conditional diffusion models to generate curriculum goals. To estimate how close an agent is to achieving its goal, our method uniquely incorporates a $Q$-function and a trainable reward function based on Adversarial Intrinsic Motivation within the diffusion model. Furthermore, it promotes exploration through the inherent noising and denoising mechanism present in the diffusion models and is environment-agnostic. This combination allows for the generation of challenging yet achievable goals, enabling agents to learn effectively without relying on domain knowledge. We demonstrate the effectiveness of DiCuRL in three different maze environments and two robotic manipulation tasks simulated in MuJoCo, where it outperforms or matches nine state-of-the-art CRL algorithms from the literature.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93021"}
+{"video_file": "yRuJqoWoCs_39028304.mp4", "openreview_id": "yRuJqoWoCs", "slideslive_id": 39028304, "venue": "nips2024", "title": "$SE(3)$ Equivariant Ray Embeddings for Implicit Multi-View Depth Estimation", "status": "Poster", "keywords": "$SE(3)$ Equivariance;Stereo Depth Estimation", "tldr": "We propose an $SE(3)$ equivariant model with spherical harmonics ray embeddings and demonstrate its effectiveness in the task of generalized stereo depth estimation.", "abstract": "Incorporating inductive bias by embedding geometric entities (such as rays) as input has proven successful in multi-view learning. However, the methods adopting this technique typically lack equivariance, which is crucial for effective 3D learning. Equivariance serves as a valuable inductive prior, aiding in the generation of robust multi-view features for 3D scene understanding. In this paper, we explore the application of equivariant multi-view learning to depth estimation, not only recognizing its significance for computer vision and robotics but also addressing the limitations of previous research. Most prior studies have either overlooked equivariance in this setting or achieved only approximate equivariance through data augmentation, which often leads to inconsistencies across different reference frames. To address this issue, we propose to embed $SE(3)$ equivariance into the Perceiver IO architecture. We employ Spherical Harmonics for positional encoding to ensure 3D rotation equivariance, and develop a specialized equivariant encoder and decoder within the Perceiver IO architecture. To validate our model, we applied it to the task of stereo depth estimation, achieving state of the art results on real-world datasets without explicit geometric constraints or extensive data augmentation.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93020"}
+{"video_file": "yTTomSJsSW_39026324.mp4", "openreview_id": "yTTomSJsSW", "slideslive_id": 39026324, "venue": "nips2024", "title": "Aligning Large Language Models with Representation Editing: A Control Perspective", "status": "Poster", "keywords": "Large language model;Alignment;Representation editing", "tldr": "We propose a new method to align large language model from a stochastic optimal control perspective.", "abstract": "Aligning large language models (LLMs) with human objectives is crucial for real-world applications. 
However, fine-tuning LLMs for alignment often suffers from unstable training and requires substantial computing resources. Test-time alignment techniques, such as prompting and guided decoding, do not modify the underlying model, and their performance remains dependent on the original model's capabilities. To address these challenges, we propose aligning LLMs through representation editing. The core of our method is to view a pre-trained autoregressive LLM as a discrete-time stochastic dynamical system. To achieve alignment for specific objectives, we introduce external control signals into the state space of this language dynamical system. We train a value function directly on the hidden states according to the Bellman equation, enabling gradient-based optimization to obtain the optimal control signals at test time. Our experiments demonstrate that our method outperforms existing test-time alignment techniques while requiring significantly fewer resources compared to fine-tuning methods. Our code is available at https://github.com/Lingkai-Kong/RE-Control.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/93018"} +{"video_file": "yUckuDjAE0_39027073.mp4", "openreview_id": "yUckuDjAE0", "slideslive_id": 39027073, "venue": "nips2024", "title": "Learning Bregman Divergences with Application to Robustness", "status": "Poster", "keywords": "Bregman divergence;similarity and distance learning;mirror descent;corruption robustness.", "tldr": "Just as the KL divergence is derived from the Shannon entropy, we generate Bregman divergences from learned base functions and apply them to obtain similarity measures for real-world image corruptions, which we then use for robustness training.", "abstract": "We propose a novel and general method to learn Bregman divergences from raw high-dimensional data that measure similarity between images in pixel space. As a prototypical application, we learn divergences that consider real-world corruptions of images (e.g., blur) as close to the original and noisy perturbations as far, even if in\nL\np\n-distance the opposite holds. We also show that the learned Bregman divergence excels on datasets of human perceptual similarity judgment, suggesting its utility in a range of applications. We then define adversarial attacks by replacing the projected gradient descent (PGD) with the mirror descent associated with the learned Bregman divergence, and use them to improve the state-of-the-art in robustness through adversarial training for common image corruptions. 
In particular, for the contrast corruption that was found problematic in prior work we achieve an accuracy that exceeds the $L_p$- and the LPIPS-based adversarially trained neural networks by a margin of 27.16% on the CIFAR-10-C corruption data set.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93016"}
+{"video_file": "yUqUBGioBG_39027795.mp4", "openreview_id": "yUqUBGioBG", "slideslive_id": 39027795, "venue": "nips2024", "title": "Class Distribution Shifts in Zero-Shot Learning: Learning Robust Representations", "status": "Poster", "keywords": "Zero-Shot Learning;Distribution Shift;Out of Distribution Generalization;Robust Representation Learning", "tldr": "This work tackles the challenge of learning data representations robust to class distribution shifts in zero-shot learning, by constructing synthetic data environments and harnessing out-of-distribution generalization techniques.", "abstract": "Zero-shot learning methods typically assume that the new, unseen classes encountered during deployment come from the same distribution as the classes in the training set. However, real-world scenarios often involve class distribution shifts (e.g., in age or gender for person identification), posing challenges for zero-shot classifiers that rely on learned representations from training classes. In this work, we propose and analyze a model that assumes that the attribute responsible for the shift is unknown in advance. We show that in this setting, standard training may lead to non-robust representations. To mitigate this, we develop an algorithm for learning robust representations in which (a) synthetic data environments are constructed via hierarchical sampling, and (b) environment balancing penalization, inspired by out-of-distribution problems, is applied. We show that our algorithm improves generalization to diverse class distributions in both simulations and experiments on real-world datasets.", "primary_area": "evaluation", "site": "https://neurips.cc/virtual/2024/poster/93015"}
+{"video_file": "yVzWlFhpRW_39028033.mp4", "openreview_id": "yVzWlFhpRW", "slideslive_id": 39028033, "venue": "nips2024", "title": "Excluding the Irrelevant: Focusing Reinforcement Learning through Continuous Action Masking", "status": "Poster", "keywords": "Reinforcement Learning;Policy Gradient;Action Masking;Robotics;Continuous Actions", "tldr": "This work introduces three action masking methods for continuous action spaces to focus the exploration of reinforcement learning on state-specific relevant actions, which enhances learning efficiency and effectiveness.", "abstract": "Continuous action spaces in reinforcement learning (RL) are commonly defined as multidimensional intervals. While intervals usually reflect the action boundaries for tasks well, they can be challenging for learning because the typically large global action space leads to frequent exploration of irrelevant actions. Yet, little task knowledge can be sufficient to identify significantly smaller state-specific sets of relevant actions. Focusing learning on these relevant actions can significantly improve training efficiency and effectiveness. In this paper, we propose to focus learning on the set of relevant actions and introduce three continuous action masking methods for exactly mapping the action space to the state-dependent set of relevant actions. 
Thus, our methods ensure that only relevant actions are executed, enhancing the predictability of the RL agent and enabling its use in safety-critical applications. We further derive the implications of the proposed methods on the policy gradient. Using proximal policy optimization ( PPO), we evaluate our methods on four control tasks, where the relevant action set is computed based on the system dynamics and a relevant state set. Our experiments show that the three action masking methods achieve higher final rewards and converge faster than the baseline without action masking.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/93013"} +{"video_file": "yWq89o19wf_39027640.mp4", "openreview_id": "yWq89o19wf", "slideslive_id": 39027640, "venue": "nips2024", "title": "User-Creator Feature Polarization in Recommender Systems with Dual Influence", "status": "Poster", "keywords": "recommender systems;performativity;preference dynamics;diversity;polarization", "tldr": "We show that recommender systems with dual influences on users and creators are guaranteed to polarize, and discuss how to prevent it.", "abstract": "Recommender systems serve the dual purpose of presenting relevant content to users and helping content creators reach their target audience. The dual nature of these systems naturally influences both users and creators: users' preferences are affected by the items they are recommended, while creators may be incentivized to alter their content to attract more users. We define a model, called user-creator feature dynamics, to capture the dual influence of recommender systems. We prove that a recommender system with dual influence is guaranteed to polarize, causing diversity loss in the system. We then investigate, both theoretically and empirically, approaches for mitigating polarization and promoting diversity in recommender systems. Unexpectedly, we find that common diversity-promoting approaches do not work in the presence of dual influence, while relevancy-optimizing methods like top-\nk\ntruncation can prevent polarization and improve diversity of the system.", "primary_area": "algorithmic_game_theory", "site": "https://neurips.cc/virtual/2024/poster/93010"} +{"video_file": "yXW2dCTQdi_39025703.mp4", "openreview_id": "yXW2dCTQdi", "slideslive_id": 39025703, "venue": "nips2024", "title": "Controlled maximal variability along with reliable performance in recurrent neural networks", "status": "Poster", "keywords": "Reinforcement Learning;Computational Neuroscience;Neural Variability;Recurrent Neural Network;Maximum Occupancy Principle;Maximum Entropy Reinforcement Learning", "tldr": "Maximizing cumulative future action entropy allows recurrent neural networks to perform tasks while maximizing variability.", "abstract": "Natural behaviors, even stereotyped ones, exhibit variability. Despite its role in exploring and learning, the function and neural basis of this variability is still not well understood. Given the coupling between neural activity and behavior, we ask what type of neural variability does not compromise behavioral performance. While previous studies typically curtail variability to allow for high task performance in neural networks, our approach takes the reversed perspective. We investigate how to generate maximal neural variability while at the same time having high network performance. 
To do so, we extend to neural activity the maximum occupancy principle (MOP) developed for behavior, and refer to this new neural principle as NeuroMOP. NeuroMOP posits that the goal of the nervous system is to maximize future action-state entropy, a reward-free, intrinsic motivation that entails creating all possible activity patterns while avoiding terminal or dangerous ones. We show that this goal can be achieved through a neural network controller that injects currents (actions) into a recurrent neural network of fixed random weights to maximize future cumulative action-state entropy. High activity variability can be induced while adhering to an energy constraint or while avoiding terminal states defined by specific neurons' activities, also in a context-dependent manner. The network solves these tasks by flexibly switching between stochastic and deterministic modes as needed and projecting noise onto a null space. Based on future maximum entropy production, NeuroMOP contributes to a novel theory of neural variability that reconciles stochastic and deterministic behaviors within a single framework.", "primary_area": "neuroscience_and_cognitive_science", "site": "https://neurips.cc/virtual/2024/poster/93009"} +{"video_file": "yXpfrLMIr2_39027902.mp4", "openreview_id": "yXpfrLMIr2", "slideslive_id": 39027902, "venue": "nips2024", "title": "Binarized Diffusion Model for Image Super-Resolution", "status": "Poster", "keywords": "diffusion model;binarization;image super-resolution", "tldr": "A binarized diffusion model, BI-DiffSR, for image SR.", "abstract": "Advanced diffusion models (DMs) perform impressively in image super-resolution (SR), but the high memory and computational costs hinder their deployment. Binarization, an ultra-compression algorithm, offers the potential for effectively accelerating DMs. Nonetheless, due to the model structure and the multi-step iterative attribute of DMs, existing binarization methods result in significant performance degradation. In this paper, we introduce a novel binarized diffusion model, BI-DiffSR, for image SR. First, for the model structure, we design a UNet architecture optimized for binarization. We propose the consistent-pixel-downsample (CP-Down) and consistent-pixel-upsample (CP-Up) to maintain dimension consistent and facilitate the full-precision information transfer. Meanwhile, we design the channel-shuffle-fusion (CS-Fusion) to enhance feature fusion in skip connection. Second, for the activation difference across timestep, we design the timestep-aware redistribution (TaR) and activation function (TaA). The TaR and TaA dynamically adjust the distribution of activations based on different timesteps, improving the flexibility and representation alability of the binarized module. Comprehensive experiments demonstrate that our BI-DiffSR outperforms existing binarization methods. 
Code is released at: https://github.com/zhengchen1999/BI-DiffSR.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93008"} +{"video_file": "ybHPzL7eYT_39027854.mp4", "openreview_id": "ybHPzL7eYT", "slideslive_id": 39027854, "venue": "nips2024", "title": "Large Spatial Model: End-to-end Unposed Images to Semantic 3D", "status": "Poster", "keywords": "3D Reconstruction;3D Scene Understanding;Gaussian Splatting", "tldr": "We propose a method that utilizes two unposed and uncalibrated images as input, and reconstructs the explicit radiance field, encompassing geometry, appearance, and semantics in real-time.", "abstract": "Reconstructing and understanding 3D structures from a limited number of images is a classical problem in computer vision. Traditional approaches typically decompose this task into multiple subtasks, involving several stages of complex mappings between different data representations. For example, dense reconstruction using Structure-from-Motion (SfM) requires transforming images into key points, optimizing camera parameters, and estimating structures. Following this, accurate sparse reconstructions are necessary for further dense modeling, which is then input into task-specific neural networks. This multi-stage paradigm leads to significant processing times and engineering complexity.\nIn this work, we introduce the Large Spatial Model (LSM), which directly processes unposed RGB images into semantic radiance fields. LSM simultaneously estimates geometry, appearance, and semantics in a single feed-forward pass and can synthesize versatile label maps by interacting through language at novel views. Built on a general Transformer-based framework, LSM predicts global geometry via pixel-aligned point maps. To improve spatial attribute regression, we adopt local context aggregation with multi-scale fusion, enhancing the accuracy of fine local details. To address the scarcity of labeled 3D semantic data and enable natural language-driven scene manipulation, we incorporate a pre-trained 2D language-based segmentation model into a 3D-consistent semantic feature field. An efficient decoder parameterizes a set of semantic anisotropic Gaussians, allowing supervised end-to-end learning. Comprehensive experiments on various tasks demonstrate that LSM unifies multiple 3D vision tasks directly from unposed images, achieving real-time semantic 3D reconstruction for the first time.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/93007"} +{"video_file": "ygDl8q02gA_39028773.mp4", "openreview_id": "ygDl8q02gA", "slideslive_id": 39028773, "venue": "nips2024", "title": "Optimal Algorithms for Learning Partitions with Faulty Oracles", "status": "Poster", "keywords": "clustering; error tolerant; partitions; query complexity; oracle advice; graph learning; active learning", "tldr": "We design algorithms for the problem of learning partitions from faulty same-cluster oracle queries and prove that they achieve optimal query complexity.", "abstract": "We consider a clustering problem where a learner seeks to partition a finite set by querying a faulty oracle. This models applications where learners crowdsource information from non-expert human workers or conduct noisy experiments to determine group structure. The learner aims to exactly recover a partition by submitting queries of the form ``are\nu\nand\nv\nin the same group?'' for any pair of elements\nu\nand\nv\nin the set. 
Moreover, because the learner only has access to faulty sources of information, they require an error-tolerant algorithm for this task: i.e. they must fully recover the correct partition, even if up to $\\ell$ answers are incorrect, for some error-tolerance parameter $\\ell$. We study the question: for any given error-tolerance $\\ell$, what is the minimum number of queries needed to learn a finite set partition of $n$ elements into $k$ groups? We design algorithms for this task and prove that they achieve optimal query complexity. To analyze our algorithms, we first highlight a connection between this task and correlation clustering. We then use this connection to build a R\u00e9nyi-Ulam style analytical framework for this problem, which yields matching lower bounds. Our analysis also reveals an inherent asymmetry between the query complexity necessary to be robust against false negative errors as opposed to false positive errors.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/93001"}
+{"video_file": "yiXZZC5qDI_39024827.mp4", "openreview_id": "yiXZZC5qDI", "slideslive_id": 39024827, "venue": "nips2024", "title": "From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models", "status": "Poster", "keywords": "Diffusion model;data poisoning;data replication;diffusion classifier", "tldr": "We reveal the bilateral data poisoning effects in diffusion models when the training data is poisoned like BadNets.", "abstract": "While state-of-the-art diffusion models (DMs) excel in image generation, concerns regarding their security persist. Earlier research highlighted DMs' vulnerability to data poisoning attacks, but these studies placed stricter requirements than conventional methods like 'BadNets' in image classification. This is because the art necessitates modifications to the diffusion training and sampling procedures. Unlike the prior work, we investigate whether BadNets-like data poisoning methods can directly degrade the generation by DMs. In other words, if only the training dataset is contaminated (without manipulating the diffusion process), how will this affect the performance of learned DMs? In this setting, we uncover bilateral data poisoning effects that not only serve an adversarial purpose (compromising the functionality of DMs) but also offer a defensive advantage (which can be leveraged for defense in classification tasks against poisoning attacks). We show that a BadNets-like data poisoning attack remains effective in DMs for producing incorrect images (misaligned with the intended text conditions). Meanwhile, poisoned DMs exhibit an increased ratio of triggers, a phenomenon we refer to as 'trigger amplification', among the generated images. This insight can be then used to enhance the detection of poisoned training data. In addition, even under a low poisoning ratio, studying the poisoning effects of DMs is also valuable for designing robust image classifiers against such attacks. Last but not least, we establish a meaningful linkage between data poisoning and the phenomenon of data replications by exploring DMs' inherent data memorization tendencies. 
Code is available at https://github.com/OPTML-Group/BiBadDiff.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/92999"} +{"video_file": "ykQnxko1cJ_39025598.mp4", "openreview_id": "ykQnxko1cJ", "slideslive_id": 39025598, "venue": "nips2024", "title": "CemiFace: Center-based Semi-hard Synthetic Face Generation for Face Recognition", "status": "Poster", "keywords": "synthetic face recognition;diffusion models;center-based semi-hard", "tldr": "diffusion model to generate semi-hard samples for synthetic face recognition", "abstract": "Privacy issue is a main concern in developing face recognition techniques. Although synthetic face images can partially mitigate potential legal risks while maintaining effective face recognition (FR) performance, FR models trained by face images synthesized by existing generative approaches frequently suffer from performance degradation problems due to the insufficient discriminative quality of these synthesized samples. In this paper, we systematically investigate what contributes to solid face recognition model training, and reveal that face images with certain degree of similarities to their identity centers show great effectiveness in the performance of trained FR models. Inspired by this, we propose a novel diffusion-based approach (namely Center-based Semi-hard Synthetic Face Generation (CemiFace) which produces facial samples with various levels of similarity to the subject center, thus allowing to generate face datasets containing effective discriminative samples for training face recognition. Experimental results show that with a modest degree of similarity, training on the generated dataset can produce competitive performance compared to previous generation methods. The code will be available at:https://github.com/szlbiubiubiu/CemiFace", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/92997"} +{"video_file": "yktQNqtepd_39028470.mp4", "openreview_id": "yktQNqtepd", "slideslive_id": 39028470, "venue": "nips2024", "title": "Towards Flexible 3D Perception: Object-Centric Occupancy Completion Augments 3D Object Detection", "status": "Poster", "keywords": "Object Centric;Occupancy;LiDAR;Detection;Long Sequence", "tldr": "We introduce a new object-centric occupancy concept to enhance 3D perception in data and algorithmic perspectives.", "abstract": "While 3D object bounding box (bbox) representation has been widely used in autonomous driving perception, it lacks the ability to capture the precise details of an object's intrinsic geometry. Recently, occupancy has emerged as a promising alternative for 3D scene perception. However, constructing a high-resolution occupancy map remains infeasible for large scenes due to computational constraints. Recognizing that foreground objects only occupy a small portion of the scene, we introduce object-centric occupancy as a supplement to object bboxes. This representation not only provides intricate details for detected objects but also enables higher voxel resolution in practical applications. We advance the development of object-centric occupancy perception from both data and algorithm perspectives. On the data side, we construct the first object-centric occupancy dataset from scratch using an automated pipeline. From the algorithmic standpoint, we introduce a novel object-centric occupancy completion network equipped with an implicit shape decoder that manages dynamic-size occupancy generation. 
This network accurately predicts the complete object-centric occupancy volume for inaccurate object proposals by leveraging temporal information from long sequences. Our method demonstrates robust performance in completing object shapes under noisy detection and tracking conditions. Additionally, we show that our occupancy features significantly enhance the detection results of state-of-the-art 3D object detectors, especially for incomplete or distant objects in the Waymo Open Dataset.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/92996"} +{"video_file": "ylceJ2xIw5_39028797.mp4", "openreview_id": "ylceJ2xIw5", "slideslive_id": 39028797, "venue": "nips2024", "title": "Fair Wasserstein Coresets", "status": "Poster", "keywords": "Algorithmic Fairness;Nonconvex Optimization;Coresets", "tldr": "We present Fair Wasserstein Coreset (FWC), a novel coreset approach to generate fair synthetic representative samples along with sample-level weights to be used in downstream learning tasks.", "abstract": "Data distillation and coresets have emerged as popular approaches to generate a smaller representative set of samples for downstream learning tasks to handle large-scale datasets. At the same time, machine learning is being increasingly applied to decision-making processes at a societal level, making it imperative for modelers to address inherent biases towards subgroups present in the data. While current approaches focus on creating fair synthetic representative samples by optimizing local properties relative to the original samples, their impact on downstream learning processes has yet to be explored. In this work, we present fair Wasserstein coresets (\nFWC\n), a novel coreset approach which generates fair synthetic representative samples along with sample-level weights to be used in downstream learning tasks.\nFWC\nuses an efficient majority minimization algorithm to minimize the Wasserstein distance between the original dataset and the weighted synthetic samples while enforcing demographic parity. We show that an unconstrained version of\nFWC\nis equivalent to Lloyd's algorithm for k-medians and k-means clustering. Experiments conducted on both synthetic and real datasets show that\nFWC\n: (i) achieves a competitive fairness-performance tradeoff in downstream models compared to existing approaches, (ii) improves downstream fairness when added to the existing training data and (iii) can be used to reduce biases in predictions from large language models (GPT-3.5 and GPT-4).", "primary_area": "fairness", "site": "https://neurips.cc/virtual/2024/poster/92995"} +{"video_file": "yltJAlwtW9_39024873.mp4", "openreview_id": "yltJAlwtW9", "slideslive_id": 39024873, "venue": "nips2024", "title": "Information-theoretic Generalization Analysis for Expected Calibration Error", "status": "Poster", "keywords": "information thery;information-theoretic generalization error analysis;generalization error;expected calibration error;calibration error;binning", "tldr": "This paper offers a comprehensive analysis of the expected calibration error using the information-theoretic generalization analysis.", "abstract": "While the expected calibration error (ECE), which employs binning, is widely adopted to evaluate the calibration performance of machine learning models, theoretical understanding of its estimation bias is limited. 
In this paper, we present the first comprehensive analysis of the estimation bias in the two common binning strategies, uniform mass and uniform width binning. Our analysis establishes upper bounds on the bias, achieving an improved convergence rate. Moreover, our bounds reveal, for the first time, the optimal number of bins to minimize the estimation bias. We further extend our bias analysis to generalization error analysis based on the information-theoretic approach, deriving upper bounds that enable the numerical evaluation of how small the ECE is for unknown data. Experiments using deep learning models show that our bounds are nonvacuous thanks to this information-theoretic generalization analysis approach.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/92994"} +{"video_file": "ynJr0RW6FR_39024397.mp4", "openreview_id": "ynJr0RW6FR", "slideslive_id": 39024397, "venue": "nips2024", "title": "ReGS: Reference-based Controllable Scene Stylization with Gaussian Splatting", "status": "Poster", "keywords": "Gaussian Splatting;Appearance Editing", "tldr": "We present a method to enable precise appearance editing for 3D Gaussian Splatting.", "abstract": "Referenced-based scene stylization that edits the appearance based on a content-aligned reference image is an emerging research area. Starting with a pretrained neural radiance field (NeRF), existing methods typically learn a novel appearance that matches the given style. Despite their effectiveness, they inherently suffer from time-consuming volume rendering, and thus are impractical for many real-time applications. In this work, we propose ReGS, which adapts 3D Gaussian Splatting (3DGS) for reference-based stylization to enable real-time stylized view synthesis. Editing the appearance of a pretrained 3DGS is challenging as it uses discrete Gaussians as 3D representation, which tightly bind appearance with geometry. Simply optimizing the appearance as prior methods do is often insufficient for modeling continuous textures in the given reference image. To address this challenge, we propose a novel texture-guided control mechanism that adaptively adjusts local responsible Gaussians to a new geometric arrangement, serving for desired texture details. The proposed process is guided by texture clues for effective appearance editing, and regularized by scene depth for preserving original geometric structure. With these novel designs, we show ReGs can produce state-of-the-art stylization results that respect the reference texture while embracing real-time rendering speed for free-view navigation.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/92993"} +{"video_file": "ypEamFKu2O_39025640.mp4", "openreview_id": "ypEamFKu2O", "slideslive_id": 39025640, "venue": "nips2024", "title": "PGN: The RNN's New Successor is Effective for Long-Range Time Series Forecasting", "status": "Poster", "keywords": "information propagation paths;the RNN's new successor;long-range time series forecasting;comprehensive semantic information", "tldr": "We propose a novel paradigm, PGN, as the new successor to RNN, and propose a novel temporal modeling framework called TPGN, based on PGN.", "abstract": "Due to the recurrent structure of RNN, the long information propagation path poses limitations in capturing long-term dependencies, gradient explosion/vanishing issues, and inefficient sequential execution. 
Based on this, we propose a novel paradigm called Parallel Gated Network (PGN) as the new successor to RNN. PGN directly captures information from previous time steps through the designed Historical Information Extraction (HIE) layer and leverages gated mechanisms to select and fuse it with the current time step information. This reduces the information propagation path to $O(1)$, effectively addressing the limitations of RNN. To enhance PGN's performance in long-range time series forecasting tasks, we propose a novel temporal modeling framework called Temporal PGN (TPGN). TPGN incorporates two branches to comprehensively capture the semantic information of time series. One branch utilizes PGN to capture long-term periodic patterns while preserving their local characteristics. The other branch employs patches to capture short-term information and aggregate the global representation of the series. TPGN achieves a theoretical complexity of $O(L)$, ensuring efficiency in its operations. Experimental results on five benchmark datasets demonstrate the state-of-the-art (SOTA) performance and high efficiency of TPGN, further confirming the effectiveness of PGN as the new successor to RNN in long-range time series forecasting. The code is available in this repository: https://github.com/Water2sea/TPGN.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/92992"}
+{"video_file": "ypFgcT147Z_39028263.mp4", "openreview_id": "ypFgcT147Z", "slideslive_id": 39028263, "venue": "nips2024", "title": "Decoupling Semantic Similarity from Spatial Alignment for Neural Networks.", "status": "Poster", "keywords": "Representational Similarity;Representational Similarity Analysis;Computer Vision", "tldr": "We make representational similarity matrices permutation invariant and show resulting improvements in retrieval.", "abstract": "What representation do deep neural networks learn? How similar are images to each other for neural networks? Despite the overwhelming success of deep learning methods, key questions about their internal workings still remain largely unanswered, due to their internal high dimensionality and complexity. To address this, one approach is to measure the similarity of activation responses to various inputs. Representational Similarity Matrices (RSMs) distill this similarity into scalar values for each input pair. These matrices encapsulate the entire similarity structure of a system, indicating which inputs lead to similar responses. While the similarity between images is ambiguous, we argue that the spatial location of semantic objects influences neither human perception nor deep learning classifiers. Thus this should be reflected in the definition of similarity between image responses for computer vision systems. Revisiting the established similarity calculations for RSMs we expose their sensitivity to spatial alignment. In this paper we propose to solve this through semantic RSMs, which are invariant to spatial permutation. We measure semantic similarity between input responses by formulating it as a set-matching problem. 
Further, we quantify the superiority of semantic RSMs over spatio-semantic RSMs through image retrieval and by comparing the similarity between representations to the similarity between predicted class probabilities.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/92991"} +{"video_file": "ypaqE8UwsC_39025361.mp4", "openreview_id": "ypaqE8UwsC", "slideslive_id": 39025361, "venue": "nips2024", "title": "Federated Ensemble-Directed Offline Reinforcement Learning", "status": "Poster", "keywords": "Deep Reinforcement Learning;Offline Reinforcement Learning;Federated Learning", "tldr": "A novel federated offline reinforcement learning algorithm", "abstract": "We consider the problem of federated offline reinforcement learning (RL), a scenario under which distributed learning agents must collaboratively learn a high-quality control policy only using small pre-collected datasets generated according to different unknown behavior policies. Na\"{i}vely combining a standard offline RL approach with a standard federated learning approach to solve this problem can lead to poorly performing policies. In response, we develop the Federated Ensemble-Directed Offline Reinforcement Learning Algorithm (FEDORA), which distills the collective wisdom of the clients using an ensemble learning approach. We develop the FEDORA codebase to utilize distributed compute resources on a federated learning platform. We show that FEDORA significantly outperforms other approaches, including offline RL over the combined data pool, in various complex continuous control environments and real-world datasets. Finally, we demonstrate the performance of FEDORA in the real-world on a mobile robot. We provide our code and a video of our experiments at \\url{https://github.com/DesikRengarajan/FEDORA}.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/92989"} +{"video_file": "yppcLFeZgy_39024896.mp4", "openreview_id": "yppcLFeZgy", "slideslive_id": 39024896, "venue": "nips2024", "title": "MutaPLM: Protein Language Modeling for Mutation Explanation and Engineering", "status": "Poster", "keywords": "protein language modeling;mutation explanation;directed evolution", "tldr": "A unified framework harvesting protein language models for mutation explanation and engineering", "abstract": "Studying protein mutations within amino acid sequences holds tremendous significance in life sciences. Protein language models (PLMs) have demonstrated strong capabilities in broad biological applications. However, due to architectural design and lack of supervision, PLMs model mutations implicitly with evolutionary plausibility, which is not satisfactory to serve as explainable and engineerable tools in real-world studies. To address these issues, we present MutaPLM, a unified framework for interpreting and navigating protein mutations with protein language models. MutaPLM introduces a protein delta network that captures explicit protein mutation representations within a unified feature space, and a transfer learning pipeline with a chain-of-thought (CoT) strategy to harvest protein mutation knowledge from biomedical texts. We also construct MutaDescribe, the first large-scale protein mutation dataset with rich textual annotations, which provides cross-modal supervision signals. Through comprehensive experiments, we demonstrate that MutaPLM excels at providing human-understandable explanations for mutational effects and prioritizing novel mutations with desirable properties. 
Our code, model, and data are open-sourced at https://github.com/PharMolix/MutaPLM.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/92987"} +{"video_file": "yxOrSmS5wR_39028655.mp4", "openreview_id": "yxOrSmS5wR", "slideslive_id": 39028655, "venue": "nips2024", "title": "AV-Cloud: Spatial Audio Rendering Through Audio-Visual Cloud Splatting", "status": "Poster", "keywords": "audio-visual;audio scenes reconstruction;spatial audio;point-based scene rendering", "tldr": "We propose a novel approach, AV-Cloud, for rendering high-quality spatial audio in 3D scenes that is in synchrony with the visual stream but does not rely or explicitly conditioned on the visual rendering.", "abstract": "We propose a novel approach for rendering high-quality spatial audio for 3D scenes that is in synchrony with the visual stream but does not rely or explicitly conditioned on the visual rendering. We demonstrate that such an approach enables the experience of immersive virtual tourism - performing a real-time dynamic navigation within the scene, experiencing both audio and visual content. Current audio-visual rendering approaches typically rely on visual cues, such as images, and thus visual artifacts could cause inconsistency in the audio quality. Furthermore, when such approaches are incorporated with visual rendering, audio generation at each viewpoint occurs after the rendering of the image of the viewpoint and thus could lead to audio lag that affects the integration of audio and visual streams. Our proposed approach, AV-Cloud, overcomes these challenges by learning the representation of the audio-visual scene based on a set of sparse AV anchor points, that constitute the Audio-Visual Cloud, and are derived from the camera calibration. The Audio-Visual Cloud serves as an audio-visual representation from which the generation of spatial audio for arbitrary listener location can be generated. In particular, we propose a novel module Audio-Visual Cloud Splatting which decodes AV anchor points into a spatial audio transfer function for the arbitrary viewpoint of the target listener. This function, applied through the Spatial Audio Render Head module, transforms monaural input into viewpoint-specific spatial audio. As a result, AV-Cloud efficiently renders the spatial audio aligned with any visual viewpoint and eliminates the need for pre-rendered images. We show that AV-Cloud surpasses current state-of-the-art accuracy on audio reconstruction, perceptive quality, and acoustic effects on two real-world datasets. AV-Cloud also outperforms previous methods when tested on scenes \"in the wild\".", "primary_area": "speech_and_audio", "site": "https://neurips.cc/virtual/2024/poster/92984"} +{"video_file": "yxjWAJzUyV_39028558.mp4", "openreview_id": "yxjWAJzUyV", "slideslive_id": 39028558, "venue": "nips2024", "title": "REBEL: Reinforcement Learning via Regressing Relative Rewards", "status": "Poster", "keywords": "Reinforcement Learning;Reinforcement Learning from Human Feedback", "tldr": "We present REBEL, a new reinforcement learning algorithm that simplifies policy optimization to regressing relative rewards, offering strong theoretical guarantees and empirical performances.", "abstract": "While originally developed for continuous control problems, Proximal Policy Optimization (PPO) has emerged as the work-horse of a variety of reinforcement learning (RL) applications, including the fine-tuning of generative models. 
Unfortunately, PPO requires multiple heuristics to enable stable convergence (e.g. value networks, clipping), and is notorious for its sensitivity to the precise implementation of these components. In response, we take a step back and ask what a minimalist RL algorithm for the era of generative models would look like. We propose REBEL, an algorithm that cleanly reduces the problem of policy optimization to regressing the relative reward between two completions to a prompt in terms of the policy, enabling strikingly lightweight implementation. In theory, we prove that fundamental RL algorithms like Natural Policy Gradient can be seen as variants of REBEL, which allows us to match the strongest known theoretical guarantees in terms of convergence and sample complexity in the RL literature. REBEL can also cleanly incorporate offline data and be extended to handle the intransitive preferences we frequently see in practice. Empirically, we find that REBEL provides a unified approach to language modeling and image generation with stronger or similar performance as PPO and DPO, all while being simpler to implement and more computationally efficient than PPO. When fine-tuning Llama-3-8B-Instruct, REBEL achieves strong performance in AlpacaEval 2.0, MT-Bench, and Open LLM Leaderboard. Implementation of REBEL can be found at https://github.com/ZhaolinGao/REBEL, and models trained by REBEL can be found at https://huggingface.co/Cornell-AGI.", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/92983"} +{"video_file": "yySpldUsU2_39025470.mp4", "openreview_id": "yySpldUsU2", "slideslive_id": 39025470, "venue": "nips2024", "title": "Changing the Training Data Distribution to Reduce Simplicity Bias Improves In-distribution Generalization", "status": "Poster", "keywords": "In-distribution generalization;Simplicity bias;Data modification;Sharpness-aware minimization", "tldr": "We propose a sharpness-aware motivated data modification to improve in-distribution generalization performance.", "abstract": "Can we modify the training data distribution to encourage the underlying optimization method toward finding solutions with superior generalization performance on in-distribution data? In this work, we approach this question for the first time by comparing the inductive bias of gradient descent (GD) with that of sharpness-aware minimization (SAM). By studying a two-layer CNN, we rigorously prove that SAM learns different features more uniformly, particularly in early epochs. That is, SAM is less susceptible to simplicity bias compared to GD. We also show that examples constraining features that are learned early are separable from the rest based on the model\u2019s output. Based on this observation, we propose a method that (i) clusters examples based on the network output early in training, (ii) identifies a cluster of examples with similar network output, and (iii) upsamples the rest of examples only once to alleviate the simplicity bias. We show empirically that USEFUL effectively improves the generalization performance on the original data distribution when training with various gradient methods, including (S)GD and SAM. 
Notably, we demonstrate that our method can be combined with SAM variants and existing data augmentation strategies to achieve, to the best of our knowledge, state-of-the-art performance for training ResNet18 on CIFAR10, STL10, CINIC10, Tiny-ImageNet; ResNet34 on CIFAR100; and VGG19 and DenseNet121 on CIFAR10.", "primary_area": "optimization_for_deep_networks", "site": "https://neurips.cc/virtual/2024/poster/92982"} +{"video_file": "z0I2SbjN0R_39025612.mp4", "openreview_id": "z0I2SbjN0R", "slideslive_id": 39025612, "venue": "nips2024", "title": "DiffusionPDE: Generative PDE-Solving under Partial Observation", "status": "Poster", "keywords": "Guided Diffusion Model;Partial Differential Equation;Sparse Observation;Inverse Problem", "tldr": "DiffusionPDE solves forward and inverse PDEs from partial observations using diffusion models.", "abstract": "We introduce a general framework for solving partial differential equations (PDEs) using generative diffusion models. In particular, we focus on the scenarios where we do not have the full knowledge of the scene necessary to apply classical solvers. Most existing forward or inverse PDE approaches perform poorly when the observations on the data or the underlying coefficients are incomplete, which is a common assumption for real-world measurements. In this work, we propose DiffusionPDE that can simultaneously fill in the missing information and solve a PDE by modeling the joint distribution of the solution and coefficient spaces. We show that the learned generative priors lead to a versatile framework for accurately solving a wide range of PDEs under partial observation, significantly outperforming the state-of-the-art methods for both forward and inverse directions.", "primary_area": "machine_learning_for_physical_sciences", "site": "https://neurips.cc/virtual/2024/poster/92980"} +{"video_file": "z4duW3KzlD_39027273.mp4", "openreview_id": "z4duW3KzlD", "slideslive_id": 39027273, "venue": "nips2024", "title": "Gated Inference Network: Inference and Learning State-Space Models", "status": "Poster", "keywords": "Time Series and Recurrent Networks", "tldr": "Our algorithm efficiently models dynamical systems by observing high-dimensional noise-affected data, outperforming state-of-the-art counterparts in state estimation and image imputation tasks.", "abstract": "This paper advances temporal reasoning within dynamically changing high-dimensional noisy observations, focusing on a latent space that characterizes the nonlinear dynamics of objects in their environment. We introduce the Gated Inference Network (GIN), an efficient approximate Bayesian inference algorithm for state space models (SSMs) with nonlinear state transitions and emissions. GIN disentangles two latent representations: one representing the object derived from a nonlinear mapping model, and another representing the latent state describing its dynamics. This disentanglement enables direct state estimation and missing data imputation as the world evolves. To infer the latent state, we utilize a deep extended Kalman filter (EKF) approach that integrates a novel compact RNN structure to compute both the Kalman Gain (KG) and smoothing gain (SG), completing the data flow. This design results in a computational cost per step that is linearly faster than EKF but introduces issues such as the exploding gradient problem. To mitigate the exploding gradients caused by the compact RNN structure in our model, we propose a specialized learning method that ensures stable training and inference. 
The model is then trained end-to-end on videos depicting a diverse range of simulated and real-world physical systems, and outperforms its ounterparts \u2014RNNs, autoregressive models, and variational approaches\u2014 in state estimation and missing data imputation tasks.", "primary_area": "other", "site": "https://neurips.cc/virtual/2024/poster/92976"} +{"video_file": "z4eVwH484M_39024756.mp4", "openreview_id": "z4eVwH484M", "slideslive_id": 39024756, "venue": "nips2024", "title": "Unveiling the Hidden: Online Vectorized HD Map Construction with Clip-Level Token Interaction and Propagation", "status": "Poster", "keywords": "vectorized HD map;clip-level pipeline;clip-level token;interaction;propagation", "tldr": "This paper introduces a novel clip-level pipeline to explicitly unveils the invisible map elements.", "abstract": "Predicting and constructing road geometric information (e.g., lane lines, road markers) is a crucial task for safe autonomous driving, while such static map elements can be repeatedly occluded by various dynamic objects on the road. Recent studies have shown significantly improved vectorized high-definition (HD) map construction performance, but there has been insufficient investigation of temporal information across adjacent input frames (i.e., clips), which may lead to inconsistent and suboptimal prediction results. To tackle this, we introduce a novel paradigm of clip-level vectorized HD map construction, MapUnveiler, which explicitly unveils the occluded map elements within a clip input by relating dense image representations with efficient clip tokens. Additionally, MapUnveiler associates inter-clip information through clip token propagation, effectively utilizing long- term temporal map information. MapUnveiler runs efficiently with the proposed clip-level pipeline by avoiding redundant computation with temporal stride while building a global map relationship. Our extensive experiments demonstrate that MapUnveiler achieves state-of-the-art performance on both the nuScenes and Argoverse2 benchmark datasets. We also showcase that MapUnveiler significantly outperforms state-of-the-art approaches in a challenging setting, achieving +10.7% mAP improvement in heavily occluded driving road scenes. The project page can be found at https://mapunveiler.github.io.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/92975"} +{"video_file": "z6reLFqv6w_39024542.mp4", "openreview_id": "z6reLFqv6w", "slideslive_id": 39024542, "venue": "nips2024", "title": "Learning diverse causally emergent representations from time series data", "status": "Poster", "keywords": "emergence;representation learning", "tldr": "learning emergent features using representation learning", "abstract": "Cognitive processes usually take place at a macroscopic scale in systems characterised by emergent properties, which make the whole \u2018more than the sum of its parts.\u2019 While recent proposals have provided quantitative, information-theoretic metrics to detect emergence in time series data, it is often highly non-trivial to identify the relevant macroscopic variables a priori. In this paper we leverage recent advances in representation learning and differentiable information estimators to put forward a data-driven method to find emergent variables. The proposed method successfully detects emergent variables and recovers the ground-truth emergence values in a synthetic dataset. 
Furthermore, we show the method can be extended to learn multiple independent features, extracting a diverse set of emergent quantities. We finally show that a modified method scales to real experimental data from primate brain activity, paving the ground for future analyses uncovering the emergent structure of cognitive representations in biological and artificial intelligence systems.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/92973"} +{"video_file": "z7h7zMgyPJ_39024878.mp4", "openreview_id": "z7h7zMgyPJ", "slideslive_id": 39024878, "venue": "nips2024", "title": "The Many Faces of Optimal Weak-to-Strong Learning", "status": "Poster", "keywords": "Learning Theory;Weak to Strong Learning;Boosting;Large Margin Classifiers;Generalization Bounds;Sample Complexity", "tldr": "We propose a new, simpler, faster and optimal boosting algorithm in terms of sample complexity", "abstract": "Boosting is an extremely successful idea, allowing one to combine multiple low accuracy classifiers into a much more accurate voting classifier. In this work, we present a new and surprisingly simple Boosting algorithm that obtains a provably optimal sample complexity. Sample optimal Boosting algorithms have only recently been developed, and our new algorithm has the fastest runtime among all such algorithms and is the simplest to describe: Partition your training data into 5 disjoint pieces of equal size, run AdaBoost on each, and combine the resulting classifiers via a majority vote. In addition to this theoretical contribution, we also perform the first empirical comparison of the proposed sample optimal Boosting algorithms. Our pilot empirical study suggests that our new algorithm might outperform previous algorithms on large data sets.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/92972"} +{"video_file": "zApFYcLg6K_39028302.mp4", "openreview_id": "zApFYcLg6K", "slideslive_id": 39028302, "venue": "nips2024", "title": "On Differentially Private U Statistics", "status": "Poster", "keywords": "Differential Privacy;Statistics;Mean Estimation", "tldr": "We devise efficient algorithms for differentially private U-statistics in the Central DP model, achieving nearly optimal error rates in various settings. Previously, this was studied in the local DP model and with U-statistics of degree 2.", "abstract": "We consider the problem of privately estimating a parameter E[h(X_1, \u2026, X_k)], where X_1, X_2, \u2026, X_k are i.i.d. data from some distribution and h is a permutation-invariant function. Without privacy constraints, the standard estimators for this task are U-statistics, which commonly arise in a wide range of problems, including nonparametric signed rank tests, symmetry testing, uniformity testing, and subgraph counts in random networks, and are the unique minimum variance unbiased estimators under mild conditions. Despite the recent outpouring of interest in private mean estimation, privatizing U-statistics has received little attention. While existing private mean estimation algorithms can be applied in a black-box manner to obtain confidence intervals, we show that they can lead to suboptimal private error, e.g., constant-factor inflation in the leading term, or even \u0398(1/n) rather than O(1/n^2) in degenerate settings.
To remedy this, we propose a new thresholding-based approach that reweights different subsets of the data using local H\u00e1jek projections. This leads to nearly optimal private error for non-degenerate U-statistics and a strong indication of near-optimality for degenerate U-statistics.", "primary_area": "privacy", "site": "https://neurips.cc/virtual/2024/poster/92970"} +{"video_file": "zBG7WogAvm_39027837.mp4", "openreview_id": "zBG7WogAvm", "slideslive_id": 39027837, "venue": "nips2024", "title": "Amortized Bayesian Experimental Design for Decision-Making", "status": "Poster", "keywords": "Bayesian experimental design;amortized inference;Bayesian decision theory;neural processes", "tldr": "We introduce a decision-aware amortized Bayesian experimental design framework with a novel Transformer neural decision process architecture to optimize experimental designs for better decision-making.", "abstract": "Many critical decisions, such as personalized medical diagnoses and product pricing, are made based on insights gained from designing, observing, and analyzing a series of experiments. This highlights the crucial role of experimental design, which goes beyond merely collecting information on system parameters as in traditional Bayesian experimental design (BED), but also plays a key part in facilitating downstream decision-making. Most recent BED methods use an amortized policy network to rapidly design experiments. However, the information gathered through these methods is suboptimal for down-the-line decision-making, as the experiments are not inherently designed with downstream objectives in mind. In this paper, we present an amortized decision-aware BED framework that prioritizes maximizing downstream decision utility. We introduce a novel architecture, the Transformer Neural Decision Process (TNDP), capable of instantly proposing the next experimental design, whilst inferring the downstream decision, thus effectively amortizing both tasks within a unified workflow. We demonstrate the performance of our method across several tasks, showing that it can deliver informative designs and facilitate accurate decision-making.", "primary_area": "probabilistic_methods", "site": "https://neurips.cc/virtual/2024/poster/92968"} +{"video_file": "zDaD8zv8tG_39025073.mp4", "openreview_id": "zDaD8zv8tG", "slideslive_id": 39025073, "venue": "nips2024", "title": "A teacher-teacher framework for clinical language representation learning", "status": "Poster", "keywords": "clinical language models;teacher-teacher framework;knowledge alignment", "tldr": "This paper introduces a teacher-teacher paradigm where two pretrained LLMs achieve knowledge exchange and alignment through a two-step, few-epoch training of the LINE module with a well-designed alignment objective.", "abstract": "In recent years, there has been a proliferation of ready-to-use large language models (LLMs) designed for various applications, both general-purpose and domain-specific. Instead of advocating for the development of a new model or continuous pretraining of an existing one, this paper introduces a pragmatic teacher-teacher framework to facilitate mutual learning between two pre-existing models. By leveraging two teacher models possessing complementary knowledge, we introduce a LIghtweight kNowledge alignmEnt (LINE) module aimed at harmonizing their knowledge within a unified representation space. 
This framework is particularly valuable in clinical settings, where stringent regulations and privacy considerations dictate the handling of detailed clinical notes. Our trained LINE module excels in capturing critical information from clinical notes, leveraging highly de-identified data. Validation and downstream tasks further demonstrate the effectiveness of the proposed framework.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/92966"} +{"video_file": "zGN0YWy2he_39025508.mp4", "openreview_id": "zGN0YWy2he", "slideslive_id": 39025508, "venue": "nips2024", "title": "Scene Graph Disentanglement and Composition for Generalizable Complex Image Generation", "status": "Spotlight", "keywords": "Scene Graph; Disentanglement; Diffusion Model; Compositional Image Generation", "tldr": "In this paper, we leverage the scene graph, a powerful structured representation, for complex image generation.", "abstract": "There has been exciting progress in generating images from natural language or layout conditions. However, these methods struggle to faithfully reproduce complex scenes due to the insufficient modeling of multiple objects and their relationships. To address this issue, we leverage the scene graph, a powerful structured representation, for complex image generation. Different from the previous works that directly use scene graphs for generation, we employ the generative capabilities of variational autoencoders and diffusion models in a generalizable manner, compositing diverse disentangled visual clues from scene graphs. Specifically, we first propose a Semantics-Layout Variational AutoEncoder (SL-VAE) to jointly derive (layouts, semantics) from the input scene graph, which allows a more diverse and reasonable generation in a one-to-many mapping. We then develop a Compositional Masked Attention (CMA) integrated with a diffusion model, incorporating (layouts, semantics) with fine-grained attributes as generation guidance. To further achieve graph manipulation while keeping the visual content consistent, we introduce a Multi-Layered Sampler (MLS) for an \"isolated\" image editing effect. Extensive experiments demonstrate that our method outperforms recent competitors based on text, layout, or scene graph, in terms of generation rationality and controllability.", "primary_area": "generative_models", "site": "https://neurips.cc/virtual/2024/poster/92965"} +{"video_file": "zJremsKVyh_39024771.mp4", "openreview_id": "zJremsKVyh", "slideslive_id": 39024771, "venue": "nips2024", "title": "Marginal Causal Flows for Validation and Inference", "status": "Poster", "keywords": "Causal Inference;Normalising Flows;Synthetic Data;Marginal Structural Models", "tldr": "We show how Normalising Flows can be used to explicitly parameterise marginal causal distributions, and illustrate its utility for both inference and synthetic data generation/benchmarking.", "abstract": "Investigating the marginal causal effect of an intervention on an outcome from complex data remains challenging due to the inflexibility of employed models and the lack of complexity in causal benchmark datasets, which often fail to reproduce intricate real-world data patterns. In this paper we introduce Frugal Flows, a likelihood-based machine learning model that uses normalising flows to flexibly learn the data-generating process, while also directly targeting the marginal causal quantities inferred from observational data. 
We provide a novel algorithm for fitting a model to observational data with a parametrically specified causal distribution, and propose that these models are exceptionally well suited for synthetic data generation to validate causal methods. Unlike existing data generation methods, Frugal Flows generate synthetic data that closely resembles the empirical dataset, while also automatically and exactly satisfying a user-defined average treatment effect. To our knowledge, Frugal Flows are the first generative model to both learn flexible data representations and also \\textit{exactly} parameterise quantities such as the average treatment effect and the degree of unobserved confounding. We demonstrate the above with experiments on both simulated and real-world datasets.", "primary_area": "causal_inference", "site": "https://neurips.cc/virtual/2024/poster/92962"} +{"video_file": "zLU21oQjD5_39027479.mp4", "openreview_id": "zLU21oQjD5", "slideslive_id": 39027479, "venue": "nips2024", "title": "DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving", "status": "Poster", "keywords": "Large Language Models;Mathematical Reasoning;Synthetic Data", "tldr": "To achieve the best performance (for mathematical reasoning), more correct responses for difficult queries are crucial.", "abstract": "Solving mathematical problems requires advanced reasoning abilities and presents notable challenges for large language models. Previous works usually synthesize data from proprietary models to augment existing datasets, followed by instruction tuning to achieve top-tier results. However, our analysis of these datasets reveals severe biases towards easy queries, with frequent failures to generate any correct response for the most challenging queries. Hypothesizing that difficult queries are crucial to learning complex reasoning, we propose Difficulty-Aware Rejection Tuning (DART), a method that allocates difficult queries more trials during the synthesis phase, enabling more extensive training on difficult samples. Utilizing DART, we have created new datasets for mathematical problem-solving that focus more on difficult queries and are substantially smaller than previous ones. Remarkably, our synthesis process solely relies on a 7B-sized open-weight model, without reliance on the commonly used proprietary GPT-4. We fine-tune various base models on our datasets ranging from 7B to 70B in size, resulting in a series of strong models called DART-Math. In comprehensive in-domain and out-of-domain evaluation on 6 mathematical benchmarks, DART-Math outperforms vanilla rejection tuning significantly, being superior or comparable to previous arts, despite using much smaller datasets and no proprietary models. Furthermore, our results position our synthetic datasets as the most effective and cost-efficient publicly available resources for advancing mathematical problem-solving. 
Our datasets, models and code are publicly available at https://github.com/hkust-nlp/dart-math.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/92959"} +{"video_file": "zNiJZUAlxg_39025775.mp4", "openreview_id": "zNiJZUAlxg", "slideslive_id": 39025775, "venue": "nips2024", "title": "ResAD: A Simple Framework for Class Generalizable Anomaly Detection", "status": "Spotlight", "keywords": "class-generalizable anomaly detection", "tldr": "we propose a simple but effective class-generalizable AD framework, called ResAD, which can be applied to detect and localize anomalies in new classes.", "abstract": "This paper explores the problem of class-generalizable anomaly detection, where the objective is to train one unified AD model that can generalize to detect anomalies in diverse classes from different domains without any retraining or fine-tuning on the target data. Because normal feature representations vary significantly across classes, this will cause the widely studied one-for-one AD models to be poorly class-generalizable (i.e., performance drops dramatically when used for new classes). In this work, we propose a simple but effective framework (called ResAD) that can be directly applied to detect anomalies in new classes. Our main insight is to learn the residual feature distribution rather than the initial feature distribution. In this way, we can significantly reduce feature variations. Even in new classes, the distribution of normal residual features would not remarkably shift from the learned distribution. Therefore, the learned model can be directly adapted to new classes. ResAD consists of three components: (1) a Feature Converter that converts initial features into residual features; (2) a simple and shallow Feature Constraintor that constrains normal residual features into a spatial hypersphere for further reducing feature variations and maintaining consistency in feature scales among different classes; (3) a Feature Distribution Estimator that estimates the normal residual feature distribution; anomalies can be recognized as out-of-distribution. Despite the simplicity, ResAD can achieve remarkable anomaly detection results when directly used in new classes. The code is available at https://github.com/xcyao00/ResAD.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/92956"} +{"video_file": "zO55ovdLJw_39025114.mp4", "openreview_id": "zO55ovdLJw", "slideslive_id": 39025114, "venue": "nips2024", "title": "Deep Correlated Prompting for Visual Recognition with Missing Modalities", "status": "Poster", "keywords": "Multimodal prompting;Multimodal models;Missing modalities", "tldr": "A deep correlated prompt learning paradigm for visual recognition for missing-modality scenarios with minimal computational costs", "abstract": "Large-scale multimodal models have shown excellent performance over a series of tasks powered by the large corpus of paired multimodal training data. Generally, they are always assumed to receive modality-complete inputs. However, this simple assumption may not always hold in the real world due to privacy constraints or collection difficulty, where models pretrained on modality-complete data easily demonstrate degraded performance on missing-modality cases. To handle this issue, we refer to prompt learning to adapt large pretrained multimodal models to handle missing-modality scenarios by regarding different missing cases as different types of input.
Instead of only prepending independent prompts to the intermediate layers, we present to leverage the correlations between prompts and input features and excavate the relationships between different layers of prompts to carefully design the instructions. We also incorporate the complementary semantics of different modalities to guide the prompting design for each modality. Extensive experiments on three commonly-used datasets consistently demonstrate the superiority of our method compared to the previous approaches upon different missing scenarios. Plentiful ablations are further given to show the generalizability and reliability of our method upon different modality-missing ratios and types.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/92955"} +{"video_file": "zTu0QEpvtZ_39026609.mp4", "openreview_id": "zTu0QEpvtZ", "slideslive_id": 39026609, "venue": "nips2024", "title": "Towards Understanding the Working Mechanism of Text-to-Image Diffusion Model", "status": "Poster", "keywords": "text-to-image generation; working mechanism;", "tldr": "In this paper, we reveal the working mechanism of text-to-image diffusion model", "abstract": "Recently, the strong latent Diffusion Probabilistic Model (DPM) has been applied to high-quality Text-to-Image (T2I) generation (e.g., Stable Diffusion), by injecting the encoded target text prompt into the gradually denoised diffusion image generator. Despite the success of DPM in practice, the mechanism behind it remains to be explored. To fill this blank, we begin by examining the intermediate statuses during the gradual denoising generation process in DPM. The empirical observations indicate, the shape of image is reconstructed after the first few denoising steps, and then the image is filled with details (e.g., texture). The phenomenon is because the low-frequency signal (shape relevant) of the noisy image is not corrupted until the final stage in the forward process (initial stage of generation) of adding noise in DPM. Inspired by the observations, we proceed to explore the influence of each token in the text prompt during the two stages. After a series of experiments of T2I generations conditioned on a set of text prompts. We conclude that in the earlier generation stage, the image is mostly decided by the special token [\\texttt{EOS}] in the text prompt, and the information in the text prompt is already conveyed in this stage. After that, the diffusion model completes the details of generated images by information from themselves. Finally, we propose to apply this observation to accelerate the process of T2I generation by properly removing text guidance, which finally accelerates the sampling up to 25%+.", "primary_area": "diffusion_based_models", "site": "https://neurips.cc/virtual/2024/poster/92954"} +{"video_file": "zWuHSIALBh_39025203.mp4", "openreview_id": "zWuHSIALBh", "slideslive_id": 39025203, "venue": "nips2024", "title": "FLAME : Factuality-Aware Alignment for Large Language Models", "status": "Poster", "keywords": "large language models;factuality;alignment", "tldr": "We find that the standard alignment process encourages hallucination, and propose factuality-aware alignment while maintaining the LLM's general instruction-following capability.", "abstract": "Alignment is a procedure to fine-tune pre-trained large language models (LLMs) to follow natural language instructions and serve as helpful AI assistants. 
We have observed, however, that the conventional alignment process fails to enhance the factual accuracy of LLMs, and often leads to the generation of more false facts (i.e., hallucination). In this paper, we study how to make the LLM alignment process more factual, by first identifying factors that lead to hallucination in both alignment steps: supervised fine-tuning (SFT) and reinforcement learning (RL). In particular, we find that training the LLM on new or unfamiliar knowledge can encourage hallucination. This makes SFT less factual as it trains on human-labeled data that may be novel to the LLM. Furthermore, reward functions used in standard RL often inadequately capture factuality and favor longer and more detailed responses, which inadvertently promote hallucination. Based on these observations, we propose FactuaLity-aware AlignMEnt, comprised of factuality-aware SFT and factuality-aware RL through direct preference optimization. Experiments show that our proposed FLAME guides LLMs to output more factual responses while maintaining their instruction-following capability.", "primary_area": "natural_language_processing", "site": "https://neurips.cc/virtual/2024/poster/92950"} +{"video_file": "zZVqZRXSao_39027236.mp4", "openreview_id": "zZVqZRXSao", "slideslive_id": 39027236, "venue": "nips2024", "title": "Semantic Feature Learning for Universal Unsupervised Cross-Domain Retrieval", "status": "Poster", "keywords": "Image Retrieval;Domain Adaptation", "tldr": "We identify an unexplored but important problem called Universal Unsupervised Cross-Domain Retrieval, and propose a two-stage semantic feature learning framework to address it.", "abstract": "Cross-domain retrieval (CDR) is finding increasingly broad applications across various domains. However, existing efforts have several major limitations, with the most critical being their reliance on accurate supervision. Recent studies thus focus on achieving unsupervised CDR, but they typically assume that the category spaces across domains are identical, an assumption that is often unrealistic in real-world scenarios. This is because only through dedicated and comprehensive analysis can the category composition of a data domain be obtained, which contradicts the premise of unsupervised scenarios. Therefore, in this work, we introduce the problem of Universal Unsupervised Cross-Domain Retrieval (U^2CDR) for the first time and design a two-stage semantic feature learning framework to address it. In the first stage, a cross-domain unified prototypical structure is established under the guidance of an instance-prototype-mixed contrastive loss and a semantic-enhanced loss, to counteract category space differences. In the second stage, through a modified adversarial training mechanism, we ensure minimal changes for the established prototypical structure during domain alignment, enabling more accurate nearest-neighbor searching. 
Extensive experiments across multiple datasets and scenarios, including close-set, partial, and open-set CDR, demonstrate that our approach significantly outperforms existing state-of-the-art CDR methods and other related methods in solving U^2CDR challenges.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/92948"} +{"video_file": "za9Jx8yqUA_39028601.mp4", "openreview_id": "za9Jx8yqUA", "slideslive_id": 39028601, "venue": "nips2024", "title": "GenRL: Multimodal-foundation world models for generalization in embodied agents", "status": "Poster", "keywords": "world models;foundations models;reinforcement learning;multitask generalization", "tldr": "Connecting multimodal foundation models' representations with world models' representations for RL enables specifying task by vision or language prompts and learning the corresponding embodied behaviors in imagination.", "abstract": "Learning generalist embodied agents, able to solve multitudes of tasks in different domains is a long-standing problem. Reinforcement learning (RL) is hard to scale up as it requires a complex reward design for each task. In contrast, language can specify tasks in a more natural way. Current foundation vision-language models (VLMs) generally require fine-tuning or other adaptations to be adopted in embodied contexts, due to the significant domain gap. However, the lack of multimodal data in such domains represents an obstacle to developing foundation models for embodied applications. In this work, we overcome these problems by presenting multimodal-foundation world models, able to connect and align the representation of foundation VLMs with the latent space of generative world models for RL, without any language annotations. The resulting agent learning framework, GenRL, allows one to specify tasks through vision and/or language prompts, ground them in the embodied domain\u2019s dynamics, and learn the corresponding behaviors in imagination. As assessed through large-scale multi-task benchmarking in locomotion and manipulation domains, GenRL enables multi-task generalization from language and visual prompts. Furthermore, by introducing a data-free policy learning strategy, our approach lays the groundwork for foundational policy learning using generative world models. Website, code and data: https://mazpie.github.io/genrl/", "primary_area": "reinforcement_learning", "site": "https://neurips.cc/virtual/2024/poster/92947"} +{"video_file": "ziYC4FHRNr_39026075.mp4", "openreview_id": "ziYC4FHRNr", "slideslive_id": 39026075, "venue": "nips2024", "title": "Entrywise error bounds for low-rank approximations of kernel matrices", "status": "Poster", "keywords": "low-rank approximation;kernel methods;SVD;theory;error bounds", "tldr": "This paper proves an entrywise error bound on the low-rank approximation of a kernel matrix, obtained using the truncated eigen-decomposition (or singular value decomposition).", "abstract": "In this paper, we derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition (or singular value decomposition). While this approximation is well-known to be optimal with respect to the spectral and Frobenius norm error, little is known about the statistical behaviour of individual entries. Our error bounds fill this gap. 
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues, which takes inspiration from the field of Random Matrix Theory. Finally, we validate our theory with an empirical study of a collection of synthetic and real-world datasets.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/92940"} +{"video_file": "zkfCa4oESF_39026270.mp4", "openreview_id": "zkfCa4oESF", "slideslive_id": 39026270, "venue": "nips2024", "title": "TPR: Topology-Preserving Reservoirs for Generalized Zero-Shot Learning", "status": "Poster", "keywords": "Generalized Zero-shot Learning;Vision-Language Models;Contrastive Learning", "tldr": "a model incorporating dual-space alignment and topology-preserving strategies for GZSL", "abstract": "Pre-trained vision-language models (VLMs) such as CLIP have shown excellent performance for zero-shot classification. Based on CLIP, recent methods design various learnable prompts to evaluate the zero-shot generalization capability on a base-to-novel setting. This setting assumes test samples are already divided into either base or novel classes, limiting its application to realistic scenarios. In this paper, we focus on a more challenging and practical setting: generalized zero-shot learning (GZSL), i.e., testing with no information about the base/novel division. To address this challenging zero-shot problem, we introduce two unique designs that enable us to classify an image without the need of knowing whether it comes from seen or unseen classes. Firstly, most existing methods only adopt a single latent space to align visual and linguistic features, which has a limited ability to represent complex visual-linguistic patterns, especially for fine-grained tasks. Instead, we propose a dual-space feature alignment module that effectively augments the latent space with a novel attribute space induced by a well-devised attribute reservoir. In particular, the attribute reservoir consists of a static vocabulary and learnable tokens complementing each other for flexible control over feature granularity. Secondly, finetuning CLIP models (e.g., prompt learning) on seen base classes usually sacrifices the model's original generalization capability on unseen novel classes. To mitigate this issue, we present a new topology-preserving objective that can enforce feature topology structures of the combined base and novel classes to resemble the topology of CLIP. In this manner, our model will inherit the generalization ability of CLIP through maintaining the pairwise class angles in the attribute space. 
Extensive experiments on twelve object recognition datasets demonstrate that our model, termed Topology-Preserving Reservoir (TPR), outperforms strong baselines including both prompt learning and conventional generative-based zero-shot methods.", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/92938"} +{"video_file": "zkhyrxlwqH_39026164.mp4", "openreview_id": "zkhyrxlwqH", "slideslive_id": 39026164, "venue": "nips2024", "title": "Unsupervised Homography Estimation on Multimodal Image Pair via Alternating Optimization", "status": "Poster", "keywords": "Homography Estimation;Unsupervised Learning;Cross Domain;Image Registration;Image Alignment;Multimodal;Alternating Optimization", "tldr": "We introduce an unsupervised framework for estimating homography in multimodal images, improving performance by using a two-phase optimization and Barlow Twins loss.", "abstract": "Estimating the homography between two images is crucial for mid- or high-level vision tasks, such as image stitching and fusion. However, using supervised learning methods is often challenging or costly due to the difficulty of collecting ground-truth data. In response, unsupervised learning approaches have emerged. Most early methods, though, assume that the given image pairs are from the same camera or have minor lighting differences. Consequently, while these methods perform effectively under such conditions, they generally fail when input image pairs come from different domains, referred to as multimodal image pairs. To address these limitations, we propose AltO, an unsupervised learning framework for estimating homography in multimodal image pairs. Our method employs a two-phase alternating optimization framework, similar to Expectation-Maximization (EM), where one phase reduces the geometry gap and the other addresses the modality gap. To handle these gaps, we use Barlow Twins loss for the modality gap and propose an extended version, Geometry Barlow Twins, for the geometry gap. As a result, we demonstrate that our method, AltO, can be trained on multimodal datasets without any ground-truth data. It not only outperforms other unsupervised methods but is also compatible with various architectures of homography estimators. The source code can be found at: https://github.com/songsang7/AltO", "primary_area": "machine_vision", "site": "https://neurips.cc/virtual/2024/poster/92937"} +{"video_file": "zlgfRk2CQa_39026368.mp4", "openreview_id": "zlgfRk2CQa", "slideslive_id": 39026368, "venue": "nips2024", "title": "Rethinking Deep Thinking: Stable Learning of Algorithms using Lipschitz Constraints", "status": "Poster", "keywords": "machine learning;iterative algorithms;deep thinking;lipschitz;traveling salesperson problem;contraction mapping", "tldr": "Training machines to learn algorithms that are guaranteed to converge to a solution", "abstract": "Iterative algorithms solve problems by taking steps until a solution is reached. Models in the form of Deep Thinking (DT) networks have been demonstrated to learn iterative algorithms in a way that can scale to different sized problems at inference time using recurrent computation and convolutions. However, they are often unstable during training, and have no guarantees of convergence/termination at the solution. 
This paper addresses the problem of instability by analyzing the growth in intermediate representations, allowing us to build models (referred to as Deep Thinking with Lipschitz Constraints (DT-L)) with many fewer parameters and providing more reliable solutions. Additionally our DT-L formulation provides guarantees of convergence of the learned iterative procedure to a unique solution at inference time. We demonstrate DT-L is capable of robustly learning algorithms which extrapolate to harder problems than in the training set. We benchmark on the traveling salesperson problem to evaluate the capabilities of the modified system in an NP-hard problem where DT fails to learn.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/92936"} +{"video_file": "zm1LcgRpHm_39025597.mp4", "openreview_id": "zm1LcgRpHm", "slideslive_id": 39025597, "venue": "nips2024", "title": "Segment, Shuffle, and Stitch: A Simple Layer for Improving Time-Series Representations", "status": "Poster", "keywords": "Time Series;Deep Learning;Representation Learning;Temporal Mechanism", "tldr": "This paper presents S3, a modular neural network layer that is designed to enhance time-series representation learning by rearranging its segments, yielding improved results across various tasks with minimal computational overhead.", "abstract": "Existing approaches for learning representations of time-series keep the temporal arrangement of the time-steps intact with the presumption that the original order is the most optimal for learning. However, non-adjacent sections of real-world time-series may have strong dependencies. Accordingly, we raise the question: Is there an alternative arrangement for time-series which could enable more effective representation learning? To address this, we propose a simple plug-and-play neural network layer called Segment, Shuffle, and Stitch (S3) designed to improve representation learning in time-series models. S3 works by creating non-overlapping segments from the original sequence and shuffling them in a learned manner that is optimal for the task at hand. It then re-attaches the shuffled segments back together and performs a learned weighted sum with the original input to capture both the newly shuffled sequence along with the original sequence. S3 is modular and can be stacked to achieve different levels of granularity, and can be added to many forms of neural architectures including CNNs or Transformers with negligible computation overhead. Through extensive experiments on several datasets and state-of-the-art baselines, we show that incorporating S3 results in significant improvements for the tasks of time-series classification, forecasting, and anomaly detection, improving performance on certain datasets by up to 68%. We also show that S3 makes the learning more stable with a smoother training loss curve and loss landscape compared to the original baseline. 
The code is available at https://github.com/shivam-grover/S3-TimeSeries.", "primary_area": "deep_learning_architectures", "site": "https://neurips.cc/virtual/2024/poster/92935"} +{"video_file": "zqLAMwVLkt_39025890.mp4", "openreview_id": "zqLAMwVLkt", "slideslive_id": 39025890, "venue": "nips2024", "title": "Generative Semi-supervised Graph Anomaly Detection", "status": "Poster", "keywords": "Anomaly Detection;Graph Neural Network;Graph Anomaly Detection", "tldr": "We propose a novel Generative Graph Anomaly Detection approach (GGAD) for an under-explored semi-supervised setting that has only labeled normal nodes and establish an evaluation benchmark for the setting.", "abstract": "This work considers a practical semi-supervised graph anomaly detection (GAD) scenario, where part of the nodes in a graph are known to be normal, contrasting to the extensively explored unsupervised setting with a fully unlabeled graph. We reveal that having access to the normal nodes, even just a small percentage of normal nodes, helps enhance the detection performance of existing unsupervised GAD methods when they are adapted to the semi-supervised setting. However, their utilization of these normal nodes is limited. In this paper, we propose a novel Generative GAD approach (namely GGAD) for the semi-supervised scenario to better exploit the normal nodes. The key idea is to generate pseudo anomaly nodes, referred to as 'outlier nodes', for providing effective negative node samples in training a discriminative one-class classifier. The main challenge here lies in the lack of ground truth information about real anomaly nodes. To address this challenge, GGAD is designed to leverage two important priors about the anomaly nodes -- asymmetric local affinity and egocentric closeness -- to generate reliable outlier nodes that assimilate anomaly nodes in both graph structure and feature representations. Comprehensive experiments on six real-world GAD datasets are performed to establish a benchmark for semi-supervised GAD and show that GGAD substantially outperforms state-of-the-art unsupervised and semi-supervised GAD methods with varying numbers of training normal nodes.", "primary_area": "graph_neural_networks", "site": "https://neurips.cc/virtual/2024/poster/92932"} +{"video_file": "ztwl4ubnXV_39024646.mp4", "openreview_id": "ztwl4ubnXV", "slideslive_id": 39024646, "venue": "nips2024", "title": "OxonFair: A Flexible Toolkit for Algorithmic Fairness", "status": "Poster", "keywords": "Fairness Toolkit;Algorithmic Fairness;Trustworthy AI", "tldr": "We present a new toolkit for enforcing and measuring fairness with a focus on deep learning models.", "abstract": "We present OxonFair, a new open source toolkit for enforcing fairness in binary classification. Compared to existing toolkits: (i) We support NLP and Computer Vision classification as well as standard tabular problems. (ii) We support enforcing fairness on validation data, making us robust to a wide range of overfitting challenges. (iii) Our approach can optimize any measure based on True Positives, False Positive, False Negatives, and True Negatives. This makes it easily extensible and much more expressive than existing toolkits. It supports all 9 and all 10 of the decision-based group metrics of two popular review articles. (iv) We jointly optimize a performance objective alongside fairness constraints. This minimizes degradation while enforcing fairness, and even improves the performance of inadequately tuned unfair baselines. 
OxonFair is compatible with standard ML toolkits, including sklearn, Autogluon, and PyTorch and is available at https://github.com/oxfordinternetinstitute/oxonfair.", "primary_area": "infrastructure", "site": "https://neurips.cc/virtual/2024/poster/92930"} +{"video_file": "zuwLGhgxtQ_39028785.mp4", "openreview_id": "zuwLGhgxtQ", "slideslive_id": 39028785, "venue": "nips2024", "title": "A Separation in Heavy-Tailed Sampling: Gaussian vs. Stable Oracles for Proximal Samplers", "status": "Poster", "keywords": "Proximal samplers;Complexity of heavy-tailed sampling;Restricted Gaussian oracle;Restricted Stable oracle", "tldr": "Gaussian-based proximal samplers face accuracy limits sampling heavy-tailed targets. Stable-based samplers offer high-accuracy guarantees, surpassing this constraint.", "abstract": "We study the complexity of heavy-tailed sampling and present a separation result in terms of obtaining high-accuracy versus low-accuracy guarantees, i.e., samplers that require only O(log(1/\u03b5)) versus \u03a9(poly(1/\u03b5)) iterations to output a sample which is \u03b5-close to the target in \u03c7^2-divergence. Our results are presented for proximal samplers that are based on Gaussian versus stable oracles. We show that proximal samplers based on the Gaussian oracle have a fundamental barrier in that they necessarily achieve only low-accuracy guarantees when sampling from a class of heavy-tailed targets. In contrast, proximal samplers based on the stable oracle exhibit high-accuracy guarantees, thereby overcoming the aforementioned limitation. We also prove lower bounds for samplers under the stable oracle and show that our upper bounds cannot be fundamentally improved.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/92929"} +{"video_file": "zuwpeRkJNH_39025347.mp4", "openreview_id": "zuwpeRkJNH", "slideslive_id": 39025347, "venue": "nips2024", "title": "Procedure-Aware Surgical Video-language Pretraining with Hierarchical Knowledge Augmentation", "status": "Spotlight", "keywords": "Surgical Data Science;Video-language Pretraining;Multi-modal;Surgical Foundation Model", "tldr": "Surgical Vision-language Foundation Model (CLIP-like)", "abstract": "Surgical video-language pretraining (VLP) faces unique challenges due to the knowledge domain gap and the scarcity of multi-modal data. This study aims to bridge the gap by addressing issues regarding textual information loss in surgical lecture videos and the spatial-temporal challenges of surgical VLP. To tackle these issues, we propose a hierarchical knowledge augmentation approach and a novel Procedure-Encoded Surgical Knowledge-Augmented Video-Language Pretraining (PeskaVLP) framework. The proposed knowledge augmentation approach uses large language models (LLM) to refine and enrich surgical concepts, thus providing comprehensive language supervision and reducing the risk of overfitting. The PeskaVLP framework combines language supervision with visual self-supervision, constructing hard negative samples and employing a Dynamic Time Warping (DTW) based loss function to effectively comprehend the cross-modal procedural alignment. Extensive experiments on multiple public surgical scene understanding and cross-modal retrieval datasets show that our proposed method significantly improves zero-shot transferring performance and offers a generalist visual representation for further advancements in surgical scene understanding.
The source code will be available at https://github.com/CAMMA-public/PeskaVLP.", "primary_area": "machine_learning_for_healthcare", "site": "https://neurips.cc/virtual/2024/poster/92928"} +{"video_file": "zv9gYC3xgF_39027145.mp4", "openreview_id": "zv9gYC3xgF", "slideslive_id": 39027145, "venue": "nips2024", "title": "Toward Global Convergence of Gradient EM for Over-Parameterized Gaussian Mixture Models", "status": "Poster", "keywords": "over-parameterization;global convergence;non-convex optimization", "tldr": "We give the first global convergence of gradient EM for over-parameterized Gaussian mixture models when the ground truth is a single Gaussian.", "abstract": "We study the gradient Expectation-Maximization (EM) algorithm for Gaussian Mixture Models (GMM) in the over-parameterized setting, where a general GMM with n > 1 components learns from data that are generated by a single ground truth Gaussian distribution. While results for the special case of 2-Gaussian mixtures are well-known, a general global convergence analysis for arbitrary n remains unresolved and faces several new technical barriers since the convergence becomes sub-linear and non-monotonic. To address these challenges, we construct a novel likelihood-based convergence analysis framework and rigorously prove that gradient EM converges globally with a sublinear rate O(1/t). This is the first global convergence result for Gaussian mixtures with more than 2 components. The sublinear convergence rate is due to the algorithmic nature of learning over-parameterized GMM with gradient EM. We also identify a new emerging technical challenge for learning general over-parameterized GMM: the existence of bad local regions that can trap gradient EM for an exponential number of steps.", "primary_area": "learning_theory", "site": "https://neurips.cc/virtual/2024/poster/92926"} +{"video_file": "zzOOqD6R1b_39024537.mp4", "openreview_id": "zzOOqD6R1b", "slideslive_id": 39024537, "venue": "nips2024", "title": "Stress-Testing Capability Elicitation With Password-Locked Models", "status": "Poster", "keywords": "LLMs;Elicitation;Fine-tuning;Sandbagging;Red-teaming;Safety", "tldr": "We train models to behave poorly except when the prompt contains a password, and study when supervised fine-tuning and RL can recover high performance.", "abstract": "To determine the safety of large language models (LLMs), AI developers must be able to assess their dangerous capabilities. But simple prompting strategies often fail to elicit an LLM\u2019s full capabilities. One way to elicit capabilities more robustly is to fine-tune the LLM to complete the task. In this paper, we investigate the conditions under which fine-tuning-based elicitation suffices to elicit capabilities. To do this, we introduce password-locked models, LLMs fine-tuned such that some of their capabilities are deliberately hidden. Specifically, these LLMs are trained to exhibit these capabilities only when a password is present in the prompt, and to imitate a much weaker LLM otherwise. Password-locked models enable a novel method of evaluating capabilities elicitation methods, by testing whether these password-locked capabilities can be elicited without using the password. We find that a few high-quality demonstrations are often sufficient to fully elicit password-locked capabilities. More surprisingly, fine-tuning can elicit other capabilities that have been locked using the same password, or even different passwords.
Furthermore, when only evaluations, and not demonstrations, are available, approaches like reinforcement learning are still often able to elicit capabilities. Overall, our findings suggest that fine-tuning is an effective method of eliciting hidden capabilities of current models but may be unreliable when high-quality demonstrations are not available, e.g., as may be the case when models\u2019 (hidden) capabilities exceed those of human demonstrators.", "primary_area": "safety_in_machine_learning", "site": "https://neurips.cc/virtual/2024/poster/92923"} diff --git a/video/01XV5Za56k_39027005.mp4 b/video/01XV5Za56k_39027005.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c688190a4951cdb5e507239d35c1769462eb8602 --- /dev/null +++ b/video/01XV5Za56k_39027005.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e250c9dc6c1b52815211606721bec5463a9a3b865dc8092abd51c7f9ba614274 +size 2476134 diff --git a/video/01s5ODIHKd_39025842.mp4 b/video/01s5ODIHKd_39025842.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1a0fc48fa79aa15783ef4c0fcf803462fb85b264 --- /dev/null +++ b/video/01s5ODIHKd_39025842.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:863fa6994c4ed066a25eb370425ca805719b3ca84f6485b5357360c9c5e6dd47 +size 2305570 diff --git a/video/06JRFVK88O_39028540.mp4 b/video/06JRFVK88O_39028540.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d0b1e9f6efc1a2f48dfdcfe868904fbd43737d96 --- /dev/null +++ b/video/06JRFVK88O_39028540.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:660fa3cac49ed62a89b236ae4f58b9349ebd187907604c56c29600dd4e76eab6 +size 1971062 diff --git a/video/06Vt6f2js7_39024371.mp4 b/video/06Vt6f2js7_39024371.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d62dfb86a30d2f17766138e51155724aa8c43cc6 --- /dev/null +++ b/video/06Vt6f2js7_39024371.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b55eef8413bcdbcae64e9ba9daecca1a8d6d984efd2f8a4614ae261d927fafc +size 2051100 diff --git a/video/08GbdALmEs_39028523.mp4 b/video/08GbdALmEs_39028523.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0bd2713c35aeddd4032d667bd49ed33e7eafbda9 --- /dev/null +++ b/video/08GbdALmEs_39028523.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e991d9e48b5a9732d0b2053df7ba02d90223155c2158564a62b7c887cb6fe76f +size 1945164 diff --git a/video/09nyBqSdUz_39024909.mp4 b/video/09nyBqSdUz_39024909.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..967a5c9207bf04e7034c9459bb92e1cab94e4a3a --- /dev/null +++ b/video/09nyBqSdUz_39024909.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6301d0096abf462f6895819bdd715a81f77c704391f2d0d8b438e8514bd6d807 +size 1437882 diff --git a/video/0DE1dLMW2b_39024590.mp4 b/video/0DE1dLMW2b_39024590.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0851b95be6c0645661c84a3b97ccfd50ffea445b --- /dev/null +++ b/video/0DE1dLMW2b_39024590.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d2f5cbce1e6b308a0aa3e3ab5c0d79b3fdd3a64271c395f09bb080e7f98e85d7 +size 2133239 diff --git a/video/0G0VpMjKyV_39026169.mp4 b/video/0G0VpMjKyV_39026169.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a2aa1aa178dd24608f69d416772f5b8c26f51afe --- /dev/null +++ b/video/0G0VpMjKyV_39026169.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:c9b7beccbdaba068a8d21f182a2ee3684a377d7b54dd6f3efd4db3da08437ef4 +size 2037530 diff --git a/video/0JsRZEGZ7L_39017996.mp4 b/video/0JsRZEGZ7L_39017996.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..181c577b3fd53d6b3428abf389df76f763162e76 --- /dev/null +++ b/video/0JsRZEGZ7L_39017996.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a04ade027c2defdb00706ea70e9bc30dd100268e2c3c5ef9e552c91a168599b3 +size 2707109 diff --git a/video/0KvYLaTBTE_39028679.mp4 b/video/0KvYLaTBTE_39028679.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..20d995aea23b2606148a1e49a1fd2b4ec91d8403 --- /dev/null +++ b/video/0KvYLaTBTE_39028679.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd3fcdeb88d28510e96321851cc1118dd47ebad7abc4b385a2cf516f3c48cfd2 +size 2661456 diff --git a/video/0LXotew9Du_39025943.mp4 b/video/0LXotew9Du_39025943.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5344b40f68f038f54400edf53499a9c013321d88 --- /dev/null +++ b/video/0LXotew9Du_39025943.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a791e33dc6e4ec59ab1de2aaa90c87ba709efded37d4d1ac57e4a3af93eeb40 +size 2323153 diff --git a/video/0MXzbAv8xy_39026191.mp4 b/video/0MXzbAv8xy_39026191.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e646d7aa03a87dfdedb11589c272d702474a1903 --- /dev/null +++ b/video/0MXzbAv8xy_39026191.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:07852b46b5a01c1efd0d37a83352af35458a17e92e4b71789c0ccc4d09ba36cb +size 2125226 diff --git a/video/0SRJBtTNhX_39025353.mp4 b/video/0SRJBtTNhX_39025353.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b7989669fa3f0b51e2a078386fe268230f19ddd6 --- /dev/null +++ b/video/0SRJBtTNhX_39025353.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:228557bb10d1bbfcf1aaf2e67dcc6cfab235d32a4643038207b441a683b81520 +size 2214659 diff --git a/video/0TUMAAb3of_39027114.mp4 b/video/0TUMAAb3of_39027114.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..eebac71e51bc0c6afe50dc7ec00a97948c93f9e0 --- /dev/null +++ b/video/0TUMAAb3of_39027114.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:89563a6846fdee25b0ebd56c4163a935b0658cd147033d8dc3946df813a5f6ce +size 2056349 diff --git a/video/0WCFI2Qx85_39026309.mp4 b/video/0WCFI2Qx85_39026309.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2cd34080d3141f4c989e37cc9845c375c5e10283 --- /dev/null +++ b/video/0WCFI2Qx85_39026309.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1353d111d44bdbccc9ea88b3de4a5c337957022dbcd65290d4f937ac95e7defe +size 2631379 diff --git a/video/0XeNkkENuI_39024867.mp4 b/video/0XeNkkENuI_39024867.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..321f9f5fc601e5c0ad5e3744480fb8fe15dd4c20 --- /dev/null +++ b/video/0XeNkkENuI_39024867.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cfb6c90b0a06a739ba75c15f4ee25f20c2be9c950c8040426a08b8b1ee5d3a2c +size 2853218 diff --git a/video/0ZZMUjZJYF_39028519.mp4 b/video/0ZZMUjZJYF_39028519.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dc087ac21a019c7b12b5aedc1df69c29863cdce0 --- /dev/null +++ b/video/0ZZMUjZJYF_39028519.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:d8c627cae0a9e88bcb726a0131c7219d8c8f336e4ce21072e51f4464fb7a4f92 +size 2663643 diff --git a/video/0ZeONp33f0_39024716.mp4 b/video/0ZeONp33f0_39024716.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..92aab23e272ac0523f9005d70134d13fe5d32237 --- /dev/null +++ b/video/0ZeONp33f0_39024716.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e3caba54b64a4d8307f544b23cfe6af18e10068b80d795341a80199ff350a1b +size 1027738 diff --git a/video/0aN7VWwp4g_39026675.mp4 b/video/0aN7VWwp4g_39026675.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b52b43844674efbb182413a84858b82df85a5f08 --- /dev/null +++ b/video/0aN7VWwp4g_39026675.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e08b8be03f995c0cb515aa337123ae839cc3751905730435e8a52cabfdc534f +size 2514571 diff --git a/video/0akLDTFR9x_39018886.mp4 b/video/0akLDTFR9x_39018886.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..39a2b64926928d25727c27eae05aaee618df0339 --- /dev/null +++ b/video/0akLDTFR9x_39018886.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d4db7780281a92aa2c987614f095716c5bc24da442760840de95270c8fe3c98f +size 2810035 diff --git a/video/0bFXbEMz8e_39028296.mp4 b/video/0bFXbEMz8e_39028296.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d2030d9b58ce2ee9d72327e1b1f34b5787d4aee4 --- /dev/null +++ b/video/0bFXbEMz8e_39028296.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:db5b29e7695a38757d0e5638f1a018d153e3a77b5970b4a327baead30f311230 +size 1625663 diff --git a/video/0cgDDa4OFr_39028117.mp4 b/video/0cgDDa4OFr_39028117.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3820f7a033604b2638d37f85f59ce2b60f727488 --- /dev/null +++ b/video/0cgDDa4OFr_39028117.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16de85417b86ee20974f15df41288bf688295e646689e0177d6d647d0c20d862 +size 2372464 diff --git a/video/0d50Il6enG_39028894.mp4 b/video/0d50Il6enG_39028894.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0b36aa9859936bdccf7dabcbfd3b2fc548127acc --- /dev/null +++ b/video/0d50Il6enG_39028894.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc4ac9f6bd7b7879b1b176fed89a2a9a13efef2467c3963840301571e13dd0d2 +size 2584775 diff --git a/video/0dtA21q83C_39026385.mp4 b/video/0dtA21q83C_39026385.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..14156b3d9b86f198108a2daf24fe97a97a909f1a --- /dev/null +++ b/video/0dtA21q83C_39026385.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:624fb7054834d15e0bdff762a6d3bceecb885395cf917ffa009b848b8f226c0a +size 2159731 diff --git a/video/0feJEykDRx_39024610.mp4 b/video/0feJEykDRx_39024610.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0e01037cef10a2f4fada7e2d8779bdcb22040a53 --- /dev/null +++ b/video/0feJEykDRx_39024610.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b29611e812b470bae0ef1faba4284fb3708b58e30fbc38d9cf5e03343e1d0686 +size 59319 diff --git a/video/0jld45XGgJ_39028310.mp4 b/video/0jld45XGgJ_39028310.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6fcff1fb8ba002fbc862d7512b39c92f0b17e525 --- /dev/null +++ b/video/0jld45XGgJ_39028310.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:db3012854d28f0d05b6b3e7b273e72189c3b482a30269d5c584fec75c623312d +size 
2826685 diff --git a/video/0jsfesDZDq_39018608.mp4 b/video/0jsfesDZDq_39018608.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5653e01352a6a72fbbae7fdd735bd89be7c7ac95 --- /dev/null +++ b/video/0jsfesDZDq_39018608.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d63b0cb9a3a7113489d700943258e9548b552f8096c99e2f8ce9b234e27eb251 +size 1828944 diff --git a/video/0m19blQT6y_39028770.mp4 b/video/0m19blQT6y_39028770.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d2da8fb38349102afcdce8ea47baa07d21edd7be --- /dev/null +++ b/video/0m19blQT6y_39028770.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e21f7214370a49665e1affc1ce46b86af278550aae138822ab1f615e5d418183 +size 2860701 diff --git a/video/0og7nmvDbe_39028701.mp4 b/video/0og7nmvDbe_39028701.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..36fe04687ea69850b3a5beeec3466dfaa3f2bc9f --- /dev/null +++ b/video/0og7nmvDbe_39028701.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:111bfba86eb07ab52a836b8e7463e39c3c432675f71cc9d542648ba6decbfbf6 +size 2807355 diff --git a/video/0qb8KoPsej_39025925.mp4 b/video/0qb8KoPsej_39025925.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..81fb012f44456f76cf3ac50f2e3fc89716cc9b07 --- /dev/null +++ b/video/0qb8KoPsej_39025925.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:76157749ed8bf4753fc351b2d855c24bc52661e71784e045571f7d3f58719a07 +size 2574417 diff --git a/video/0t1O8ziRZp_39018974.mp4 b/video/0t1O8ziRZp_39018974.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7befbd7faeaf80d2c6464def385d447c4cba4a9b --- /dev/null +++ b/video/0t1O8ziRZp_39018974.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5307706191c710c522137a0092dd67b8223dc448d1078330b80653c9edb63916 +size 8306 diff --git a/video/0uI5415ry7_39018607.mp4 b/video/0uI5415ry7_39018607.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4ebcc7895aed0c7f56c2901e25f5680cffe38917 --- /dev/null +++ b/video/0uI5415ry7_39018607.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:960148a8cdc4976b7ae645782d10f2aae9e906014156b5dab15f01c3575041ea +size 2938305 diff --git a/video/0uXtFk5KNJ_39028687.mp4 b/video/0uXtFk5KNJ_39028687.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..45b533c2f92fc133a7f5ce3a4f9a00d421849205 --- /dev/null +++ b/video/0uXtFk5KNJ_39028687.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8575ea7601a524bd56837f57f5a2944e0a997f98df3cfe0fbd51b92bfe7e1e3c +size 2703761 diff --git a/video/0zFVhMBZHJ_39028527.mp4 b/video/0zFVhMBZHJ_39028527.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..04f80c5fb55fd00cc8255ef73964b0970a8384fe --- /dev/null +++ b/video/0zFVhMBZHJ_39028527.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cda7588e02ee8be2e980e4b7219506d4ac7c58ed5611ae09c4a952a15b8b4870 +size 1421127 diff --git a/video/0zWzJj6lO3_39024377.mp4 b/video/0zWzJj6lO3_39024377.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..de66c0aee92b1ba2c768ef8ff040077f11db2bee --- /dev/null +++ b/video/0zWzJj6lO3_39024377.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dfcfaadfed9653d6ef4f664ed2cfc3519191699844119c3398f8a1260244a88f +size 3376963 diff --git a/video/105ZuvpdyW_39027053.mp4 b/video/105ZuvpdyW_39027053.mp4 
new file mode 100644 index 0000000000000000000000000000000000000000..002239c46fdb409c3e9d6c421f0f6bc616cf836c --- /dev/null +++ b/video/105ZuvpdyW_39027053.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d954c93e93b39557fd38205ac258b99cdcffc98068a45b63a3402b343cc49eb +size 2792255 diff --git a/video/1067784F6e_39028634.mp4 b/video/1067784F6e_39028634.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f0349d9ec90b3263a640257a48a0460c9615e713 --- /dev/null +++ b/video/1067784F6e_39028634.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:612fee1199bf766f5df062d131d85650fec2993450afcf2c6c31523f6b5af740 +size 2561308 diff --git a/video/164QnJsYjF_39026649.mp4 b/video/164QnJsYjF_39026649.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..502b4b9386378fdaf4e2ac09dfff35c8e21a7da6 --- /dev/null +++ b/video/164QnJsYjF_39026649.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0eca7ab10680437480bf01305eaa5f05247eb65cbec2e593d91521a36320a730 +size 2661214 diff --git a/video/17pVDnpwwl_39018606.mp4 b/video/17pVDnpwwl_39018606.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bbe6b9f40e6df09c778d9770c7960044bdfffaa9 --- /dev/null +++ b/video/17pVDnpwwl_39018606.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d34ec0e8dc2dff079cd5382f143da0a99ae53bef6ebcd68268508f92fff959c7 +size 2590594 diff --git a/video/18RdkSv9h9_39028034.mp4 b/video/18RdkSv9h9_39028034.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..624eb5f6c7bd7ff6f1483f42e30fc7bb3685ea72 --- /dev/null +++ b/video/18RdkSv9h9_39028034.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67d05145ade3d6cd510fadf36ceff27e303e8513cc22a344bcc539bb595e6253 +size 2580799 diff --git a/video/1CK45cqkEh_39017547.mp4 b/video/1CK45cqkEh_39017547.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b88123722651c42e8879022d31b8b3bc3702316c --- /dev/null +++ b/video/1CK45cqkEh_39017547.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93275d824c2b508fc54d352e72a6f8dbe5257e2b960c564a62c441763a28c990 +size 1954199 diff --git a/video/1ELFGSNBGC_39028060.mp4 b/video/1ELFGSNBGC_39028060.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..269839ade521e2c562da7f9c127a6d7b7ff65540 --- /dev/null +++ b/video/1ELFGSNBGC_39028060.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5340bde19e5891bf85da854754e96ceb56d4f86c102f10bff4874cb7a269547e +size 2407086 diff --git a/video/1MCseWaFZb_39027443.mp4 b/video/1MCseWaFZb_39027443.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..36ca0d88d1c84c41acdd854eab663936e58f73f5 --- /dev/null +++ b/video/1MCseWaFZb_39027443.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a678a701eae617ffac248283a42edfc93cfde2dc745bd0dae03b127564bcf44f +size 2031653 diff --git a/video/1NHgmKqOzZ_39018605.mp4 b/video/1NHgmKqOzZ_39018605.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..07deae96d4e661814cadf1b3a652b47fe7751a6b --- /dev/null +++ b/video/1NHgmKqOzZ_39018605.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7610a433b2cd8349439ccd7c4a07e5aa5593a8366248a011721a1a0bcda22a65 +size 2320584 diff --git a/video/1PmsSugB87_39025568.mp4 b/video/1PmsSugB87_39025568.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..900c57f98bc0a16ace49598e616f7f7ed6afd861 --- /dev/null +++ b/video/1PmsSugB87_39025568.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d80f07e5a76127b59de2af101efc3c4976ae1af7720bf77fc38b8a42ce35f9b8 +size 2460336 diff --git a/video/1bAUywYJTU_39018982.mp4 b/video/1bAUywYJTU_39018982.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8b15e05a4d6e0c54c391e04beae774cf5938619b --- /dev/null +++ b/video/1bAUywYJTU_39018982.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec802813a8003775016bfb9083ff48e52b971310cc2a05d8e62b5e7e4c696a6e +size 2506695 diff --git a/video/1cXdndzkxU_39025878.mp4 b/video/1cXdndzkxU_39025878.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..464eb6c54adc2b7b67bf14d1262ed9ce2268659c --- /dev/null +++ b/video/1cXdndzkxU_39025878.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c1ff4dc57583f70f6ecfb6c5bc8c748bf0bfe98f83062dd40a29488aee01efd4 +size 1517464 diff --git a/video/1e3MOwHSIX_39027082.mp4 b/video/1e3MOwHSIX_39027082.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e28f11909be040410cd5a3fa7b97b238d388163f --- /dev/null +++ b/video/1e3MOwHSIX_39027082.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58f40bd3de68de72933d3f27117427601e301d9abb2736ca2ee6c5d4c58925cd +size 2804773 diff --git a/video/1f82rnwCbl_39024913.mp4 b/video/1f82rnwCbl_39024913.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6d1749c3b30de4d971a3971177d5f74de1073450 --- /dev/null +++ b/video/1f82rnwCbl_39024913.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6309e066592bf03711171cb8606abef06de77fa73309c36f44d8ccfd4e21279d +size 2376849 diff --git a/video/1hsVvgW0rU_39018600.mp4 b/video/1hsVvgW0rU_39018600.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e26c085e24378519daa65bc72df2c102a9ccd252 --- /dev/null +++ b/video/1hsVvgW0rU_39018600.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4986970e693952bc2a481e85bdfbdec60c5d8ad2f87ba4c469522198d5a5302c +size 2219426 diff --git a/video/1iHmhMHNyA_39027774.mp4 b/video/1iHmhMHNyA_39027774.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5e3ef5fff96d9cace50b4e13cdaf4ce32238f3d0 --- /dev/null +++ b/video/1iHmhMHNyA_39027774.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3f86b21c3da5608abd5f9ea286616123a1099d17bc076c50a9b787097fae0b6a +size 3141344 diff --git a/video/1jbh2e0b2K_39018516.mp4 b/video/1jbh2e0b2K_39018516.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0e8dca1114faae5bcf661d935cf12568481123db --- /dev/null +++ b/video/1jbh2e0b2K_39018516.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e07e98299b673128d937d6d3e04a639603878c6da40c9f4a5d73d7218cd87b99 +size 2639919 diff --git a/video/1l9cEyFmxg_39025080.mp4 b/video/1l9cEyFmxg_39025080.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d410c994c03be1b384c4b5d49ed46a93455bb6bc --- /dev/null +++ b/video/1l9cEyFmxg_39025080.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ce1cf3a28f53905497635b724145a34a83e45de321a95602c58c067b5d4a826 +size 2978707 diff --git a/video/1mAaewThcz_39024527.mp4 b/video/1mAaewThcz_39024527.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..3b792987e2a7f6c1f5f46af1d56954939a069193 --- /dev/null +++ b/video/1mAaewThcz_39024527.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51c66ee4fa370216df5cd539f32181fc482c18ce0f75f90b522fbeedc8f7ccd1 +size 2878628 diff --git a/video/1mNFsbvo2P_39018752.mp4 b/video/1mNFsbvo2P_39018752.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..27866fd8ca8071cf3aae954c073db6c01b6b12fe --- /dev/null +++ b/video/1mNFsbvo2P_39018752.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dc849b18b42488b8a23a6ca42ae0e98768f3b495a71fc6c264a079f2d9272161 +size 3098560 diff --git a/video/1op5YGZu8X_39018512.mp4 b/video/1op5YGZu8X_39018512.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4e250de4ba7a4eb3adecd62a1fb7f9376feb5394 --- /dev/null +++ b/video/1op5YGZu8X_39018512.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0db1b73f57323c6797c5c60f41ee27c305926b3839df82e2a59747856f4cdfb4 +size 2493023 diff --git a/video/1po4j1Tv7O_39026881.mp4 b/video/1po4j1Tv7O_39026881.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5858de8003f5a794bf3688da2f93ae584a39ae39 --- /dev/null +++ b/video/1po4j1Tv7O_39026881.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b9d53647c8faf04a2d35a2ce8a99eb136f833cf62fcbe8cec62ee3fce4d1f10f +size 2550706 diff --git a/video/1qfdCAXn6K_39028051.mp4 b/video/1qfdCAXn6K_39028051.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6f1ffd894516627d915befb33e228c03cff94f0d --- /dev/null +++ b/video/1qfdCAXn6K_39028051.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c6534b55c09c716b566b5d2054defe800f04fd49e4efde450465cae915ed3ba3 +size 2675207 diff --git a/video/1u3qkG7BkQ_39026781.mp4 b/video/1u3qkG7BkQ_39026781.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..57c2a1be9ca8b68fee02a4d4140671b5b2e85b45 --- /dev/null +++ b/video/1u3qkG7BkQ_39026781.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b82a6d25424fb75424507c915936f7b49da3095d3a76c0dce7584e9e78a39924 +size 2044618 diff --git a/video/1v0BPTR3AA_39028812.mp4 b/video/1v0BPTR3AA_39028812.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..865e122d2636f4912348fafa8d6d085ff057a802 --- /dev/null +++ b/video/1v0BPTR3AA_39028812.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:22ab8e04fdfe108d4a0e1f66e13dd96a168e6a8ebec9ffd2fe72526732870020 +size 2621193 diff --git a/video/1v4gKsyGfe_39028352.mp4 b/video/1v4gKsyGfe_39028352.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..801d6eb965026303de972c63c636288abbe6b4fa --- /dev/null +++ b/video/1v4gKsyGfe_39028352.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1d9284814064b01a1d85be1f9d8c80cb8c005892ef2a7840cb0b7e281853078 +size 2076307 diff --git a/video/1vPqOmqSfO_39028431.mp4 b/video/1vPqOmqSfO_39028431.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9e39ad2905708acfe350243d32f167d573eb2052 --- /dev/null +++ b/video/1vPqOmqSfO_39028431.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20347aa977cc18bb8a3ae9af0d8f74331f3e07adc93405e620f23420bce0be94 +size 1954979 diff --git a/video/1vmSEVL19f_39018510.mp4 b/video/1vmSEVL19f_39018510.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..be530001be088eb5c41930b79cf04d5d64361c2a --- /dev/null +++ b/video/1vmSEVL19f_39018510.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:17616d60000d67130c7bc242a67bca991a1eef5db6209e136f172690eb53a423 +size 3023487 diff --git a/video/1ziIqFo4Tj_39027389.mp4 b/video/1ziIqFo4Tj_39027389.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..34ee7448697f9ba68110cd964dcd7f24b72f2b61 --- /dev/null +++ b/video/1ziIqFo4Tj_39027389.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f378318fb8d1f8e03e4ea95cf946b2224a1eb391154c522194ae427cd8147b39 +size 2864833 diff --git a/video/204YOrDHny_39024844.mp4 b/video/204YOrDHny_39024844.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4e126ec421a9cbe71d004a8b83a57e298df52d69 --- /dev/null +++ b/video/204YOrDHny_39024844.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d9fb685ce7d1a72e5b4635182b9fc28278b9931bdf96183f8784500ca01a3807 +size 1754961 diff --git a/video/20QgErW5zH_39025814.mp4 b/video/20QgErW5zH_39025814.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..737959e4cb2ec28fd684777f5daffd6dd4184901 --- /dev/null +++ b/video/20QgErW5zH_39025814.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:43ea1fa1af8b748fe472f08aae9d78e77aa1b979d38e35bf6e68a79918a54aa8 +size 1342594 diff --git a/video/22OTbutug9_39018903.mp4 b/video/22OTbutug9_39018903.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f9a0d1ae0f3a9261774d91b1c770aade125914fb --- /dev/null +++ b/video/22OTbutug9_39018903.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f79b9d4cfcb26f418fc7cd58bc75a404020eb088ef4d34606a913eaa2daf3404 +size 2448000 diff --git a/video/25Ioxw576r_39028571.mp4 b/video/25Ioxw576r_39028571.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6a901baaf8f5b229f3c66b9dc95a9116ec48ba0f --- /dev/null +++ b/video/25Ioxw576r_39028571.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6de6d702e38b8475dd8ad6a528f9a1c0ec687b42ecc0a14a75a2210c03dd725 +size 2226622 diff --git a/video/26BdXIY3ik_39027955.mp4 b/video/26BdXIY3ik_39027955.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5ccea228eb442ab9d0e64cd1bd1855e00a26d42c --- /dev/null +++ b/video/26BdXIY3ik_39027955.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:be4d74aee42e16ec87dcf1202229e741afb683660a815fba092c952161167f16 +size 2538155 diff --git a/video/2AIwiIkE0s_39025562.mp4 b/video/2AIwiIkE0s_39025562.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2c9ce6c2a5ba17245bb79099f54f84bd485c6e1f --- /dev/null +++ b/video/2AIwiIkE0s_39025562.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f52a1662c0311cfdb762e8477e23b0aad15dd5ace8990cd22066ecded19cc1e +size 2807193 diff --git a/video/2DbVeuoa6a_39018688.mp4 b/video/2DbVeuoa6a_39018688.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..394ccc92a750981f06160f54115b2aa60a599ac4 --- /dev/null +++ b/video/2DbVeuoa6a_39018688.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:549dfeefee14dafdd32cfd0b9a792ddbfa4e0b2bc4f5cc6a40c7cde1a83cc4c5 +size 2310658 diff --git a/video/2HvgvB4aWq_39027405.mp4 b/video/2HvgvB4aWq_39027405.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..924c316c5bd1c49b141e3fc2b766684ae1fd6397 --- /dev/null +++ b/video/2HvgvB4aWq_39027405.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dbf8124d800ca08fc5fe86f4a84e1cfb826901e95969e01ef0bbfaadf06bb84e +size 2633529 diff --git a/video/2Inwtjvyx8_39028685.mp4 b/video/2Inwtjvyx8_39028685.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8179b29e003cdf2c80ec45911ea7dc86ccd8af7d --- /dev/null +++ b/video/2Inwtjvyx8_39028685.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af1cbb9cc807ae98300b4e45a15ecfd604ca6bbf5dc04dbbf9bafbbd113761ce +size 3167979 diff --git a/video/2KuZHYykkq_39028396.mp4 b/video/2KuZHYykkq_39028396.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2d1149787e569a2a82c4625255fe9d0f216d9323 --- /dev/null +++ b/video/2KuZHYykkq_39028396.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f5e4bc1a204c9e7e111bf187b0f493e1f17d7be569dbb8810d43b1b45fde5f4 +size 862478 diff --git a/video/2LctgfN6Ty_39024503.mp4 b/video/2LctgfN6Ty_39024503.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9d82db6eed77b08abf7ab3310e882cb7e9dbc705 --- /dev/null +++ b/video/2LctgfN6Ty_39024503.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af8f1d71eca6e63bc3fe7fc18c9ca1449d37efef829becaf0105b0beae571861 +size 2897537 diff --git a/video/2NfBBpbN9x_39025188.mp4 b/video/2NfBBpbN9x_39025188.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d7e3f245c3a300de1da79ef638605c3df7afea46 --- /dev/null +++ b/video/2NfBBpbN9x_39025188.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:423e7788a76c762e5fb8b2503004e36da5ba4b209f0d5909b8c63301c5ebe64c +size 2484317 diff --git a/video/2RS0fL7Eet_39025418.mp4 b/video/2RS0fL7Eet_39025418.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..91f12ca01f0e06111495f83b99adfcf5bb7f55f7 --- /dev/null +++ b/video/2RS0fL7Eet_39025418.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ebdb33ed0a0ebcb0f005b376e67ca28e3d6a147382a0de35b6e92b661914e15f +size 3333380 diff --git a/video/2Rwq6c3tvr_39017046.mp4 b/video/2Rwq6c3tvr_39017046.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6040516b529f34810f722005e1309714361535e2 --- /dev/null +++ b/video/2Rwq6c3tvr_39017046.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d7cb695ef10b2d1eaa5a7b97572c04e86d4dc175ff03bae2a0f71f4f179e402 +size 2372751 diff --git a/video/2UJLv3KPGO_39025768.mp4 b/video/2UJLv3KPGO_39025768.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..816740e303af9e010524a59c8b7cfae59efc882d --- /dev/null +++ b/video/2UJLv3KPGO_39025768.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:38320e65b40b3f426321a51c2957d1df81a3a36807205868798f4adeb26bae83 +size 3244527 diff --git a/video/2UnCj3jeao_39019056.mp4 b/video/2UnCj3jeao_39019056.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..85bfa4bacee9891bf68ccb91fdba1f56952d903d --- /dev/null +++ b/video/2UnCj3jeao_39019056.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2c92c3ce0490eee24591002662b54961b2572f7ac7fd98bf341de47de41ac2f4 +size 6947367 diff --git a/video/2XkTz7gdpc_39018499.mp4 b/video/2XkTz7gdpc_39018499.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..4b55eb4e26fdded63a7f10e6d4a83da0932f4a97 --- /dev/null +++ b/video/2XkTz7gdpc_39018499.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70c15fcdfb143921598d190518fb36c0e44c682aac48d2176f813633228b7da4 +size 2254328 diff --git a/video/2YSHEBRRol_39028382.mp4 b/video/2YSHEBRRol_39028382.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cfeae2f20236c800cd305006833dc5246a487924 --- /dev/null +++ b/video/2YSHEBRRol_39028382.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21a6ed6f4f3c8122af58ddef4256610768b31ef422a928bb58c42dcff3876cee +size 1372035 diff --git a/video/2bdSnxeQcW_39027683.mp4 b/video/2bdSnxeQcW_39027683.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..95afd32aeae3990ae8460f51a63da93b83c6daf6 --- /dev/null +++ b/video/2bdSnxeQcW_39027683.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b755c0e5c2dc117d2a4f2a124f15e9803baf6e91abc920c7514786c6ae111570 +size 2093878 diff --git a/video/2cFUYnNL1m_39028014.mp4 b/video/2cFUYnNL1m_39028014.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..df1a9c988305a2561f12b06a9dfd2d4765511ba1 --- /dev/null +++ b/video/2cFUYnNL1m_39028014.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c38eaffbbb5a7eb032594c67c99e37c4d3741fee40c026c15a3b813977a5ebf +size 2828377 diff --git a/video/2cczgOfMP4_39024825.mp4 b/video/2cczgOfMP4_39024825.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..75b916b9ce57e73177ec3fba647581b4576f9754 --- /dev/null +++ b/video/2cczgOfMP4_39024825.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8766e1bbaf148e6564d4a74f1fea1763e60a9963a83b8ae7c6843a7a735cd414 +size 3105005 diff --git a/video/2dhxxIKhqz_39018800.mp4 b/video/2dhxxIKhqz_39018800.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f7beda6b6f0c9456961c21061b81d45725f8624f --- /dev/null +++ b/video/2dhxxIKhqz_39018800.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1fa468875e492b0a2c326b24713e67e76af5af922c2d0d3b2f2551d9b427ecf +size 2804497 diff --git a/video/2iGiSHmeAN_39018498.mp4 b/video/2iGiSHmeAN_39018498.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b915b6b97d968499f9beb894bcccf489372e7e56 --- /dev/null +++ b/video/2iGiSHmeAN_39018498.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e382cc146dfcb12f1233ebbd99c7cf167c4b9fc3091b66a73dc365f3d8076fd +size 3093613 diff --git a/video/2kZMtdjzSV_39026429.mp4 b/video/2kZMtdjzSV_39026429.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cb63ac656593dc7dc9aa81612a92246a92325c75 --- /dev/null +++ b/video/2kZMtdjzSV_39026429.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:660e08bba73a30329c062eb7b678f921881ae0ad809bf25694df350bc88a3e54 +size 1883768 diff --git a/video/2lL7s5ESTj_39027706.mp4 b/video/2lL7s5ESTj_39027706.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1826385bfe804a2d68d0430e5cdf014c57e7d152 --- /dev/null +++ b/video/2lL7s5ESTj_39027706.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9753e22128e097b4bcf2b213f80b367255c3fe779e4682f73f115236252a783a +size 2770842 diff --git a/video/2nisrxMMQR_39024627.mp4 b/video/2nisrxMMQR_39024627.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..c66f6a65fd1a2c638f1817ed87f951e241fb559a --- /dev/null +++ b/video/2nisrxMMQR_39024627.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b55b2a22957f670a06bb516cd2ea26cf1eeeee0e6add2ceb2d907139ccedbb3b +size 9479039 diff --git a/video/2oWRumm67L_39018494.mp4 b/video/2oWRumm67L_39018494.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2bc0b39e6d26ccbc5650ca0d7dcd92b9a599afbe --- /dev/null +++ b/video/2oWRumm67L_39018494.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:142ae0fae4b18d991c4420b9839cbeb6426f8737467a8806fb589c855a674096 +size 2939761 diff --git a/video/2oZea6pKhl_39026148.mp4 b/video/2oZea6pKhl_39026148.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..30df450d6c52f49304dca4e7c16c41fa56d6040a --- /dev/null +++ b/video/2oZea6pKhl_39026148.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2de9d1f2d5d8dbd3e67b9df8acf3a7db29ec362da5bf9b0d43a98f532daebe4a +size 2020799 diff --git a/video/2pgc5xDJ1b_39025605.mp4 b/video/2pgc5xDJ1b_39025605.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b64fd86bcd50d530fdc1164a9458729df6ebe6b5 --- /dev/null +++ b/video/2pgc5xDJ1b_39025605.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:810f2f93e27862785862bc80b09adb65b4328f8730c95758896f64d9e8a8db57 +size 2597046 diff --git a/video/2squ766Iq4_39028198.mp4 b/video/2squ766Iq4_39028198.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0895c6f3263554af4a199162ccd288b9376299c0 --- /dev/null +++ b/video/2squ766Iq4_39028198.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ad103e1316ad7771b3a798af4e1aa5bd6ee33e4c91308e32a6cf41d56c8fc85 +size 1843151 diff --git a/video/2vMvh5XP0P_39025136.mp4 b/video/2vMvh5XP0P_39025136.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..da26bee2e94f7049d86dd536d7125b2a0ce3d9a0 --- /dev/null +++ b/video/2vMvh5XP0P_39025136.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7b3d0069f7131d012a69a8a313d093f63e2df6c1c550a84bec252b53859c93f +size 66718 diff --git a/video/2wlNnIqCb7_39027989.mp4 b/video/2wlNnIqCb7_39027989.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f36de12add15c1a1aa65a4b8977626f0c8ccc632 --- /dev/null +++ b/video/2wlNnIqCb7_39027989.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c36b011fac51775d80d0c3e2689bef36c835732fd6b5b2a30bb6c57b04a88fa +size 2620582 diff --git a/video/2zWbzx50mH_39026340.mp4 b/video/2zWbzx50mH_39026340.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..366508f978f0dc1b0bd9b3e1e06c7a94b8d22025 --- /dev/null +++ b/video/2zWbzx50mH_39026340.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44f1a892ce38fcecece8249ee26c25f618a77af0672b91b8a30cf7d6f85ea10e +size 2366468 diff --git a/video/30N3bNAiw3_39017176.mp4 b/video/30N3bNAiw3_39017176.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..90bf82062e0a81651b4ad3775b0758c28ba85086 --- /dev/null +++ b/video/30N3bNAiw3_39017176.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a08e43f4ca41043ac1ef6d6e1ffad62250ea6bf5f2da95851fe430cf1fcdec75 +size 2215920 diff --git a/video/31IOmrnoP4_39018490.mp4 b/video/31IOmrnoP4_39018490.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3cc0a6f923d5fb997bf42349e538d512ff475d70 
--- /dev/null +++ b/video/31IOmrnoP4_39018490.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:474b58d8aed41ccbdebc54b56c0cf4e6d94255d106e5bd3c23748401fb5bbf79 +size 2506724 diff --git a/video/31xWlIdxTm_39024822.mp4 b/video/31xWlIdxTm_39024822.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5069fa472d67f9d905575ce59abe5317315f6cbb --- /dev/null +++ b/video/31xWlIdxTm_39024822.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:59281cce2c62cf03390ab5aba2e04c276ba0128c314819ae748dd99cfa290f3b +size 2277341 diff --git a/video/327tbF3S65_39019212.mp4 b/video/327tbF3S65_39019212.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4c1a09b9240ceddfa4d789040e08f82af876d9c9 --- /dev/null +++ b/video/327tbF3S65_39019212.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64c3b294f7afb2d26989117982cf88040d9e145a6f15e27e196b018778b261b1 +size 3118863 diff --git a/video/337dHOexCM_39027342.mp4 b/video/337dHOexCM_39027342.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9746c105fdef412966d027406bfeca21ffb0b10f --- /dev/null +++ b/video/337dHOexCM_39027342.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:26eb2f0b6d3eb1148cc6c08df66590694779ae7d62d24ea8651700e0e50ed3ed +size 2388955 diff --git a/video/348hfcprUs_39025490.mp4 b/video/348hfcprUs_39025490.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f8fe4bdc9fd1d3d5e41d562bdd8807271da0c86f --- /dev/null +++ b/video/348hfcprUs_39025490.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:07f8a7167e8ccf811cad294693435e001214c8f6aa8b4226efe35d0edbf5b328 +size 2319939 diff --git a/video/35DAviqMFo_39027796.mp4 b/video/35DAviqMFo_39027796.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fdd48d04018d16023a20e2a7035b18d95f4e5585 --- /dev/null +++ b/video/35DAviqMFo_39027796.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0747e83b1c1858a8d421e9b970d7d0228e75e8e64661289b65fbed393c599130 +size 1726873 diff --git a/video/35WwZhkush_39026145.mp4 b/video/35WwZhkush_39026145.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6da16c612f41ee7f3a586eaa32e14300ac6134a1 --- /dev/null +++ b/video/35WwZhkush_39026145.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0654b2ea61870e11818aa5b3b640a3484fe0ae43907fda1bad28b6391ba8e662 +size 3052039 diff --git a/video/36L7W3ri4U_39018488.mp4 b/video/36L7W3ri4U_39018488.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a548897b037379bb74338edd918d391b09e544ec --- /dev/null +++ b/video/36L7W3ri4U_39018488.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a6f14ed2cac3637f1ebd2ee059f244d9e248e1b005de7c897504585907af5e98 +size 1875696 diff --git a/video/36tMV15dPO_39024888.mp4 b/video/36tMV15dPO_39024888.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ad82e66c97adde025ab3729cf91249bcf754f117 --- /dev/null +++ b/video/36tMV15dPO_39024888.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83af1250bb8a77a0ed4258c09bd6d3b6ab34834f28bae754bedf264fdc832974 +size 3094757 diff --git a/video/37CyA1K0vV_39025343.mp4 b/video/37CyA1K0vV_39025343.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2c8da78ff7739881c6a950ad2353159e7ee3f261 --- /dev/null +++ b/video/37CyA1K0vV_39025343.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:b7230a1dd1ec1561a1acdce3786fdac1cc6681de40968647315191fa30e75911 +size 2553783 diff --git a/video/38UFpdt3Tr_39028653.mp4 b/video/38UFpdt3Tr_39028653.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..316c3613755808b7dae8c44c09cd9ed9cd8f4b0b --- /dev/null +++ b/video/38UFpdt3Tr_39028653.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:669ccf0a477678bfa4338fd31ec1547396504f297fb95ee2212c27df1a1d373f +size 2580741 diff --git a/video/3ADBiWNUBb_39028449.mp4 b/video/3ADBiWNUBb_39028449.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6c33b265eb1f4a1577de16297dac30d771b6eac7 --- /dev/null +++ b/video/3ADBiWNUBb_39028449.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b264710ae4db6d52bc83532e672e61861696d0225b8ed210a186cece11fce44d +size 2566806 diff --git a/video/3BNPUDvqMt_39026717.mp4 b/video/3BNPUDvqMt_39026717.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1cdf1011eb4b55ddbb27f2567700c55c096cc705 --- /dev/null +++ b/video/3BNPUDvqMt_39026717.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:281f856ce6cce642fc4eac86ee308c539d3a92c0fc4a293109da28b35785e370 +size 2007783 diff --git a/video/3HpCVZV9it_39025921.mp4 b/video/3HpCVZV9it_39025921.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5e3fe318c820c343b36bcd1d009153a7a5235c1c --- /dev/null +++ b/video/3HpCVZV9it_39025921.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:56a3905e6926ea2b3579ab5e0e244f41f23ef72176c29dd1aa41d5a17713425f +size 1707002 diff --git a/video/3K3s9qxSn7_39018484.mp4 b/video/3K3s9qxSn7_39018484.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dc003970e098682c8edd95c1d224fae4370735dc --- /dev/null +++ b/video/3K3s9qxSn7_39018484.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:076877ad6ccb11a41f423b06c85e19f9379f296b703cd0457d0cccecdf4daba8 +size 1944297 diff --git a/video/3LZHatxUa9_39025144.mp4 b/video/3LZHatxUa9_39025144.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b811614cd4558d82c326bb5eaffdb96c6e9712a8 --- /dev/null +++ b/video/3LZHatxUa9_39025144.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5c5507cb6e8c48a3502f41d1798bb9a947265b65aa10bae61c4123490fda811 +size 7700 diff --git a/video/3O5YCEWETq_39025826.mp4 b/video/3O5YCEWETq_39025826.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0e3798cba223f2f77b6f8d9046e05cc0307403c3 --- /dev/null +++ b/video/3O5YCEWETq_39025826.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c765a667265cefe746f0e3ba53f84b40f6062fcc8e0b6f8039532a08365314c9 +size 2358457 diff --git a/video/3Odq2tGSpp_39027338.mp4 b/video/3Odq2tGSpp_39027338.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5e0edeca94fdd0f7fec5a3a65dd1c102cbcbdb73 --- /dev/null +++ b/video/3Odq2tGSpp_39027338.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:074cc4c80d34bb6d80e3317e5a6c98b0447f3ac86ea929f3b1068a4bfe5ebe50 +size 2165774 diff --git a/video/3QkzYBSWqL_39018480.mp4 b/video/3QkzYBSWqL_39018480.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fa1d829306ecb9a354066bc59176c0433c2c1594 --- /dev/null +++ b/video/3QkzYBSWqL_39018480.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:2a37b1bf09c3d2d9f0d1b9a0eb83793d7e4ad7b93d147e0750d9f1a0f0bd8ab8 +size 1462285 diff --git a/video/3ROGsTX3IR_39019190.mp4 b/video/3ROGsTX3IR_39019190.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..62662dac150d675571cab1197ebb12deea99a469 --- /dev/null +++ b/video/3ROGsTX3IR_39019190.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b47088eb4326808a7d103c4fa89fceb05a520d284729245355baaa68832b8037 +size 2405516 diff --git a/video/3RxcarQFRn_39026743.mp4 b/video/3RxcarQFRn_39026743.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..81b8ae8d753cf8e0f9becb02cd7acbc74aba5990 --- /dev/null +++ b/video/3RxcarQFRn_39026743.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f102e9d26e34fd99edb71440f705102070cd78e2243db03910d232eee813ee76 +size 1519207 diff --git a/video/3TO3TtnOFl_39018849.mp4 b/video/3TO3TtnOFl_39018849.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dd717989815f12e0e26d7c721c97181c7615888b --- /dev/null +++ b/video/3TO3TtnOFl_39018849.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d385f14dd58a65cb758a71db2115f7f94b7aa4f01bce50c387c21a9ca1073e31 +size 2883949 diff --git a/video/3Tzcot1LKb_39028698.mp4 b/video/3Tzcot1LKb_39028698.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6cb487e5c9043e7974bf18b8cffd5fa508dd318e --- /dev/null +++ b/video/3Tzcot1LKb_39028698.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:df6f11b0fb4c25bb664e72f7663d933399c939b9e6864d135ce808720af42eb8 +size 2052870 diff --git a/video/3UWuFoksGb_39019084.mp4 b/video/3UWuFoksGb_39019084.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6d5b6e0583cdb3ef2b2aa65db2b06d33e75c96b6 --- /dev/null +++ b/video/3UWuFoksGb_39019084.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:55babe6b9784e57b363d65117394bdf85c4edd868c12f925802366ec68c40d59 +size 2132168 diff --git a/video/3Vw7DQqq7U_39017300.mp4 b/video/3Vw7DQqq7U_39017300.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..65edba85547854871354a2ccc51b046641e68d39 --- /dev/null +++ b/video/3Vw7DQqq7U_39017300.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24369d27410fd38fb0bea9b719e44b5fe3bf636004ec4558565d30558242b06c +size 2207259 diff --git a/video/3XnBVK9sD6_39027218.mp4 b/video/3XnBVK9sD6_39027218.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1e58e5630cc3eb383e96fd3f6a7c31a94f796427 --- /dev/null +++ b/video/3XnBVK9sD6_39027218.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3271489b2025e0e721c8dfe1fafa874981a82330dbc8584298943f6624e59e69 +size 2761610 diff --git a/video/3YIyB82rjX_39026607.mp4 b/video/3YIyB82rjX_39026607.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..192166b9938f5ea0103b1e4a6d742e7939b2c15e --- /dev/null +++ b/video/3YIyB82rjX_39026607.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:549a2b5024fb17663e68d862f2addf078a2c8ad9c4609054feddfa248beba5e2 +size 2600390 diff --git a/video/3Z0LTDjIM0_39027222.mp4 b/video/3Z0LTDjIM0_39027222.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8d5608d5e3e59c5de469f8046bedf070cf723002 --- /dev/null +++ b/video/3Z0LTDjIM0_39027222.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cee8bb56ab68aff43d989e6b77e206a139863ef33b6e49fcb176cdcb15e11dc4 +size 
2800793 diff --git a/video/3ZAfFoAcUI_39026452.mp4 b/video/3ZAfFoAcUI_39026452.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4f52ba34a1348268238e11affe2da70bc107ea1a --- /dev/null +++ b/video/3ZAfFoAcUI_39026452.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:862c268a05afab869eb32782420e2dacac6a0ef43ae53050ce24db3d37aa2e39 +size 2429547 diff --git a/video/3ZqKxMHcAg_39017135.mp4 b/video/3ZqKxMHcAg_39017135.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..94bc39c047844100d00d16a8c83d3f6085128f2d --- /dev/null +++ b/video/3ZqKxMHcAg_39017135.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7edbeb7fb53809f421a6e056eee0d0c83c9c0b9455dbbef9f172ab05ba789c38 +size 2493504 diff --git a/video/3apt5AJ5QN_39028172.mp4 b/video/3apt5AJ5QN_39028172.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b1b383b4c2d3c84e8e08fb189905a3b96d376cf0 --- /dev/null +++ b/video/3apt5AJ5QN_39028172.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc3e82970f1ee9f7e90a6d5b3b7c5761aafb26471617d403fd082b73ca9212ec +size 2606085 diff --git a/video/3cL2XDyaEB_39024615.mp4 b/video/3cL2XDyaEB_39024615.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..994919efb046fa1311e5db771cfe307c3306d57c --- /dev/null +++ b/video/3cL2XDyaEB_39024615.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f252966d789258de1c85e05e30169e6ec59273f247fac56620dab1da888b62c +size 2220370 diff --git a/video/3cb6pF3Tvf_39025274.mp4 b/video/3cb6pF3Tvf_39025274.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..32d1823ecea04d89dd78bb95c63008b5cedd598a --- /dev/null +++ b/video/3cb6pF3Tvf_39025274.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3eaaf2d9aa0d8321ec524dfcfa934f9497030bd8e17dbfa2adcc024d50fc2484 +size 63129 diff --git a/video/3csuL7TVpV_39025376.mp4 b/video/3csuL7TVpV_39025376.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2f51a01f355e2c8c2dd953e8a2b98eb65c75988a --- /dev/null +++ b/video/3csuL7TVpV_39025376.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d36d33caaa827a5b038fbddcb2a803622f1c2bbf44b041cc81414466a5b2a34 +size 2602349 diff --git a/video/3dn1hINA6o_39025717.mp4 b/video/3dn1hINA6o_39025717.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2fc04855f0ba3562637aa22a9596ea873cd6f59c --- /dev/null +++ b/video/3dn1hINA6o_39025717.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b4e78d73e79ef98205b44350be31982c84c7b18fea91c85ca48297e85fe3019 +size 2572192 diff --git a/video/3f5PALef5B_39017130.mp4 b/video/3f5PALef5B_39017130.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..06e9e70f7888dd1040a50d3dd1e9d5bd994616fe --- /dev/null +++ b/video/3f5PALef5B_39017130.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7bf9abed57df5789843610b7fbf8972fa9792eef129eb1b5b25fe17bf0454ff8 +size 2642072 diff --git a/video/3f8i9GlBzu_39025789.mp4 b/video/3f8i9GlBzu_39025789.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..36c272df5698f4a9f350292431212fc9e5e7997e --- /dev/null +++ b/video/3f8i9GlBzu_39025789.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b8a1fd3edab8b598bece656544fc8322e3a350a70a4e70c9d80c12044fa91c9 +size 2490970 diff --git a/video/3hcn0UxP72_39028010.mp4 b/video/3hcn0UxP72_39028010.mp4 
new file mode 100644 index 0000000000000000000000000000000000000000..e5ba98c67800422e9377790714cf8a5400f1fad4 --- /dev/null +++ b/video/3hcn0UxP72_39028010.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68de6079d038d83f7e80406bc9a3e5e8d505f7221dd646bf7a005360274524bc +size 2411532 diff --git a/video/3ie8NWA1El_39028662.mp4 b/video/3ie8NWA1El_39028662.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e7f0131d46efaa8de5bee94d9a1cc3aca191077b --- /dev/null +++ b/video/3ie8NWA1El_39028662.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d8a854fb4f23d23bb334b61158b48b93a07c885f0224a55aeba454cad1d5c7d +size 1838796 diff --git a/video/3j2nasmKkP_39026508.mp4 b/video/3j2nasmKkP_39026508.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8e9f183acdcba2220664f2bc3ed6421d7b1cd78c --- /dev/null +++ b/video/3j2nasmKkP_39026508.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6add2bec9f3e8ed893fb32f776b6b518e7b6a3506fa1e66737f581d198262b78 +size 2743036 diff --git a/video/3l2HnZXNou_39024488.mp4 b/video/3l2HnZXNou_39024488.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..784eb5b0b0b3491eb1c34b655d96383c5511e72e --- /dev/null +++ b/video/3l2HnZXNou_39024488.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5ecadba7dec32ad44efe28b3b5a498b268edd495fc35309a625ec841df9f385 +size 2252982 diff --git a/video/3lQgEPRxeu_39028522.mp4 b/video/3lQgEPRxeu_39028522.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d2a56312160f22b56efa7e7b258fb69189126b99 --- /dev/null +++ b/video/3lQgEPRxeu_39028522.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9bbf5fbcf773eaa05962b9cc0af48c3de48c0f5c14e752f61aedb8b01785bd2a +size 3000639 diff --git a/video/3lic0JgPRZ_39026323.mp4 b/video/3lic0JgPRZ_39026323.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c62baf813fc0bdfc4e1b0dd99b025b7c9bad4996 --- /dev/null +++ b/video/3lic0JgPRZ_39026323.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf899c1c293be11942c565703a8edd3956ba2b311b2d9bce99083f9dc6b7ad8f +size 2905212 diff --git a/video/3mCr7ZNdSw_39025500.mp4 b/video/3mCr7ZNdSw_39025500.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..33634b3f8b9db186fd54667865368c1bf4d32fac --- /dev/null +++ b/video/3mCr7ZNdSw_39025500.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:55aaad56a75355f8ff835fb85bf5b17097441670d4f522d7b15a775fca6a0d2a +size 1787070 diff --git a/video/3mnWvUZIXt_39018649.mp4 b/video/3mnWvUZIXt_39018649.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e04830899aba6b45fe5e03874531dafc65a66b7b --- /dev/null +++ b/video/3mnWvUZIXt_39018649.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a4401dbe1f37f43518ad3a76a463cfd563195d40cf76bac918bdd812e29bcaa +size 2963922 diff --git a/video/3pf2hEdu8B_39019110.mp4 b/video/3pf2hEdu8B_39019110.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8ac6e376e6c65e91cf5c0ffa0fcc0ea149457bc3 --- /dev/null +++ b/video/3pf2hEdu8B_39019110.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f1f24c24df2ba4d88df9b06c14807e8d64bb4c6315e726b4fa5f3c9edbf7ff84 +size 2663927 diff --git a/video/3qo1pJHabg_39018470.mp4 b/video/3qo1pJHabg_39018470.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..08d03d03f3c955af6ff95fb3dc20e8a5aabe3504 --- /dev/null +++ b/video/3qo1pJHabg_39018470.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8b58eba4ba40646bd9ea9089fe58cb1fab5a413e8d1897d6533e32da3925b18 +size 1718345 diff --git a/video/3tM1l5tSbv_39018634.mp4 b/video/3tM1l5tSbv_39018634.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6ab8bb599d0f36b603edb9073b124549001501ec --- /dev/null +++ b/video/3tM1l5tSbv_39018634.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d12df8e1fc2e515943a98e88e2368af7a02dfce9897b3bab229cbaf2b2c9b209 +size 2531050 diff --git a/video/3uQtNWNTwz_39025904.mp4 b/video/3uQtNWNTwz_39025904.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d012e3c376f687c5a4b2de0443404a1a3d4cf8bc --- /dev/null +++ b/video/3uQtNWNTwz_39025904.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c6fc017155a34adbbac4045989416c22beb48e47d1ddebb2124400634011de2 +size 3017124 diff --git a/video/3vHfwL2stG_39028004.mp4 b/video/3vHfwL2stG_39028004.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f4fc08ccb9e42b79df85cf2d48d3b980d55ef6ac --- /dev/null +++ b/video/3vHfwL2stG_39028004.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4365cd5ab2814a951265ecc6720692405d4c537d5c9ab1a77c52f9bcaa6843e4 +size 7755 diff --git a/video/3xHCaDdYcc_39028330.mp4 b/video/3xHCaDdYcc_39028330.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ed4e1350326c8254640987ac0eaeb543be442d49 --- /dev/null +++ b/video/3xHCaDdYcc_39028330.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:795b19b52d078061f9fd27e22cdd3f7b39f2415d92d2d1bff0415226ad746b14 +size 864703 diff --git a/video/3z60EWfh1p_39018464.mp4 b/video/3z60EWfh1p_39018464.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a81daf551c4ab83478b279a057d506062e607ba2 --- /dev/null +++ b/video/3z60EWfh1p_39018464.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f54268a4f4a760d9406aa9b54433ee5c5d1c773d617d8bec02e3174fc48731c +size 2227728 diff --git a/video/3zKtaqxLhW_39018463.mp4 b/video/3zKtaqxLhW_39018463.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0d63b77662b1b2ed139b1dfcf39103f82bc9fad3 --- /dev/null +++ b/video/3zKtaqxLhW_39018463.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f74f2848140732a1926997059bff6fa329988627b2b717c68ff45f8949a60b77 +size 2242409 diff --git a/video/3zQo5oUvia_39018462.mp4 b/video/3zQo5oUvia_39018462.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0a06a1f3f603a64dc14d35638d1350207683d4a0 --- /dev/null +++ b/video/3zQo5oUvia_39018462.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:169205af3f8ea0944083a58ae67e1a2a92490530bdac061b810a086bf2e2beb8 +size 2902984 diff --git a/video/41lovPOCo5_39026130.mp4 b/video/41lovPOCo5_39026130.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a82355dbdb590a2ce8179184f3f578478fecb7ea --- /dev/null +++ b/video/41lovPOCo5_39026130.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:90c628ae8bab3672a9db1daeed093e0d6d3df7978a00ae6394948ac209e2bf96 +size 1990877 diff --git a/video/47loYmzxep_39027313.mp4 b/video/47loYmzxep_39027313.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..236e295fd175abde7727e2400c2457f6ad3a36b0 
--- /dev/null +++ b/video/47loYmzxep_39027313.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:42da641a50004c7ccf9ac701cae655bd2e6670dd1048246926f801d91e0f54da +size 2073984 diff --git a/video/483IPG0HWL_39026796.mp4 b/video/483IPG0HWL_39026796.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7b26eb4280884c078ddc7ea6ea78a451eff18abd --- /dev/null +++ b/video/483IPG0HWL_39026796.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a69510c576b52922f753a8bd9d761728dc54d43ef980fd4dcb254fef5098d106 +size 2652993 diff --git a/video/488A64eOf6_39019195.mp4 b/video/488A64eOf6_39019195.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1527a04f74b0d7b1937c8bdf103b5520966f22ab --- /dev/null +++ b/video/488A64eOf6_39019195.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:396ace479ed289b5918c757f5fed6c9c18a07bea62081d460c0c0abb0d3a3396 +size 2430717 diff --git a/video/49z97Y9lMq_39018712.mp4 b/video/49z97Y9lMq_39018712.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0a3333d0e39afbe5d780408a9c911a35eb632043 --- /dev/null +++ b/video/49z97Y9lMq_39018712.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1d85ff911cc4651d1f844649c43fcfc39d5f1dc79d6e5f7b3873644bba7fe94 +size 2316256 diff --git a/video/4A5IQEjG8c_39028737.mp4 b/video/4A5IQEjG8c_39028737.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d10af7d8a817956122e6634bbc6ceb0930152397 --- /dev/null +++ b/video/4A5IQEjG8c_39028737.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7a80d788deb01778423f161ce9b150e5839b97647c7b932bc8af9391108164e +size 2592898 diff --git a/video/4DA5vaPHFb_39026388.mp4 b/video/4DA5vaPHFb_39026388.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dd18c844cd6e69f0c3ef6a51f2b7492011965a07 --- /dev/null +++ b/video/4DA5vaPHFb_39026388.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99eabc11b1d27dd97b8aea915b9b6361159d9e75f376f8f155b30c0053de024d +size 2097552 diff --git a/video/4DHoSjET4R_39025328.mp4 b/video/4DHoSjET4R_39025328.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..86cb45461e166c96649c05c08cf663dcb36a5a4c --- /dev/null +++ b/video/4DHoSjET4R_39025328.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2250c7b403335d97d38c93e1e944fa2012661330e77eb6cd297c2fb64e8eb01b +size 1771481 diff --git a/video/4DcpFagQ9e_39026227.mp4 b/video/4DcpFagQ9e_39026227.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..be9339825fdc6aa071f3e77572d8e453675f56c6 --- /dev/null +++ b/video/4DcpFagQ9e_39026227.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77a504dc72e8ca5b7c1340e9956611921646a9400117c38ad7d4d69e7d87bbde +size 2064185 diff --git a/video/4IT2pgc9v6_39017683.mp4 b/video/4IT2pgc9v6_39017683.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..94a0fe37fecf7da983a518d9951bcaf01a4b2f53 --- /dev/null +++ b/video/4IT2pgc9v6_39017683.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8c49c4a8a9db5be71f9de473f57d9ff2b6e3f7568b1302aa8eddd0eab7deddfe +size 2471437 diff --git a/video/4KZpDGD4Nh_39018456.mp4 b/video/4KZpDGD4Nh_39018456.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3998dde398f3d9697bb4125eef71430ceac0e6d2 --- /dev/null +++ b/video/4KZpDGD4Nh_39018456.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:8f1fb604b854493557118c613fb8d4f3edd1c967757569825badc2e43948c4fc +size 2730200 diff --git a/video/4KqkizXgXU_39018912.mp4 b/video/4KqkizXgXU_39018912.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..88f5d9fc98b4f53fb2342170a20a607ad73a9788 --- /dev/null +++ b/video/4KqkizXgXU_39018912.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4730e2af3f838cabfc248f2e917e1611c59344192351aeb84931c5aa18f3c153 +size 2928571 diff --git a/video/4M9f8VMt2C_39026658.mp4 b/video/4M9f8VMt2C_39026658.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..aae2dfee64871d9435042ef025fc348adb880f03 --- /dev/null +++ b/video/4M9f8VMt2C_39026658.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee0e2b5e7c95d738f3dc31adbc8d71c1ecdf33734c692e6cc3298ccf00df3aee +size 2193041 diff --git a/video/4N97bz1sP6_39018455.mp4 b/video/4N97bz1sP6_39018455.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..449a7c03f580a5e61d90e6530f39dc4bced21a37 --- /dev/null +++ b/video/4N97bz1sP6_39018455.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:78b52d87190ea65ef61049d2c2efe8f041aadd728bfc80e15ce70aa72dbb39d1 +size 2192245 diff --git a/video/4NGlu45uyt_39025803.mp4 b/video/4NGlu45uyt_39025803.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2ba0be00897f0bbb19f67b8d333b38b0d37c0b67 --- /dev/null +++ b/video/4NGlu45uyt_39025803.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7804daa88c1b572a6edbdd008dd066914388e50457346e196502c9a3dfdaea4c +size 2904178 diff --git a/video/4NGrHrhJPx_39028763.mp4 b/video/4NGrHrhJPx_39028763.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1f17f31d4e9b0c4ecb570053dbb591d5a6177b5a --- /dev/null +++ b/video/4NGrHrhJPx_39028763.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a96339129e8714935cc1e0b92cb938adfc213f259991ef780ba6dfd908959de3 +size 2363256 diff --git a/video/4NJBV6Wp0h_39026353.mp4 b/video/4NJBV6Wp0h_39026353.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9e3c963a775675cf78d3d3378384912a3f75bf3b --- /dev/null +++ b/video/4NJBV6Wp0h_39026353.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7233eb7778714b0d30a872adf79d29ffd26e148bbd5189e153a72812dd051ac +size 917781 diff --git a/video/4NQ24cHnOi_39025798.mp4 b/video/4NQ24cHnOi_39025798.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..73b937fd6fd277ae9f7ebf1b83709ad2decd294d --- /dev/null +++ b/video/4NQ24cHnOi_39025798.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:139e4f444f9aeae46be59518cfd912015ca879561b01b1df8ddef597cccfde8a +size 2390452 diff --git a/video/4OJdZhcwBb_39025519.mp4 b/video/4OJdZhcwBb_39025519.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..618d02ddb7805321a873f53977c34e3c34040830 --- /dev/null +++ b/video/4OJdZhcwBb_39025519.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f7f20177c68d0998811700df276394877b54d1d892f8ad5372c4f5c41faeba1e +size 1049515 diff --git a/video/4SAR7IRqmB_39024667.mp4 b/video/4SAR7IRqmB_39024667.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7ca16b7a03c85397c41dcd491192c26f5a9013e4 --- /dev/null +++ b/video/4SAR7IRqmB_39024667.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:fd05a42023afba78f8c1aabb2373a699c11766c5c72deff0ab22eb8ad9d859e3 +size 2490029 diff --git a/video/4TENzBftZR_39027312.mp4 b/video/4TENzBftZR_39027312.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fac26542b0447f45faf17c13c1d7511005e79eb0 --- /dev/null +++ b/video/4TENzBftZR_39027312.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:92777a630942c74c15193e0b8ac6015952e7851fffce13d5e1cfabfad05c03ed +size 2692853 diff --git a/video/4TlUE0ufiz_39028552.mp4 b/video/4TlUE0ufiz_39028552.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dba9a52ae9a5c4701730625af2dba0c9ed017913 --- /dev/null +++ b/video/4TlUE0ufiz_39028552.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:193825edd106d706acad15f1da7201075c787c47eb1ad0220a8cfb25c4d714f7 +size 3043796 diff --git a/video/4U18ZoRXTD_39027406.mp4 b/video/4U18ZoRXTD_39027406.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e72ed2c69001b1e2d0aa20b81eda60952b1d54db --- /dev/null +++ b/video/4U18ZoRXTD_39027406.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f934fa2b4e767fa7164707799f66dc3b76f8302ad6e81b5fdbf8486f8ae70e2b +size 2171415 diff --git a/video/4VIgNuQ1pY_39018449.mp4 b/video/4VIgNuQ1pY_39018449.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2265c2ce56acb75a9fb14a4eb168b3ce2a7d4efe --- /dev/null +++ b/video/4VIgNuQ1pY_39018449.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:179680d27a311f1c9b4eef256daf82d7957b12bad93ad6561c75e7315d76c234 +size 2177868 diff --git a/video/4ZH48aGD60_39027011.mp4 b/video/4ZH48aGD60_39027011.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..592ca2e9d029e387bc544b1d130ed9395a430ec6 --- /dev/null +++ b/video/4ZH48aGD60_39027011.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6200d441567b6face9567840365c0a90ec89d37f3e5da41c5d8ce5adacac048d +size 2231691 diff --git a/video/4Zt7S0B0Jp_39028274.mp4 b/video/4Zt7S0B0Jp_39028274.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..79e7470e7fcd17cad283943c414b124120a28d4d --- /dev/null +++ b/video/4Zt7S0B0Jp_39028274.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:377c7b021dd55836688d4e5f372355e74bf127d42089f2222603aad0152f8758 +size 2414086 diff --git a/video/4Zz5UELkIt_39018445.mp4 b/video/4Zz5UELkIt_39018445.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..14dd37b73b506fa0508eb833a3e164149bde28a5 --- /dev/null +++ b/video/4Zz5UELkIt_39018445.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3733d04aecfa639ac691a06adf0425f3ac89e9be6ffbb7815896eb64fa3610c9 +size 2273731 diff --git a/video/4bJufOS6No_39028861.mp4 b/video/4bJufOS6No_39028861.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ae471db952487487045be72332ef989485da5766 --- /dev/null +++ b/video/4bJufOS6No_39028861.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf5028879bba759262013cacfca54c00b376a1267c3c5f4010c75909ae1d647d +size 2332042 diff --git a/video/4cU9ZvOkBz_39028548.mp4 b/video/4cU9ZvOkBz_39028548.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..973138b344d396b6b5dcbe5206a11b3974aef28d --- /dev/null +++ b/video/4cU9ZvOkBz_39028548.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:34bf1f808aed992531edb5e3f4fc95d642f80b0422fafbc572fa6568030a1582 +size 
2750931 diff --git a/video/4eJDMjYZZG_39017202.mp4 b/video/4eJDMjYZZG_39017202.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..80bc469ad5a7ff67013f12974a9e68079c2d4e09 --- /dev/null +++ b/video/4eJDMjYZZG_39017202.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8891e4596376438f9420c5852073263cc85861e8541917521e5b9e9548611a26 +size 2236723 diff --git a/video/4h1apFjO99_39018442.mp4 b/video/4h1apFjO99_39018442.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9cebb851f7ef3ea45ed06f7d6052ec9b43045a7a --- /dev/null +++ b/video/4h1apFjO99_39018442.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54cf5fff7f25ad0e7c4163bf36f584f2f92747e6d0f05d1a59bea92bd49a38ac +size 2573460 diff --git a/video/4iPw1klFWa_39018441.mp4 b/video/4iPw1klFWa_39018441.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..edf57f02385794d074b2e30b2cb3fa131f3be69e --- /dev/null +++ b/video/4iPw1klFWa_39018441.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e0e16f87174d81afd31254d6b8a8ed3f29b673851879c22554f1b53a5e95b76 +size 2523216 diff --git a/video/4kLVvIh8cp_39018440.mp4 b/video/4kLVvIh8cp_39018440.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9451998e84b48138c52b8089d8274ced3297c543 --- /dev/null +++ b/video/4kLVvIh8cp_39018440.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9682f042dfb46bbb3d681ac70a894737f7017e4e673b37dbc077d93e50231173 +size 1665141 diff --git a/video/4lGPSbGe11_39025699.mp4 b/video/4lGPSbGe11_39025699.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..61e05dca0a42c5141bb34b9544aa27f20a4e825b --- /dev/null +++ b/video/4lGPSbGe11_39025699.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fdbe0b94dc5a765f0cac652f92adc18ae494765c5f5ad57ce216f57e9f174799 +size 2917913 diff --git a/video/4oAt5L4lYe_39027193.mp4 b/video/4oAt5L4lYe_39027193.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7744daa6300711af4d0e742d0e1b4a21fd185d64 --- /dev/null +++ b/video/4oAt5L4lYe_39027193.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a8ec7d4eccadac1237a721c7116e8d4e0a833a96cbfa49d73b0d7de91b3ccc8 +size 2657079 diff --git a/video/4r2ybzJnmN_39018439.mp4 b/video/4r2ybzJnmN_39018439.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..53ec308321b229a515b2d0e8b4d4ec8bf806f5bd --- /dev/null +++ b/video/4r2ybzJnmN_39018439.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:57d6e908697f731dc8e1986771a8a77f91fe96326daeaf50669c2492d7629353 +size 2679809 diff --git a/video/4syq5cgwA2_39027818.mp4 b/video/4syq5cgwA2_39027818.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..027105fb138ecfbaca2487109d697c00a9595734 --- /dev/null +++ b/video/4syq5cgwA2_39027818.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30f7217fdc93fb8d97c0f2b0e79da619f6468f263abcf91b6dd7e338776767f3 +size 3100888 diff --git a/video/4t3ox9hj3z_39027447.mp4 b/video/4t3ox9hj3z_39027447.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..821480265878fb3443f2542599651fd314b85d7e --- /dev/null +++ b/video/4t3ox9hj3z_39027447.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:63678f0c22349e5114a43cbfad26173fe2d7c423c57c21a16d6146ea4f16094c +size 2732676 diff --git a/video/4vp0edVY4o_39026151.mp4 
b/video/4vp0edVY4o_39026151.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..578fcceb53f7f73c1f2ec2e73c11129b9da702b4 --- /dev/null +++ b/video/4vp0edVY4o_39026151.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b37d47ef8ec427a2bee1770a86cc961e6b5c2d62f0d53c128026d7b5cfb14b8b +size 2945202 diff --git a/video/4ztP4PujOG_39024856.mp4 b/video/4ztP4PujOG_39024856.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e0993a1ac5e33d916093c8a8db554f31c6ec7c3b --- /dev/null +++ b/video/4ztP4PujOG_39024856.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c555416a4efa92b5970ee46e58cc6daf2cf5bb1fe062a1ff8acccb21bb154ec8 +size 2563418 diff --git a/video/50nEnmVLRb_39026005.mp4 b/video/50nEnmVLRb_39026005.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9f709d12e6b11e5b78cedc5408a2ce7589da48e4 --- /dev/null +++ b/video/50nEnmVLRb_39026005.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4846b96ee8bd925f628cf6d17c34ddf6c040fc29e6a4fd0d4b91228b8a7539d9 +size 2283101 diff --git a/video/51HQpkQy3t_39024739.mp4 b/video/51HQpkQy3t_39024739.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..35dc7fa327726f1f4b0d3d0731b75660006fb569 --- /dev/null +++ b/video/51HQpkQy3t_39024739.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f16dd24b54d883d2fe0dc78da9f9f59347d3e11ecafff059e8bdd91b8b1050a +size 2238421 diff --git a/video/52r4XJYzjg_39028387.mp4 b/video/52r4XJYzjg_39028387.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..41d63908500d008f2322e86e4b54c301b66e9397 --- /dev/null +++ b/video/52r4XJYzjg_39028387.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3174431d16cddae60d99c5493c29a37a55c5b5d4e81454ac2ab860fa691588f9 +size 3058584 diff --git a/video/567BjxgaTp_39019208.mp4 b/video/567BjxgaTp_39019208.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..33378236925a2652c25f68ab07e76fe2b0d0a9cf --- /dev/null +++ b/video/567BjxgaTp_39019208.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:38a5c88c6813d7729ba435fa9130e5ce5548e20281feec531346631cefdf7f0e +size 1297622 diff --git a/video/5AeLrXb9sQ_39027161.mp4 b/video/5AeLrXb9sQ_39027161.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5ba7dbae7827034708342ea1348b3668d416d83d --- /dev/null +++ b/video/5AeLrXb9sQ_39027161.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:079af20ee745e98cab9a046283926ec03304f5489620680960fcbb0bc029539e +size 2981273 diff --git a/video/5BCFlnfE1g_39018432.mp4 b/video/5BCFlnfE1g_39018432.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..268633feb8f4c6a37f043e236908e9e9c7865f69 --- /dev/null +++ b/video/5BCFlnfE1g_39018432.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cbe3b997615101d0bb6400aff0d8f7ed0a3e65898fc1e798b97a4842fa382d91 +size 2450150 diff --git a/video/5BXXoJh0Vr_39027317.mp4 b/video/5BXXoJh0Vr_39027317.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c39ba529f54a4a4afb501d7a83fddd3f7fe0c951 --- /dev/null +++ b/video/5BXXoJh0Vr_39027317.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:490aa8a1d0bf10999174e27b9abde454ace5d607f65db7d6d98a32fd3b2a26b0 +size 3218164 diff --git a/video/5DJBBACqim_39027028.mp4 b/video/5DJBBACqim_39027028.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..558d2d7f688b73b1b663987b6592e680cf1d672b --- /dev/null +++ b/video/5DJBBACqim_39027028.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61443ea68bdec8ecf7e6311621f1a9bf0ca674f3c036acd97393ce9e4e80f6e7 +size 2236336 diff --git a/video/5Dwqu5urzs_39017446.mp4 b/video/5Dwqu5urzs_39017446.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8cc01d03151a710055252493cc7225fa96479b7d --- /dev/null +++ b/video/5Dwqu5urzs_39017446.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3f53010be9317cc2a0b5c93d4e9858a41555d232c5063eadabc4fbc88d9710f +size 2063100 diff --git a/video/5EniAcsO7f_39018428.mp4 b/video/5EniAcsO7f_39018428.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2209f44e65729e26cbdc9fd0f3bd7c3bae44588c --- /dev/null +++ b/video/5EniAcsO7f_39018428.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2a1f2daaa9cb3f09ecb95fce8a457e0ffa2d6c046d4c63e2d1f0ffd18cd57bb8 +size 2263473 diff --git a/video/5FATPIlWUJ_39027578.mp4 b/video/5FATPIlWUJ_39027578.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2b2865d5b0b0b505ef5690e354598818c6a5488f --- /dev/null +++ b/video/5FATPIlWUJ_39027578.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8616e48a8c65ce4c6d7e4ed53068e0591de3cc9d4b80c6b8c1b227538b4533e7 +size 2699664 diff --git a/video/5FHzrRGOKR_39025749.mp4 b/video/5FHzrRGOKR_39025749.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a3d517a5d8628363786518055168b604c21325fc --- /dev/null +++ b/video/5FHzrRGOKR_39025749.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f675f4c85c298109fe1bc2021e78d9cf8d8479320ceca84ca74c6f0a97e7f00d +size 2639457 diff --git a/video/5GCgNFZSyo_39025291.mp4 b/video/5GCgNFZSyo_39025291.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6bd7a508b74b074ec8631fa0ca9c2e22e455c993 --- /dev/null +++ b/video/5GCgNFZSyo_39025291.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9fde1a54e8b760cda6e1c5466b1c8d56d315ad6271e5b57a478d137c55f80646 +size 2628113 diff --git a/video/5H4l37IsZ8_39026562.mp4 b/video/5H4l37IsZ8_39026562.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..831f6b3dec725c9c631231520bef359240434d80 --- /dev/null +++ b/video/5H4l37IsZ8_39026562.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b64566ef59130a710c95e953a46422ac1f26c0c01153f8dd8372d1f55524db4e +size 2788946 diff --git a/video/5HQhYiGnYb_39025931.mp4 b/video/5HQhYiGnYb_39025931.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..23e8f91f83692b2d349be783dfc874dd5c925c15 --- /dev/null +++ b/video/5HQhYiGnYb_39025931.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9dca00a40c362c9976152cf63341cd35dcfff3eaf6db6d8a266adaf47b125ba +size 2546293 diff --git a/video/5IFeCNA7zR_39027126.mp4 b/video/5IFeCNA7zR_39027126.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..084f96cf98af49c4400ae759ec237e68e0d1db93 --- /dev/null +++ b/video/5IFeCNA7zR_39027126.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50dd8dde3465f3706e5650f84ac79d0ac8278f6cc7afc6b99fb63c89e3c27d90 +size 3062150 diff --git a/video/5K3VeoBnqc_39026203.mp4 b/video/5K3VeoBnqc_39026203.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..baafe988c2da29701faffe809f76d41178eccc74 --- /dev/null +++ b/video/5K3VeoBnqc_39026203.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67a31ca886db833288ac15834baecbabd2e9ebb676eeebcc3c0bff1a598fd6f5 +size 2415248 diff --git a/video/5RielfrDkP_39018756.mp4 b/video/5RielfrDkP_39018756.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ca29ea11aca54c946ac10a780b6aacc1c42d498b --- /dev/null +++ b/video/5RielfrDkP_39018756.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:321e41b884a4e231086d5865830d1505f309b85650769c9c0fc77f76aa22867d +size 1265499 diff --git a/video/5SUP6vUVkP_39024414.mp4 b/video/5SUP6vUVkP_39024414.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..249c126c30ab31c741c0d000914f933060c1659c --- /dev/null +++ b/video/5SUP6vUVkP_39024414.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:364bb9ecb344672d6f1cd3083e89b4e1bc3878fb5b580a78943bd7ee8bc2327d +size 3044438 diff --git a/video/5VE1iLeYOz_39026841.mp4 b/video/5VE1iLeYOz_39026841.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8a9827f24eb68874cda819f6af7f4aeeb6d1ecbb --- /dev/null +++ b/video/5VE1iLeYOz_39026841.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:043ecfee5f935b7d22a43fd4eb85d3ad7e16fbf83f9d35c7ad023fae13ff610d +size 2603852 diff --git a/video/5WoYFypPv0_39027461.mp4 b/video/5WoYFypPv0_39027461.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2b03c4dbbdfc715faec1a18c2ef242becf08c25b --- /dev/null +++ b/video/5WoYFypPv0_39027461.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a19a47fb9c0127df05f6e05fd2b6734b4fbe53ecf5b0dbfdd7632bc67d3f377d +size 1990902 diff --git a/video/5atraF1tbg_39028256.mp4 b/video/5atraF1tbg_39028256.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..30fe3caed1c32b149b001c5737136ca58aedbba7 --- /dev/null +++ b/video/5atraF1tbg_39028256.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68343417e3d80f4399a78bbef844f771f2c1dfbac5d4b4a8d5257815400aebf3 +size 2391536 diff --git a/video/5cIRdGM1uG_39028591.mp4 b/video/5cIRdGM1uG_39028591.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dbe354fc2bf0fd29ec63b9637cbb00828b46a1c4 --- /dev/null +++ b/video/5cIRdGM1uG_39028591.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:573cc3d07b8bd633f5840f1735c9e36b36a1adb5a810725840770528316ac89c +size 2530435 diff --git a/video/5d2eScRiRC_39027135.mp4 b/video/5d2eScRiRC_39027135.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..51cc087a34079e90779477905fa5ceb30f16d9cb --- /dev/null +++ b/video/5d2eScRiRC_39027135.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:243f3f4a0090af60338360ba83b1c130d382c9b3006648a8e7045b5a734e1871 +size 2015656 diff --git a/video/5dlfiJIXoh_39018422.mp4 b/video/5dlfiJIXoh_39018422.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..50165246057c29fef4ddb38b4354bec1c94d760a --- /dev/null +++ b/video/5dlfiJIXoh_39018422.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e0ecf600c95134470e0b7158d99c2db28d6c614a98beca419867c5a887dc9361 +size 2497960 diff --git a/video/5fybcQZ0g4_39025070.mp4 b/video/5fybcQZ0g4_39025070.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..1456657e86fa30ccaa301656a7582c7496394277 --- /dev/null +++ b/video/5fybcQZ0g4_39025070.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e45e038f99122675c2ca25ea0d29e79833376ca77a97e2fa3dd50fa2b931583d +size 2151941 diff --git a/video/5h0qf7IBZZ_39018420.mp4 b/video/5h0qf7IBZZ_39018420.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..07d5c2b7be5eb061ebaef60d3e73fdddc8eb0636 --- /dev/null +++ b/video/5h0qf7IBZZ_39018420.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:635301f75e3a2a667c5fc30a7ec53c7cd5f70dca0cf95ca4ab8c28243cafc6e8 +size 2054320 diff --git a/video/5iENGLEJKG_39018419.mp4 b/video/5iENGLEJKG_39018419.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..15aae63823ad42288108100341917f8e30eee297 --- /dev/null +++ b/video/5iENGLEJKG_39018419.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3edb3f13ab15802985e89e49fa40ca4768c649e6245adc8e75af0fd5498a126f +size 2513074 diff --git a/video/5jRU8ufi8H_39025484.mp4 b/video/5jRU8ufi8H_39025484.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ef96b9affd729205a3b2c0f7418840bb983ba9c4 --- /dev/null +++ b/video/5jRU8ufi8H_39025484.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1eb0fba7db21bc6ee18c277dedb4821e9267556a3687896ced30c39527650060 +size 3098247 diff --git a/video/5jWsW08zUh_39017066.mp4 b/video/5jWsW08zUh_39017066.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6a7b27d2ed00ba89bf85ffb62f790bc549a7f982 --- /dev/null +++ b/video/5jWsW08zUh_39017066.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e04e2da01879f2fd6b62d117af6b3996ca90ad382886e6612cf46ad012fac4e +size 2691549 diff --git a/video/5jYFoldunM_39027972.mp4 b/video/5jYFoldunM_39027972.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c911aa9fef0cdd927f7f81eb5ce6dad8d308d7a1 --- /dev/null +++ b/video/5jYFoldunM_39027972.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:92344cf17a291309495d8e9b6b3362a0866de62204967951cc774ec928360ec8 +size 2909971 diff --git a/video/5l5bhYexYO_39026357.mp4 b/video/5l5bhYexYO_39026357.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..537b6a46dd61d23a07b8443c57819ddd2fa92b53 --- /dev/null +++ b/video/5l5bhYexYO_39026357.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a5f545fcf9b8a527881addc26d5551ad30bd57e7d17f8a129f18979bd7e4283 +size 2843108 diff --git a/video/5lLb7aXRN9_39027432.mp4 b/video/5lLb7aXRN9_39027432.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c3c089325ff89730a12e69b6bfebd2b99e45e146 --- /dev/null +++ b/video/5lLb7aXRN9_39027432.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:80253d286fdeafbd93bc76a4ae563bfa90baad817c8aba4260fcf0e32bf301d0 +size 2973290 diff --git a/video/5liV2xUdJL_39018417.mp4 b/video/5liV2xUdJL_39018417.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3401e00180b4db3d2959d767eb3736c783115262 --- /dev/null +++ b/video/5liV2xUdJL_39018417.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd0b55def7981122e4ab43bcdcea2a74c57a6e6994f202d98ddac747805a08cc +size 2245303 diff --git a/video/5nM2AHzqUj_39018416.mp4 b/video/5nM2AHzqUj_39018416.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..ea9ebbfa8fa350d0c08f6c970c18d0140f12f7d0 --- /dev/null +++ b/video/5nM2AHzqUj_39018416.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:df6e16e0fdcb6d572dda2c3281b8a1b82b0993d279d939d9eb460e859f90eabc +size 2127166 diff --git a/video/5o9G4XF1LI_39018415.mp4 b/video/5o9G4XF1LI_39018415.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..420fd0a6b890f28fa074dd9e84b78d90bc25d6d0 --- /dev/null +++ b/video/5o9G4XF1LI_39018415.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:288ef8357c1944ebec7dab0cd074768144d27c1e6005b4634a63581567072da8 +size 2688244 diff --git a/video/5pJfDlaSxV_39028374.mp4 b/video/5pJfDlaSxV_39028374.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..90602556a1cd066076c3c84de165d0059c417416 --- /dev/null +++ b/video/5pJfDlaSxV_39028374.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c77314fc7e9acf210014994092c8d291d9415d9df7a354b57374ac89b239b49d +size 2333914 diff --git a/video/5pnhGedG98_39028681.mp4 b/video/5pnhGedG98_39028681.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0101709b55ef0d7cfcfbe4ad7d3d84d26990db96 --- /dev/null +++ b/video/5pnhGedG98_39028681.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:de81da101f58443e2153026e52b9cfce24d9c15f5470d5cbb7d89294d4c28942 +size 2635375 diff --git a/video/5qPmQtfvhy_39024765.mp4 b/video/5qPmQtfvhy_39024765.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8404cd8a002323a9eeecce869e84c301443b5e20 --- /dev/null +++ b/video/5qPmQtfvhy_39024765.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:408711059b8797d889c49a30d29bb16ff1961d71c326c78c1ca3d2ea81c93da7 +size 747538 diff --git a/video/5sjxMwWmk8_39018413.mp4 b/video/5sjxMwWmk8_39018413.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6cfb5a969cf9ae3a8d7dfb2d4867ff87837288b0 --- /dev/null +++ b/video/5sjxMwWmk8_39018413.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:376e3845418a6b7a34dc3375f4f50796232b6eb9ffa26372ebd40d5df9f1e4cd +size 1851130 diff --git a/video/5sm8YDnWvC_39028103.mp4 b/video/5sm8YDnWvC_39028103.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7aa91cbd1e7a425b17942df6f0498633aa01d13d --- /dev/null +++ b/video/5sm8YDnWvC_39028103.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2329d02700fbdd936179d695b79c6e4a939b1cc97d620e60b338057ea4f52fc8 +size 2351633 diff --git a/video/5tIG2KZogL_39024422.mp4 b/video/5tIG2KZogL_39024422.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e6f8502d7e52c48bc0768443137de1b7ba89edd1 --- /dev/null +++ b/video/5tIG2KZogL_39024422.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3db7285c6bdb4e9a6700c86197f9719b594a85aee984b4fa5c26552cf116d855 +size 2711915 diff --git a/video/61YYSy078Z_39027805.mp4 b/video/61YYSy078Z_39027805.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6006da508f73893234682aadf6f2be9cb89c0d39 --- /dev/null +++ b/video/61YYSy078Z_39027805.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d2bedd11d0e5a8f36d11d65a76ac685e063dca5578eb63c947ac5d52271f34af +size 2823771 diff --git a/video/64V40K2fDv_39026628.mp4 b/video/64V40K2fDv_39026628.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..2698a891930c6bd26bb77a857a5f960f94be1ec3 --- /dev/null +++ b/video/64V40K2fDv_39026628.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d03ac61b622d8a01b2ca859c8f8cb043fc141839eb64d31b383d1f01a0bc05e2 +size 1357752 diff --git a/video/64kSvC4iPg_39018408.mp4 b/video/64kSvC4iPg_39018408.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5eb0dc36dcc792c704b33c41eea795e9f98f45f3 --- /dev/null +++ b/video/64kSvC4iPg_39018408.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b0279daee198d8991cde9a967e36566c43d0c706a38cb65be39e9398554fd48 +size 2587713 diff --git a/video/65UoJ0z7Kp_39025434.mp4 b/video/65UoJ0z7Kp_39025434.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c6d2d5865e6171cf8dcbfcd373457492563bc649 --- /dev/null +++ b/video/65UoJ0z7Kp_39025434.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:45cff9885e049fa74bb93a5f9b0fee26ad33f6ca267083c6529668561fac3ebe +size 2408175 diff --git a/video/6A29LUZhfv_39026342.mp4 b/video/6A29LUZhfv_39026342.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..86d5380c12b876042396e223e6fdeb29b8e3f4fa --- /dev/null +++ b/video/6A29LUZhfv_39026342.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fdbf4843d41130b66859d59cecc4f316c3debaace5ba2bcf98162481a120db4f +size 2080429 diff --git a/video/6AeIDnrTN2_39027525.mp4 b/video/6AeIDnrTN2_39027525.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..88ba47bb925f878e360ab9183df731315bb1b378 --- /dev/null +++ b/video/6AeIDnrTN2_39027525.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53c42f5ef6681575dc41295123365025c20e68808d769b444b15e5cc8a848a49 +size 2272275 diff --git a/video/6ArNmbMpKF_39025115.mp4 b/video/6ArNmbMpKF_39025115.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..922f416ce5e66eb34477d47f1a758cafb13daa9d --- /dev/null +++ b/video/6ArNmbMpKF_39025115.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b85e61007f6582cd325e52b7ca4a484789f22f3fda3559a46a5d59112f2d1dc +size 2443782 diff --git a/video/6CZ50WgfCG_39018405.mp4 b/video/6CZ50WgfCG_39018405.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..68e9df2b275073073f2ef17f6894fc7d3f61a642 --- /dev/null +++ b/video/6CZ50WgfCG_39018405.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d1c49b1eb8c6b1d34bfe5e3b0b6314d0f1104da158cc185e810ccb853f943ac +size 1836345 diff --git a/video/6HUJoD3wTj_39025480.mp4 b/video/6HUJoD3wTj_39025480.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..73d3bd69cb54690ac6789bbc55df191348487ca3 --- /dev/null +++ b/video/6HUJoD3wTj_39025480.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eeae0e1efe9814c6c4e14dfeae0aeb06200f6c0977cceeb848ba23e041ce26eb +size 2078174 diff --git a/video/6IjN7oxjXt_39018404.mp4 b/video/6IjN7oxjXt_39018404.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c1e30ab7d585e3fedc14095ed12a30610e5f1dbd --- /dev/null +++ b/video/6IjN7oxjXt_39018404.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c6d426c369650bde428bbc1acf5309dcdc75e2fddd8e8f102f075c4c9ec675e8 +size 2604456 diff --git a/video/6KDZHgrDhG_39025050.mp4 b/video/6KDZHgrDhG_39025050.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..f68a6e8b9d2b61dc355b9054e49e52cecd2eea46 --- /dev/null +++ b/video/6KDZHgrDhG_39025050.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:87e48d574e1eaddf8158e480a1293390e2cd3eca34052466468d3ca4c2d7b0d9 +size 3475278 diff --git a/video/6Kg26g1quR_39024991.mp4 b/video/6Kg26g1quR_39024991.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..19a690411bc439d8c7ce43894c699ddd61476a36 --- /dev/null +++ b/video/6Kg26g1quR_39024991.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:812934b39b4469ac22e9aae8b547c75214c0ac7d92ecf8778a1f2f56060a0f39 +size 2108267 diff --git a/video/6LVxO1C819_39026364.mp4 b/video/6LVxO1C819_39026364.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8e4efdb17379c9e2ee84cd72583521e44d9e5c89 --- /dev/null +++ b/video/6LVxO1C819_39026364.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cbf6e963ca1a422bedf6ff0153a0924b4a70fa98eac156f41f211f061f8d4e78 +size 2268029 diff --git a/video/6OK8Qy9yVu_39028666.mp4 b/video/6OK8Qy9yVu_39028666.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b9d2b1125050be2fdeb20f11c6427457e9a8fd16 --- /dev/null +++ b/video/6OK8Qy9yVu_39028666.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aedc4d66539788f016206847968ef959deb89ac6e5e6d44c5b1d366669e462ac +size 2377243 diff --git a/video/6SSzMq3WTn_39025688.mp4 b/video/6SSzMq3WTn_39025688.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0413ed8cafa5827416c6d69a72241913f5b318d9 --- /dev/null +++ b/video/6SSzMq3WTn_39025688.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:204f7320277a8c0dc1defefe7aa4d578307ac6917a24d0a5695367eec052ded4 +size 1923196 diff --git a/video/6VVgAgVfxW_39027173.mp4 b/video/6VVgAgVfxW_39027173.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ce985d5027646e3dd2a4249aa217ec323435bf9e --- /dev/null +++ b/video/6VVgAgVfxW_39027173.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:75ad9cf054748be3fb1c02273f5a4a418568130e874213b1487210bb073f9cec +size 2961356 diff --git a/video/6ZBHIEtdP4_39026198.mp4 b/video/6ZBHIEtdP4_39026198.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5be77d9c140e07a3a69250c4bdee149d2353c318 --- /dev/null +++ b/video/6ZBHIEtdP4_39026198.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06cdcbd79bc2681fc34db87007b7033bbaee6256b1417a7bbbcd8c3399360237 +size 3216402 diff --git a/video/6bcAD6g688_39018399.mp4 b/video/6bcAD6g688_39018399.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d20516502469ab050850f165a95a2ef26688f7a2 --- /dev/null +++ b/video/6bcAD6g688_39018399.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6b8dc1f938bcf7c11512cb69836aa314b30e3446ea3709b309be338010bb986 +size 2491068 diff --git a/video/6cWDg9t3z5_39028695.mp4 b/video/6cWDg9t3z5_39028695.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8641f15860b3cd38f8fa45ae619ee2939988c0e0 --- /dev/null +++ b/video/6cWDg9t3z5_39028695.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a93751f7e5faf818992ce321844847a9a5adc241f237a9cf53043ca845b6976 +size 2405560 diff --git a/video/6cdYMkxxNt_39025033.mp4 b/video/6cdYMkxxNt_39025033.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..b4f6c15c81510738586389c8bae1b98cf8e44770 --- /dev/null +++ b/video/6cdYMkxxNt_39025033.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0c360f036c78f2445443599b41124614d8a5a57ba5687d136606a130c7a31ad5 +size 2142021 diff --git a/video/6ejpSVIiIl_39028906.mp4 b/video/6ejpSVIiIl_39028906.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..70055cbd094d009fe34a631823ca445cd18d1590 --- /dev/null +++ b/video/6ejpSVIiIl_39028906.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ebd6de028801d605312a3442fe45b5ed5ceedf4efe534cdc214227af36aa82f1 +size 1859229 diff --git a/video/6eoGVqMiIj_39027890.mp4 b/video/6eoGVqMiIj_39027890.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f5d6ebaad3029b38a615a7218cd98463a21b7c5e --- /dev/null +++ b/video/6eoGVqMiIj_39027890.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b189991e5f063b735dabd849b1e20872285ea22e1fedbfe9ef299bd75202bf1 +size 2769252 diff --git a/video/6gMnj9oc6d_39026945.mp4 b/video/6gMnj9oc6d_39026945.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..49c72bebd636db7708a7730f69c18c54ce5461f0 --- /dev/null +++ b/video/6gMnj9oc6d_39026945.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da0fa6b7cbd733a72d28e364ee305c12835cd54b54871782ecc1a2d38a22f269 +size 2835128 diff --git a/video/6gzPSMUAz2_39028482.mp4 b/video/6gzPSMUAz2_39028482.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f282b648966c555735a8c5ea3489fad529108877 --- /dev/null +++ b/video/6gzPSMUAz2_39028482.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd44c9de55c699eca96c8e7b628251727d1e952ffa0e9a28ae8ba99e38b8080c +size 2791387 diff --git a/video/6hvtSLkKeZ_39018397.mp4 b/video/6hvtSLkKeZ_39018397.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..636c1d6b6cb992aa6fb7c5a40e260b8229d3b580 --- /dev/null +++ b/video/6hvtSLkKeZ_39018397.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2e89a8038e7647801865ed10319fe96012e01b6df7d93d26387b5a0a631363ae +size 2870406 diff --git a/video/6jOScqwdHU_39028068.mp4 b/video/6jOScqwdHU_39028068.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b54484bf8df98d536233fdfff4ebcb68e3a26d19 --- /dev/null +++ b/video/6jOScqwdHU_39028068.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:de4b0a4da16fec9aa689f3228006bc00a9e5b2c5972870e898681f3af2127bc3 +size 2235926 diff --git a/video/6lwKOvL3KN_39024577.mp4 b/video/6lwKOvL3KN_39024577.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..81b10b517633850612e42b5726867a3c051191e6 --- /dev/null +++ b/video/6lwKOvL3KN_39024577.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0ffb19d3019ed5a025662d678b9015736fc27650ed7b46ed004ab483d44c4947 +size 7776 diff --git a/video/6okaSfANzh_39018789.mp4 b/video/6okaSfANzh_39018789.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d96f439fce112405cf4d0d4cc8a96b0e44653b08 --- /dev/null +++ b/video/6okaSfANzh_39018789.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:422b214852234ec5c8576f4d4abe46fb445ac0da1ccb11402d39bea09fe8cb0b +size 2550532 diff --git a/video/6osgTNnAZQ_39025492.mp4 b/video/6osgTNnAZQ_39025492.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8e1606d5ff967d3229f02e8ef5d9862166f51219 
--- /dev/null +++ b/video/6osgTNnAZQ_39025492.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6afbf96ce8191fc28f29257853d7f1a815d2477d61266160d1ffaa1679b630c6 +size 3474550 diff --git a/video/6qr3932RWe_39024933.mp4 b/video/6qr3932RWe_39024933.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fc9323e9831017ef4b3d3b61979377fc584554eb --- /dev/null +++ b/video/6qr3932RWe_39024933.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:72064e66e68fc09c8e351fb73bfb3eecc5d8db29c80106a8a6e0187d235e0df3 +size 1230811 diff --git a/video/6tqgL8VluV_39017123.mp4 b/video/6tqgL8VluV_39017123.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e565e17dac98bf692bc8b39dfb3a45eed6cd58da --- /dev/null +++ b/video/6tqgL8VluV_39017123.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f49dddf9eb28e868de1f6efb8092414c84ee93b9eef0b8c1640d66d24b021174 +size 2679212 diff --git a/video/6uv9ViIoMj_39027808.mp4 b/video/6uv9ViIoMj_39027808.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e2acbb9911e8067334e322a50d68636b761c20ed --- /dev/null +++ b/video/6uv9ViIoMj_39027808.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:528e795b661e3a2d7d3d1490cbe294a37c19785fdf0013c97ecbcc28d08c1a7e +size 2463777 diff --git a/video/6yv8UHVJn4_39018393.mp4 b/video/6yv8UHVJn4_39018393.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..aaa7d513de0c732568e6b8e778663c2b51fa1aa8 --- /dev/null +++ b/video/6yv8UHVJn4_39018393.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:52fb4c8379db23dd36173853827524bdb7aeaee88da7ba70f3a244f092e1cdcc +size 2674720 diff --git a/video/6zOKbzjBO4_39026485.mp4 b/video/6zOKbzjBO4_39026485.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..49e1c9b58f09505a76902d6a10c4bf01d5290624 --- /dev/null +++ b/video/6zOKbzjBO4_39026485.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:49fa868ae2db87b7d5b46ef80afed45e768af9d9bc0cf2fd6f791d95dd0c1531 +size 2632655 diff --git a/video/74B6qX62vW_39025207.mp4 b/video/74B6qX62vW_39025207.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..18e03493739b94a1aef79251b1b4981b4356d34a --- /dev/null +++ b/video/74B6qX62vW_39025207.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:81db3d643cdb183e6cf427188267bd9494fc75ee6ff7ed264d047281a68d2f14 +size 3013001 diff --git a/video/76CZrhbMoo_39027044.mp4 b/video/76CZrhbMoo_39027044.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f0e71d47004068f9fa400d42ac8ee924b39b8ac4 --- /dev/null +++ b/video/76CZrhbMoo_39027044.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2ed843b0ed4b627f2b535270a61cdb87003696da217b9e2c7881ada91fd16c08 +size 2534109 diff --git a/video/776lhoaulC_39018391.mp4 b/video/776lhoaulC_39018391.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7befbd7faeaf80d2c6464def385d447c4cba4a9b --- /dev/null +++ b/video/776lhoaulC_39018391.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5307706191c710c522137a0092dd67b8223dc448d1078330b80653c9edb63916 +size 8306 diff --git a/video/77kCJzvpOa_39027671.mp4 b/video/77kCJzvpOa_39027671.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7385e26c10a83ae38fdbc91b1ba8ebf77af6d410 --- /dev/null +++ b/video/77kCJzvpOa_39027671.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:680c5ccb495c12f97851c5b96527e4b811e9de4933ba8227848b0b507d73e760 +size 1922604 diff --git a/video/78iGZdqxYY_39017153.mp4 b/video/78iGZdqxYY_39017153.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..285cdc5f286a7dc3e059e117d78683ad340cd16e --- /dev/null +++ b/video/78iGZdqxYY_39017153.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d1eb627077d9f01c4359fd008f7ad41365986eb37d59ff62546380ee97e3232 +size 2766388 diff --git a/video/792txRlKit_39025706.mp4 b/video/792txRlKit_39025706.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d6ace3466312d59f728576183790fa92709cb1a3 --- /dev/null +++ b/video/792txRlKit_39025706.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1c6cfc6648a67304dcf16e4fc18afed93c90a45a08de6e53f1bf637e7c296f31 +size 2307170 diff --git a/video/79FVDdfoSR_39017040.mp4 b/video/79FVDdfoSR_39017040.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..92920483693607649d2de45e3abadb4801461e6c --- /dev/null +++ b/video/79FVDdfoSR_39017040.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e94e969ee61529e891567f78b2cdc70a07c19d2408948f680c1e5154b2fbd498 +size 1900925 diff --git a/video/79eWvkLjib_39028383.mp4 b/video/79eWvkLjib_39028383.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d0944f1bc16c11c043d91d7bf884cf8c9e202fb7 --- /dev/null +++ b/video/79eWvkLjib_39028383.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3416ef7003bde882a0f729d55d4af595a2c9e26d84c64c9bcf40afa0d34dc94a +size 2731375 diff --git a/video/79q206xswc_39025595.mp4 b/video/79q206xswc_39025595.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0330ae1ee440bc2c9b56fa06715ef57647581c8d --- /dev/null +++ b/video/79q206xswc_39025595.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:03d0d9cb56503b71c6f95a9905d0a3605345dcb50976548f6ef340ce416df064 +size 2332187 diff --git a/video/7ANmKBfP88_39025971.mp4 b/video/7ANmKBfP88_39025971.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cfee16cc0da16d94f65a85e3585adfe7c4773c4e --- /dev/null +++ b/video/7ANmKBfP88_39025971.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6550b1a2e1b66bf7e5ebfa197a8360f529d0911e120d9b95cc5ae9becae3048 +size 1492792 diff --git a/video/7Dep87TMJs_39027877.mp4 b/video/7Dep87TMJs_39027877.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8cd899056df5837343d043a2fff4152151ff83f7 --- /dev/null +++ b/video/7Dep87TMJs_39027877.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:106c7ea9f8f208ce10f0da73d97377a02f8d16280a2cd0a523ff48e133d28ff2 +size 2103627 diff --git a/video/7ESHFpqjNO_39025786.mp4 b/video/7ESHFpqjNO_39025786.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0f4b47519abf7b7c2816c74ae4d15734bb4a8b53 --- /dev/null +++ b/video/7ESHFpqjNO_39025786.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b06a1d347936b791d4f0589aee5695d2c5cbdcf7cc14f4a270552760eebc61a6 +size 2387869 diff --git a/video/7FeIRqCedv_39018389.mp4 b/video/7FeIRqCedv_39018389.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..206d1d73a7d335536d99f6363dddb31f76791c69 --- /dev/null +++ b/video/7FeIRqCedv_39018389.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:70e71d6eb68d1e16fa5e1635b54571a1a04235b8915f9da1ef806a019b4be6bf +size 2947234 diff --git a/video/7Fzx3Akdt5_39024612.mp4 b/video/7Fzx3Akdt5_39024612.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5573f7490e9df584eaea6e6a7d21236dc73099bd --- /dev/null +++ b/video/7Fzx3Akdt5_39024612.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:621b1438a7a08ab57104c8f6585dc996608cb96d1e400b3c6bf4fe4c78e59f8e +size 2810251 diff --git a/video/7JfKCZQPxJ_39019104.mp4 b/video/7JfKCZQPxJ_39019104.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a68842e3277c0e9ba92fc8d9b783e7ea6b993025 --- /dev/null +++ b/video/7JfKCZQPxJ_39019104.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0534c8af22a4d6351f97193f7e6e0f096973477cf33d369a8d895bcd1b236578 +size 2569842 diff --git a/video/7Jwpw4qKkb_39017149.mp4 b/video/7Jwpw4qKkb_39017149.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cc4940faf8bc56545411860277c997bf20d592dc --- /dev/null +++ b/video/7Jwpw4qKkb_39017149.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:49ade171f6c98eee0db61bb3f797f12b20db4db00cfe792c057ecb55263a83ed +size 2622460 diff --git a/video/7Mo1NOosNT_39027088.mp4 b/video/7Mo1NOosNT_39027088.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..aeac972119c2ebff9f2e2513e3b2a60777979de3 --- /dev/null +++ b/video/7Mo1NOosNT_39027088.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce33be8fd6f4e4243d2e4866473734448900b828ff8677adcbfb6f629c32c337 +size 3277362 diff --git a/video/7NzgkEdGyr_39018970.mp4 b/video/7NzgkEdGyr_39018970.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b36bc869f8eecfc28d8bd0ab235234a2dfc4c425 --- /dev/null +++ b/video/7NzgkEdGyr_39018970.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a86295cd45a39cf2f9fb7cd654403055c1bd6b0d1c5a8cc16d1e05a7fd883007 +size 2316569 diff --git a/video/7O6KtaAr8n_39028762.mp4 b/video/7O6KtaAr8n_39028762.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..02941f8c3cfa47537fabee5da3ccceb57ce099c4 --- /dev/null +++ b/video/7O6KtaAr8n_39028762.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:92777cc981b54ff6d720de03afc4070779ecd5d3dc027d580308565f1112cb84 +size 2745580 diff --git a/video/7PORYhql4V_39024870.mp4 b/video/7PORYhql4V_39024870.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..88c2c0dff801677f32fd6e4cbf013a5e09828598 --- /dev/null +++ b/video/7PORYhql4V_39024870.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8088d8ac59369873e2254cbf7bbc944fb9a6989ddaadd3c0af1d1f834492f689 +size 2408044 diff --git a/video/7QG9R8urVy_39024754.mp4 b/video/7QG9R8urVy_39024754.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0d3253e4fa6f87045a5f26a9513e275a455d9ea8 --- /dev/null +++ b/video/7QG9R8urVy_39024754.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84c2425dd84f29ce9cc4919e7f667f4ccc4c8d4f7120f06b24684ee8d28b4812 +size 2398652 diff --git a/video/7TOs9gjAg1_39018385.mp4 b/video/7TOs9gjAg1_39018385.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..616cc1918dc7661ab5fbbe9c1ff9c852076471f0 --- /dev/null +++ b/video/7TOs9gjAg1_39018385.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7341ce538173d527aad3e9f130488a1eaa8523b5dca78c080151483d515560da +size 
2984288 diff --git a/video/7Tir0u0ukg_39026855.mp4 b/video/7Tir0u0ukg_39026855.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ec2203c48d352a5d24788698dd0b70b85908b03d --- /dev/null +++ b/video/7Tir0u0ukg_39026855.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:72cc74811c0966b8ade552dea2201b9e91cb978176dd24021b45db564ce7177c +size 2868360 diff --git a/video/7U5MwUS3Rw_39025871.mp4 b/video/7U5MwUS3Rw_39025871.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..57f85085fccc1e40a2a2c95f15df89a9ef022190 --- /dev/null +++ b/video/7U5MwUS3Rw_39025871.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a420b97f9d888845cb274d026f62621ccf6097da42a4993d3444e779893cfb9a +size 2759403 diff --git a/video/7UyBKTFrtd_39028444.mp4 b/video/7UyBKTFrtd_39028444.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b72ff9575c89ea22c8be2a0a9b0a1ee100cef74c --- /dev/null +++ b/video/7UyBKTFrtd_39028444.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0edcb478dd337bb25e6df9b4a1be1548847c7c895a6c57c43641a0e00366ec17 +size 2726168 diff --git a/video/7VPTUWkiDQ_39018383.mp4 b/video/7VPTUWkiDQ_39018383.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..66b30dc21548e28409a41361c1ff7a926e27f6d9 --- /dev/null +++ b/video/7VPTUWkiDQ_39018383.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51c9a9e749e3d828e2ef290d13ee60d0e6c6c69c617c406100c6bbcdf9812bb6 +size 1658942 diff --git a/video/7W0f7lifDk_39028676.mp4 b/video/7W0f7lifDk_39028676.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..614bae36ea0f2c8a0b711a37e58e485f19f6057d --- /dev/null +++ b/video/7W0f7lifDk_39028676.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dcfea1d5455a05601447a0675e1a745bade2575b18ac0cca96fa1007477df1da +size 1352938 diff --git a/video/7W3GLNImfS_39018382.mp4 b/video/7W3GLNImfS_39018382.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1977116990236f7676ac410dfde9e842dc45bbd8 --- /dev/null +++ b/video/7W3GLNImfS_39018382.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cdfa9a4ebd59f1affd0f25afab4c862ff5775856afe07f32f2b66011958410c7 +size 1726640 diff --git a/video/7WvwzuYkUq_39025447.mp4 b/video/7WvwzuYkUq_39025447.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dabe5da2ab2c23a8e0c63a1d1882c9d5e78c81ea --- /dev/null +++ b/video/7WvwzuYkUq_39025447.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54cc38463902f5d849b0702438e599024ae980e623b5c0fb0157360781af3e10 +size 1832185 diff --git a/video/7Ye12RLZ4P_39025131.mp4 b/video/7Ye12RLZ4P_39025131.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6a0db5cdd993cf0473701f4f4b244ed046bb168f --- /dev/null +++ b/video/7Ye12RLZ4P_39025131.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:37118b08344f872ae9445469673f6fbf2d04abb55785ff3818b58c977abf31f8 +size 2670725 diff --git a/video/7arAADUK6D_39026017.mp4 b/video/7arAADUK6D_39026017.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f3dfa0a441189731673fbc50f579810c7885e7b7 --- /dev/null +++ b/video/7arAADUK6D_39026017.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc5c9e84bc347fb410402e2782887256e0d2693f8d0707cd11711e583846c9be +size 2512313 diff --git a/video/7eIaqYrpcs_39024775.mp4 
b/video/7eIaqYrpcs_39024775.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3b3b565ae829fc51200859dd7847616f1ab90509 --- /dev/null +++ b/video/7eIaqYrpcs_39024775.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e4412716b20c896289c6984937a3f3263ad5cfa89535dd75daa5c1ff8b979ecb +size 753214 diff --git a/video/7fScrgJ3An_39026007.mp4 b/video/7fScrgJ3An_39026007.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..81c400e687890181f6fccf8eb623472051480ea9 --- /dev/null +++ b/video/7fScrgJ3An_39026007.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb02db0100345a5addcb1e87f651014ed7d588c9be50d15d1f82bc5ed4f11fe8 +size 1422766 diff --git a/video/7flSQgZ4RT_39026573.mp4 b/video/7flSQgZ4RT_39026573.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..41cc82d06938383be354cb0b81c016a0c1bcd5e6 --- /dev/null +++ b/video/7flSQgZ4RT_39026573.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71dc2d1c12b32d87277f5c072739d6e01193e96c06894dcedd47b391924987ae +size 2335463 diff --git a/video/7gLfQT52Nn_39018925.mp4 b/video/7gLfQT52Nn_39018925.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..47a64cc94f775dbbefa9116f4c977940e67045f8 --- /dev/null +++ b/video/7gLfQT52Nn_39018925.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7393ef11555eda01b04e5690872d52f3efc96c0590a1b64b874d41188a623ab0 +size 2787806 diff --git a/video/7gUrYE50Rb_39018816.mp4 b/video/7gUrYE50Rb_39018816.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..42e26ecfbecfd9a9373648c5106d42c5d916f1ee --- /dev/null +++ b/video/7gUrYE50Rb_39018816.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ec422e09ad5d93412c98aa9f8e3f9b1bedfcca2741f1392f5a6b3a5bab3ee29 +size 2893701 diff --git a/video/7gf6oGdKPU_39028754.mp4 b/video/7gf6oGdKPU_39028754.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0ae7b646d1462cc711a9f116e03ee7072ea95a87 --- /dev/null +++ b/video/7gf6oGdKPU_39028754.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:726c7a20d9fbec9e572e46ae975986319bbbdcedf3fd7e7dca9bb4a39befc761 +size 3321669 diff --git a/video/7hxoYxKDTV_39018888.mp4 b/video/7hxoYxKDTV_39018888.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0703fda1329f906a28f09b68631da419d2e7d770 --- /dev/null +++ b/video/7hxoYxKDTV_39018888.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f403e07c1f939629fd252e5db8ebb33a48bd62a732ab23a8cd62a6bce1b40c4 +size 3047166 diff --git a/video/7oLshfEIC2_39019019.mp4 b/video/7oLshfEIC2_39019019.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5a637bbca833ecb2d382ad279bac814e48fd398d --- /dev/null +++ b/video/7oLshfEIC2_39019019.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc7552a91e0280305a48605340426650ffc835800fafa56ac6f089516e9bd20d +size 2807342 diff --git a/video/7sdkLVuYCU_39026062.mp4 b/video/7sdkLVuYCU_39026062.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..74495789b54e2e314dd6d0449dec0d436ee5d914 --- /dev/null +++ b/video/7sdkLVuYCU_39026062.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a56d39080efbb3a7812dbc561dfde31f88a777be6eb1de27efbf92e90928e28 +size 2659539 diff --git a/video/7t9eDEY2GT_39027987.mp4 b/video/7t9eDEY2GT_39027987.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..56ac4686b3b504617e16211f558b616644ad1ac0 --- /dev/null +++ b/video/7t9eDEY2GT_39027987.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f7c51914682e6aea104f711b257b9f486111b6ada4c05653dfbbb98c3ea5f0ec +size 1937218 diff --git a/video/7tRtH0AoBl_39028803.mp4 b/video/7tRtH0AoBl_39028803.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5cbe7e561a498bacfff6bc9f9ef03ce004dcf0b8 --- /dev/null +++ b/video/7tRtH0AoBl_39028803.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:722d8f78e9a6bbc195745d0f8a8cda0b754093274ada52537e30e012994f8471 +size 2275970 diff --git a/video/7txPaUpUnc_39028864.mp4 b/video/7txPaUpUnc_39028864.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f7b3f15d2dfb2411afe03d0aaa20efb17e3d9656 --- /dev/null +++ b/video/7txPaUpUnc_39028864.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7371cb4f25b047bed6419799b407566ffc9b29a9c0e4db13a31986d134fa1f12 +size 1585935 diff --git a/video/7uqVfZW6Mo_39027049.mp4 b/video/7uqVfZW6Mo_39027049.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9bb39594d8b0c26837245472fadf7f80ea9e6e78 --- /dev/null +++ b/video/7uqVfZW6Mo_39027049.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b6292dc66f8eb16f8edc18cafc8914b68473fac10a6a79adbfd9408bdb37588 +size 3159705 diff --git a/video/7v0UyO0B6q_39027050.mp4 b/video/7v0UyO0B6q_39027050.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..741ed90843262909f2012a86d52f384cb7732921 --- /dev/null +++ b/video/7v0UyO0B6q_39027050.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c27e9e043a84e3a05ff13788e3b244c89df401b22ae845252103d787ed45b015 +size 2370405 diff --git a/video/7zY781bMDO_39018375.mp4 b/video/7zY781bMDO_39018375.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..36b1738dba53d221a3b9723c56d8e34766fe195a --- /dev/null +++ b/video/7zY781bMDO_39018375.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e63860dbbdf2af08e47b358f27b93ccc3066c01103776cdd3eed9677c6087631 +size 2091324 diff --git a/video/7zzOcyT0hd_39024451.mp4 b/video/7zzOcyT0hd_39024451.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..28765d96c6c96f3fbb42e9354e01d0131a42ce8b --- /dev/null +++ b/video/7zzOcyT0hd_39024451.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:179e9fa3d576707a6332d64ec5873a58250e4f926bcde5a67cc6529536a125b4 +size 2424295 diff --git a/video/82Ndsr4OS6_39027557.mp4 b/video/82Ndsr4OS6_39027557.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c60b6f40ea2e6ac0f3a6383fcb218897cbedb406 --- /dev/null +++ b/video/82Ndsr4OS6_39027557.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3eb3e48dc7a3d2c912f8c584abfe0c8d8b8742985e345139a47fafa6a70a9011 +size 2407959 diff --git a/video/848vuK2cKp_39025785.mp4 b/video/848vuK2cKp_39025785.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7ad05a5528463cc4a17ec62b0139342ed8fc6270 --- /dev/null +++ b/video/848vuK2cKp_39025785.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6f65ecefdc2b52b6afb7c674826370f229764db99384b3b91e368e6d0e20e26 +size 2964478 diff --git a/video/85tu7K06i3_39028596.mp4 b/video/85tu7K06i3_39028596.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..b667d383d19b0a5ee218476cea8c792d192b345a --- /dev/null +++ b/video/85tu7K06i3_39028596.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2cbc337c4f98b9fc469e823e45c469a44d207dd50e5afcfb7b5270c1d0c0c365 +size 7746 diff --git a/video/87AXdbkRyd_39025642.mp4 b/video/87AXdbkRyd_39025642.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7e1eab980a321b027d5aacd93db295a3597a177e --- /dev/null +++ b/video/87AXdbkRyd_39025642.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a2137b19d8d4d2bc184807931944a2bd7574d58b04cdd48f4502622a2d772394 +size 2924033 diff --git a/video/88TzdGyPT6_39026613.mp4 b/video/88TzdGyPT6_39026613.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..edc4d25871a12b175f5cf8d5a910028984889032 --- /dev/null +++ b/video/88TzdGyPT6_39026613.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:32fa65ac832faec6bffd41e089080640495c9c15489b53284bc0847197749845 +size 1859714 diff --git a/video/89A5c6enfc_39018373.mp4 b/video/89A5c6enfc_39018373.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f6fdd9b6d77bd2c893e2c6c94d6b53f3e95ec868 --- /dev/null +++ b/video/89A5c6enfc_39018373.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3aafb16683ebde9e4f7e0475d37f39dbf5fd73d062ccafe22aff723b5cbbc020 +size 2344336 diff --git a/video/8APPypS0yN_39027713.mp4 b/video/8APPypS0yN_39027713.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..79d900c969b41c814783adf4eba5cd9308c50645 --- /dev/null +++ b/video/8APPypS0yN_39027713.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca0f664615ebda2ad4e377b6016a5b93194b1a585dcc48594dc721b6198c486d +size 2464163 diff --git a/video/8BAkNCqpGW_39019007.mp4 b/video/8BAkNCqpGW_39019007.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5bf6fe0b0287418a0c50d0b61e011c01e1da95b5 --- /dev/null +++ b/video/8BAkNCqpGW_39019007.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c87b4ada6116df6f9f193ed20a3f8ccd85955e70c7fb25a0cf221b57e9f94d0 +size 2502843 diff --git a/video/8CguPoe3TP_39024795.mp4 b/video/8CguPoe3TP_39024795.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f26007bbfbb3f99fc7b483843f0ecfa05b03de58 --- /dev/null +++ b/video/8CguPoe3TP_39024795.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cb89862b9ef1d2a28e6563031ce3d727636a86f60acbb520c818021633bf9630 +size 1731686 diff --git a/video/8Dkz60yGfj_39026972.mp4 b/video/8Dkz60yGfj_39026972.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..69a5de8f1db74f82a2da8f086c81eb673a769084 --- /dev/null +++ b/video/8Dkz60yGfj_39026972.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8703544d804a5699c39ecebb8beae6a7f87e13bb16e141a31f9e702467bf7cc1 +size 2945529 diff --git a/video/8Dy42ThoNe_39026234.mp4 b/video/8Dy42ThoNe_39026234.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e406822eb9a5aa8e729280ddd5f02beb3f05ac67 --- /dev/null +++ b/video/8Dy42ThoNe_39026234.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:399d26039885f5fac8e02302f0c4b977a716ae3a4edb28ca0c43e272a6f77dab +size 2423888 diff --git a/video/8Fxqn1tZM1_39027780.mp4 b/video/8Fxqn1tZM1_39027780.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1f4b83b5709067e59000016f9d7f67da64b862af 
--- /dev/null +++ b/video/8Fxqn1tZM1_39027780.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:857d7048aa7ec25eb9a6894072b289760ed45fc62e5fb96df82a564be920e0a1 +size 2755990 diff --git a/video/8HCARN2hhw_39019184.mp4 b/video/8HCARN2hhw_39019184.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b6f8a7dcc7c32568a0eeebba1c2ad08f20886911 --- /dev/null +++ b/video/8HCARN2hhw_39019184.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5aef0c47a4535bd722ee7d7f913a68328bac1fc32cec5e9b0b1b7623ee7ccfff +size 2147522 diff --git a/video/8HwI6UavYc_39025583.mp4 b/video/8HwI6UavYc_39025583.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ae1ac0fc4cc7d558865163da6c1b431d2c57d5b0 --- /dev/null +++ b/video/8HwI6UavYc_39025583.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f99f11e780b547a33000ad87a69e6083f9d5b08ef1fce23c037ca8ea07bf0bd +size 849507 diff --git a/video/8PWvdaRQAu_39025563.mp4 b/video/8PWvdaRQAu_39025563.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ac938ca8a13664583ccdd349a6b4571f4504a24a --- /dev/null +++ b/video/8PWvdaRQAu_39025563.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb274b5b9b8df42910ebb5b5147f7c478e6853cc4c86a58f52e37c53c806677f +size 2755428 diff --git a/video/8UqyWNsnyA_39026293.mp4 b/video/8UqyWNsnyA_39026293.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8323d9e7e094af1fca0469a0acf78bc2d99e90ea --- /dev/null +++ b/video/8UqyWNsnyA_39026293.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5c44feb6d74034ec8e0a073443e5eeb2cbe3ea0fc3146c1bce026f2be9278f01 +size 2676312 diff --git a/video/8Uyfr5TcNR_39027538.mp4 b/video/8Uyfr5TcNR_39027538.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b8160eb3e5418d6f792deb37cd5e3697742f43ce --- /dev/null +++ b/video/8Uyfr5TcNR_39027538.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06a669cb1e3f8d92504c229aab864e3013df36ae84c0aa7afc8d1ea1fd0c5ff4 +size 3050237 diff --git a/video/8VPWfqtQMX_39018365.mp4 b/video/8VPWfqtQMX_39018365.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..32c7ef5d81482f301ee644ccf3dea0e66fbcf9f6 --- /dev/null +++ b/video/8VPWfqtQMX_39018365.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f86445c178ed61e1dffef10d2809275d2162a5746fadc3843c1ce949af86b31b +size 2101715 diff --git a/video/8W5ADJOKcv_39024678.mp4 b/video/8W5ADJOKcv_39024678.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..56514a1948766c7e15a67edb88114a897f9d4ae6 --- /dev/null +++ b/video/8W5ADJOKcv_39024678.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1eed1ab295e06175e2077c632f4dcb7dccbc4e26cedcf6a261bc1b0383c6c08 +size 2730747 diff --git a/video/8ZLL6mu2qC_39024886.mp4 b/video/8ZLL6mu2qC_39024886.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..74220a5a59794562f7d8e3427413af762bb03da6 --- /dev/null +++ b/video/8ZLL6mu2qC_39024886.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cb45a4a42cd43136da0f53ccf24e18c4550bca35470d0a636a9593359d6d3c4e +size 2965020 diff --git a/video/8aAaYEwNR4_39028878.mp4 b/video/8aAaYEwNR4_39028878.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..76558b2558faf5d9da2a36162c164941c26d5571 --- /dev/null +++ b/video/8aAaYEwNR4_39028878.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:42b70ac649fe6d8c4e12e122f4c73cdfed67c927c2d082de597797b61a9abb97 +size 2277305 diff --git a/video/8i6px5W1Rf_39025069.mp4 b/video/8i6px5W1Rf_39025069.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7289a8c1373b65a9c5c2ccdbd57af256885bec42 --- /dev/null +++ b/video/8i6px5W1Rf_39025069.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aecdf26e6950cdd8016132a2b0020c1e6e670da748608fe5267239194d38b8e8 +size 1845319 diff --git a/video/8koaqRdRYH_39025569.mp4 b/video/8koaqRdRYH_39025569.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..579a99a9c096a788c0e28588cb520bfb127604a2 --- /dev/null +++ b/video/8koaqRdRYH_39025569.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1079ab430a8af43384337ed930633ecbc88da8ee6060f03f2c20b5682f11a7d +size 2650329 diff --git a/video/8moTQjfqAV_39026669.mp4 b/video/8moTQjfqAV_39026669.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..66b9b1cc4bff636c7c6fd224fc229fa4b9fc700a --- /dev/null +++ b/video/8moTQjfqAV_39026669.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:010d3df1ebdef45044dc852e586de3c13cb9ef6d922ec410be9e4d9683df4e5c +size 2596623 diff --git a/video/8nxy1bQWTG_39018627.mp4 b/video/8nxy1bQWTG_39018627.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d7e3b2439e8b2ddf767aea9d2e6ddcb3e541f2c9 --- /dev/null +++ b/video/8nxy1bQWTG_39018627.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0331c36e2274cb3a2a90a4f464e1a34ba3401d34462449b9ef8b08ac5a61184b +size 2064284 diff --git a/video/8oSY3rA9jY_39028328.mp4 b/video/8oSY3rA9jY_39028328.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9a015f053af72ef5dda495b88b1c4c232f990b10 --- /dev/null +++ b/video/8oSY3rA9jY_39028328.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f72fd9dc1869597ad08d02ee3fa0c2b395ea84191f9013730b0c1eafe7414821 +size 2220401 diff --git a/video/8ohsbxw7q8_39025973.mp4 b/video/8ohsbxw7q8_39025973.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..aaef0de0e2ae3de2b9df1f57f14c09572a0a0ea9 --- /dev/null +++ b/video/8ohsbxw7q8_39025973.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ad9dc931c03ab2e7e8000d545fc90fc55ecc468dd483ed6c16a748f8c65bcbc +size 2287226 diff --git a/video/8on9dIUh5v_39025410.mp4 b/video/8on9dIUh5v_39025410.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5a53ec250a0797ffd29237c8604eeddff811e38e --- /dev/null +++ b/video/8on9dIUh5v_39025410.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c30d7f8df5369ee6c52ec780a998bbf5ee5dfbec6d535ba31fa6c0615840fba +size 2541625 diff --git a/video/8puv3c9CPg_39028709.mp4 b/video/8puv3c9CPg_39028709.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ef61f478144c37ebea8d5d96b6c3642fe9a797d8 --- /dev/null +++ b/video/8puv3c9CPg_39028709.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3cd770b472dbcbf63f4e6ce07f4ba37888549f34cbe4dfac02597e2a587ccd9f +size 1942382 diff --git a/video/8qEkjSEdls_39025792.mp4 b/video/8qEkjSEdls_39025792.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..af449135643af414ea1dbced3ed421878d121ff3 --- /dev/null +++ b/video/8qEkjSEdls_39025792.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:7d9e26e88bd24f102c89fdd3628caec1c89950b1f2e2fff11700f666666f5937 +size 2250799 diff --git a/video/8sKcAWOf2D_39018358.mp4 b/video/8sKcAWOf2D_39018358.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..67690bd4a2d17f4f6c90d40d846b6cf7e1226c3a --- /dev/null +++ b/video/8sKcAWOf2D_39018358.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:865bed216003bc911da7eba6ff3f593ae6e69ed3f6fc32bed0c3c0d259888d83 +size 2838078 diff --git a/video/8ugOlbjJpp_39028271.mp4 b/video/8ugOlbjJpp_39028271.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f28b92cc57021ba0a04b59ea935ddf377c8697d3 --- /dev/null +++ b/video/8ugOlbjJpp_39028271.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:63b94e05499005aa099b3cc009f864dca1559c0ad3d668255dde7c57c27b42c7 +size 2850080 diff --git a/video/8wvH0RZPsG_39026431.mp4 b/video/8wvH0RZPsG_39026431.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3f6d617cd695f5dec1bb88a16cd6745cd01a0fba --- /dev/null +++ b/video/8wvH0RZPsG_39026431.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca897a25e1459f0b8ec25f187a67c3dfb5e8fa9e0580361fb3f74d4860eed7bc +size 2717338 diff --git a/video/8x48XFLvyd_39028001.mp4 b/video/8x48XFLvyd_39028001.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..726133b95535a97e08422969918052f8e6133240 --- /dev/null +++ b/video/8x48XFLvyd_39028001.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:120d1caad636387c86a9f94a48f60cec20ec35805b65736e89d5b00156de52a2 +size 2667374 diff --git a/video/92btneN9Wm_39018355.mp4 b/video/92btneN9Wm_39018355.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..71e3d6bcdad88f23c2e6755813fd762da4132f91 --- /dev/null +++ b/video/92btneN9Wm_39018355.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1422ab35bcc68829d5c3ee0eff6147abd47d67fb176ec60d3c1b95942e859fbb +size 1985522 diff --git a/video/96gXvFYWSE_39025707.mp4 b/video/96gXvFYWSE_39025707.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e05f2c2a3e166ab4d4a9300ab17aa7b36a442092 --- /dev/null +++ b/video/96gXvFYWSE_39025707.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:358c8b8faaf154ac3f740eaa6839d41991912305fad9c3df0be0872586263f65 +size 3090206 diff --git a/video/99rOAM7Jfm_39026389.mp4 b/video/99rOAM7Jfm_39026389.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7500c1cf04ec15d80811404bccea9d04f1ad5a56 --- /dev/null +++ b/video/99rOAM7Jfm_39026389.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3fbb134bbf4181db65ac3d6deb5fabc7aca6ea6d55a0298c03c3234396bf63ff +size 2311941 diff --git a/video/9B0iOkn3UP_39028199.mp4 b/video/9B0iOkn3UP_39028199.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3023126bfc3b7f429a9fddeaaf180f3ce4569f13 --- /dev/null +++ b/video/9B0iOkn3UP_39028199.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:005bd1eca3414b1ed12f570d8af32aa29cd26ce11b36c0a726e83b30cc30d228 +size 1694121 diff --git a/video/9DXXMXnIGm_39018352.mp4 b/video/9DXXMXnIGm_39018352.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8a12a976bb9e393918ec3c3f772a772ef1f9cf78 --- /dev/null +++ b/video/9DXXMXnIGm_39018352.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7037662cee20b892940e394b82ecefc429d39a8645713e99f818c6a4bb7be994 +size 
2741223 diff --git a/video/9GhSOp1LYH_39026174.mp4 b/video/9GhSOp1LYH_39026174.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..af7210b91581bddfd91a63f5e5df6f3ebe2addb3 --- /dev/null +++ b/video/9GhSOp1LYH_39026174.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e6dc9d33e5a3dab194a6e72ce0acf5473e5006cf6e2fce7d0d14a432e3e14c5 +size 3129377 diff --git a/video/9JFSJitKC0_39026490.mp4 b/video/9JFSJitKC0_39026490.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9a5ef2c44ee0c5fc4ea25128c5e2ec950c954543 --- /dev/null +++ b/video/9JFSJitKC0_39026490.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1cf738648a6a71ef56773246183d64d451c396f2ad4e186e8b4c31b0aee59d76 +size 2621371 diff --git a/video/9Jmt1eER9P_39027473.mp4 b/video/9Jmt1eER9P_39027473.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1750fe8b4ef48ae20570f07faf3d1bb693aab015 --- /dev/null +++ b/video/9Jmt1eER9P_39027473.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a94c20b9ab41d4312c08aeb2523096de7f988db747bf66685d04868c2374620f +size 2247404 diff --git a/video/9O2sVnEHor_39028114.mp4 b/video/9O2sVnEHor_39028114.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b504253541868919af0578655b7d58abc01c9751 --- /dev/null +++ b/video/9O2sVnEHor_39028114.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad51d54dea85f69b10848c0bf6065ace94889b1195db364d2e0fd3a2a01bc862 +size 2630611 diff --git a/video/9OHXQybMZB_39027855.mp4 b/video/9OHXQybMZB_39027855.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..aca4a68c303c9ec19bba5685b43f3908cd6f83fb --- /dev/null +++ b/video/9OHXQybMZB_39027855.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d426ba47e87d2b877e42ef3fa2b66cfc8405d5679368635ecafb4eb4c38742d +size 2454743 diff --git a/video/9RIbNmx984_39017173.mp4 b/video/9RIbNmx984_39017173.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..082c46de1145c4a0cf1d920da4ba60166b9e5de8 --- /dev/null +++ b/video/9RIbNmx984_39017173.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d0a945357dd785518c5096a375e6a193e9ed70854e337ebf0b50a2dcbf623e7 +size 2612968 diff --git a/video/9SghPrjYU1_39025402.mp4 b/video/9SghPrjYU1_39025402.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..33ccfa3f7a3fc47f67ba9dde05846ec5172b75b7 --- /dev/null +++ b/video/9SghPrjYU1_39025402.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0766ce352e0356f76db1382110d7b0b23ba91c7a14869973a0e25d69f684d7fd +size 2390198 diff --git a/video/9SpWvX9ykp_39027172.mp4 b/video/9SpWvX9ykp_39027172.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..237f4aea9b92e3530c7e6b0ed947ce9ae0e7ddf0 --- /dev/null +++ b/video/9SpWvX9ykp_39027172.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e03c13af617ef6ec141cff3fb2aaea939c6171ede576f6accf33d6a722b9f9e5 +size 1543282 diff --git a/video/9U0nLnNMJ7_39027612.mp4 b/video/9U0nLnNMJ7_39027612.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ef000616f90c8b3bd60c48e2c3f07a21adccf606 --- /dev/null +++ b/video/9U0nLnNMJ7_39027612.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:048065ab98ce030894dba2f89b191af14e07ece8f79c2d71d53c0e914f93227c +size 2164810 diff --git a/video/9VbGjXLzig_39024898.mp4 
b/video/9VbGjXLzig_39024898.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2cb13d23ae575307b583addb07dad2e08ec3c685 --- /dev/null +++ b/video/9VbGjXLzig_39024898.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:449ea4906600841f0c46709649912c2ea0c4e9be12288564877c4eb7e6a320cf +size 2211815 diff --git a/video/9XDYEEBRV6_39027905.mp4 b/video/9XDYEEBRV6_39027905.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d3437ea1debbea413fedc0aa55b48f439c5c7fe2 --- /dev/null +++ b/video/9XDYEEBRV6_39027905.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7154aa75e1c92e18995890fb7c2f907bae521e4b51ce3da8b3908892df3ec1ac +size 2505534 diff --git a/video/9Y8zUO11EQ_39024460.mp4 b/video/9Y8zUO11EQ_39024460.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f8f2db49789f7e6e1d3428e10da5e69bd09bb6a5 --- /dev/null +++ b/video/9Y8zUO11EQ_39024460.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d658c159f2c83b448d8982f59b8b14a979bb6964630bf03e4290d7b7df296da +size 2632044 diff --git a/video/9bmTbVaA2A_39018724.mp4 b/video/9bmTbVaA2A_39018724.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..647aeb43c14225a4b43db7b6f87a2abe0cb9c132 --- /dev/null +++ b/video/9bmTbVaA2A_39018724.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67e51fec7337e569a228e97f64711d092beea34924fd14ffab1518dff7c6e35d +size 2634778 diff --git a/video/9f5tOXKoMC_39027809.mp4 b/video/9f5tOXKoMC_39027809.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..87c6ad361d5dd0edc8a4b31f02ee3788ba857840 --- /dev/null +++ b/video/9f5tOXKoMC_39027809.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3446ca644f3c8ef45ecd9458362f4e0736bd47a0c502f7327ebf6e288e1fac32 +size 3191487 diff --git a/video/9j1RD9LlWH_39018973.mp4 b/video/9j1RD9LlWH_39018973.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..182bd951ab0065930eb8ddce7c28c99c6fdfe284 --- /dev/null +++ b/video/9j1RD9LlWH_39018973.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:63cf5a4604954c7785a803acd3d0ac522fcd08cb619ffb7cad7558ac3cee01a7 +size 2385630 diff --git a/video/9kG7TwgLYu_39018711.mp4 b/video/9kG7TwgLYu_39018711.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a978258cdcd839700a41d78edecd8f1a6a388126 --- /dev/null +++ b/video/9kG7TwgLYu_39018711.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1406e4a27fdefe1abed1cf04933cac16595750d93b08168e3714c6c2dc2347e +size 3217537 diff --git a/video/9m02ib92Wz_39018344.mp4 b/video/9m02ib92Wz_39018344.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7b11239f673defa2458264d0aab38b576a2537fd --- /dev/null +++ b/video/9m02ib92Wz_39018344.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:228c63c2d837046727158b77d5dba4e7944f5433265e896b31022269f5a499d6 +size 2489593 diff --git a/video/9nsNyN0vox_39018343.mp4 b/video/9nsNyN0vox_39018343.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2a9880f6f5054a88f45af02a186fb4ee57da136c --- /dev/null +++ b/video/9nsNyN0vox_39018343.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7672984344a2c54a938e29d1f41901e384949b99111fedb9a39ba4cf897a166 +size 2410845 diff --git a/video/9rPyHyjfwP_39018341.mp4 b/video/9rPyHyjfwP_39018341.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..bc3337d809ecbf3d76d1b8c7b7dc351670b3a6c7 --- /dev/null +++ b/video/9rPyHyjfwP_39018341.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7247871a669d983f6872579da7e2ced6efe0b47ccbef428bc7d84f621a891069 +size 1804938 diff --git a/video/9sP4oejtjB_39028582.mp4 b/video/9sP4oejtjB_39028582.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ad82d2d3ace8b36dac82cd15f4527a757186caa3 --- /dev/null +++ b/video/9sP4oejtjB_39028582.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f69d5c563f2cfb3953b30d37b61065ef0315ca37f47bc42fbd4efb9e05ab9622 +size 2910009 diff --git a/video/9uolDxbYLm_39025646.mp4 b/video/9uolDxbYLm_39025646.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..91b0eb6dfabfe4ad4c9f9f72a53389f267bc1651 --- /dev/null +++ b/video/9uolDxbYLm_39025646.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ced160298f494734adf9ad17a8da4f8eac38b572249767d7d6bf7bc3c5d42634 +size 3033587 diff --git a/video/9utMGIbHBt_39024429.mp4 b/video/9utMGIbHBt_39024429.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8d0b4635bc92e133ed0f79fc60ce36e76c4092dc --- /dev/null +++ b/video/9utMGIbHBt_39024429.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:100b3f4546bbac0f275ae35b063bbef471a56e1345b01777520d8e85153064cf +size 2888901 diff --git a/video/9vcqleAHPl_39026746.mp4 b/video/9vcqleAHPl_39026746.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bb373c5ba906794b95265e032922061a9fe0ab2d --- /dev/null +++ b/video/9vcqleAHPl_39026746.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:beee68b83e5161855e174104b47f22241f4fedf9bbeb372b6ca05292c8a16147 +size 1883422 diff --git a/video/9w3iw8wDuE_39018340.mp4 b/video/9w3iw8wDuE_39018340.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c922b76a066092f29d0f200c5469dfc4540b216b --- /dev/null +++ b/video/9w3iw8wDuE_39018340.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71b488b78041836bc9098d3b618d98271c711405626fa67079b37db4666cdad1 +size 2751148 diff --git a/video/9zQl27mqWE_39027409.mp4 b/video/9zQl27mqWE_39027409.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..972ea90fa4c49c14cb77208b239365ad5480dda5 --- /dev/null +++ b/video/9zQl27mqWE_39027409.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3997733abff7dcad53be47d96bf28fecdb97984d6662bd35f678cdd834adaf7c +size 2093666 diff --git a/video/A2mRcRyGdl_39019078.mp4 b/video/A2mRcRyGdl_39019078.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0fbd7c346db4f5b47f0e73aee84da18b1e5580ba --- /dev/null +++ b/video/A2mRcRyGdl_39019078.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c4e6ad2d01d6bf5ce6c7a7c0429253ac690a4d732e131a949683cbdd3f3fbee +size 606538 diff --git a/video/A3hxp0EeNW_39025390.mp4 b/video/A3hxp0EeNW_39025390.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1b2acd28b85fb9ea25298c4343f2d74b7fe3ed6a --- /dev/null +++ b/video/A3hxp0EeNW_39025390.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:788ab5cda6e0b1a3b31795048d574e880f8ae6dc8b3226f6bc552187e6eeb53a +size 2428563 diff --git a/video/A7t7z6g6tM_39018336.mp4 b/video/A7t7z6g6tM_39018336.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..fc3ba21df84638b4cbd78b4300e63fed03abc80c --- /dev/null +++ b/video/A7t7z6g6tM_39018336.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61212fa7174f78cb95f7130c419d2e89fced983b3bc5b8f95842e4b15b394d16 +size 2496975 diff --git a/video/A969ouPqEs_39027758.mp4 b/video/A969ouPqEs_39027758.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..88562cc59142ab2017bb22e37d8384bd3244f01a --- /dev/null +++ b/video/A969ouPqEs_39027758.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad95c5f8abb620b6c8fea9cfd66e66bbc8f4ad0e11efcaa4530a0ad720494a1b +size 2552163 diff --git a/video/AB6XpMzvqH_39028828.mp4 b/video/AB6XpMzvqH_39028828.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9b83874a891265f1c7e59d67c0deb3331a959b55 --- /dev/null +++ b/video/AB6XpMzvqH_39028828.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b0621b95fe25e15c926c50b9b1973d5d3d2b8c62994566334f3e8275919f615 +size 2534276 diff --git a/video/ACCqGLviig_39026827.mp4 b/video/ACCqGLviig_39026827.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..300f0a406e7ade68b1ae5136c5ec0b4978593c0a --- /dev/null +++ b/video/ACCqGLviig_39026827.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a68145b5c68ccd8ddb143a1eff02094261f360ae886bca8f249f17f07a9cf59 +size 2515674 diff --git a/video/ACIDDnTbSJ_39025491.mp4 b/video/ACIDDnTbSJ_39025491.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fa81f228f48d4777e9fd96f915d3ba0113a1a2f5 --- /dev/null +++ b/video/ACIDDnTbSJ_39025491.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bafca1462007d334585ac70f074c597bedabfff02a66386b47be9d3a2120a2e0 +size 3027266 diff --git a/video/ADJASE9uQ2_39024396.mp4 b/video/ADJASE9uQ2_39024396.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..72491e46d0b230ca73dba95b3f98904792b99f27 --- /dev/null +++ b/video/ADJASE9uQ2_39024396.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b7e381be0cc27b878116e32183a7302c8780e1242e9db2b4082e81d1842b9ab +size 1947540 diff --git a/video/AFnSMlye5K_39025078.mp4 b/video/AFnSMlye5K_39025078.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a606dc795ad057b8993dae60495eb0b5ea0bad30 --- /dev/null +++ b/video/AFnSMlye5K_39025078.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:35558dfab431c6fe2e8bd302264a08110d414170dec8cba7ddf8b8305751219c +size 2772426 diff --git a/video/AH1mFs3c7o_39026446.mp4 b/video/AH1mFs3c7o_39026446.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..738ec7c7b0a826670512f50688d774cc532360a0 --- /dev/null +++ b/video/AH1mFs3c7o_39026446.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb5d2a01ff871a57b32afa6686ddf61f53cf27983130fcc8e8c633bfbeee9624 +size 1825574 diff --git a/video/AH5KwUSsln_39026739.mp4 b/video/AH5KwUSsln_39026739.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..180cb01ffc22a630ed83e4756df475e90a1ab916 --- /dev/null +++ b/video/AH5KwUSsln_39026739.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2235423edbfd7efdbefdc8f00be6fc2ad7258a99e98af914b255668bf0f1e29 +size 3361274 diff --git a/video/AJBkfwXh3u_39018331.mp4 b/video/AJBkfwXh3u_39018331.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..6f2dd59885c55d926b7edc1859af78fd32ec1c03 --- /dev/null +++ b/video/AJBkfwXh3u_39018331.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:336982cefeeac746775361d679439373e798d1d6aebd5abd7d04a0242280c8f8 +size 2308051 diff --git a/video/AKBTFQhCjm_39026181.mp4 b/video/AKBTFQhCjm_39026181.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fd05177a091add09ded38512b9a7bd7c6fe9dda5 --- /dev/null +++ b/video/AKBTFQhCjm_39026181.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa0edc8a65d7e4175c14495badbc106645b304c37f4ab89c709953b0da4077ef +size 2299916 diff --git a/video/ALISPmDPCq_39027605.mp4 b/video/ALISPmDPCq_39027605.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d3af9ffb01d79ce4ed118493dd24da4720f83d57 --- /dev/null +++ b/video/ALISPmDPCq_39027605.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bee7ca73de8c7318d5f11306cad4e4d2a6020efce9b886de951e86c7d9815552 +size 2247743 diff --git a/video/ALU676zGFE_39028453.mp4 b/video/ALU676zGFE_39028453.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c697d82b1969b5ee4a67dd3fc2c029457ae9dea0 --- /dev/null +++ b/video/ALU676zGFE_39028453.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:770c8bc83547ccd79dc62a9ae3aa02b32952595b275b9d08185c0ba2e4e68b9a +size 2835527 diff --git a/video/ALVwQjZRS8_39018330.mp4 b/video/ALVwQjZRS8_39018330.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ee95b186cd6a7a1e4d5dcd5a30cb1228183df4eb --- /dev/null +++ b/video/ALVwQjZRS8_39018330.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:db527bbb588658b245cb0f51209676a6119f9ea5385f3467ea4f28239838cf8c +size 2125165 diff --git a/video/ARAxPPIAhq_39027155.mp4 b/video/ARAxPPIAhq_39027155.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c8546cb15db4e8162fc3280d079c69f57d71cf67 --- /dev/null +++ b/video/ARAxPPIAhq_39027155.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4b4cd945b8780d607242dce1e659561c5b371e3a068555c0c1514d90b0ea054 +size 2663478 diff --git a/video/ARPrtuzAnQ_39019095.mp4 b/video/ARPrtuzAnQ_39019095.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..72979cebcba287ea9cc4362771c34b3bc0e87caf --- /dev/null +++ b/video/ARPrtuzAnQ_39019095.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1871910f1118c1eed186958d03f68c0d0e95d68b407d4729b17de70eedc4ccf9 +size 2603108 diff --git a/video/ARV1gJSOzV_39028508.mp4 b/video/ARV1gJSOzV_39028508.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5cc1ae298835ba3276d1fc3264b49cd2974d3c41 --- /dev/null +++ b/video/ARV1gJSOzV_39028508.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d2557494084f198b01fcfa44b3a7629bff5dd43ae354bf4deaf3317fcd3a222d +size 2890880 diff --git a/video/AU2gS9ut61_39018327.mp4 b/video/AU2gS9ut61_39018327.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3c323fedd926f46e0295978229a0f5639c627d34 --- /dev/null +++ b/video/AU2gS9ut61_39018327.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88881fd6255460cdc71f3d6f9b543a9e04320427e4004ec95a3b2a9da873ed4a +size 2213572 diff --git a/video/AUg9D2VjcF_39025776.mp4 b/video/AUg9D2VjcF_39025776.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..ba9ffb53a5f548bd233cad0c269520b84a1fa924 --- /dev/null +++ b/video/AUg9D2VjcF_39025776.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:656065dc0d61c8450ece1f3a7965d010c6e7a0ef0c8ac2fd3fb411c948eee284 +size 2292645 diff --git a/video/AVrGtVrx10_39025027.mp4 b/video/AVrGtVrx10_39025027.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d189591a58b68c4d21119009d118ee05fe68e2f1 --- /dev/null +++ b/video/AVrGtVrx10_39025027.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4731417e115585db368f275e390801356eba6da605e3d0488fd69f9ae4a8fa58 +size 2790100 diff --git a/video/AWFryOJaGi_39027156.mp4 b/video/AWFryOJaGi_39027156.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0867f66e4b883c619a4e0a12e245246ec79b2f3a --- /dev/null +++ b/video/AWFryOJaGi_39027156.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8d0fa8ee597f135f3655b8b3e42f1c1fcfecc3f9d9405b1e326e7a70cd1a882 +size 2801358 diff --git a/video/AY6aM13gGF_39019013.mp4 b/video/AY6aM13gGF_39019013.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0df9353bb4ba27fc02fc160ffed1934717e0ead9 --- /dev/null +++ b/video/AY6aM13gGF_39019013.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:97a66c181728b3bcecf35857ce4c652d3dd9d6f4f070a5bc5abcf525815b0f6e +size 2172555 diff --git a/video/AYDBFxNon4_39026380.mp4 b/video/AYDBFxNon4_39026380.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b15bdd2a86e39e1b72ff915831607cd0a13ce7c1 --- /dev/null +++ b/video/AYDBFxNon4_39026380.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5b08903974f6b436c1b7eb334f3b47bef3146770337b7de123adae51f84dde10 +size 2762727 diff --git a/video/AYq6GxxrrY_39028378.mp4 b/video/AYq6GxxrrY_39028378.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b87d22692b5ccb9b70b782ce3ac4e372e38f1f8e --- /dev/null +++ b/video/AYq6GxxrrY_39028378.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:437bf417654d1742e4204504383e2fcaf9d4f658f2ac0bcd59e4028ca595b84b +size 1913532 diff --git a/video/AZW3qlCGTe_39018323.mp4 b/video/AZW3qlCGTe_39018323.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4b14ff9a60bc7f026013718b7fbf3328780098cf --- /dev/null +++ b/video/AZW3qlCGTe_39018323.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:de846d500bcb960d8ca37682c4490ae647282dc03e7d37971c0e2442f399c09c +size 2541908 diff --git a/video/AbTpJl7vN6_39027552.mp4 b/video/AbTpJl7vN6_39027552.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5ac891380b611804f81f850c53951460bfe7cf80 --- /dev/null +++ b/video/AbTpJl7vN6_39027552.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bde09778573bc31729fd8abd349f7540c626321298caba4d8b8c994d23d376b5 +size 2812701 diff --git a/video/AcRfzLS6se_39018684.mp4 b/video/AcRfzLS6se_39018684.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..28c75fa43543d79d99c847ccac30343197b81d66 --- /dev/null +++ b/video/AcRfzLS6se_39018684.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e160d72ebf82b3246f869dd30bcb33628b00c9cfb55c05992a9b185f51099036 +size 2440710 diff --git a/video/AcSChDWL6V_39018653.mp4 b/video/AcSChDWL6V_39018653.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..5133b41b8e821ed9d067e33d5bba4f00e003460e --- /dev/null +++ b/video/AcSChDWL6V_39018653.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0ec39908369151a906b19a8910e4806325c69f24d933b9ea1d967671465a203f +size 2620887 diff --git a/video/AgM3MzT99c_39018315.mp4 b/video/AgM3MzT99c_39018315.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..68292cefdb02ee129568bf2696e9cdf97bdb2ff2 --- /dev/null +++ b/video/AgM3MzT99c_39018315.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a7cdb061e7b8f86c7cd0f0c3b43af6c11b31fa447efa878ef175a5604881afb +size 2662258 diff --git a/video/Ai76ATrb2y_39028879.mp4 b/video/Ai76ATrb2y_39028879.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c85924e36a1ccd46eea32b883a13408636d5dd00 --- /dev/null +++ b/video/Ai76ATrb2y_39028879.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2fdc96d6ddf2184cf8d8f07f93f69ffd659f989f6be9a0f01679dfaa4b76d942 +size 2741424 diff --git a/video/Aj8RKCGwjE_39026295.mp4 b/video/Aj8RKCGwjE_39026295.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7bc98b7a35aaf0104b790792d63da549185c7bce --- /dev/null +++ b/video/Aj8RKCGwjE_39026295.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:252545f5302c157a33f9e4202a62eb22d04b20334a47534fa5f2ccdc168fe2a3 +size 2974966 diff --git a/video/Ao0FiZqrXa_39027579.mp4 b/video/Ao0FiZqrXa_39027579.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..eac37dce62dc54c5105f84eabdff4207606e0eba --- /dev/null +++ b/video/Ao0FiZqrXa_39027579.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7f5d4921b82e1e96f48bb777eeed2158ef9b1979bf564363eac7a069711c3426 +size 2779229 diff --git a/video/Apq6corvfZ_39027216.mp4 b/video/Apq6corvfZ_39027216.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dc2b3a238de23ff2ef76bf15887845261a7f65ba --- /dev/null +++ b/video/Apq6corvfZ_39027216.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e13a8c2e5f2a18a92dcaee880aedc46d722133866f61905cdb05d808628c473 +size 2293316 diff --git a/video/AvWB40qXZh_39027857.mp4 b/video/AvWB40qXZh_39027857.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..17b93c0363607d91b2be5cd8dbbf0ecf1ca99b23 --- /dev/null +++ b/video/AvWB40qXZh_39027857.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3bfb7d8263b9b28ad3f876320ad0f00426bd2071ff4a35b91e09a09c3bb91d7c +size 1938820 diff --git a/video/Ax2yRhCQr1_39018310.mp4 b/video/Ax2yRhCQr1_39018310.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5657ca84033725b1de20b88af6100221f1b46cf1 --- /dev/null +++ b/video/Ax2yRhCQr1_39018310.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af255cd8e051c39d4da8d88626cda9ab5982b7d6b93783e3c423852c47941d20 +size 2737896 diff --git a/video/AyzkDpuqcl_39018309.mp4 b/video/AyzkDpuqcl_39018309.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e6446c219c1c3691699827b3c2f97da3e5826d8e --- /dev/null +++ b/video/AyzkDpuqcl_39018309.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1eefdf8ffd4aac6318cb834dd2fc86d45e8e9320a3618c873111a543625e29cb +size 2713607 diff --git a/video/B0OWOkMwhz_39027530.mp4 b/video/B0OWOkMwhz_39027530.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..b073ec87d0e4281ccd1fc608ffbc686483958cae --- /dev/null +++ b/video/B0OWOkMwhz_39027530.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e5ed06cd1505287647db8ece5862acd37fc718b174a25dbda0f46d0307b2313 +size 1783245 diff --git a/video/B1FOes6cyq_39028830.mp4 b/video/B1FOes6cyq_39028830.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0cd2d15d4dae8bb814729101d32502367cb3742f --- /dev/null +++ b/video/B1FOes6cyq_39028830.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:525263113643d68751e47d30a17379541cf4bc841ecf82fd7da7e33a23bc0ca2 +size 2807973 diff --git a/video/B1Iq1EOiVU_39025732.mp4 b/video/B1Iq1EOiVU_39025732.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..29e198416750e0eb570cd334bc885c37a2b52c35 --- /dev/null +++ b/video/B1Iq1EOiVU_39025732.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d609747ad490b3a8a1dbc8197115baff498004adc568eef9dd57e93333ee40f +size 3115894 diff --git a/video/B29BlRe26Z_39026632.mp4 b/video/B29BlRe26Z_39026632.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a3cad58b03be83dc9b1ee3656c73fd9fe98db159 --- /dev/null +++ b/video/B29BlRe26Z_39026632.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7869bce14232d91e4f4b9bf5ab278c4690f5c187cb0761c4310f038966e19878 +size 2735654 diff --git a/video/B2cTLakrhV_39028901.mp4 b/video/B2cTLakrhV_39028901.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1f80c5a7b26d61effec905d90714531908383af8 --- /dev/null +++ b/video/B2cTLakrhV_39028901.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f26c6a81277e1dec5cdf1a3ce28024c3f0f918b2000ea1a2d79fdbadc16008b6 +size 2965658 diff --git a/video/B74mb0tEY6_39027521.mp4 b/video/B74mb0tEY6_39027521.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a06f0f77f89d1eecd90684d84d23671dbfd0f8dd --- /dev/null +++ b/video/B74mb0tEY6_39027521.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f534317a0da1e1d54c128ec20e77c4a3a58867bb7ef101fd02a190dd0f08138 +size 2975983 diff --git a/video/B9FPPdNmyk_39024461.mp4 b/video/B9FPPdNmyk_39024461.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d4a713bcee240b912ac12854d8e9a83a238d2612 --- /dev/null +++ b/video/B9FPPdNmyk_39024461.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8295ebfc659252db38e182966f5be2b49e4747bf2ceac284f9ef5b0126dce22f +size 2148932 diff --git a/video/B9qg3wo75g_39025125.mp4 b/video/B9qg3wo75g_39025125.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..46166e2be19170b52df2a29bc57ccd18e86f36b7 --- /dev/null +++ b/video/B9qg3wo75g_39025125.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:debbf33a2d22afdeb7f6fa5a65db1ab1cf7fc07ba006d990263e4c6aec26848b +size 2485628 diff --git a/video/BAfKBkr8IP_39025382.mp4 b/video/BAfKBkr8IP_39025382.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..82a77fc0ddefa456a1ca4943310cc792613a2bc5 --- /dev/null +++ b/video/BAfKBkr8IP_39025382.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b80962bd57c6d536d41cde93acc1aaa0e02b3cdc4f630c84d16474fbd5f47089 +size 2149628 diff --git a/video/BAjjINf0Oh_39025493.mp4 b/video/BAjjINf0Oh_39025493.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..3f9b1809693281452e332992fbc24a62f94ad572 --- /dev/null +++ b/video/BAjjINf0Oh_39025493.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2bcd5cad18f06bbf60e8a1bcbdfca244281d3edaa89c4fdd66f6e4f43047dc4d +size 3125441 diff --git a/video/BAmAFraxvf_39027175.mp4 b/video/BAmAFraxvf_39027175.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..834add3134dd663a5edcced99d2f6ab108118093 --- /dev/null +++ b/video/BAmAFraxvf_39027175.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b3c11da5520f2aad03469be89780a4d7274b5cbd45c0c8058d12495c4fda508c +size 3224401 diff --git a/video/BCA9NMZkLS_39025966.mp4 b/video/BCA9NMZkLS_39025966.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..086abf8485e4963483049e68c578bedc4589bd5f --- /dev/null +++ b/video/BCA9NMZkLS_39025966.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aafa77656b84324e680af5cbdbfd34af10014736e26e8120ac233cc7b06036ee +size 2635469 diff --git a/video/BEH4mGo7zP_39019202.mp4 b/video/BEH4mGo7zP_39019202.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9a758e8e2ec33d131f9859008ce44c2e8000018e --- /dev/null +++ b/video/BEH4mGo7zP_39019202.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7ddfaedd0cdd3460999e98343317dac09d27a5f1c1e6c4a341fe80943c397caa +size 1951055 diff --git a/video/BEiqNQZIky_39026093.mp4 b/video/BEiqNQZIky_39026093.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..08bfbb79c008d96cb3c72a8c83bd1a339606ecd5 --- /dev/null +++ b/video/BEiqNQZIky_39026093.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7ad9f9914eb9aec8de64ff112b7525fa65a934dcee73051dbfb2b118d8407b2 +size 2546361 diff --git a/video/BEyEziZ4R6_39018304.mp4 b/video/BEyEziZ4R6_39018304.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5902aa2770e88cef488368d44a1032b79dc9ab37 --- /dev/null +++ b/video/BEyEziZ4R6_39018304.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21da666a5ef962f826422fb5deee84223d439b5d3ae6a36fe278d04ac25dabdb +size 1762226 diff --git a/video/BFWdIPPLgZ_39027048.mp4 b/video/BFWdIPPLgZ_39027048.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9717557cb7ca16177c2ffadc6c459bacd63c84a1 --- /dev/null +++ b/video/BFWdIPPLgZ_39027048.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d75efe940bf9bc12c15ef1ef650f52c29ec414a947128332df800047506e430 +size 2764139 diff --git a/video/BGOGknwHbi_39028092.mp4 b/video/BGOGknwHbi_39028092.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a463d49fd1fe070eb1b2d50b26541c2e56932bb7 --- /dev/null +++ b/video/BGOGknwHbi_39028092.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0587807aed484bbe2a9082c3e47467a925dcb51024988241a42a490428406221 +size 2834322 diff --git a/video/BJndYScO6o_39024922.mp4 b/video/BJndYScO6o_39024922.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9fbcca4d59f75b656b05c9a53545a245a7b13325 --- /dev/null +++ b/video/BJndYScO6o_39024922.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:658316d5f1c37b56d564275de86178b3e7188ed4259e2960020bced7676540e0 +size 1964189 diff --git a/video/BJrBaLoDRJ_39025873.mp4 b/video/BJrBaLoDRJ_39025873.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..8a8a49d04b735f43e1c835be7dfd2a1015aa6c2c --- /dev/null +++ b/video/BJrBaLoDRJ_39025873.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb1e80290f91cd06b052538a606069c617875c39d8ec90bb6d7482a165be7aca +size 3043706 diff --git a/video/BJv1t4XNJW_39028824.mp4 b/video/BJv1t4XNJW_39028824.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..061891b3c26fe9c84202c8f89619a7e4eb1aab84 --- /dev/null +++ b/video/BJv1t4XNJW_39028824.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:119a8d844a0522ca038e4c4283c5b5309f6f967cbef6005483a94b3dbc280510 +size 2123335 diff --git a/video/BLGQ3oqldb_39017145.mp4 b/video/BLGQ3oqldb_39017145.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0ee580011352087125c0067de65dfab9ac044f39 --- /dev/null +++ b/video/BLGQ3oqldb_39017145.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3a08e119a3f5e7e17a34889230051d25ceb4a9417a6f966ce4f128938e9956b +size 2543823 diff --git a/video/BOhnXyIPWW_39025990.mp4 b/video/BOhnXyIPWW_39025990.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2d0a942f9f9b0c61c64b05a8e9795440d4113c36 --- /dev/null +++ b/video/BOhnXyIPWW_39025990.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:81c66442bb58d48d19212c7816fdca2bbd83bc46764b671bece52e10b4b269f8 +size 1630341 diff --git a/video/BPb5AhT2Vf_39018638.mp4 b/video/BPb5AhT2Vf_39018638.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1739d6397a0ca1691d75c070cc61b40f6842d2b5 --- /dev/null +++ b/video/BPb5AhT2Vf_39018638.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5149d6b2db3920bf8e642fe0ef8a67e1760793c4c97b52ef90bcef848552a598 +size 2968197 diff --git a/video/BQh1SGvROG_39025987.mp4 b/video/BQh1SGvROG_39025987.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..db2cc6193d2ce1ffa4dd93e93cf5880c952c8625 --- /dev/null +++ b/video/BQh1SGvROG_39025987.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cfd9390055412233ce1c0e9ebd88c8c47b8e33feddba2f8e0c162b321fd40c5b +size 2685996 diff --git a/video/BRW0MKJ7Rr_39027570.mp4 b/video/BRW0MKJ7Rr_39027570.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..838d62cdf3cad047133360125fc50b7aaa76b36e --- /dev/null +++ b/video/BRW0MKJ7Rr_39027570.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64fef0f509dda255f91302359d5187bc16735aa07aac69b97f8848e283d50bd0 +size 2825252 diff --git a/video/BRZYhVHvSg_39026282.mp4 b/video/BRZYhVHvSg_39026282.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1094792973061ef9d6f3d710d32562197ede8d8a --- /dev/null +++ b/video/BRZYhVHvSg_39026282.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a17ee626cc4123dbeedb5b0775f04762fd8eaecbf4ede31f2142c4dd6cf9f85 +size 2933296 diff --git a/video/BRdEBlwUW6_39018300.mp4 b/video/BRdEBlwUW6_39018300.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cf2d2a219e0f54226b0666161b54218406cfc020 --- /dev/null +++ b/video/BRdEBlwUW6_39018300.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8a7d18f4d3fb51bf8fa9c7aba3732a0bb1edf5ea9e6f9066c80f238b31c73c6 +size 2468008 diff --git a/video/BV1PHbTJzd_39017079.mp4 b/video/BV1PHbTJzd_39017079.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..a79bc1004f66523891984ffb6d83119e6b12b471 --- /dev/null +++ b/video/BV1PHbTJzd_39017079.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1dd6e0d9df944c8837067c8c2fb1dfdadf940027fb5cffa8e86b2845d1673340 +size 2862349 diff --git a/video/BZLdXBjB8O_39027408.mp4 b/video/BZLdXBjB8O_39027408.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..410d650867874070fb4586b4571f6691d06855f2 --- /dev/null +++ b/video/BZLdXBjB8O_39027408.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aea03542a31b85a541edfbc7b2e31a2d797d56c6b391728edc7ac4ebda568798 +size 2512328 diff --git a/video/Bb21JPnhhr_39017187.mp4 b/video/Bb21JPnhhr_39017187.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9a98eaef61bf9f0ee9151ed1d4d4088532f9708b --- /dev/null +++ b/video/Bb21JPnhhr_39017187.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cad051a0ac202089b692e1cdd8f6ce00c6e394f09f7737f7cf7e57db9e560b50 +size 2250312 diff --git a/video/Bb4VGOWELI_39018626.mp4 b/video/Bb4VGOWELI_39018626.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e1294d9324a106bd0e13b40f53c68f86cf1d2d1d --- /dev/null +++ b/video/Bb4VGOWELI_39018626.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06c7e33962802e6d9cd6dbe33b2630da13b0d7ad06b416f55c96f560ab16268c +size 2483620 diff --git a/video/BgZcuEsYU8_39026039.mp4 b/video/BgZcuEsYU8_39026039.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2e76c9aba12aea6de64e55443298a899ba5bae64 --- /dev/null +++ b/video/BgZcuEsYU8_39026039.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:492f6f65ac88606e6712d0d98985c9e1499700359d2fa48e8cf15b7fc839ee80 +size 1442825 diff --git a/video/Bh0LLUp8OA_39025217.mp4 b/video/Bh0LLUp8OA_39025217.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..01eaccf22887e3b31c891d3c379bd8f530f28ce0 --- /dev/null +++ b/video/Bh0LLUp8OA_39025217.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:929788a8d600f51c555530e856a83e223e5c582f2225f069e88e19632eb230cb +size 2593194 diff --git a/video/BifeBRhikU_39018799.mp4 b/video/BifeBRhikU_39018799.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5bfff3fe056aee12978c0681a7148f3c0fdb8f76 --- /dev/null +++ b/video/BifeBRhikU_39018799.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:12f86879062498bc1fa44c8a121f56b84a88bd364e5b166e76c790cd71891f78 +size 6559505 diff --git a/video/BiikUm6pLu_39028521.mp4 b/video/BiikUm6pLu_39028521.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dc9cde6ff98cc151e37f17d57ea3e72f639b71df --- /dev/null +++ b/video/BiikUm6pLu_39028521.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:695f6aa1d2987240a887612fc42bfc1dba3e5730a3870b5280ef4e73de49ae38 +size 2960486 diff --git a/video/Bj2CpB9Dey_39028844.mp4 b/video/Bj2CpB9Dey_39028844.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4e090540db0550893a5504f7d2eb737fe95d4bea --- /dev/null +++ b/video/Bj2CpB9Dey_39028844.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30f846d9be4bee1f93171583741863600ebd0e5939f18f084989c24c45079014 +size 1769127 diff --git a/video/BllUWdpIOA_39018294.mp4 b/video/BllUWdpIOA_39018294.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..9ca564a3be086788fb9fdb0fab5e8c71b69b1895 --- /dev/null +++ b/video/BllUWdpIOA_39018294.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7deb84ec734c739f4880751268df040cb8aee282c27572fb8fd20f1214adf34a +size 2935756 diff --git a/video/BmwcbNYkuH_39025543.mp4 b/video/BmwcbNYkuH_39025543.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f0e4f3a3eedb7956a4ce38dfb7b178cd0e1721ea --- /dev/null +++ b/video/BmwcbNYkuH_39025543.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6ab8e1539df7dcc6aa98afd18c6f3e2c5d3083f319a77d6db9df32d1ae629696 +size 3046476 diff --git a/video/Bo6GpQ3B9a_39018292.mp4 b/video/Bo6GpQ3B9a_39018292.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d012a0d7f2c97e49f37209468f0486fcf9694a6b --- /dev/null +++ b/video/Bo6GpQ3B9a_39018292.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4bc0f87b8f661d16cd4c9768d3fb00b042a8dcc96ce9057ca8e4b9c00414d5da +size 1081277 diff --git a/video/Bpcgcr8E8Z_39017110.mp4 b/video/Bpcgcr8E8Z_39017110.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c88692c3de8c659440060149e2aa71bb6f4107f5 --- /dev/null +++ b/video/Bpcgcr8E8Z_39017110.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bac3c82af2e316dd835c77ec4fdf90d31dd78bc49cdbb4b985905eed450d8a77 +size 1949165 diff --git a/video/Bpkhu2ExxU_39018916.mp4 b/video/Bpkhu2ExxU_39018916.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0683f582b6a3342bac5b53477f460afd821d9242 --- /dev/null +++ b/video/Bpkhu2ExxU_39018916.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:29c4851f89d2ee48da36ba9633fea1a68d211c4590e5f58e9ea32ac673ad6529 +size 2306825 diff --git a/video/BptJGaPn9C_39027861.mp4 b/video/BptJGaPn9C_39027861.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..67903e8aa07236336bb8e2f3dd3019e49dc080c3 --- /dev/null +++ b/video/BptJGaPn9C_39027861.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a333af0cb8074e108107c2644f37c56ce6111925d0e43b2e58d67b5095923674 +size 1908348 diff --git a/video/BqHaLnans2_39019214.mp4 b/video/BqHaLnans2_39019214.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..75c232ed61e303b215efce0d907e455c84b9f973 --- /dev/null +++ b/video/BqHaLnans2_39019214.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b94f6fc61538f8eebf9856612f8da74af6cade05076907e39f121e42498cb2e3 +size 2797435 diff --git a/video/BrPZMOQiSN_39024819.mp4 b/video/BrPZMOQiSN_39024819.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..38328221bec9dcccb520af6fd7cdd1e8bf1b31b6 --- /dev/null +++ b/video/BrPZMOQiSN_39024819.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e522eb1850d3e77a63367baf7b0da00b48d982582a45ac390ad0a7d51dd4d32b +size 2244740 diff --git a/video/BrvLTxEx08_39027024.mp4 b/video/BrvLTxEx08_39027024.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5f76c85a277f14a5a45ee49ef01e31edebcbd041 --- /dev/null +++ b/video/BrvLTxEx08_39027024.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b2e839005d95b1fb9dbd758c77948f963ed9d506fd88ca0625d4606e05a7ba7 +size 1151563 diff --git a/video/BtT6o5tfHu_39019264.mp4 b/video/BtT6o5tfHu_39019264.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..1fb921e8d6b05f1d44af9fcb3ce09c9a798f3c27 --- /dev/null +++ b/video/BtT6o5tfHu_39019264.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:011f8cff2c2e5983ca1fd24a9b781f038406639c27204feb0c32ef98ef332494 +size 2537747 diff --git a/video/C0EhyoPpTN_39025028.mp4 b/video/C0EhyoPpTN_39025028.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..72e7434a27a3b1c7076d981affb27d8fa787e6b5 --- /dev/null +++ b/video/C0EhyoPpTN_39025028.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2fc989b4c832c98e4cfc8c31b0b11a7cfa7853c0dfde5e60879ee1a2acead7f +size 2861216 diff --git a/video/C1d3VVfdVG_39026383.mp4 b/video/C1d3VVfdVG_39026383.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..765bd94a9ef91f605270be6d1cfebb351a70d5e0 --- /dev/null +++ b/video/C1d3VVfdVG_39026383.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc381bc9f604c994048f0ab220d405d11df2c61a0f463f268d9d26f992f29e34 +size 2334767 diff --git a/video/C1hiRbzEH9_39028355.mp4 b/video/C1hiRbzEH9_39028355.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..658f49d8134445d2f9e47326f4e6d7cab5cb2c7b --- /dev/null +++ b/video/C1hiRbzEH9_39028355.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8eb9f40dd3bc92453861f4861e20f63f541ddc5492866020585e92e383b96f60 +size 2278901 diff --git a/video/C36v8541Ns_39018287.mp4 b/video/C36v8541Ns_39018287.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0d47fe44aab66c5448f320ee20be013825efc8e9 --- /dev/null +++ b/video/C36v8541Ns_39018287.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61757a22818083c02254203464f31314c18fa362ebc4f2a9c37870d9bf1ade52 +size 2332912 diff --git a/video/C3ZHiij9QE_39026032.mp4 b/video/C3ZHiij9QE_39026032.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b27c52be660426223a555b7f9730b28cf7552d7f --- /dev/null +++ b/video/C3ZHiij9QE_39026032.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:72322a16c4d8b7d27e28b92834934fe9de18551dade1381340bc9fc1bfe43aae +size 2601553 diff --git a/video/C3tEX45hJX_39027737.mp4 b/video/C3tEX45hJX_39027737.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..56e7c8d2c00e001b9d8ed11da53aa9e8773518ee --- /dev/null +++ b/video/C3tEX45hJX_39027737.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2f37094f3791223d44e998643b44ad7b1126fdadd36d4a36c8a11ba4dde111b +size 2571543 diff --git a/video/C4BikKsgmK_39018158.mp4 b/video/C4BikKsgmK_39018158.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b6ff198528e57e8077efea2b95e2cf31880ad720 --- /dev/null +++ b/video/C4BikKsgmK_39018158.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b843f442d0af6c9449ad371610c0815929395b03d170243509c1aba1b587deb9 +size 2624892 diff --git a/video/C4CxQmp9wc_39018991.mp4 b/video/C4CxQmp9wc_39018991.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..022a780706b8344ea5e1a997ff8fcbc2fde38db2 --- /dev/null +++ b/video/C4CxQmp9wc_39018991.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e0dce66bd954f32edbd6caab4e0d2bc0c5fbb37362d6260756f42d28dd5bc75 +size 1195896 diff --git a/video/C4SInFLvuB_39027260.mp4 b/video/C4SInFLvuB_39027260.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..794965c301829054a83a60f4151806da1a0f9c79 --- /dev/null +++ b/video/C4SInFLvuB_39027260.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7905823032345e7e11fa4a3302c23cc4965036ae1b34d84af3abbf40e2247912 +size 2634061 diff --git a/video/C4zmR2kyP8_39026377.mp4 b/video/C4zmR2kyP8_39026377.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d211b3aeb090ab33337b31b0fbdd57965d246c5f --- /dev/null +++ b/video/C4zmR2kyP8_39026377.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d8ec15ac32d0dec6fdf4ac28e1fd48759b9fc5ebdbfb77b4a4fd19c7f7b29ab +size 2867824 diff --git a/video/CAqdG2dy5s_39018286.mp4 b/video/CAqdG2dy5s_39018286.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6953a2fb1fb72ccdd893c1dbdd79973bc16c571f --- /dev/null +++ b/video/CAqdG2dy5s_39018286.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d2a65ef5f076cf2412776763f437dac58823c6a7acaeafe31d41d661a8e7e1aa +size 1914053 diff --git a/video/CEnoUjEqNx_39028204.mp4 b/video/CEnoUjEqNx_39028204.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0cf6b2951a694756faa8412fcad79c96f2fc12c1 --- /dev/null +++ b/video/CEnoUjEqNx_39028204.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13d79cfe6791bbc18b662ce60c903a35eaf1f4fb5d1d061a81a842a83f9fa7fe +size 2166345 diff --git a/video/CIHdlhfrOo_39024658.mp4 b/video/CIHdlhfrOo_39024658.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..261f7f5e04da56e240a5021e9e7dd2dbe7412293 --- /dev/null +++ b/video/CIHdlhfrOo_39024658.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88b923006fbf77cc0b1161880028d943443ed527269bbdfb430174c0eb6ded28 +size 1797928 diff --git a/video/CK5Hfb5hBG_39018917.mp4 b/video/CK5Hfb5hBG_39018917.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5775868f25b20a54339fc8b341150e548f8fe38f --- /dev/null +++ b/video/CK5Hfb5hBG_39018917.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0ed2f2d702fc408c31732c89bfab7c77f45d79a50b6dffab527cfbcc9b5542f0 +size 2708296 diff --git a/video/CL9k2PaUQb_39027949.mp4 b/video/CL9k2PaUQb_39027949.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0b5330653d35e4461769f0140dedde75611b1db2 --- /dev/null +++ b/video/CL9k2PaUQb_39027949.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e10a53bab5f3bdbc59580510e5878adf7bb6d7f0b4d4202795019b2b0cd9e55 +size 2751511 diff --git a/video/CTIFk7b9jU_39027596.mp4 b/video/CTIFk7b9jU_39027596.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0a4fc0f218ded0998ee45b24fc8d7adddf5a496e --- /dev/null +++ b/video/CTIFk7b9jU_39027596.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e18d755a537566a5a64199bca6b4004ec4b637f320ffee2ba1bd7770c746fc6e +size 2746947 diff --git a/video/CTvxvAcSJN_39024440.mp4 b/video/CTvxvAcSJN_39024440.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6c2d785e9ba06f6c22bd924a099affaef07cda15 --- /dev/null +++ b/video/CTvxvAcSJN_39024440.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77f49d76622822ebf5a875c7555d24bf7102044040c3a8838a6e217e08700bee +size 2618078 diff --git a/video/CYmF38ysDa_39018889.mp4 b/video/CYmF38ysDa_39018889.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..2625b490a3a25cb15e779dd1affb68106f54f968 --- /dev/null +++ b/video/CYmF38ysDa_39018889.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4473873f0c8c92a6c975794edd03d474563b2dbb43877f83d485f0a5d5999960 +size 2688810 diff --git a/video/Cb3kcwYBgw_39027912.mp4 b/video/Cb3kcwYBgw_39027912.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ac2e0e06c6be29cbc430570a9e264bd6e84d788c --- /dev/null +++ b/video/Cb3kcwYBgw_39027912.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0770b80771a2e454bf80208ace3e7d4a20afca47deeae75d810028adcc13be80 +size 2664854 diff --git a/video/CbHz30KeA4_39024971.mp4 b/video/CbHz30KeA4_39024971.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d3568c6853c940d4ad8760cde4727fda6193d0df --- /dev/null +++ b/video/CbHz30KeA4_39024971.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f41cbbbf977d3e8c5e509f06fd53a99e23182141218f2257f59179e50e089803 +size 2248518 diff --git a/video/Cc0ckJlJF2_39025272.mp4 b/video/Cc0ckJlJF2_39025272.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..873e1b6e837adfc64a911ff8f1c8128e44b094cb --- /dev/null +++ b/video/Cc0ckJlJF2_39025272.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:98b57b7ca293566d390358b0580fc3cb1b078d25545535d9e4d0e57b7ed0fa5a +size 2518873 diff --git a/video/CcNw4mVIxo_39028320.mp4 b/video/CcNw4mVIxo_39028320.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3ebd9c9ad530a10116708edf9ead79eb19b57828 --- /dev/null +++ b/video/CcNw4mVIxo_39028320.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1cd9e624a41bb0f876363c06ceb10fc7739a84653955df716bb26d4e502a78d7 +size 2100381 diff --git a/video/CdjnzWsQax_39018273.mp4 b/video/CdjnzWsQax_39018273.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a47b844f84e950a475f52708cfe4f5d05a5e83ed --- /dev/null +++ b/video/CdjnzWsQax_39018273.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e3b1f164baf406f2e26dd2140f43db66d1d3c573df424b85f2f6e41d7812094a +size 2778166 diff --git a/video/CeOwahuQic_39025179.mp4 b/video/CeOwahuQic_39025179.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c3c0615d6b2b1f8c87f905c797232f099860e3e7 --- /dev/null +++ b/video/CeOwahuQic_39025179.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3740d595df8d04549aa961421cbbb616f76080169bef5e81c68f909d4c126759 +size 2959354 diff --git a/video/CehOqpvOxG_39028837.mp4 b/video/CehOqpvOxG_39028837.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f99f8ddf5215cd608288078be1734a665273efd7 --- /dev/null +++ b/video/CehOqpvOxG_39028837.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7bc8fad1b9a6671a1cf0e24bd36bf694032ffce28e5e851e61bdf3ff9d3716cb +size 2197125 diff --git a/video/CgGjT8EG8A_39028125.mp4 b/video/CgGjT8EG8A_39028125.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..646fc003ecced7750bd090200e5d5b59cfa35675 --- /dev/null +++ b/video/CgGjT8EG8A_39028125.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f267c7a0af6837acc34e869344de9a607fd33de28d02d2103cb657dd2bb6264f +size 3614024 diff --git a/video/ChHx5ORqF0_39017042.mp4 b/video/ChHx5ORqF0_39017042.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..c51368ef9dd711c7d1203a40af48ef1aaacad0ec --- /dev/null +++ b/video/ChHx5ORqF0_39017042.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66b3a6a43a543390731b3b2f397d2fb4aacd8e977f4aef1dc8d7bb6c465f7710 +size 2782655 diff --git a/video/Ci7II4CPwm_39024637.mp4 b/video/Ci7II4CPwm_39024637.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0da6fe850da857b5dc7878e86a0515c07e9f7876 --- /dev/null +++ b/video/Ci7II4CPwm_39024637.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:efd2fd01828b4a20819ca1342fd405dccdb90fb3c3072100f6fde233b6852ac2 +size 2786638 diff --git a/video/CluvZBfrjj_39026076.mp4 b/video/CluvZBfrjj_39026076.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..99f24ce510dc943b9a74ca5d3b42e3352bcab613 --- /dev/null +++ b/video/CluvZBfrjj_39026076.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0da28f1ada6cbaf3067eac4ec0ef9bd38e75eac3c68320af598c6fd86fda286 +size 2460835 diff --git a/video/CovjSQmNOD_39027982.mp4 b/video/CovjSQmNOD_39027982.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..352bdc3d6c7341a70a306565faa8e0cc81a37997 --- /dev/null +++ b/video/CovjSQmNOD_39027982.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d8ecfba11d659332c4063aea89d413297f15d50644fa50b7dd32889e0eaa1f03 +size 2844500 diff --git a/video/Cp7HD618bd_39024921.mp4 b/video/Cp7HD618bd_39024921.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d31b91490e7d1d0ac6411ce0c528792ba7c8800c --- /dev/null +++ b/video/Cp7HD618bd_39024921.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:59b55e4aa26903ba50b97b53d00326905cd1d90a3f7d6b55bb2d8164b56ead9d +size 2571903 diff --git a/video/Cr2jEHJB9q_39024382.mp4 b/video/Cr2jEHJB9q_39024382.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..423e6fd9f81672a7cf705da6554f3cf2ec902115 --- /dev/null +++ b/video/Cr2jEHJB9q_39024382.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8fcd658604ea841d9c81a064eefe9d104dd24fe395ba20f03857baa87d0b5ffc +size 2387249 diff --git a/video/CrADAX7h23_39028109.mp4 b/video/CrADAX7h23_39028109.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..64ae59a97234c0eeb95e28188012426c76d27764 --- /dev/null +++ b/video/CrADAX7h23_39028109.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b705a0d7e1622a24680c10c0430f68bdb526f3294059e3f908353a15f811ff6 +size 2992284 diff --git a/video/CtOA9aN8fr_39018271.mp4 b/video/CtOA9aN8fr_39018271.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a4b96b35fc455d6f3cdfaa09571bc70b21220d87 --- /dev/null +++ b/video/CtOA9aN8fr_39018271.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc6f8a445a8ec4c730dfeae3a4408424d33f05ac0352900da95e68d765bfb3b7 +size 2735941 diff --git a/video/CvYBvgEUK9_39018270.mp4 b/video/CvYBvgEUK9_39018270.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..46a62891727d024bbff02034d2c0ab47902395a6 --- /dev/null +++ b/video/CvYBvgEUK9_39018270.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd95cbb10d95ef64d581cc52dc68f7b56e0abd7b8e4f236fd7f478b800759a6a +size 2840428 diff --git a/video/Cw7Agrr8GJ_39024574.mp4 b/video/Cw7Agrr8GJ_39024574.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..7c3c97c0ed66956b433850328998d3c61073562a --- /dev/null +++ b/video/Cw7Agrr8GJ_39024574.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2fea5f2d01b7100f9ec0790779d72f2bbedb6b127cf9cf8cdc1fa7b35ec94ae8 +size 71728 diff --git a/video/CwNevJONgq_39028269.mp4 b/video/CwNevJONgq_39028269.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..eeba39707ef4586646ca9f7a642560034cc79077 --- /dev/null +++ b/video/CwNevJONgq_39028269.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa23083242d288b91cf3e1da2e468cf0146837ff9f429225aacd6f94f0f95155 +size 1697006 diff --git a/video/Cy5v64DqEF_39018269.mp4 b/video/Cy5v64DqEF_39018269.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fc5f3756864ab992bd99637799ad3127bf5cc1d5 --- /dev/null +++ b/video/Cy5v64DqEF_39018269.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea941de23bb7e00f31737f856b98378ae64bbaeb43129a28d9f74795a6995389 +size 2658346 diff --git a/video/CyzZeND3LB_39028252.mp4 b/video/CyzZeND3LB_39028252.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a0d5543b6789408ac2ed74e014a89549a999ac10 --- /dev/null +++ b/video/CyzZeND3LB_39028252.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a721bd40e43d8f679811212ed96648f428656ec8beb777428c6bdaf85ad14b2a +size 2074971 diff --git a/video/CzPtBzgfae_39027658.mp4 b/video/CzPtBzgfae_39027658.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..619cd577050bdb4d6ff650996d5c7daffb8d1a30 --- /dev/null +++ b/video/CzPtBzgfae_39027658.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:808569048012c31093976e4931ce3556bbffdafce7545a2ce1e8cf33e83f5d1b +size 1257877 diff --git a/video/D4QgSWxiOb_39025647.mp4 b/video/D4QgSWxiOb_39025647.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8e8b70176739d7ff44c64ad9ec1c33b6a249f6b7 --- /dev/null +++ b/video/D4QgSWxiOb_39025647.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2975151c9003346895e2c7df4e52a2c10d0ff8c2171fdfdd7539baadbaf896d +size 2888843 diff --git a/video/D4yRz3s7UL_39025723.mp4 b/video/D4yRz3s7UL_39025723.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5eafa27c24a911c4dee8e709c02188da0c8d985d --- /dev/null +++ b/video/D4yRz3s7UL_39025723.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c57756c1f1c1e8836d6d58163e3adedb0e4f0084542ab69a33c4ba67dc94aba8 +size 2734266 diff --git a/video/D6MQrw9HFu_39025718.mp4 b/video/D6MQrw9HFu_39025718.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3eaa51dc38c30e1dc36775ddcb831b3e70334e40 --- /dev/null +++ b/video/D6MQrw9HFu_39025718.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c94425d2ecd88ea4cd7ddeb9fb10d7e3b43da4e39969d9c1535cd2c214067db0 +size 1319776 diff --git a/video/D7KJmfEDQP_39018899.mp4 b/video/D7KJmfEDQP_39018899.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f04ebf9d705dbb242fbe34480f9274fa9f0f029f --- /dev/null +++ b/video/D7KJmfEDQP_39018899.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c12d14e249bd060012cc0c077170cf12195c5ae8985f62ce8df18656f54fa591 +size 1900620 diff --git a/video/DAO2BFzMfy_39028145.mp4 b/video/DAO2BFzMfy_39028145.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..68d8917382be132ff14f46ccbf9f7eeae9759d2f 
--- /dev/null +++ b/video/DAO2BFzMfy_39028145.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58333df801b185643bb05a058c2621a3f8b47d9c6cad8306ed81fec4513376c6 +size 2781287 diff --git a/video/DG2f1rVEM5_39025850.mp4 b/video/DG2f1rVEM5_39025850.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6101035cf809002eb5d6b728a4b81702605a9099 --- /dev/null +++ b/video/DG2f1rVEM5_39025850.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:556510e59f98cbcc0d4d567d00974d55ff6641356871e80e36565edfc9e1ad42 +size 3022162 diff --git a/video/DGez4B2a6Y_39018262.mp4 b/video/DGez4B2a6Y_39018262.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5d2f95ec880b8b3aa02fb369cc426fb4315e226f --- /dev/null +++ b/video/DGez4B2a6Y_39018262.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dd2b139c2fbdb53d9fe5f86f466b731fedde61cd82c00e83966365e4d4c4ee0e +size 2188413 diff --git a/video/DJZDgMOLXQ_39018260.mp4 b/video/DJZDgMOLXQ_39018260.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0ccea6906856d40dde14d88052a776bcb1e7f802 --- /dev/null +++ b/video/DJZDgMOLXQ_39018260.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bdc40a15d26522bd2c021269b2092e415bf82368aa1d81234d1bfbc72d60ce33 +size 1391397 diff --git a/video/DKSI3bULiZ_39024890.mp4 b/video/DKSI3bULiZ_39024890.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2d4c9f318f8780e49b4357bca5459b13f25ed71c --- /dev/null +++ b/video/DKSI3bULiZ_39024890.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:41125a023c7f656c34ca3c8f46f03eceb84a8f037b03a1c24c45e7a21e121185 +size 2605902 diff --git a/video/DLJznSp6X3_39018259.mp4 b/video/DLJznSp6X3_39018259.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..17a024514ba21762cd7a09ecb71a73d8bc09aad6 --- /dev/null +++ b/video/DLJznSp6X3_39018259.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:89b94026922c745a6a441df4aaafb32cec38a7489093b2397777d41689829d7c +size 2522169 diff --git a/video/DLNOBJa7TM_39027297.mp4 b/video/DLNOBJa7TM_39027297.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d72008a2c983e0d57670f9bdab28d42bd6b9f9f9 --- /dev/null +++ b/video/DLNOBJa7TM_39027297.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6c2eb0e083c2ad0c90f26c62fddcc9553451b60f0508ea26f0a046e32ea0239 +size 2705475 diff --git a/video/DNGfCVBOnU_39028211.mp4 b/video/DNGfCVBOnU_39028211.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..572baf6ac06077366653c386a15522620d227856 --- /dev/null +++ b/video/DNGfCVBOnU_39028211.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b355024ec16170541657accf85d54deefd1cc249cbb718a67ff049418e2f797b +size 2505507 diff --git a/video/DQD0DNRjxk_39028692.mp4 b/video/DQD0DNRjxk_39028692.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1bb9f63c08d7468f91e4873c523859d87a8133b2 --- /dev/null +++ b/video/DQD0DNRjxk_39028692.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6f14d865cde0c1cb7e040ad40dc325aa9271481571cecc8490a098ec47fa155b +size 1099476 diff --git a/video/DT7n4F2bbP_39027020.mp4 b/video/DT7n4F2bbP_39027020.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..45ed2effd3197fc4c3a2b6a21bb5fbbb01a5b094 --- /dev/null +++ b/video/DT7n4F2bbP_39027020.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:572f7d052bcccc4fd81ef1b6643a85f4d6978c37e48c5082ce76b295a1d5bd90 +size 2546529 diff --git a/video/DUHX779C5q_39026790.mp4 b/video/DUHX779C5q_39026790.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b087cf834baaf25703f8aa245ed0ef2a83bc1c58 --- /dev/null +++ b/video/DUHX779C5q_39026790.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9c39902b43c5f76c51fe1b1dd38be83c4e373f9c095447acd0f1a65568800877 +size 2432339 diff --git a/video/DV15UbHCY1_39028694.mp4 b/video/DV15UbHCY1_39028694.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f11ec0861e09eec86f8291f71300f966a465dc64 --- /dev/null +++ b/video/DV15UbHCY1_39028694.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3f3efe35f0c29bf2de687c0cf50e244c57a2bb3d689e4e5b4c6e9739dd244aa1 +size 2752169 diff --git a/video/DX5GUwMFFb_39028077.mp4 b/video/DX5GUwMFFb_39028077.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4e5a651ac1d21c047fa48ca3b336cec6f3082bda --- /dev/null +++ b/video/DX5GUwMFFb_39028077.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:55ffb8fffeac20bf8032f0c6cf673fc8109eda9c35d01a1c292b23e2711a4eef +size 2336412 diff --git a/video/DZUzOKE6og_39018655.mp4 b/video/DZUzOKE6og_39018655.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3f95f15f17a8b06cc2a8f83e3f1e48e6dec40f83 --- /dev/null +++ b/video/DZUzOKE6og_39018655.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a0feb3c381560ab9af9d1e7e8edf4517c747632ae53b158f5beae1b1dbddef2 +size 2742956 diff --git a/video/DdKdr4kqxh_39025863.mp4 b/video/DdKdr4kqxh_39025863.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..188c2fc9a4f98770c4beafca529becfe55a91de1 --- /dev/null +++ b/video/DdKdr4kqxh_39025863.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68c1dd65523c45fd2d1b2fab5f3f10c64eaab1518da23ffeef76ba4af0932340 +size 2973070 diff --git a/video/DfPtC8uSot_39018980.mp4 b/video/DfPtC8uSot_39018980.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0fca2040dfdb47718f01a7cf8e1703d8c9a72ae1 --- /dev/null +++ b/video/DfPtC8uSot_39018980.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f09823866468d73819d2177523f09c4f1932b205836747431dc17892e781295d +size 2280370 diff --git a/video/Diq6urt3lS_39018252.mp4 b/video/Diq6urt3lS_39018252.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a6844bef7b35fe0daa3cabfb14f92368d0cca98f --- /dev/null +++ b/video/Diq6urt3lS_39018252.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c1d4e66645e66950fd399a92b560c39032bc5c3caf01fd2b26dfc386e2540a85 +size 2362859 diff --git a/video/DmD1wboID9_39018248.mp4 b/video/DmD1wboID9_39018248.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b9fd8d9e9591b30241b7fbcef31529670ad43971 --- /dev/null +++ b/video/DmD1wboID9_39018248.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d71426cfb470ced4a33429945835228bfa6740bb70babf611520c5268c331ff2 +size 2821710 diff --git a/video/Dnc3paMqDE_39017322.mp4 b/video/Dnc3paMqDE_39017322.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..acca3fb7c95f4916c2b7284450e725eec7552029 --- /dev/null +++ b/video/Dnc3paMqDE_39017322.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:0e35911fceb6ee516f4aeb957a6176ac0455ba539f35953e5a71b4f6ec203349 +size 2707917 diff --git a/video/Dokew2u49m_39025939.mp4 b/video/Dokew2u49m_39025939.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..65daa7ec3c9924d2cfdd9a18eea181e410d46a6c --- /dev/null +++ b/video/Dokew2u49m_39025939.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2995620d27c4516eac545c5732639828a93c15b89dba250f506103ed47ca13fa +size 1349417 diff --git a/video/DpByqSbdhI_39027209.mp4 b/video/DpByqSbdhI_39027209.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4e55ba9746be8fe85e7e29adb3599fc425751a38 --- /dev/null +++ b/video/DpByqSbdhI_39027209.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2954823855118d2fd16a6046f3aac355cddca35916bea2370b6b0b08b9dc2c11 +size 2131992 diff --git a/video/DpP5F3UfKw_39025698.mp4 b/video/DpP5F3UfKw_39025698.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..adfeb4da57217afd675cbf2c7de3cc8a64882714 --- /dev/null +++ b/video/DpP5F3UfKw_39025698.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad2aa2f57d2ca28ffe74d0e403efad0f7f83bcefc1c2a6ecf3ce07265d84445e +size 1717253 diff --git a/video/DqiggGDOmA_39024764.mp4 b/video/DqiggGDOmA_39024764.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..abd26551885d0ea442763bf0b5eb5a15ca901ce2 --- /dev/null +++ b/video/DqiggGDOmA_39024764.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:78730281e2bf0558a2d30ceb25c328aee502213df18d0eb86d1bef47c1a31b91 +size 2417944 diff --git a/video/DqziS8DG4M_39018244.mp4 b/video/DqziS8DG4M_39018244.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4d951f43437699f5f971cc42011d81d1be40fafb --- /dev/null +++ b/video/DqziS8DG4M_39018244.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:02df5306be81aa2d504c619b2b94c832c4c6793f9cce5dcb5ed9d85df787b5d4 +size 2358531 diff --git a/video/DrhZneqz4n_39018245.mp4 b/video/DrhZneqz4n_39018245.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7324c3c8bf8d1f7c78cc2fdaa673e3c239b9f93f --- /dev/null +++ b/video/DrhZneqz4n_39018245.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23cde88984956a4ee82fe23b33455774797ef2b5ded83ce278bc9fdbafb90bfe +size 71484 diff --git a/video/DztaBt4wP5_39028504.mp4 b/video/DztaBt4wP5_39028504.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..46c16fadbb57964036d8630e63b255588b51aeaa --- /dev/null +++ b/video/DztaBt4wP5_39028504.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:588f9046783ca3cfb5e39b1d406c181e34926aa45e66f64eaff9f6151384b7f6 +size 2719527 diff --git a/video/E1NxN5QMOE_39018241.mp4 b/video/E1NxN5QMOE_39018241.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7bd9fee93e1f06c04f759c5c53d9a6c180e175a0 --- /dev/null +++ b/video/E1NxN5QMOE_39018241.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c9d3b2edcf0638dfc742c005595e0e05a489217e3999feb7aa32275c7a93ba8 +size 2929898 diff --git a/video/E1nBLrEaJo_39027743.mp4 b/video/E1nBLrEaJo_39027743.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..50b802e536a2f0d896e1ce1edc72537f55aef565 --- /dev/null +++ b/video/E1nBLrEaJo_39027743.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88702a5788bc28d61e544a49aa06b7f9e989d9aee6f97c3e63e1413b0b192c60 +size 
2505783 diff --git a/video/E2BYPreuU8_39026842.mp4 b/video/E2BYPreuU8_39026842.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8445536309bff961ee32c35ccca751dd506e14db --- /dev/null +++ b/video/E2BYPreuU8_39026842.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:57a7af1cc506c13c3649fad67d6f80f3cfaa0285aa50f2a4aeb148d1d0b7f4aa +size 2338979 diff --git a/video/E34AlVLN0v_39018628.mp4 b/video/E34AlVLN0v_39018628.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8a0556065205aa6ab2abf7eb569c6a6ed6fca9dd --- /dev/null +++ b/video/E34AlVLN0v_39018628.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3a53bb03c3e17f5ae453c58ef9263ada0d84932353a97029bb99d6515c49ed5 +size 1992383 diff --git a/video/E3ZMsqdO0D_39025794.mp4 b/video/E3ZMsqdO0D_39025794.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6fd3d1a3ca4097c329da8e2d68d175b568950187 --- /dev/null +++ b/video/E3ZMsqdO0D_39025794.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:413b2f4ca8a4ceaf370ba9196bba1eb863f253f9ac3f531f67f6f8b3b2d0bfe1 +size 2866942 diff --git a/video/E6ZodZu0HQ_39026740.mp4 b/video/E6ZodZu0HQ_39026740.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c6118cf41d979e378d27fd86793e1b931264a838 --- /dev/null +++ b/video/E6ZodZu0HQ_39026740.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:12cf7f6db59a0a28387ad9539ec6c19ddd7100b0838ccfdff9435ae609753609 +size 2957276 diff --git a/video/E7en5DyO2G_39026701.mp4 b/video/E7en5DyO2G_39026701.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..986c29cadb86f0b3d1d94ffdbb09a1ed69b725fb --- /dev/null +++ b/video/E7en5DyO2G_39026701.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4768f5c01b8d4104aeb4bb6ef05a39d53e59e37dd24ab593a67da899ca5bf138 +size 2385163 diff --git a/video/E7fZOoiEKl_39025415.mp4 b/video/E7fZOoiEKl_39025415.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fd75671ffe1f8a0916c5eeca2bc28eb154185bdf --- /dev/null +++ b/video/E7fZOoiEKl_39025415.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:52f3a2f0f0879760e4b6963d5deace876d7ebd7ca9cb4f228620d3fa30194692 +size 3535611 diff --git a/video/E8wDxddIqU_39028735.mp4 b/video/E8wDxddIqU_39028735.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..092941901fa9f5dc09b875d653e7a80e7148515e --- /dev/null +++ b/video/E8wDxddIqU_39028735.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:01d67e3648353590a6fbdc20c097169cc13739e52eef3f019fa821948d5b4974 +size 2669315 diff --git a/video/EAbNopo3os_39028718.mp4 b/video/EAbNopo3os_39028718.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..97a8dc1d402e8549a6fc52677009910938c93174 --- /dev/null +++ b/video/EAbNopo3os_39028718.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:48177cdc183eb2de42b15b2953dcf5c2e13a8a85ec29556168155c71acde96f7 +size 3020600 diff --git a/video/EC9Hfi9V3k_39026900.mp4 b/video/EC9Hfi9V3k_39026900.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..39a8ea06ef135af105513f792b9276803a589076 --- /dev/null +++ b/video/EC9Hfi9V3k_39026900.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f7d1de83556e15b2501f682c55a3c03e41fed3722bd497dee3ef7541ac77500b +size 2551518 diff --git a/video/EDPxCjXzSb_39018658.mp4 
b/video/EDPxCjXzSb_39018658.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ee2c14641d4d174454f397ce4413f9cab335e656 --- /dev/null +++ b/video/EDPxCjXzSb_39018658.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4dade36d2855d641f1e4edb00c1dbd58a7f59d527ed05d41d80406e09e55ae61 +size 3121691 diff --git a/video/EH2O3h7sBI_39018236.mp4 b/video/EH2O3h7sBI_39018236.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c2e5398775acc35aab06b44c790028967e73167c --- /dev/null +++ b/video/EH2O3h7sBI_39018236.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e216f9d9d256aae5aa594248ba6f5c1a93a61be77b446283c1b6f943cb0838a +size 2970174 diff --git a/video/EHXyeImux0_39026771.mp4 b/video/EHXyeImux0_39026771.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ace11f532d48c19d01917f55a3803fc0c9302833 --- /dev/null +++ b/video/EHXyeImux0_39026771.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e896f624c7a2eedd763b9fd42f0b84c4d5ea20cc9c517c0e4f93334245fbce7 +size 2699319 diff --git a/video/EHg5GDnyq1_39019258.mp4 b/video/EHg5GDnyq1_39019258.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..29b94283ed602c79006e085850a0ad3c3a475cfa --- /dev/null +++ b/video/EHg5GDnyq1_39019258.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a22e08a803ece6a7d11da7e0d5f67301bbc19617fbb79dfbcc465f5b8390095b +size 2571985 diff --git a/video/EHrvRNs2Y0_39018235.mp4 b/video/EHrvRNs2Y0_39018235.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7291c82a7e43ffa16ade6d04fa9d1bfb2422e7dc --- /dev/null +++ b/video/EHrvRNs2Y0_39018235.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2dafe56e16a5761ff5183c6cf89a881ca7642fa9dbe19a4aab644a97f3530cc5 +size 1663056 diff --git a/video/EJPIzl7mgc_39017887.mp4 b/video/EJPIzl7mgc_39017887.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6cb2d3dc1c06b9ee4aec69c5082b09eb237eb8ff --- /dev/null +++ b/video/EJPIzl7mgc_39017887.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7ef466e0c33370886aadc556b7998a3be6f1244ddede3d331e682581e3dc334c +size 2514518 diff --git a/video/EJZfcKXdiT_39025697.mp4 b/video/EJZfcKXdiT_39025697.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f8cf8761ed4039d00873a5b561894c6b4549f811 --- /dev/null +++ b/video/EJZfcKXdiT_39025697.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a6dc2de9d24c93ab05c2fa80a9d348e0011200087e50617f9fb348c61d0e472d +size 1369202 diff --git a/video/EK1tyHcb3W_39025833.mp4 b/video/EK1tyHcb3W_39025833.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f9e06996d7cbd50ed8f791cbf3b6d56a4098bd3c --- /dev/null +++ b/video/EK1tyHcb3W_39025833.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d204dd07cf29c373822efd198f09e86fd22f81f472c1e4c0484b53f61665736a +size 2537702 diff --git a/video/EKdk4vxKO4_39028491.mp4 b/video/EKdk4vxKO4_39028491.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..76ba38ae40e6a99ee8979fdaa5eb566ae03c7853 --- /dev/null +++ b/video/EKdk4vxKO4_39028491.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aee2cb82df84f13ce7bf4daf2417b6579531e9e547cdff1447569a492491ec2c +size 3085387 diff --git a/video/ELnxXc8pik_39027014.mp4 b/video/ELnxXc8pik_39027014.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..5d9d72854757b0ad404b9e31a4548cff9b41a318 --- /dev/null +++ b/video/ELnxXc8pik_39027014.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:496c25e01db2d957b26891cca8161927751e9e0c48f36694a30bdd20cd311727 +size 2881656 diff --git a/video/EMkrwJY2de_39024538.mp4 b/video/EMkrwJY2de_39024538.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8f7bdfd8bfb2b6ee8e2fda0177aa5ba57f13d5fe --- /dev/null +++ b/video/EMkrwJY2de_39024538.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bfbcc1f649a7d55c8e7a67be879d896f86ac1f6225b5eab2d47a1c44f6710148 +size 2336926 diff --git a/video/EMstukR5J4_39027169.mp4 b/video/EMstukR5J4_39027169.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ec1fa3e25254a1a06ffb7642e41e327d596cb56e --- /dev/null +++ b/video/EMstukR5J4_39027169.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19f381ce47c34f64537a3d78a4ae9b87a038176e1039978fa80d884e66e5f6e5 +size 2316856 diff --git a/video/ENLsNDfys0_39028207.mp4 b/video/ENLsNDfys0_39028207.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..130990645347ff45373a9cbe49375c8d7ffd4b0b --- /dev/null +++ b/video/ENLsNDfys0_39028207.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aea2f773af4dd621b1676b91fbfd23c0ceda74fc3d87fa702e04b7dba96ccb23 +size 3077201 diff --git a/video/ENlubvb262_39027418.mp4 b/video/ENlubvb262_39027418.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b998cb29944f3331ae23ccc7fa57e2fad1f172e8 --- /dev/null +++ b/video/ENlubvb262_39027418.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d34282c3da085ace5412294a14de8960d16cedc02c911da4daa31b9ddb90c44d +size 2008942 diff --git a/video/EQHQzRJy75_39026939.mp4 b/video/EQHQzRJy75_39026939.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c57b94c5e403ac86a3fb0c847b4017c3d34b348e --- /dev/null +++ b/video/EQHQzRJy75_39026939.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f93a2f216cee91a6aa0767e902cebc0c6222f80f56ca168290d1a63edfccac9 +size 2669421 diff --git a/video/ES0Gj1KVUk_39027177.mp4 b/video/ES0Gj1KVUk_39027177.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..29af450c562a5952ee6644e39e3748b867891925 --- /dev/null +++ b/video/ES0Gj1KVUk_39027177.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:35235f0a556b981d55cd3e55e38dd52c5498344452ef05bb6d6d59a34ccdb27a +size 1518149 diff --git a/video/EVw8Jh5Et9_39027885.mp4 b/video/EVw8Jh5Et9_39027885.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f807eb5540fae61d409789f632f511fe2c54b5d0 --- /dev/null +++ b/video/EVw8Jh5Et9_39027885.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc5c6bc3b49c2ba2d83f68c278a8f397e08e72168006aec9ec61af55e7a0de9b +size 2478619 diff --git a/video/EXitynZhYn_39018233.mp4 b/video/EXitynZhYn_39018233.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e394ead15f3d48f059d3ec43618ec7ef318e94e8 --- /dev/null +++ b/video/EXitynZhYn_39018233.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c6cfb1944808b579381e2f6b419b4a6c9008145a2f80162323d938b5771cc576 +size 2999259 diff --git a/video/EXuv4tVNa3_39026513.mp4 b/video/EXuv4tVNa3_39026513.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..fdb1f4290821188963279d90582ea1fe95eaa18e --- /dev/null +++ b/video/EXuv4tVNa3_39026513.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4fad102c928af61f354dea3e038e0d2706c8b0c08c97ff6a91a13a018cb11218 +size 3037813 diff --git a/video/EY2agT920S_39027794.mp4 b/video/EY2agT920S_39027794.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7edef40872441b0e8fadf5f47ef9f91df205b4c8 --- /dev/null +++ b/video/EY2agT920S_39027794.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8762f792a1aedadaab0f5df1c80c06b58722b0e7e8544170fce0fad0152d42d +size 2145714 diff --git a/video/EanCFCwAjM_39017208.mp4 b/video/EanCFCwAjM_39017208.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d4873963c171fdd669de2d913cdfbdc62cd361c3 --- /dev/null +++ b/video/EanCFCwAjM_39017208.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0fa841333e5ca39a792e409141d3fb369bf23d8b4756769b2d51d37faf7e2e63 +size 2273556 diff --git a/video/EdXW71LvKE_39026217.mp4 b/video/EdXW71LvKE_39026217.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8a1f5bdd4825bd50ff89263332dc53e4921b3c85 --- /dev/null +++ b/video/EdXW71LvKE_39026217.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0e2ffd479f8b8fe022e2d5b1b63f88ac68c59392f4765b3f6db1438480ff31e4 +size 2204909 diff --git a/video/EeXcOYf3Lg_39024669.mp4 b/video/EeXcOYf3Lg_39024669.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..eb8f55b772fab3d19cc5900a98d1f66125b360a9 --- /dev/null +++ b/video/EeXcOYf3Lg_39024669.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d10c955f43d10b8faa6d927d195dd6a212c6835abc9d63649fb13cdbad4fcef +size 2262214 diff --git a/video/EehS4erXWB_39025035.mp4 b/video/EehS4erXWB_39025035.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bcf46d297461bc7186bf78d7caf22e5cf6d42025 --- /dev/null +++ b/video/EehS4erXWB_39025035.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6d087569eb341fbe443653bb5bcb41a4ef220293c1d6c68065e60e5b83a7a0a +size 2530477 diff --git a/video/EfpZNpkrm2_39025442.mp4 b/video/EfpZNpkrm2_39025442.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f3785d91ac18d27d47da5053148e3387259ae4ad --- /dev/null +++ b/video/EfpZNpkrm2_39025442.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ae6706894a6b13b7afd90e463a62a833ec7e300b92e5a7f5c8d832429cafc2a +size 2397846 diff --git a/video/EhrzQwsV4K_39018229.mp4 b/video/EhrzQwsV4K_39018229.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..182c65572845b5b3903ee67e1f4537b12a805f5e --- /dev/null +++ b/video/EhrzQwsV4K_39018229.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da255d72e08c5a2d1025dae9c6a5245800f6fd20f4bb4df8547d59eebd68cb45 +size 3403115 diff --git a/video/Ehsd856Ltb_39028192.mp4 b/video/Ehsd856Ltb_39028192.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2f0a35ba6ee9b44a4e04641f127061e28a746a0c --- /dev/null +++ b/video/Ehsd856Ltb_39028192.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa77360a1f425b0906c1c7a2311bd4ff01df0646f607a1b180f905634012971a +size 2924851 diff --git a/video/EjKNSErSMJ_39025856.mp4 b/video/EjKNSErSMJ_39025856.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..00460ae67b4364856512874a2c2eb8b6b97558cb --- /dev/null +++ b/video/EjKNSErSMJ_39025856.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e910c61e44015fac38401ab79021d60839cb4ad3e4ca403ee59f0961a8164fcd +size 2433778 diff --git a/video/Ejg4d4FVrs_39027340.mp4 b/video/Ejg4d4FVrs_39027340.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5edf3988264541b563790099fc9f74c0d3333b84 --- /dev/null +++ b/video/Ejg4d4FVrs_39027340.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:832df4b04e941751111b2472221cf850fc2604eaef7279f4deea5b42a847b2a9 +size 2542854 diff --git a/video/EmQSOi1X2f_39019089.mp4 b/video/EmQSOi1X2f_39019089.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..653c3011b6514fdc8a41fb395cf192705a1fd801 --- /dev/null +++ b/video/EmQSOi1X2f_39019089.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:85bec6ab475c20bb4f8ef0b6e9ac8bb5a0e72c5126d5024ce889536d00892db5 +size 2545272 diff --git a/video/EnXJfQqy0K_39019003.mp4 b/video/EnXJfQqy0K_39019003.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2b290d37e80204c1a98b75a56cdc16003f3ca464 --- /dev/null +++ b/video/EnXJfQqy0K_39019003.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:432bdb438e4ada9f51c6fc964b61b96ef6bf52d50a7477cf1fb81f8ec6ceee58 +size 2448802 diff --git a/video/Eok6HbcSRI_39024882.mp4 b/video/Eok6HbcSRI_39024882.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5d52d5e7c1e7220fd76fef0d0d6abb87e4fcff78 --- /dev/null +++ b/video/Eok6HbcSRI_39024882.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:300596b565f286f40b0aaaa51228a9eae967675919cfe06606b02f8137a13814 +size 2648149 diff --git a/video/EpVe8jAjdx_39018907.mp4 b/video/EpVe8jAjdx_39018907.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a6f625bb5c1fdba7b29b863e79ae3bedd686944b --- /dev/null +++ b/video/EpVe8jAjdx_39018907.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:36a24765e8446a5c340a29109373522b30b3d5a3d841542e644ef8cf0264e26b +size 1397725 diff --git a/video/EpYnZpDpsQ_39018862.mp4 b/video/EpYnZpDpsQ_39018862.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7f5f728461a7d573765aab66d6a12848466ab263 --- /dev/null +++ b/video/EpYnZpDpsQ_39018862.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:927673dc6b5b0f38d3191315d39f422964cf702dedfedab6a56e2413bbd81ec8 +size 1888153 diff --git a/video/EpusiLXfNd_39025947.mp4 b/video/EpusiLXfNd_39025947.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..81ada71caeaffa388ed71792256114ba69e1e14a --- /dev/null +++ b/video/EpusiLXfNd_39025947.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:616687eb422d2b90ff08350d796a4eebc8b6a0f45e02e4707131339a6ebd7387 +size 2287106 diff --git a/video/EriR6Ec69a_39018226.mp4 b/video/EriR6Ec69a_39018226.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4749e56573892e6847e686a094742b08cb149311 --- /dev/null +++ b/video/EriR6Ec69a_39018226.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eaac8ec608e45827ff7bf56a477a49a34ae5292b184d823f826189443cb2a72e +size 717829 diff --git a/video/Eu80DGuOcs_39027995.mp4 b/video/Eu80DGuOcs_39027995.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..bb8f21de5d7ba9558c19d9a771d28722c9e5259c --- /dev/null +++ b/video/Eu80DGuOcs_39027995.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b888b564fa518d5a7988b598fe0c5a145aff1189010b77a1bdcaa8dcad345ad4 +size 3202647 diff --git a/video/EwWpAPzcay_39024423.mp4 b/video/EwWpAPzcay_39024423.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..99778c3209b60bf23ec3be88d77e11a9952e90fd --- /dev/null +++ b/video/EwWpAPzcay_39024423.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30aaad837ce9ac18fa0662e3b6795a7fef111b1e5e803d88b54efbebb656c7d6 +size 2578929 diff --git a/video/ExeIyx6U0Z_39027163.mp4 b/video/ExeIyx6U0Z_39027163.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..564d1a0a9270a8b045c005a9f336cb9146c0f98e --- /dev/null +++ b/video/ExeIyx6U0Z_39027163.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16ebcf9681d99a77291d129f651fc0945d1cedefcd90f0a9984cd9d30b1f08d5 +size 996407 diff --git a/video/F6L23TNlFW_39028224.mp4 b/video/F6L23TNlFW_39028224.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..247e25c602a10472ebc9189473b4f3f0b712a82a --- /dev/null +++ b/video/F6L23TNlFW_39028224.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f404d2aef697e6fee149d8663996c7faeb2fc35f04890bd21b1ab6fd6d39531d +size 2498932 diff --git a/video/F738WY1Xm4_39027853.mp4 b/video/F738WY1Xm4_39027853.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f315bdf6f8d08460822951a92ac00458f97868c5 --- /dev/null +++ b/video/F738WY1Xm4_39027853.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b41e391ec82893a935eca97c1a9c89d1427185b8e4163ae00497b7ceed2024b +size 1918714 diff --git a/video/F76bwRSLeK_39018221.mp4 b/video/F76bwRSLeK_39018221.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b219a3afd4451e0cec4b8d2ade6431e8d7a70beb --- /dev/null +++ b/video/F76bwRSLeK_39018221.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77eca3a837365ddf54912b1ddfedc65918201a34c05f09680b0a55d1e23cdf80 +size 1752848 diff --git a/video/F8DWffLkYG_39027250.mp4 b/video/F8DWffLkYG_39027250.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4755ffd20be75b9cb191566a279a537c21b3a932 --- /dev/null +++ b/video/F8DWffLkYG_39027250.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bbb307abbd2dfd71b3f7f93b3d5a46b72155183eb1dcae506ac25090f2e4fe4a +size 3100351 diff --git a/video/F8aSOovlEP_39027468.mp4 b/video/F8aSOovlEP_39027468.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4a153c041bfb8278a1eb4c2d1fa803fd91b4dd61 --- /dev/null +++ b/video/F8aSOovlEP_39027468.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c42537fc76ecd060d0c1699753e1edf3ec575dd0076ba5b4a99a1f0b62baa9e5 +size 2413891 diff --git a/video/F9NDzHQtOl_39027113.mp4 b/video/F9NDzHQtOl_39027113.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bba5fcd051bab390feadd41ba075ee856d617bcf --- /dev/null +++ b/video/F9NDzHQtOl_39027113.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9aceee80e15bada9f661524db51684fb2e1a68ec3aee40071bd71b6aac41168d +size 3660571 diff --git a/video/FAuFpGeLmx_39026542.mp4 b/video/FAuFpGeLmx_39026542.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..a54ed397475d7fb0a8c38fb986d71f12f00e6d11 --- /dev/null +++ b/video/FAuFpGeLmx_39026542.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e9bfd96b6f5195631adb34db10055d73f77c6cf47e9ea097ee407f01bd29ead4 +size 2651289 diff --git a/video/FBMsBdH0yz_39025521.mp4 b/video/FBMsBdH0yz_39025521.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..990195c3134f81f0d4d2263edb928824341efe74 --- /dev/null +++ b/video/FBMsBdH0yz_39025521.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14806550f0b3e5eb13b02afb80ee3f204034023906be0a922ff81bed03f35853 +size 2894198 diff --git a/video/FDb2JQZsFH_39018218.mp4 b/video/FDb2JQZsFH_39018218.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b6981f73b069b3be33361c96c4f0f724f9ade712 --- /dev/null +++ b/video/FDb2JQZsFH_39018218.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bfefcf06a2b52e3bc58da67ed2dc3a35e8185b5b40b645225c69ee2e03c4d449 +size 2002173 diff --git a/video/FFW6rPz48Z_39027315.mp4 b/video/FFW6rPz48Z_39027315.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e8be94cd4e861210fe4a89dce1907cb32d6b52d3 --- /dev/null +++ b/video/FFW6rPz48Z_39027315.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74f8716f47e2f0fc5e5088aa676e97958a19a45872b3c677f60bc0a0bfe4343d +size 2019426 diff --git a/video/FGTDe6EA0B_39026720.mp4 b/video/FGTDe6EA0B_39026720.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d5aeb6e749b979445db40700f92f99baba02bd40 --- /dev/null +++ b/video/FGTDe6EA0B_39026720.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d9f1bf044ec2016a42a6d5f8083c36488bbc240de092fa905fa0a05f1784ea9 +size 2423089 diff --git a/video/FHqAzWl2wE_39018217.mp4 b/video/FHqAzWl2wE_39018217.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4d94f2552083032a50fd7c5aaf8f147e7ed57dbb --- /dev/null +++ b/video/FHqAzWl2wE_39018217.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6f1801f11cf850066989d2f077c4ebb12f42cd40f948ff56957c00a07b4c535a +size 295039 diff --git a/video/FIplmUWdm3_39019234.mp4 b/video/FIplmUWdm3_39019234.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a66545c2ab4878eeaa89d85de0779b4800a9fe4a --- /dev/null +++ b/video/FIplmUWdm3_39019234.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4395e3e6e98f1fddf879db8fa574d7dd76ad9a5a188c57dab15fb5d83bafe4a7 +size 1965827 diff --git a/video/FIs87Iro9j_39026954.mp4 b/video/FIs87Iro9j_39026954.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a041c0619abb5c0490b906149b9b513292f191ef --- /dev/null +++ b/video/FIs87Iro9j_39026954.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0987f21b52ef6954fc99856ddd9955c2259255cdcfe5a0408a322393e707fd3b +size 2054601 diff --git a/video/FJlrSZBMCD_39025671.mp4 b/video/FJlrSZBMCD_39025671.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f7212db45faaed94f6088e4577f4eaacaa3a5777 --- /dev/null +++ b/video/FJlrSZBMCD_39025671.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a7d5140b0b5c0cdce0c973e2577eb4b823c0beb8eea778fab9eb4a5c28a74e3 +size 2455801 diff --git a/video/FLNnlfBGMo_39027766.mp4 b/video/FLNnlfBGMo_39027766.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..ae36cef98138ab24df72717b97a7f0db716a6e7d --- /dev/null +++ b/video/FLNnlfBGMo_39027766.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7705ef07bef7d59c35f8f4c47cd077ed7d773f621bef3de45b0545cd9eb2773 +size 2181352 diff --git a/video/FMMF1a9ifL_39017185.mp4 b/video/FMMF1a9ifL_39017185.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e9f7cfb932887044a4e1a1b1b94f58b1dcf12eff --- /dev/null +++ b/video/FMMF1a9ifL_39017185.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0152be71bbc31694eb206d5bb3302f990ea8a57a2bb4d1e5be607e8d572d408e +size 2074784 diff --git a/video/FNOBf6JM7r_39028827.mp4 b/video/FNOBf6JM7r_39028827.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..781de433e7cd9d0ebbe841707ce7a5ca74cefec7 --- /dev/null +++ b/video/FNOBf6JM7r_39028827.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f62649714612e396b5afd5c85d3dbc105ff076150787c53f75a1ea87dc6a7b0 +size 2670301 diff --git a/video/FNtsZLwkGr_39027454.mp4 b/video/FNtsZLwkGr_39027454.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7cfbaa887a8c89cb8dbc480c8ae0ea094d043f1e --- /dev/null +++ b/video/FNtsZLwkGr_39027454.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3980f2e2971967ebbd604f2a7a92e9a37179eee0f5f814916413e6769c932735 +size 2537803 diff --git a/video/FRCHDhbxZF_39018730.mp4 b/video/FRCHDhbxZF_39018730.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..47dc0405c92ff4ced7bc9f937c3f728fa60993e4 --- /dev/null +++ b/video/FRCHDhbxZF_39018730.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3434625a319631cba9fd4dc2055b4f34050e6d9e70749874b6d9b88086675b9d +size 1569157 diff --git a/video/FTpKGuxEfy_39028674.mp4 b/video/FTpKGuxEfy_39028674.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8979827ce7523ae2408ec612361ad8f8790bfdaf --- /dev/null +++ b/video/FTpKGuxEfy_39028674.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a4f00f0577975de1de78da9dc8de71d0a0d4e1eda3879f0f93ea96374fd91d7c +size 2834448 diff --git a/video/FVgCwcwpJw_39026806.mp4 b/video/FVgCwcwpJw_39026806.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..29e762b15e15cd1f788d52ca330dacde7efd8b1f --- /dev/null +++ b/video/FVgCwcwpJw_39026806.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:034ed46fbc92c002c5cf5ebf1c27ed75346dce4ffe93eea665f4397a496ab5b8 +size 1918079 diff --git a/video/FVhmnvqnsI_39018790.mp4 b/video/FVhmnvqnsI_39018790.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6ca1c033f545db0695dc8db33277ea68fd3a5ec1 --- /dev/null +++ b/video/FVhmnvqnsI_39018790.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:36f6d1421af63c083037d1824a3d9999f6e32320b8417aa51882701e053319eb +size 2920859 diff --git a/video/FXJDcriMYH_39028101.mp4 b/video/FXJDcriMYH_39028101.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ffca4496cba8c2c96792535e7bd3087af671b2e6 --- /dev/null +++ b/video/FXJDcriMYH_39028101.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c79ba29568fca7da0aa2f86fc8699e6f29b2dcea09ccb734ef609865459c799 +size 3822885 diff --git a/video/FY6vPtITtE_39025555.mp4 b/video/FY6vPtITtE_39025555.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..bf7210a7bef963f983e0fca1a7be0fad27843cac --- /dev/null +++ b/video/FY6vPtITtE_39025555.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d09adddfaf78eef0d07a79a2bb0d1ed8998fbb9b67017a4fd3ca7fab64a536ce +size 1876937 diff --git a/video/FaNhyXY6Y1_39024939.mp4 b/video/FaNhyXY6Y1_39024939.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b58a028297696ed2671e41713c9ebe5cc6d6b433 --- /dev/null +++ b/video/FaNhyXY6Y1_39024939.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dc22a2295745bd75a34abb73cf7f587b13b29bc1b243658a85333525b22d4083 +size 2433028 diff --git a/video/FddFxi08J3_39017112.mp4 b/video/FddFxi08J3_39017112.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..538fbbf64503085ddcb7203b71431b8c78ab1ab2 --- /dev/null +++ b/video/FddFxi08J3_39017112.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aba42db7a6c0d5a4bfdbdb1ee24c0df9ec43892344d23760981aac428c775917 +size 2508412 diff --git a/video/Feiz5HtCD0_39018207.mp4 b/video/Feiz5HtCD0_39018207.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9cea709520664b0285dffe4626405df0b338aa31 --- /dev/null +++ b/video/Feiz5HtCD0_39018207.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b85bd40af5c5fa01099049d802c1668577065db2dc98adbaad67145552dd7036 +size 2567450 diff --git a/video/Ffb30OVVCa_39027889.mp4 b/video/Ffb30OVVCa_39027889.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..55a217ead1b6623e8e67318771631653da07a9a7 --- /dev/null +++ b/video/Ffb30OVVCa_39027889.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3060ae4ebfd669a915aa942e75b77fdd17f80a98303b764d1304f49c4ec9ce0c +size 2148194 diff --git a/video/FisyQfoJCm_39024505.mp4 b/video/FisyQfoJCm_39024505.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..23b9cb187bceaa4855499b1cdc4613c9522b4537 --- /dev/null +++ b/video/FisyQfoJCm_39024505.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2f92e0d1a316fbcb95f20fee35d123d2fc26ce21126cc93418906ac8d9e6d1a3 +size 2722573 diff --git a/video/FlcdW7NPRY_39026967.mp4 b/video/FlcdW7NPRY_39026967.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a46b1e48db7a30f1caa046ccdc1e2dc825ce04e1 --- /dev/null +++ b/video/FlcdW7NPRY_39026967.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3420e5a53c843177cf433abcdf3aac1021403446e184cdf9b4869feb8957d32 +size 2389424 diff --git a/video/FmNoFIImZG_39026671.mp4 b/video/FmNoFIImZG_39026671.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..62fbc33d47742fda763f79db3e0b5586056d39e0 --- /dev/null +++ b/video/FmNoFIImZG_39026671.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:926b480b256c84ecf0c8fac6de799aa87795f604a8089d21e4ff74b5f03a620d +size 2327735 diff --git a/video/Fp3JVz5XE7_39025791.mp4 b/video/Fp3JVz5XE7_39025791.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..900859d761f6f5ad179cc5bf404aed9316c9fa0e --- /dev/null +++ b/video/Fp3JVz5XE7_39025791.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:94d095b82926e36b3345949d821fa163b0e45f933868dcfc44f52731ce358944 +size 1466433 diff --git a/video/FqWyzyErVT_39028790.mp4 b/video/FqWyzyErVT_39028790.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..059f9389ed605887e90ad57f204f74ccffc20c66 --- /dev/null +++ b/video/FqWyzyErVT_39028790.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:41a4dc54f1a0e543987b52da3c17a9e87e62bd3a31bd0bc2455643d6b6e61b31 +size 2568443 diff --git a/video/FsA0OSsdzJ_39027930.mp4 b/video/FsA0OSsdzJ_39027930.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9ff9b81c085a3f667750a0ab864ccc8aa7d6716c --- /dev/null +++ b/video/FsA0OSsdzJ_39027930.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d159766e923cd9ed19624d762fc7fbb58a983d55707ccedbf1c8744568d9f35 +size 3495091 diff --git a/video/FsdB3I9Y24_39025888.mp4 b/video/FsdB3I9Y24_39025888.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1bcb7d103eb970d0b704d16c2e1a78c5ffbf06da --- /dev/null +++ b/video/FsdB3I9Y24_39025888.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:25dcc9c8a0cd9114328cc84dbceabd99f4b0e2e9018bd60a0d47a84f5e270e17 +size 2807256 diff --git a/video/FuTfZK7PK3_39027531.mp4 b/video/FuTfZK7PK3_39027531.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..37c12d4866e995105843e7d46ce168f40223a8c9 --- /dev/null +++ b/video/FuTfZK7PK3_39027531.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b3551b85358d1b2ab44616eb502fe99253cd9b748dc6e536bbb6e38fefa3ab0 +size 2138493 diff --git a/video/FvK2noilxT_39018201.mp4 b/video/FvK2noilxT_39018201.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..22cefe03a10fae2e7bf69e6ce9fa6ab2452ec639 --- /dev/null +++ b/video/FvK2noilxT_39018201.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc0c1619c6cca1085a49c526b80fa016a774653b99646cfb8d0918b69b10ce84 +size 653597 diff --git a/video/FwxOHl0BEl_39025191.mp4 b/video/FwxOHl0BEl_39025191.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..886764ea486a7be3b44f189e6b8e8dc79cd5c905 --- /dev/null +++ b/video/FwxOHl0BEl_39025191.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7add176864eef9649efd05edc333605b03fd9f7e9dbe6634189ea8f484c51cb1 +size 2660597 diff --git a/video/Fx2SbBgcte_39019174.mp4 b/video/Fx2SbBgcte_39019174.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3afb12bf0ac33092ba6eb83195c76d9f2c927fb5 --- /dev/null +++ b/video/Fx2SbBgcte_39019174.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a839e9f60cbda500f6009e03e29145c427423aed90b8912392e160363460b75 +size 1916004 diff --git a/video/G0LfcMiRkc_39026568.mp4 b/video/G0LfcMiRkc_39026568.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..041a3d9037f88a6c1e18510cfd4f4ac11b8a6b8f --- /dev/null +++ b/video/G0LfcMiRkc_39026568.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b61fe924df5b32e920b0fac44e801b27bf3c4bc6fec71434f1a27bb942f0e125 +size 2618695 diff --git a/video/G0v0TxX01N_39024885.mp4 b/video/G0v0TxX01N_39024885.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5e25aecaacb32946f8c23c1baeec87d7b7270af1 --- /dev/null +++ b/video/G0v0TxX01N_39024885.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b0ae80f3c82df9acd7978702d207aeeb41508987dc6b7d79eaaccfb572c027a +size 2366748 diff --git a/video/G0yxFmP87g_39025522.mp4 b/video/G0yxFmP87g_39025522.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..7699d9c4fbce8fe1f9679c3718eab333e8094c2d --- /dev/null +++ b/video/G0yxFmP87g_39025522.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bdb7ac6cb395ccc947ce773f68744bbee5f6cd8f3ceac7be8650d278cc557d62 +size 2719136 diff --git a/video/G1Hlubz1fR_39019039.mp4 b/video/G1Hlubz1fR_39019039.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2892d00b538c9372131e3f0374217fb2c6b6a53d --- /dev/null +++ b/video/G1Hlubz1fR_39019039.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0313573c0eac398e904eee1a2e2e317f7193595089860376d4ea8d3048c804ae +size 2259628 diff --git a/video/G24fOpC3JE_39024583.mp4 b/video/G24fOpC3JE_39024583.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..25878b558e2860fcdcf18e3c3b2a1686a1f0b295 --- /dev/null +++ b/video/G24fOpC3JE_39024583.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f990b39a7d553d317cb3430d732f0695640d581b4f9c92290727c39a1a20839 +size 2824170 diff --git a/video/G2cG3mQqop_39018200.mp4 b/video/G2cG3mQqop_39018200.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7befbd7faeaf80d2c6464def385d447c4cba4a9b --- /dev/null +++ b/video/G2cG3mQqop_39018200.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5307706191c710c522137a0092dd67b8223dc448d1078330b80653c9edb63916 +size 8306 diff --git a/video/G4vFNmraxj_39028064.mp4 b/video/G4vFNmraxj_39028064.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3d3b1a5c7485a2eb8118e4331986b63d8c81bf8a --- /dev/null +++ b/video/G4vFNmraxj_39028064.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61d6a8183ba7f65a6220c8d94e644840fbde0bf4de87594161713d4fe39f8178 +size 1968080 diff --git a/video/G5lMFOtFHa_39027936.mp4 b/video/G5lMFOtFHa_39027936.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f6b5e6d80f42416dbe5db4b10970a2ca167513ed --- /dev/null +++ b/video/G5lMFOtFHa_39027936.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ab2dc4ee1e724fc8f1f6a42b633d477f55832fe4a1ccdbf245aac9cbfb7a358 +size 2744565 diff --git a/video/G7QS68ICPJ_39028089.mp4 b/video/G7QS68ICPJ_39028089.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..819478d403adc9024622f2da5ac53b398f44e0c5 --- /dev/null +++ b/video/G7QS68ICPJ_39028089.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e574923f0a07fe892d7123aa6882e65962cb981b59f9cb67ea8b8da6d1cec55 +size 2171761 diff --git a/video/G8aS48B9bm_39028777.mp4 b/video/G8aS48B9bm_39028777.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fc90aac1736a8b50f77085657acebfc076bbe822 --- /dev/null +++ b/video/G8aS48B9bm_39028777.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0236cce2cd6ee73fe59d111a7ed5d1a42f12dae31d44b171c9794ae3fad57efd +size 1493727 diff --git a/video/G99BSV9pt5_39026878.mp4 b/video/G99BSV9pt5_39026878.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9640bb2e8e5a137b84932b44446621315a9fe2cd --- /dev/null +++ b/video/G99BSV9pt5_39026878.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:26b2b9ab4e1286941081e65473155a5ae952d6e4e50e3a203ab46d9ed28abebc +size 1189306 diff --git a/video/G9OJUgKo4B_39025264.mp4 b/video/G9OJUgKo4B_39025264.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2ead9b1818aef8c267a172dff01afd6de0ddd11b 
--- /dev/null +++ b/video/G9OJUgKo4B_39025264.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d0eb58a9581930d1a2f8b0c478bc9bfff26eb481a51bc6f87a41d3d971f94647 +size 2588448 diff --git a/video/GCmmy4At6i_39027132.mp4 b/video/GCmmy4At6i_39027132.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5483d4d8d376c612360dab8e409deb43ad587a5e --- /dev/null +++ b/video/GCmmy4At6i_39027132.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc44e7c08a25790f0e99ffc92b3b05e9f6b8b91b12ba35dad434f17ee5c4f08b +size 2179619 diff --git a/video/GDNZajKrML_39027682.mp4 b/video/GDNZajKrML_39027682.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bf4bdaa279e5d8cb51f8f94141f4d5aaf4dff98d --- /dev/null +++ b/video/GDNZajKrML_39027682.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6e5915fe042ed4364f1567302c0c83d23675f67c2ff645a685de804d98994b86 +size 2212075 diff --git a/video/GEcwtMk1uA_39017057.mp4 b/video/GEcwtMk1uA_39017057.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9084f6fe5f23fcf1634319f7137e7f6a6550a969 --- /dev/null +++ b/video/GEcwtMk1uA_39017057.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a6fbf59defa846e85e3d43f96d83186589c015a182c5a9c7279f46960e15f88e +size 2604810 diff --git a/video/GJMYvWzjE1_39024555.mp4 b/video/GJMYvWzjE1_39024555.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a1edd322d93b2e85fe61a195d21729a67b93cacb --- /dev/null +++ b/video/GJMYvWzjE1_39024555.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:92b077fde5fa2810334226b101aaf8a9d697ee5d9eabbbd58386f8243d6cded7 +size 2985696 diff --git a/video/GLUIuli3Sm_39024487.mp4 b/video/GLUIuli3Sm_39024487.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e91076859d62fb93c9894b1ec210525994d75e52 --- /dev/null +++ b/video/GLUIuli3Sm_39024487.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e04128b286259c5c31e04f5dac36296d307c6f00d36f23b24752b5fd3b6387ac +size 2125018 diff --git a/video/GN2qbxZlni_39025507.mp4 b/video/GN2qbxZlni_39025507.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a35567928defbeef09d5211f256a04efe7cffd56 --- /dev/null +++ b/video/GN2qbxZlni_39025507.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3875e188cb541377d48abbbbdddf14f86fe8179b99835a09067f0d185f798707 +size 2520914 diff --git a/video/GN921JHCRw_39019272.mp4 b/video/GN921JHCRw_39019272.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..57fcbe3bee2c1da71ceb62927eefbafbb4850b92 --- /dev/null +++ b/video/GN921JHCRw_39019272.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:679d6c9289242a2590791dc11421b680ae660a1b1079ea4403e1e766b34d08bd +size 2798577 diff --git a/video/GNhrGRCerd_39028083.mp4 b/video/GNhrGRCerd_39028083.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..045b4d792795179c405eb7e1e0e828a4bf69be62 --- /dev/null +++ b/video/GNhrGRCerd_39028083.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d57897442666ceb5719f782d7b58b1dd1726e8fa4066493b582304f591ebe00 +size 1555332 diff --git a/video/GOgKhunkfw_39026436.mp4 b/video/GOgKhunkfw_39026436.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..773ea3e0851cdbbccbf4d2ba6611d962d72ee696 --- /dev/null +++ b/video/GOgKhunkfw_39026436.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:bb778597348ecdc3033fc5b5764d95fcdc13afb3de074f9a2d7aa89515a3d4d7 +size 2414603 diff --git a/video/GPKTIktA0k_39019282.mp4 b/video/GPKTIktA0k_39019282.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cf0cb4c85ed0fb5f6f0bb13427ac84cdc7ed0fa1 --- /dev/null +++ b/video/GPKTIktA0k_39019282.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:491d8383151655845f0f13ccdca1557aa3df5de613d7dea2a586cb4a044a524a +size 1867663 diff --git a/video/GQNvvQquO0_39025096.mp4 b/video/GQNvvQquO0_39025096.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..71d1581b54f833cd8f3ecf4a2119b000b60db407 --- /dev/null +++ b/video/GQNvvQquO0_39025096.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c21bacb7b40061d82cd6f30a30c7debaac770c4518f56197b4a3ed4f0586001f +size 2425611 diff --git a/video/GRmQjLzaPM_39025586.mp4 b/video/GRmQjLzaPM_39025586.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..25d5ef68764d316170fc65ac50c470bc4d6743e8 --- /dev/null +++ b/video/GRmQjLzaPM_39025586.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83a58dce74117b107e7b76e6432095fe4a8869bbc2913ac9f0b74b916572b376 +size 2162550 diff --git a/video/GVgRbz8MvG_39027448.mp4 b/video/GVgRbz8MvG_39027448.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1e57c99c9f37231942f36f6edb0dedd3df6c0925 --- /dev/null +++ b/video/GVgRbz8MvG_39027448.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51c8082ff3b944d1a483594999a1586c58de535c6b02e23aace0a05e9f37e232 +size 2643826 diff --git a/video/GXtmuiVrOM_39018191.mp4 b/video/GXtmuiVrOM_39018191.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a533c157b18dd977e84b52eecf102aca4ca65b6a --- /dev/null +++ b/video/GXtmuiVrOM_39018191.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e6761b09d3a1977a3194bab39cf1f63c5405f7ee76a3c20ed6f9371f6ad6427 +size 2542509 diff --git a/video/GYd5AfZaor_39026941.mp4 b/video/GYd5AfZaor_39026941.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fc96aa7f2cce196d3ed4e0f33c7004cc22d1738d --- /dev/null +++ b/video/GYd5AfZaor_39026941.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8c3bdb45cf498179b6570fc633fdcd5fb29e8252e2be83d770222e0539b9d59c +size 2103400 diff --git a/video/GZ6AcZwA8r_39018931.mp4 b/video/GZ6AcZwA8r_39018931.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3a4ad26865ea8833c9dee4bb67f7e117a4ba739d --- /dev/null +++ b/video/GZ6AcZwA8r_39018931.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:babc174d67cad12dcb595afe50d0172c6bdc3fc993c8ba5f092d1e3863876956 +size 2464763 diff --git a/video/GaLCLvJaoF_39019152.mp4 b/video/GaLCLvJaoF_39019152.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a8ad7b2d958bf5eee2a7e5c078c653b2d2f8803d --- /dev/null +++ b/video/GaLCLvJaoF_39019152.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a03aff7f7bb1a81cfdc4dd0449fff2e30331a06a0969bc91c6db1d0dbd2ca478 +size 3393745 diff --git a/video/Gb0mXhn5h3_39026869.mp4 b/video/Gb0mXhn5h3_39026869.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..59d0ebad15c92eabc94cfb0565ddc875063aeb6f --- /dev/null +++ b/video/Gb0mXhn5h3_39026869.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:3f04943f49116da91789bfc2c898722fbc0ad204d60c61fb1c60c7f8cef8d44a +size 2991483 diff --git a/video/GbqzN9HiUC_39025112.mp4 b/video/GbqzN9HiUC_39025112.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1a5668bccdfdf40d3e77f1aba6b3f10df1802a62 --- /dev/null +++ b/video/GbqzN9HiUC_39025112.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6fdd3d1f122238495748a2b0fa4583a2f2f46f0f88f3f968ad63886e09b0512e +size 2303585 diff --git a/video/Gg7cXo3S8l_39018189.mp4 b/video/Gg7cXo3S8l_39018189.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2357433d23bf3648b01034a8b274bc1830f2a2bf --- /dev/null +++ b/video/Gg7cXo3S8l_39018189.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e66f1c70dc377ad187bfce70685470f62f03cba594d0fa10779f75a5a189cd28 +size 2426511 diff --git a/video/GkHXBasQwm_39026536.mp4 b/video/GkHXBasQwm_39026536.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..77980d699af40dbdf4b720808b650be747b1edd0 --- /dev/null +++ b/video/GkHXBasQwm_39026536.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:94d4616ce6b20950cd63157ccb033b4e9736df64eb2d92fad82eef904fbdbb13 +size 2609183 diff --git a/video/GkJOCga62u_39018639.mp4 b/video/GkJOCga62u_39018639.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..70a3bcd3681885c1d1e735215a6c6b02b2fe1e37 --- /dev/null +++ b/video/GkJOCga62u_39018639.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea43c2018bc9dd10ca4ad7ddb04581db46d5a1ba0cd5d10f8c1efb6290be4d15 +size 2089378 diff --git a/video/GkJbXpd3wM_39027451.mp4 b/video/GkJbXpd3wM_39027451.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8c30a0b9cc0e326681392dbe94179e42cd769055 --- /dev/null +++ b/video/GkJbXpd3wM_39027451.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5bc9085157d96b917f2576ae5725753d8a5854f9b48b65c93fc5c2cef2be300f +size 2249775 diff --git a/video/GkJiNn2QDF_39018737.mp4 b/video/GkJiNn2QDF_39018737.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ca61b07d0c65a1beea5abe9de3ddc9913af54c5b --- /dev/null +++ b/video/GkJiNn2QDF_39018737.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:909b89fbf79c06d89d0781b3c7a29fe8ed63ec9725074f6ab819c2c11bf1070d +size 2356716 diff --git a/video/GkzrVxs9LS_39025884.mp4 b/video/GkzrVxs9LS_39025884.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5d289c6956fec294051701eb6b95b63b0f077fb9 --- /dev/null +++ b/video/GkzrVxs9LS_39025884.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:520929d456eebcd205a0448a61122c65730601ecb2fea75049d01bafcc827d02 +size 3034810 diff --git a/video/GlD9Juva5V_39027202.mp4 b/video/GlD9Juva5V_39027202.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5d035b5f4a9b6eac7328d36c6fc7f24c711449a2 --- /dev/null +++ b/video/GlD9Juva5V_39027202.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:59e4cfcb6fc6a55d5d279d7d17d7b9cb5dc99ad6a07e77e40f57c0be4c7b497a +size 2837900 diff --git a/video/GlXUxNI6TN_39026563.mp4 b/video/GlXUxNI6TN_39026563.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6698a83cf2614eb7033b12fe813e8bbfe0e85d1a --- /dev/null +++ b/video/GlXUxNI6TN_39026563.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:75329534699ca87f2801ec1daa1d4c5e51e967bc76f102cd0839ea3a7e762eec +size 
319023 diff --git a/video/GlpawHh80l_39018186.mp4 b/video/GlpawHh80l_39018186.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..42f8bff5992fb30f26bef2b989ab862880ad5fa3 --- /dev/null +++ b/video/GlpawHh80l_39018186.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9865c3e6a43efb35ac7f539de1e02547e5019a014de9e0109544c1314525992a +size 1667252 diff --git a/video/Glt37xoU7e_39027840.mp4 b/video/Glt37xoU7e_39027840.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3c94a540e6d9e3290bdaf8ea4dcb59bbee861d01 --- /dev/null +++ b/video/Glt37xoU7e_39027840.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bd634dc6a279b81301f93c22b425f298894494ca97b7119500e67be6e16183e6 +size 828180 diff --git a/video/GnaFrZRHPf_39028028.mp4 b/video/GnaFrZRHPf_39028028.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e657ce5361a97c22d756777259300c5a167cf105 --- /dev/null +++ b/video/GnaFrZRHPf_39028028.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3445ce6bf6d404d120788a54b09906820c93e23b5c069e43ec567392dac46dbc +size 1980117 diff --git a/video/GqefKjw1OR_39026397.mp4 b/video/GqefKjw1OR_39026397.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f9ae514f7f164e72766fe4236fc9bb73d4e0e751 --- /dev/null +++ b/video/GqefKjw1OR_39026397.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af48479471e4990e52f767505d8ff0f2683c4d81fe7c0d1873b1686dae4bbad5 +size 2428249 diff --git a/video/GrMczQGTlA_39027109.mp4 b/video/GrMczQGTlA_39027109.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7917168e8322065e8e49b9a7c28b372a226d40cc --- /dev/null +++ b/video/GrMczQGTlA_39027109.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ddfd448feb2fb7eae7670a0e893630eee5f0bfaea7c460e32cde81aca4b218e +size 1404041 diff --git a/video/Grd7yzFm5V_39027189.mp4 b/video/Grd7yzFm5V_39027189.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..19b7bed293d0eec570dd4e91491a36c0bef5b664 --- /dev/null +++ b/video/Grd7yzFm5V_39027189.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53b0092503b6ab8e88435f42c8e035fa33e1a52ea4b86e5f18ef33be09ad6c0e +size 2323159 diff --git a/video/GruDNzQ4ux_39018802.mp4 b/video/GruDNzQ4ux_39018802.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d2ab831c4af5ab581ec4fa2b46a14b233525c09a --- /dev/null +++ b/video/GruDNzQ4ux_39018802.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:96540da0288faa43190c45263135663396f8311e19840b3deac524b069bf6d3e +size 2860567 diff --git a/video/GruuYVTGXV_39025071.mp4 b/video/GruuYVTGXV_39025071.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..19d63aae8f13911df512eff69d8ec08180c9d4af --- /dev/null +++ b/video/GruuYVTGXV_39025071.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:97460884e4c2ca9949c2d018ea445d6c9665ad94d828db4e436c445770998083 +size 2345347 diff --git a/video/GuY0zB2xVU_39027722.mp4 b/video/GuY0zB2xVU_39027722.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b7dc8e173b4b94971e1733c409aab44b1ace5155 --- /dev/null +++ b/video/GuY0zB2xVU_39027722.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:526f28eec23a7ef5bc03f8a65e8a52240ba2e9fcb3e90e0cc54e4b006c70c989 +size 1810212 diff --git a/video/GvQU54uA7u_39028739.mp4 b/video/GvQU54uA7u_39028739.mp4 
new file mode 100644 index 0000000000000000000000000000000000000000..7c29ccecec597e99a26c74ae4f76c5857abbe448 --- /dev/null +++ b/video/GvQU54uA7u_39028739.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d5a4172f4e99b3a501d52d4debf16156fab1fb89686be0e65f72d083dc37814b +size 2662937 diff --git a/video/GxwnQ8sxkL_39024687.mp4 b/video/GxwnQ8sxkL_39024687.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f9911b78221829e0814f1655edb53a204c0b84b8 --- /dev/null +++ b/video/GxwnQ8sxkL_39024687.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61c2b97afff882e5c23f955928639260d16eab2c6572aae16d9f2bed55f41adb +size 2622043 diff --git a/video/GzNaCp6Vcg_39018182.mp4 b/video/GzNaCp6Vcg_39018182.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d06eadebce2f4db66728428b85a3411cb407e55d --- /dev/null +++ b/video/GzNaCp6Vcg_39018182.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:668370e13e35a9e3e4c64a39647d81a06cf050a4611b92261d1ef2adc6bc395d +size 2988381 diff --git a/video/GzNhzX9kVa_39018181.mp4 b/video/GzNhzX9kVa_39018181.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e479ecfad03f96d4e976826ffc125d2cb66c71b3 --- /dev/null +++ b/video/GzNhzX9kVa_39018181.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:462272d6c6992b1fd54717891a1453f59b61e30c471f60f4cfc4ffa728b6a8e3 +size 2436619 diff --git a/video/H3UayAQWoE_39017152.mp4 b/video/H3UayAQWoE_39017152.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ab04db07f075e32d065af1aa3cccabd6a4fd7c96 --- /dev/null +++ b/video/H3UayAQWoE_39017152.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c81d71fdb3d726dcb847525f7a5bd09bff3114fee2cd55a78b0400bc4c3225f +size 2956196 diff --git a/video/H3at5y8VFW_39027933.mp4 b/video/H3at5y8VFW_39027933.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1e1cf2091db68f92107f09851c76fc25ceda9eea --- /dev/null +++ b/video/H3at5y8VFW_39027933.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d8d8d89fc018b6d374696cdcd59fd97cabd2bc281bcec095efa64cfe1f5b2ab +size 2688246 diff --git a/video/H5z0XqEX57_39028457.mp4 b/video/H5z0XqEX57_39028457.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8a2495630c226411151c9cd47ed0c7a2da999d6b --- /dev/null +++ b/video/H5z0XqEX57_39028457.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6339a956cabc987bf1b6ed249f4ba4e0a7d3ca483aaafbdcceb97dd9190cfdf0 +size 2680911 diff --git a/video/H7SaaqfCUi_39028181.mp4 b/video/H7SaaqfCUi_39028181.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..49a51704986c738244a6e450be52223409e9fd5a --- /dev/null +++ b/video/H7SaaqfCUi_39028181.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a4b33ce854299e6f9a96d7644ce965dcf9ef46863c4a13d36b00e522afb2551 +size 2938210 diff --git a/video/H7qVZ0Zu8E_39026885.mp4 b/video/H7qVZ0Zu8E_39026885.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e72619e536507022284b74efe36ee5bebfb5a594 --- /dev/null +++ b/video/H7qVZ0Zu8E_39026885.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eff0a7b2f546c28969049f6765290b9027b24aa5d4fa642991e04455943af593 +size 1364425 diff --git a/video/HAcaANQNMK_39026811.mp4 b/video/HAcaANQNMK_39026811.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..20d570d4f3c22d60de37f92a8ac12485f1bdac52 --- /dev/null +++ b/video/HAcaANQNMK_39026811.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1bc7aaeb67cf7219a60c3b7ddfc1c7635c5feff07cf4c4bcb2bde92575e5be8e +size 2017092 diff --git a/video/HC0msxE3sf_39018923.mp4 b/video/HC0msxE3sf_39018923.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8b71e8afd5fdc8de9a20553aaaefb039a83ef636 --- /dev/null +++ b/video/HC0msxE3sf_39018923.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:258d3f33deaf25c24e711422541c6504efe2007e6dec74ce39839e3ff9022936 +size 433875 diff --git a/video/HCTikT7LS4_39025167.mp4 b/video/HCTikT7LS4_39025167.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e4720f4c1bce80fb578e58175a3bb115020dc80a --- /dev/null +++ b/video/HCTikT7LS4_39025167.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d636ef632f589166898b5f82d8e253cde611a948e348a7e1fff3cf1eb82dcb4e +size 2304365 diff --git a/video/HDVsiUHQ1w_39027385.mp4 b/video/HDVsiUHQ1w_39027385.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6b4b47bde4d4e1646ae5caf0893788bcc239fe59 --- /dev/null +++ b/video/HDVsiUHQ1w_39027385.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:11f4956bc9fa4ae91e1665a89b1af0acdc55f0ecf975cc60dda9bdc91534ddc0 +size 2467757 diff --git a/video/HFS800reZK_39026197.mp4 b/video/HFS800reZK_39026197.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b3abde5491c7632ca09ed5e63a073078dd9462d8 --- /dev/null +++ b/video/HFS800reZK_39026197.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6defdcb6782ef4b2d8938f996098e4d5435d158fc89948e91c498f3042eeb951 +size 3202385 diff --git a/video/HGNTcy4eEp_39028247.mp4 b/video/HGNTcy4eEp_39028247.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..af1ac922f90240a1be8cc95e00ee9c30bc502fc2 --- /dev/null +++ b/video/HGNTcy4eEp_39028247.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2244f2802c1e272a67b7c2eabce2714d32a0de57145f4398703e910e5f64c5b3 +size 2800567 diff --git a/video/HHbRxoDTxE_39018178.mp4 b/video/HHbRxoDTxE_39018178.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b6b3926b89ffe69c06c01bef349c1aac9c5a0295 --- /dev/null +++ b/video/HHbRxoDTxE_39018178.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:878870662c820ee3801be86a7d51dffb8229d6b0622c4920c3768be977edc295 +size 2200057 diff --git a/video/HQgHCVZiHw_39024823.mp4 b/video/HQgHCVZiHw_39024823.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6e541ce7487bfe49d2062fcf2fac0390c8724489 --- /dev/null +++ b/video/HQgHCVZiHw_39024823.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c45b8257f9ad2f4c2e828c4be59e1b6601908a8c3bc9c1662296f4ac0d31a56e +size 3153204 diff --git a/video/HRkyLbBRHI_39018174.mp4 b/video/HRkyLbBRHI_39018174.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1597a6cbea5a76a012a0aa5f25f62915a9312ab4 --- /dev/null +++ b/video/HRkyLbBRHI_39018174.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5e3d440df2dfe3c5507857eafcb9ce2f5916d4384d3d637d90990557c92fb9a +size 2588813 diff --git a/video/HRnSVflpgt_39025549.mp4 b/video/HRnSVflpgt_39025549.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..a3746667761856e996ff5e9e950080eb1c42cae2 --- /dev/null +++ b/video/HRnSVflpgt_39025549.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9e0ec23d625da82fc7e394d0115ef72edb6318ebe8b6d441e61429e742c2ee8 +size 3209263 diff --git a/video/HSJOt2hyDf_39025235.mp4 b/video/HSJOt2hyDf_39025235.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b04c68f63122ea533520c10f40288fafe0dded2c --- /dev/null +++ b/video/HSJOt2hyDf_39025235.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a64b5f39be4aa3383abcbdb7e1c21d9b3846a0a6e61a2fe96aac40d58ff129fc +size 2701326 diff --git a/video/HShs7q1Njh_39025308.mp4 b/video/HShs7q1Njh_39025308.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..90b6532ecc513905b8d0d7843f6e3c46a034f833 --- /dev/null +++ b/video/HShs7q1Njh_39025308.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0716dc2d0d5c9c99a17b2e2991fda91e59034f77bebfbfb6024dfd08d00714ab +size 2594690 diff --git a/video/HT2dAhh4uV_39018172.mp4 b/video/HT2dAhh4uV_39018172.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c1afa14a99dab27e16057091408b94b8a5179512 --- /dev/null +++ b/video/HT2dAhh4uV_39018172.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71501f8bce072ba01526ed54940b0b35990744a913bb612a9fafbf61faf79250 +size 2769899 diff --git a/video/HTLJptF7qM_39026335.mp4 b/video/HTLJptF7qM_39026335.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8a0b15275beeb4eaac2111985e7e193081c5a513 --- /dev/null +++ b/video/HTLJptF7qM_39026335.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3aa7fa4e15d70e096b7437ecbec397941309fe6f59da80eed827a89e6f5ffeb2 +size 2440566 diff --git a/video/HUxtJcQpDS_39025032.mp4 b/video/HUxtJcQpDS_39025032.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e3eebcc4ff560181e3eabdb7309d8134ef88af66 --- /dev/null +++ b/video/HUxtJcQpDS_39025032.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b39d6960815001511e2672dfadc18810ddd5cad5775c2051fc44a3e17b9361cd +size 2462437 diff --git a/video/HXWTXXtHNl_39018170.mp4 b/video/HXWTXXtHNl_39018170.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1ab400fb7f85c12aa039c42bab46cec1e720b1c1 --- /dev/null +++ b/video/HXWTXXtHNl_39018170.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:45c2d6f29ff391f9684a9242e80faaf479103d26114af690d4d1d77f3ae559b6 +size 2605363 diff --git a/video/HXc5aXeoc8_39018169.mp4 b/video/HXc5aXeoc8_39018169.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6feadd91699ffbda16f9826a3adfeb165b789896 --- /dev/null +++ b/video/HXc5aXeoc8_39018169.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c03592ab68a92bc6b6a1e371dc6e9bd74633fd3b88626f7797b5bc06f1f9f66f +size 2732258 diff --git a/video/HXdAfK488A_39025317.mp4 b/video/HXdAfK488A_39025317.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a64b51d5125bfd6215025df33d90b51ab401f1b5 --- /dev/null +++ b/video/HXdAfK488A_39025317.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c9c18285ccc82bba6c7a87a23a2f4e6c77991bdf527adcef37acec56b03812d1 +size 2696137 diff --git a/video/HYa3eu8scG_39025003.mp4 b/video/HYa3eu8scG_39025003.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..f1e938128b9bbf858483f6638be6bc06da3e3f05 --- /dev/null +++ b/video/HYa3eu8scG_39025003.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d2d96983ce915f929024f5c42050d6ed685413db803b2d1984475e38e8a0ef3 +size 3265068 diff --git a/video/HZ3S17EI0o_39018167.mp4 b/video/HZ3S17EI0o_39018167.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6e2fc6cc45a3c8b51105a295e83f97620a935129 --- /dev/null +++ b/video/HZ3S17EI0o_39018167.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b21cfd49ba7917289e371c8284cf0aee1bf75522eb33a18da8a9178ec10bdb2e +size 2316602 diff --git a/video/HZndRcfyNI_39018166.mp4 b/video/HZndRcfyNI_39018166.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1748e094b55db90768018b5ae5d37e48abc8c385 --- /dev/null +++ b/video/HZndRcfyNI_39018166.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e1c43b10ba76fc6851fb804c10329c0481a31d9885faecfa7e20169fe2c11a9 +size 2137934 diff --git a/video/HbIBqn3grD_39025160.mp4 b/video/HbIBqn3grD_39025160.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c3bd2ecebd04ba3d6027a1577585931cb325228b --- /dev/null +++ b/video/HbIBqn3grD_39025160.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d946ea3452bed1c09d1bdd3db28f12209511f2f1e527c48b0c296d3a8fe7bbdd +size 2510657 diff --git a/video/HcqnhqoXS3_39027551.mp4 b/video/HcqnhqoXS3_39027551.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..49163c67337068d6ac9ec041870fe938d7969dc0 --- /dev/null +++ b/video/HcqnhqoXS3_39027551.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4d37d32a4def4b1a30504c64327f55bf9283930ef3d951f297c91f324e85bec +size 2395214 diff --git a/video/HeJ1cBAgiV_39028514.mp4 b/video/HeJ1cBAgiV_39028514.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..905a571fdcea233f57ada7b3639ac68265c51e40 --- /dev/null +++ b/video/HeJ1cBAgiV_39028514.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a330faf5f573023cd1ed091a621919b060d2b9aeb593cb2bc063d14f6b8f708a +size 2213107 diff --git a/video/Hew2JSDycr_39026180.mp4 b/video/Hew2JSDycr_39026180.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5d2eeb06564fa91d5666d281f80f24ef4cf73730 --- /dev/null +++ b/video/Hew2JSDycr_39026180.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68a97325340872edd49f8919e553f43b3ee3798f84c66c1c925791fcc305c4de +size 2843031 diff --git a/video/HfQF8LoLhs_39028011.mp4 b/video/HfQF8LoLhs_39028011.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fc84d611354a8f5385296444bb77256ad3809845 --- /dev/null +++ b/video/HfQF8LoLhs_39028011.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:92eecf155daa1eb6a5a4caae7c3b821b2e1021890210935083e0034c4d1ccfc9 +size 2569405 diff --git a/video/HfSJlBRkKJ_39024428.mp4 b/video/HfSJlBRkKJ_39024428.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7a53677b363e12f85c1573cefb9b7dbbcba2885d --- /dev/null +++ b/video/HfSJlBRkKJ_39024428.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:043f5e42759e88e28239fb9bf2a9378bbcf18742a05d5283b03733d508c8004a +size 2476888 diff --git a/video/HfpV6u0kbX_39027110.mp4 b/video/HfpV6u0kbX_39027110.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..5d69c008c8931f162a165560ebce87be953e345a --- /dev/null +++ b/video/HfpV6u0kbX_39027110.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c01573f42e59499635cff0a09b8c334d71fb917d0cf27965a99c044348ac6369 +size 2830003 diff --git a/video/HiYMiZYwkw_39018159.mp4 b/video/HiYMiZYwkw_39018159.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..84361b45bcd987a164e9d5f35e82bf52bf2a3ce8 --- /dev/null +++ b/video/HiYMiZYwkw_39018159.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ed365499491762437aa2211d1294e5ed7c6db84af0c7afa3f75e09e5297b572 +size 2172520 diff --git a/video/HkC4OYee3Q_39027782.mp4 b/video/HkC4OYee3Q_39027782.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..39ed170e6e235b47872b61337b36849e361446fa --- /dev/null +++ b/video/HkC4OYee3Q_39027782.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6e14df4a67ed7fe1e6961b701e8ceaff1dda3b579f4c2e01f0746b15e7b5636 +size 1841365 diff --git a/video/Hlcek7AYgP_39026999.mp4 b/video/Hlcek7AYgP_39026999.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..64f83b998b0971d578d58946a477ab23a38ae028 --- /dev/null +++ b/video/Hlcek7AYgP_39026999.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7ae4b1e507ea102c5b2a5a12cd64dd3541adf4a4662ae9c1ef989b3d9414bdfc +size 3070601 diff --git a/video/HmCmxbCpp2_39028600.mp4 b/video/HmCmxbCpp2_39028600.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ae3a757df9759b37c0772eaa85485e412664be1a --- /dev/null +++ b/video/HmCmxbCpp2_39028600.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4bd5e0f3c0dc3699813f0b3c2594b5735da3105f746954cdeb3d635611b19144 +size 45944 diff --git a/video/HmMSBhMAw4_39026274.mp4 b/video/HmMSBhMAw4_39026274.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6c2e0ea831a18037567879d2b6c7d164d289828d --- /dev/null +++ b/video/HmMSBhMAw4_39026274.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:08b5d7345832b2b646d04c8e71b0db210fa30c533a97d61fd3bc2b3de8f8c358 +size 2355115 diff --git a/video/HpN4xeDJQF_39025359.mp4 b/video/HpN4xeDJQF_39025359.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4ef7cf1ae5eccebb5f94b985d1ce1c3a7e4cfc41 --- /dev/null +++ b/video/HpN4xeDJQF_39025359.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dea1277f4b13b6ba15556690ab2801614c214a0abf8ce919e2904c64953656b4 +size 3065703 diff --git a/video/HtlfNbyfOn_39025898.mp4 b/video/HtlfNbyfOn_39025898.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b9b388aed18ec58a3194f89b0a6f11b6a8bf1307 --- /dev/null +++ b/video/HtlfNbyfOn_39025898.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a3f1c5f36e6bb9501a928ab1a72923f995aed76007bd930a4e13346f94ae0d4 +size 2090558 diff --git a/video/HwO1mNluoL_39024747.mp4 b/video/HwO1mNluoL_39024747.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..52d61f01caf19f5a64a09c7c68726acf91918351 --- /dev/null +++ b/video/HwO1mNluoL_39024747.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ddb8985e2d8238703ef57fb883e5de8e4acd4f2aec4d58757700984efb34a48a +size 2548363 diff --git a/video/HxGdbAmYYr_39027352.mp4 b/video/HxGdbAmYYr_39027352.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3e232c2fdedbb396685c84ef2525a11ed26fe95a 
--- /dev/null +++ b/video/HxGdbAmYYr_39027352.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5695118b1a10e6e700d23b1faa55d0f2da862f4d090c2fd81a81dff5bd7a8ff1 +size 3269021 diff --git a/video/HzANl2unCB_39025294.mp4 b/video/HzANl2unCB_39025294.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6f1eba7d481b13eb5b3a1048dea55ed001d61a9b --- /dev/null +++ b/video/HzANl2unCB_39025294.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e2ef2df9ab1616bc43744b52b6af1775e2b3d5eb734148deaadca19677283e4 +size 2362835 diff --git a/video/I1quoTXZzc_39019232.mp4 b/video/I1quoTXZzc_39019232.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2d44cbf8e33725bc546e4bbdefa1134d839ef1b7 --- /dev/null +++ b/video/I1quoTXZzc_39019232.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:761c4293c3233b9afc9f17377b0d3cc3aafacb89440e337ac340c11d16b32e2f +size 2434244 diff --git a/video/I2mIxuXA72_39018157.mp4 b/video/I2mIxuXA72_39018157.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ef30f204bfd0d16b76c933a1d21e9a79766b2869 --- /dev/null +++ b/video/I2mIxuXA72_39018157.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e6aa35d397a5cadacd564c21f5deb3765145471e8ada76004aa192716c9916a +size 3104240 diff --git a/video/I3IuclVLFZ_39027259.mp4 b/video/I3IuclVLFZ_39027259.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5bc5375bd53816520680a9cd7931db4e705b358f --- /dev/null +++ b/video/I3IuclVLFZ_39027259.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e442db07d873209b7d733ff23d0e7546c37fb72dc738ba3711e507889d9e79c4 +size 2387196 diff --git a/video/I6tBNcJE2F_39026769.mp4 b/video/I6tBNcJE2F_39026769.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c25117c7f22c35d835624264de32637dfd577e82 --- /dev/null +++ b/video/I6tBNcJE2F_39026769.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:694c9c6ae12b87e07c107bc225460494c1298fb457ccf56f07abebf9fcbd493e +size 2574970 diff --git a/video/I90ypQpLgL_39028111.mp4 b/video/I90ypQpLgL_39028111.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f886a175911410ca5faa0be4b1598da5d33a29d5 --- /dev/null +++ b/video/I90ypQpLgL_39028111.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa5ae0cb604295ce717c73f63c049e9792f26c646e37b2d6f420bb53e807c1ab +size 1985820 diff --git a/video/IDn9SiKgLy_39026589.mp4 b/video/IDn9SiKgLy_39026589.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..203f75b436fcb64673d4c469ee90c27302fa6673 --- /dev/null +++ b/video/IDn9SiKgLy_39026589.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8582920f0bf8c238f5310b92e5f61780725e80bf360cda0151f8fab9484c3a43 +size 2344810 diff --git a/video/IEyXWuXAQT_39027958.mp4 b/video/IEyXWuXAQT_39027958.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bc4c86978af2ec73dd0fd41acf6988aa1f50e23a --- /dev/null +++ b/video/IEyXWuXAQT_39027958.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2228cf62c86f15de3449138770cd8480de2b0dc6f498fe57a88913f74cd4ab52 +size 2335242 diff --git a/video/IGCaTQ4n1R_39028081.mp4 b/video/IGCaTQ4n1R_39028081.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e3126023820856a293f461186ef0669dd9238062 --- /dev/null +++ b/video/IGCaTQ4n1R_39028081.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:c51451335b29d5dfb31d4f62297ba81268b904a56fd1b9b216c2e6890374c09c +size 1699632 diff --git a/video/IGhpUd496D_39027423.mp4 b/video/IGhpUd496D_39027423.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8d3aa1b782b162a5b2f0ade476a66b6edd766949 --- /dev/null +++ b/video/IGhpUd496D_39027423.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6f514f22a63f614db1bd2dc360e84151b890e05a3520e22f6c3c31b746df17b9 +size 3302473 diff --git a/video/IHjoPnNZb9_39024906.mp4 b/video/IHjoPnNZb9_39024906.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ad307b3dc4ee8f31a935bca6af6784d4e53a46d7 --- /dev/null +++ b/video/IHjoPnNZb9_39024906.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71313b25ee3ee49acf92d2f2a1112f2c94131f7da46625e4185bfc7efdb53f5e +size 1170721 diff --git a/video/IIoH8bf5BA_39024400.mp4 b/video/IIoH8bf5BA_39024400.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dd66d96da6c4aa58fb3c8c6cfff004662784d20c --- /dev/null +++ b/video/IIoH8bf5BA_39024400.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5be4f0be8d80572b9761f7f51ae06a3f484c46a7f3340026fb58bea3e1aff135 +size 2580124 diff --git a/video/IL71c1z7et_39018153.mp4 b/video/IL71c1z7et_39018153.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b3508f1f314a1352e60318110a92e87fe010ecf5 --- /dev/null +++ b/video/IL71c1z7et_39018153.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d797d3b59b663c197fbe128741a0cd8120cae32d53ddcc1caa84a1a3acf87efe +size 2191377 diff --git a/video/ILYjDvUM6U_39018152.mp4 b/video/ILYjDvUM6U_39018152.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d94a0c8337ed3e3f85e95e7f084180e23738acf9 --- /dev/null +++ b/video/ILYjDvUM6U_39018152.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:226c05792afdc262f26e965678e83eb12c5ed84a5226744c47b6681e4c510f7f +size 2475658 diff --git a/video/IM4LtYRWdE_39026799.mp4 b/video/IM4LtYRWdE_39026799.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..808fc0c2f8d3e8b25af737a35b116cd799f89a80 --- /dev/null +++ b/video/IM4LtYRWdE_39026799.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7f857a1f81c41265a301ada5cdfb7cef39b89850f187d08d928a02ce9d2e5eb6 +size 1843157 diff --git a/video/IMlDpZmLnL_39028848.mp4 b/video/IMlDpZmLnL_39028848.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..435770accb0468ec936c32c703a3ed990080fd85 --- /dev/null +++ b/video/IMlDpZmLnL_39028848.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d4576215b4864e2bc406e62518a4930a169ca7fdebf2ba8060701ac624199b9 +size 2659851 diff --git a/video/IOKLUxB05h_39028712.mp4 b/video/IOKLUxB05h_39028712.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4c3e50dca4c0597fa3172cb3267934d94d390fb3 --- /dev/null +++ b/video/IOKLUxB05h_39028712.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f08f52d41c956684ae214b26718d4fc2844547786201f519166f85311b85c37e +size 2119265 diff --git a/video/IRcv4yFX6z_39018833.mp4 b/video/IRcv4yFX6z_39018833.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4a1ad624b954bf30f4f96b2956a94e93a2827a37 --- /dev/null +++ b/video/IRcv4yFX6z_39018833.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:c32910923f7eb715f49acf4628ccbf828b6d213739d6e8b9ec67e2d8d8b35991 +size 3021032 diff --git a/video/IYxDy2jDFL_39018981.mp4 b/video/IYxDy2jDFL_39018981.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..52fbb24a2d01b051fb41849c881a60b43a308a43 --- /dev/null +++ b/video/IYxDy2jDFL_39018981.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ac18a34115d77af55e9926213d75624bb55dd4279f10cb854874e48949ad40a4 +size 2538731 diff --git a/video/IbIB8SBKFV_39028402.mp4 b/video/IbIB8SBKFV_39028402.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..527679ed68187b9618aa584b2af137137a2dc673 --- /dev/null +++ b/video/IbIB8SBKFV_39028402.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b6df87d8fee93a1a0d1217c08783bdbde7522b639fc52f7d75806ea96013d1d +size 2558177 diff --git a/video/IcR1OOFzxm_39018146.mp4 b/video/IcR1OOFzxm_39018146.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c9ebe8859c4df64988a6a4ef545bea1c1bbaf390 --- /dev/null +++ b/video/IcR1OOFzxm_39018146.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce70ceec62fd650f34cb594125bd1fe1f7808e3799b09beafb06b3cdcf3367b0 +size 2620872 diff --git a/video/IcVNBR7qZi_39018963.mp4 b/video/IcVNBR7qZi_39018963.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1c79c3f6a6724e44b045237744ebe0a5684ca594 --- /dev/null +++ b/video/IcVNBR7qZi_39018963.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:018dbbdbc153f42abc29557427fe5090d46c6714170f57f86dde3d1e1364e79b +size 2164597 diff --git a/video/IjMUGuUmBI_39018820.mp4 b/video/IjMUGuUmBI_39018820.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..92dab37c3151cd428b2f4c64b365813417da5497 --- /dev/null +++ b/video/IjMUGuUmBI_39018820.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ef1fc655218d3b21bdaccfed1d39329cbd1884a6a648a983b6dd2bee98a77a3 +size 1908029 diff --git a/video/IlIDNMvwmX_39024556.mp4 b/video/IlIDNMvwmX_39024556.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..42cf1bb6a19f143496e6b6fa738b07de8729ada1 --- /dev/null +++ b/video/IlIDNMvwmX_39024556.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:021afa17108931ac14b4d1d157790a139c6508f8f741d079dfeb26aa4ae7e3f0 +size 1397569 diff --git a/video/Io1qKqCVIK_39024729.mp4 b/video/Io1qKqCVIK_39024729.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c89a6cc7ed658f3633a3f624ad90f38716ad3ada --- /dev/null +++ b/video/Io1qKqCVIK_39024729.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3aeddf0242ffe35a1cd6f9b222b652c3c906d27744f67074b68b725f1de3627 +size 2134188 diff --git a/video/IoKRezZMxF_39018144.mp4 b/video/IoKRezZMxF_39018144.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..967dd116ecf6db85848cf798c434a54c5e4b740f --- /dev/null +++ b/video/IoKRezZMxF_39018144.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:416159f4d2a7caec7bb07c1077584817827934c74c3bb2393f90c41f76f58f65 +size 1673328 diff --git a/video/IoRT7EhFap_39027293.mp4 b/video/IoRT7EhFap_39027293.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ea4216f73fbc71b28d1c447f6067e261c5647af4 --- /dev/null +++ b/video/IoRT7EhFap_39027293.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d2c86e7061213cba2c2d45d3d7505812a2641ac32f408d3cc165d5e5d0ed06e6 +size 
3574215 diff --git a/video/Ioabr42B44_39025038.mp4 b/video/Ioabr42B44_39025038.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b985f6fcf39cc9e5f6e82af5f7cbd1dc42fc4de0 --- /dev/null +++ b/video/Ioabr42B44_39025038.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1cb39b535a8fbd5b4b2f120e818272176c072e9692c9f49d7f5b947f94756a51 +size 2418318 diff --git a/video/Iq2IAWozNr_39026332.mp4 b/video/Iq2IAWozNr_39026332.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..17e1950181a47a4f0562452e177f809c78c6c36f --- /dev/null +++ b/video/Iq2IAWozNr_39026332.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d737ef7a698fcf62f94b999a508e01af776926e6ac1b9371be5ad5f6292ba7b3 +size 1621837 diff --git a/video/ItzD2Cnu9y_39025970.mp4 b/video/ItzD2Cnu9y_39025970.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7934c2c6ff600283395d9ae814a1fe12087c43c7 --- /dev/null +++ b/video/ItzD2Cnu9y_39025970.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb893c0ff070939437a0f1cb52d3e08851fc9e8abfce20951d8053e081f1f0e3 +size 1735714 diff --git a/video/IwNTiNPxFt_39025793.mp4 b/video/IwNTiNPxFt_39025793.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0f6b7ed21eaba79930625da6a42eca4040a1e27a --- /dev/null +++ b/video/IwNTiNPxFt_39025793.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:073a59e4b1ebe4444cb247afd5bda026b747e606a18afbd9036badbe98247254 +size 2555702 diff --git a/video/IxEhb4NCvy_39027107.mp4 b/video/IxEhb4NCvy_39027107.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5010b3c418e7f4a9cf4f9c5cc7fd5b8e8cd5002a --- /dev/null +++ b/video/IxEhb4NCvy_39027107.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2e3f3b57365548f2eafa3d6cdd02a1e942eed3132475d30d7e33051cb12517d +size 2990053 diff --git a/video/IxRf7Q3s5e_39028072.mp4 b/video/IxRf7Q3s5e_39028072.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a1cd9fc898f8d4b5bb93cacfa9ba95b0c911a97f --- /dev/null +++ b/video/IxRf7Q3s5e_39028072.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:213cfaa885b5686d317b38684d837e7a41a81c8ba9f7403082fd16454e542fd7 +size 2664920 diff --git a/video/Ixi4j6LtdX_39018142.mp4 b/video/Ixi4j6LtdX_39018142.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6baa9c8e7b627a6b6f1199ddc9cd08b8ac7c748f --- /dev/null +++ b/video/Ixi4j6LtdX_39018142.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f1cf4d31fca5f0d19f2ee1f1cf32c82fda3c15cf57ac81e88d0d5f15218a4e9e +size 2462161 diff --git a/video/Iyve2ycvGZ_39018141.mp4 b/video/Iyve2ycvGZ_39018141.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..39a3a2aecd948bf754c0176b0135297b6cafda04 --- /dev/null +++ b/video/Iyve2ycvGZ_39018141.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2bd49326686c006a213f8be36d6b3f159f6621683cd850a2ab5499645d941a68 +size 2167254 diff --git a/video/J1djqLAa6N_39018139.mp4 b/video/J1djqLAa6N_39018139.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..060cf41f91aff81ae2f7d12926d04c15462070cc --- /dev/null +++ b/video/J1djqLAa6N_39018139.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae12ed49b9a3633d026e69684216a45e293bd906eef97a3225f1d42b0cbbf47d +size 3859019 diff --git a/video/J3w0AXtEhp_39027117.mp4 
b/video/J3w0AXtEhp_39027117.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1ff796ae72e826d1ff862cfa20ceaa94f96dc754 --- /dev/null +++ b/video/J3w0AXtEhp_39027117.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f90807b78515e8ecaa06645849fac68cc684f2fa71c0eaf8f2734abb8cb5b94 +size 2407667 diff --git a/video/J6zHcScAo0_39026728.mp4 b/video/J6zHcScAo0_39026728.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..98dedae0a0605e31aa5f2bb98a342a7bcff9327d --- /dev/null +++ b/video/J6zHcScAo0_39026728.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5a0821738119a7c356ce31539c9b8882f7b2d80053be4787873b1498adc20ac +size 2797397 diff --git a/video/JAhNsZ9dvG_39027731.mp4 b/video/JAhNsZ9dvG_39027731.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d6ffbbe988aa04840a4c8189cd478d95f7dd24eb --- /dev/null +++ b/video/JAhNsZ9dvG_39027731.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e42bb46035ca2a44082089339753bb8d3b60fdfae51c2ea17de6b4f73f5c99fd +size 1818121 diff --git a/video/JCyBN5syv3_39026871.mp4 b/video/JCyBN5syv3_39026871.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5ffe49aaeb29a5676fadf25e9361abe76cf46ae8 --- /dev/null +++ b/video/JCyBN5syv3_39026871.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:09953d1ce3a4be69bdf3e176c8a5f12b5d004c781d8ab3fa6ad065ae4ca9178f +size 874389 diff --git a/video/JEKXTLjEIq_39026307.mp4 b/video/JEKXTLjEIq_39026307.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a20fc836f4e64d9f1db0ffad8bfed0b344592c65 --- /dev/null +++ b/video/JEKXTLjEIq_39026307.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c6438b47e0221ca5041339dd782193e79741ac26a11f66dd0fd98e828c0889a +size 2201921 diff --git a/video/JInTfcxH3Q_39028339.mp4 b/video/JInTfcxH3Q_39028339.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ca534be977bceccb4b2f204df3de5bd55ed362b4 --- /dev/null +++ b/video/JInTfcxH3Q_39028339.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b7cc81fbbe7fe43e5f04140f4828b84a6ba468796a1d770c122094f82f4a00c +size 2856613 diff --git a/video/JJGfCvjpTV_39028574.mp4 b/video/JJGfCvjpTV_39028574.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9ff265e3dca12fd926ee5c656e7bff5318a4a87f --- /dev/null +++ b/video/JJGfCvjpTV_39028574.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:040a50ab7ab80c387722df0c684871586ad5148ba968c229813e69a440332d5c +size 2812358 diff --git a/video/JK728xy8G7_39028002.mp4 b/video/JK728xy8G7_39028002.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f123e751020719354d96260c5916808ab57daa8f --- /dev/null +++ b/video/JK728xy8G7_39028002.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a9bd1daa2ca18c2c4d7332459068840dd4001940ce0099fd2ca22f2fad7d22ad +size 1981219 diff --git a/video/JKEIYQUSUc_39026592.mp4 b/video/JKEIYQUSUc_39026592.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..10d65f08005d1822f80e9a78b1bbb01c069a0573 --- /dev/null +++ b/video/JKEIYQUSUc_39026592.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8fac8d8855b7de56a2a0cb48fb3fd870f2ff4461f036f0111477486687540da +size 2738273 diff --git a/video/JL2eMCfDW8_39026968.mp4 b/video/JL2eMCfDW8_39026968.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..634aaee3a9567523df68c2f6d8601e2f02a5c147 --- /dev/null +++ b/video/JL2eMCfDW8_39026968.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0db251b6d6be8fee651b38c89b3d4b8a2586c09df633ebe2333b10aad06749d6 +size 2914772 diff --git a/video/JM0IQSliol_39026814.mp4 b/video/JM0IQSliol_39026814.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6f4c25a6717dc5040fdf8ad3d14ec3cc71ed9842 --- /dev/null +++ b/video/JM0IQSliol_39026814.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:117905c4a64334c4854af9d49658786da73c6bb6fe662a69e7187679c059265a +size 2388549 diff --git a/video/JN7TcCm9LF_39017127.mp4 b/video/JN7TcCm9LF_39017127.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..252c24ce0bef5579e7efb138e0ba58a274ad9bd4 --- /dev/null +++ b/video/JN7TcCm9LF_39017127.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d8e627940c100a2cb3f2d4115ce117b6aa4b1c7103a8f659dc7e635a575f5e1e +size 555918 diff --git a/video/JNl6h3U3oW_39027741.mp4 b/video/JNl6h3U3oW_39027741.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3d22d3b385e5b82942972259f9d7321c4c79f3a0 --- /dev/null +++ b/video/JNl6h3U3oW_39027741.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f119bef5d5e7b515dcf3483a94a192f3e183783796533c3b94b9ab056cf3738 +size 2774662 diff --git a/video/JO7k0SJ5V6_39017041.mp4 b/video/JO7k0SJ5V6_39017041.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..51a5bd55018084f3b0f2ef45d68d44add57713a7 --- /dev/null +++ b/video/JO7k0SJ5V6_39017041.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8a868f12702278cab580a348ba3b8c89a7c72308056c4a2d355012eee4fe7314 +size 2599466 diff --git a/video/JW3jTjaaAB_39018937.mp4 b/video/JW3jTjaaAB_39018937.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7bf9bdb5a45c34f604d0754c1ae8a076fffac276 --- /dev/null +++ b/video/JW3jTjaaAB_39018937.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cb17efad3ca8984bf7d9e22e3ca292f821da72b855c61c908a343816c9e8cf87 +size 9534869 diff --git a/video/JXKbf1d4ib_39027392.mp4 b/video/JXKbf1d4ib_39027392.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d5a79e2bf46b242fd34bea75619fbafd5cffe15b --- /dev/null +++ b/video/JXKbf1d4ib_39027392.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0738f245183b5d484800551aeef94f7cff64f829584fcdc3496172e252f8282b +size 2084589 diff --git a/video/JYu5Flqm9D_39017106.mp4 b/video/JYu5Flqm9D_39017106.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3ec880a701a0b0c9bd050c813901bb1325468bf8 --- /dev/null +++ b/video/JYu5Flqm9D_39017106.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e28e6a394428200257c55f152abe140e2c33b781bf564b853b97c11c50e02810 +size 2842395 diff --git a/video/JZHFRLoqDq_39027761.mp4 b/video/JZHFRLoqDq_39027761.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..123f3a8bd5ff9f662659701990518477dfd1af53 --- /dev/null +++ b/video/JZHFRLoqDq_39027761.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:772151805cadd10003feb6800871e8dc439551d623c9c722c3af6fee14c0cc71 +size 2963051 diff --git a/video/Je5SHCKpPa_39018136.mp4 b/video/Je5SHCKpPa_39018136.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..f9c294509f43d9222cddd55c90f3be74bcf0128a --- /dev/null +++ b/video/Je5SHCKpPa_39018136.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23d895167ac9340da8ae99abfad259ae7c516c7d1facd5f8c6fd7d26edb447b4 +size 1557545 diff --git a/video/JgqftqZQZ7_39018633.mp4 b/video/JgqftqZQZ7_39018633.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1427a4da4c46508e0b0823dd46960cf4ba663176 --- /dev/null +++ b/video/JgqftqZQZ7_39018633.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0240bce79b442f163c7fa6f3e4d1c2f8b783c3219b5d2ca3a6c808e90749fbdb +size 1170402 diff --git a/video/JiQXsLvDls_39027089.mp4 b/video/JiQXsLvDls_39027089.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..01d877c773668a1a934a168b7be041005f7bec23 --- /dev/null +++ b/video/JiQXsLvDls_39027089.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1fdffb46c4a0bf6609c4d42cebb21fd09c38d79359e767644bd32d6a8509039e +size 1816206 diff --git a/video/JiRGxrqHh0_39026006.mp4 b/video/JiRGxrqHh0_39026006.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a2121dc8e4a9ceb7b39872349ac9a6c3ff375e11 --- /dev/null +++ b/video/JiRGxrqHh0_39026006.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b52650c91373fcfea00907061c60e8e639377e9239f8696d8d574817ec4bc1b8 +size 2539403 diff --git a/video/Jkt42QYyEH_39025839.mp4 b/video/Jkt42QYyEH_39025839.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9172a34c8d0e0921fb9e01e4b2e2bd733d077611 --- /dev/null +++ b/video/Jkt42QYyEH_39025839.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5778f6369542d67f20fd21528866d923b31514f9d7dbe2dec19f75e0bebefba6 +size 1634830 diff --git a/video/JlWn80mTJi_39025610.mp4 b/video/JlWn80mTJi_39025610.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f3738c43653a5ff5ebb77c23c9867c231949e58c --- /dev/null +++ b/video/JlWn80mTJi_39025610.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d0d58b28a6396e9b2fdfd48cd0ca9fb9c3c9eba15928bc35fc1a37b95e9e86eb +size 2710544 diff --git a/video/JnYaF3vv3G_39018132.mp4 b/video/JnYaF3vv3G_39018132.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4fc54366e8745c5706d215dc18cde586e585eb9b --- /dev/null +++ b/video/JnYaF3vv3G_39018132.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f958fbd1780cd93f0d9a0af7290962bb17980a6615195090bb392fdc6900308f +size 2606918 diff --git a/video/JpqEzPTuv6_39025832.mp4 b/video/JpqEzPTuv6_39025832.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4162f63e8c7e7590bcbb89dfc395255c63a67a77 --- /dev/null +++ b/video/JpqEzPTuv6_39025832.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:639f8b4c4a1c1db48f49676792d2962523330e01516cabd199ff091a0896afe2 +size 1641303 diff --git a/video/JrIPBXWiS8_39024629.mp4 b/video/JrIPBXWiS8_39024629.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1cf8f1b8b8e3dec16dbf0f39d0055867cfd4a702 --- /dev/null +++ b/video/JrIPBXWiS8_39024629.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53a76941fe48d10f959c7ec37e248f03789e7bcded246d0fbea81302f7c95885 +size 3233847 diff --git a/video/JrmPG9ufKg_39018661.mp4 b/video/JrmPG9ufKg_39018661.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..02d3e697d23877bf8bd83b6f9ea494457368c89f --- /dev/null +++ b/video/JrmPG9ufKg_39018661.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e2512da5bd7fb288e7cc83b2472e6da02c7e727a8020f48042e413359024d84 +size 2571749 diff --git a/video/JsnR0YO4Fq_39018131.mp4 b/video/JsnR0YO4Fq_39018131.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5fa87c2ac8ccf142ed477c78f0e4294c795d60e6 --- /dev/null +++ b/video/JsnR0YO4Fq_39018131.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47856cabc75ca06a5dae98e43c1367c5ac28da64ddbc3516b4fed6825a982157 +size 2081502 diff --git a/video/JzG7kSpjJk_39017061.mp4 b/video/JzG7kSpjJk_39017061.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..11436ba2fc0a0d996b55b74e6b0d0e92d896bbef --- /dev/null +++ b/video/JzG7kSpjJk_39017061.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f751ab277f6a1cf44e2c9f0ec9068b12df40a3776358402c91f8fb2111304846 +size 1230077 diff --git a/video/JzcIKnnOpJ_39025132.mp4 b/video/JzcIKnnOpJ_39025132.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f6b7c4cc4651c12f332b764757ff71d450cbb296 --- /dev/null +++ b/video/JzcIKnnOpJ_39025132.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e5fcfbd9684ecb42558c329047f126ebcbd11872e72a7d9678dc830be209cc6 +size 2424511 diff --git a/video/Jzog9gvOf6_39027201.mp4 b/video/Jzog9gvOf6_39027201.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9fa0b62a68620644b3551d4d40bf2e4c5eac7b64 --- /dev/null +++ b/video/Jzog9gvOf6_39027201.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb659239172423395a24c81a6cb295062519a9ee8f4f4178fdc95028b1b1f5f2 +size 2468229 diff --git a/video/K3h2kZFz8h_39026110.mp4 b/video/K3h2kZFz8h_39026110.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e89871efeff12bc2559157525f9b8eef8b4da78b --- /dev/null +++ b/video/K3h2kZFz8h_39026110.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:feb73a34151bdaea012bea4f4da1eb09da313386bdff5358f795a411f624d33e +size 2409522 diff --git a/video/K5PA3SK2jB_39025162.mp4 b/video/K5PA3SK2jB_39025162.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2ba1eb90608a5324e2ab6c1ace63583575184f73 --- /dev/null +++ b/video/K5PA3SK2jB_39025162.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d6a740bfe4153356f37118701ba01c07347e6a9e56edd8da0eeecd2d95f8ca4 +size 2662607 diff --git a/video/K9V7ugVuUz_39018125.mp4 b/video/K9V7ugVuUz_39018125.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bfcd2d1cf76f5f637a2b80d72c06ece366e12f6d --- /dev/null +++ b/video/K9V7ugVuUz_39018125.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2fa812f7c7102e2f454f99dda9d1824e90c28f380d038fa7f1b02949b26da51a +size 2761376 diff --git a/video/KAAUvi4kpb_39027996.mp4 b/video/KAAUvi4kpb_39027996.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6625966444c87def90441d6a0818b02bdb6b38c6 --- /dev/null +++ b/video/KAAUvi4kpb_39027996.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05f5826e7ebe522c1a07d4e638be2eec4810cad2f4a3dd1212abde913e62b0d1 +size 2187712 diff --git a/video/KEe4IUp20I_39025023.mp4 b/video/KEe4IUp20I_39025023.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..153faf53257796ea9def6f3115263c79cb2dd351 --- /dev/null +++ b/video/KEe4IUp20I_39025023.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3566f035b241258d18dfb4c6269dfd21b322f505c0de0af6ec05d5d55000acbd +size 2071265 diff --git a/video/KHX0dKXdqH_39026959.mp4 b/video/KHX0dKXdqH_39026959.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..53284549a985c49522d9d4dcbf8dcf5aff89f298 --- /dev/null +++ b/video/KHX0dKXdqH_39026959.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af9fd9cec044f947392842282ae254ad5dc3222d605477a2fb8e5bafdc914e38 +size 2204025 diff --git a/video/KHcB1drMRX_39024969.mp4 b/video/KHcB1drMRX_39024969.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e3dd5da2cacb2b61f6858986bafc98f2bad9899d --- /dev/null +++ b/video/KHcB1drMRX_39024969.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a99fed46b0a442d6c78489d6ce1ac2d173020bfdd78c61209ecdeacab87455d9 +size 2647998 diff --git a/video/KI5TANE02e_39028637.mp4 b/video/KI5TANE02e_39028637.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c7a0a516692623acdc48c9d3739479161db1ce4a --- /dev/null +++ b/video/KI5TANE02e_39028637.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:90a27613e5871be43c28a94e678a525cab80af47e72b6fe64bd4ab95394a439a +size 2918645 diff --git a/video/KI9NqjLVDT_39017085.mp4 b/video/KI9NqjLVDT_39017085.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8a83d9c7d78beeb72d37dcf5103a1654d0cdfd53 --- /dev/null +++ b/video/KI9NqjLVDT_39017085.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4fce988131f05112e4a317bd41db59c4be7f43e85360d2ea62bc9ba08ab5ba50 +size 2447327 diff --git a/video/KKrj1vCQaG_39027963.mp4 b/video/KKrj1vCQaG_39027963.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..332714021b2049dd36a08d46669ad7051d3dcb83 --- /dev/null +++ b/video/KKrj1vCQaG_39027963.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ecfffa0dc691e613e6141bc8d0844f1d27b17edfde5c12c2abdb3b0430223146 +size 2752857 diff --git a/video/KLL70pTQ17_39027275.mp4 b/video/KLL70pTQ17_39027275.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3d1e5285545f2d7200f733d63cd57638e2c4e33a --- /dev/null +++ b/video/KLL70pTQ17_39027275.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23d640b8dcf6d4a19e79cf74d39bded9c78567d5d0f6c2a34990f3a8f945cf96 +size 2618715 diff --git a/video/KNrwaFEi1u_39028673.mp4 b/video/KNrwaFEi1u_39028673.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7f63ece7fbbd74e3a80ca8d3954943bce9daad3c --- /dev/null +++ b/video/KNrwaFEi1u_39028673.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f29f8c58463ff166f4971247c1f5d11dc731d352c898ec728f41f5cf0563e6f +size 1565782 diff --git a/video/KOZu91CzbK_39018842.mp4 b/video/KOZu91CzbK_39018842.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c4651593483adb24348c6d545b6d086ac9bdc4d7 --- /dev/null +++ b/video/KOZu91CzbK_39018842.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a400cc977aa718089ca415fb41855be1181845fc59630c1b0b9ab7f550dc782 +size 2363724 diff --git a/video/KQe9tHd0k8_39018828.mp4 b/video/KQe9tHd0k8_39018828.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..e2e119532bf8792eceb02cd5ec8ea61617dc239b --- /dev/null +++ b/video/KQe9tHd0k8_39018828.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe2e159726dab3465d5681eaa41bbffa89d98161abec34f816734d78819c9355 +size 3399046 diff --git a/video/KVAx5tys2p_39026577.mp4 b/video/KVAx5tys2p_39026577.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e83de8cee5927d09e88d49e289eea5d1d5a7346b --- /dev/null +++ b/video/KVAx5tys2p_39026577.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:85ec9b449d524d3375a4b50fb9af2920eb7187470ac3796f6d15802c40f75ea8 +size 2518755 diff --git a/video/KXUijdMFdG_39025113.mp4 b/video/KXUijdMFdG_39025113.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fefca64398ec2667bc16bd4efddae347ffaae6c9 --- /dev/null +++ b/video/KXUijdMFdG_39025113.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0aee3e1268c47faa32addd0622069de3002d8162b32b3d4bd1fae91d2cb08bc +size 2375901 diff --git a/video/KY07A73F3Y_39027046.mp4 b/video/KY07A73F3Y_39027046.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fddf0741ad48698910ce84908dd19898c719dff4 --- /dev/null +++ b/video/KY07A73F3Y_39027046.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4de12af101d0f92c80a18914c6b60a0984afb7b30566086c30b55f8dde52fc8a +size 3009605 diff --git a/video/KYHVBsEHuC_39024701.mp4 b/video/KYHVBsEHuC_39024701.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f6f09e2658d73cebb927e34d80b53ef937e79c44 --- /dev/null +++ b/video/KYHVBsEHuC_39024701.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8b7e071fba52f5daa6cf7e9b57dfc436d2f3e077befa1b577f9293e52b7a94c +size 2116628 diff --git a/video/KYHma7hzjr_39027914.mp4 b/video/KYHma7hzjr_39027914.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1d9aa22a97de3e271ecc8bd5c71ea1389f04cd25 --- /dev/null +++ b/video/KYHma7hzjr_39027914.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff53c34c5d73f3ec61f273f369e2706ea71986d3721946e7638522e4bb3da27c +size 2967016 diff --git a/video/KZSEgJGPxu_39017198.mp4 b/video/KZSEgJGPxu_39017198.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..097c11dc89c88a02c3deb1bf516a4028602d85fd --- /dev/null +++ b/video/KZSEgJGPxu_39017198.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05bf1349b40e033aebc2c759e53b21d35887f1578ebf218691224dbbd44ea823 +size 2417065 diff --git a/video/Kcsj9FGnKR_39026844.mp4 b/video/Kcsj9FGnKR_39026844.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d820764fcb380622154d18abc5b5554c5965bb1f --- /dev/null +++ b/video/Kcsj9FGnKR_39026844.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb6cbbf2c873fc8b61e25222e1cb0186c85283bf32a1bbf6f3aef7c4aa2f8513 +size 2403717 diff --git a/video/Ke3MSP8Nr6_39026732.mp4 b/video/Ke3MSP8Nr6_39026732.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c1c15b5c150a3ef25474e08af770b0b0f1777632 --- /dev/null +++ b/video/Ke3MSP8Nr6_39026732.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dcdb2fd4639a172db988e5c8334ab40d0c449dcfb84f44a4daf44a9cf882647b +size 2729867 diff --git a/video/KhwOuB0fs9_39027549.mp4 b/video/KhwOuB0fs9_39027549.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..82bc05a1f4165f593d0b3c23df0d4fbe3a95f9ed --- /dev/null +++ b/video/KhwOuB0fs9_39027549.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4032604e53dcb5f8a6d2a3722a1e861d2a53a35bbad9ad2949639e07bb25236a +size 2685800 diff --git a/video/KjNEzWRIqn_39026327.mp4 b/video/KjNEzWRIqn_39026327.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..aa52a2e104a983b7e5b615d484d356f7fa42c47e --- /dev/null +++ b/video/KjNEzWRIqn_39026327.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a2bc8a1d34b15424e55026b66f91fc05eab55bda78755bf9052294b44b11ddb8 +size 2530814 diff --git a/video/KqbLzSIXkm_39025060.mp4 b/video/KqbLzSIXkm_39025060.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..45a9df80a004d6942b51bcf0a1887aac8defb7e0 --- /dev/null +++ b/video/KqbLzSIXkm_39025060.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b24446c8b7d211d354fb589e94771938e3eb02a7accd754bd56e345ed8de98b +size 3245841 diff --git a/video/KrtGfTGaGe_39018738.mp4 b/video/KrtGfTGaGe_39018738.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4602090f5436c81dd298a9cc60788352088eb317 --- /dev/null +++ b/video/KrtGfTGaGe_39018738.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:476d934e8b882528692d9339c5801a6bb3a8a8b96897cd14af4d4dea72606b3f +size 1306464 diff --git a/video/KsLX5pFpOs_39027662.mp4 b/video/KsLX5pFpOs_39027662.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1fcc77d5b75726dda5c5ab75e7a6b1f2e070cd45 --- /dev/null +++ b/video/KsLX5pFpOs_39027662.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2e028d9a096acd68dc43f975dec4580be89b02115f7bddae44d0821d1d4f8ec5 +size 3248284 diff --git a/video/Ktx95ZuRjP_39024579.mp4 b/video/Ktx95ZuRjP_39024579.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2d67d589537f4e059b91cc11a0403a152552ab50 --- /dev/null +++ b/video/Ktx95ZuRjP_39024579.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61a74b3ccdda8c9641425ccf38f8b5d4c2dfaaa5a307ae3611810d75356e4984 +size 2715394 diff --git a/video/Kuj5gVp5GQ_39018100.mp4 b/video/Kuj5gVp5GQ_39018100.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a56b6943b126c5f61da50f2898027a470373b0c9 --- /dev/null +++ b/video/Kuj5gVp5GQ_39018100.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb274ca9e030e361458e82cf304db5e3e45cc9c57d5e472d6ac7269ce484587e +size 2868004 diff --git a/video/Kx8I0rP7w2_39025049.mp4 b/video/Kx8I0rP7w2_39025049.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..74e78e6d1c5926d15eb73b32776c44e740cd62f7 --- /dev/null +++ b/video/Kx8I0rP7w2_39025049.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6c62f51b9e8db2f35b9560bf1c63a220f376ecf21c791d54b754ad52bcc9c83 +size 2075144 diff --git a/video/KyNO0n1bJ9_39025462.mp4 b/video/KyNO0n1bJ9_39025462.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..05990ff5029241f3dda62a998df1933fba5b7d26 --- /dev/null +++ b/video/KyNO0n1bJ9_39025462.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:056d80caa666a34451fa8ffb0ed81d5faeb6235f5f9b64a9ff7154d4beac2ab3 +size 1610926 diff --git a/video/KyVBzkConO_39028301.mp4 b/video/KyVBzkConO_39028301.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..1ab10d5f4266d21a5112d678a20db4d56bc33f60 --- /dev/null +++ b/video/KyVBzkConO_39028301.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4fe0b5ada08fc6e52fabc1d557f752ac480273a056faf6f4969224af82c6cc54 +size 3189588 diff --git a/video/Kz3yckpCN5_39018099.mp4 b/video/Kz3yckpCN5_39018099.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8f527d56807799ad40fd60439481d1fc158c9ea6 --- /dev/null +++ b/video/Kz3yckpCN5_39018099.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:caaa32628e95dc11985b0645d11429a8bda4bfa25a5088caaba6133c527491d8 +size 2557167 diff --git a/video/Kzno1r3Xef_39027600.mp4 b/video/Kzno1r3Xef_39027600.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a46cd3f1dc022d39942e047ddbfaa10ebccc8899 --- /dev/null +++ b/video/Kzno1r3Xef_39027600.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e16e2633484176892ced19b3906c22b16b6b60e2e6d720ce250ff5a69de8d77d +size 2092837 diff --git a/video/L0r0GphlIL_39018098.mp4 b/video/L0r0GphlIL_39018098.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..43dec357ba944b015da41ac25ddf0867db7279da --- /dev/null +++ b/video/L0r0GphlIL_39018098.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b5098683ea23e24d743e9da2813fc695e6f3e3d8dfbb6d3500675423261368c +size 2687184 diff --git a/video/L1mMK39Z7P_39025327.mp4 b/video/L1mMK39Z7P_39025327.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..de685c23c5a8d5e846f567eb54aa051c94ed09a4 --- /dev/null +++ b/video/L1mMK39Z7P_39025327.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b7f61a272354dc7479336bfca5a3e7dd27eaf07407c0411535332c2208fd1643 +size 2527781 diff --git a/video/L3RYBqzRmF_39026584.mp4 b/video/L3RYBqzRmF_39026584.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7f39e7abc3220c0b6d10d5bab0f708d75df6131c --- /dev/null +++ b/video/L3RYBqzRmF_39026584.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d8f28c2fcf181c17d0f97aa0f729de0e8dd6ceb3eae7df4b799c19f63076a5d7 +size 7020371 diff --git a/video/L6ICzOxAfi_39025700.mp4 b/video/L6ICzOxAfi_39025700.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5c0c33e1ce310bdf6d2eeca2edac852fc80e9cbc --- /dev/null +++ b/video/L6ICzOxAfi_39025700.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd21cc6a8400846d6101adf6f3f111f8fc33890e98c00328174f59542d3a8ebf +size 2355597 diff --git a/video/L86glqNCUj_39028714.mp4 b/video/L86glqNCUj_39028714.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..46e129365ae2a35fdb87dc63b5436d607e86ebdd --- /dev/null +++ b/video/L86glqNCUj_39028714.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6d7e417b5c38de9e329a70157875ce271f6cbbb1d4873a687010819dcfa74629 +size 4059635 diff --git a/video/L8Q21Qrjmd_39025122.mp4 b/video/L8Q21Qrjmd_39025122.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..125603fb77bdf7102d1e3df5289accee3597f75c --- /dev/null +++ b/video/L8Q21Qrjmd_39025122.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:38e99fcb168b10bae1c0d5c224753babe0928505352b35295685a25d9d40350e +size 2416720 diff --git a/video/L8UNn7Llt4_39018094.mp4 b/video/L8UNn7Llt4_39018094.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..6dff867952f32019e72b0b0cd2fcce69cc0fc330 --- /dev/null +++ b/video/L8UNn7Llt4_39018094.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4fa6e3cf2afbf3ed9dc9517824b9570be82df90c5a69dc6f32f451b1c096cd31 +size 1597878 diff --git a/video/L8h6cozcbn_39027499.mp4 b/video/L8h6cozcbn_39027499.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b50b6d07aa56dedcdb1a521d8f4fc75d56d0193d --- /dev/null +++ b/video/L8h6cozcbn_39027499.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0c861749fd2743f9af7bd9199dc39274519dc5dec3da9743df92acb45072c584 +size 2860456 diff --git a/video/L9U5MJJleF_39018943.mp4 b/video/L9U5MJJleF_39018943.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4f8a5a6a623c33e27029f02bdff8c915f012817e --- /dev/null +++ b/video/L9U5MJJleF_39018943.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3653dbbf665530a80e80e1ec7ef183801dd19c7c44fc6d5c4f04a4a479e3222 +size 2252420 diff --git a/video/LDzrQB4X5w_39028476.mp4 b/video/LDzrQB4X5w_39028476.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2d785982973ac7a5175ad980b2d42c16a222b262 --- /dev/null +++ b/video/LDzrQB4X5w_39028476.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4571a5b96da05a0b8bc4c47f27433a012d3994c820dd001d41469e4974e77e1b +size 2697445 diff --git a/video/LEYUkvdUhq_39018093.mp4 b/video/LEYUkvdUhq_39018093.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9ad35a8c7c02d4d66d1302ead32b27293e323cea --- /dev/null +++ b/video/LEYUkvdUhq_39018093.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a6de37f02a01fc7d63cedccfb5761f1f9babf2af33472b48514b3de753c355b +size 2830443 diff --git a/video/LEed5Is4oi_39024996.mp4 b/video/LEed5Is4oi_39024996.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1168854a6d63b22a3b7ee845781b800d3e87e255 --- /dev/null +++ b/video/LEed5Is4oi_39024996.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fef297fe2cc484133f154799ee51b575dec97e009ec1faeb11bcd36776a81d31 +size 2649926 diff --git a/video/LGXeIx75sc_39028290.mp4 b/video/LGXeIx75sc_39028290.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d3ca948d242246fdbf8fa7f16fdaedb6df1cf81e --- /dev/null +++ b/video/LGXeIx75sc_39028290.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b2446f7badbd51cd09b25c92c16d8457fd66f391c411a5cae0c6df435a296c8 +size 3150666 diff --git a/video/LGus3wXPxc_39027489.mp4 b/video/LGus3wXPxc_39027489.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f588b9f116ee80300b0243af7a4ffa88575a2b4a --- /dev/null +++ b/video/LGus3wXPxc_39027489.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1cb86810100dc2429c3a848889630ea07b5ed52dfb226cf9d42fec948edcceaf +size 3309622 diff --git a/video/LJCQH6U0pl_39028171.mp4 b/video/LJCQH6U0pl_39028171.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d7c2107e87843b09325e753f89fcadcac5e4b7b5 --- /dev/null +++ b/video/LJCQH6U0pl_39028171.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23d72e91a940484fa4d50cdefb4cbe8029360f9fd897cb12b1bb8dbc9e243d5a +size 2623843 diff --git a/video/LJNqVIKSCr_39027399.mp4 b/video/LJNqVIKSCr_39027399.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..69b931d3e16dfb61b531bd3ac7e264245f2505ed --- /dev/null +++ b/video/LJNqVIKSCr_39027399.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4cba351629e968394024ede8af8875c60a60153a40b553900548f2e3d75c408 +size 2746541 diff --git a/video/LPbqZszt8Y_39026190.mp4 b/video/LPbqZszt8Y_39026190.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8b6637cfc83be4c78e7d6d55bf3fcecb4425066a --- /dev/null +++ b/video/LPbqZszt8Y_39026190.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae9506096d9fc0df9abff56deb79a0060224228ca69b6201f3c8ad15916f01e2 +size 2443229 diff --git a/video/LQBlSGeOGm_39024684.mp4 b/video/LQBlSGeOGm_39024684.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d30b6d91ad901f0932c06ed914e9d2db589c6e04 --- /dev/null +++ b/video/LQBlSGeOGm_39024684.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8f8d32cb87969cd779c11ab5c709e09af5b6d9d524a25c783a0b5a54aa35430 +size 2769408 diff --git a/video/LR1nnsD7H0_39027688.mp4 b/video/LR1nnsD7H0_39027688.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..737cb5bfa3fc2e22fc20c7a15e60daa06cf49f84 --- /dev/null +++ b/video/LR1nnsD7H0_39027688.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:76a5bc4342c31b71f4fafe290cc345f8082ba22ff30c85d684db2f7848c7240a +size 2667003 diff --git a/video/LSYhE2hLWG_39018089.mp4 b/video/LSYhE2hLWG_39018089.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e3981758e953bd1d37ec2577e527b405ed607535 --- /dev/null +++ b/video/LSYhE2hLWG_39018089.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8c4caa9639864c1e11330f4845643de356a66bd1af2894ed1e458e61e3142b02 +size 2004536 diff --git a/video/LUIXdWn6Z5_39026228.mp4 b/video/LUIXdWn6Z5_39026228.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..74c229c45d741b0bbb2dd2b98cac62034b6c6d3e --- /dev/null +++ b/video/LUIXdWn6Z5_39026228.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cdefd9a8a8fa65c7a761191f18f72ee6e538c18546c1db49eb9d0e6228db3041 +size 2250505 diff --git a/video/LX1lwP90kt_39025860.mp4 b/video/LX1lwP90kt_39025860.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ca657572e5552c719435d9169fe854e18d1c46d1 --- /dev/null +++ b/video/LX1lwP90kt_39025860.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b43cae3c4d390456f3ef27eae482411fadfe8aafdafc427bf9c3f0969f43b953 +size 2323669 diff --git a/video/LXz1xIEBkF_39028115.mp4 b/video/LXz1xIEBkF_39028115.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9f9198d971d3b72c1af5736118f934b0515cdb8e --- /dev/null +++ b/video/LXz1xIEBkF_39028115.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aadcce386afc0145675df26d1aa22583247398a6ed5432a29589aee7b78f9939 +size 3096804 diff --git a/video/LY3ukUANko_39018085.mp4 b/video/LY3ukUANko_39018085.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b32e58fcff4f5c6e0dc9c63d091579a4aa509bc4 --- /dev/null +++ b/video/LY3ukUANko_39018085.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9506f0cc7fcb53dfd48cd9c2ec42ed03155a6fbab752da0170a0daa9cfd06402 +size 3200451 diff --git a/video/LYivxMp5es_39024919.mp4 b/video/LYivxMp5es_39024919.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..af536b8eb9cbcf14cb41b06d20ae7e65587e4796 --- /dev/null +++ b/video/LYivxMp5es_39024919.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a05d62255d2b6fcd8141f98751790e1d58fc6d757459741fd50bc2c3607053bc +size 2771429 diff --git a/video/LYx4w3CAgy_39024441.mp4 b/video/LYx4w3CAgy_39024441.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a6d2fb898e79ed89e22677f615fbdf0a0ec99010 --- /dev/null +++ b/video/LYx4w3CAgy_39024441.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae83df76b85580c4617a8357493f482f62b83092f543a79c01ac442ce66c1ba9 +size 3266974 diff --git a/video/LbJqRGNYCf_39018083.mp4 b/video/LbJqRGNYCf_39018083.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..80db02ebac1b96807dd4b7ff25c8d53458d7f8e3 --- /dev/null +++ b/video/LbJqRGNYCf_39018083.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:09a28568f59dc30704c04c6efec73fbf52871aeaaae56f40f9d397caf4f2f343 +size 3133893 diff --git a/video/Lbuxdzg1pd_39026822.mp4 b/video/Lbuxdzg1pd_39026822.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0061b171bf49d4cc3617405ad77381a2ff26a070 --- /dev/null +++ b/video/Lbuxdzg1pd_39026822.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:75bb56eaa7dd4741cb740d294a78f9bbac7459a24bb02cb463579d9f566a3f89 +size 2507786 diff --git a/video/Lc8gemv97Y_39027304.mp4 b/video/Lc8gemv97Y_39027304.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6bbd30dcd519ccafb3c4d9193d1e032817ee468f --- /dev/null +++ b/video/Lc8gemv97Y_39027304.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:26e0355df04cb9eaca92ef9400986f61d98e2e68a09a1266f596a12e64cbc2b9 +size 2378048 diff --git a/video/LemSSn8htt_39018081.mp4 b/video/LemSSn8htt_39018081.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2a68645f84844df48c83b196bc95c069c628c861 --- /dev/null +++ b/video/LemSSn8htt_39018081.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0ff7999a13af09893fb9ad5c8254097a139e728a3d30884d51b4698bee51828 +size 2912584 diff --git a/video/LezAEImfoc_39028318.mp4 b/video/LezAEImfoc_39028318.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b5fea1cb26aa47cbe8669d0afd266ffc77baeec8 --- /dev/null +++ b/video/LezAEImfoc_39028318.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:952c491896faab3570a04a8a3f3e3f1ee99f8d52be86d5fafc441be5eec09d4d +size 1845370 diff --git a/video/LfmZh91tDI_39018079.mp4 b/video/LfmZh91tDI_39018079.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..379a4ae574df28b9284bd85fc475fc689073bc12 --- /dev/null +++ b/video/LfmZh91tDI_39018079.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:42de688e5c3a0c3826c93fa43427a051d1f23e8f62b365e4d3dd644944c98165 +size 2233221 diff --git a/video/Li9YTHoItP_39027057.mp4 b/video/Li9YTHoItP_39027057.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4aed45e658ddafd1aed47bb779a5721cef7d4428 --- /dev/null +++ b/video/Li9YTHoItP_39027057.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b4652a792ced1bc4e2fdf1368802c72e0fa59cc29a0e9bd84c921af82853ecfb +size 1730595 diff --git a/video/LjeqMvQpen_39017480.mp4 b/video/LjeqMvQpen_39017480.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..d57c0ceac9e0c277c06a47160188c193c22d1243 --- /dev/null +++ b/video/LjeqMvQpen_39017480.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ba07b45c2e0fb9121fe82a878b6e2a4a649f8934c129b75160cff51b594c1b2 +size 2248030 diff --git a/video/LmjLRHVCMG_39025526.mp4 b/video/LmjLRHVCMG_39025526.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d550213de199027d097dcbf4fd2a2d0c35948f28 --- /dev/null +++ b/video/LmjLRHVCMG_39025526.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:def524515f03f490eca7a8da30390e647970b74de84f1928596847990dfaf031 +size 2701602 diff --git a/video/Ln8ogihZ2S_39027412.mp4 b/video/Ln8ogihZ2S_39027412.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4f7b69ba4c88aeb5229612ac8d058619c9e725f0 --- /dev/null +++ b/video/Ln8ogihZ2S_39027412.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:43ce6c117206eb484fe1d58ea3df81b844572850704f9bf88d7570304cc5e23a +size 2060716 diff --git a/video/LpXV29Ggl3_39025477.mp4 b/video/LpXV29Ggl3_39025477.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..71d514e9bc1cef235677f594a9d62bf04df9a150 --- /dev/null +++ b/video/LpXV29Ggl3_39025477.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ff679cc94f6739ec9911d36500613e7a3d6a4638f51d053e6ba5a8806de10e4 +size 1278315 diff --git a/video/LpvSHL9lcK_39025830.mp4 b/video/LpvSHL9lcK_39025830.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4b6100462f699ed3ae88118b2576de46a8ed6f3d --- /dev/null +++ b/video/LpvSHL9lcK_39025830.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e0c25a41ac045fdaed3ca4979c6aeae66ac768db3814fcc21f4d3a526cf6343 +size 2644875 diff --git a/video/LqRGsGWOTX_39018076.mp4 b/video/LqRGsGWOTX_39018076.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1735635155845c0bcaa30f350d01f90597bac91c --- /dev/null +++ b/video/LqRGsGWOTX_39018076.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cdaca362c59f3383a29e94ec6faacc0fc9f7d1d0132b43a33f49eb6313f389d3 +size 2228696 diff --git a/video/LqdcdqIeVD_39028799.mp4 b/video/LqdcdqIeVD_39028799.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..42e51474d7c70e06e7727cee52c5f62cc8421f34 --- /dev/null +++ b/video/LqdcdqIeVD_39028799.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a922df9e00515d88c89877dc39f2586b8720afd227caee45876b80954f406eb +size 2819084 diff --git a/video/Lt6wO0oZ8k_39024393.mp4 b/video/Lt6wO0oZ8k_39024393.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..004e5de8daf9ae558758bf58c0225041915ec6e9 --- /dev/null +++ b/video/Lt6wO0oZ8k_39024393.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d4b968bdd2fd4d16633b0ffc4cd74926c2b41b7501880c3d6537bf93cc277896 +size 2656345 diff --git a/video/LuCLf4BJsr_39026276.mp4 b/video/LuCLf4BJsr_39026276.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..42458332934710eef5797bb37565478751df4f6a --- /dev/null +++ b/video/LuCLf4BJsr_39026276.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6e9cc95c48ae39934b32e7b5b47f22ab437ed9c1ba14c2dda614913656ffed2 +size 2682786 diff --git a/video/Lvf7GnaLru_39019171.mp4 b/video/Lvf7GnaLru_39019171.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..7e062f87dda8d6a8a61223f611e02d063f3a87ae --- /dev/null +++ b/video/Lvf7GnaLru_39019171.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d34d9aceaa895381586ab451fe178307ceb76d2225b85b98caba595025be922 +size 2896089 diff --git a/video/LxxIiInmuF_39026922.mp4 b/video/LxxIiInmuF_39026922.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e9bd572f094acc3338ec3722b370bd70d45add01 --- /dev/null +++ b/video/LxxIiInmuF_39026922.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cba172faedf3460298d6ed99d656d46a1e30f59e14196a917c50fc1f0c8d374b +size 2329992 diff --git a/video/LyAFfdx8YF_39027341.mp4 b/video/LyAFfdx8YF_39027341.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4fbbc00cdaa85422bf2e03d5692b685db9ca1c64 --- /dev/null +++ b/video/LyAFfdx8YF_39027341.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d2a3f667d8d7970a37c096e0d99c64f4702e0687c44e352fa895375e5e47628 +size 2280657 diff --git a/video/Lzl8qJYXv5_39028608.mp4 b/video/Lzl8qJYXv5_39028608.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c9af53aba9360f48656e35b3ad9c23311e8edf56 --- /dev/null +++ b/video/Lzl8qJYXv5_39028608.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:694323f075520b1b8460ff1ba55ead2ad05c251f0b709b6622e38a321f8f3b6c +size 2958098 diff --git a/video/M1PRU0x1Iz_39027923.mp4 b/video/M1PRU0x1Iz_39027923.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c5d189f2c566e59bd10f3d066af2250693e83c14 --- /dev/null +++ b/video/M1PRU0x1Iz_39027923.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc0f3fb287c6dbd391f892273c12e0511f333d530c1e4188b1940ac455e5ec6c +size 2406755 diff --git a/video/M2QREVHK1V_39025034.mp4 b/video/M2QREVHK1V_39025034.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a169bbd99a81ba53786e9781ac5e145c3577d526 --- /dev/null +++ b/video/M2QREVHK1V_39025034.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:59828316a1676a1d5d8daa26417a5a2cd0c225533b66e6ab48915a26d20045c6 +size 2621196 diff --git a/video/M2UzLRoqic_39027663.mp4 b/video/M2UzLRoqic_39027663.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..207b6fdd957ca0a06367327a50bfee065981f96f --- /dev/null +++ b/video/M2UzLRoqic_39027663.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf135ccd5ce9be453308f08751dd60b0d49312a769b6fa68433ed54704bf85de +size 2473139 diff --git a/video/M3BIsgGQNb_39027400.mp4 b/video/M3BIsgGQNb_39027400.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7b706db2f78500a3064eba27e2b52380cef64f5b --- /dev/null +++ b/video/M3BIsgGQNb_39027400.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:583d862d0ea7418dc4222018b1a868bc9eb5d4c8c684e9262bf82d6cef553de0 +size 7776 diff --git a/video/M75dBr10dZ_39027482.mp4 b/video/M75dBr10dZ_39027482.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ec07e075eb409debb1ac939048e8b58d5a699949 --- /dev/null +++ b/video/M75dBr10dZ_39027482.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b7f526bcf09ad845908d9cc276c8e09163c4a486939a6e9e6dad2f5a3e449df +size 2530120 diff --git a/video/M8dy0ZuSb1_39025009.mp4 b/video/M8dy0ZuSb1_39025009.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3b4f71ced8e1edf725deb9026addc2683a4e314d 
--- /dev/null +++ b/video/M8dy0ZuSb1_39025009.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1fdb0678e862b30c667f05b628348c1e4815a22cf0e980160cdae66bce3b5408 +size 3088991 diff --git a/video/MCl0TLboP1_39018068.mp4 b/video/MCl0TLboP1_39018068.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2692441bc170f7c53882629695821f8ca933a282 --- /dev/null +++ b/video/MCl0TLboP1_39018068.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da5949faf1a70040954a35ee9ec61db31e5e1d8e08d6bd23eb744a6492fd11ac +size 3307787 diff --git a/video/MEGQGNUfPx_39018067.mp4 b/video/MEGQGNUfPx_39018067.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..94bd60f8a4852ab2e2ae00f550878ae04a3c17c7 --- /dev/null +++ b/video/MEGQGNUfPx_39018067.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:17eada88bbaaebaead01f3cca9542c2ae9128c472e0c5ede52d25aea3c917ded +size 7826291 diff --git a/video/MFCjgEOLJT_39018910.mp4 b/video/MFCjgEOLJT_39018910.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..74b51a1b2518a18441c845f07fe31ed67494a452 --- /dev/null +++ b/video/MFCjgEOLJT_39018910.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2e145fb2deca61f288dfce1192004f50c8119512b5e33011aa407af77d47a006 +size 2533991 diff --git a/video/MFKfm5scHi_39025999.mp4 b/video/MFKfm5scHi_39025999.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d70cfc98e065dfd792fee259d77c9423ce7c2ef6 --- /dev/null +++ b/video/MFKfm5scHi_39025999.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d67ce0396a6074dc4a949bbbf26621e6238bfe161f63e3f26f072a55c063d45f +size 2652754 diff --git a/video/MIEnYtlGyv_39018066.mp4 b/video/MIEnYtlGyv_39018066.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fb363b70b29ada55839b963cae714bbd99a73d29 --- /dev/null +++ b/video/MIEnYtlGyv_39018066.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:59d749bf538148835d136a7ad7d688e6f892bea884a51b4b12af72b4a02fbfc9 +size 2557464 diff --git a/video/MLBdiWu4Fw_39018851.mp4 b/video/MLBdiWu4Fw_39018851.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..36c128c372c4a41067e7f8002fd75b40f1a69d58 --- /dev/null +++ b/video/MLBdiWu4Fw_39018851.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa9d2ac712a7e43c28a829f92249efda7bb80d467aea3bad716f074645c72497 +size 2381667 diff --git a/video/MLgFu6dQYc_39026654.mp4 b/video/MLgFu6dQYc_39026654.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c4d8a61ba273525200645c2dd24125d203c7b296 --- /dev/null +++ b/video/MLgFu6dQYc_39026654.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d75e65f9adb0610357fee7139627f3bd536ca86a5be9faf51016735f3c1d8f5 +size 2552463 diff --git a/video/MN3yH2ovHb_39018062.mp4 b/video/MN3yH2ovHb_39018062.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ffa12e118425af01bebedf38f76b4cc18f2f8670 --- /dev/null +++ b/video/MN3yH2ovHb_39018062.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a2524901572af91a282932ba89ae28b5119ff6240ddc5c622cdd20e724d5f0d +size 2230922 diff --git a/video/MN4nt01TeO_39025019.mp4 b/video/MN4nt01TeO_39025019.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..16ba2be98116483cb7fa1688c2e762131a73b634 --- /dev/null +++ b/video/MN4nt01TeO_39025019.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:602bf8cc85a3c35f5765226b2233acd93906f1fb6810be5d7e6085a254040069 +size 2586462 diff --git a/video/MNg331t8Tj_39026875.mp4 b/video/MNg331t8Tj_39026875.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c1de72a3688a0353223a238aa4676e5ba5772345 --- /dev/null +++ b/video/MNg331t8Tj_39026875.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3101b3eac9ca71d3fbc961432b11eed7ac6fadfadb19ef161e794f7f9097a86c +size 2496721 diff --git a/video/MNyOI3C7YB_39018060.mp4 b/video/MNyOI3C7YB_39018060.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1a4bee2b1bfe43f5a425e48795ab4ef070c5adce --- /dev/null +++ b/video/MNyOI3C7YB_39018060.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a62b464ad74c8135977af211e826e493011ca6804e313a43b6e82338518a75b +size 2918774 diff --git a/video/MOFwt8OeXr_39024575.mp4 b/video/MOFwt8OeXr_39024575.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1213247ec9c97f21dde0c98629d9d4110665e71d --- /dev/null +++ b/video/MOFwt8OeXr_39024575.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a198f7be97762add581b91d0198a0f80b43f2d495b79117d311cf0ef7792519 +size 2458313 diff --git a/video/MP7j58lbWO_39025660.mp4 b/video/MP7j58lbWO_39025660.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..58da6ea7feb0112bd63e117215c71f9ff14bd70e --- /dev/null +++ b/video/MP7j58lbWO_39025660.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e1e42f85cec92fa809653c52d0d8b56ac133adfd4339aa96932f3bba8aa0865b +size 3177066 diff --git a/video/MQIET1VfoV_39028445.mp4 b/video/MQIET1VfoV_39028445.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bd245cf511659e6999ff5ef80aff9c1a4d91132d --- /dev/null +++ b/video/MQIET1VfoV_39028445.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6bd0de1d16bb4d3ed3c9c69369e43c6d5ba81755516f12739b829b8d462fe548 +size 2393561 diff --git a/video/MSe8YFbhUE_39018056.mp4 b/video/MSe8YFbhUE_39018056.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d561998e7270beb14abb2a1b31fc53bb5687c58e --- /dev/null +++ b/video/MSe8YFbhUE_39018056.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e012b88891612f58269e7697a8b5fb5390b423f1c3021968828b44b88fd7babe +size 2678136 diff --git a/video/MSsQDWUWpd_39027710.mp4 b/video/MSsQDWUWpd_39027710.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d0bde44244a2b5cb55785cb8b8ce6aba9d8bcd57 --- /dev/null +++ b/video/MSsQDWUWpd_39027710.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4cdde05061aeb5dc026993ea65784c47a2bc689a72f0324ec13b753d82ac633b +size 2619697 diff --git a/video/MTMShU5QaC_39025241.mp4 b/video/MTMShU5QaC_39025241.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f22c79104d0e261ebdd0d8dc8e212162f44689dc --- /dev/null +++ b/video/MTMShU5QaC_39025241.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0f08d8fc996e9f05a9b3b74b74edaeba676455c6746507e48cb572828faf92f +size 2638616 diff --git a/video/MU27zjHBcW_39025514.mp4 b/video/MU27zjHBcW_39025514.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b71c3a5ba77156bc2bac98b1c52002c3bf736adc --- /dev/null +++ b/video/MU27zjHBcW_39025514.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:da06eefb748242c28641e2d2d91f98c3fc6abf97b1394dc4e1c930fa5302b887 +size 1969437 diff --git a/video/MXOzgjlWDF_39026752.mp4 b/video/MXOzgjlWDF_39026752.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..af0cb16bb97aff572ebf166e9c7fbf4fcf06b2fc --- /dev/null +++ b/video/MXOzgjlWDF_39026752.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f46eb1b88bd9db68dbafcce3c29965553376890cc06467867cc4eb35d8f20de6 +size 3358839 diff --git a/video/MXY0qsGgeO_39025929.mp4 b/video/MXY0qsGgeO_39025929.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6263cbbf0237444f6cb9adec1a4e787ba20a1768 --- /dev/null +++ b/video/MXY0qsGgeO_39025929.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f931bd69783d07b56aa19fe426f97d9d71e6224eb74c643071b6c20958d7ef2 +size 2980920 diff --git a/video/MXzr10iX2d_39025163.mp4 b/video/MXzr10iX2d_39025163.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..789bb16dbb7a5164800a4029636d702d0f1ecf7f --- /dev/null +++ b/video/MXzr10iX2d_39025163.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5908ebeb253b5282a42b54781a52380d862cb174ddc27cfbe04c12529a4dccc +size 905672 diff --git a/video/MY0qlcFcUg_39018054.mp4 b/video/MY0qlcFcUg_39018054.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dfa6a123c6980a7dd0b3b4f717dda9c611bdb75a --- /dev/null +++ b/video/MY0qlcFcUg_39018054.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23bb79c0591174f5472516585bdf7dc421f57028bf2f11ba9e56341042913359 +size 2647274 diff --git a/video/MYI443zCvv_39026763.mp4 b/video/MYI443zCvv_39026763.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..89b1052912f3e8b2cea53033c4fcdd536d87656e --- /dev/null +++ b/video/MYI443zCvv_39026763.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ead9438ba5df8980cbf2cd8c656cda1189e3c2faf11d7c768b54d0cba4c8f33 +size 2207339 diff --git a/video/MaDykgj4Ru_39025907.mp4 b/video/MaDykgj4Ru_39025907.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..276146bf8ab460c986a3a1459747bf990ffb258f --- /dev/null +++ b/video/MaDykgj4Ru_39025907.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2c39df1b8397da2cbeae1f4fab0ca29e5d8b6a18f09246fc891cd39646990a7 +size 2812129 diff --git a/video/MbZuh8L0Xg_39025509.mp4 b/video/MbZuh8L0Xg_39025509.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d9c056c1d7c41981ed8f20879e3bcc56c3c55a93 --- /dev/null +++ b/video/MbZuh8L0Xg_39025509.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6752ccf27ac31d392c10d88e82d25d5d0a0542891f911b1fbf733c0dbdc5f4d +size 2544754 diff --git a/video/MbfAK4s61A_39018856.mp4 b/video/MbfAK4s61A_39018856.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..00fb49182e41299a89329b179bfc8e63d575b8bb --- /dev/null +++ b/video/MbfAK4s61A_39018856.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe63bece919ee0f04162916c3f6eeeeb13462fdceaeb5355c1addb85091bcb2a +size 2615077 diff --git a/video/MelYGfpy4x_39027463.mp4 b/video/MelYGfpy4x_39027463.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..aa4dfbf4ceabd5e8a25bac8aac767c5a7a272add --- /dev/null +++ b/video/MelYGfpy4x_39027463.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:250861bee9d1efaab2ed01792165310b4e6c01f01672c8e129309b248c97886a +size 
1393897 diff --git a/video/MiRPBbQNHv_39018048.mp4 b/video/MiRPBbQNHv_39018048.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..94d786d12dd143fb5b026850b1b4bb50cb489e46 --- /dev/null +++ b/video/MiRPBbQNHv_39018048.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2eb41c21344e4e1efce3117c9657c9190e0212d772b45cef90c531b57b24960f +size 1613741 diff --git a/video/Mktgayam7U_39027757.mp4 b/video/Mktgayam7U_39027757.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d26d0649479bfbb29e3ac0190e968f705757b267 --- /dev/null +++ b/video/Mktgayam7U_39027757.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb926ad7db08c4119b77485a5be8a4e074745bf2523924a5670f487c66248a8b +size 1838141 diff --git a/video/MncgmW8b6q_39027928.mp4 b/video/MncgmW8b6q_39027928.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8b74eb94cc291515ccf689a23d712eea022ba136 --- /dev/null +++ b/video/MncgmW8b6q_39027928.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb2af98505ef7f18c9d05df9f2bdc659fc4b850121a3f03617091ae8905c1c81 +size 1981359 diff --git a/video/MrYiwlDRQO_39018046.mp4 b/video/MrYiwlDRQO_39018046.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..af20c5997be4efc40a3dfe01abd8de518750f9ef --- /dev/null +++ b/video/MrYiwlDRQO_39018046.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:86750ce1bdb779a6b0f9047e3e1dea70dae2e0f5c3a251aaf8f5fd3bf1a2dc57 +size 2585175 diff --git a/video/Mrs9a1XQAp_39024994.mp4 b/video/Mrs9a1XQAp_39024994.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dc070713413dcbc01aa302922a145a1ba224bb53 --- /dev/null +++ b/video/Mrs9a1XQAp_39024994.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21d78466b9cbdc4150313461d50cd38aa4a614f2ddab8c4687b46d151fecb9e7 +size 1594876 diff --git a/video/MtRvzJBsBA_39025074.mp4 b/video/MtRvzJBsBA_39025074.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a22fcff905fffc223a6b8aef78e319334ad7bedf --- /dev/null +++ b/video/MtRvzJBsBA_39025074.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5376a2069658a8fb405a1c89b47da7b624146e20410e0956c32e141f5554b820 +size 2473740 diff --git a/video/MuPlJ9fT4b_39026330.mp4 b/video/MuPlJ9fT4b_39026330.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a5fdbb1ebb9ef1c98252fde1b2541b4f52ade3b8 --- /dev/null +++ b/video/MuPlJ9fT4b_39026330.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:358dc5c53668e18ae7fc83420d873f433632d36a5d59e3d3984ab516c70476e6 +size 2097516 diff --git a/video/MwFeh4RqvA_39027035.mp4 b/video/MwFeh4RqvA_39027035.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..638df055f53dff5672929e04e7362cb94491543a --- /dev/null +++ b/video/MwFeh4RqvA_39027035.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e886eb629d0391150d66ecd856928c2a27544f505dd91509b49dc41efce1389 +size 2511833 diff --git a/video/Mwj57TcHWX_39027971.mp4 b/video/Mwj57TcHWX_39027971.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3bfbf105920a9ec82b867d7c1816533d9828fe38 --- /dev/null +++ b/video/Mwj57TcHWX_39027971.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2607ee6ce5264d0c54ca73e2e3b85aa05d35751ea90a367850c791c33c8106a +size 2976197 diff --git a/video/MwmmBg1VYg_39024560.mp4 
b/video/MwmmBg1VYg_39024560.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..03673b53c23f9a9ba182fadc03c2d6cb124298b2 --- /dev/null +++ b/video/MwmmBg1VYg_39024560.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:979587dfac021c63f3943076052b52033a788fa4c3d7c0d390e61d9c1f8bb4a7 +size 3336943 diff --git a/video/MxWpCherzD_39024547.mp4 b/video/MxWpCherzD_39024547.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a1224f64e25c8bd1277369bc7f275336834989b6 --- /dev/null +++ b/video/MxWpCherzD_39024547.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad9d92492551e9b5b228f9bed3ad93a7bc16747a692eb98d86051dbf4adc3eed +size 3066956 diff --git a/video/My7lkRNnL9_39017203.mp4 b/video/My7lkRNnL9_39017203.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bbc46f71d00e7c740ba4daff62bffa4c1d1de840 --- /dev/null +++ b/video/My7lkRNnL9_39017203.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c4c90c8ebeba1d935743142e6d0da1848545c55e0b63d81bf4ddf77890ac8cf +size 2710695 diff --git a/video/MyVyH5Jo1l_39024368.mp4 b/video/MyVyH5Jo1l_39024368.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..072b834929f4f53191e0ac9a35cdb8592451be9b --- /dev/null +++ b/video/MyVyH5Jo1l_39024368.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d2f4eb118269c51b99150f44e8d7aab6800bcb81a4c6f2630db4020a85596bce +size 2116046 diff --git a/video/MzTdZhMjeC_39024668.mp4 b/video/MzTdZhMjeC_39024668.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..63413d7f316e56bb955d53a1e3f00111cbb52f96 --- /dev/null +++ b/video/MzTdZhMjeC_39024668.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93405cd353ba723e9b71e6cad6ef389f201a60a8a6034ed84efa0eb2eea0e055 +size 7756 diff --git a/video/N0gT4A0jNV_39017692.mp4 b/video/N0gT4A0jNV_39017692.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..02fe7be98d90e94f6c2ee11ec177cf4be77c7d73 --- /dev/null +++ b/video/N0gT4A0jNV_39017692.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b60e93ed57f7695af108994129a07698ccced7b64db9e1628809fc2d4502fa06 +size 1938550 diff --git a/video/N0nTk5BSvO_39018045.mp4 b/video/N0nTk5BSvO_39018045.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..25ed8e8125dbd5b293a315b33cb651774bc54c7a --- /dev/null +++ b/video/N0nTk5BSvO_39018045.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0e9516767dc4c4edb0fbee7abab8bf358c703163cf88447e2e9967a8c2201a65 +size 2459411 diff --git a/video/N12B6wvA55_39026002.mp4 b/video/N12B6wvA55_39026002.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3f84781f81b5923737111e39ef7a439a8d4c1831 --- /dev/null +++ b/video/N12B6wvA55_39026002.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5996555b4fadde3846e01c08dbdda40ab5ccdc26795dc421ca2c4c2323261d64 +size 2310055 diff --git a/video/N23A4ybMJr_39018043.mp4 b/video/N23A4ybMJr_39018043.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b486cd422b1ffe72202872a68683bd199c8d4986 --- /dev/null +++ b/video/N23A4ybMJr_39018043.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aec9ab1016973ce00b82833622b9fd197c750c1f3c2a6a62f94d2192a6a148c7 +size 2188328 diff --git a/video/N2RaC7LO6k_39026940.mp4 b/video/N2RaC7LO6k_39026940.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..2b1184a662946b1139ed2be391ae7484eabdc784 --- /dev/null +++ b/video/N2RaC7LO6k_39026940.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:45920d905146f51e7a245bef312ebd0538283a87858328f55a50d704eee38405 +size 3322228 diff --git a/video/N4quRxE19p_39026150.mp4 b/video/N4quRxE19p_39026150.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..042f85f2a854b5990794b3d243ccfd7dc315fd82 --- /dev/null +++ b/video/N4quRxE19p_39026150.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:864a512be2b805fc6aadf76af67a47df403f4be377d16c9b008304c8dab8989e +size 2777842 diff --git a/video/N6zJ8DclC2_39025864.mp4 b/video/N6zJ8DclC2_39025864.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c82e5e7e3bfc92367451c4c8689080d7f8813e93 --- /dev/null +++ b/video/N6zJ8DclC2_39025864.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:18a3c228b8f58c08681aafb641e48b4c56fedf87b865f8201e7d159ee425f301 +size 1302809 diff --git a/video/NAcHv7vtL2_39028260.mp4 b/video/NAcHv7vtL2_39028260.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dedf0788c3d3c31397db78989d876026168db703 --- /dev/null +++ b/video/NAcHv7vtL2_39028260.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ccb4a40838c22fba61ad6eef4010590e65bc50b07a36b127104413a0079676d4 +size 2389899 diff --git a/video/NBq1vmfP4X_39026903.mp4 b/video/NBq1vmfP4X_39026903.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fc03d1f256b46f09656bc80ce2e938264441f7ca --- /dev/null +++ b/video/NBq1vmfP4X_39026903.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ce996ed8dbd50b72409b0d19b24617df19511ab616bd57fe19187bf177062bd +size 2390336 diff --git a/video/NCX3Kgb1nh_39026316.mp4 b/video/NCX3Kgb1nh_39026316.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d6913bb9d76d155fdafb35f0d776a8316d1b87ba --- /dev/null +++ b/video/NCX3Kgb1nh_39026316.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d17825f0c13fdb8320d7fb1c5418283ff4c3db954fdff498b8bd1c13294f0f91 +size 2302522 diff --git a/video/NGpMCH5q7Y_39024516.mp4 b/video/NGpMCH5q7Y_39024516.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..868b3b3de67aa9a51b28187581aa2b0517341b7a --- /dev/null +++ b/video/NGpMCH5q7Y_39024516.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:424015b8bb4a7cdc812f7a9520e0170ff2185ea23af02e5d4931cf5fe5cc6a80 +size 1607108 diff --git a/video/NGuGVT7ar2_39028347.mp4 b/video/NGuGVT7ar2_39028347.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e3c78f18ca1efa8eebdf539fc690d88eb2c8684e --- /dev/null +++ b/video/NGuGVT7ar2_39028347.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21652aaadabcb32505c32ba044030ea07e14829f51f90ef81de249bf2fa5d560 +size 970962 diff --git a/video/NKPXHzYusG_39028747.mp4 b/video/NKPXHzYusG_39028747.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..76602ec1ae985bdaaecefc3387d77d3c65ff306d --- /dev/null +++ b/video/NKPXHzYusG_39028747.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9020e65f944d5e527053dd8240943dc67b90c0e11c17afffa93cb816530251b8 +size 2953689 diff --git a/video/NKzLqRgG45_39025633.mp4 b/video/NKzLqRgG45_39025633.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..cb522c9185275c59cd7b44993e99895cc54c9d6c --- /dev/null +++ b/video/NKzLqRgG45_39025633.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7ab07f8fdaab459d69b68bc3946a26c53a8c5c1cec35bb6ded115ffc0a77bffd +size 2423785 diff --git a/video/NLUYZ4ZqNq_39025374.mp4 b/video/NLUYZ4ZqNq_39025374.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..392dddc492a604bb71142ced7a0c64c93da1191c --- /dev/null +++ b/video/NLUYZ4ZqNq_39025374.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a3ca83f2fad0ed4ba46155455ca664eba0ce7b75fed97d36e01a40a20187a77 +size 2695998 diff --git a/video/NLevOah0CJ_39018036.mp4 b/video/NLevOah0CJ_39018036.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8eda492f52b227e9a44623ed1d0fe65ca400847e --- /dev/null +++ b/video/NLevOah0CJ_39018036.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c8d7da2d925d77cdd274ac4c1875b79064b1fd6c6993dd0f202b2180babdfa0 +size 2647659 diff --git a/video/NN9U0lEcAn_39028232.mp4 b/video/NN9U0lEcAn_39028232.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7aaa3312764150ef4d53826010c8913fb180764d --- /dev/null +++ b/video/NN9U0lEcAn_39028232.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1add8273d52a2f0d57b7b2f9b41b145e8eeff397e32da92fae65027c00f4e795 +size 2839592 diff --git a/video/NPu7Cdk2f9_39026499.mp4 b/video/NPu7Cdk2f9_39026499.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c5cc4e82d8b785155d366b26f01db725c8ba05aa --- /dev/null +++ b/video/NPu7Cdk2f9_39026499.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30669759d7dfa215739e98b48da04697616c243315fecd3fc5d8ab57d80d5e07 +size 2698583 diff --git a/video/NQB9myZksw_39025458.mp4 b/video/NQB9myZksw_39025458.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..15e25597767e44e518895239bcae84a77232195f --- /dev/null +++ b/video/NQB9myZksw_39025458.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1647fefc87f8a09ea85583a31d23c0ca1cb351bc85f791df5537ebcdec8e8e7 +size 2766828 diff --git a/video/NT8Z5NjwxF_39026053.mp4 b/video/NT8Z5NjwxF_39026053.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..15b0f3951f130d6301178ff363fd61c40a1c614e --- /dev/null +++ b/video/NT8Z5NjwxF_39026053.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3eb7592f24b06d7ad9af2d23efcb5eda8a1564248b9edf117dd56ef4949f1566 +size 2496057 diff --git a/video/NTWXVvIXJM_39026759.mp4 b/video/NTWXVvIXJM_39026759.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..84ae5ad13c9c30e706161ba964b44ee2e1a24cb3 --- /dev/null +++ b/video/NTWXVvIXJM_39026759.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c12943403d5ab5f02e499ebf32f9556bb57bbbedbd9abe391c314a138d78f73 +size 2649929 diff --git a/video/NU3tE3lIqf_39025245.mp4 b/video/NU3tE3lIqf_39025245.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b1dd193143f22dade09aad3ebcd05e1a8cc5eea6 --- /dev/null +++ b/video/NU3tE3lIqf_39025245.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5eb376f0fa93ab5038124fec73283f9aa4d4d5c2b9fa27231f6b1b61ba380145 +size 1182679 diff --git a/video/NaCXcUKihH_39025158.mp4 b/video/NaCXcUKihH_39025158.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..72739e45f590d5c607dce1c694dc36aae67b388b --- /dev/null +++ b/video/NaCXcUKihH_39025158.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8a13fa6121794483d9b0f204feceb1f5a0e2a6964706693a65c18b9c31f9abc3 +size 2825491 diff --git a/video/NadTwTODgC_39028131.mp4 b/video/NadTwTODgC_39028131.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4b2d01f25117553faeac28b8a1d4165e7a5c616d --- /dev/null +++ b/video/NadTwTODgC_39028131.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61cb54ef9eccc549c383f4a1159692ecd674425a1d7b7ff094f47f82202bb2c0 +size 1919699 diff --git a/video/Nb5xlelV0C_39027536.mp4 b/video/Nb5xlelV0C_39027536.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fc29bab3b83e0e21d826d1b31760361f5c7aa54c --- /dev/null +++ b/video/Nb5xlelV0C_39027536.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a747f6f3f3a23a45db74efd55f9225a609b227d6350d7dfd2d62f9e11f416de +size 2912743 diff --git a/video/Nf4MHF1pi5_39026525.mp4 b/video/Nf4MHF1pi5_39026525.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e1233de674c02019405673ad9400b7016fa7df2e --- /dev/null +++ b/video/Nf4MHF1pi5_39026525.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c3f55540261d65bc8d48c5ad89e55c00f73a073ce5ae180c6fcfa552a007d8d5 +size 3133071 diff --git a/video/NgaLU2fP5D_39018028.mp4 b/video/NgaLU2fP5D_39018028.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8de686b7f52ba62db9f2d688c2e3672fb82d3506 --- /dev/null +++ b/video/NgaLU2fP5D_39018028.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0fcde02e9c2ccf184ef936345c8c65ded664bcf4209e61713641652e054b56a +size 2186658 diff --git a/video/NgyT80IPUK_39028635.mp4 b/video/NgyT80IPUK_39028635.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2503adbbbe79adf7d8e447aa5c611ea2bd61108f --- /dev/null +++ b/video/NgyT80IPUK_39028635.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca50abadfb071c04933ad9b1379d4bf2ea367e0b1b1539852082f825f29d96c4 +size 2698920 diff --git a/video/NjNfLdxr3A_39019059.mp4 b/video/NjNfLdxr3A_39019059.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ecfdf590a7b6a559243c0410da92ee312fbf134c --- /dev/null +++ b/video/NjNfLdxr3A_39019059.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b9cf3a7c735dd13ee49f8c9e673e32196d0be621b1594825a448ad51a2164e1 +size 1570950 diff --git a/video/NjewXJUDYq_39025005.mp4 b/video/NjewXJUDYq_39025005.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bb4da2c6ddd7ccd5e2b5f9eb2bf16953599dc4f6 --- /dev/null +++ b/video/NjewXJUDYq_39025005.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bd6280f09ceeab99eeb12474de105b41801328c74c64e1d8d7debeb861402988 +size 2856041 diff --git a/video/NkmJotfL42_39018659.mp4 b/video/NkmJotfL42_39018659.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8a43e15a5e5b88375b5be4ea388a415bb1849598 --- /dev/null +++ b/video/NkmJotfL42_39018659.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39d6104b24c0ced5bc87d32af021fa87ddbbfe945d82a566d69e256cf46693b7 +size 1643574 diff --git a/video/NlpHKNjNNZ_39026254.mp4 b/video/NlpHKNjNNZ_39026254.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..68b37545a918fb29a2b5c9fbe9cba56a44929238 --- /dev/null +++ b/video/NlpHKNjNNZ_39026254.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6a52bdc101a61e2208fadff0d0e977b7067dafbf9fea84e9c73ea4ed302affb +size 2711102 diff --git a/video/NmlnmLYMZ4_39026273.mp4 b/video/NmlnmLYMZ4_39026273.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..214b69115e5310fd6857ebbedb1f228ef60f52ca --- /dev/null +++ b/video/NmlnmLYMZ4_39026273.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3c03cda9b24ab3c1ca093a28e7f438b850fc35ac8d78321337abb2da3e1a30c +size 2928459 diff --git a/video/Nmmiyjw7Xg_39027069.mp4 b/video/Nmmiyjw7Xg_39027069.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c52829a7aa86b6cbdd18c2bdd8aa13f14132f079 --- /dev/null +++ b/video/Nmmiyjw7Xg_39027069.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:656a024a300d89ec4b8acaa8b2bb15af6adc762457581f7b5569baaaf34a8f9c +size 2743211 diff --git a/video/NnAi0L5H8J_39027354.mp4 b/video/NnAi0L5H8J_39027354.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..682e7a6f183b19ad9afc5542cc30ca50ec91e95e --- /dev/null +++ b/video/NnAi0L5H8J_39027354.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a9ab2e27bdca237ed0fff692218b2d8fffec8e9e8fb8dd4cc878a6fcc4c225c0 +size 2225071 diff --git a/video/NnoAj91HZX_39025051.mp4 b/video/NnoAj91HZX_39025051.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e8b9533fe77da071b33fafa01b8bd606472aef93 --- /dev/null +++ b/video/NnoAj91HZX_39025051.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:435a9790a41972879c82064ac46e2c8592039183297d62f431386a18c25178fe +size 92610 diff --git a/video/NnyD0Rjx2B_39017038.mp4 b/video/NnyD0Rjx2B_39017038.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7339e5c984838930128af9798dd607e34718e0f4 --- /dev/null +++ b/video/NnyD0Rjx2B_39017038.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4831d93c8fab9e123ad25214a55d1661f52eeaf39db89a3ff43d2ff49b3377fd +size 2339848 diff --git a/video/Nq45xeghcL_39018022.mp4 b/video/Nq45xeghcL_39018022.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e04d6d6f320f511df4784ea5b1cc5950e3459489 --- /dev/null +++ b/video/Nq45xeghcL_39018022.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc47306972d591192f14f83ded866d72575eeb38f03095a2e5c66a8cc7d8dbe0 +size 2448435 diff --git a/video/Ns0LQokxa5_39026448.mp4 b/video/Ns0LQokxa5_39026448.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9186ed4baa44f4b467cfe09d2998adf4f86a26a1 --- /dev/null +++ b/video/Ns0LQokxa5_39026448.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b92ad424b625c487565a41f7b47fcb71e34dd7fccb7fdd93171e476839ff8587 +size 2442082 diff --git a/video/Nshk5YpdWE_39018021.mp4 b/video/Nshk5YpdWE_39018021.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..93bd23e4c0230973009fce3bb9e7c109278db4e6 --- /dev/null +++ b/video/Nshk5YpdWE_39018021.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51d07c96852645da4daf29efa358d158f77bd9c675c7e9cfc27ed1620a21bd21 +size 2373379 diff --git a/video/NtNTfRTjE8_39028178.mp4 b/video/NtNTfRTjE8_39028178.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1458c1d8ce57132152c04b89198bc8caf84c3581 
--- /dev/null +++ b/video/NtNTfRTjE8_39028178.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:129b56c39da4abdbbce242d119f4374263b6b5e4e4598610e237e9c335c1a1c9 +size 2980653 diff --git a/video/NvbeD9Ttkx_39018018.mp4 b/video/NvbeD9Ttkx_39018018.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bd2544d4fdc4d8c1880f8581dcfb2759be53608e --- /dev/null +++ b/video/NvbeD9Ttkx_39018018.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2952a3760f166e93d7d933fff6ae78a5b0a336dd926483e20718731c3ce419c0 +size 2499498 diff --git a/video/Ny8NiVfi95_39018016.mp4 b/video/Ny8NiVfi95_39018016.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..465e5e69c4a060332097ec195565a0594f3aeffa --- /dev/null +++ b/video/Ny8NiVfi95_39018016.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:341ed02dc2d866c8b319e7121da550667c177965edfa495c98eeb216159a4b07 +size 2192255 diff --git a/video/Nycj81Z692_39027232.mp4 b/video/Nycj81Z692_39027232.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d187344d5b155953ad0853320b8e684acfa721e7 --- /dev/null +++ b/video/Nycj81Z692_39027232.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba27fecdee0967d5fb847dc49af6c943d4ea4595fd17fb87ae8b4e6679779219 +size 3023475 diff --git a/video/Nzfg1LXTdS_39027676.mp4 b/video/Nzfg1LXTdS_39027676.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7d17966990b37b9e7a7e5b72f37f081b6e2e750f --- /dev/null +++ b/video/Nzfg1LXTdS_39027676.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05e4704a6d11f78faf25d8dbd6aa7bf0a673160b91927829cc08b0562175744f +size 2702128 diff --git a/video/O1fp9nVraj_39025726.mp4 b/video/O1fp9nVraj_39025726.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1e99d8efa91d007d13e9d1a923bd1d24697fa8ab --- /dev/null +++ b/video/O1fp9nVraj_39025726.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5e1290e78449a4c35947b54bf5cce0a7e532ab714b3441af120b022ace64a5d +size 3367199 diff --git a/video/O23XfTnhWR_39024789.mp4 b/video/O23XfTnhWR_39024789.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2181a40bc09a4d6cd089dbf5308b5e3b989966fa --- /dev/null +++ b/video/O23XfTnhWR_39024789.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:669a9861103413242d864b01e9512d8077aef3c43f09526415f9a3a4a11b872d +size 1953454 diff --git a/video/O4RCFjVUBJ_39026290.mp4 b/video/O4RCFjVUBJ_39026290.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c32985f7bae277f706bff5ea9b679730255a139b --- /dev/null +++ b/video/O4RCFjVUBJ_39026290.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0c50f4fe92d8673e212715ba95ca9eddb1f0daaf736b5fb6e72cc701f1bef0bc +size 2161817 diff --git a/video/O7IN4nsaIO_39026590.mp4 b/video/O7IN4nsaIO_39026590.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7b0a11c9fb5f1e14ec946a9bdf4da76b356632cb --- /dev/null +++ b/video/O7IN4nsaIO_39026590.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5388280a486c15f3e9dff2ae924916e04b780ce8859bcd067966b4d53bce6e4c +size 3017072 diff --git a/video/O9PArxKLe1_39018759.mp4 b/video/O9PArxKLe1_39018759.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8102e16572f6fcc9c49d0756909c509b33eaeb9a --- /dev/null +++ b/video/O9PArxKLe1_39018759.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:74e031a2366c9a043bf1839cbc5beca290d0c68ab54ef38d0ead623a3dab9351 +size 2781014 diff --git a/video/O9RZAEp34l_39028526.mp4 b/video/O9RZAEp34l_39028526.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3bd3079c6bbfd0df38f636ecdc720f03f82ab5fc --- /dev/null +++ b/video/O9RZAEp34l_39028526.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d98ca2f600c34915f17b0ecf994e1686cee531925bc06bc8db94ebf1477d27e9 +size 2863441 diff --git a/video/OCcfKzXded_39027452.mp4 b/video/OCcfKzXded_39027452.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..418700587329067cdfc14fd1d229c4a2e98350ba --- /dev/null +++ b/video/OCcfKzXded_39027452.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6cb341adcbc3591cc03dd40d25b8e9cb31e4612da183c69d9f233251a79510f1 +size 2538032 diff --git a/video/OEL4FJMg1b_39018011.mp4 b/video/OEL4FJMg1b_39018011.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b6d0b4070277c5adf79e2a7450ccd43e108a30b1 --- /dev/null +++ b/video/OEL4FJMg1b_39018011.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b3fd2bcdb88302c0adcf36826e275bbe2fc06f4a91d67ddb58968b7da8f71ebf +size 2710097 diff --git a/video/OFmclNhp0y_39026091.mp4 b/video/OFmclNhp0y_39026091.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2bd03346b033c8e96f2489663ceb3b48158fc0bd --- /dev/null +++ b/video/OFmclNhp0y_39026091.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5fed5462dc6d755ff7e50dcb1614e38bff23267237352de5402a1d525ac17ff3 +size 2265837 diff --git a/video/OI3RoHoWAN_39018009.mp4 b/video/OI3RoHoWAN_39018009.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cf1db9ee7c831cbce7c3e84cb4c85d9b8989d377 --- /dev/null +++ b/video/OI3RoHoWAN_39018009.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c3fe413a924d2786353e55ffa5b0f966e186153355e8b05d24ca94eb43e11c2 +size 1499446 diff --git a/video/OIsUWQSvkD_39028819.mp4 b/video/OIsUWQSvkD_39028819.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3221821239abf56e8dec5af295c0c488f0297661 --- /dev/null +++ b/video/OIsUWQSvkD_39028819.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d1eac923eb80e123e60193d7d55db4bccfdf065bcfba8bf456053540c98f954 +size 1279488 diff --git a/video/OIsahq1UYC_39018008.mp4 b/video/OIsahq1UYC_39018008.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bcceddf5f3553b1868a9787c4b3171fff1f4b876 --- /dev/null +++ b/video/OIsahq1UYC_39018008.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b433c0ba7b01a82345deb8ba1eeeba1f377af84132cd844e0c4b9aff43b0f66a +size 3006221 diff --git a/video/OJxua0PAIo_39024410.mp4 b/video/OJxua0PAIo_39024410.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e4ac1356d6ed2f86fe1b4c15cbb45288abc8601a --- /dev/null +++ b/video/OJxua0PAIo_39024410.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d080ed7b1e26d14a007a8e8826c31ad6a6dca3d807ae8515015f8dfd233fd2c +size 2112953 diff --git a/video/OONojmx3wH_39026837.mp4 b/video/OONojmx3wH_39026837.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..20988d027599a3b05d82fbb5b28f8cf56eaf7fe7 --- /dev/null +++ b/video/OONojmx3wH_39026837.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:dab6417cd799a74fa9967dd0b2724949e4ce5889f4d89c2b277269a4fad523a6 +size 2306455 diff --git a/video/OOiRS6fiM7_39028846.mp4 b/video/OOiRS6fiM7_39028846.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a1e4c4b6c0a89407d58c0482532290359a41e761 --- /dev/null +++ b/video/OOiRS6fiM7_39028846.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39925170000682ea23fc4d4ad49760c9e7380f38689930a3f285698bf3005aaa +size 2685233 diff --git a/video/OP2D9sIdo4_39025436.mp4 b/video/OP2D9sIdo4_39025436.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c984ed5112fba71295a143c993dbe8437c91e7a7 --- /dev/null +++ b/video/OP2D9sIdo4_39025436.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:df250675d26d613b6f351f5d6729b583cdc76e4b360b097d2471f4d93d367385 +size 1941135 diff --git a/video/OPrPegYIZo_39026551.mp4 b/video/OPrPegYIZo_39026551.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bc6d47b81c4b43bb4e0d93d798d25425935fc9ef --- /dev/null +++ b/video/OPrPegYIZo_39026551.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ac928023e9074a3f8438b6f5cc7bb0dfec7530501c48445c0fd9c88abc8dbb76 +size 3059905 diff --git a/video/OQUg2T4qJB_39025180.mp4 b/video/OQUg2T4qJB_39025180.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..94df6c82b4b100399767301eca188e444e1cae52 --- /dev/null +++ b/video/OQUg2T4qJB_39025180.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:846c9c0d60449ea66b4b29a8cc2c9ce686e477e72bf8287a3a846aee5530e84f +size 2882850 diff --git a/video/ORQiboaRqY_39026280.mp4 b/video/ORQiboaRqY_39026280.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9473b0dfc3d09c37bb187c6aef19e70aecc2d326 --- /dev/null +++ b/video/ORQiboaRqY_39026280.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0562069ed3a0661cb4986d1a964d5437af41a1c48888b3c97791adf74828e053 +size 2097949 diff --git a/video/OUkZXbbwQr_39018001.mp4 b/video/OUkZXbbwQr_39018001.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..54adb192c61410c45b317afa3b8bcfa62d23e568 --- /dev/null +++ b/video/OUkZXbbwQr_39018001.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:707d8294b4e220b4dcc777c2417bf3ecebd545eed3c4844bf8282eb4c02853ca +size 2460062 diff --git a/video/OV8YUk151r_39026610.mp4 b/video/OV8YUk151r_39026610.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ce6f1a799963f19a9b5a0cbc3330bb1b1fc5886a --- /dev/null +++ b/video/OV8YUk151r_39026610.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24db2fffafebc21262f0192570c8931615e2da459692b9b2453e80afe86ad8a6 +size 2509949 diff --git a/video/OWmu3QOa0O_39027999.mp4 b/video/OWmu3QOa0O_39027999.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e20d6f41e0dfc0384a8dcf24fefc454e986ee62c --- /dev/null +++ b/video/OWmu3QOa0O_39027999.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39570285d07d1c25e44fc68b22a8c6e2992df0db2a7736b64cf1742f54d4af91 +size 2512632 diff --git a/video/OWwdlxwnFN_39026147.mp4 b/video/OWwdlxwnFN_39026147.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1f14ac1501a2e1420b6973325a464d64b00aae23 --- /dev/null +++ b/video/OWwdlxwnFN_39026147.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:054c88255fc1a46b03044f61fe12aeecb9fdcb1e8b73f3827f6afa49f424db34 +size 
3062899 diff --git a/video/ObUjBHBx8O_39024381.mp4 b/video/ObUjBHBx8O_39024381.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5fe73146418747360a54158b30b07be361353e28 --- /dev/null +++ b/video/ObUjBHBx8O_39024381.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa5a573ee228464e36686f6103f3eed97d3c30679181b2e2904d7ca33988fd89 +size 2375508 diff --git a/video/OdpIjS0vkO_39017999.mp4 b/video/OdpIjS0vkO_39017999.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..55e06afdc1a11bc860d05c3eef1b44a5a4299f58 --- /dev/null +++ b/video/OdpIjS0vkO_39017999.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5f33ad06b133c79c0abe7ea895ea07bf3bbb8771ac49fb9f472f85987fcc175 +size 1364799 diff --git a/video/OeQE9zsztS_39017997.mp4 b/video/OeQE9zsztS_39017997.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9ecb12a481bb43026ce53119e371aae08fdf3ce1 --- /dev/null +++ b/video/OeQE9zsztS_39017997.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b6eef016347d0d4daad5cf66fdd31d4f0be87900ba3afe53f2dd80df5139ad3 +size 1820666 diff --git a/video/OfXqQ5TRwp_39017995.mp4 b/video/OfXqQ5TRwp_39017995.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..28c9a571e01bb757e2c8d0931943e352b9e83fe0 --- /dev/null +++ b/video/OfXqQ5TRwp_39017995.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6e9ae30ff9a58ab91bfa43d4f3ff9118519729cbde46331bfc61f3f841d9afa +size 2416560 diff --git a/video/OgnYoIxtIN_39024889.mp4 b/video/OgnYoIxtIN_39024889.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..818fa0bd861d79439ff78d6a50de818321ace866 --- /dev/null +++ b/video/OgnYoIxtIN_39024889.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f78e153a3d4e10a138dd7ad2736ed3480c31ba9bc17fa91c3f4e98d83225f40f +size 1779956 diff --git a/video/OiVxYf9trg_39028049.mp4 b/video/OiVxYf9trg_39028049.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fc29e66160c66d8dcb048430b77302acba96ea5c --- /dev/null +++ b/video/OiVxYf9trg_39028049.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f467ba2a2173db91f0c23ec464e9424ad2e6c9638dbfd68699a44557a38aff48 +size 1699755 diff --git a/video/Oju2Qu9jvn_39018898.mp4 b/video/Oju2Qu9jvn_39018898.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..810cab48d8e0b3857750750b6275cbe2c23f24d6 --- /dev/null +++ b/video/Oju2Qu9jvn_39018898.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:071cba1ab7acd74190ede43162d149e814385f11b14ff9404b9ab8f340474200 +size 3123289 diff --git a/video/OrOd8PxOO2_39017994.mp4 b/video/OrOd8PxOO2_39017994.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..abff589989dd49890d53a3fc6f3086b4367f50be --- /dev/null +++ b/video/OrOd8PxOO2_39017994.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:57c08511dd9513c73635fc61ec9696b25d1634d5e0c6334dc3b6751af83183af +size 381300 diff --git a/video/OrtN9hPP7V_39026880.mp4 b/video/OrtN9hPP7V_39026880.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..049fb5333f31d472c48859462aa78ae14dbbc36b --- /dev/null +++ b/video/OrtN9hPP7V_39026880.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a628e54d98334481551ee2c0e9dacb1485395dcae8fb300fa79c0c78a4e345eb +size 45234 diff --git a/video/OtYCp1yfbX_39024769.mp4 b/video/OtYCp1yfbX_39024769.mp4 
new file mode 100644 index 0000000000000000000000000000000000000000..90f7e38cbf16b4a79d4bd8e26de4ebb4e2b88269 --- /dev/null +++ b/video/OtYCp1yfbX_39024769.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b0a0021664e92fd3a6472e5ea41e7b0d4742b382b2e46c61683a906da67c193 +size 2614520 diff --git a/video/OuV9ZrkQlc_39018883.mp4 b/video/OuV9ZrkQlc_39018883.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..853d0119ef3dde5c8e3a99f225677272d91923af --- /dev/null +++ b/video/OuV9ZrkQlc_39018883.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e1d59f4238b28f7f74d5c6562fddfbe5778d094896b3efd1f806f109cf57bead +size 2099033 diff --git a/video/Ouc1F0Sfb7_39025748.mp4 b/video/Ouc1F0Sfb7_39025748.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0bca849f58e58c98984020a03415cb81857bcf48 --- /dev/null +++ b/video/Ouc1F0Sfb7_39025748.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:773a2b5b4acf827dad9229c67029fc19956cbbdb04dc10eb71c002c636f2483c +size 7774 diff --git a/video/Ouj6p4ca60_39017992.mp4 b/video/Ouj6p4ca60_39017992.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..16acc2163a430d1a4f641b7adbc90aa8a41dc6d3 --- /dev/null +++ b/video/Ouj6p4ca60_39017992.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:34a5eb1259d03c563b508d56555dd7b24ec631376facba821e7b5d0495f16e4c +size 2284506 diff --git a/video/OvlcyABNQT_39019069.mp4 b/video/OvlcyABNQT_39019069.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..656777658c94ac89115382817c0f1a479d79d3ad --- /dev/null +++ b/video/OvlcyABNQT_39019069.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:60e167367f2a78fdf5665bfcebe1616e3d5155b9215f61432001edcd3944a93a +size 2585830 diff --git a/video/OwtMhMSybu_39017991.mp4 b/video/OwtMhMSybu_39017991.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b0fe7b5ca54e49156b42e728a701fb2a59494e00 --- /dev/null +++ b/video/OwtMhMSybu_39017991.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f1154b211b6504b4ebae67f0445d2dce65f642fdf267579a34cfa267bf85217 +size 2722882 diff --git a/video/Oy2x0Xfx0u_39024850.mp4 b/video/Oy2x0Xfx0u_39024850.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..772b953e1003fdc454ecc93f901b60efa77ae11c --- /dev/null +++ b/video/Oy2x0Xfx0u_39024850.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ed996cbd0c3be8cc6734a6d299b9f03377e77b91aa961dc2c973ec6779a059f0 +size 1773760 diff --git a/video/OycU0bAus6_39026749.mp4 b/video/OycU0bAus6_39026749.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8da146b72c77f903085a83262b4fd393f1065a3b --- /dev/null +++ b/video/OycU0bAus6_39026749.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66c3ea7e670ac6e01148c3d286bd39d7b2175c97176a159e4ee17c37a5754452 +size 2952822 diff --git a/video/P15CHILQlg_39017989.mp4 b/video/P15CHILQlg_39017989.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4b4046da8591687f7be0b88cb61b6687c5ca4126 --- /dev/null +++ b/video/P15CHILQlg_39017989.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6afde3788f41e7bf5f709e773ccf1fec24112651c2c8718d3beccc3cdd6475a5 +size 2671466 diff --git a/video/P1ANzoGg3W_39017988.mp4 b/video/P1ANzoGg3W_39017988.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..c3beb99c209212bdc0ea9a7d32d3dcc6cca96fb6 --- /dev/null +++ b/video/P1ANzoGg3W_39017988.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:feea57294b29a0e17ec0f9f1828e21d6b51ad489135ef04c03ef1f5ce0db2d6c +size 2121068 diff --git a/video/P1aobHnjjj_39017986.mp4 b/video/P1aobHnjjj_39017986.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..371d02bdfe35afbfbfc34330ee379a603b3dc6e3 --- /dev/null +++ b/video/P1aobHnjjj_39017986.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa44bd9c6e6a6ddbe4ce6750cb31880d29c5d193f0a5deb983d03da819cc1615 +size 1991121 diff --git a/video/P3v3x7HnV0_39025301.mp4 b/video/P3v3x7HnV0_39025301.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c5a21d817a2dbfe9b7cef21e96c348a4b034e1e1 --- /dev/null +++ b/video/P3v3x7HnV0_39025301.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8909b3abf635692228ab0362c9d11c972093351f84b25d8d980c415e56067e17 +size 2214058 diff --git a/video/P4s6FUpCbG_39025772.mp4 b/video/P4s6FUpCbG_39025772.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6a4d5ebe5b28edadba401c724763f53ca6fd696d --- /dev/null +++ b/video/P4s6FUpCbG_39025772.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b899b0b1abe064a64fa341bc0d476aa11158fe1ad12771f1820fb89e561410e5 +size 2873388 diff --git a/video/P5dEZeECGu_39025498.mp4 b/video/P5dEZeECGu_39025498.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..42880ed3cea641fbdb5c1ab5faf933947446f575 --- /dev/null +++ b/video/P5dEZeECGu_39025498.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9bf6c5ddc69c4480535252a5d82122691a4c06b63b1838158e4a152659a9d600 +size 2482426 diff --git a/video/P5yezHuMSS_39024724.mp4 b/video/P5yezHuMSS_39024724.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..adf833eda71d556438f8ef65a2b2c146bf577b2e --- /dev/null +++ b/video/P5yezHuMSS_39024724.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c43c3d49601fe4191f98f14d7f1a1deadd79ae241d339b4619d20e2b769962bb +size 1185714 diff --git a/video/P6nVDZRZRB_39026437.mp4 b/video/P6nVDZRZRB_39026437.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..35fbb6144cc542b43c864b071bcd0b5a7067853c --- /dev/null +++ b/video/P6nVDZRZRB_39026437.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ac355cae4c709b63a53337adacb02fb1d971cf60d8573e8813f919ebcf6137fd +size 2296491 diff --git a/video/P8rTCT6g45_39024725.mp4 b/video/P8rTCT6g45_39024725.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c024900a51beceda87acdf2fa56b8f7e291dc40d --- /dev/null +++ b/video/P8rTCT6g45_39024725.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7cff2cdd223aa33cbbce41e756645cb28cb39e120e89928210322ac0114afb32 +size 2357348 diff --git a/video/PAWQvrForJ_39027674.mp4 b/video/PAWQvrForJ_39027674.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..86c1ad919fa4d45cfbcdedc9ea2bdfcee9cb64fd --- /dev/null +++ b/video/PAWQvrForJ_39027674.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c7706c0da25e9268c744a496814b3dcff2b8eb0f5d5d3cab20de27569f002a7 +size 2935215 diff --git a/video/PEEqnXlSCk_39025811.mp4 b/video/PEEqnXlSCk_39025811.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..0cbb7e5f1583127566d638ba2071ad828a6fa956 --- /dev/null +++ b/video/PEEqnXlSCk_39025811.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea1df09f4cfaee0458c9ef5b60e956394f23f482956d0e72b34b3b9dd410c213 +size 2413106 diff --git a/video/PGOuBHYdbr_39028418.mp4 b/video/PGOuBHYdbr_39028418.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..76b23dc4e110e6e46658f54ffc0e1165aebbd2a5 --- /dev/null +++ b/video/PGOuBHYdbr_39028418.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d506693eb1451ac75a6c9a145b8e46bb48eced1a9f0173e3a3f088673e3af9a +size 3044125 diff --git a/video/PHLVmV88Zy_39017983.mp4 b/video/PHLVmV88Zy_39017983.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0647091b8a55e712ac1da6f7072e6723df542591 --- /dev/null +++ b/video/PHLVmV88Zy_39017983.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:307f720b7903191f158645e420d9cc400c1a2ea0cbb1b6d00e6e2a0f744b832a +size 1662687 diff --git a/video/PJVUWpPnZC_39017982.mp4 b/video/PJVUWpPnZC_39017982.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..40f8d7c49357b6386e223e8a06dc5a9837310e3b --- /dev/null +++ b/video/PJVUWpPnZC_39017982.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e4aed1598d896b9b99beca3454aa4dd6c18a86cf4ceb7fd8deadfef298f02499 +size 1554490 diff --git a/video/PJwAkg0z7h_39019149.mp4 b/video/PJwAkg0z7h_39019149.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dbcd732e36a835791bed67e0c9aae902f7d1869b --- /dev/null +++ b/video/PJwAkg0z7h_39019149.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b7c860099cc093562e581594a2bae9c555f27bc175786a7d0218cc8be7b33004 +size 1089752 diff --git a/video/PK8xOCBQRO_39026363.mp4 b/video/PK8xOCBQRO_39026363.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8c9523944b93b10d76d81395d301525dc812a5f0 --- /dev/null +++ b/video/PK8xOCBQRO_39026363.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc634a3c1b7d58fb1e0c2fe3b4dc8249dbce619b3d1eda7b5dd0001dce69e71c +size 2305936 diff --git a/video/PLbFid00aU_39025692.mp4 b/video/PLbFid00aU_39025692.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9bbdb422ed8211c1da1c0f8e4c4561e6b9882b6e --- /dev/null +++ b/video/PLbFid00aU_39025692.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ab74e14c94e6f26dfff3155bb8ced0438e2982bf0c39b48cb438b6ba805ff528 +size 2543213 diff --git a/video/PLoWVP7Mjc_39017980.mp4 b/video/PLoWVP7Mjc_39017980.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9751b77c4dd7a30e412c14896a61625c6ac8ae71 --- /dev/null +++ b/video/PLoWVP7Mjc_39017980.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:945c6e27250969a36083a6f9e3afddcda19b0594451b5804c97957b968219fba +size 2100629 diff --git a/video/PPdJPIO3mV_39024581.mp4 b/video/PPdJPIO3mV_39024581.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3f224455ef724c1ff492f5c4cefd684032d78c58 --- /dev/null +++ b/video/PPdJPIO3mV_39024581.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7644b66765326278db0a0fe801ffc77edd8649adf3c93c88cf3480d85554d02f +size 3073112 diff --git a/video/PQt6Vg2X5u_39027544.mp4 b/video/PQt6Vg2X5u_39027544.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..342f8978eb4c530ed4a663a1e46c4c3c6acb284b --- /dev/null +++ b/video/PQt6Vg2X5u_39027544.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4da1a9e9bfa2d2186f1f98848030f825dfd56a857e1fe3ab96f0f2eaac7979b6 +size 3117592 diff --git a/video/PRBsEz8rnV_39026102.mp4 b/video/PRBsEz8rnV_39026102.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d6e7b9b786db642f40674d85b5c8829a23eaa6d9 --- /dev/null +++ b/video/PRBsEz8rnV_39026102.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21ce29aa868d6a08557b77b2194ea8a5a646e8a6bb021f743640d68aa115394b +size 2707877 diff --git a/video/PSLH5q7PFo_39027734.mp4 b/video/PSLH5q7PFo_39027734.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..097f635569dcb9f78503759ddb300652008b8e95 --- /dev/null +++ b/video/PSLH5q7PFo_39027734.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:482d7c8144351b72bac96e5b5dc5fce04c12504cacffdb74d640c5572c77de8a +size 2226783 diff --git a/video/PSPtj26Lbp_39026883.mp4 b/video/PSPtj26Lbp_39026883.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e867f4696b879d6742ef68716349226d9e31139e --- /dev/null +++ b/video/PSPtj26Lbp_39026883.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6f37ea0bb6f99869831e5b5b1b77988c2dc32450f587d4962842cee68b16f630 +size 436754 diff --git a/video/PThi9hf9UT_39028099.mp4 b/video/PThi9hf9UT_39028099.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a86cf6bb15eb699aa6ff6f4ed9adbc240d7158df --- /dev/null +++ b/video/PThi9hf9UT_39028099.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:31654c78d04b0409f314184004999c3299f8041ac734dd96978a8808d204d312 +size 2292009 diff --git a/video/PWkjxjgGLP_39026038.mp4 b/video/PWkjxjgGLP_39026038.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a7ed3adc05b37d5527619ef2c0d528631f17af6a --- /dev/null +++ b/video/PWkjxjgGLP_39026038.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a268b728d558cd5df89f6f9723e62137e4e52623bdf3e654b8aa561950cb8f8 +size 2876605 diff --git a/video/PXD3FAVHJT_39017976.mp4 b/video/PXD3FAVHJT_39017976.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..89e08b04682843ae7aa15cd624de666af5cb1ff9 --- /dev/null +++ b/video/PXD3FAVHJT_39017976.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2e23881883acc228278df65f59afa5313b519dfdd809a39b0331e0bd65037554 +size 2425263 diff --git a/video/PXGY9Fz8vC_39027166.mp4 b/video/PXGY9Fz8vC_39027166.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..eeb68889a20c091f54e1fec6a9a6fd396b8be36e --- /dev/null +++ b/video/PXGY9Fz8vC_39027166.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba9100194987d88ad7de44be4f85a9819dc937a4894b2c1ed490c2088c06962c +size 2845761 diff --git a/video/PXNrncg2DF_39017975.mp4 b/video/PXNrncg2DF_39017975.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4bd69ef0ea801b3eefe6d9f0c99cdbcad4ab3bdc --- /dev/null +++ b/video/PXNrncg2DF_39017975.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4528ef052e1ea5a00a043fedc41eb625eb82fea89da6f3a587612ccb40a20a4f +size 3473922 diff --git a/video/PZCiWtQjAw_39024580.mp4 b/video/PZCiWtQjAw_39024580.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..3913846c52046c7b90328bf7b0a6ad4feb9e8822 --- /dev/null +++ b/video/PZCiWtQjAw_39024580.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5b64bb996ca8c5e8c75b821ff62f1023170ae727dce68624e01bd543c9ee70e5 +size 3079717 diff --git a/video/PaqJ71zf1M_39027355.mp4 b/video/PaqJ71zf1M_39027355.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..711535757d4c386884e0303b133192e08394e288 --- /dev/null +++ b/video/PaqJ71zf1M_39027355.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:81c0e61c083fb4cfd112b06592783c7dc10ab134dec58c8eb0817b48157902ce +size 2744563 diff --git a/video/Pc9LLjTL5f_39025214.mp4 b/video/Pc9LLjTL5f_39025214.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2df2af86d66115745b1ca07b18a91c869dcfc43f --- /dev/null +++ b/video/Pc9LLjTL5f_39025214.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:347d6f182e1f1ef652464c8b01b5bc01deaa7e8bbdbde551b5ab7e703fff400f +size 3101021 diff --git a/video/PcxQgtHGj2_39018839.mp4 b/video/PcxQgtHGj2_39018839.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1a90f5583d02278605648518c6a11c0060e329a9 --- /dev/null +++ b/video/PcxQgtHGj2_39018839.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:341f179d184e6f4c4f8f9b454b8f6aa392ab9fecdf49ee8bd402a93fa0a23a1d +size 3341943 diff --git a/video/PdaPky8MUn_39017971.mp4 b/video/PdaPky8MUn_39017971.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c571460f52f11c0c7ea09007d8cea9cfb4665f8d --- /dev/null +++ b/video/PdaPky8MUn_39017971.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca68fc6d5995718280b938c928ba82e4d823de9e77af21ddf284c2af1c400ca6 +size 2853704 diff --git a/video/Pezt0xttae_39028859.mp4 b/video/Pezt0xttae_39028859.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7373c2664b5d0c1788d8f6bf6e6cce1cbc319fe1 --- /dev/null +++ b/video/Pezt0xttae_39028859.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dee6e1cde00028e93cbd60006266949cc70a7f5a8c712f9a51f023c3da9cd796 +size 2975753 diff --git a/video/Pf7kdIjHRf_39026420.mp4 b/video/Pf7kdIjHRf_39026420.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..488111fc6aa4bfc89b99c4ade1a35eb32fc0f2ec --- /dev/null +++ b/video/Pf7kdIjHRf_39026420.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cfc61de034b4a89944fcc175d22a1cf67e0c85800004d2aa84c5adee7a621313 +size 2070962 diff --git a/video/PfOeAKxx6i_39024374.mp4 b/video/PfOeAKxx6i_39024374.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a5fa9b4d23c08fda0ea0891e998eacf8b76049b0 --- /dev/null +++ b/video/PfOeAKxx6i_39024374.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0e5684a994c922eafc70fec5bd1d48275042a211c0831011288b6f2c6d7e963a +size 2630896 diff --git a/video/PfPnugdxup_39017141.mp4 b/video/PfPnugdxup_39017141.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4947abf33d3cd044b4142584dccc6c65c2ea7e6f --- /dev/null +++ b/video/PfPnugdxup_39017141.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:891418ec4e894010a014be3f60db3229ea8815ce10b805de1bf09305039063ec +size 2945285 diff --git a/video/PhLlE8UOEv_39028350.mp4 b/video/PhLlE8UOEv_39028350.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..9d3e76c19bbfa0633008efb4a6228eb7a9ad15c7 --- /dev/null +++ b/video/PhLlE8UOEv_39028350.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6e6bfaf02af557935044ae73f68ddbe704a3446ef21776ae1d6a8308a2921e2 +size 2940813 diff --git a/video/PhjnK9KWOx_39028264.mp4 b/video/PhjnK9KWOx_39028264.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..902d134bf8a8fc33ec1a2268669400ccd0934c3a --- /dev/null +++ b/video/PhjnK9KWOx_39028264.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6eb9cd883af6e9811672b9ad4761274a266db4c05d2609b8fee0fb4f59257fc4 +size 2531709 diff --git a/video/PmLty7tODm_39028308.mp4 b/video/PmLty7tODm_39028308.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..841fb26c1a4a733b006b35ee5efaa299a835b77f --- /dev/null +++ b/video/PmLty7tODm_39028308.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b23fecdb5cc8df68f10d4c15b640fd4788779ed693c8fd0f4a550e40d9fc229b +size 2168670 diff --git a/video/PnR1MNen7u_39018971.mp4 b/video/PnR1MNen7u_39018971.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b3abc8c9ad2ae86a5846113d23d9d3d464ae0d27 --- /dev/null +++ b/video/PnR1MNen7u_39018971.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7c07db25edd7caba8390d2f9c60db20fa2b70d1721b72a63dc2bbd01b64a06f +size 2656557 diff --git a/video/PnlCHQrM69_39025047.mp4 b/video/PnlCHQrM69_39025047.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dcfaba1bfff50643bc937662892a6783f212ee7d --- /dev/null +++ b/video/PnlCHQrM69_39025047.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51ef1fb8bed7941afe4abd10805e24800af8491358bc7e9fc617789b3c77ba80 +size 2719681 diff --git a/video/Pnv8C0bU9t_39027067.mp4 b/video/Pnv8C0bU9t_39027067.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9d188293a45999301a56ea8aa0a388b78a6e9726 --- /dev/null +++ b/video/Pnv8C0bU9t_39027067.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0658f926caa14494cad2c9d64d1d509b47bbf61a71455f30386fa5ebb24c9f7 +size 2851039 diff --git a/video/Po7iQKKT5b_39027631.mp4 b/video/Po7iQKKT5b_39027631.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7b706db2f78500a3064eba27e2b52380cef64f5b --- /dev/null +++ b/video/Po7iQKKT5b_39027631.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:583d862d0ea7418dc4222018b1a868bc9eb5d4c8c684e9262bf82d6cef553de0 +size 7776 diff --git a/video/PoCs4jq7cV_39025676.mp4 b/video/PoCs4jq7cV_39025676.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d0d67215faed44c654612528cadc898177383849 --- /dev/null +++ b/video/PoCs4jq7cV_39025676.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c2e6351a75106bea83a675dafbd09495216f228686b9882fc0f1ca635dce1396 +size 2899894 diff --git a/video/Pox8jNQOo5_39027054.mp4 b/video/Pox8jNQOo5_39027054.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..426b5183ae2d1f73ef41764675bf6216fb69718d --- /dev/null +++ b/video/Pox8jNQOo5_39027054.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad68a3b5ce589acb4e15ff29f5c1540244c9f35b0cc653bf2fad79ab06fdd2e8 +size 2676294 diff --git a/video/PqlKliEXyJ_39025541.mp4 b/video/PqlKliEXyJ_39025541.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e307c67f9e7c2b037bb3bcae9a3551cf5f5350fd 
--- /dev/null +++ b/video/PqlKliEXyJ_39025541.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:45f7a3784013f5a018620d98147b4ffd05fef67c0bfaf76d06db28b3a886e1ad +size 2342249 diff --git a/video/PsDFgTosqb_39018905.mp4 b/video/PsDFgTosqb_39018905.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ac6ab284399a9e5caaed2ca60196fba9373142c7 --- /dev/null +++ b/video/PsDFgTosqb_39018905.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9084a834d8b414172745b95ef63b6d2af204c8ff4f9b00a6e993e9ea23fcf1d4 +size 1697745 diff --git a/video/PuXYI4HOQU_39025243.mp4 b/video/PuXYI4HOQU_39025243.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1fad353ed60322e2337247b1e60be958d26819a4 --- /dev/null +++ b/video/PuXYI4HOQU_39025243.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e991bb51ae495348bf73c4e4a9879fc03fa1681e55a83535adc6e3a5c3076a58 +size 2722199 diff --git a/video/PvJnX3dwsD_39017967.mp4 b/video/PvJnX3dwsD_39017967.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..eae54a93f45f8a1ae73125274ae2183dbc599aca --- /dev/null +++ b/video/PvJnX3dwsD_39017967.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b15b56960b37b75a9e1a91f2d1c9866cad99ffc232f6281b34a0f59b1e35ba55 +size 2376364 diff --git a/video/Pwl9n4zlf5_39026272.mp4 b/video/Pwl9n4zlf5_39026272.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5d706cfdab04ddc5ae2f6c4059579bdfb3830861 --- /dev/null +++ b/video/Pwl9n4zlf5_39026272.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e95819c019214b5c3b2ef57c282ae21efe6f81725c1ce7f96d2aa136ef959dc +size 3199492 diff --git a/video/PxoFut3dWW_39018855.mp4 b/video/PxoFut3dWW_39018855.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..53064e9c33f56ae55a0b597684e3be34c103504e --- /dev/null +++ b/video/PxoFut3dWW_39018855.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ab5bcb6278cb7b24d32c0c5e8b604c839f4896c9ab782771ad0283bca5be13d +size 1943087 diff --git a/video/Q0KwoyZlSo_39025556.mp4 b/video/Q0KwoyZlSo_39025556.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..271264a6e522bc6dac7fc7d3ab8357ecdf8d093e --- /dev/null +++ b/video/Q0KwoyZlSo_39025556.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c676474770ab4ae56ba5a8ac8044be968fbce95ed391e60a1b66359ca744bb73 +size 3170200 diff --git a/video/Q1u25ahSuy_39017966.mp4 b/video/Q1u25ahSuy_39017966.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6c6cb890fa5bff00797c1a3f2116086a3a64fcc9 --- /dev/null +++ b/video/Q1u25ahSuy_39017966.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b69c87947e93a05ab5544e597057f19d187b1519d8879d562ea376da06d9b0d +size 1856903 diff --git a/video/Q3YaCghZNt_39017964.mp4 b/video/Q3YaCghZNt_39017964.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2866ae3df52b15d9d151cb1fbc94c738813422dc --- /dev/null +++ b/video/Q3YaCghZNt_39017964.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4386b138aa8eee7fcf15653d010ab641a76fcb5567328822de3b8159c3118bb6 +size 2571888 diff --git a/video/Q4NWfStqVf_39028480.mp4 b/video/Q4NWfStqVf_39028480.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b183578ed3a104bb99398af9e9c3bd956da80544 --- /dev/null +++ b/video/Q4NWfStqVf_39028480.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:98bf1c29583870b5b6f6510444dcc68d276aaf5b830df2da11b349c7dc5cdd75 +size 2613792 diff --git a/video/Q5RYn6jagC_39025322.mp4 b/video/Q5RYn6jagC_39025322.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..975fa77ae05138105bf98bb553bb1aa1757d9d73 --- /dev/null +++ b/video/Q5RYn6jagC_39025322.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1420a8b43d8ae6872b518a8a92135797362ae19288a8b86e0d23e205366fbdd +size 3026547 diff --git a/video/QC4e0vOanp_39024659.mp4 b/video/QC4e0vOanp_39024659.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..82d75faef2182589cdb4aa8ebf59079f452044fc --- /dev/null +++ b/video/QC4e0vOanp_39024659.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8bc4e9a2a13bf533306cdfae5e2b168ec0c1cba21e60294ec0d4d7daa001bdd2 +size 2736468 diff --git a/video/QDG2q5MYHV_39025281.mp4 b/video/QDG2q5MYHV_39025281.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3816ee0122d252db241c8cc85b11b17dc8e49cd0 --- /dev/null +++ b/video/QDG2q5MYHV_39025281.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:78fe294d1692750277b9782613e8e3bf162ee62ad9cc64bef6bbe60ec7c341ed +size 2473084 diff --git a/video/QDprhde3jb_39026168.mp4 b/video/QDprhde3jb_39026168.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..251a17b2254a1ce9a5968cea9608cb47be8d5e12 --- /dev/null +++ b/video/QDprhde3jb_39026168.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1dff2288feb06820b70f68d54978a39ff2dfad43ddee7d2d626aea4fcd078b46 +size 1871817 diff --git a/video/QEUntqKvmm_39027844.mp4 b/video/QEUntqKvmm_39027844.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4f5644185417eb1378cea2e0227529ff65039cf1 --- /dev/null +++ b/video/QEUntqKvmm_39027844.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ceb8265593b41c5733ad642ec39d25a1959586e4e299528d5568cb6487523c1 +size 2201252 diff --git a/video/QFUsZvw9mx_39025475.mp4 b/video/QFUsZvw9mx_39025475.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c11a9549648dbef25815b214e84e55def47a80f3 --- /dev/null +++ b/video/QFUsZvw9mx_39025475.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6f9a973fce7f16b171cab7e7ccf1f1e0b08c2562de562907d5ea5fa8ca78900f +size 3294820 diff --git a/video/QHRLFdhkLu_39027534.mp4 b/video/QHRLFdhkLu_39027534.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a5e1505122961ea65678b33ee42654499d2a72c4 --- /dev/null +++ b/video/QHRLFdhkLu_39027534.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:85ab29d37a6092b3fc9f8081f353a7780843cc20c440aa833aba1a390dc72b5e +size 2729626 diff --git a/video/QHROe7Mfcb_39017962.mp4 b/video/QHROe7Mfcb_39017962.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7a48f0eb8fc5e4aa86915ef102e34ea28cfbb006 --- /dev/null +++ b/video/QHROe7Mfcb_39017962.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7ecbb5af315fd354746cc1d1263b503b5cd00c59b8590f4dff015bc070609ccc +size 3059825 diff --git a/video/QJGj07PD9C_39017961.mp4 b/video/QJGj07PD9C_39017961.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..722e52dcb3ff69da98eebf283a4ff7e6703f7b3b --- /dev/null +++ b/video/QJGj07PD9C_39017961.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:e11ac9ddc1fcd0a81caced5c14fbf4309593d5e54faf8f45da2c172985747b0a +size 2569114 diff --git a/video/QKp3nhPU41_39025857.mp4 b/video/QKp3nhPU41_39025857.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..780da9213f1c7115a524cd1bc0764ce9d01e48e7 --- /dev/null +++ b/video/QKp3nhPU41_39025857.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:abfc14c2395503e9311f5351ca5ebf176c5f76c529a38d785b2f9dd4c1fdb4d8 +size 3045014 diff --git a/video/QLRO8o4bol_39025796.mp4 b/video/QLRO8o4bol_39025796.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8fc810b263a485e52a379594a42f346106a9ac11 --- /dev/null +++ b/video/QLRO8o4bol_39025796.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:624a525c71cb84d1ee01f4cdc2caa088ad41982049209d0845ba78e7a64752e5 +size 1177039 diff --git a/video/QNieOPt4fg_39026356.mp4 b/video/QNieOPt4fg_39026356.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0cede46cb5a1f4d684a0fa7caeb17816742c2c28 --- /dev/null +++ b/video/QNieOPt4fg_39026356.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6d02582344979bae896f82f16b7da579d0de5e068dab11741e98a5d1b1758f8b +size 3258783 diff --git a/video/QUYLbzwtTV_39026140.mp4 b/video/QUYLbzwtTV_39026140.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5cc6f41e6ff9cbd9c4eb60a1444820a1fb067a78 --- /dev/null +++ b/video/QUYLbzwtTV_39026140.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca95b07719a340bc46e6213e898cc84d623089669e31acc1699cbb1cf5990a32 +size 3054659 diff --git a/video/QZtJ22aOV4_39026734.mp4 b/video/QZtJ22aOV4_39026734.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c2178baf6acafa9ff4be009c3127ba23f5c9a919 --- /dev/null +++ b/video/QZtJ22aOV4_39026734.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a40f58821d97aa1f2014e7dbec74a871b42bba98b9f6bef2b55eb316b65b764 +size 2497590 diff --git a/video/QbPHYPZKJI_39026478.mp4 b/video/QbPHYPZKJI_39026478.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c6a33d688f052e8dc85476b0889549600725d387 --- /dev/null +++ b/video/QbPHYPZKJI_39026478.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a096030f29d8048d17e2ec6d60743d9035e5e9317ec195622ba3012c604c9f59 +size 2371828 diff --git a/video/QbsPz0SnyV_39025809.mp4 b/video/QbsPz0SnyV_39025809.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4e36192a4833a30a13dd6f14a1ad585c101e52e3 --- /dev/null +++ b/video/QbsPz0SnyV_39025809.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8f8c621f0e839d9cb408700a2e9918ae8f83cd3f0de699098a5d44fb6fae758 +size 2396543 diff --git a/video/QiCJomIW3l_39028611.mp4 b/video/QiCJomIW3l_39028611.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f92da31232e0486bd6767a4ecd4c1f0073c12e97 --- /dev/null +++ b/video/QiCJomIW3l_39028611.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:81d16c1a7422cd9d71f20d4866f793c55988a9580fe60a8969a99d89ed9507ec +size 2750654 diff --git a/video/QpKWFLtZKi_39027643.mp4 b/video/QpKWFLtZKi_39027643.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8679d94bd087a737da97f88bdd88e90742cc3cad --- /dev/null +++ b/video/QpKWFLtZKi_39027643.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2928339dfce38a11b341992ceddbe402f58dd3ef0f71abefef64ef399f62281 +size 
2667328 diff --git a/video/QrE9QPq4ya_39027661.mp4 b/video/QrE9QPq4ya_39027661.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c1ebe206c18200f5519c1c057d640c5146acce56 --- /dev/null +++ b/video/QrE9QPq4ya_39027661.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d8d0b6e9cc57fccef9a457ceaa269d8c3a2cfd731a358243c23bd1727cb3bdd +size 2058248 diff --git a/video/QrEHs9w5UF_39019248.mp4 b/video/QrEHs9w5UF_39019248.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..eb99e71ec869993f0a84ba8eb724ef81ef3e2202 --- /dev/null +++ b/video/QrEHs9w5UF_39019248.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e114b0e8b00c4b695a94da1e29c5aad47cccb649274e1d6a347e0c7ec9d0bf3 +size 2309627 diff --git a/video/Qtf6Xz4VvE_39028738.mp4 b/video/Qtf6Xz4VvE_39028738.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7702d0fd557e000a576c4dbc25420d0f930e1d22 --- /dev/null +++ b/video/Qtf6Xz4VvE_39028738.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2234a37a9b027f00c44d98874030b584b80a127c0cfadbe3b7702f1afa94640e +size 2099611 diff --git a/video/QuIiLSktO4_39017950.mp4 b/video/QuIiLSktO4_39017950.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0ec25f59a9ef7760a3fbaad533f500d2e43b67db --- /dev/null +++ b/video/QuIiLSktO4_39017950.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b57ecfebbd40f6d5916009e0ef574860b5798e63f66ab10a30b7cf77fe3e4a91 +size 2788505 diff --git a/video/QvqLdeSLWA_39028229.mp4 b/video/QvqLdeSLWA_39028229.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b38d552ce242e445a25d0f10dcf779e430b268cb --- /dev/null +++ b/video/QvqLdeSLWA_39028229.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2404fea0aedc0b0d9d20b29e2b6ce0a154b9d9d816f90002620042d1e5d49ef5 +size 3456573 diff --git a/video/QxItoEAVMb_39017178.mp4 b/video/QxItoEAVMb_39017178.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4a525b3c7239f5757a10d5f979593b2dff2cd199 --- /dev/null +++ b/video/QxItoEAVMb_39017178.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b325fcb11fda39a60f85bb34ebe880a2075fb5321a56c96487652a399eb1f246 +size 1311555 diff --git a/video/QyR1dNDxRP_39025564.mp4 b/video/QyR1dNDxRP_39025564.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..24eab799def1cf735b2d12bffc6bfe0ba16e755e --- /dev/null +++ b/video/QyR1dNDxRP_39025564.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bac5c6fb9e5d6b66d8b845b403a928a337408481ad7535ac0deb19bbfe27cfaa +size 2342963 diff --git a/video/R0bnWrpIeN_39026013.mp4 b/video/R0bnWrpIeN_39026013.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dbbc61fcd7c09755458e065b4a67f89dd5d1df25 --- /dev/null +++ b/video/R0bnWrpIeN_39026013.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aedabf4f0919ff945201a4f32ffd275f8595d08bbac3fa843b5ab26807228b00 +size 2856059 diff --git a/video/R3Tf7LDdX4_39017945.mp4 b/video/R3Tf7LDdX4_39017945.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ebabd971a334cee705993c864ce0f47be4e7f945 --- /dev/null +++ b/video/R3Tf7LDdX4_39017945.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6ffeeb89291b1a1a5ba26f470f72242c6d2c3fc1393de7594ea3287b3f01982 +size 2549602 diff --git a/video/R4IBZrSF5d_39027249.mp4 
b/video/R4IBZrSF5d_39027249.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e20320527b5c848aa3395ace04137a1e1b572632 --- /dev/null +++ b/video/R4IBZrSF5d_39027249.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99b581fdbd1f32d5a13953f7d32ba25d2b6d2debaebc92f2e949ce6826a96031 +size 2729789 diff --git a/video/R6N9AGyz13_39025290.mp4 b/video/R6N9AGyz13_39025290.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9babe4ce0dd4242b3a46e69760e60ec72d9b53aa --- /dev/null +++ b/video/R6N9AGyz13_39025290.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ffc407b8403d7e0b0bc88c174a5293dd36773187220f139825770fdc638edc9d +size 2270240 diff --git a/video/R8znYRjxj3_39024761.mp4 b/video/R8znYRjxj3_39024761.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2d420e51b6ea78dc6950bb1cd35108a5e259015f --- /dev/null +++ b/video/R8znYRjxj3_39024761.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a70bc8114b88bf057a07ec8675d8ac62f9b97fd8db994195d8161a947590d15 +size 2627570 diff --git a/video/RA6rzOJ2zI_39028887.mp4 b/video/RA6rzOJ2zI_39028887.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..630c542b663c0d60dcdfc393e1507883bc4082aa --- /dev/null +++ b/video/RA6rzOJ2zI_39028887.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c39cffb5c52631c7fddcbc3f8e15a52db4c07c2db5900aedfb0dc840ca7c9893 +size 2565062 diff --git a/video/RB1F2h5YEx_39027509.mp4 b/video/RB1F2h5YEx_39027509.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2724827334ee4e379e6308a721a789f646646a08 --- /dev/null +++ b/video/RB1F2h5YEx_39027509.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d52b0eec4f8e583a8d7c60e00c1565dd0c46b631dbc767811449ae655a82c2e +size 2814754 diff --git a/video/REIK4SZMJt_39024911.mp4 b/video/REIK4SZMJt_39024911.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f0f781a29775cda96e9691cb1f3f7ee21f88a6b4 --- /dev/null +++ b/video/REIK4SZMJt_39024911.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:485a906d510a6b68e4c4611333e18f85daee2c46ac84282786c44fe48135257c +size 2055938 diff --git a/video/RERls4Opnm_39027535.mp4 b/video/RERls4Opnm_39027535.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8aeb9388e006f400dc6bbac62e8fe8778c29cd30 --- /dev/null +++ b/video/RERls4Opnm_39027535.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f248a4a7a8348df0a002c09fb25c793fd382ddc05bc0e30a22bd776107a6726 +size 2311950 diff --git a/video/RH7tfqhiZY_39028589.mp4 b/video/RH7tfqhiZY_39028589.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..efcabb8f748d230ca7702c082adc55cd8ed967b0 --- /dev/null +++ b/video/RH7tfqhiZY_39028589.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5c64d999a6ce60f58be895b9fef3936fed6c1493d614b93be7520a81c58598ba +size 454057 diff --git a/video/RIEW6M9YoV_39018845.mp4 b/video/RIEW6M9YoV_39018845.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b354f77fca237b8fbc474c28cc4ba0e21787d44d --- /dev/null +++ b/video/RIEW6M9YoV_39018845.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d7d08f351a697fd63c90894698f9589cf8c2f83a2ba174556cd19deba6f194e +size 1746480 diff --git a/video/RIuevDSK5V_39018844.mp4 b/video/RIuevDSK5V_39018844.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..d99f3417703541ae014a9c026031eb9a628f24d2 --- /dev/null +++ b/video/RIuevDSK5V_39018844.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1b56a6addf7399efe39bcd80e2ba179af560898eb9d74abfe30aa7d8fd395b5 +size 2178723 diff --git a/video/RJDjSXNuAZ_39019128.mp4 b/video/RJDjSXNuAZ_39019128.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..33227ce6d66efafb4cf9363b1c43ca78d5c6b917 --- /dev/null +++ b/video/RJDjSXNuAZ_39019128.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58d9a09d477af5a2924eeeb198298dc6594ed4ec5058bb1156498bc29b43b2d9 +size 2236125 diff --git a/video/RL4FXrGcTw_39025205.mp4 b/video/RL4FXrGcTw_39025205.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e1032e8cd5a7eab8a6261ce2753255533d67f3c0 --- /dev/null +++ b/video/RL4FXrGcTw_39025205.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:72d5d6f1f0c9fdc5be123b22cb8f245b9f2c1833dee2cf6de1eac13cbc250c22 +size 7766 diff --git a/video/RMdnTnffou_39025227.mp4 b/video/RMdnTnffou_39025227.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b1e18fe63f749c1df8355572e9640c882590ad02 --- /dev/null +++ b/video/RMdnTnffou_39025227.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74fce36eb2defe99bd15afaeb7982c54019781e11fcff56c3524283f33b9b32f +size 2926679 diff --git a/video/RNbrIQ0se8_39026425.mp4 b/video/RNbrIQ0se8_39026425.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dea79e2c3f017a51d45dba565ff44318421f91ce --- /dev/null +++ b/video/RNbrIQ0se8_39026425.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce9cf6cb15703dc2458f42f967b5f02185f7d055d0256ed75e73410aa8d3fadf +size 2213275 diff --git a/video/RPChapuXlC_39028442.mp4 b/video/RPChapuXlC_39028442.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ca2a67be44850e7201a2a1c7675f764de670198f --- /dev/null +++ b/video/RPChapuXlC_39028442.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4d83699adb8d8a1aed656acaafe3a77e7235d42d2c7db3fea817f02fe418bf0 +size 2619981 diff --git a/video/RQCmMSSzvI_39028241.mp4 b/video/RQCmMSSzvI_39028241.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0f5707a50fa379430e4b6753a872e3228c080afd --- /dev/null +++ b/video/RQCmMSSzvI_39028241.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8655c7b8aab5b04926a0e176560bd7dbdadcb7621f3950f3403add03772f6fdb +size 1952779 diff --git a/video/RR8y0WKrFv_39018879.mp4 b/video/RR8y0WKrFv_39018879.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..88a5d207f9e1cb5a5afd395a21680ee54a177559 --- /dev/null +++ b/video/RR8y0WKrFv_39018879.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:86b8603c7f13519cd4c797efd7bb55e1dc4c01662729d87ca16b4466072cec31 +size 1815445 diff --git a/video/RXFVcynVe1_39017937.mp4 b/video/RXFVcynVe1_39017937.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ffaead273b9f2cc3b36caca0fb598700cc0e20d8 --- /dev/null +++ b/video/RXFVcynVe1_39017937.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:81b77abfb95b5cf03e50438470cc13c41fc2739876eedabc62baa1eee5e35b5b +size 1906598 diff --git a/video/RY3rDQV0tQ_39027490.mp4 b/video/RY3rDQV0tQ_39027490.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cc94ef78dc7a92f5e756030eb3b1a0f06be82be6 
--- /dev/null +++ b/video/RY3rDQV0tQ_39027490.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:87b269513ee112d5814ab83ce641a3303e593f7af66ef5f3f82fec62b58f0fcf +size 2770429 diff --git a/video/RcPAJAnpnm_39028722.mp4 b/video/RcPAJAnpnm_39028722.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5a700e6b2a28d55c15e00cd99c86b2b57763dd0d --- /dev/null +++ b/video/RcPAJAnpnm_39028722.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f4dac1690166c0a56d25817e24da32693216a57a68065d244214dc14d920f970 +size 3228836 diff --git a/video/RcPHbofiCN_39026159.mp4 b/video/RcPHbofiCN_39026159.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c95b26a8232d467fecf4d42fea6e648c16bd258b --- /dev/null +++ b/video/RcPHbofiCN_39026159.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f91675cde372e9fd12ee7a92169b471bbce4a140e62be3162638b6003ba13b1 +size 2048592 diff --git a/video/RfSvAom7sS_39025778.mp4 b/video/RfSvAom7sS_39025778.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ba6dd60d955726f68d8fa26215fa172ffa0c03d2 --- /dev/null +++ b/video/RfSvAom7sS_39025778.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b1df9ba1f903aa9175364301aa8cd375de0e5675d550b34021f5629528908fa3 +size 3130610 diff --git a/video/RfsfRn9OFd_39028279.mp4 b/video/RfsfRn9OFd_39028279.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6ce3261831dbc836359572e66faf0e6dc4e150fc --- /dev/null +++ b/video/RfsfRn9OFd_39028279.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c93887fd8ca69e0c1e9c92b88cce9643a643366c690aeab4eb543b1825f68926 +size 1910807 diff --git a/video/RlZgnEZsOH_39028289.mp4 b/video/RlZgnEZsOH_39028289.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b7a6c67bc7b5f93924f1253e3c8abf4f7d6ec652 --- /dev/null +++ b/video/RlZgnEZsOH_39028289.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:680ea612028224395c57311647736dcd35db807bce49418bdd927a7ab873033d +size 2782412 diff --git a/video/RnQdRY1h5v_39025079.mp4 b/video/RnQdRY1h5v_39025079.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8a5a4292b076d1bd910c51bf9bd14410a3fab7a5 --- /dev/null +++ b/video/RnQdRY1h5v_39025079.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c0972e11536c539dde337b866d00d2920f4c7ce18980cc557137b6aa44c8110 +size 2018656 diff --git a/video/RrTjcbcHEH_39028576.mp4 b/video/RrTjcbcHEH_39028576.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f5dbb7009033c2ce5f51c67b6c0364690ad62374 --- /dev/null +++ b/video/RrTjcbcHEH_39028576.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:27aeb631c8ded2fb8e4d42082a05e633c50d2552f64379890bdde2748713b287 +size 2712015 diff --git a/video/RsawwSBCs7_39028568.mp4 b/video/RsawwSBCs7_39028568.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..750575728a807d0f60cf37e05776eb334fb26510 --- /dev/null +++ b/video/RsawwSBCs7_39028568.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d21ad32a9a7faba8ccf85a8b8083582ecbfa1e78b244586d49c586a1ccf0a6f6 +size 2206244 diff --git a/video/RsztjXcvUf_39017932.mp4 b/video/RsztjXcvUf_39017932.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9b5193b6f0bd084f27b11a69dd70e94d34ed3b73 --- /dev/null +++ b/video/RsztjXcvUf_39017932.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:135005449346041bbbc8821091cbb69b7075dcde6438b20d6c96549e3e15b492 +size 3131398 diff --git a/video/RtAct1E2zS_39017931.mp4 b/video/RtAct1E2zS_39017931.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9e65925c20a2504c8f27c972ca48930e9cd5ae45 --- /dev/null +++ b/video/RtAct1E2zS_39017931.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93ca6aa34cd58e7cf46e59bd8f4742fc75510e7d8eab1551da465ecae8e05cbb +size 2378318 diff --git a/video/RthOl4jHw5_39017929.mp4 b/video/RthOl4jHw5_39017929.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3b51953acfa62ee7c9ebf958063de07084b9e9fc --- /dev/null +++ b/video/RthOl4jHw5_39017929.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c42bb827c8dfb10347735b39a379f3151b460ce5561f1b347cf0397c04718e3 +size 2487854 diff --git a/video/RwBObRsIzC_39026711.mp4 b/video/RwBObRsIzC_39026711.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f7d5df7fb7d1d60cf089f062dc541f28dd43ab4f --- /dev/null +++ b/video/RwBObRsIzC_39026711.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4bf7704a443cdf653f4cee2c7f24cc2270c90d77441a78c2c0391d034acb28c6 +size 2314635 diff --git a/video/RwI7ZEfR27_39019093.mp4 b/video/RwI7ZEfR27_39019093.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e99d2267ad9733ae7441c7b3826e36ef034d9e76 --- /dev/null +++ b/video/RwI7ZEfR27_39019093.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:771f294bffb9587c5b403bff929fdb8f35a155cd3acca362430225f2bdd48f00 +size 1492199 diff --git a/video/RwK0tgfptL_39024746.mp4 b/video/RwK0tgfptL_39024746.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..83a09f8bf75f88131fa8e5757e5a66a3fbefdd92 --- /dev/null +++ b/video/RwK0tgfptL_39024746.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a65863349c7d6af61442b13423ce1140cbfb8fe9c43621abb0723d82e631d08 +size 3460695 diff --git a/video/RxQoIekEa2_39028807.mp4 b/video/RxQoIekEa2_39028807.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e242aa8e3821c8255055b25cd3bc15584ddb2e80 --- /dev/null +++ b/video/RxQoIekEa2_39028807.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20c928598a1333d3d89a16571d1bf023e31c6646b9372334767d6f74d7e5a4ab +size 2304166 diff --git a/video/RxXdokK2qz_39024967.mp4 b/video/RxXdokK2qz_39024967.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..da90723b82ae3add15958e54f5f66207949603ac --- /dev/null +++ b/video/RxXdokK2qz_39024967.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1951cc81059a26bbd21b5a10e4bdfd81b76fd85601af9b23ed19d35c1c9ffefa +size 2453995 diff --git a/video/S0Ci1AsJL5_39027383.mp4 b/video/S0Ci1AsJL5_39027383.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a9fd7e492995f95a56d65ab6de73dceda4b2b2c0 --- /dev/null +++ b/video/S0Ci1AsJL5_39027383.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2189bbdf7f423c40dc1173e7f2219307c5c05c0c36e8a403f0e7f4223ff40164 +size 2386997 diff --git a/video/S4YRCLbUK1_39028661.mp4 b/video/S4YRCLbUK1_39028661.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5e6602472ead01f353f77cb9e51e4c6eb3d29a58 --- /dev/null +++ b/video/S4YRCLbUK1_39028661.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:8a089a4ff4e9eae50cc5d5f252b6f5f48812ab9b7c536410fe4f6dc96ab7c168 +size 3208848 diff --git a/video/S5coB5kqSD_39024561.mp4 b/video/S5coB5kqSD_39024561.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..771b6d70c1163162796d690a05c5f53a8593eff7 --- /dev/null +++ b/video/S5coB5kqSD_39024561.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a9f5432fd15b0d1aeed3d796d9825e142f01fada88a1fbb957d49c0e17246351 +size 2411033 diff --git a/video/S7THlpvH8i_39025309.mp4 b/video/S7THlpvH8i_39025309.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c93b6d2fbed30948ed1af6626bb8e67eeea22f27 --- /dev/null +++ b/video/S7THlpvH8i_39025309.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7257dad1389d75696789b4ac6ca29a355bcb39efa9fcbbb33ab8747edc8309e3 +size 2513123 diff --git a/video/S8wFXyT4dY_39025735.mp4 b/video/S8wFXyT4dY_39025735.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5a290e0e821dc5b3fe9489b23b44798380f2a799 --- /dev/null +++ b/video/S8wFXyT4dY_39025735.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b197093b5e68399cb6e048927093dd6ab880c5172393a40ae22a2846041b0468 +size 2547678 diff --git a/video/S93hrwT8u9_39028300.mp4 b/video/S93hrwT8u9_39028300.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d648b144061e22befd4043ec638e2406f4b74e37 --- /dev/null +++ b/video/S93hrwT8u9_39028300.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c643854d53a8430550a47049f9b59458aa865816dc7143b00ac2a469490e3a1b +size 2036099 diff --git a/video/S98OzJD3jn_39025366.mp4 b/video/S98OzJD3jn_39025366.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d5e14acb1e22b6a27007239a4fe1f693f75dd815 --- /dev/null +++ b/video/S98OzJD3jn_39025366.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af1c7e0b3aa4b02a312bb8de9e22fd4f36ba3cccd9018eccd7508b86431c5824 +size 2971769 diff --git a/video/SA19ijj44B_39017922.mp4 b/video/SA19ijj44B_39017922.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..aa863978d701b7425dabd8ea975821de4814d941 --- /dev/null +++ b/video/SA19ijj44B_39017922.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:27be08887ed09dbc66d84fc6e6399aed44471a3b2c49780b7e57e6b51e4f2041 +size 2634871 diff --git a/video/SAZeQV2PtT_39028215.mp4 b/video/SAZeQV2PtT_39028215.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1aa47e35cdda1a448431688e46f68c6fa68256f9 --- /dev/null +++ b/video/SAZeQV2PtT_39028215.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a28b9cb1bc5ac69de5acc2ef2933cee5bb467853f61aefa49569358a4181c3a8 +size 2376454 diff --git a/video/SBj2Qdhgew_39017921.mp4 b/video/SBj2Qdhgew_39017921.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..abab4322078ab7d66bf4e27da96442abbe56eddf --- /dev/null +++ b/video/SBj2Qdhgew_39017921.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ea5f3c356c4b37d1b8d3241335ccb0f6c12e388dd4ce12c733e36b8a1d872ef +size 2592715 diff --git a/video/SCEdoGghcw_39025524.mp4 b/video/SCEdoGghcw_39025524.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ff91a363b1e0773d5311b6fee1c83a25fbb2f104 --- /dev/null +++ b/video/SCEdoGghcw_39025524.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6dca834e965c1c01c32d091f38116f025dcfc335e3c408369abfa1e0bc9d2c9a +size 
2808297 diff --git a/video/SEflLHIhhJ_39024871.mp4 b/video/SEflLHIhhJ_39024871.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e03ff0a4132f8156f7f39ed43d8141f2f0e55622 --- /dev/null +++ b/video/SEflLHIhhJ_39024871.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74b06b9e1e53c7d909918880dfc150fcbeeeab1fe0b3b3879ce5559d9ccba05e +size 1859646 diff --git a/video/SFk7AMpyhx_39024833.mp4 b/video/SFk7AMpyhx_39024833.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bf9be249cd7071e594c21b97455301d88edbe5ec --- /dev/null +++ b/video/SFk7AMpyhx_39024833.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d4f768968737b1b31a5acf4cde2d6641922a9f831d36b5ec4e00c425f98d2bfb +size 1319540 diff --git a/video/SGcnphYOeq_39027268.mp4 b/video/SGcnphYOeq_39027268.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6e633e0cf72c1248d3199086e2f88bdc9fdf16ab --- /dev/null +++ b/video/SGcnphYOeq_39027268.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a87b1e327fed6d1f88f520992f44731fd0d6f35eeeb4a933939ecf18de472194 +size 2920401 diff --git a/video/SIZWiya7FE_39017097.mp4 b/video/SIZWiya7FE_39017097.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..35dce09f70d9744cd9574f9c9c3707adf101b7f5 --- /dev/null +++ b/video/SIZWiya7FE_39017097.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30bf470460a39a4c6807ee855550c490cd65a79fba00cb0b726f1e62969ca82f +size 2706674 diff --git a/video/SKhR5CuiqQ_39025643.mp4 b/video/SKhR5CuiqQ_39025643.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..04c402e575209f74e4723873c2c0e2e0e1033325 --- /dev/null +++ b/video/SKhR5CuiqQ_39025643.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:729a207103d9655bd268dc8f71567e58857a5825c749b28966f4f6972e79b1ac +size 2411145 diff --git a/video/SLw9fp4yI6_39017916.mp4 b/video/SLw9fp4yI6_39017916.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..012e08f976defe2a432c9acaa2e2771596280ca8 --- /dev/null +++ b/video/SLw9fp4yI6_39017916.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0381c4005d1ae0c4c25c285f15c80cd7717bcc9d9d2e5b2b3a296ff83be027f6 +size 2034494 diff --git a/video/SM9IWrHz4e_39025440.mp4 b/video/SM9IWrHz4e_39025440.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7d49079c1ac4975651cd8219a746bb785bf04986 --- /dev/null +++ b/video/SM9IWrHz4e_39025440.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e159098c8fa0eb509045b8ec230eb23f6268d05b79bd57cf0ec6dff9c8bb84b9 +size 2253880 diff --git a/video/SO1aRpwVLk_39024660.mp4 b/video/SO1aRpwVLk_39024660.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7f87b255cdb687da1ffc2787df7980646e673aed --- /dev/null +++ b/video/SO1aRpwVLk_39024660.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9073d2586429b590dee79d687c4911acab5de31e30728572200324f3575732a6 +size 1526268 diff --git a/video/SQpnEfv9WH_39017055.mp4 b/video/SQpnEfv9WH_39017055.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..36a3a2f6db5d6ff11ace380a4c41ad6406f4d9b1 --- /dev/null +++ b/video/SQpnEfv9WH_39017055.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:63baf362dfee5584baade4bbc2a8bba31544d79a1be3202cd63f5606049c2a5b +size 1987810 diff --git a/video/SQrHpTllXa_39018722.mp4 
b/video/SQrHpTllXa_39018722.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..577cacc06ce17ae01ec5e2aaf1a1d4ab4d11396a --- /dev/null +++ b/video/SQrHpTllXa_39018722.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:627da3a104eadef53cb85105a2f40532cc57ca85c5a4559d294cbde22ab6b87d +size 3119898 diff --git a/video/SSCtCq2MH2_39026792.mp4 b/video/SSCtCq2MH2_39026792.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fa3a71a51440db4a7e220374f7110d5da236b19d --- /dev/null +++ b/video/SSCtCq2MH2_39026792.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:247050630f4461ce327e6df65a766eaf0d8f1d9af28246e90ceb5f055453f94d +size 849262 diff --git a/video/STrpbhrvt3_39025815.mp4 b/video/STrpbhrvt3_39025815.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e81e05f6b59cf8398b51a4222ef213aaf038e1db --- /dev/null +++ b/video/STrpbhrvt3_39025815.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:85a4733605dc1c689a31902622a903aceda5031b76fd27d994c25333ebd6a68d +size 2793992 diff --git a/video/SXbyy0a3rY_39028420.mp4 b/video/SXbyy0a3rY_39028420.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e2ba879068a156896a8447665ad330b9cb34636d --- /dev/null +++ b/video/SXbyy0a3rY_39028420.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bd73e9a89d5b16c99d0011e85b78c17c38a865a76035dbba045ef46b0dce34f7 +size 2968980 diff --git a/video/SXy1nVGyO7_39024845.mp4 b/video/SXy1nVGyO7_39024845.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..89314cc9be6bc8076bcba7b9abfc13923082bbb9 --- /dev/null +++ b/video/SXy1nVGyO7_39024845.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d122acd4354a4d7b9f61aeb675089b1075485f229d22d7914d4cdebb41ed6f80 +size 2604827 diff --git a/video/SZzQz8ikwg_39017912.mp4 b/video/SZzQz8ikwg_39017912.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..589cb1c410f058d61d4059c4087a31b2b6e53a51 --- /dev/null +++ b/video/SZzQz8ikwg_39017912.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b28a54a3e392b5a710d56848f9574dd556cbf39cf69952429dddd91fe081425 +size 2741899 diff --git a/video/SdLOs1FR4h_39025479.mp4 b/video/SdLOs1FR4h_39025479.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5a6cbd249c31196fe9a7f2a88cc1130b73b1d75d --- /dev/null +++ b/video/SdLOs1FR4h_39025479.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1442c2e003d217ae6ef2305962a17322847e6ca3e5d3433e6439b3afda26cf56 +size 2101916 diff --git a/video/Shwtw8uV8l_39024807.mp4 b/video/Shwtw8uV8l_39024807.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bffae5cc51af982a43ba323f5de3382c582a10b7 --- /dev/null +++ b/video/Shwtw8uV8l_39024807.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:79e85d17b1e3cd3fa624d96590affd29c7710a2fe4cd8b17b579b1d942c9c850 +size 3021456 diff --git a/video/SjQ1iIqpfU_39028493.mp4 b/video/SjQ1iIqpfU_39028493.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d0cd966df2518f872c30bfba38eb88277f856888 --- /dev/null +++ b/video/SjQ1iIqpfU_39028493.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea8ce792567bef4442fee37c4d17e789986aba5439d4a560e422aac5b52d9378 +size 2671857 diff --git a/video/Skv26JteFz_39026659.mp4 b/video/Skv26JteFz_39026659.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..83d1a266e1fa7a010f4a54b2c7772c91d08d59c8 --- /dev/null +++ b/video/Skv26JteFz_39026659.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8ac73ed42de7035ba783ce0abb3938727f1e6e85b5c095e03cb3d87b4ec7c43 +size 2339449 diff --git a/video/SlDx451MjC_39027148.mp4 b/video/SlDx451MjC_39027148.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d07d5359dd2a51e0d2523fceb18da2eeee6465c0 --- /dev/null +++ b/video/SlDx451MjC_39027148.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ef87854c992ff1c2d35ac7b2811e4aec6a630b0cc189852b94ed48b4c2c025c8 +size 1731477 diff --git a/video/SoYCqMiVIh_39026928.mp4 b/video/SoYCqMiVIh_39026928.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4798f63850d4da1356c90a176cb6fc02113ffb11 --- /dev/null +++ b/video/SoYCqMiVIh_39026928.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f019757f916d518fc9a031bd6fda02b24bfb06d0d0a378267d18c279a783c423 +size 1770721 diff --git a/video/SrFbgIjb53_39028036.mp4 b/video/SrFbgIjb53_39028036.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cdb570cb217aff2e80ebdb1fd7836f98fca7c43a --- /dev/null +++ b/video/SrFbgIjb53_39028036.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4c3fe01701c8815382ee6b56375edf516a61a26344b3126428d727a494f11b0 +size 2824057 diff --git a/video/Ss7l98DVvD_39027953.mp4 b/video/Ss7l98DVvD_39027953.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d54cef8702b7809fe054c87c8b951221eabbfd68 --- /dev/null +++ b/video/Ss7l98DVvD_39027953.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0e503e4be6bfbd3ac68467adf259bc44ae4da802b1eb7697938493e20a00b57 +size 1727770 diff --git a/video/StapcUWm9q_39026706.mp4 b/video/StapcUWm9q_39026706.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cfb5fa52de0a786bdecc0190ca2cacd17e868977 --- /dev/null +++ b/video/StapcUWm9q_39026706.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f414ee76bc5200e72ad3b3c8f2f7b880a46f0592e7b4d11c3f09ea5fbc709a1b +size 2930858 diff --git a/video/SuLxkxCENa_39025963.mp4 b/video/SuLxkxCENa_39025963.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..99957dbbdc68fa2964edc7df28218f7b8607e3b4 --- /dev/null +++ b/video/SuLxkxCENa_39025963.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e95efd21d3c4613af8b1c68d3e95a83ef4abcdd75cfc45ad7757fc16c36695ce +size 2380912 diff --git a/video/SvmJJJS0q1_39027843.mp4 b/video/SvmJJJS0q1_39027843.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..47ea6f197e997eb33843084bf01f12c09f220bce --- /dev/null +++ b/video/SvmJJJS0q1_39027843.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e51eb4d510e75910d174953ed8e8e2ffd1cea547be2b3e3ecf74a4e1f1df180e +size 2727086 diff --git a/video/Swh8LxuycA_39026043.mp4 b/video/Swh8LxuycA_39026043.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1e9977dfcfdcd9d51f40f27d36a4e01fc75cb115 --- /dev/null +++ b/video/Swh8LxuycA_39026043.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:009ab35d9677978d0a1001931ca0d108e66eb5429d7fe04a33726d8d5ab226fb +size 2486429 diff --git a/video/Sx7BIiPzys_39017901.mp4 b/video/Sx7BIiPzys_39017901.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..cf57c498b6c092c3aaa29b3eeb59d44ce246239f --- /dev/null +++ b/video/Sx7BIiPzys_39017901.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:63bf1868ee46930bc6e85233b3ff08e033e16f454749e40e7113274c908ea7e0 +size 2679620 diff --git a/video/SxRblm9aMs_39024497.mp4 b/video/SxRblm9aMs_39024497.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a36f07fcd2015393f4394bbb26f8c6f1c014ac67 --- /dev/null +++ b/video/SxRblm9aMs_39024497.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8cc5e8a5c6d3c08cdd62e1786bdd5952709c0e93a164a67985039343f059f865 +size 2007761 diff --git a/video/SyMhGilvCv_39026173.mp4 b/video/SyMhGilvCv_39026173.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ec4d42355cfd66834b013550443b884fe92a6352 --- /dev/null +++ b/video/SyMhGilvCv_39026173.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9853eab35c7a4179ee56dca13d21d0048ce642c83977672c9a5b7cfe714c000e +size 2476963 diff --git a/video/T0e4Nw09XX_39026089.mp4 b/video/T0e4Nw09XX_39026089.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4a96053ed4db2a058549a1a1cb0e8001626f046f --- /dev/null +++ b/video/T0e4Nw09XX_39026089.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:10c8b686324b11fb454d8f535309d4f8ce0eba806d9e4ba0dd6f7635de252698 +size 2909242 diff --git a/video/T0glCBw28a_39025285.mp4 b/video/T0glCBw28a_39025285.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a8e3a64a6861e4dc43d13753a54a69421253c7f4 --- /dev/null +++ b/video/T0glCBw28a_39025285.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b21193bbf8d2400e1bcd7bc633a3a4b43f737f28fd3736b336a3edb7b3945b71 +size 1697168 diff --git a/video/T56j6aV8Oc_39027134.mp4 b/video/T56j6aV8Oc_39027134.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..90d28e5c5b247e2ed116c5a053cd51e7f0004623 --- /dev/null +++ b/video/T56j6aV8Oc_39027134.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30efd6f81f10788924a636255f11cd151cc8c7d3066a8de6dc1b27149bcda030 +size 2269669 diff --git a/video/T7dS1Ghwwu_39026035.mp4 b/video/T7dS1Ghwwu_39026035.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c9c06106912df4e614cf765066f6410fce644f15 --- /dev/null +++ b/video/T7dS1Ghwwu_39026035.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:653d7506dd03092b40c7d61668b5918893e6152631f4e176fca5ceea09216f7e +size 2401466 diff --git a/video/T9GbbWbNQG_39025421.mp4 b/video/T9GbbWbNQG_39025421.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..54887308e717410d92b38232a12fe0f5e80aee4e --- /dev/null +++ b/video/T9GbbWbNQG_39025421.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5820186f2a33f6e2a654ab50b0e7b0cabf0db80ce700af51720f1b6f1488c42a +size 2281886 diff --git a/video/TA5zPfH8iI_39026470.mp4 b/video/TA5zPfH8iI_39026470.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f50db47c26511bd87d7bff18085c2901e6c598e7 --- /dev/null +++ b/video/TA5zPfH8iI_39026470.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:26b6d38a950314df4f9dc0eb342df0764128ef699153bd6fe622cd867f2c16a9 +size 2581929 diff --git a/video/TALJtWX7w4_39028299.mp4 b/video/TALJtWX7w4_39028299.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..733b250c0aef6eec95b6c5d166a9c168846f4f77 --- /dev/null +++ b/video/TALJtWX7w4_39028299.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54a99cc916ea8fe8d04340a08000a9f62e167c22bceb8678a678f4c871b54eb7 +size 2775227 diff --git a/video/TFAG9UznPv_39026037.mp4 b/video/TFAG9UznPv_39026037.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3aaaac8869de2829c45fa7873bbd00d49f71d408 --- /dev/null +++ b/video/TFAG9UznPv_39026037.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c6222daec26de2c3c0eb5ca8b93e51e45f4f2434f3b703169f724de4c1ec2d57 +size 1815548 diff --git a/video/TFKIfhvdmZ_39017899.mp4 b/video/TFKIfhvdmZ_39017899.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..962bb72a2e4a4e8a44544eea4d2916d87900420d --- /dev/null +++ b/video/TFKIfhvdmZ_39017899.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:553317f8444a4bebc3244326ea72fefde5ef3db590378713b55834c01c660492 +size 2757072 diff --git a/video/TGmwp9jJXl_39026992.mp4 b/video/TGmwp9jJXl_39026992.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f690b90e2324034ba60cfb03bd265e7306a6bc6b --- /dev/null +++ b/video/TGmwp9jJXl_39026992.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b25e23d010e72382a4deac97635f5c4fbf5e90804ccd0cd6762afd30ed81bec9 +size 2536922 diff --git a/video/THJEa8adBn_39018650.mp4 b/video/THJEa8adBn_39018650.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..45aa189a2c63edb655c9a9cdafbea935a5991d0a --- /dev/null +++ b/video/THJEa8adBn_39018650.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0891f68ef55942e09d8c0edaf4d996c5b85df0599727e0e6908a64b6a9045b1f +size 2729146 diff --git a/video/THUBTfSAS2_39017898.mp4 b/video/THUBTfSAS2_39017898.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9da4dccf60992e8513c6c033622f65feaebb5f2b --- /dev/null +++ b/video/THUBTfSAS2_39017898.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:de4f605d94a11a08ad890e70cc47f04bf30b4195722d6f5124055841ba31dd0d +size 2339528 diff --git a/video/TIhiFqGOYC_39027784.mp4 b/video/TIhiFqGOYC_39027784.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fc2699f78f99341764fefa32c76d3cf74988781b --- /dev/null +++ b/video/TIhiFqGOYC_39027784.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8bbbd7f1bccf9fb3f08da628f2ca6777aef619bc7b6f3d5a4d187130c2658f3 +size 2982270 diff --git a/video/TJsknGasMy_39027398.mp4 b/video/TJsknGasMy_39027398.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a20d02469117ab99891bedd3f2ffaf562f26aac9 --- /dev/null +++ b/video/TJsknGasMy_39027398.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:850250aff424f1c7a83f4eadf6089664c6a3765b9dcf2f54b8e43b72b038a78a +size 2689698 diff --git a/video/TTrzgEZt9s_39017892.mp4 b/video/TTrzgEZt9s_39017892.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3fc97137177c42a95460fc161747991c7571030a --- /dev/null +++ b/video/TTrzgEZt9s_39017892.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:beb1525cf4a831dc3308f947a5ae8b8e23bfc7e7fd3dff65c7978245f807521b +size 2025657 diff --git a/video/TVDUVpgu9s_39018666.mp4 b/video/TVDUVpgu9s_39018666.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..118d8ab921a9528d2a8dc5d814d28867f7723726 --- /dev/null +++ b/video/TVDUVpgu9s_39018666.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a227458e26929dc0dc1cc48917d72092d48d4a002e1c6bd4ed1c9429502916b6 +size 3163649 diff --git a/video/TVbCKAqoD8_39025897.mp4 b/video/TVbCKAqoD8_39025897.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e3aef09137b8ca38eaddbba267a3e5aba463ef20 --- /dev/null +++ b/video/TVbCKAqoD8_39025897.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:080a2da798782b5ece2d185ad10f643a8e1dd7a5f1aa85cb89ee99f6aaa1ac4a +size 2186766 diff --git a/video/TWeVQ5meMW_39024623.mp4 b/video/TWeVQ5meMW_39024623.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2c6604dec4dd763c471cf61edac7d99d125b0639 --- /dev/null +++ b/video/TWeVQ5meMW_39024623.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7ff8a8d1670a919eaffe0ccd9c5181332d32809a16ee3c0a235b1be943a5ad88 +size 3184983 diff --git a/video/TXsRGrzICz_39028421.mp4 b/video/TXsRGrzICz_39028421.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ceb7b4905a9942e77f15e868d61f275ab528e979 --- /dev/null +++ b/video/TXsRGrzICz_39028421.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5bdd338afefa84e3a6c744a56b254a9af3b8f85ef45181021aca9c38ea8628b3 +size 2418015 diff --git a/video/TZ5k9IYBBf_39028614.mp4 b/video/TZ5k9IYBBf_39028614.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0b7f66ab903bb7a0a7aa85b7e6826db2eb88cdca --- /dev/null +++ b/video/TZ5k9IYBBf_39028614.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77d6f91dc1367c73270bcac9afb475044e947309d62897153965cef9483126ea +size 2048723 diff --git a/video/Tck41RANGK_39026703.mp4 b/video/Tck41RANGK_39026703.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..821b97b24b51d0e6ac705f1647c6a67c394d6caa --- /dev/null +++ b/video/Tck41RANGK_39026703.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:012c214fc881155f6f480d01b7148691c5b32731b088f27fc48a71be90d8e379 +size 2356963 diff --git a/video/TeBKVfhP2M_39024636.mp4 b/video/TeBKVfhP2M_39024636.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9f0ad51e14b6526a01bdf2de37774cd932879e08 --- /dev/null +++ b/video/TeBKVfhP2M_39024636.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8128d33194862d3b551a0f9bdb56b54e2d4658a2dc0a65a58cc2b897f67d42be +size 2916226 diff --git a/video/Thou1rKdpZ_39024786.mp4 b/video/Thou1rKdpZ_39024786.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..acb739ffee5af64803a515c0e1279181424053e1 --- /dev/null +++ b/video/Thou1rKdpZ_39024786.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7112eab19d6734789c462d13e2a1b6fecce30670f07f0714780e532a4a548181 +size 1829059 diff --git a/video/Ti3ciyqlS3_39027926.mp4 b/video/Ti3ciyqlS3_39027926.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1b1ec76561f51a224b018315abfcc89e9630192d --- /dev/null +++ b/video/Ti3ciyqlS3_39027926.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bfbb2a7abbcf10f2fc1bd3be152c01dc65d70fc14f76addedb9a5d6c80b76839 +size 2928545 diff --git a/video/Tlsdsb6l9n_39019165.mp4 b/video/Tlsdsb6l9n_39019165.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..fdf317791e31d0f1b2fb0bfd83b803089bdecd5e --- /dev/null +++ b/video/Tlsdsb6l9n_39019165.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0352ac4ff2d44283a866a073739734b6af9f63998a381dd3db9c68b2d7a8ee84 +size 2584074 diff --git a/video/Tpx9gcZVBf_39027948.mp4 b/video/Tpx9gcZVBf_39027948.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..25ebbdaeae3e1dedceccfdcc09c12f26559cdf8c --- /dev/null +++ b/video/Tpx9gcZVBf_39027948.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6449edb4b9c742455b197fac6231afe9df3262acbfd3afe73b9c09d4fd5050c2 +size 2788276 diff --git a/video/Tr0lPx9woF_39017877.mp4 b/video/Tr0lPx9woF_39017877.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..76639a87806e721639cd5fb092ef08e9daf2720f --- /dev/null +++ b/video/Tr0lPx9woF_39017877.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ffac659f87d2be49b8029fdfa4ac76a0e8c8f2eccfe05ba661b2b40e79dc1cf3 +size 2583500 diff --git a/video/TrXV4dMDcG_39025384.mp4 b/video/TrXV4dMDcG_39025384.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e277d7abcb215b8d944482dac25f6767396f223b --- /dev/null +++ b/video/TrXV4dMDcG_39025384.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:43df9048310ad07511af3efb3ac0d47ad8dda4da4c5efeee5cb85d816bc07098 +size 2634657 diff --git a/video/Ts95eXsPBc_39017876.mp4 b/video/Ts95eXsPBc_39017876.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2c3f0db26000ee0ade2adf39ce1c95ee8698ba79 --- /dev/null +++ b/video/Ts95eXsPBc_39017876.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0d017c0eca2b151969ee272170dd5dfe2d60b82109f85b683014f208e782fd59 +size 2508435 diff --git a/video/TskzCtpMEO_39017875.mp4 b/video/TskzCtpMEO_39017875.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..697dcc37617ff11b7f1a2ca6114d2e7e76991c16 --- /dev/null +++ b/video/TskzCtpMEO_39017875.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:855ca79bb22c6f7680a9020a2c2cf047afe26156ba56fcb5fe876b23d214afaa +size 2381469 diff --git a/video/Tt2xJaxDc4_39026626.mp4 b/video/Tt2xJaxDc4_39026626.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0a334ddb95ba8eb9b860ab2feb95cbd6601ac3de --- /dev/null +++ b/video/Tt2xJaxDc4_39026626.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:03f23e4f174bcb1b1d94ff15816aa8d3497b4f7565d9f6a4813fd5dd1fe1fe69 +size 2699018 diff --git a/video/TuspoNzIdB_39028893.mp4 b/video/TuspoNzIdB_39028893.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e8a0af4978aaf258d71ff42f16ae5988bdbad40b --- /dev/null +++ b/video/TuspoNzIdB_39028893.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:068d158c74fb80b573c59a228b073e3465a899b739720a4ce4daabd1168fd048 +size 2879959 diff --git a/video/TusuJSbRxm_39026786.mp4 b/video/TusuJSbRxm_39026786.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c5866794a98d5f22cec5a2bf3875868cb77f36d7 --- /dev/null +++ b/video/TusuJSbRxm_39026786.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c99acbdf4dc6b98274d4a10ccd800ae2ea7e406405e85424b823beea3786bc43 +size 3256326 diff --git a/video/Tvwf4Vsi5F_39017873.mp4 b/video/Tvwf4Vsi5F_39017873.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..bf12f068f47bcbf39204ff13938042ccad67f1de --- /dev/null +++ b/video/Tvwf4Vsi5F_39017873.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e50f1114e83688f1d91727ddb20aa79e23f6494243156f6f123f2a47d10c23f7 +size 2937465 diff --git a/video/Tw032H2onS_39025731.mp4 b/video/Tw032H2onS_39025731.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cd3e65049833a948253b53bd529ec1ea25b2ae42 --- /dev/null +++ b/video/Tw032H2onS_39025731.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fcdd1b18b277073a72e99d29e33f54244f2cdd6a8891098b7e9b7724ee223499 +size 1666754 diff --git a/video/Twqa0GFMGX_39028285.mp4 b/video/Twqa0GFMGX_39028285.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d64a9a6e6abd5f042208d1c38cc97aa7d15986ad --- /dev/null +++ b/video/Twqa0GFMGX_39028285.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:41c1fcfa1f72099b83335dd9309fc4fdc6147d5f3f13e47c10700a14badd1a11 +size 2411526 diff --git a/video/TxffvJMnBy_39026060.mp4 b/video/TxffvJMnBy_39026060.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..abb1ab2fde54f4fb4aaa04236e5ba37e9f022d09 --- /dev/null +++ b/video/TxffvJMnBy_39026060.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ba3aed230ca5b002d9a2aa2aad6b0a44623fc2ec5104bcc114416fe530ad53a +size 2530772 diff --git a/video/Ty25oVKTqj_39028842.mp4 b/video/Ty25oVKTqj_39028842.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5298d756992f89dc4a5114d3f4ed1980a5605b5e --- /dev/null +++ b/video/Ty25oVKTqj_39028842.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8755443b7620735a820d7dea66c71de1f542c245a844e458c5caf72ec151493 +size 2596819 diff --git a/video/TyFrPOKYXw_39018993.mp4 b/video/TyFrPOKYXw_39018993.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..66d86fbba16a11729bd44491df3c3eda603ef6b4 --- /dev/null +++ b/video/TyFrPOKYXw_39018993.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da7d4fdd408bbf5a9e838eba3d456f44fc001458f415a32e005043a0a578cb6b +size 3082036 diff --git a/video/TzoHLiGVMo_39018962.mp4 b/video/TzoHLiGVMo_39018962.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bc85a5b26ce6a6f0ab4ab5d2e9e9da7959815112 --- /dev/null +++ b/video/TzoHLiGVMo_39018962.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f22eab02ecbc63702a29582429e73aa9bda56f182d91776b27075e2f25e0346a +size 2316690 diff --git a/video/U3Rgdb4li9_39026457.mp4 b/video/U3Rgdb4li9_39026457.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a7df89d40db045f98c9c5416bac2c3df05c1e13f --- /dev/null +++ b/video/U3Rgdb4li9_39026457.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:28be8cde83f636e8edf9f7aa8717be02c1a8df6224f98b8ea54ddd220cf2940d +size 2772057 diff --git a/video/U4BC0GrFAz_39025420.mp4 b/video/U4BC0GrFAz_39025420.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..10ac6dcb1f7ebfe20ee9c3457bb84ed8bd1ef001 --- /dev/null +++ b/video/U4BC0GrFAz_39025420.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70a5f300465aa9df9efb302515fe0839f4f4191cac5536d7b30762e660318a04 +size 2233944 diff --git a/video/U4KldRgoph_39027100.mp4 b/video/U4KldRgoph_39027100.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..b91e6adc6b65807e1e9beec76197deefbbdaa561 --- /dev/null +++ b/video/U4KldRgoph_39027100.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4245657f8e98a8f9f64f79f3782dcb26feba58e3738bcb47d572876109984a5c +size 3057550 diff --git a/video/U6oQEzSp8z_39028065.mp4 b/video/U6oQEzSp8z_39028065.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7f061da7940c241b1217a97d1c71355b7d52f2e0 --- /dev/null +++ b/video/U6oQEzSp8z_39028065.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c96e73fa914357ad2de804d5a899138d9ba11b5f49e95459c1d10de18f90f981 +size 1596325 diff --git a/video/U9MzoDOKZu_39028569.mp4 b/video/U9MzoDOKZu_39028569.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6c8c1087572de374c8014df1866c5b799d7eec05 --- /dev/null +++ b/video/U9MzoDOKZu_39028569.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b964b4cae0d5d7e3575bc7660b26710d35819c578c42a71dcdf26eb4757d1d15 +size 2925377 diff --git a/video/UBVNwD3hPN_39019158.mp4 b/video/UBVNwD3hPN_39019158.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ad66f0417816882cbad6fffcd3a133a0617e91c8 --- /dev/null +++ b/video/UBVNwD3hPN_39019158.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d4d8b4d735a002827afc72fa5b92b7309575d33ed017d6d4e7a9665533c629e8 +size 2795866 diff --git a/video/UCfz492fM8_39017114.mp4 b/video/UCfz492fM8_39017114.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..05912c7ef7df6892df8d9d72c7a5da5cc092613f --- /dev/null +++ b/video/UCfz492fM8_39017114.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:85f5fff4bbf4653410920d0038c41a0f190b9a6ee2a2697e5ba51d91915a5768 +size 291810 diff --git a/video/UDi51I8K1p_39024734.mp4 b/video/UDi51I8K1p_39024734.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..329f07c82f0be706762a9a9d49e9ac5a99490d04 --- /dev/null +++ b/video/UDi51I8K1p_39024734.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:43ad60327e2795222f81b5d07fd55cd49301bcd9d963d4fce2843f470e6a00eb +size 2140761 diff --git a/video/UE6CeRMnq3_39026192.mp4 b/video/UE6CeRMnq3_39026192.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9e565ee56911d5ff979152868e07ea8302a4d6aa --- /dev/null +++ b/video/UE6CeRMnq3_39026192.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6ca44c88e09d6a0bafd743be24ca57c3d56107f415670786a90ecc41907702cf +size 2185992 diff --git a/video/UFRZHFYW8e_39024539.mp4 b/video/UFRZHFYW8e_39024539.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e33fb10c933b3f3223c132e0bdbe860a7793df04 --- /dev/null +++ b/video/UFRZHFYW8e_39024539.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e9b03fab33c2a58b37d1a012010054aea6e9f7f4eb45bb6b291d861bcda3259a +size 57477 diff --git a/video/UGlDVc0GTU_39024923.mp4 b/video/UGlDVc0GTU_39024923.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4a26bfaf9e3db9cd068fe67ff576eae1c7832714 --- /dev/null +++ b/video/UGlDVc0GTU_39024923.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:45bc0cd29e4e8908f94e1ad0f030b12539695834e6a274b105d7b05dafc45d27 +size 1785569 diff --git a/video/UMfcdRIotC_39019029.mp4 b/video/UMfcdRIotC_39019029.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0fd869ec2957b0d9360186ee56e95ce59c41a649 
--- /dev/null +++ b/video/UMfcdRIotC_39019029.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:affac9c5875dd2aab8e9c43038b274f3b9c0e3c02cc37f501c8ecd337e6a28b8 +size 2272139 diff --git a/video/UN7nXLeh9D_39024868.mp4 b/video/UN7nXLeh9D_39024868.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5c390df542d40545a3b1b1452c825c8751772b6e --- /dev/null +++ b/video/UN7nXLeh9D_39024868.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7f20f5a4b7a7caf68655cedb063b81e09b12430159ac2927c7cc58840a61f305 +size 1994954 diff --git a/video/UO7Mvch1Z5_39026996.mp4 b/video/UO7Mvch1Z5_39026996.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e9f95bc3834d5a591a530cd0e83acbb4a5533d9d --- /dev/null +++ b/video/UO7Mvch1Z5_39026996.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a2ba07c22174355eefcad40cf2f8d5642e1bebf2afdf20596d2da9ab9b1b5227 +size 2150134 diff --git a/video/UPvufoBAIs_39017866.mp4 b/video/UPvufoBAIs_39017866.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..44f95fe669966f7ab29753e6b235a58568557e1a --- /dev/null +++ b/video/UPvufoBAIs_39017866.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:edb2f000b850ed750a4443f7c32195bf1d528f97c49ee2d1afd76a9305d2b79f +size 2477354 diff --git a/video/UPxFYvHsyN_39028485.mp4 b/video/UPxFYvHsyN_39028485.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4548ebd98521b72e1e0a6a1aebf714f7693bf527 --- /dev/null +++ b/video/UPxFYvHsyN_39028485.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3f65dddd9c95f8e91913da2d24d581fa08f83186462f304003629681eb471c0f +size 2571284 diff --git a/video/URyeU8mwz1_39024357.mp4 b/video/URyeU8mwz1_39024357.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e55886c3663ede4ac26aa3d48ba76406516d2c02 --- /dev/null +++ b/video/URyeU8mwz1_39024357.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99f6cfd2b336f5da1a3dc45ff68cbc18caf4ae5b06d27e16120ce645f8d70ca1 +size 2652212 diff --git a/video/UTNZKl5BUc_39026571.mp4 b/video/UTNZKl5BUc_39026571.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..918580b14931695631388dcc650d3e66586eaa89 --- /dev/null +++ b/video/UTNZKl5BUc_39026571.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6db96e416a4d5d154b5944b81a85d9e96b963517ef3f0bc93446834d27a6f652 +size 1508666 diff --git a/video/UWUUVKtKeu_39026893.mp4 b/video/UWUUVKtKeu_39026893.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0558e858d94bfcb43092587a89feed1666758a1c --- /dev/null +++ b/video/UWUUVKtKeu_39026893.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c5f912af9926dc40efd7ec5db35175298e7123664fb48c5b1b40200479d4037c +size 2393008 diff --git a/video/UaJErAOssN_39024680.mp4 b/video/UaJErAOssN_39024680.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8e5c4546799c055404f567bf8a5af8d2f028fbc0 --- /dev/null +++ b/video/UaJErAOssN_39024680.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f0b4fdb5c736e012a23987ef3e0959f322a475e296ac60b73ae595307105fd91 +size 2286582 diff --git a/video/UahrHR5HQh_39024502.mp4 b/video/UahrHR5HQh_39024502.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6bca71fd1d3f53232ed4e6db350bf3f22c0073c6 --- /dev/null +++ b/video/UahrHR5HQh_39024502.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:2a49df503fc0985044ad15ee1fb34019dd48938465345dad839fbfa9af35970e +size 2691854 diff --git a/video/UdXE5V2d0O_39028228.mp4 b/video/UdXE5V2d0O_39028228.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fe693df1ab4147fadd693c8f459991c3f3e3e43c --- /dev/null +++ b/video/UdXE5V2d0O_39028228.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7f0316bd68da825211b6df0735321e91c992e2dc8bc9d8173cdbe0ff576ace1a +size 2336757 diff --git a/video/UddVRqTrjt_39027255.mp4 b/video/UddVRqTrjt_39027255.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f063561fec382a0b6d061e76071b7968fcd54dc9 --- /dev/null +++ b/video/UddVRqTrjt_39027255.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6e62e0644f50451c32f90539811d4376f299aaf521bf35fc0bd24baf0457f310 +size 2321121 diff --git a/video/UdxpjKO2F9_39025677.mp4 b/video/UdxpjKO2F9_39025677.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5d89fc8ab9792e211f030eeefe0025e405eec17d --- /dev/null +++ b/video/UdxpjKO2F9_39025677.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0aaa7c89a1cc6a959c420fdd29978cb17ce15165aff80e9283ecb543cbfda70d +size 3092844 diff --git a/video/UekHycx0lz_39026129.mp4 b/video/UekHycx0lz_39026129.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fa3881a10d0ceef02e06b703a6b11e8a76bc55bd --- /dev/null +++ b/video/UekHycx0lz_39026129.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9195090fc752add7f90b3ac0be8399febc52cd11f57bd714b001663e11d4c44a +size 2902031 diff --git a/video/UmW9BYj761_39024938.mp4 b/video/UmW9BYj761_39024938.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..95187dfc3547b17f0953f8e29aaa9783babbaeb1 --- /dev/null +++ b/video/UmW9BYj761_39024938.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:52779b59047d359802eae534416664e2f5a341484e9bc830a99b5ed2f4ff8c31 +size 2172563 diff --git a/video/Unb5CVPtae_39019020.mp4 b/video/Unb5CVPtae_39019020.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..019bc8667834a359d8906479dad6ce5ac9e27278 --- /dev/null +++ b/video/Unb5CVPtae_39019020.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7426e07bb11b349fbb4be05a7f058fca51773675a22897ca0b79821eaa55f86 +size 2254032 diff --git a/video/UqvEHAnCJC_39026624.mp4 b/video/UqvEHAnCJC_39026624.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c0daf7752d182a36e1144021125a5bb59435bcd6 --- /dev/null +++ b/video/UqvEHAnCJC_39026624.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84c4857c406e1814d18a40d0ffb907e8ece0df98c782fd740f087a3c1cb30c90 +size 1237213 diff --git a/video/Ur9f4hNIpN_39025403.mp4 b/video/Ur9f4hNIpN_39025403.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..89d66f6dec2a8a3a83d8ab7d224d336b431317cb --- /dev/null +++ b/video/Ur9f4hNIpN_39025403.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b2afb08482d6ff65b5d36cd8ca9bcc52765973608eb7c73653bd9c6c7e4a05f5 +size 3166226 diff --git a/video/UtTjgMDTFO_39026923.mp4 b/video/UtTjgMDTFO_39026923.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..85740f4cc52d133e6d9bed139afda8e24d74fa7e --- /dev/null +++ b/video/UtTjgMDTFO_39026923.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:2e7f57affbe37d03f4c26680c8231d5b44da6d182e75e1a6db688c340847774a +size 2792779 diff --git a/video/UvbpbEhGaw_39028490.mp4 b/video/UvbpbEhGaw_39028490.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f2574d52791eb5d76797d2857151304a3a5438d3 --- /dev/null +++ b/video/UvbpbEhGaw_39028490.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:828135832bbb700e09f91170bdf872be8d7077965e63dc7edf1b94d2b002840d +size 2049205 diff --git a/video/Uw2eJOI822_39025446.mp4 b/video/Uw2eJOI822_39025446.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ba2bef25de4e34ee4c5c7dedb2ee5c417771c7d9 --- /dev/null +++ b/video/Uw2eJOI822_39025446.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd765979821c521d54780c232e6f2c321e3ecd9ec238dba47760425f8a339374 +size 2805761 diff --git a/video/Uw8xvFqVAE_39019125.mp4 b/video/Uw8xvFqVAE_39019125.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0d4160d9b361a3b7e69f70fdbd9c73711dc42f19 --- /dev/null +++ b/video/Uw8xvFqVAE_39019125.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51ac196c8cfdfa96203a1792531e8cb0f1a160a50c5b5cd6f63f367478a1f9b4 +size 2358074 diff --git a/video/Uymv9ThB50_39025461.mp4 b/video/Uymv9ThB50_39025461.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..56baab59b39c6b8893446bf3a4667c9ea0ef7c13 --- /dev/null +++ b/video/Uymv9ThB50_39025461.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:316013e217a2b3afd6c15c2610ba821ef84209d41ad12e801dd0f9696acb2007 +size 2280388 diff --git a/video/V0oJaLqY4E_39026619.mp4 b/video/V0oJaLqY4E_39026619.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ddf4d2bb98e2de2dbb19aa772fd77f6f084e638b --- /dev/null +++ b/video/V0oJaLqY4E_39026619.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f5c6f918307add0715b39add215476b04749b5829f02c6665d298d98911c62c +size 2799934 diff --git a/video/V1GM9xDvIY_39018893.mp4 b/video/V1GM9xDvIY_39018893.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..12dac8aabcd57b1875363f8ccd47ce0dbad1801b --- /dev/null +++ b/video/V1GM9xDvIY_39018893.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5433620615fc82265c642af26b8d2109f3891071c6e7878086c2b56eac014d03 +size 2861193 diff --git a/video/V2MBWYXp63_39025119.mp4 b/video/V2MBWYXp63_39025119.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..def9f06bb042a8991ecc4f7c33db9950d174b554 --- /dev/null +++ b/video/V2MBWYXp63_39025119.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b9b3c8653c83e4b56c90d00249ab9ee7188d61bb9c042ebb81ee6d849f998ab +size 3195480 diff --git a/video/V3QZCM1AQv_39025494.mp4 b/video/V3QZCM1AQv_39025494.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e8687b9af80d179e1475105c82955923bb83d266 --- /dev/null +++ b/video/V3QZCM1AQv_39025494.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5588f170594e9ee0cddb1e17ff7d98830effb76829f6c801db1eab1dba3523a8 +size 2948495 diff --git a/video/V4tzn87DtN_39028643.mp4 b/video/V4tzn87DtN_39028643.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d6f9353a5d6532244ab0c5552d728ffe3e390e59 --- /dev/null +++ b/video/V4tzn87DtN_39028643.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5c3c798235b8e4cfb43a9b3366c70cdc7fbcc0a3b6ae1c4e50aa7b5fcab83c7c +size 
2976423 diff --git a/video/V5tdi14ple_39017858.mp4 b/video/V5tdi14ple_39017858.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d146d51485c9b829a1cbfd81f756c750b92a8ccc --- /dev/null +++ b/video/V5tdi14ple_39017858.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:702139916b103e24c029bdc15c17edb549d7b157ba25b54c1735eca77d860a7a +size 2644196 diff --git a/video/V6hrg4O9gg_39027257.mp4 b/video/V6hrg4O9gg_39027257.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..330a87fa64a5373fa9eb8b07fc326a1e73df5e0f --- /dev/null +++ b/video/V6hrg4O9gg_39027257.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d455047a74ab1de12e0a1398d986f15fa2d6fe473b0830a7bf7af02661f34b93 +size 2320033 diff --git a/video/V6qdb1AgsM_39026124.mp4 b/video/V6qdb1AgsM_39026124.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7c49e87be62839fa2568b2a951d5eb816e727b39 --- /dev/null +++ b/video/V6qdb1AgsM_39026124.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a1e9bdd94439259b34a0aa890dab008bb5d8a3a1b8492f80f297c195a9eb196 +size 2858060 diff --git a/video/V6w7keoTqn_39027212.mp4 b/video/V6w7keoTqn_39027212.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4f1eda8eb05f8e0809b898ad9d83c97ebab67095 --- /dev/null +++ b/video/V6w7keoTqn_39027212.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b3e4419e7210392f30df0e305d2cc786501b6e1a5ea552c4060b7af66b1171e +size 1816237 diff --git a/video/VDPZe0NbpE_39024626.mp4 b/video/VDPZe0NbpE_39024626.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6ed4c71f7e8bcc7be78cba2c3bfc58adeb254948 --- /dev/null +++ b/video/VDPZe0NbpE_39024626.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:43bd5cbe602d91314b1817a03c696a63bf7eb68a26130557b7febeeb85e726eb +size 1896733 diff --git a/video/VFRyS7Wx08_39025515.mp4 b/video/VFRyS7Wx08_39025515.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..836186f49c1b96c1bb159043a4e1e2a6a3d528b9 --- /dev/null +++ b/video/VFRyS7Wx08_39025515.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:94b3f2f744736592dca63395ac663a9f24ebe07a6d00c46c90b0d79e3e76314b +size 900087 diff --git a/video/VFqzxhINFU_39027908.mp4 b/video/VFqzxhINFU_39027908.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..57852e8fb49a3c32da728443eb7cf5b9739d3a55 --- /dev/null +++ b/video/VFqzxhINFU_39027908.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b9e424ea1e3cf0d0e773f7d437be2ffca8d2a5d594b79bd137272fcd6c13bd61 +size 2574979 diff --git a/video/VJMYOfJVC2_39025259.mp4 b/video/VJMYOfJVC2_39025259.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..624a627efee7f8bfdbab703eab1c19d259d116ff --- /dev/null +++ b/video/VJMYOfJVC2_39025259.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d9509a99aeaa56dea1b343fe8ef7d0ed9beb2d34e48033541ba6886cf7f1066b +size 2479266 diff --git a/video/VKt0K3iOmO_39027019.mp4 b/video/VKt0K3iOmO_39027019.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..70e09ebc21338db27f08ed973e57c6304a81b40e --- /dev/null +++ b/video/VKt0K3iOmO_39027019.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:37289fcc1b2245fb475fbc88a97f8d1a089a0ebacdb177134f95732bdb69a9b2 +size 1628452 diff --git a/video/VLw8ZyKfcm_39028707.mp4 
b/video/VLw8ZyKfcm_39028707.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7bb4f787a7ff555e37e0d8ba99dfc1b258d43c00 --- /dev/null +++ b/video/VLw8ZyKfcm_39028707.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2ddade221b529f100fa9d59250a662c85b1d05f68794a1d6aa518dcbe64fccb2 +size 2555167 diff --git a/video/VMsHnv8cVs_39026285.mp4 b/video/VMsHnv8cVs_39026285.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e9981f491bf1f452155820166820e1c12b21f474 --- /dev/null +++ b/video/VMsHnv8cVs_39026285.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0cfd6e6696c401c56c23e3f1c29c070402afacb601b534a39c227dab8277af2f +size 1905708 diff --git a/video/VNbQbv658b_39027677.mp4 b/video/VNbQbv658b_39027677.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..188a902e156631ecbb0e40ab6c3f0aeed06cb0dd --- /dev/null +++ b/video/VNbQbv658b_39027677.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:59da82ed750c2c303694da3a7c255037624cb86869aab44b11d98ce393be68f7 +size 2760271 diff --git a/video/VOVyeOzZx0_39024999.mp4 b/video/VOVyeOzZx0_39024999.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..52a798ea39d3ea3b4f07779929e406c0fa4680ac --- /dev/null +++ b/video/VOVyeOzZx0_39024999.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a636dcf49e2b3f9a969092c29901703265f65d9fab3abe44a9aeff13ee0f5c8 +size 487416 diff --git a/video/VQyb9LKmUH_39025750.mp4 b/video/VQyb9LKmUH_39025750.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2826d8eb8684bd142a55275a4de24c2d68081625 --- /dev/null +++ b/video/VQyb9LKmUH_39025750.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d01982d3a8bdb79aaf4003ec4140b55cd36dc65802de029a3368a301667efb2 +size 1275951 diff --git a/video/VSz9na5Jtl_39028495.mp4 b/video/VSz9na5Jtl_39028495.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7b706db2f78500a3064eba27e2b52380cef64f5b --- /dev/null +++ b/video/VSz9na5Jtl_39028495.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:583d862d0ea7418dc4222018b1a868bc9eb5d4c8c684e9262bf82d6cef553de0 +size 7776 diff --git a/video/VTYg5ykEGS_39017856.mp4 b/video/VTYg5ykEGS_39017856.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d6950c81395c1b5103dfbe06d8fd1cbfa896c1a6 --- /dev/null +++ b/video/VTYg5ykEGS_39017856.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ef21e47ad3c104082e6ebd0fedf9c0ce827b22576a99235e01ef4759ac67e2ae +size 2454419 diff --git a/video/VUWvVvNi6r_39027217.mp4 b/video/VUWvVvNi6r_39027217.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cf8229ef8584eb6a41be9499398588d5c15a57cb --- /dev/null +++ b/video/VUWvVvNi6r_39027217.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bcd441aa54f3f4445f73267afdc87da6f416266b52aff3adce1be0d38f988ef5 +size 3105546 diff --git a/video/VUgXAWOCQz_39024695.mp4 b/video/VUgXAWOCQz_39024695.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e7933a2ad6838fa667b32cf78d0f496fbfa1f998 --- /dev/null +++ b/video/VUgXAWOCQz_39024695.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:38c2b9218575fdf79090ffe01c7919b874cdceafbd6f13f3dc756e1ec7bd4788 +size 1797263 diff --git a/video/VXJVNdmXO4_39025000.mp4 b/video/VXJVNdmXO4_39025000.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..2304c15eb6fb3f6b1fc43192535191d1749a2260 --- /dev/null +++ b/video/VXJVNdmXO4_39025000.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec033ebcd3d3f18f5cb13a433432a30c390a321903b2510b9d7d65ba83f63305 +size 2143812 diff --git a/video/VXxj3XZ1X8_39025644.mp4 b/video/VXxj3XZ1X8_39025644.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..70fd73649a2570df2f67f2382c2aa7d763485f5c --- /dev/null +++ b/video/VXxj3XZ1X8_39025644.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a5c2e23104bb43e2d9cb4825ad2c5fe4790180ef6fdd2e7f98b569e37ee4783 +size 3130516 diff --git a/video/Vhh7ONtfvV_39027194.mp4 b/video/Vhh7ONtfvV_39027194.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b59b97bfb8ce9550ea7306ce9335119fc7fbbf22 --- /dev/null +++ b/video/Vhh7ONtfvV_39027194.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1fcb15df72caef2f53ca5b280a330f44ab29aed8980430a49dc33c5843b96bbe +size 2506652 diff --git a/video/Vi8AepAXGy_39028019.mp4 b/video/Vi8AepAXGy_39028019.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cfe5e3f3a8ee8a65d868fae1901e533b06a6e527 --- /dev/null +++ b/video/Vi8AepAXGy_39028019.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b2565d1e757c4bdcb2afb8adb8bb5da16c913e220dbbe04754ae3bca7c3628d4 +size 3815839 diff --git a/video/VikufBLOW1_39027768.mp4 b/video/VikufBLOW1_39027768.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ef37731c8f4acf64433017c9f5b92006ff04857d --- /dev/null +++ b/video/VikufBLOW1_39027768.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4677d0ae2e5057a829c816107b0eea46280e3094a6b135e81644cf6d3d5469c6 +size 2334149 diff --git a/video/Vja3ecieXY_39019113.mp4 b/video/Vja3ecieXY_39019113.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c68c80afcfb0502d1c41739482f5e4407fa15f8f --- /dev/null +++ b/video/Vja3ecieXY_39019113.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:91b0dbf4c10c90ce50e763ee28c6e48cc5b74680494c4ca453d46155a5d7ad8a +size 1890443 diff --git a/video/Vn0FWRImra_39027477.mp4 b/video/Vn0FWRImra_39027477.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4f7e235ea29a694bba2fdcfc4d5c8481c9c33fd0 --- /dev/null +++ b/video/Vn0FWRImra_39027477.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:18308228cf3a08fa4085ed6d4ed415fed74787bb4c1e081d0d9d00669a0b73d0 +size 2415985 diff --git a/video/VoLDkQ6yR3_39017848.mp4 b/video/VoLDkQ6yR3_39017848.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..79d9fe0c9e566e9bb9a3a01bd864f1f5894b49a1 --- /dev/null +++ b/video/VoLDkQ6yR3_39017848.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bd48bf1002e705bbb600f663f23c21d88a47fe5de35891f85178ac4700f4584e +size 3063147 diff --git a/video/Vq2kzpig8v_39024877.mp4 b/video/Vq2kzpig8v_39024877.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..23d2d12be4abd01fa4420461bfb298cac842aead --- /dev/null +++ b/video/Vq2kzpig8v_39024877.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d62c313dc7a98fcf22e0467a4ea0343536196c1aee80415e8f3bf2dc7c42fb1b +size 2651979 diff --git a/video/VqFz7iTGcl_39028664.mp4 b/video/VqFz7iTGcl_39028664.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..fdb882c8d692d5d08446ded015d6754857bb0c88 --- /dev/null +++ b/video/VqFz7iTGcl_39028664.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:092d8a89be680fc7f7eb052eab2e401311e4ba4999744311e00c95933368dc16 +size 2631972 diff --git a/video/VqkAKQibpq_39027411.mp4 b/video/VqkAKQibpq_39027411.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..10bdab1301afbcda0146a52dbfd00d440014d360 --- /dev/null +++ b/video/VqkAKQibpq_39027411.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d71343118a97f97d1a1315afb62dec0501feb821b9f5590dc4201df6c953d36b +size 2915395 diff --git a/video/VqxODXhU4k_39024715.mp4 b/video/VqxODXhU4k_39024715.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ece5c216dea4470c5e0030bd5d3825e3fca75be8 --- /dev/null +++ b/video/VqxODXhU4k_39024715.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e36bdd39136b60f233670c26b80b489cf833f4acc29db32121c607bb902aca26 +size 2016403 diff --git a/video/VrVx83BkQX_39024817.mp4 b/video/VrVx83BkQX_39024817.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2e6a898cb7020b21039f2189a3c68ea765835072 --- /dev/null +++ b/video/VrVx83BkQX_39024817.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6f315451e7212fca3058807faacfbdf60bfd944dfcd090ba10f7737865ea3d93 +size 2901610 diff --git a/video/VwUTz2pOnD_39024962.mp4 b/video/VwUTz2pOnD_39024962.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..938eb5c8d736d48584e2a6e2c325a5f308dec511 --- /dev/null +++ b/video/VwUTz2pOnD_39024962.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fea0b3faf6d1f6be7c636eafc5392ec543e4c383228a6e7480aa00edec86935e +size 1910192 diff --git a/video/W0okTgsPvM_39026142.mp4 b/video/W0okTgsPvM_39026142.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6b9743af7181fc32c7dadec1002364f98f65a5e8 --- /dev/null +++ b/video/W0okTgsPvM_39026142.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f18eee40baf962b609bcc1750f6559086a15b861963c276b80dba20d34f7eba +size 2432125 diff --git a/video/W0wq9njGHi_39025165.mp4 b/video/W0wq9njGHi_39025165.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7b6f5db83bf427aead395a0ef9b68b343877eb37 --- /dev/null +++ b/video/W0wq9njGHi_39025165.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a5871c9c46ee54a28bf074cf103032ccbd14af2bec21057097194e1e2d3b5be +size 2325338 diff --git a/video/W2d3LZbhhI_39017846.mp4 b/video/W2d3LZbhhI_39017846.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4b5343c0f11025dc16ab649eababb590b4bc38c1 --- /dev/null +++ b/video/W2d3LZbhhI_39017846.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8881c3775888fd643c9df1caecff0a2b1a14b1da1be08eda31fe580dd73facd0 +size 2941095 diff --git a/video/W4pIBQ7bAI_39026230.mp4 b/video/W4pIBQ7bAI_39026230.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4516ef5d5a34c179a18b0fa1a7925902e6b97cec --- /dev/null +++ b/video/W4pIBQ7bAI_39026230.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23972afc2e22a619f533d60aeaadc3b897195a496dbc6b7c7de32eac07839bcf +size 7776 diff --git a/video/W5U3XB1C11_39025412.mp4 b/video/W5U3XB1C11_39025412.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c68eddabb1deea8740ab415abb20ce2f0601def5 
--- /dev/null +++ b/video/W5U3XB1C11_39025412.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:52a594da2caf8ea022846289a317900924112a91885eb477159bbbdb953ad098 +size 2244846 diff --git a/video/WAiqLGfqX6_39024804.mp4 b/video/WAiqLGfqX6_39024804.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3a048985325e412b250e32793ab71de844b1d9ea --- /dev/null +++ b/video/WAiqLGfqX6_39024804.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39a45542e80a1eaaf768de279166bd6bdc3a2c1cc9fb42e69681d950f6d7616d +size 3073892 diff --git a/video/WBLPlszJI5_39024472.mp4 b/video/WBLPlszJI5_39024472.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d1f9599fbc6ec2c5c180fbd51714aa9a5ebb2b4c --- /dev/null +++ b/video/WBLPlszJI5_39024472.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:78d76612c064fd48cd944439aa0b1ac20dac1c26b4223921ffbca8d7668ce599 +size 2064336 diff --git a/video/WCc440cUhX_39027829.mp4 b/video/WCc440cUhX_39027829.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1eee5d9147379dbb77f4a1635a2bb42d028524db --- /dev/null +++ b/video/WCc440cUhX_39027829.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9c0782941d4a54abc0c6452dc3788d9b253bc4a022cd36fe73deca1e7383bcbb +size 1417744 diff --git a/video/WCnJmb7cv1_39026289.mp4 b/video/WCnJmb7cv1_39026289.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9b0321588a235551fb7b31929ff810d9ec293ef0 --- /dev/null +++ b/video/WCnJmb7cv1_39026289.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d26e48ebcb5f1126fc3913bed94caaa8a8d9cc9f4554cdf37c7a23a7b34963e4 +size 3338943 diff --git a/video/WEf2LT8NtY_39027205.mp4 b/video/WEf2LT8NtY_39027205.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..77ff4101ad4854c959a64775bae4efda811768e9 --- /dev/null +++ b/video/WEf2LT8NtY_39027205.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71d054a4f6d7b7fdbdd184f78734a82aa7d077117ec00b3cbbd37526e8c72742 +size 2983516 diff --git a/video/WEoOreP0n5_39028407.mp4 b/video/WEoOreP0n5_39028407.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..64bb33ea0cba61b3392097b5be6e3c58ae5785bc --- /dev/null +++ b/video/WEoOreP0n5_39028407.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:301231bd2ffef5bc903a9d2ffe552521dc2f96e36fa81d01fa2251f6dcb99787 +size 2232410 diff --git a/video/WEs4WMzndY_39025993.mp4 b/video/WEs4WMzndY_39025993.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ee4e189d3339cf641b76f98c6d784237fe716c0b --- /dev/null +++ b/video/WEs4WMzndY_39025993.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fdb4159840dd75a46bd49ef3e5400a1a29a539fdbf540c5a15ffe02e730b9ee9 +size 2340865 diff --git a/video/WH5blx5tZ1_39026876.mp4 b/video/WH5blx5tZ1_39026876.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1e1b00d6d539765cf359e22b12a9e306f3ebe1fc --- /dev/null +++ b/video/WH5blx5tZ1_39026876.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51b44eb01377f7e5c2a2f9851add537a5d8ad6067eb03516b579cd7c88c20d9f +size 2685022 diff --git a/video/WI2VpcBdnd_39028293.mp4 b/video/WI2VpcBdnd_39028293.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..17ea4c6dcde07b30ea5aa2ef2c224d1713b98090 --- /dev/null +++ b/video/WI2VpcBdnd_39028293.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:7c87550a3ea482ef23dbb77980ce135e810360f27608fd248ebfe498b6ee40f9 +size 2728380 diff --git a/video/WILLwyVmP8_39025553.mp4 b/video/WILLwyVmP8_39025553.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1dc14778634a87c4449cda6c530df04fbff28bfa --- /dev/null +++ b/video/WILLwyVmP8_39025553.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8767c3500f6a43f6768d3072da269dfabc034af492ff93ae7c79eb8dd7589289 +size 2025043 diff --git a/video/WJ04ZX8txM_39024854.mp4 b/video/WJ04ZX8txM_39024854.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2344b19182e1abf7e69435a07399fe2cf84255bd --- /dev/null +++ b/video/WJ04ZX8txM_39024854.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8348e62bc3c8c81d5190625cf357b3b03f41672b942ce6b3f9ffb6818e100c83 +size 2427304 diff --git a/video/WK2KxPAMQv_39025330.mp4 b/video/WK2KxPAMQv_39025330.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fdfbc0a299cc014512f4f35face0e7b9f82fb7aa --- /dev/null +++ b/video/WK2KxPAMQv_39025330.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e97f87eae870715f92e50dbe7bab666190a2c371349b1d3de6f76839112d795 +size 1535552 diff --git a/video/WNQjN5HzXt_39017842.mp4 b/video/WNQjN5HzXt_39017842.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d32ed08df1cf1e223715c9af356749144a5fb95a --- /dev/null +++ b/video/WNQjN5HzXt_39017842.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e3b28f27e0cb8e58a54b8bee0246819049bba175bdf8271d0b3b60daf72ca6dc +size 2813988 diff --git a/video/WPPC7FHtaM_39027115.mp4 b/video/WPPC7FHtaM_39027115.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f5dd454090e958fe49777ae900e6e9751676c5f7 --- /dev/null +++ b/video/WPPC7FHtaM_39027115.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c431824f462d8ed84fc8974a94d42b6832f39c67683d1a58904326c9c636a4ea +size 3379467 diff --git a/video/WPxa6OcIdg_39025326.mp4 b/video/WPxa6OcIdg_39025326.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..14b02af91ce5afd27472f6d43af2446d684fe966 --- /dev/null +++ b/video/WPxa6OcIdg_39025326.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2228f463e1d3b6006f452ba5897e474376ed77e10c60c25945e944d99a78f682 +size 7787 diff --git a/video/WQYHbr36Fo_39018933.mp4 b/video/WQYHbr36Fo_39018933.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d47ff057b21a5752ab99a18f79081a74b45afa59 --- /dev/null +++ b/video/WQYHbr36Fo_39018933.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50aa032e570adac40692055d2f98f89d87236c2ce6ddfed51432506616288500 +size 2822460 diff --git a/video/WRCFuoiz1h_39028084.mp4 b/video/WRCFuoiz1h_39028084.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8d73167b8d7736a1670658d03ebf416577d3dd8e --- /dev/null +++ b/video/WRCFuoiz1h_39028084.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b9575b116933ba2efb05dafe4631c3e042c3cbb283d2c609b5c52321e7d52da8 +size 2716109 diff --git a/video/WSsht66fbC_39027188.mp4 b/video/WSsht66fbC_39027188.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c91714ae76b083d97c491db0f1621b8befae9c2e --- /dev/null +++ b/video/WSsht66fbC_39027188.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:915ed5785a873ec307ac03fd2b2209e85f802fefdd2a63c77c9aacbcabaee8ef +size 2698402 diff --git a/video/WSu1PPi2UP_39027103.mp4 b/video/WSu1PPi2UP_39027103.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..00c5aaf9e663a506014b10369557b3aa17060a80 --- /dev/null +++ b/video/WSu1PPi2UP_39027103.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2f30c404f4e567dd3d2ed2a5da5920a59024314fbde8e8d626ce92f66f08befc +size 3022233 diff --git a/video/WTLvXdzhmP_39025118.mp4 b/video/WTLvXdzhmP_39025118.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6220915290c9a372a495852b714bfae2ee7d7cc1 --- /dev/null +++ b/video/WTLvXdzhmP_39025118.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4fdc249bc9331f9bf4d720fa8d50ca27ee6e441c1c1900ab85f0c21c842a8e7a +size 3021276 diff --git a/video/WXqukapoa7_39026678.mp4 b/video/WXqukapoa7_39026678.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ac33886f62999f78088193212192a84fb512dbdb --- /dev/null +++ b/video/WXqukapoa7_39026678.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b1a9a31ca51709a3bf2f176d0288c2a81356ff79bb85fc3b9daec6aa25713c40 +size 2482268 diff --git a/video/WY3xgXIZUR_39024390.mp4 b/video/WY3xgXIZUR_39024390.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e41db3d830417f77977c5d4ac4462b45f1b7cf73 --- /dev/null +++ b/video/WY3xgXIZUR_39024390.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d20b487046f815fa119acc71a19a2478bf49b464254147807fcaa737b649fad +size 2288883 diff --git a/video/Wc0vlQuoLb_39028321.mp4 b/video/Wc0vlQuoLb_39028321.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ad8f11b31107d610c85bf41e296d6a7422a814ec --- /dev/null +++ b/video/Wc0vlQuoLb_39028321.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dea766f9c1e1523bb7bbb579f75eca1516ce72e4d70d058680812e265356d286 +size 2622407 diff --git a/video/WcmqdY2AKu_39026082.mp4 b/video/WcmqdY2AKu_39026082.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a84d368f1ae1cc66cabc74dc660bbc27814e1919 --- /dev/null +++ b/video/WcmqdY2AKu_39026082.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c598ee1a1dfe34e29e0b217accde87c58d6d9d529c40151452f5af9341e26d6b +size 1818363 diff --git a/video/WeoNd6PRqS_39025093.mp4 b/video/WeoNd6PRqS_39025093.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..de8af0ac20fe4a7649b1220c3f2f56f430a254c2 --- /dev/null +++ b/video/WeoNd6PRqS_39025093.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a1c4b42c11663764e255dd8be625551bb6b112c5f164355074196cac0811aaf +size 2532435 diff --git a/video/WfpvtH7oC1_39026196.mp4 b/video/WfpvtH7oC1_39026196.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..277b0df0265cff3337c8cb5d7e36003b9d68ed82 --- /dev/null +++ b/video/WfpvtH7oC1_39026196.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bd0b33b2d2e5ca2c18a8b1490299b559fda939b4d5b4e3ad839f46f0be7453f4 +size 2405001 diff --git a/video/WftaVkL6G2_39025668.mp4 b/video/WftaVkL6G2_39025668.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3ee9cd26b32e17ee22ecae9bf9f9262424f6df18 --- /dev/null +++ b/video/WftaVkL6G2_39025668.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ccb3d37f02454a3b3154e6c0d030710cfe04c2a4195e13b0dce947225500ba65 +size 
2640791 diff --git a/video/Wh9ssqlCNg_39027675.mp4 b/video/Wh9ssqlCNg_39027675.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9c7679afa1079d1818a79ba4739c75e921ed8760 --- /dev/null +++ b/video/Wh9ssqlCNg_39027675.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0d0d07464312c865a9115105ce959071985761a8d5893864f54bdf55db2f8314 +size 3021266 diff --git a/video/WipsLtH77t_39018744.mp4 b/video/WipsLtH77t_39018744.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a20253847d9ea5514ebe04c0e80578378fd1ebb0 --- /dev/null +++ b/video/WipsLtH77t_39018744.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5274d751d0b6ef64ee47b87d74102e0a9fa140655885eda3d20026b58305a2c8 +size 2535891 diff --git a/video/WjRPZsfeBO_39017834.mp4 b/video/WjRPZsfeBO_39017834.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..36991304484e44e629ea25e99f5165b87d50473c --- /dev/null +++ b/video/WjRPZsfeBO_39017834.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e011fa011704865733e9aa50302fb8e69fa157ef64628ad0f57eb325293574fa +size 1717219 diff --git a/video/Wl2optQcng_39025246.mp4 b/video/Wl2optQcng_39025246.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d767bcf84aa6da9f02c4649375c1d86b676caf3a --- /dev/null +++ b/video/Wl2optQcng_39025246.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7debf502d7c6140bb3b96deab9aab89d509d703d2fef873dc06dc44454e792e1 +size 2246372 diff --git a/video/Wq6aY6fC2H_39026634.mp4 b/video/Wq6aY6fC2H_39026634.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..60e6c4232fff52a8c991126ef290f06a2198804a --- /dev/null +++ b/video/Wq6aY6fC2H_39026634.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a114f4960b40e4fc494e571cbd31f1311f5b6ca090afa2febc94bd5deff4b6de +size 2141995 diff --git a/video/WvoKwq12x5_39028078.mp4 b/video/WvoKwq12x5_39028078.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b18e38abfee084e79aba60ba9a0668a51d1cf65a --- /dev/null +++ b/video/WvoKwq12x5_39028078.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:351be80f928857df382f2416f5bbe8712398db4dd5228c74911ef0ba1c3a5865 +size 2162265 diff --git a/video/Wy9UgrMwD0_39024828.mp4 b/video/Wy9UgrMwD0_39024828.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d5bd744efe8459c90147624c333c06fee6e1b672 --- /dev/null +++ b/video/Wy9UgrMwD0_39024828.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:78f8e097433e78152a676896b40ee49f52c9f669b2f62f6639232c44d9207361 +size 1947807 diff --git a/video/Wyp8vsL9de_39026911.mp4 b/video/Wyp8vsL9de_39026911.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1ec06a79c5ede495cb1ce0943ce071e559ab169e --- /dev/null +++ b/video/Wyp8vsL9de_39026911.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4dd59cf2d474431a35503990d10b8912006306cd4a3af08a95e0feeca7ab69c1 +size 2494061 diff --git a/video/X2G7LA7Av9_39027365.mp4 b/video/X2G7LA7Av9_39027365.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..99e93a064385f0e73c207b471d93b6483dfb4131 --- /dev/null +++ b/video/X2G7LA7Av9_39027365.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21d7725ed7e0c09939994f35abde56d3bbd299af9f2c92a01240bc972b8a4a9a +size 2244948 diff --git a/video/X2UMdvcmMo_39028594.mp4 
b/video/X2UMdvcmMo_39028594.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..11a8386d4a2c559a8e2307eaf60f1e8e6a0fdd5e --- /dev/null +++ b/video/X2UMdvcmMo_39028594.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5b45138fda05677f33a9ca85b3137e64db8083fa36b16026464a2fcf832a69b8 +size 2507752 diff --git a/video/X34GKv8sYT_39026623.mp4 b/video/X34GKv8sYT_39026623.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d6d58b35fccfc6fd5f36e59701cdf8c505cdc8cb --- /dev/null +++ b/video/X34GKv8sYT_39026623.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0df33e5811b62f74b7f55e56c4dac205d3ad3127a8dbc898f4db3b65722d40cd +size 2147567 diff --git a/video/X3ydKRcQr6_39027104.mp4 b/video/X3ydKRcQr6_39027104.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..95ce473eba77597909517a7ba16aa56eb2a9aec7 --- /dev/null +++ b/video/X3ydKRcQr6_39027104.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:35fa870003b0a5aceea886ab0f8a639d1d74192f249f36160031b7d7c698c7df +size 2177790 diff --git a/video/X6rqEpbnj3_39026111.mp4 b/video/X6rqEpbnj3_39026111.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..11d27b31984f7e5ac205979d08e8c45557aa856f --- /dev/null +++ b/video/X6rqEpbnj3_39026111.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dec341d2e348eb7c24029b945b586acf9e607175b0c67d211211e6ce7ac402f0 +size 2400838 diff --git a/video/X6tNkN6ate_39017832.mp4 b/video/X6tNkN6ate_39017832.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b93da15c8501840021950e508c3a0c1d21dea5a7 --- /dev/null +++ b/video/X6tNkN6ate_39017832.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:052535f4cff0b59dcc7dc1fe31d43e30faa0c9b9f93887c001a6472deac3a0f3 +size 1904987 diff --git a/video/XAKALzI3Gw_39024363.mp4 b/video/XAKALzI3Gw_39024363.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6af5201c666230c8d02161797c8f19950d83d863 --- /dev/null +++ b/video/XAKALzI3Gw_39024363.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eeb2976c410822c636686e445db5987a854189129c3593e34cfe64a91222c6e5 +size 2671942 diff --git a/video/XEbPJUQzs3_39026709.mp4 b/video/XEbPJUQzs3_39026709.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cf795ead8d40387313acb12427698e5c776e0eed --- /dev/null +++ b/video/XEbPJUQzs3_39026709.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:63b80ef70512461863094d7d872cc81b5561f8fd2018b403b5e353ff4196dcc9 +size 1535844 diff --git a/video/XErWgdxaFU_39028487.mp4 b/video/XErWgdxaFU_39028487.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8454e3a796c1891a39c2b419841cf388584f9091 --- /dev/null +++ b/video/XErWgdxaFU_39028487.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e9e7e931b5d06d220ebdd77d494b75c52fa7acbe81ef9485097bfadd1686fed +size 2567857 diff --git a/video/XF1jpo5k6l_39027719.mp4 b/video/XF1jpo5k6l_39027719.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..162eb3ca78b82a10fb46db2a9c2437a823b2d968 --- /dev/null +++ b/video/XF1jpo5k6l_39027719.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d00e44e4b3abb5dbc9784f4de15304955416da888710807b145d3b52572eca6 +size 2536126 diff --git a/video/XHCYZNmqnv_39027366.mp4 b/video/XHCYZNmqnv_39027366.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..4476b517f9ba5d21debc50ff488cce8037b359fb --- /dev/null +++ b/video/XHCYZNmqnv_39027366.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b37f0f94c56edf8c3d5c54ac8a547bc2951722019b92e440fc2b575912b12477 +size 2579239 diff --git a/video/XHTl2k1LYk_39024386.mp4 b/video/XHTl2k1LYk_39024386.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5a3d2a75654e830d512c39b58ed27292a90fb59f --- /dev/null +++ b/video/XHTl2k1LYk_39024386.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:45373bfc5db34a10bad81082336e1df42cb0af9b628d1f7410746e6c557e5e82 +size 2475872 diff --git a/video/XHWkHFWi3k_39026267.mp4 b/video/XHWkHFWi3k_39026267.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9a470535dbe95de9f34877c7796459dc0d79b66e --- /dev/null +++ b/video/XHWkHFWi3k_39026267.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc221a64a64c02ee9587a9eb7f6029302c325c6849bcea47a2734db996fc978b +size 2642393 diff --git a/video/XIaS66XkNA_39017829.mp4 b/video/XIaS66XkNA_39017829.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..636d8e8ec68fafe6b982d94ab240a93d792f7e32 --- /dev/null +++ b/video/XIaS66XkNA_39017829.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:261aac0ca12276295259e7633ebb6eae796dd6d4c3f7a2ad10a734f94b9823c5 +size 2390311 diff --git a/video/XIcBCBe6C3_39026718.mp4 b/video/XIcBCBe6C3_39026718.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e561797b38aa63e98e33a73edbe2ec2e2711a794 --- /dev/null +++ b/video/XIcBCBe6C3_39026718.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:11f8e264ff28c17801be55d93d82cd4debf49215a3cfeeaba1cd169472c796e2 +size 2674667 diff --git a/video/XKrSB5a79F_39024738.mp4 b/video/XKrSB5a79F_39024738.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cbac30d1b88c6e666e8fd3863568c4dd8e051af5 --- /dev/null +++ b/video/XKrSB5a79F_39024738.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:592bdf027e359faa36e3c2e9467343c4e9f9ea58a3f5ca4d789ba66265ab5d85 +size 2518131 diff --git a/video/XMQTNzlgTJ_39027664.mp4 b/video/XMQTNzlgTJ_39027664.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0a14afeb2a23db1eae95d270dc4a69d37e71e887 --- /dev/null +++ b/video/XMQTNzlgTJ_39027664.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23b65ee624f7806aeb7a5edc8a86fe9014055d82fae90b71d35d85895b9700ad +size 1403729 diff --git a/video/XOVks7JHQA_39025048.mp4 b/video/XOVks7JHQA_39025048.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7a32c911677922e87f436b8bfcf1c45deb6dbcce --- /dev/null +++ b/video/XOVks7JHQA_39025048.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:068ce51d2287f4ace9d00072e23c0dee13eeea66ab100f0517eb8113efd5decf +size 3034632 diff --git a/video/XRJXKBeeTD_39027302.mp4 b/video/XRJXKBeeTD_39027302.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..76d09625d79439092e7e6f66a81080424e6784ef --- /dev/null +++ b/video/XRJXKBeeTD_39027302.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:478ac7df955f2eb8f6aacea282d4d4e4cda9781029ac87a0f385ae6caded1543 +size 2851098 diff --git a/video/XTHfNGI3zT_39017825.mp4 b/video/XTHfNGI3zT_39017825.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..af8d3b27bc91f85a05f88df9e2653a7212590a11 --- /dev/null +++ b/video/XTHfNGI3zT_39017825.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e474b60a4bd304485564f4bcee6b09ecf5d70c6e6645738df05eaeb0d0a4ce77 +size 2560911 diff --git a/video/XXOMCwZ6by_39027475.mp4 b/video/XXOMCwZ6by_39027475.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fb58ab569a97d9061be8916c42aa713a7dd42aca --- /dev/null +++ b/video/XXOMCwZ6by_39027475.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf63d858847394361d1c13883872ab485eaed3d8f016de83e635be3d762fe794 +size 1815923 diff --git a/video/XXVfj4P8nr_39027190.mp4 b/video/XXVfj4P8nr_39027190.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7c605536be2d6e49c42ec7d7a7b8acf0e1773939 --- /dev/null +++ b/video/XXVfj4P8nr_39027190.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8859d2625424b08cb44247c15c3498c9e947169f37d7d99e5b3bac9691aaa92 +size 2273815 diff --git a/video/XZ0fpoAKEB_39026778.mp4 b/video/XZ0fpoAKEB_39026778.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fae9d4a64a918a0fa2db19d776c01f2e39df8df8 --- /dev/null +++ b/video/XZ0fpoAKEB_39026778.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0129e0a9e9359579b497c4f31bd8028ea324f10c7049adfe3de33ad76f4e337b +size 848224 diff --git a/video/XZ4XSUTGRb_39027415.mp4 b/video/XZ4XSUTGRb_39027415.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3518327c1c207df9f3119a6cb454d083fce5fa07 --- /dev/null +++ b/video/XZ4XSUTGRb_39027415.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b43999726c3bc3b3e690a2e29878c975c3dc1eba7799a0958c71bef9b9cb4b5 +size 2565961 diff --git a/video/XZp1uP0hh2_39024420.mp4 b/video/XZp1uP0hh2_39024420.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f73a75abc879c04d36e7f91d7cb7bd1f26256aa9 --- /dev/null +++ b/video/XZp1uP0hh2_39024420.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e46c0c456f74d52bd3feecf8fdc2c41a61cacac7065a1c1bb6332ab5f7aa123 +size 2444346 diff --git a/video/Xa3dVaolKo_39026469.mp4 b/video/Xa3dVaolKo_39026469.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..55c09929373a673a1ecf75d5b29278732ff4a171 --- /dev/null +++ b/video/Xa3dVaolKo_39026469.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0519a28acad5fff46fe3566bb2f73396dc68fb0a6a6690bcc796cae33ecdb69b +size 2566015 diff --git a/video/XcbgkjWSJ7_39024593.mp4 b/video/XcbgkjWSJ7_39024593.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5e7b9fc72c7fd78fe8391721b18641d277ce7955 --- /dev/null +++ b/video/XcbgkjWSJ7_39024593.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9541db94320e52b5d2b3851a3475ad581c865a86ed581bda35e2e448c5369db1 +size 6733568 diff --git a/video/XgAzCLsJAq_39026662.mp4 b/video/XgAzCLsJAq_39026662.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ebda70ea9e6aa0e4d842da37340ce6a293168a2e --- /dev/null +++ b/video/XgAzCLsJAq_39026662.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:baa825e6303bc7c62cf146726e844aecc57717ef7577dc7e301ed6b5f9e32703 +size 2265207 diff --git a/video/XlAbMZu4Bo_39027431.mp4 b/video/XlAbMZu4Bo_39027431.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..dea6e4925d507a40a0b82c1ce9c29febc5560b55 --- /dev/null +++ b/video/XlAbMZu4Bo_39027431.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:45c358544d7f0f2a79bb19670d168328778e65b59d405c94f0195da83866fb86 +size 2600027 diff --git a/video/Xo1Yqyw7Yx_39025216.mp4 b/video/Xo1Yqyw7Yx_39025216.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..81c8c3f37bbdd1aa123fde82000406f617efb822 --- /dev/null +++ b/video/Xo1Yqyw7Yx_39025216.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8210f9970a5ffda843f21f9d9fe59b0d7d3a6677b3ec248f829670eb97b754f1 +size 2892333 diff --git a/video/Xq9HQf7VNV_39025298.mp4 b/video/Xq9HQf7VNV_39025298.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e0bd74541a90fcded8d50a0964ff60aa3d61a722 --- /dev/null +++ b/video/Xq9HQf7VNV_39025298.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a6601c83bf76b92f238b0709e16385bf8cf542e29375ad1b06351091d45ed568 +size 2489808 diff --git a/video/XrK4JK2jBr_39027819.mp4 b/video/XrK4JK2jBr_39027819.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e7cc717d673a957f003cee0b6ad4ad080e011608 --- /dev/null +++ b/video/XrK4JK2jBr_39027819.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:78bc86ee8cf8b2167675f126fdb407395dcad19efa936479e3873d3baeee54e2 +size 2923321 diff --git a/video/XsNA2b8GPz_39025759.mp4 b/video/XsNA2b8GPz_39025759.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ab2a883098125ae45d3a4199409811745f56b4e7 --- /dev/null +++ b/video/XsNA2b8GPz_39025759.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3728d3c2ee3654b58eb162b29fbf41a94220ebd9ac5c709b6ec557f7283954b8 +size 2140623 diff --git a/video/XxSME6GE1G_39028388.mp4 b/video/XxSME6GE1G_39028388.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0a41520631631293084ee13f37dc624c0238aef5 --- /dev/null +++ b/video/XxSME6GE1G_39028388.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dbd2f7d3d2da9a96fdae25fc8c2f8f3c0a0bf0010ff68114a896e2fbeec3d110 +size 3000435 diff --git a/video/Xz13DtbOVW_39018647.mp4 b/video/Xz13DtbOVW_39018647.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ce40359e930ca00fef7d2d64191e222ae27fb6bf --- /dev/null +++ b/video/Xz13DtbOVW_39018647.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:984d2606849528c467b916762cf1d4aeb04e1fb5559de516d2c9eb594a630fcc +size 2787478 diff --git a/video/Y0EfJJeb4V_39028538.mp4 b/video/Y0EfJJeb4V_39028538.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5ff152417bee37724202948e7fc93b265216da0a --- /dev/null +++ b/video/Y0EfJJeb4V_39028538.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bcc7fbbf5c5538bb0f131fd497cb091f5d84597af8b331e0120209cc01f60f7d +size 1462424 diff --git a/video/Y13gSfTjGr_39024954.mp4 b/video/Y13gSfTjGr_39024954.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..01938269e77d358f7ca2906b7fd8c78f3702ee49 --- /dev/null +++ b/video/Y13gSfTjGr_39024954.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84de646dc9947097f92c307cb5e0f700f1c80b468d23f4bc320fe8daa2ffd80b +size 2665592 diff --git a/video/Y1rOWS2Z4i_39026070.mp4 b/video/Y1rOWS2Z4i_39026070.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..b667d383d19b0a5ee218476cea8c792d192b345a --- /dev/null +++ b/video/Y1rOWS2Z4i_39026070.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2cbc337c4f98b9fc469e823e45c469a44d207dd50e5afcfb7b5270c1d0c0c365 +size 7746 diff --git a/video/Y2I0Fy4sm7_39026902.mp4 b/video/Y2I0Fy4sm7_39026902.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..108f0ed9e82ddde99ad6f0cf86cc2093aa7a3eca --- /dev/null +++ b/video/Y2I0Fy4sm7_39026902.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e0b4e301246f13bde1bb4e7db8d6fb14a61b03c868fb8b7cb6f02665cc85370 +size 2531325 diff --git a/video/Y2NWKlrDrX_39025708.mp4 b/video/Y2NWKlrDrX_39025708.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bef8370b8024016a9b855c3dc6b420442cedc1fe --- /dev/null +++ b/video/Y2NWKlrDrX_39025708.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:95d65eea6dec081e11b2129977cad496db2192cb89731ed169f7ba426c220ed2 +size 1676809 diff --git a/video/Y3wpuxd7u9_39018995.mp4 b/video/Y3wpuxd7u9_39018995.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4b0d3fb8ec3d6732df142991c3d00f831e521637 --- /dev/null +++ b/video/Y3wpuxd7u9_39018995.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:be300a4dabc6d6f25378ac49967ed2e9320ad342828fbfa30de63eb8bd33e53c +size 1955565 diff --git a/video/Y4L8GQXZZO_39027934.mp4 b/video/Y4L8GQXZZO_39027934.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..34e64523f5e0681c0d6503a97f9a35536bcc643a --- /dev/null +++ b/video/Y4L8GQXZZO_39027934.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1cf4fbd04ea85b138a7c313f6cdcb4fbc42bcecf170118205d0e03ffb46ae8c5 +size 2736522 diff --git a/video/Y4mBaZu4vy_39027715.mp4 b/video/Y4mBaZu4vy_39027715.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..884487a0b5bad185d31c33f864db1e7beea4d487 --- /dev/null +++ b/video/Y4mBaZu4vy_39027715.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:01cc62aead961565d44c8f5278ea0c6be59497537e1257f9a0477844419513eb +size 2045784 diff --git a/video/Y4tHp5Jilp_39027875.mp4 b/video/Y4tHp5Jilp_39027875.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a7e1acf9d7c0a514f3088b5d22f8cb2b1ba777c8 --- /dev/null +++ b/video/Y4tHp5Jilp_39027875.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a8b450ec7e8513e69a2629e09f73b18fa16c15c8302582173ffaf7470399c7f7 +size 2341845 diff --git a/video/Y7HPB7pL1f_39027491.mp4 b/video/Y7HPB7pL1f_39027491.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1937db171ab13fa4e0d868984726b20018d14bfc --- /dev/null +++ b/video/Y7HPB7pL1f_39027491.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2a28b9ed38413073e2935b7a239104dbddaabb7ad016b4debb06cc8a4bf06eb8 +size 2721543 diff --git a/video/YCKuXkw6UL_39024614.mp4 b/video/YCKuXkw6UL_39024614.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..69a0afeaf45e3dc2fcf1d7faedb2638fc9c9b75e --- /dev/null +++ b/video/YCKuXkw6UL_39024614.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:653323f94da8659dcb3d8c51ee80b9f6a1331db8d905cf6a50446fdca5ab34aa +size 3094706 diff --git a/video/YCPDFfmkFr_39017817.mp4 b/video/YCPDFfmkFr_39017817.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..992fc4d9d3b1d9f5cf162a1c2b288f1bc9e2f502 
--- /dev/null +++ b/video/YCPDFfmkFr_39017817.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e14fe0cf9f189d00e6b24c1e3fe4f0b0a15ad58e3cd78cf398ccd2bf691663d3 +size 2222992 diff --git a/video/YHUGlwTzFB_39017813.mp4 b/video/YHUGlwTzFB_39017813.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b1441c88a5dcc3a151d0c1b7f4004860891d717d --- /dev/null +++ b/video/YHUGlwTzFB_39017813.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e1376a15bcdf364de6f5d401849bce9464f1316cfb52a89a17d3db0d47c9eca +size 2515152 diff --git a/video/YIB7REL8UC_39025590.mp4 b/video/YIB7REL8UC_39025590.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8925b2b76b6c36c13195e3dc6aed90d8f9c62154 --- /dev/null +++ b/video/YIB7REL8UC_39025590.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:35718f1bc9cdb4426caed87fc2b07282b15a1688accbc1fcc0d57b12d6a0f33d +size 2600000 diff --git a/video/YIOvR40hSo_39025219.mp4 b/video/YIOvR40hSo_39025219.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b8deb0f8926e039a68b8a946efa99e8c7583c771 --- /dev/null +++ b/video/YIOvR40hSo_39025219.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cef024e3d17e7a80fcbdabe838c0d56da2abaaccd47f59f593e7b20b0d764240 +size 1732369 diff --git a/video/YNRYWZHmKY_39028511.mp4 b/video/YNRYWZHmKY_39028511.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..90b1ab975101ae86b98efc0e4b2204632b184087 --- /dev/null +++ b/video/YNRYWZHmKY_39028511.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:038c595d9a85080543553026f92dc30371497e4dc205bc820220220a3c57f845 +size 2620653 diff --git a/video/YO6GVPUrKN_39027529.mp4 b/video/YO6GVPUrKN_39027529.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e896132e6388f0480c78254e08187cd1d4e73c77 --- /dev/null +++ b/video/YO6GVPUrKN_39027529.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:79da4c10423389e06ae651eb0b54f31af4647f937631e8cdf97678d13ecf847b +size 2658655 diff --git a/video/YPqHSTSoFs_39026253.mp4 b/video/YPqHSTSoFs_39026253.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a1e97b798470cbad208d0e776587d96346723b8e --- /dev/null +++ b/video/YPqHSTSoFs_39026253.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:90603c23a0c26b0d4e232f022ebc30a0117efcebb73f30d4b7c3b81ba1d31a1b +size 2069454 diff --git a/video/YSs1z5udBY_39024609.mp4 b/video/YSs1z5udBY_39024609.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e8024d2d5062eb00c39538e5693753ae0eda3cbc --- /dev/null +++ b/video/YSs1z5udBY_39024609.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:32c5c3a07c17d06355503f7720e08a064a52e4aeb74e39d8d7baad4771983b51 +size 1878641 diff --git a/video/YTHJ8O6SCB_39028584.mp4 b/video/YTHJ8O6SCB_39028584.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5e9048d7a57adc539976d0da3da698621399dadf --- /dev/null +++ b/video/YTHJ8O6SCB_39028584.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7cce2d74ef26d4b2981fe372c5f2d70a459b7fc5a814feff43e17c4557fa3da5 +size 2771529 diff --git a/video/YWTpmLktMj_39026767.mp4 b/video/YWTpmLktMj_39026767.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..676ec206e24c895a6aa86bd395d79c90efedec44 --- /dev/null +++ b/video/YWTpmLktMj_39026767.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:cec689166f0fbf4194b9b54d2e0ed71a5f4e2b75ab8774c7dfe363034afc8782 +size 2668881 diff --git a/video/YYY5lzE547_39026376.mp4 b/video/YYY5lzE547_39026376.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..560cca4b17bcb243904f8d1c813367dc5d470c51 --- /dev/null +++ b/video/YYY5lzE547_39026376.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b588893e6a24ba424c29aa8c6ae7dc8416fe9caef04ca43531f0d49d2131764c +size 2695842 diff --git a/video/YYnP3Xpv3y_39025394.mp4 b/video/YYnP3Xpv3y_39025394.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9fd5d97fd2c302fe4c7dba6f7c80791dcf534c98 --- /dev/null +++ b/video/YYnP3Xpv3y_39025394.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3f3d0f3f455683a7f0d796fb243921b09696afaee311decb61dd833bc6f0a8fc +size 2741494 diff --git a/video/YaPhvbGqwO_39024895.mp4 b/video/YaPhvbGqwO_39024895.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..afd45d5ca7ed04643786c79a11b758cf183807ce --- /dev/null +++ b/video/YaPhvbGqwO_39024895.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d5dce0a0a2f1ccd88d95b18d0b7fde7d93c9a8c90d171e04024e76e04974cdcd +size 7784 diff --git a/video/YawXY6mWiK_39028768.mp4 b/video/YawXY6mWiK_39028768.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6278c1d4ca959e689553500fd798c9f43ee28508 --- /dev/null +++ b/video/YawXY6mWiK_39028768.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:31481852ea4abf53dbb0e21eb7f830a4d26d4eb1e80247430891f838ba7b70e6 +size 1230142 diff --git a/video/YbZxT0SON4_39017806.mp4 b/video/YbZxT0SON4_39017806.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6c415ba916b2fc6f02d0fda22a05974c886862c9 --- /dev/null +++ b/video/YbZxT0SON4_39017806.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:302298b2a2dc77de887878f5d6fed9581570f1729cc05573fb43ebeee6b1f792 +size 2220177 diff --git a/video/YbxFwaSA9Z_39026419.mp4 b/video/YbxFwaSA9Z_39026419.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a15c47ae8aed2cda9a76cc6fccc1dd4430c6fc24 --- /dev/null +++ b/video/YbxFwaSA9Z_39026419.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d041aa10fc77b92087574ca3e3e36055da77f31409162e525c4f1bcb7db5106 +size 2505279 diff --git a/video/YcW8i9VCf5_39017805.mp4 b/video/YcW8i9VCf5_39017805.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ca4e4f5007f63533dfec7b5188f61d833320f522 --- /dev/null +++ b/video/YcW8i9VCf5_39017805.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cea5c72058dcf047b3a142387b8b2b1cf3a19da614105d52e68639fe3702307d +size 1774674 diff --git a/video/YdfZP7qMzp_39025880.mp4 b/video/YdfZP7qMzp_39025880.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5e2cec75a83e9c9d62421dffeb5247f4804adf41 --- /dev/null +++ b/video/YdfZP7qMzp_39025880.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e529474c471322deecd33f088c586058c0881e0727e121abf6f7a2717add2ca1 +size 2365112 diff --git a/video/YfVMcbcDqo_39026684.mp4 b/video/YfVMcbcDqo_39026684.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..40c27eb72bf3dfa84e9d7e7aa1685addf605bf10 --- /dev/null +++ b/video/YfVMcbcDqo_39026684.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:fc142e376b337f8d488e0e11f24850b3e2ce7250a9255ba1db193e289bfb0bd3 +size 2236906 diff --git a/video/YlIvhHFwQ2_39025781.mp4 b/video/YlIvhHFwQ2_39025781.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0d6b6515a0baa6ab91708c09a4f698977461a6f3 --- /dev/null +++ b/video/YlIvhHFwQ2_39025781.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:244f9bf3c7e1d528c6e353a5a189326f55fdd3a43fb0a4f9190cbe57452e38cd +size 326852 diff --git a/video/YlmYm7sHDE_39027583.mp4 b/video/YlmYm7sHDE_39027583.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8253bb42ec87c9db194ec36cbfe7f87ba9f21a24 --- /dev/null +++ b/video/YlmYm7sHDE_39027583.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:97422e7c07f804e3bb4236f62a8bd4e23cb01e3039ada1012126887d8a6f354b +size 2603471 diff --git a/video/Ylvviju6MD_39024470.mp4 b/video/Ylvviju6MD_39024470.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1087f99681ef02282ef480d2fd5e64e04fbef3b6 --- /dev/null +++ b/video/Ylvviju6MD_39024470.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1469bfb423bdad53806efd990b80f753d6b0b709c9264ac276db7fd46d7a1551 +size 1302967 diff --git a/video/YrXHEb2qMb_39017078.mp4 b/video/YrXHEb2qMb_39017078.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d5698d0b9e0f5918c56508c85b196905949edab9 --- /dev/null +++ b/video/YrXHEb2qMb_39017078.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70057b65f49f62079e40a098c23689eba14eb9e91748517377ba17f01bbe70c9 +size 1866879 diff --git a/video/YscR3LBIi7_39024652.mp4 b/video/YscR3LBIi7_39024652.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..070a968a54831979359a8896a3a887490904c891 --- /dev/null +++ b/video/YscR3LBIi7_39024652.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:010c858fe0b094be9f94e9524b22e8bc1c750d72603373107dcff8f3581bd66a +size 2303975 diff --git a/video/YvA8UF0I37_39026144.mp4 b/video/YvA8UF0I37_39026144.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..780ab87cd333a4b43a8f4bda79173650ad7286c1 --- /dev/null +++ b/video/YvA8UF0I37_39026144.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3d1990357682a91ca93f3725dc2219aedb62bba88761c97f5b4cd603a42b1dd +size 2257007 diff --git a/video/YxyYTcv3hp_39028583.mp4 b/video/YxyYTcv3hp_39028583.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b6a83390900bd6468aa9a799bbe600c3ffd3e188 --- /dev/null +++ b/video/YxyYTcv3hp_39028583.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b03fe49e17c20951ac253856198c896e02909516f478bf2c5a6d13320977cb8 +size 2517682 diff --git a/video/YyMiO0DWmI_39024602.mp4 b/video/YyMiO0DWmI_39024602.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7948053431d004f21799d4ebc54b3b9f546c728e --- /dev/null +++ b/video/YyMiO0DWmI_39024602.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:73ff863571beb65e9a468a3ec246cfd2bacb5b4e23efa1dc09ba4749c15887a2 +size 2802702 diff --git a/video/Z0Nq3hHeEG_39024726.mp4 b/video/Z0Nq3hHeEG_39024726.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f63ea9b671cc5683407ed658a10dbae242b4b6b7 --- /dev/null +++ b/video/Z0Nq3hHeEG_39024726.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d690940f6b258b9a3b026de0a8c984ed52aefc97fd583ba827665ebef213331f +size 
2810691 diff --git a/video/Z4R2rkPgBy_39027910.mp4 b/video/Z4R2rkPgBy_39027910.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bb008d52ce25df9ce1650ea091dfe583f349122c --- /dev/null +++ b/video/Z4R2rkPgBy_39027910.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f57086081c9c5a3c15c86fd3e6df8b4bf3211e9137f9bbb485f2f2d40d49e8b9 +size 2861754 diff --git a/video/Z8UfDs4J46_39018623.mp4 b/video/Z8UfDs4J46_39018623.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..040fbbb59585aa9dd92ad222e93ad478d714292c --- /dev/null +++ b/video/Z8UfDs4J46_39018623.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00f3dc1d46f455b55555ba687e8e5421b844f2ca7999d7b40bbc573485b40f97 +size 2568803 diff --git a/video/ZC0PSk6Mc6_39025225.mp4 b/video/ZC0PSk6Mc6_39025225.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..525f4c5513369e10a22a1c3daacec171d8756437 --- /dev/null +++ b/video/ZC0PSk6Mc6_39025225.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c7fe211af2ef0cfff8efe88e72e923ed7d5f63b1e971472f586c23f769e58738 +size 1538124 diff --git a/video/ZEZ0CPmoSI_39017800.mp4 b/video/ZEZ0CPmoSI_39017800.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dd763e7fc6ba91eae60b80777eb9d43f8f54735e --- /dev/null +++ b/video/ZEZ0CPmoSI_39017800.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ed1f463689d4325834bcd79b7ef91e48528c842ab5c9c346bad9c707c1ad561 +size 2550904 diff --git a/video/ZJ2ONmSgCS_39025393.mp4 b/video/ZJ2ONmSgCS_39025393.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d7b31c0b46ba6d0bda795aaff8eb76e8f5e7faee --- /dev/null +++ b/video/ZJ2ONmSgCS_39025393.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:103c55ba7f421f38ded028d84315793de82ba136ea60b5d70daa219d939463ad +size 3154303 diff --git a/video/ZJBBeyEAyX_39027027.mp4 b/video/ZJBBeyEAyX_39027027.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..399f3ec944979735e84b3467912b844dab0eea64 --- /dev/null +++ b/video/ZJBBeyEAyX_39027027.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ada6e236cf47e5dc1d0b83cadd4812f8e38635d632f638f87d290634fa484dff +size 1998609 diff --git a/video/ZK1CZXKgG5_39026022.mp4 b/video/ZK1CZXKgG5_39026022.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..660f814a3913a090d67cfecb6b17d892545c1fa9 --- /dev/null +++ b/video/ZK1CZXKgG5_39026022.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:184006c03b81642782e96b701e994f9e24431d5e9412caa58ab59e6799613a39 +size 1502832 diff --git a/video/ZKEuFKfCKA_39018857.mp4 b/video/ZKEuFKfCKA_39018857.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ec2bc9322f77e2ca8bb0eb3f5ce14b87dc704095 --- /dev/null +++ b/video/ZKEuFKfCKA_39018857.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:76278c770812bb8daf7fc8b2e2dd7b7706049f1e61f67fb5d27029f204c5a028 +size 2718587 diff --git a/video/ZMv6zKYYUs_39017796.mp4 b/video/ZMv6zKYYUs_39017796.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a6777888147bb9505c990c6245b67d55bf833781 --- /dev/null +++ b/video/ZMv6zKYYUs_39017796.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c12047343c5b3c1bec8739348ebd01f3be5509f07649eae2360d30a1fecb6c14 +size 2567805 diff --git a/video/ZNcJtNN3e8_39028335.mp4 
b/video/ZNcJtNN3e8_39028335.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..280b71228cafa2f3f09841afcb56be7ae87325d0 --- /dev/null +++ b/video/ZNcJtNN3e8_39028335.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8067f5b567b149cd94e9a3f3cb434414789e1004840b43f6ce6f299bcb810527 +size 2896009 diff --git a/video/ZOZjMs3JTs_39026015.mp4 b/video/ZOZjMs3JTs_39026015.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..725e5fa8639df19fa02886102733fc69e3f346e1 --- /dev/null +++ b/video/ZOZjMs3JTs_39026015.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8cabe35f5cb564376dc300caaed607de214205365ee273082b90d979390fc9ff +size 2424372 diff --git a/video/ZPdZLlNXSm_39017795.mp4 b/video/ZPdZLlNXSm_39017795.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..67f03e17cd706472deb0fe0f62544022ba2cae33 --- /dev/null +++ b/video/ZPdZLlNXSm_39017795.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b277f5c7a30588f85c12b1dffff977dee3bd8973a8f1c19a8bc036300018f3c5 +size 1770042 diff --git a/video/ZRYFftR4xn_39028026.mp4 b/video/ZRYFftR4xn_39028026.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..025a13a77b32be7dbe4f931529bd02a4b8afbd3c --- /dev/null +++ b/video/ZRYFftR4xn_39028026.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe829759ff271520f47d9bd00f67e189bcfd5c2456c9e54862c3cd3fc832eb9a +size 2373912 diff --git a/video/ZRz7XlxBzQ_39027692.mp4 b/video/ZRz7XlxBzQ_39027692.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a4f68b249fb3bdce63f96ed1cb731499550fb916 --- /dev/null +++ b/video/ZRz7XlxBzQ_39027692.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f4713945cab961716d6c7fad69e2cad46fd22634b50d93b25e43a192fdcad4ba +size 2721373 diff --git a/video/ZULjcYLWKe_39017792.mp4 b/video/ZULjcYLWKe_39017792.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fbf27dfacab439219d001dd16ef2e72c080bec58 --- /dev/null +++ b/video/ZULjcYLWKe_39017792.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c787bbdd620e8039cedc20f13242c162801ab7e91187a84b9dc32e1949a81c4e +size 2558886 diff --git a/video/ZViYPzh9Wq_39027111.mp4 b/video/ZViYPzh9Wq_39027111.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5ecb1d7e83bc33311d5188f149da51f4a3c79a9c --- /dev/null +++ b/video/ZViYPzh9Wq_39027111.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:401186a4cb33f0a9656d8ae3003012ddeccbd19981d0895dbe55e5217453e14f +size 2358003 diff --git a/video/ZVrrPNqHFw_39025918.mp4 b/video/ZVrrPNqHFw_39025918.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2d280df2db12216ff7877311317ff6ccaecbaf11 --- /dev/null +++ b/video/ZVrrPNqHFw_39025918.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9174f0acfdc726c6fb3eebd3621f664190093a541dbeb9c43847ef5582ebad3e +size 2836477 diff --git a/video/ZX6CEo1Wtv_39024753.mp4 b/video/ZX6CEo1Wtv_39024753.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..30ecc0f726b62a1699db73d1e45122e37eb57025 --- /dev/null +++ b/video/ZX6CEo1Wtv_39024753.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7941de941b63be459670d22efbbcbee12d03e28457971ba53339c09409ed39b3 +size 2566727 diff --git a/video/ZYrZ5V84ZI_39027952.mp4 b/video/ZYrZ5V84ZI_39027952.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..e8e82f14209f9b6ec91f7ff7735e9dd471063713 --- /dev/null +++ b/video/ZYrZ5V84ZI_39027952.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:957777f07b650102f61c9ca4bd6ee184414279cefd86ce9b605c08932af23453 +size 2563518 diff --git a/video/ZZTkLDRmkg_39017787.mp4 b/video/ZZTkLDRmkg_39017787.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4b442cf8a3a651abca14870efbb2f7a9f0d66452 --- /dev/null +++ b/video/ZZTkLDRmkg_39017787.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e96dcdf8cb8922226e26423079146b19450ba2d194c084795f0cad1cf79efb00 +size 2967659 diff --git a/video/ZZoW4Z3le4_39024813.mp4 b/video/ZZoW4Z3le4_39024813.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4df0416849d0736e7b761b2363f2a1977e299b50 --- /dev/null +++ b/video/ZZoW4Z3le4_39024813.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83dc04e469daa17c691dcc8a521ffae237205760376d49c32400cca05a378beb +size 2891823 diff --git a/video/ZbjJE6Nq5k_39025771.mp4 b/video/ZbjJE6Nq5k_39025771.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0f19f98922785ae88f98e8ffc9ad330c876b082c --- /dev/null +++ b/video/ZbjJE6Nq5k_39025771.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2110b6747f092dea977f85e1366e76d770002ed28ccebfb8587574e9863c3b89 +size 2258336 diff --git a/video/ZehccYKkNH_39027265.mp4 b/video/ZehccYKkNH_39027265.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a5455facae4e0f94fea738a4c5dbdfab1e2e0319 --- /dev/null +++ b/video/ZehccYKkNH_39027265.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6c5bbc3248d151ae4a26687e873febef2a796e10c1638445acbb48d04817d53 +size 917820 diff --git a/video/ZeihWodDVh_39027755.mp4 b/video/ZeihWodDVh_39027755.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f18b1aca09504615d9e8a7f29b6015809c8e46c6 --- /dev/null +++ b/video/ZeihWodDVh_39027755.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad791ebd48e5ee1e2439af0e335bb35e5c861dcc4647900dcf437d0b977f3a58 +size 2855015 diff --git a/video/ZfRGRK5Kxl_39024899.mp4 b/video/ZfRGRK5Kxl_39024899.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c209ae1009cfeea839fd16a15e20ee2a71245404 --- /dev/null +++ b/video/ZfRGRK5Kxl_39024899.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9aaf3db0a632c634fff6a2f8b90f0a0aa8e969507d81dd63172b663a54b7d4ec +size 3298008 diff --git a/video/ZfXRAqbBKX_39027714.mp4 b/video/ZfXRAqbBKX_39027714.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0f911c366e46f28fba6b26725dd23df85ed8bf05 --- /dev/null +++ b/video/ZfXRAqbBKX_39027714.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47992278197d2912abfaaf4c75d49c2d298cabffb15c92f23de49028041210c8 +size 2815256 diff --git a/video/ZgtLQQR1K7_39024821.mp4 b/video/ZgtLQQR1K7_39024821.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d022994ec5c8618e5d1f8789040a0736d090c56e --- /dev/null +++ b/video/ZgtLQQR1K7_39024821.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b169dc557f7737ad5f56d0612afa65975361090bf9e9fdcf1766a6b90c7ee23 +size 2251977 diff --git a/video/Zh2iqiOtMt_39017786.mp4 b/video/Zh2iqiOtMt_39017786.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..9fe9d1da2d5174d02fa14238b966092083f58f59 --- /dev/null +++ b/video/Zh2iqiOtMt_39017786.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1afa50f9271c225685eb6ef29e62330fc9ad568ad06565e9b6f6f7afbc3fa24 +size 2633281 diff --git a/video/ZjgcYMkCmX_39027515.mp4 b/video/ZjgcYMkCmX_39027515.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dfbec47f0830b99d1f117f36b988da14f2410ac8 --- /dev/null +++ b/video/ZjgcYMkCmX_39027515.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:37f4d5a46ff39b3241cd8b98145d4897dacc719386ef49d60efddf956853af3c +size 3231189 diff --git a/video/ZlQRiFmq7Y_39017785.mp4 b/video/ZlQRiFmq7Y_39017785.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f1c2c9d5d931ec9f40462e9ff3de548cf0fe8cb7 --- /dev/null +++ b/video/ZlQRiFmq7Y_39017785.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e1796a1706ba0952c747831e8a446e4a90ffb441f6a155e0d4a1bbbc88641097 +size 2869155 diff --git a/video/ZoarR5QmFX_39027550.mp4 b/video/ZoarR5QmFX_39027550.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a67486b0deab9cfa91e027deab4e9acffaa18630 --- /dev/null +++ b/video/ZoarR5QmFX_39027550.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:653892e045cf7d153dec54832df884802ce0c509c82a14fe6f4bb7a6480de3e6 +size 2634224 diff --git a/video/ZsxZ65YqL1_39025439.mp4 b/video/ZsxZ65YqL1_39025439.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..15a88298093b2ff217b57a43c38dcce3c61f76c2 --- /dev/null +++ b/video/ZsxZ65YqL1_39025439.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1380f31a6e8b19af56a80006a6c1cd5f024f46d3a92c5e6395ce0fc1a6cc313f +size 2329362 diff --git a/video/ZulWEWQOp9_39025320.mp4 b/video/ZulWEWQOp9_39025320.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5d67028f30389225daea5ad1a3485b0637b54e23 --- /dev/null +++ b/video/ZulWEWQOp9_39025320.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:041ee3e4eb42755b74c647a4a9f2f7461824ab13a9cff044ca29703e467d1e95 +size 3474350 diff --git a/video/ZwiG9KjfHV_39028717.mp4 b/video/ZwiG9KjfHV_39028717.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a9a45ab877f614e419798d5a1206facecb64831e --- /dev/null +++ b/video/ZwiG9KjfHV_39028717.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c07d73f1f62d36ec10b61facde351dff47f49c452a10c4b241b3d72a1cef790 +size 2210038 diff --git a/video/ZxtaNh5UYB_39028116.mp4 b/video/ZxtaNh5UYB_39028116.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e62d7e763820458d2852bb83f60970652888c280 --- /dev/null +++ b/video/ZxtaNh5UYB_39028116.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4294419a635bd91e6f063f1c7ef0500d529164a81b93d0c76f115b2a835915f1 +size 2348187 diff --git a/video/a1wf2N967T_39026370.mp4 b/video/a1wf2N967T_39026370.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8fdb0ed36b1f40f8ac22800b4e1d2c211351fdee --- /dev/null +++ b/video/a1wf2N967T_39026370.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d97f1878d0514925d79aae38870367ca749901cbc2bde22c7cafd8907634064 +size 1571237 diff --git a/video/a560KLF3v5_39027867.mp4 b/video/a560KLF3v5_39027867.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..a3d18f485312e0a053294b938d06588e357e7134 --- /dev/null +++ b/video/a560KLF3v5_39027867.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24258c0d75811d10bc5cf73cd709a4ebf54aec565d85348c356dfbe23973c221 +size 2385570 diff --git a/video/aBMESB1Ajx_39028647.mp4 b/video/aBMESB1Ajx_39028647.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b5b7b97c0d88fe44c61261390dfd3ced087da880 --- /dev/null +++ b/video/aBMESB1Ajx_39028647.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3decd9e3bb6d0b73a51bf4aafee3712da7a882e267646846eafb05623394e618 +size 2575074 diff --git a/video/aBUidW4Nkd_39017780.mp4 b/video/aBUidW4Nkd_39017780.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a843bc9ded91625c471906d910c78356129f5599 --- /dev/null +++ b/video/aBUidW4Nkd_39017780.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d9010e6b1a0783a4b8395f4915d25c27d1ef7705061a07563824a76b002ad819 +size 2184459 diff --git a/video/aBmiyi7iA7_39026257.mp4 b/video/aBmiyi7iA7_39026257.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6b4eed7e8d7293621bd90725c7d627abea93001e --- /dev/null +++ b/video/aBmiyi7iA7_39026257.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b4a9cb7d9a1a0849922861ad280e0de72419b140c81a3c9ca63f638ec4180cc5 +size 2634043 diff --git a/video/aBpxukZS37_39026856.mp4 b/video/aBpxukZS37_39026856.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..57617626b66c495ada1bb3f167e933fecd2c4cd6 --- /dev/null +++ b/video/aBpxukZS37_39026856.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0efeb351bd934d1558b96c5ac02a800cca83fac9bb3ba1bec404abf57e5680e4 +size 2939330 diff --git a/video/aDQlAz09dS_39024507.mp4 b/video/aDQlAz09dS_39024507.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..738180d7e29466f5252c8faa76f7c3a3f92be5d0 --- /dev/null +++ b/video/aDQlAz09dS_39024507.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1cc584b86b296edf8800f41aaceae94d64f51b433217ffff3714010bba535bd +size 2251387 diff --git a/video/aFB97F8QSF_39026721.mp4 b/video/aFB97F8QSF_39026721.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ed34e08f5bd7efe8d33c83676b846c3594ca299d --- /dev/null +++ b/video/aFB97F8QSF_39026721.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46086fd1ae5ebd7a987e68b53c0e2a8f5847d612e26c9d842426787058b5cf7b +size 3133830 diff --git a/video/aFWx1N84Fe_39024399.mp4 b/video/aFWx1N84Fe_39024399.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..25291a10b86b12b1f893264162ad5f2392f9a30d --- /dev/null +++ b/video/aFWx1N84Fe_39024399.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a1ad28389728d263e981887eb9e87e9533c421484b84be9543252f5e874974c +size 2528718 diff --git a/video/aGH43rjoe4_39019185.mp4 b/video/aGH43rjoe4_39019185.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..637e5c3e7936eed01152a651117111e52f1db949 --- /dev/null +++ b/video/aGH43rjoe4_39019185.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c09308a988d00d7e4e0174d87e06b5bf804a9c22e27d773ff507dfeebd3f0cf4 +size 1439462 diff --git a/video/aIPwlkdOut_39025474.mp4 b/video/aIPwlkdOut_39025474.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..8d14ce11815e29605781b9f79d01dbfcf1d21596 --- /dev/null +++ b/video/aIPwlkdOut_39025474.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b5a8daad17bfc90c83b17e3c4881e1a9e5ae4f377214664ec8abb49c2043826 +size 2735275 diff --git a/video/aIeXn5103e_39026713.mp4 b/video/aIeXn5103e_39026713.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6253e95ac148dc728d7c44342c63e9db115f34ae --- /dev/null +++ b/video/aIeXn5103e_39026713.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e696bbcadbdbeba5be1bbde95c386bb8c92c2d106e8f187378ce8a45d2675425 +size 2513693 diff --git a/video/aIok3ZD9to_39017213.mp4 b/video/aIok3ZD9to_39017213.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7a4120b996187918230ec5cb2fe5bd4f6aae48d2 --- /dev/null +++ b/video/aIok3ZD9to_39017213.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21c8d3a4c1e4ed18f504b1437ef95e4a7e43cf0502a91ee5601a4af6533e90fe +size 2991509 diff --git a/video/aJGKs7QOZM_39025457.mp4 b/video/aJGKs7QOZM_39025457.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..19ec8cc8ad18dda665880cbb9d311f347656594d --- /dev/null +++ b/video/aJGKs7QOZM_39025457.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b231ac8cd0dd11951d3fca97650411101aab0dfafda9db483a8d81cdeb9ecfd +size 2778330 diff --git a/video/aKJEHWmBEf_39018998.mp4 b/video/aKJEHWmBEf_39018998.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8a64cc180355ce00afa8eca02abccbcc9d1aafb9 --- /dev/null +++ b/video/aKJEHWmBEf_39018998.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:110fad3a0a0fcccf805a020c8ca04b35b776dd14e5e467a3030d7d24ed6a44a0 +size 2677981 diff --git a/video/aLzA7MSc6Y_39027487.mp4 b/video/aLzA7MSc6Y_39027487.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5bcd8b8899945618b429883da9dae635d1d92c27 --- /dev/null +++ b/video/aLzA7MSc6Y_39027487.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b18d3c3b694222587abbd6ff5b95d315c69b85f53e7169781e588dcf9ffdc2f2 +size 2325333 diff --git a/video/aRokfUfIQs_39026858.mp4 b/video/aRokfUfIQs_39026858.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6f6219b3c72f739255887fad45f1e8ffdfe38925 --- /dev/null +++ b/video/aRokfUfIQs_39026858.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe2046b937adc073627c4003189b865e3da41fae74c271f04f246639e5ddc16c +size 2376008 diff --git a/video/aUHSwmHRVb_39026585.mp4 b/video/aUHSwmHRVb_39026585.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..192cd67ff864886fc94764ee6b01378c5cb4c341 --- /dev/null +++ b/video/aUHSwmHRVb_39026585.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:360a1234a88b91b64f225310b63d61043242782e4d8542dedd0dd16cd29ef5a8 +size 2689281 diff --git a/video/aVK4JFpegy_39026979.mp4 b/video/aVK4JFpegy_39026979.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d897e0e09889439cb45b7b03740b7e3d266f36b1 --- /dev/null +++ b/video/aVK4JFpegy_39026979.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a61e632ddc6916d6132aa90a019a6f3865978cb601265c7a58dea478a744c7c2 +size 2789415 diff --git a/video/aXApeuAYkg_39027620.mp4 b/video/aXApeuAYkg_39027620.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..6604e6ac0ab93f7cbfef8e15241b80e003e5953e --- /dev/null +++ b/video/aXApeuAYkg_39027620.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:60b7704d7b8df05f2f355a32de6f9dd241106fca41232047720605747b5efda0 +size 2544024 diff --git a/video/aXS1pwMa8I_39024842.mp4 b/video/aXS1pwMa8I_39024842.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..281758bfc777c0185f1e295552ff840f822cada1 --- /dev/null +++ b/video/aXS1pwMa8I_39024842.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b37985a9b1285f8a5229a726e1e9b7ea5d51ff56643825dabe79f6903e7cf40 +size 2617505 diff --git a/video/aZH1dM3GOX_39017777.mp4 b/video/aZH1dM3GOX_39017777.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..433dec85ac4c3c162bbd97d045296ace22988b70 --- /dev/null +++ b/video/aZH1dM3GOX_39017777.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5acc0a69b3f04e880dc7a6e17845bb01cd0ed3b925986667a8e179fb282533e2 +size 2027398 diff --git a/video/aaBnFAyW9O_39017776.mp4 b/video/aaBnFAyW9O_39017776.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..db9029a61de944b4ff71b420f0a13334da0c89fb --- /dev/null +++ b/video/aaBnFAyW9O_39017776.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:327a9a048baa29c177e3e2e265cc8814befd76c420a70091ce2a00501a944518 +size 1847963 diff --git a/video/adSGeugiuj_39017774.mp4 b/video/adSGeugiuj_39017774.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..65380670f8cec67f5ef25862e0c9b34545e21538 --- /dev/null +++ b/video/adSGeugiuj_39017774.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1445288ed0d5d925cdab928a62e8cd4bf4383aaebc7df7f5d72a448c11b5888a +size 2405562 diff --git a/video/aeYNVtTo7o_39026890.mp4 b/video/aeYNVtTo7o_39026890.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3524f5bc964269e50665dab0b72450e506b9ed95 --- /dev/null +++ b/video/aeYNVtTo7o_39026890.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14b18d5735bf798d2a56066d0231434fa9ac0f625db1419ea18988bfcd314a7b +size 3517680 diff --git a/video/ag3o2T51Ht_39017771.mp4 b/video/ag3o2T51Ht_39017771.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5952926320c5cb7ff5d68a1b5178466f69527d25 --- /dev/null +++ b/video/ag3o2T51Ht_39017771.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:339f4792ff7ea3d1bb27430e1f81045700dd9bfa22a972e4717a653c6b2010c6 +size 3023781 diff --git a/video/ag7piyoyut_39027335.mp4 b/video/ag7piyoyut_39027335.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..54a16bab95bf522de2ba2562a08c720bb52bbdaf --- /dev/null +++ b/video/ag7piyoyut_39027335.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ebc0613ed4c6750b531f083901487a9eaa364eaf24b7e1b2d87614258cef9ee +size 2073408 diff --git a/video/ahvOhPkkMx_39028451.mp4 b/video/ahvOhPkkMx_39028451.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..672a3879383bc4ddb4b811ff3ccf444b92ab4f60 --- /dev/null +++ b/video/ahvOhPkkMx_39028451.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1a712bf0ee4a3f54aacd44d1b9c1ec3376897eaf2e2fe737ad16cdf4f322c6c +size 2572884 diff --git a/video/anyZgGLQ6n_39026467.mp4 b/video/anyZgGLQ6n_39026467.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..219c5f5f0225337930986cf0f89638831b0efc77 --- /dev/null +++ b/video/anyZgGLQ6n_39026467.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23ea2b61bca2994507a3c6b03ab1e65bf865f32bb79a7765409263b07cbdafc5 +size 1952336 diff --git a/video/anzIzGZuLi_39018696.mp4 b/video/anzIzGZuLi_39018696.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..750b2065b20393b61c9426301e0df24e307d8585 --- /dev/null +++ b/video/anzIzGZuLi_39018696.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4762cc65ac0e3a1ff68beab05c036fb41ad12ff7ade39864d1ef9c90b5e53e47 +size 2011151 diff --git a/video/aq3I5B6GLG_39026459.mp4 b/video/aq3I5B6GLG_39026459.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5b7f3d89da4616915128e8855a55d255cc0d45aa --- /dev/null +++ b/video/aq3I5B6GLG_39026459.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:450d787ecbb622be692c6ca46deb5d35f7c52f49a41803551b3adda1cc9c60a5 +size 3052014 diff --git a/video/atDcnWqG5n_39024379.mp4 b/video/atDcnWqG5n_39024379.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a7c10415ebe94034c369d56d29298c81038fa0ac --- /dev/null +++ b/video/atDcnWqG5n_39024379.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:895b980e528356a18838281a9bba2f16eb25421f528244b9c8fda5b70d87147f +size 2608388 diff --git a/video/axX62CQJpa_39025431.mp4 b/video/axX62CQJpa_39025431.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..05f66519a5a3cb6e7d0d59173ba791c5c8a74706 --- /dev/null +++ b/video/axX62CQJpa_39025431.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0e0807cc06263f8e9a4dd760641e0796c8c4938136d2a9d3165ee43178a06dd7 +size 1828075 diff --git a/video/b172ac0R4L_39027484.mp4 b/video/b172ac0R4L_39027484.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5ceff42051be01cd985b6166cb87ee17b30aa9d3 --- /dev/null +++ b/video/b172ac0R4L_39027484.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b986437616e2f6e32cecb9fb227819845fa67c75e7b7f681b1e91ce6695aacbb +size 2728935 diff --git a/video/b1XPHC7MQB_39027294.mp4 b/video/b1XPHC7MQB_39027294.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0fb8fbbc61e79401f60c307a15d1885ab8dca802 --- /dev/null +++ b/video/b1XPHC7MQB_39027294.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0e1fd8e65644d30e856ae3fdc0baddf125609b00fe501b1712a04a80055aae1e +size 2386102 diff --git a/video/b1ggjW00NI_39027465.mp4 b/video/b1ggjW00NI_39027465.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..767c61dce8a73300fe5071b09667b4b5a7eea30e --- /dev/null +++ b/video/b1ggjW00NI_39027465.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f637b55f324d3da135c56b67a4abd885d29ed4be21e47f3d7e57fb6f7fbe358 +size 2255820 diff --git a/video/b3kDP3IytM_39017763.mp4 b/video/b3kDP3IytM_39017763.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3d85da34e9d2dd6dc1f5a423cdd57cb32704fba3 --- /dev/null +++ b/video/b3kDP3IytM_39017763.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44a87062125283dfa6639122447a2fd73fb2560a464de60c8ab5e2ae932e861b +size 2537328 diff --git a/video/b7hmPlOqr8_39028030.mp4 b/video/b7hmPlOqr8_39028030.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..32dbe127354094b57cb0c4057cb7beb2d9722c14 --- /dev/null +++ b/video/b7hmPlOqr8_39028030.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c433158ebe2be89c4615e4e61351733ed5b8c8231f53259d3973bf386edd8ba6 +size 2000980 diff --git a/video/bCqIx5Q8qX_39028428.mp4 b/video/bCqIx5Q8qX_39028428.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f5381993806263992ccecdde95c64a87b89b4b43 --- /dev/null +++ b/video/bCqIx5Q8qX_39028428.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6be59446493afc87c837e3d8bcd611d3ff14adcb61a29393df359032e2556aad +size 2082395 diff --git a/video/bEunGps83o_39026653.mp4 b/video/bEunGps83o_39026653.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c9de928c09e79f775914755b1821a93cb1b487b4 --- /dev/null +++ b/video/bEunGps83o_39026653.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f76baf1accb89d5bedd58f334c8f49c3e0c55e55da851e663ee978f53f2861c +size 1538588 diff --git a/video/bFoQXD7Uls_39027299.mp4 b/video/bFoQXD7Uls_39027299.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..972e5d99833e026b07bd328118e0891760aeb425 --- /dev/null +++ b/video/bFoQXD7Uls_39027299.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b96000b7cd966dd99f9f73447615bdd0b048b6b9b75ef0469f5d500f67c5d5f +size 1925507 diff --git a/video/bFrNPlWchg_39025760.mp4 b/video/bFrNPlWchg_39025760.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7f5cbf914cab232c3e6e977d88f2190201b9992a --- /dev/null +++ b/video/bFrNPlWchg_39025760.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23cf2bf8115bf3a0ba7c13c6736cf16799594521c43c4e11eb6b95d92ac1157f +size 2420219 diff --git a/video/bGhsbfyg3b_39026779.mp4 b/video/bGhsbfyg3b_39026779.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4f74d29befb730b9e3ed2f0ccf31c9f293904603 --- /dev/null +++ b/video/bGhsbfyg3b_39026779.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cda3bc67017f3b85e1f4c1441d36f0b911d9c2438f4a75ae7e5cac4a1f32155b +size 2884329 diff --git a/video/bHP9hX4SvI_39028133.mp4 b/video/bHP9hX4SvI_39028133.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f9171ff3a04a49f2b21888e6e9a3f5fda87cfcfc --- /dev/null +++ b/video/bHP9hX4SvI_39028133.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da448e663c05bbf34d9f19dcf52684f73dd3661292e1a3cf600359221133d203 +size 2612710 diff --git a/video/bHgkT0sUy6_39026131.mp4 b/video/bHgkT0sUy6_39026131.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a934ccbf18ac676163a26fef7da03aa01c1a7133 --- /dev/null +++ b/video/bHgkT0sUy6_39026131.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6189068b677f78f6f24dcada978d688888b4bc024af8408c98e1eba8a133656 +size 2068302 diff --git a/video/bIa03mAtxQ_39028776.mp4 b/video/bIa03mAtxQ_39028776.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b4bd92cee0ff1cc2e21318919de83bd66ba9cea8 --- /dev/null +++ b/video/bIa03mAtxQ_39028776.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14ec0e69d15c8a6b96bf5e8f99141f77b731d5d712fbbc57003f23b53c2c5d30 +size 3496786 diff --git a/video/bKOZYBJE4Z_39026260.mp4 b/video/bKOZYBJE4Z_39026260.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..b16d97b4fa2b4e333558a25528695670c47b013d --- /dev/null +++ b/video/bKOZYBJE4Z_39026260.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6da9fe99a2171bfb613e027565681996cde2c6c8b9476c201281979509f4e935 +size 3011145 diff --git a/video/bMTn8KKrbq_39027413.mp4 b/video/bMTn8KKrbq_39027413.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c978d153665fdca84766f56b3f8e56df49b0c5bc --- /dev/null +++ b/video/bMTn8KKrbq_39027413.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6d99e1fb82d5a65135e4d0e1710d710dd9f64ab509427ac8bc7145642cc1c8a5 +size 2854621 diff --git a/video/bNDwOoxj6W_39028231.mp4 b/video/bNDwOoxj6W_39028231.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e11d708bd9c10e2992643b8fe8ab7fb577397d24 --- /dev/null +++ b/video/bNDwOoxj6W_39028231.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:544f16309801cfdda322ad0c51ba2e72aeae3abbd001c338978bbd87caef6d41 +size 2181751 diff --git a/video/bO5bUxvH6m_39026614.mp4 b/video/bO5bUxvH6m_39026614.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..237c5bff4b3841326dc28a7833aeaab8b900e90f --- /dev/null +++ b/video/bO5bUxvH6m_39026614.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7dd9f91e2f3c439ad751a53762e882b3f2ab05811941e58edd0cbb9d25edd72b +size 2154628 diff --git a/video/bOS6WPV0Jf_39026128.mp4 b/video/bOS6WPV0Jf_39026128.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6ac2365fdc1318c74878878dfd2b0fc962c41f93 --- /dev/null +++ b/video/bOS6WPV0Jf_39026128.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2bdcc24fcd35c6ad1a8fc98f4863226b8587a5b1ac359c7c40b401c9b088f888 +size 2618349 diff --git a/video/bQMevGCYVM_39024566.mp4 b/video/bQMevGCYVM_39024566.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0d28c1a74551e19f117495bfa5dd22ecc24ea140 --- /dev/null +++ b/video/bQMevGCYVM_39024566.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a2ed6b6a0daf86c2834c515f99f83ccc9708a6fcf94150f6b1e6f6773b543274 +size 2939716 diff --git a/video/bRLed9prWC_39017753.mp4 b/video/bRLed9prWC_39017753.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1942e22ebf99bebee5ce56c08a74a87355be4856 --- /dev/null +++ b/video/bRLed9prWC_39017753.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d0a9f8558ad19b22e362017992046108f5df849205f11551d69d9c4e06bedecf +size 2316588 diff --git a/video/bUi2xECa7w_39027959.mp4 b/video/bUi2xECa7w_39027959.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d73668631989211e0688e4bbb84dacdba6c166fd --- /dev/null +++ b/video/bUi2xECa7w_39027959.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b7ca4fb3a30f62278d1de6e76e7731fb52c516a89f600755c386493e18b5848 +size 1747785 diff --git a/video/bWNJFD1l8M_39017751.mp4 b/video/bWNJFD1l8M_39017751.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4a0d2f291e2d41a396c4e18bc835ede394b75435 --- /dev/null +++ b/video/bWNJFD1l8M_39017751.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:03efa6f20acd5bc40c4cdbcf81123789ef5e137bfc6cc241d46079827d4c0f12 +size 1362444 diff --git a/video/bbCL5aRjUx_39017749.mp4 b/video/bbCL5aRjUx_39017749.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..7005bd87f9b2816503cff4f6641bb9b4b1581785 --- /dev/null +++ b/video/bbCL5aRjUx_39017749.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7277841a7ab2c93c0c86a3f5e686b31845c3c87082694b8b55b685f98d23e537 +size 2908238 diff --git a/video/bbGPoL1NLo_39027343.mp4 b/video/bbGPoL1NLo_39027343.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5145d22758a08ff8385c7b099ca1605025fda6c6 --- /dev/null +++ b/video/bbGPoL1NLo_39027343.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f912f7971ee9e57de44ff26b05f8c351ffdc42154dffe2822c801f300544bcfc +size 2537413 diff --git a/video/bcVLFQCOjc_39026952.mp4 b/video/bcVLFQCOjc_39026952.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..eb27ee35dae226ddef8c41a8aa0e7a78506cab33 --- /dev/null +++ b/video/bcVLFQCOjc_39026952.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b140e27629fe11758b3642c1673dbcf0d814303209ecec446f6ce48f78a1cb3 +size 2652850 diff --git a/video/bg6fVPVs3s_39027378.mp4 b/video/bg6fVPVs3s_39027378.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fc0de89efced4bb6ccc14a6aaa23a6fce24871f2 --- /dev/null +++ b/video/bg6fVPVs3s_39027378.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61e6ba59dd7cded858f1b0f4b43d9391dda92b6ef212b22fd7397cf59eb8f660 +size 3108644 diff --git a/video/bhSfbjS6j9_39026171.mp4 b/video/bhSfbjS6j9_39026171.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..602c464db198fbdaaf54de711f6598f872ca1211 --- /dev/null +++ b/video/bhSfbjS6j9_39026171.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dba41d3f2f65395d92f94fb0def2abc48812e5276f8b60d8e509ced6a79ff299 +size 2404289 diff --git a/video/bkUvKPKafQ_39026188.mp4 b/video/bkUvKPKafQ_39026188.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a4bcd05ceec85b07f514f8acb6332238b84488df --- /dev/null +++ b/video/bkUvKPKafQ_39026188.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13ee9e0f9303bc42a409637fa936c11220b3da2829a4ff2802bddc33f6014142 +size 1579150 diff --git a/video/bkdWThqE6q_39019216.mp4 b/video/bkdWThqE6q_39019216.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e299fe1f87c4bbd5667f3f9cbea08e18ef09eaa5 --- /dev/null +++ b/video/bkdWThqE6q_39019216.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d5c25e6f0b2fc35a0d15472a3bab995607d6584c07050c86bd8dd26ed02ec95 +size 2848762 diff --git a/video/bmoS6Ggw4j_39025278.mp4 b/video/bmoS6Ggw4j_39025278.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f21667322a8ffbe2fe9d3a1a653e0b6382415351 --- /dev/null +++ b/video/bmoS6Ggw4j_39025278.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea8486678fe29375465b501162f6cbea61c857832a4baaf6c26cdfcb5ed1012b +size 2357034 diff --git a/video/bnzeOG0yey_39024567.mp4 b/video/bnzeOG0yey_39024567.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..50f45f272c350e7a6c03563f6156cd27b1fae8c8 --- /dev/null +++ b/video/bnzeOG0yey_39024567.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7117c98412640725a05ad91cd6604ca3facfcfc3fbf327271828136eaa31b172 +size 2872838 diff --git a/video/btLLWaOrFs_39025594.mp4 b/video/btLLWaOrFs_39025594.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..591a9163eabd3465addfcceccef01f8f73b4c2a0 --- /dev/null +++ b/video/btLLWaOrFs_39025594.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:892c73683ac5896daef383a6238f7d1cc41958e4307676c4c4b205ed68b56ef6 +size 2380090 diff --git a/video/btuHzsAVsK_39027516.mp4 b/video/btuHzsAVsK_39027516.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cf8bea9c925a3825d8985d75552532a0f76f9369 --- /dev/null +++ b/video/btuHzsAVsK_39027516.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ef1a6d52af765e176e260005d73f7d38cba3293f0bdd8ddf6365541002c7c7c7 +size 2548718 diff --git a/video/buqvMT3B4k_39025183.mp4 b/video/buqvMT3B4k_39025183.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5dbc85ad06f4192e8e0b04eeac7445d59636ea5f --- /dev/null +++ b/video/buqvMT3B4k_39025183.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:75af160be8dc3ff1d100bb6f984c54280daf8c403a2be9ce3baf74954a013b70 +size 2619043 diff --git a/video/bxH6T1w1FW_39024651.mp4 b/video/bxH6T1w1FW_39024651.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..584bbfb9d2b1bffeea9be18f2fbdb46cba296b00 --- /dev/null +++ b/video/bxH6T1w1FW_39024651.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6ad4d9b240ced1aa7d0f2f846b1645350a9fb3f011a6a7de8a9b6e193a052eaa +size 3072057 diff --git a/video/bzuQtVDxv0_39025311.mp4 b/video/bzuQtVDxv0_39025311.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9613856c2a93093e82b0a47a9510be69d5ee24e9 --- /dev/null +++ b/video/bzuQtVDxv0_39025311.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20fff3a1b7375e61d6a0cbe8a111626ccb2c5a5700234acd3bda8c8d608b606d +size 815758 diff --git a/video/c37x7CXZ2Y_39027495.mp4 b/video/c37x7CXZ2Y_39027495.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5846a53a51407246969ae42f742a5d4e10b4959b --- /dev/null +++ b/video/c37x7CXZ2Y_39027495.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e28d94309434b74fa92474ba14c945bbff91837d624fa943f0a652b23e027d50 +size 2139505 diff --git a/video/c4ElkpA0kh_39025919.mp4 b/video/c4ElkpA0kh_39025919.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2a78fad609ceebfb6cd4b3b1f1f0220ef1aa1dd8 --- /dev/null +++ b/video/c4ElkpA0kh_39025919.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d957c20f5a8bd4a2254e13318e240ba986f3df7fa6e71a783db142b3fb0cd67 +size 1290027 diff --git a/video/c7DND1iIgb_39017028.mp4 b/video/c7DND1iIgb_39017028.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d1471c7ed17ba56f5e1111b3d0af480a69c473b8 --- /dev/null +++ b/video/c7DND1iIgb_39017028.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2eedcc0fe19fb79f2e3f85937ddecbca104f3a3c08c319dc3261776958fdfbd7 +size 3544585 diff --git a/video/c7m1HahBNf_39025094.mp4 b/video/c7m1HahBNf_39025094.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6f9a836cb226d16a24ea3b188a69c7143cfa70d4 --- /dev/null +++ b/video/c7m1HahBNf_39025094.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:417f98ad738967d712d86e5305358ea895bbe3e257687f9ef1b25f017191bd06 +size 2745977 diff --git a/video/cDS8WxnMVP_39025948.mp4 b/video/cDS8WxnMVP_39025948.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..5a51205f7d80a70a0d685a734bb1abfd2c236bc3 --- /dev/null +++ b/video/cDS8WxnMVP_39025948.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4143320ad6daa72eaf5a3753623e37d028b9429b8d7061536c2bf84dbb10895b +size 1928775 diff --git a/video/cEtExbAKYV_39028811.mp4 b/video/cEtExbAKYV_39028811.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..117243533e0de3f05e4c3762ebc702db19cdfaf5 --- /dev/null +++ b/video/cEtExbAKYV_39028811.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:063246ff115c27976bd5d921874df592f0c49f2bf856fceb6b652a2376556d20 +size 2271601 diff --git a/video/cINwAhrgLf_39017739.mp4 b/video/cINwAhrgLf_39017739.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..167e868bea616a7e0c42da2474abd9aeb25cd1f3 --- /dev/null +++ b/video/cINwAhrgLf_39017739.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dba437e81c9bc697a678da3d4c25f546efc63eb5e094ca526a9917a138e3cca9 +size 2130170 diff --git a/video/cPzjN7KABv_39027871.mp4 b/video/cPzjN7KABv_39027871.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6fb5744a0dd9dfa32825f27a97642174942b7121 --- /dev/null +++ b/video/cPzjN7KABv_39027871.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae542dcc768f7a2951f2b94c641f4968e5f8acf53fd0d231e2e2f305c936bf9d +size 3050340 diff --git a/video/cQoAgPBARc_39025844.mp4 b/video/cQoAgPBARc_39025844.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..63413d7f316e56bb955d53a1e3f00111cbb52f96 --- /dev/null +++ b/video/cQoAgPBARc_39025844.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93405cd353ba723e9b71e6cad6ef389f201a60a8a6034ed84efa0eb2eea0e055 +size 7756 diff --git a/video/cRLFvSOrzt_39025124.mp4 b/video/cRLFvSOrzt_39025124.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2f514437093206bd098ab63a70d16c035ba54d33 --- /dev/null +++ b/video/cRLFvSOrzt_39025124.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7ff180a4418027f9fa5e1ba65e1537974699d1c071efda0203e716990911f07c +size 2150082 diff --git a/video/cRlQHncjwT_39025885.mp4 b/video/cRlQHncjwT_39025885.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7e2ddce420faf925ae3ff658ad3a88e5a9427289 --- /dev/null +++ b/video/cRlQHncjwT_39025885.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8627ed5055b3e84cbe573a16531ec7815861ce552ac933762beb136ac5c68e77 +size 2224749 diff --git a/video/cSfxzCozPU_39028752.mp4 b/video/cSfxzCozPU_39028752.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b1d1f7f759d4a90e42c40c22061b339954cf79f4 --- /dev/null +++ b/video/cSfxzCozPU_39028752.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83311f3ecf7a31f720f39b6cd017087267bdc42e5b89ffc8eef532a4c0b0d3e9 +size 1739249 diff --git a/video/cUGf2HaNcs_39028007.mp4 b/video/cUGf2HaNcs_39028007.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5cf3a19d32f405add831888477bee16e8b6835db --- /dev/null +++ b/video/cUGf2HaNcs_39028007.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:80c87a77670cdaa69bf44f5bd69f47f94c1c0529a220561881ab2bcb35ba36c1 +size 3124627 diff --git a/video/cUSNs8nGaV_39018718.mp4 b/video/cUSNs8nGaV_39018718.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5db4293ca27f0fe4d804f1a5a5e69e500d298e27 
--- /dev/null +++ b/video/cUSNs8nGaV_39018718.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7154077b2623ba95124f374075a4231e195b0c0314659e1fb0e2c2d43adfbb71 +size 2808692 diff --git a/video/cV2LKBdlz4_39025133.mp4 b/video/cV2LKBdlz4_39025133.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a2e59f2880084a5f75626245bcb983ec0e2c6a0f --- /dev/null +++ b/video/cV2LKBdlz4_39025133.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:07db9690cda321154725732c1dccf8118801f5c41a3090691f8dff35a8d1ce49 +size 2053225 diff --git a/video/cWdAYDLmPa_39018727.mp4 b/video/cWdAYDLmPa_39018727.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e70cb2c40a8fb97c6660d45594739e775097e4fb --- /dev/null +++ b/video/cWdAYDLmPa_39018727.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:28ee8ae31bf91f838885f92cc3c9acb6a12a2d43eee50e21d8d65b2bd394cfa3 +size 2264270 diff --git a/video/cYZibc2gKf_39027381.mp4 b/video/cYZibc2gKf_39027381.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7c2b6ffe91333d8b4b9a00498363f05e650aef99 --- /dev/null +++ b/video/cYZibc2gKf_39027381.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:307d5ef9c574bb38a7c46f296b37d40dbb63639c7cf1b4dda2d2fe8b21084dd2 +size 3093382 diff --git a/video/ccQ4fmwLDb_39026821.mp4 b/video/ccQ4fmwLDb_39026821.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ca04704e5936aeff9e418bd447ff7b6de1561d9c --- /dev/null +++ b/video/ccQ4fmwLDb_39026821.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ee507cd96e8d7c62b6f9d816acea4070cc38e1b59fbb17b5c4b96c4adac54ed +size 3092292 diff --git a/video/cdUpf6t6LZ_39017734.mp4 b/video/cdUpf6t6LZ_39017734.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3fd06c419400eb2ca7d3fad8512e429fbf61c2f1 --- /dev/null +++ b/video/cdUpf6t6LZ_39017734.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2aedb021840b8db367b8099747e81ef33a338fc92b6e2aa1e93be32e3975e5b +size 1851397 diff --git a/video/cgiOX8lfwG_39027086.mp4 b/video/cgiOX8lfwG_39027086.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3762a6cd5259045e0555e57db8c41ae6653d792f --- /dev/null +++ b/video/cgiOX8lfwG_39027086.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:08eedd61220d703c33df9ec46b3573b1feff308a07d6fa13646ac59e4427eef8 +size 3330226 diff --git a/video/ciwOcmo8CC_39024997.mp4 b/video/ciwOcmo8CC_39024997.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..27d12a8408e5426f7852d251af052e00f590610c --- /dev/null +++ b/video/ciwOcmo8CC_39024997.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00c1f5c937203468fd4e2b82853077c399fd156539c31190c74e32adcf72847f +size 1376299 diff --git a/video/clBiQUgj4w_39028295.mp4 b/video/clBiQUgj4w_39028295.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fd78444f81464caaa498d7e465b5f2ff1111ef4c --- /dev/null +++ b/video/clBiQUgj4w_39028295.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a0bcfc29b1e4e002fa9b9d38e94a75eef52aef9c1836313d9c87bd0fa0290a2 +size 2437125 diff --git a/video/clDGHpx2la_39027334.mp4 b/video/clDGHpx2la_39027334.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2f24bb500cbe0585c28f37cd0e150c0d940e1587 --- /dev/null +++ b/video/clDGHpx2la_39027334.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:1f53370a9de0e7a45a3ec73853316202150a67e0c6206cd879b74de9a1ebb418 +size 2427302 diff --git a/video/clQdPtooRD_39027382.mp4 b/video/clQdPtooRD_39027382.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f5e82a63081aca03193acd8f09e477126d86cfaf --- /dev/null +++ b/video/clQdPtooRD_39027382.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d3b9bf6f594cd410d1018c0d8df888775a9f79ac7812f8aa44ff000cd0df2cf4 +size 2207315 diff --git a/video/cmSNX47aEH_39027709.mp4 b/video/cmSNX47aEH_39027709.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..015a77341f4d75e6d4c033d5fa8bcce16d7ad010 --- /dev/null +++ b/video/cmSNX47aEH_39027709.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:80dc89267e870dc726807f8bc27a9ed12fdb04585a3122c7e9ae0d0af1b8e99d +size 1878502 diff --git a/video/cmcD05NPKa_39017732.mp4 b/video/cmcD05NPKa_39017732.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2295db110fd092ce6a09d1c8fe27b1f517c8be7a --- /dev/null +++ b/video/cmcD05NPKa_39017732.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c71236d9df8fb6e587d7b2ffdc01771ae532546876e70e866f42ef1ee20e90c +size 2759206 diff --git a/video/cnpR4e2HCQ_39026081.mp4 b/video/cnpR4e2HCQ_39026081.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1c6a936830bb80ebd8b11f6636a0f233848ee91d --- /dev/null +++ b/video/cnpR4e2HCQ_39026081.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:79b26d0baaeae762b9dedddf9a1d114cbdd0ff1c79d2773b3a8bb65c6bd36413 +size 1774874 diff --git a/video/coIaBY8EVF_39017731.mp4 b/video/coIaBY8EVF_39017731.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..da035c2eb6346c45fbf35a62345ce0f746934064 --- /dev/null +++ b/video/coIaBY8EVF_39017731.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f7c25a2bab82678c554b01a90f7ffdd0024f6279b0b556abd7242cf2baae62b4 +size 2771033 diff --git a/video/cphhnHjCvC_39017196.mp4 b/video/cphhnHjCvC_39017196.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9d35e42481411279e34756d5f2880efd9af103d7 --- /dev/null +++ b/video/cphhnHjCvC_39017196.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce397e21cc11aea624e13f335f86ff89f2b4316df09298848a2ac5268ec51d25 +size 1364609 diff --git a/video/cpklMJqZDE_39025934.mp4 b/video/cpklMJqZDE_39025934.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ec3e47da6030de309ed18fd46d9942b68c48bf65 --- /dev/null +++ b/video/cpklMJqZDE_39025934.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6f0b8dec0d6b4ec452f3869de527b30f08daed9cbfa14c666b860e28f4094731 +size 1880028 diff --git a/video/crlvDzDPgM_39027817.mp4 b/video/crlvDzDPgM_39027817.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3c4380854ec2a105ae3353a5509d86d169147b0d --- /dev/null +++ b/video/crlvDzDPgM_39027817.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2883a489d9f5f6019d555624b5b9541af43c3f14d34b9ba9fc728b8d9822450f +size 3151408 diff --git a/video/cs1HISJkLU_39024631.mp4 b/video/cs1HISJkLU_39024631.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..eada2b15045c59f5dbf92725bfed1bdbd3acf239 --- /dev/null +++ b/video/cs1HISJkLU_39024631.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:35b74fbd2dfdca7d1f6293b1bd0ba7a6892fc1aae5240f134db447d13242e449 +size 2234787 diff --git a/video/csukJcpYDe_39017033.mp4 b/video/csukJcpYDe_39017033.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c97c2d4af67073c780385660417fc1518d42b40c --- /dev/null +++ b/video/csukJcpYDe_39017033.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0cae8067950f80dd1856f2d989b740c1dc7bb6e6612ca21621b432c8d940b28f +size 2595056 diff --git a/video/ctxtY3VGGq_39024618.mp4 b/video/ctxtY3VGGq_39024618.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..aa91a33b87d7c1595ce43064bf9d198c5f4d7dc5 --- /dev/null +++ b/video/ctxtY3VGGq_39024618.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:56dd8124e6b1047f885422219c580cbcd948d8655f6959bbe6600316768c1fcf +size 2654716 diff --git a/video/cuAxSHcsSX_39017730.mp4 b/video/cuAxSHcsSX_39017730.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9871174dae122be93989e39f3030ddc06375b78e --- /dev/null +++ b/video/cuAxSHcsSX_39017730.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d20d92ee47cad6732d8afb39f95ac913a08062c2f41c83e8081eb546939be415 +size 2967786 diff --git a/video/cuWsR25bbI_39027457.mp4 b/video/cuWsR25bbI_39027457.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2bfdd71878597f86e9c0adaf64ad290b02025b40 --- /dev/null +++ b/video/cuWsR25bbI_39027457.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7529ddaee5405e2ae9f709ca7a3b374b7ae98e9410046e0949561efb59c78dfd +size 2300972 diff --git a/video/cw5mgd71jW_39028223.mp4 b/video/cw5mgd71jW_39028223.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..60002018d07495248489d8e44136655b2f655863 --- /dev/null +++ b/video/cw5mgd71jW_39028223.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7229b7fe9ef3b745cea61467ede9458bf7532d31d2444a775ffd29d73019b13 +size 2492442 diff --git a/video/cxfPefbu1s_39017729.mp4 b/video/cxfPefbu1s_39017729.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a03c9d0cc4372a025ac38a3ef7f926d3471685e0 --- /dev/null +++ b/video/cxfPefbu1s_39017729.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb2545fc9492df0c3c414988059f977dec8e57e0eddf083e648f41f5bd148752 +size 1696056 diff --git a/video/cyJxphdw3B_39025452.mp4 b/video/cyJxphdw3B_39025452.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..439812be37be495611a38524df6373873ee565ac --- /dev/null +++ b/video/cyJxphdw3B_39025452.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3bb466547e298bf46c2e79ebd8cd525beacdf8a425ef04e015e67d120756312b +size 1911570 diff --git a/video/cyv0LkIaoH_39024511.mp4 b/video/cyv0LkIaoH_39024511.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..44cbcd046129cb6890f66cd5e1e59ac03d7e58a3 --- /dev/null +++ b/video/cyv0LkIaoH_39024511.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:57c815e34969f6df300dcd6ea7210a7225d91484175602bda35e914ca20b96cc +size 2085073 diff --git a/video/d226uyWYUo_39026371.mp4 b/video/d226uyWYUo_39026371.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..385f7ef9010f104ec17efb20630607d91e850681 --- /dev/null +++ b/video/d226uyWYUo_39026371.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb912b0c9bb7b4ce7f13b92bb955077816dd310fb70c525e5e83730716b36642 +size 
3186117 diff --git a/video/d2lPM1Aczs_39025618.mp4 b/video/d2lPM1Aczs_39025618.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c0ff9ca8596716a9efdcae9b82ef8cf6a5d1a979 --- /dev/null +++ b/video/d2lPM1Aczs_39025618.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9198be1001f6169cb23d7a02f3532ed45d3201028e3ea0320bf002f8f9baf50a +size 2329548 diff --git a/video/d5cKDHCrFJ_39025520.mp4 b/video/d5cKDHCrFJ_39025520.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6ff236074629e7aaf113a62ba75c8e5ef0f3e9f2 --- /dev/null +++ b/video/d5cKDHCrFJ_39025520.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:10374a044d5f0f168e89640efbd3746bc7da5bb209b8ab2189324127270c1379 +size 3343329 diff --git a/video/d6tUsZeVs7_39017724.mp4 b/video/d6tUsZeVs7_39017724.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2a8655ff8d7290a473746b8315dac828086c089c --- /dev/null +++ b/video/d6tUsZeVs7_39017724.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:937bc22436d5a93e66913b9e8acd4e62853507f3aa89a15baba2a84b201a4324 +size 1941031 diff --git a/video/d75qCZb7TX_39026730.mp4 b/video/d75qCZb7TX_39026730.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7147a2dd4c7371864248b84ebbc23e22f5b2e796 --- /dev/null +++ b/video/d75qCZb7TX_39026730.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1821d043409d886226f9f895409a291a08429e3116c871810d41fa6a6d0cd65 +size 2241419 diff --git a/video/d94x0gWTUX_39019100.mp4 b/video/d94x0gWTUX_39019100.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7e229c116a58b3130f68f6dd08a466d8bbf083a0 --- /dev/null +++ b/video/d94x0gWTUX_39019100.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:db7eacdd2f77442912af7863346a3b54fb038e056b8feee66a3eb78e101ab625 +size 2163684 diff --git a/video/dAXuir2ets_39024501.mp4 b/video/dAXuir2ets_39024501.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b76e5375ded188f8b1313143de1f5138ee22423d --- /dev/null +++ b/video/dAXuir2ets_39024501.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0f2586152d9e7182618e049b34e210e97187e03633f965b2931b5aed7c6fc09 +size 2336466 diff --git a/video/dB6gwSDXKL_39024944.mp4 b/video/dB6gwSDXKL_39024944.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1adf14b9b6c91b8c28d53779613a017d66d1e66d --- /dev/null +++ b/video/dB6gwSDXKL_39024944.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:119ab509739407f38646beac6bf9ce8caf86b0de8cbc03da7b644c9bae748baa +size 2832299 diff --git a/video/dBynjEbAt0_39024831.mp4 b/video/dBynjEbAt0_39024831.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..47b876783e330e84f401d24ea04f5836791233b5 --- /dev/null +++ b/video/dBynjEbAt0_39024831.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eda36ba76b6d5ea65bad14aa80250ea1b476d5004ad8799f5a646544f0d97665 +size 111412 diff --git a/video/dGQtja9X2C_39024638.mp4 b/video/dGQtja9X2C_39024638.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ea5d8eba48acdc32142795128353168f20b5e19f --- /dev/null +++ b/video/dGQtja9X2C_39024638.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:deb9eb52917e91e9422a959a817e8fe495391b3481b397acb7f377c2d9fea4df +size 2841490 diff --git a/video/dHIKahbV6G_39028037.mp4 
b/video/dHIKahbV6G_39028037.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2656972187f54f588ab82e8e32215456b69e7c85 --- /dev/null +++ b/video/dHIKahbV6G_39028037.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b3823db851ed7a853d587cd073d5e1b55229f8bc698f388e034d995c4a78c9b9 +size 3023549 diff --git a/video/dIHXwKjXRE_39028585.mp4 b/video/dIHXwKjXRE_39028585.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1f59d559ad5653855e842deecc76fe4231495d75 --- /dev/null +++ b/video/dIHXwKjXRE_39028585.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c545b38bd3738755411c93417fba5cc6bd583d140063dd6baf0012ecf1aae97 +size 2802621 diff --git a/video/dIVb5C0QFf_39028733.mp4 b/video/dIVb5C0QFf_39028733.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7b706db2f78500a3064eba27e2b52380cef64f5b --- /dev/null +++ b/video/dIVb5C0QFf_39028733.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:583d862d0ea7418dc4222018b1a868bc9eb5d4c8c684e9262bf82d6cef553de0 +size 7776 diff --git a/video/dJ9KzkQ0oH_39024810.mp4 b/video/dJ9KzkQ0oH_39024810.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..25496efb3309b2a108699668b98a521c4ff6a6bc --- /dev/null +++ b/video/dJ9KzkQ0oH_39024810.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:568d8bcd81d67b81ecae9a7344a45fde88b1ecbfb487a66a1a845b34be57d578 +size 3113410 diff --git a/video/dJUb9XRoZI_39027237.mp4 b/video/dJUb9XRoZI_39027237.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..aaef683ee8f3081b69c690c48ebabca9b4e2d2e9 --- /dev/null +++ b/video/dJUb9XRoZI_39027237.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b92da210f7d6e6ab4963d31be755fd750bfe91b4049d16f652c362f076e41983 +size 2787775 diff --git a/video/dKl6lMwbCy_39017721.mp4 b/video/dKl6lMwbCy_39017721.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6c24015d440b4a3c9224cc6f31e450094b2993ba --- /dev/null +++ b/video/dKl6lMwbCy_39017721.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77a445910392f93dea475a1107f3ac168cdb2b7c55f4961e41e6f9d0d4b497e0 +size 2084080 diff --git a/video/dLnduWGTB4_39026854.mp4 b/video/dLnduWGTB4_39026854.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bd99503575985889a3fa2c147de9d8184d503bb5 --- /dev/null +++ b/video/dLnduWGTB4_39026854.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b48d4b1e29695d07175bbc9f1183fae91a1a6cb2fff51043337ad61354e1537 +size 2707409 diff --git a/video/dPHLbUqGbr_39019010.mp4 b/video/dPHLbUqGbr_39019010.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0d4db701173a437750b37a835fdc3ca7f0d02a27 --- /dev/null +++ b/video/dPHLbUqGbr_39019010.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:07f54a54fc7b4fe4b807ecf2f45476872b5a295b4cd71521bdb51ab0846135f8 +size 3031146 diff --git a/video/dWwin2uGYE_39024549.mp4 b/video/dWwin2uGYE_39024549.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dda8dc0718eca221d623cc832880aaa678fe0fcf --- /dev/null +++ b/video/dWwin2uGYE_39024549.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b79b728ddf7db4dbfa419b8881f32c5de708cacb818e7bdccd83090549a6b71 +size 2953127 diff --git a/video/dYIqAZXQNV_39027956.mp4 b/video/dYIqAZXQNV_39027956.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..51be39170af3cd462a81ac38401be64cb5f1d015 --- /dev/null +++ b/video/dYIqAZXQNV_39027956.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d5307c237ebea319cf2274b9f1649b1b3e9948e010c1a1b36a706f57c2c572ac +size 2360925 diff --git a/video/da0ZJatRCN_39027747.mp4 b/video/da0ZJatRCN_39027747.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..83bc15984e1a92d360e05b9fc6d79b0de978f948 --- /dev/null +++ b/video/da0ZJatRCN_39027747.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:97cff512d8ec906f091de7f4ff57366ea80dde223a49cc47a71d534e784c220f +size 1624874 diff --git a/video/dbQH9AOVd5_39019284.mp4 b/video/dbQH9AOVd5_39019284.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0b1d03391a90cb551dd10e49116e54a7691df42e --- /dev/null +++ b/video/dbQH9AOVd5_39019284.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb0075148e976db090b97508e5e5f3056f61022692934d3fd732d123fc5fe402 +size 2705858 diff --git a/video/dbnEf790Kv_39027627.mp4 b/video/dbnEf790Kv_39027627.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6da22eaba7e5e5138e9c9127941379cdab6d5809 --- /dev/null +++ b/video/dbnEf790Kv_39027627.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9bcc65e62d2030fa7d46214a9a2bb84937bf4368a2d89f52f70a7a9bfb70c58e +size 1284345 diff --git a/video/dfqsW38v1X_39028268.mp4 b/video/dfqsW38v1X_39028268.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5347ceefc2d6624e32affd1e0970772f64efdf86 --- /dev/null +++ b/video/dfqsW38v1X_39028268.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:058b829dc0ceb994a85bb9b5e105edf824af91de98d0efa4d7cbe61aefe875e6 +size 3653425 diff --git a/video/dg3tI3c2B1_39028156.mp4 b/video/dg3tI3c2B1_39028156.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3a4732a610ca6f7cd9a00880809d11472ad0e449 --- /dev/null +++ b/video/dg3tI3c2B1_39028156.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:988d9e124eb813be2088ae7436e75793669ca6496c150abd348b4efdea9dc8dc +size 3092880 diff --git a/video/dhFHO90INk_39026670.mp4 b/video/dhFHO90INk_39026670.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4286bfa7a8aac7d039ce38a5b007a966efa795b6 --- /dev/null +++ b/video/dhFHO90INk_39026670.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9090613b0383eb3db42d5fe0a9a3225bf3ce8cf87c4ceb72a2299c9da31b7b22 +size 2739441 diff --git a/video/diYnEYUbIU_39026500.mp4 b/video/diYnEYUbIU_39026500.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3f31838314ef7a7dd9d306f93ac1c1afee6ac443 --- /dev/null +++ b/video/diYnEYUbIU_39026500.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0c0fb6230172b2e4426f5543f329dafc9667698c689eaa8542d00e39745e502a +size 2115290 diff --git a/video/dlCTmEyq6y_39025528.mp4 b/video/dlCTmEyq6y_39025528.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f385111a7154ff8cb01dcefcbdf16bc7e18c96d6 --- /dev/null +++ b/video/dlCTmEyq6y_39025528.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:649b9a66cb72d8e1f87f85ece1a037af837f6ce8f2479de54463e7495507be35 +size 2850956 diff --git a/video/dmhi2ydnXZ_39024458.mp4 b/video/dmhi2ydnXZ_39024458.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..ba58dd201676e3c52c2e1e9e24fe626922831937 --- /dev/null +++ b/video/dmhi2ydnXZ_39024458.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0877d6953326ae0216b3e4b7d390f0b3c81dcee01047346e03c10eb3b59d13a7 +size 2456619 diff --git a/video/doaJTihgIZ_39027771.mp4 b/video/doaJTihgIZ_39027771.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4439f833512fc3e162b347d417a60b7b2d36ddec --- /dev/null +++ b/video/doaJTihgIZ_39027771.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4870dfc11bf1e51318590c706c528e707bbd8d9864270c8e43c8f3ab3e703b11 +size 1503864 diff --git a/video/dqT9MC5NQl_39025195.mp4 b/video/dqT9MC5NQl_39025195.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b73afb87ca382c0334d9f95d60d9d88b49e63145 --- /dev/null +++ b/video/dqT9MC5NQl_39025195.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:10bdb5acced6fb8640262bc7f5a28edd6f0056aa47d3524cc02c607d852f1cb9 +size 2926968 diff --git a/video/duyA42HlCK_39017703.mp4 b/video/duyA42HlCK_39017703.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5e9dbd95d29dad2bcb4a4722977ff17ee1f2f826 --- /dev/null +++ b/video/duyA42HlCK_39017703.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca8aa8db911061853da57f4f1d40ac6a15239c13db1d128c56430b21543f7d1b +size 3195612 diff --git a/video/dxwIaCVkWU_39028389.mp4 b/video/dxwIaCVkWU_39028389.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..49dffbc5c4cbfcf6b1bf02ebfc3f7da8b113cf3e --- /dev/null +++ b/video/dxwIaCVkWU_39028389.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1bf4d545f9a40d3e549a3dbccd17c0534c0e8675c22f4910d9833fb4cbb1cf36 +size 2051641 diff --git a/video/dxxj4S06YL_39025064.mp4 b/video/dxxj4S06YL_39025064.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..34acf81fab9d181b1a18c27d30bbfea084a0a52b --- /dev/null +++ b/video/dxxj4S06YL_39025064.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8fccc7182448ae5e1691dfe674eb23082a388e9c5014fab6e93b964d7198b5e +size 2922171 diff --git a/video/dxyNVEBQMp_39028775.mp4 b/video/dxyNVEBQMp_39028775.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b42e4449f42894b91d45324b63672aa9968a1d80 --- /dev/null +++ b/video/dxyNVEBQMp_39028775.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e75071f37036c2c6767e9d38defc0fab4e3edd7390bb47eff283e037473d19e +size 2621707 diff --git a/video/dyrGMhicMw_39018617.mp4 b/video/dyrGMhicMw_39018617.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8fe4175eae3dc7ff7604d92f8d79d613322101d8 --- /dev/null +++ b/video/dyrGMhicMw_39018617.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb975c51a13ecafb3c11117d79b9df5dbca717dcdb4e87f2f588e17ff7adebb2 +size 1688134 diff --git a/video/e2INndPINB_39024587.mp4 b/video/e2INndPINB_39024587.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..07b34e55b0b23362665ab155bba0a47a15c6abff --- /dev/null +++ b/video/e2INndPINB_39024587.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f75cc790577be82a2eef8b4ee2f5dc2591d4d3fafa3aa955d47d429cd1163b51 +size 3156530 diff --git a/video/e397soEZh8_39025617.mp4 b/video/e397soEZh8_39025617.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..87029130472dfe46bf1c2549542c38d5103b30c1 --- /dev/null +++ b/video/e397soEZh8_39025617.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50ecafa8fcb5c52347890b3e44a021fa94c1638881cbe23d55db90d94688e2db +size 857274 diff --git a/video/e4xS9ZarDr_39017699.mp4 b/video/e4xS9ZarDr_39017699.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0014bd2ac4e1e22643cc89b93fed65364ee55571 --- /dev/null +++ b/video/e4xS9ZarDr_39017699.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b394933230df03aa9c749acb4a1ea4903d6fe5fa4d31c66dbc4d6ce170c98ec +size 1487622 diff --git a/video/e5icsXBD8Q_39024619.mp4 b/video/e5icsXBD8Q_39024619.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6da2a2b7979186ac4373ffe122c2de733727e104 --- /dev/null +++ b/video/e5icsXBD8Q_39024619.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3209298f5934edcdb2f7825f5c68665dc0953657cbe9815e9bb2ec2f7aeb43d6 +size 2370804 diff --git a/video/e6WrwIvgzX_39027234.mp4 b/video/e6WrwIvgzX_39027234.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3db10001c682d9babd911ce6569843ce9249a53d --- /dev/null +++ b/video/e6WrwIvgzX_39027234.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2fc4c6a425ea17112f37a2b0619ead5537df225d84e9f7618148ae161e7309da +size 3090033 diff --git a/video/eC5qdC4ZTQ_39028705.mp4 b/video/eC5qdC4ZTQ_39028705.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ef5a56c75940f6e6bbf3e2f52ec2b3b65f847e8a --- /dev/null +++ b/video/eC5qdC4ZTQ_39028705.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b456a1754ba70d60cd0620a658362b97b7eb3b5e735e86eab1999e4de92ae605 +size 1151345 diff --git a/video/eDNslSwQIj_39026046.mp4 b/video/eDNslSwQIj_39026046.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bdbfe115e8c5da9237f90607ec070a90ef744519 --- /dev/null +++ b/video/eDNslSwQIj_39026046.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3944ca7a888eb04dd93ba12f4e048146656655359573ff381d49c27ce1a907a6 +size 2218470 diff --git a/video/eFrdRuyHR9_39027962.mp4 b/video/eFrdRuyHR9_39027962.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c2c5c1ac3ef4bdf0db52082619fb1792568b3b43 --- /dev/null +++ b/video/eFrdRuyHR9_39027962.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74dcbe8f9cdce780e15da02925538eed992782cb882a2891099448baabec1c9a +size 1962308 diff --git a/video/eHzIwAhj06_39027063.mp4 b/video/eHzIwAhj06_39027063.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1bd0daae20be84bd503afdb511f7194ee233d575 --- /dev/null +++ b/video/eHzIwAhj06_39027063.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e4b9360f36ba874066052a1f743521c239b79866b38a2bdf61dd3a8bab138c71 +size 2491732 diff --git a/video/eKSRTlzRWG_39028822.mp4 b/video/eKSRTlzRWG_39028822.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a34d3784cd90028916b98e456a2efb56c5d4d48f --- /dev/null +++ b/video/eKSRTlzRWG_39028822.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:170176a7d085f629068d1c7200a903c4ce51fbdbfde6f523497c3d6be30a432d +size 2269912 diff --git a/video/eKVugi5zr0_39027526.mp4 b/video/eKVugi5zr0_39027526.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..26febc464f1bb949be719969346dec0999610a25 --- /dev/null +++ b/video/eKVugi5zr0_39027526.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:511cc2b4254b85dd4ef5fdf9d9c6c72c91bd0a3a2b56e3bdc9167411f1f4e4a9 +size 2645368 diff --git a/video/eMHn77ZKOp_39017695.mp4 b/video/eMHn77ZKOp_39017695.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..caa4e7120f17a05c51c945e7e91a2e3e350cff96 --- /dev/null +++ b/video/eMHn77ZKOp_39017695.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d1bc0e0462fac047429e28a41d5831285848130efabc1bfa8b563214ae2963f +size 848888 diff --git a/video/eNM94i7R3A_39027308.mp4 b/video/eNM94i7R3A_39027308.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..eee4397856e226553aa90579d4274e1b303899d9 --- /dev/null +++ b/video/eNM94i7R3A_39027308.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eff6087f1f787eb09ec1f9374efc1824cad4cf422f5c9e70edb24651d988864a +size 2494015 diff --git a/video/eNeqGc9AgR_39025527.mp4 b/video/eNeqGc9AgR_39025527.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7f33e9fc6af177e291bc476f53d4cfd9d164e6c1 --- /dev/null +++ b/video/eNeqGc9AgR_39025527.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b21dbf8859cdd6e32175feaffabe96ef5e016c32219041553e72ae741d206dd +size 1924940 diff --git a/video/eNvVjpx97O_39028543.mp4 b/video/eNvVjpx97O_39028543.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3194a4e2b1aa78d4cf92fb062bf3d6ffd42815fe --- /dev/null +++ b/video/eNvVjpx97O_39028543.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:648a85619d2946cf6d3b5d6de7da1740499231ed545a1c0db37a2cb8b2391101 +size 2476092 diff --git a/video/eOAPWWOGs9_39025083.mp4 b/video/eOAPWWOGs9_39025083.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b661a093034de1daed86388d2ea37cf27f476910 --- /dev/null +++ b/video/eOAPWWOGs9_39025083.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ff10df1f16dd2cf4a6f6864267429e6763dabd56678634d497dac74815afb96 +size 2562426 diff --git a/video/eOx0SMRUv7_39027716.mp4 b/video/eOx0SMRUv7_39027716.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..66f682a39bafa792dfa34fea52f0f17f666adb28 --- /dev/null +++ b/video/eOx0SMRUv7_39027716.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0d3c3abac405af6fd0cb3f72a1299ab8a21b4e97a2d74b3f06108cbae7863989 +size 1373215 diff --git a/video/eP9auEJqFg_39027327.mp4 b/video/eP9auEJqFg_39027327.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ae32fe1750a12bfe91bbde6a59695723ddc9b16e --- /dev/null +++ b/video/eP9auEJqFg_39027327.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47c80241fcc4b6e89b68b27907336ddc31918addb5a0356fc44961b9c7650330 +size 2408505 diff --git a/video/eSes1Mic9d_39026987.mp4 b/video/eSes1Mic9d_39026987.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..25e0c0af0a022c25d2dd181a8d33d634997e506b --- /dev/null +++ b/video/eSes1Mic9d_39026987.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4cef832967c051b91bdad4230af4e30e0954cdad9067cfc1e5da2d9b7cdcfdc9 +size 2550481 diff --git a/video/eT6oLkm1cm_39019028.mp4 b/video/eT6oLkm1cm_39019028.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..9fb4320acc64a51e3a075c310dc8de1b7d8a8930 --- /dev/null +++ b/video/eT6oLkm1cm_39019028.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa96b76ce2307d58ea2e6a430679fbdc63b41a8494369b203c948d876361ea94 +size 2732103 diff --git a/video/eTu6kvrkSq_39027410.mp4 b/video/eTu6kvrkSq_39027410.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5e46913f00e01dd879051b988236066424ac66c1 --- /dev/null +++ b/video/eTu6kvrkSq_39027410.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ddbd90ee84e44987af4401e960b396d30f411c1da8c12953d7a3025d97b5c5b7 +size 2626911 diff --git a/video/eUg64OsGDE_39026655.mp4 b/video/eUg64OsGDE_39026655.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dcf18932af8793f506638c8fb747a1764aa8c7c9 --- /dev/null +++ b/video/eUg64OsGDE_39026655.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1dc21f49bcf1be9e2216cdbd9a2056bb86bf81ce3b0cbba85802585ee0eeee51 +size 2741200 diff --git a/video/eV5YIrJPdy_39027121.mp4 b/video/eV5YIrJPdy_39027121.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c511897e4dff646133927786f4b4357dedf6d467 --- /dev/null +++ b/video/eV5YIrJPdy_39027121.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:650b6b51823292249327ba594216b50025a123f057a2693fd8d935d0bbb58770 +size 2808216 diff --git a/video/eWiGn0Fcdx_39027504.mp4 b/video/eWiGn0Fcdx_39027504.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..30e6b044cb4e019da635e00aaee041df13d4e9eb --- /dev/null +++ b/video/eWiGn0Fcdx_39027504.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24f64018c3520b805968e5ad824db17bd5d88cb93b63dd65ba48e36f3098331e +size 2865600 diff --git a/video/eY7sLb0dVF_39018908.mp4 b/video/eY7sLb0dVF_39018908.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..93a15c69397ba2dc335fb12aa2d2a9611efeee4d --- /dev/null +++ b/video/eY7sLb0dVF_39018908.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2c3a9b2dd29ff7fe38ef8d2da03da4c5510f80cb24a12a3911d1e2a3b276d28c +size 2206493 diff --git a/video/ebBnKVxMcZ_39024752.mp4 b/video/ebBnKVxMcZ_39024752.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4e9bbd5d2c6c7871344c02ff3d212ff5023eabe9 --- /dev/null +++ b/video/ebBnKVxMcZ_39024752.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ffda4f35955668ffe4a69d0f070c1175df130087e6b5ecb51d1d2f551fde036b +size 1411188 diff --git a/video/eezCLKwx6T_39028354.mp4 b/video/eezCLKwx6T_39028354.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4ff0d713a28ccf7d7f717dc002c591e8ba3fa40d --- /dev/null +++ b/video/eezCLKwx6T_39028354.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e0782664a77bcca65388b610dffd84ca2a391de4053dbc9223b930547daf98d +size 2982031 diff --git a/video/efFmBWioSc_39017424.mp4 b/video/efFmBWioSc_39017424.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2b355330940d1f2b37a238c1617a4622adecda65 --- /dev/null +++ b/video/efFmBWioSc_39017424.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b2869041b2aeb4f7a95e79eb9dac58c8b787f2c724bdd119014053dcd04c1ff +size 2960151 diff --git a/video/ejIzdt50ek_39027191.mp4 b/video/ejIzdt50ek_39027191.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..8f13119369a4da92341d1813ca03df56269607b7 --- /dev/null +++ b/video/ejIzdt50ek_39027191.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:48b2891ae66a1cde5d595ecbab0e37514e12714df790ddba096df4a9c3a5e0c1 +size 2033196 diff --git a/video/ektPEcqGLb_39026798.mp4 b/video/ektPEcqGLb_39026798.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..42d36b0a7b47c40db6536d80f7f5fdd0c10eaeb1 --- /dev/null +++ b/video/ektPEcqGLb_39026798.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e0589a05e90d4b88d7a3fd57ef91d8a3478cb756505c9955995c00ee05200324 +size 2993379 diff --git a/video/enlxHLwwFf_39026225.mp4 b/video/enlxHLwwFf_39026225.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..977864ef3835c444b89ad3dc42ec751f795051cd --- /dev/null +++ b/video/enlxHLwwFf_39026225.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:addeb8ce4e7c487000bb10783bda06d7c4ba9981807295fcd86ccc2a6c1edbe2 +size 2485623 diff --git a/video/eo9dHwtTFt_39017687.mp4 b/video/eo9dHwtTFt_39017687.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..195872f0e12c33ac6874764c05bb056f03bf14ad --- /dev/null +++ b/video/eo9dHwtTFt_39017687.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f5fd8fdb73baa6cb7249a31274c9b5245e69ac647f102ad6685d69119c9320d +size 2518749 diff --git a/video/eowkjKVPoH_39026646.mp4 b/video/eowkjKVPoH_39026646.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..347b1b3ab70d2f853ba3848f3a70ac95e1a16eab --- /dev/null +++ b/video/eowkjKVPoH_39026646.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da9589d6f81b2e395e4adb0bded353b72c0c015123651a5a2d56f3fbbf28c168 +size 2663872 diff --git a/video/eqMNwXvOqn_39027756.mp4 b/video/eqMNwXvOqn_39027756.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6f725834c709b0787961df849d1869c1b6898b16 --- /dev/null +++ b/video/eqMNwXvOqn_39027756.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6c3d07fc06529e2019d5d8e420bca2ae12eb268a6d4941bd6676ecbfa1ec40c +size 2544389 diff --git a/video/erjQDJ0z9L_39025976.mp4 b/video/erjQDJ0z9L_39025976.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b1f9c65ad5977001a0e900715c0afcf282177e21 --- /dev/null +++ b/video/erjQDJ0z9L_39025976.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d74e844670a6975fa8896914a0111f34d4f5bdc5c79de973793a078c362440c +size 2408484 diff --git a/video/esVleaqkRc_39027056.mp4 b/video/esVleaqkRc_39027056.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2bcd837cbfaccb297d62bc4d5731312f61a9ea13 --- /dev/null +++ b/video/esVleaqkRc_39027056.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1dd825c1ab53d973586bc5c4221df7216559db7d031ebb0d9633a2de5728c60 +size 2775501 diff --git a/video/etPAH4xSUn_39025870.mp4 b/video/etPAH4xSUn_39025870.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f19502219d463183ee81a3dce46d3aab5706a35f --- /dev/null +++ b/video/etPAH4xSUn_39025870.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cbcceb7fcc25bf5ffc0cca2522f5738e2316c4dbbe49aa71147a0b422a314ef7 +size 2050660 diff --git a/video/exATQD4HSv_39026872.mp4 b/video/exATQD4HSv_39026872.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..edcdc5b0c4616c28ba41cdedc4718b0b1add52e4 --- /dev/null +++ b/video/exATQD4HSv_39026872.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d840bb718deebf5c880c45e60ca13d4b96bb2defa22d27d4c18576253183ede +size 2646087 diff --git a/video/ey3GhWXQ97_39019235.mp4 b/video/ey3GhWXQ97_39019235.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..55dbda6e174ef2ff3134e54969ab9ec3b4ad7eca --- /dev/null +++ b/video/ey3GhWXQ97_39019235.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30f4c6f15ab9bd5055da6b3bbd156f46364fc823a6d255923e66abf64b7d1a05 +size 1876244 diff --git a/video/eyfYC19gOd_39027702.mp4 b/video/eyfYC19gOd_39027702.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..225cd953e01799971ae63865bb13029173dea067 --- /dev/null +++ b/video/eyfYC19gOd_39027702.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba304c91a18c4a9661bb102e83e016590028a3b85e906592960babbf9a8fd332 +size 1722448 diff --git a/video/f4v7cmm5sC_39027199.mp4 b/video/f4v7cmm5sC_39027199.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..53b0d9db31d61465ebeb27e68672f19d669f04ea --- /dev/null +++ b/video/f4v7cmm5sC_39027199.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:85606a33c0539c01cc79440f1de589eefad91a4e2cd8892e9865de187d32f8d8 +size 2711095 diff --git a/video/f63DKIpx0I_39026735.mp4 b/video/f63DKIpx0I_39026735.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..573d8dbbfaf989b2ecffaae01ee2c55d91606168 --- /dev/null +++ b/video/f63DKIpx0I_39026735.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4be636547acc0356554f18970fb034fe6ea3a119808bba114256f1697388311a +size 1143765 diff --git a/video/f70e6YYFHF_39026435.mp4 b/video/f70e6YYFHF_39026435.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..035b1dc5ee05bd5f4a3837796a4b713499cd1e7d --- /dev/null +++ b/video/f70e6YYFHF_39026435.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a55cf574d94af36b4ac6473614a0e3029c55e8a58856668adb1eeb707648d5b8 +size 39665 diff --git a/video/f8MrWxlnRz_39028570.mp4 b/video/f8MrWxlnRz_39028570.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..70264863a2e3236a2087925b6c63416105053dab --- /dev/null +++ b/video/f8MrWxlnRz_39028570.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c59941fb9bb0580ac6937e4828ace3642a436688dbfe985142996963496b86d +size 1554439 diff --git a/video/fA3RMMl8ii_39024405.mp4 b/video/fA3RMMl8ii_39024405.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1d2eeb0d7fa46bbed1f2066f702f480fdd091bb6 --- /dev/null +++ b/video/fA3RMMl8ii_39024405.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b08c9d73e2f973cc2395227c56413870fa3f33e59dc3f210e74642ce5f979e5 +size 2477173 diff --git a/video/fAlcxvrOEX_39026526.mp4 b/video/fAlcxvrOEX_39026526.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..68c230b464922123c260109bd2f40c98ad495cf4 --- /dev/null +++ b/video/fAlcxvrOEX_39026526.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a56c9285b8cb6b02ee5c78d4385f450d23d0db4f0b43b4d537e8cd9284d5c612 +size 3218315 diff --git a/video/fDiZJ7mmOV_39024777.mp4 b/video/fDiZJ7mmOV_39024777.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..76b1e56d0979976b80e3db15e84df87089e19c84 
--- /dev/null +++ b/video/fDiZJ7mmOV_39024777.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:65c3d26ccd75ce0e1bfdbd92b97fa4b37da1f69f532bd101a071d55c1ff7b987 +size 3136449 diff --git a/video/fE3RqiF4Nx_39026268.mp4 b/video/fE3RqiF4Nx_39026268.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..45688ce270c6c20b764983f82f9363ad39a56273 --- /dev/null +++ b/video/fE3RqiF4Nx_39026268.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8524b7413224c8ca8136968b9313836c3a24b8a8ba58aa00443831ea783373df +size 2248876 diff --git a/video/fHq4x2YXVv_39027505.mp4 b/video/fHq4x2YXVv_39027505.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1c8cc35c4ea3e49b9ccd5ea037fcbaa3213e7db3 --- /dev/null +++ b/video/fHq4x2YXVv_39027505.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad4086d701f9937c9a765a89640b4c9ce2936abf2aaa91a21770232e1c70737c +size 2838420 diff --git a/video/fIz8K4DJ7w_39027079.mp4 b/video/fIz8K4DJ7w_39027079.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3b1f6f057f69ba89f34100e6ec674cdd232df160 --- /dev/null +++ b/video/fIz8K4DJ7w_39027079.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6d2ccef0f72d36123fee44cd99e73c0c5297f8974ed7792a8530e0bd3f5607c +size 3440481 diff --git a/video/fMWrTAe5Iy_39025959.mp4 b/video/fMWrTAe5Iy_39025959.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f06c3b4325171f925991a7de4e4d98a4cf19e393 --- /dev/null +++ b/video/fMWrTAe5Iy_39025959.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05bc45bb4c52a8a045802de71fa6b8e0429c1a3a18bec597d5ddb6fc73096970 +size 2802069 diff --git a/video/fMdrBucZnj_39028153.mp4 b/video/fMdrBucZnj_39028153.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1b94f7c0d4cfd815dd9e29fd6d9cd7c06e9c9c31 --- /dev/null +++ b/video/fMdrBucZnj_39028153.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e4a36b0c23809e15f9c17223a35f899cd4a8c92693fd15ceddd3ae3aef7e3d6 +size 2218523 diff --git a/video/fNakQltI1N_39028069.mp4 b/video/fNakQltI1N_39028069.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e43074d13b24baaeeb7b363c378258ee25e9b77d --- /dev/null +++ b/video/fNakQltI1N_39028069.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3555401fb5f6f58dc18db7f6802b065f9d5db9e8c05bcf4e667598cbc12207eb +size 2361689 diff --git a/video/fNoleQa9RX_39026960.mp4 b/video/fNoleQa9RX_39026960.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..42f51b54a14e5c1d15058d32e86d6799bf437719 --- /dev/null +++ b/video/fNoleQa9RX_39026960.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c87baefda609699ca18aef46aaf1e7fc0753b7a3ee0f33cdaa08d566db368535 +size 2659381 diff --git a/video/fVRCsK4EoM_39024521.mp4 b/video/fVRCsK4EoM_39024521.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fb454bb45fd9ddd80fc357fc3861715bfb93aa3f --- /dev/null +++ b/video/fVRCsK4EoM_39024521.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d5d78d1f9da6f552e4a59a62998e9637662e0fe74c87f2fd6fc0584bedf88c3 +size 2376209 diff --git a/video/faBXeVBNqz_39024737.mp4 b/video/faBXeVBNqz_39024737.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a0addb3283775c6afef7922acd02c731ba3dff15 --- /dev/null +++ b/video/faBXeVBNqz_39024737.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:cd2b29e7d31079c54d9184802e747daac30f70a30e0853f61ce282e51625750d +size 1845853 diff --git a/video/faj2EBhdHC_39027565.mp4 b/video/faj2EBhdHC_39027565.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..908b0926227e3f638eb1b9d06aec53e32b462436 --- /dev/null +++ b/video/faj2EBhdHC_39027565.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e3afadd4ed2092f2a0d148b5389a4947916f0701167ad5179213e86e8e1ce84b +size 1784485 diff --git a/video/farT6XXntP_39017668.mp4 b/video/farT6XXntP_39017668.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..917710d0a4f49d3c374cc4debf9d1fd92197a225 --- /dev/null +++ b/video/farT6XXntP_39017668.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f759af2675bf7336b52d9fc7544f8be72fd3eeff11ba38b36888ee5c395e7524 +size 2780110 diff --git a/video/fe6ANBxcKM_39017667.mp4 b/video/fe6ANBxcKM_39017667.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..01b4aa0b9604610499059ede0234c5a060c93f9c --- /dev/null +++ b/video/fe6ANBxcKM_39017667.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:732ec017537baae2aaa8973a4182f02314c960f3b34d325063106ccfd228fe4e +size 2455946 diff --git a/video/ffeUBoTcdS_39026993.mp4 b/video/ffeUBoTcdS_39026993.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4d34b0e1e42df15eeb4314114b7c343cf660eedd --- /dev/null +++ b/video/ffeUBoTcdS_39026993.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:57bfb8b723d2d0407b31c4431b261a255c495ea1e4741ad6f090f7cc5d086210 +size 3165494 diff --git a/video/fgKjiVrm6u_39017665.mp4 b/video/fgKjiVrm6u_39017665.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8596dc3eb6259d1f1ab3769b7d234bcf2b500aaa --- /dev/null +++ b/video/fgKjiVrm6u_39017665.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f5e6f4ae05bf1f946220e66c0e17413dbe8ea7fb274c0c3622640f75a9e3397 +size 2485792 diff --git a/video/fi3aKVnBQo_39025881.mp4 b/video/fi3aKVnBQo_39025881.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2e9f18fcfa0e2841d383eb401970163dd9bbd8e6 --- /dev/null +++ b/video/fi3aKVnBQo_39025881.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1cbd69284c578f4fb9946c82567b2e93f62b4b4bedecf87240d00b38a4c98af0 +size 2337767 diff --git a/video/fjpfCOV4ru_39017660.mp4 b/video/fjpfCOV4ru_39017660.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ae2403536579943b444e5c41bc26876a91982f56 --- /dev/null +++ b/video/fjpfCOV4ru_39017660.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5314457acaed200a1d382b72db9becfd78d0657341e089f1ab30d1891a7ffc1e +size 1932529 diff --git a/video/fogJgrozu1_39025312.mp4 b/video/fogJgrozu1_39025312.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..330ce6db0b6c13e60cc01d21ad7c938e90e561c9 --- /dev/null +++ b/video/fogJgrozu1_39025312.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ba38cbda27050383e1bf99d7125cfa2c9ac748e58fc9e9ef8a90181d47a3fc3 +size 2656539 diff --git a/video/fpOnUMjLiO_39028713.mp4 b/video/fpOnUMjLiO_39028713.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7113fa3a600374c4e7c0eb4d1d27f9557bf5060c --- /dev/null +++ b/video/fpOnUMjLiO_39028713.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:7fc1c5ae8bb8ed22f2a6a79845c102c6d85cd59b4e9cc971288dc8db70b15abf +size 1900306 diff --git a/video/fpxRpPbF1t_39028734.mp4 b/video/fpxRpPbF1t_39028734.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..15167d3e2a5a2e118dd569620fdddaecb0bf0c81 --- /dev/null +++ b/video/fpxRpPbF1t_39028734.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05d5402b73807fbf464491e247cfdf0b13a3d6d2d50c5f49df224af9afa464a7 +size 2536290 diff --git a/video/fs28jccJj5_39027745.mp4 b/video/fs28jccJj5_39027745.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2b47bbd4c21672d7482b76e43c78e025fad7051e --- /dev/null +++ b/video/fs28jccJj5_39027745.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ecee3a932cb6d41968061f27f61d70020c9d7b26ae740097b393e7cafaee807 +size 2723054 diff --git a/video/ftqjwZQz10_39025724.mp4 b/video/ftqjwZQz10_39025724.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cf53c5b92a8b5f7aec2e1ef24e5a3050550e90ba --- /dev/null +++ b/video/ftqjwZQz10_39025724.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06d3a5eedee3c775a1329fa0dfcfaaf3662dbaae2343cc83aef821060f3837ed +size 2493977 diff --git a/video/fu0xdh4aEJ_39027941.mp4 b/video/fu0xdh4aEJ_39027941.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5ecf465c1b08208b0a2620df4227bebfead563f4 --- /dev/null +++ b/video/fu0xdh4aEJ_39027941.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13be325bbd5a7fc12ed7fbed0561fb9011481451f6cfc34ea1ceb82835054581 +size 1600605 diff --git a/video/fvOCJAAYLx_39027988.mp4 b/video/fvOCJAAYLx_39027988.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c056a33ae628db32d5d4f448ccc05b8064203b54 --- /dev/null +++ b/video/fvOCJAAYLx_39027988.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14819f5c3611638048f5c9d4a2cefa94b3cf5e9cf6cee9e2c24a96ce0da5a433 +size 2213341 diff --git a/video/fwCoLe3TAX_39019148.mp4 b/video/fwCoLe3TAX_39019148.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..af0d24c53d98f0e208410858af53fe3cdf3612d3 --- /dev/null +++ b/video/fwCoLe3TAX_39019148.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1613edac60611c0061b87ca3f6ef8a24ab2c698614f3bef40b52a72db171dce3 +size 1782530 diff --git a/video/fyYrZbWtNz_39025604.mp4 b/video/fyYrZbWtNz_39025604.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..409041ecdf800b062aefe649cf64e75fce02b347 --- /dev/null +++ b/video/fyYrZbWtNz_39025604.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:229bcde788382b43b0ccf8c2471da20403faa76f000c43dd922ad7f10c40ab5d +size 2044471 diff --git a/video/fykjplMc0V_39027753.mp4 b/video/fykjplMc0V_39027753.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2131c99ddfd346d33a69cd208ee42b647b233e87 --- /dev/null +++ b/video/fykjplMc0V_39027753.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec7b50d71ea8616f42ef17ffcf4a81d6a85dd4e2c1cfb6c7488eabbdc1fc8459 +size 2472029 diff --git a/video/fzlMza6dRZ_39027483.mp4 b/video/fzlMza6dRZ_39027483.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8c01a949ec6f0ab4c2ad236f2297ebafa02c18ba --- /dev/null +++ b/video/fzlMza6dRZ_39027483.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:73890acaeadf5a2e968692bd892d0ea5be58a1d453df009c2889d55556da8d5c +size 
3020830 diff --git a/video/g52tgL8jy6_39017652.mp4 b/video/g52tgL8jy6_39017652.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5bdc892d89b552bde76015e71be289a992df42c2 --- /dev/null +++ b/video/g52tgL8jy6_39017652.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fddd66769540022492d02b1caaeb6f9d9c1610e0745c8c3bb372e177903d3e67 +size 1262489 diff --git a/video/g5DyqerUpX_39028107.mp4 b/video/g5DyqerUpX_39028107.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d2d621f4ad001aceb5b391a80ab14b20382a5c7c --- /dev/null +++ b/video/g5DyqerUpX_39028107.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f74765a029291ea95c49afa940ac9030e1c5a25d0c876ac15fcf365501d222c +size 4147697 diff --git a/video/g6rZtxaXRm_39017650.mp4 b/video/g6rZtxaXRm_39017650.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8d796f539e6143f7fb4a246c636dfcc47f918fa6 --- /dev/null +++ b/video/g6rZtxaXRm_39017650.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b68b43a899f0ec812c223ab58c22bb3c36810a50bf32537e61ecfa15d407d0d +size 2449417 diff --git a/video/g8kFlZDcaX_39025464.mp4 b/video/g8kFlZDcaX_39025464.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1201cf3513d964a4bf62ca5b4470a14d428db51d --- /dev/null +++ b/video/g8kFlZDcaX_39025464.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6480c76e926dd5fbdbb892784671c9cb5de85d74a1abb0d977d9f441a57ce860 +size 3114795 diff --git a/video/g8sGBSQjYk_39017648.mp4 b/video/g8sGBSQjYk_39017648.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ca4550922f588444c128cf29728911f1b8046e39 --- /dev/null +++ b/video/g8sGBSQjYk_39017648.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67c3e5a7e92426f9ba54276f10355db14b0252692e9e3934dc40e546c53f6489 +size 2250424 diff --git a/video/g90ysX1sVs_39017647.mp4 b/video/g90ysX1sVs_39017647.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bb2dcd75c6130190b22d97950db14d7be113857d --- /dev/null +++ b/video/g90ysX1sVs_39017647.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c98b901d2fa558873ccf373d2c1afca002af788840573396c31eec90d5400918 +size 2572684 diff --git a/video/g9diuvxN6D_39019280.mp4 b/video/g9diuvxN6D_39019280.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9052e84f9b49414a8b0dad7e7f29dd98f84dcd35 --- /dev/null +++ b/video/g9diuvxN6D_39019280.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e70d9f6298a08f4e2681434baea75f2ced660ce5cf3d9207fb143e2042d0529 +size 2567313 diff --git a/video/gAgwqHOBIg_39028460.mp4 b/video/gAgwqHOBIg_39028460.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..53a6cabae340e3cfd04c146149e80c08c0ddc893 --- /dev/null +++ b/video/gAgwqHOBIg_39028460.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9970104548176f9b79cf6479e40d67c87503924bfd1cfe06215ec6e72393f6f8 +size 2479897 diff --git a/video/gCCMzedgbo_39026422.mp4 b/video/gCCMzedgbo_39026422.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..63413d7f316e56bb955d53a1e3f00111cbb52f96 --- /dev/null +++ b/video/gCCMzedgbo_39026422.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93405cd353ba723e9b71e6cad6ef389f201a60a8a6034ed84efa0eb2eea0e055 +size 7756 diff --git a/video/gGR9dJbe3r_39026343.mp4 b/video/gGR9dJbe3r_39026343.mp4 
new file mode 100644 index 0000000000000000000000000000000000000000..c6ed26d05829cc0f2b3f7d365cac8ca14f043c33 --- /dev/null +++ b/video/gGR9dJbe3r_39026343.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a907ab101fb9cb1319699c215fd3fc2dcf0d1b3920f924a10e56c1c4391210e +size 2849309 diff --git a/video/gHCFduRo7o_39024778.mp4 b/video/gHCFduRo7o_39024778.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c77a1b84b40b5b79a9eb779e71194167c78b99d6 --- /dev/null +++ b/video/gHCFduRo7o_39024778.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a9268dc1cbf967811b5134897b3dfa99528b33c9c118ccc4b3075263eb2596b5 +size 2833283 diff --git a/video/gJxEiRcnao_39025530.mp4 b/video/gJxEiRcnao_39025530.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b06be062e1b14a0b49f694d4d84226695efde281 --- /dev/null +++ b/video/gJxEiRcnao_39025530.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b0bc63933ac3110e7a0fd54fabfa9e252b4a4d22a0c6877f0afb09b20f2b8be +size 2579323 diff --git a/video/gKLgY3m9zj_39028612.mp4 b/video/gKLgY3m9zj_39028612.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ea806114e266e7eb07c7e5faebab29316ce5ddac --- /dev/null +++ b/video/gKLgY3m9zj_39028612.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bcb4e910c83ff9516bc2e9eb001421596b8f294013f624b6200fa9594e4d9224 +size 2292474 diff --git a/video/gL5nT4y8fn_39026439.mp4 b/video/gL5nT4y8fn_39026439.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..348ca10e19b93caaec65b8a28c8d1ca3deebdd8e --- /dev/null +++ b/video/gL5nT4y8fn_39026439.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6ee68ebe6c1cefeeda1e91df72afa6c446c77e1c97ec15f6bed8de47c0f22a20 +size 2547179 diff --git a/video/gMqaKJCOCB_39027762.mp4 b/video/gMqaKJCOCB_39027762.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..87d487a16bbf043508c5795274fe004bf072f4a4 --- /dev/null +++ b/video/gMqaKJCOCB_39027762.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c4aa13bbb2eb7115d7101ceb04befa2d4641e6b15eb63b8a0f5c98f6a20f9e1 +size 1754288 diff --git a/video/gN1iKwxlL5_39026205.mp4 b/video/gN1iKwxlL5_39026205.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ead73886511b9258d614b656ddf6c1567412b316 --- /dev/null +++ b/video/gN1iKwxlL5_39026205.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e9e14fb13862cdf013526d004306a30b22fd6a7440fa28a1d0c264f004c72b73 +size 2154173 diff --git a/video/gSGLkCX9sc_39024685.mp4 b/video/gSGLkCX9sc_39024685.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f896263a557c810e41830480f701c8f1383d42b3 --- /dev/null +++ b/video/gSGLkCX9sc_39024685.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:080ef98091a1b8fe6f79932a091518f6a9af8a423dd989991c95e9dbf3af54ab +size 2063808 diff --git a/video/gVTkMsaaGI_39026978.mp4 b/video/gVTkMsaaGI_39026978.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..aa93978525552bcaa15f6e3d358460b7e772e099 --- /dev/null +++ b/video/gVTkMsaaGI_39026978.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:284c9e449812e58972acfc48abbae741bf8202438b453d9222ad16286acf5a4c +size 3364786 diff --git a/video/gW0znG5JCG_39028134.mp4 b/video/gW0znG5JCG_39028134.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..118a4b6e865f68c325c6cdf6fb8b54045c1465aa --- /dev/null +++ b/video/gW0znG5JCG_39028134.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2e950fa0a939d69812a14e70068750d2c4723f1a335362a15003b35304815bdd +size 1944848 diff --git a/video/gYjM1BZzdX_39024958.mp4 b/video/gYjM1BZzdX_39024958.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e253cb77d611785a63d577a8351d1881217757ae --- /dev/null +++ b/video/gYjM1BZzdX_39024958.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:db97227b7164c0a0d4c0b17e38ffde61ca2afa7dd5921665b78ef3cd3bbca434 +size 1792464 diff --git a/video/gZWYdJ3c26_39028248.mp4 b/video/gZWYdJ3c26_39028248.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bbd1d822832e974c5288f079d716d85395ad093c --- /dev/null +++ b/video/gZWYdJ3c26_39028248.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d5ebceebdd8a3b821fdc62bf107150056e4adccda81f4be55b5734b0d4abac48 +size 2307413 diff --git a/video/gjeQKFxFpZ_39017634.mp4 b/video/gjeQKFxFpZ_39017634.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..185f1f1a087a735ab1bb7698994f0a433ffdeef3 --- /dev/null +++ b/video/gjeQKFxFpZ_39017634.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b241831fff1d1894f8b7751ea94f683218282dac3e33d95b0b600ee20c12c84 +size 2993818 diff --git a/video/gkOzoHBXUw_39027380.mp4 b/video/gkOzoHBXUw_39027380.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ddc8e73719b60746af08cd136fa4741cccc97750 --- /dev/null +++ b/video/gkOzoHBXUw_39027380.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:430a2440185b1df3cbf8858dbb2ad04976713b5a878e30c865a83ec346ae7bcb +size 1928763 diff --git a/video/gktA1Qycj9_39025733.mp4 b/video/gktA1Qycj9_39025733.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0039604ac9c27494f93e941bbc21c5010abfeee4 --- /dev/null +++ b/video/gktA1Qycj9_39025733.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93290733afd23594b02ba8501707d66c672beb91a9521d41e631a1e52922515f +size 2108465 diff --git a/video/glGeXu1zG4_39028144.mp4 b/video/glGeXu1zG4_39028144.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..688337af6529905c536c0eace613a06d0b74adaa --- /dev/null +++ b/video/glGeXu1zG4_39028144.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd6a0bb64d1dde59329782ccbbee39220438e15637ec40940438eb536ea98cd7 +size 2905280 diff --git a/video/glgZZAfssH_39028258.mp4 b/video/glgZZAfssH_39028258.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..af138f52363ddb223b8257a691d59b1870e2429c --- /dev/null +++ b/video/glgZZAfssH_39028258.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0693b3e29d49f38e0bd32facd0ae579b52831168ce378498ce27e7b8640b024 +size 2151724 diff --git a/video/gmf5Aj01Hz_39025800.mp4 b/video/gmf5Aj01Hz_39025800.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6c867a726188fd86cd8559fa0cf15c0aa0f614fd --- /dev/null +++ b/video/gmf5Aj01Hz_39025800.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:318e3e6470f9ae4dde72dce17afd62c4b9885563c81e3322f798e45f56c26760 +size 1690685 diff --git a/video/gppLqZLQeY_39017631.mp4 b/video/gppLqZLQeY_39017631.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..3600820bd6fedff80e0310be2cf82c5133e7ef69 --- /dev/null +++ b/video/gppLqZLQeY_39017631.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:037061d31934c2a27def9357aaaf6372c542a788ba71040b5d3e3ba677d6a684 +size 2551177 diff --git a/video/gtU2eLSAmO_39024403.mp4 b/video/gtU2eLSAmO_39024403.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4298e8a276923608e5700d05ebeac61c1a89b6c4 --- /dev/null +++ b/video/gtU2eLSAmO_39024403.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a3dd55337579fde33cd1cae489faa55861b2eeefd61f6f42a02977c89701e93 +size 2729150 diff --git a/video/gvg8pExqdd_39027540.mp4 b/video/gvg8pExqdd_39027540.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..89e52b9191186d016c23c1bf4b7589e9a0d7e56b --- /dev/null +++ b/video/gvg8pExqdd_39027540.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b214ebe7ae8b567cfc3135350b7171aea375a938185c8130ffe62ab27a9dc7f7 +size 2312033 diff --git a/video/gvtCR7dHJ3_39026650.mp4 b/video/gvtCR7dHJ3_39026650.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..220bd2c5ee7f79fe8298e8ad67c961f89003a0da --- /dev/null +++ b/video/gvtCR7dHJ3_39026650.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:310ca3fdca402fec488d9be45f5a9128ef0cfe367c350e8b00914d4550cd6d25 +size 1727677 diff --git a/video/gx2BT0a9MQ_39017628.mp4 b/video/gx2BT0a9MQ_39017628.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..97e58968d5579ca603550f3144747b5a3a9815dd --- /dev/null +++ b/video/gx2BT0a9MQ_39017628.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f36ae3bf5915b760ce9b9b2a05404942e5f81ff014209f41137a78686c84c73 +size 1999484 diff --git a/video/gzh9nTUtsY_39027377.mp4 b/video/gzh9nTUtsY_39027377.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..106515d70a7c8aefb5f951d05321ae42ef580121 --- /dev/null +++ b/video/gzh9nTUtsY_39027377.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:820cde502df4e6c3b70256937055bcc6b09488b6e65b4c77e380291b2f72454d +size 2043004 diff --git a/video/h05eQniJsQ_39018950.mp4 b/video/h05eQniJsQ_39018950.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..de2180538b9e933b8a6ca58e6ecee2b09a1d8440 --- /dev/null +++ b/video/h05eQniJsQ_39018950.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:89500bc2cb507b5487a0ae8de4b080893647e96a1f52d4050809ea30360f156b +size 1689254 diff --git a/video/h3BdT2UMWQ_39024462.mp4 b/video/h3BdT2UMWQ_39024462.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e2476d33003c2c9ced69a6e4ae0d24b1ec7558db --- /dev/null +++ b/video/h3BdT2UMWQ_39024462.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:315a68bf7280bfc5935816ea1a7a630269c03c80657e08c972706f0b091141a0 +size 2326996 diff --git a/video/h922Qhkmx1_39017617.mp4 b/video/h922Qhkmx1_39017617.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e03de50b5f8e599ee6f7a787be52542422762d40 --- /dev/null +++ b/video/h922Qhkmx1_39017617.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1ad02840b058fcc6caa909be7f25bb45bd2a0cc19888a4b3620bb98a47ac408 +size 2470950 diff --git a/video/hB5NkiET32_39025967.mp4 b/video/hB5NkiET32_39025967.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..a197912cad2189d5d8e61fe9b1687ff76c915a6b --- /dev/null +++ b/video/hB5NkiET32_39025967.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fca49b405e395c569793ff9355ac79408fe1258dfd5b4817c2caa96d8fa54618 +size 2913658 diff --git a/video/hB7SlfEmze_39017614.mp4 b/video/hB7SlfEmze_39017614.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d530ed41ceb6bfc27ee4d16f9f76a5b842b91797 --- /dev/null +++ b/video/hB7SlfEmze_39017614.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a542c1bbdce2dadc17c248365186e41e29b5350895be43a5f72babfe8119e19b +size 2298767 diff --git a/video/hBCxxVQDBw_39027951.mp4 b/video/hBCxxVQDBw_39027951.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..df03be9b72c49923d9edcf0867b82697e71c0dee --- /dev/null +++ b/video/hBCxxVQDBw_39027951.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7073409a30a8f17e17e0037dec5b551c82aa1336ea3ad109055f36c61e162b0a +size 2405090 diff --git a/video/hE6ZxU0N3c_39027367.mp4 b/video/hE6ZxU0N3c_39027367.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9e13bcaaaf39b0025d9a841e42fa54ec0eea28d5 --- /dev/null +++ b/video/hE6ZxU0N3c_39027367.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b068db9c4775e6d925691ce10b6a13e78f61189eee06102c9a27e74c6d55431 +size 2569409 diff --git a/video/hFTye9Ge40_39027514.mp4 b/video/hFTye9Ge40_39027514.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7984a6b41fcafa99b46b1387c18d2a0f0b07854c --- /dev/null +++ b/video/hFTye9Ge40_39027514.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:311e3d195e4f950a351a97de617a061c133517d87743a856f586c904cf3eaabd +size 2395928 diff --git a/video/hGgkdFF2hR_39028834.mp4 b/video/hGgkdFF2hR_39028834.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5819eefb33fe80e594b21f8daf64056020b104bf --- /dev/null +++ b/video/hGgkdFF2hR_39028834.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:938f30c6d8090aa301579a6566ca773e093484ca0bdd86218214b67787896c85 +size 2697806 diff --git a/video/hILVmJ4Uvu_39017109.mp4 b/video/hILVmJ4Uvu_39017109.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8c004e0cf5fa3a21c8eddd8a32ce190711a58f21 --- /dev/null +++ b/video/hILVmJ4Uvu_39017109.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a64c60bb286ce11a255c938106b9cdba0c554a27758cc69800c2e4d568cddcf +size 2203989 diff --git a/video/hOMVq57Ce0_39017611.mp4 b/video/hOMVq57Ce0_39017611.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8fabd99af548bd6a1db33e216ddb7f06888f73cc --- /dev/null +++ b/video/hOMVq57Ce0_39017611.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cea0f1d766b54ac1372e991b00d31a090c0366359eacce044fdb724afa71f47a +size 2595764 diff --git a/video/hQJksiskaa_39025288.mp4 b/video/hQJksiskaa_39025288.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bbd0b028441464cd0002820c57f723e0a01a2533 --- /dev/null +++ b/video/hQJksiskaa_39025288.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:361a42f65ff36faa4965ca242eb2bcd3256fb794786883d4cd9aaf180e2e9538 +size 1020200 diff --git a/video/hQfcrTBHeD_39028166.mp4 b/video/hQfcrTBHeD_39028166.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..73ea8b304c8f11a8a5759177aceac2d19f884748 --- /dev/null +++ b/video/hQfcrTBHeD_39028166.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:78b8804519d073779b72411393ea767d5604501f63fd9665a544cc6278cb538b +size 2524590 diff --git a/video/hRqaot0NZF_39028459.mp4 b/video/hRqaot0NZF_39028459.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..347f47b0b4760a761c798bf6150d7e2021162f78 --- /dev/null +++ b/video/hRqaot0NZF_39028459.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8160a0e9782f36e6c7df088870b10e146540f555339d45e79311df4c867a3384 +size 2506398 diff --git a/video/hW5QWiCctl_39027991.mp4 b/video/hW5QWiCctl_39027991.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..020f73f80012dbf0c630a06c641b0bc8d29484d3 --- /dev/null +++ b/video/hW5QWiCctl_39027991.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d876df4456bf6292299aa7488e189a04be2d5dce1142596700adb6839faed529 +size 2885466 diff --git a/video/haUnEiXgQ7_39027973.mp4 b/video/haUnEiXgQ7_39027973.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7c35f2c4a0f40c4be799cfe6f858e16208d3d6ec --- /dev/null +++ b/video/haUnEiXgQ7_39027973.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:02773638bdb66ceba8e2e5a6852e60c2cdc8cb91a9dc4c1643b3dc6dacafa598 +size 2757550 diff --git a/video/hgdh4foghu_39028870.mp4 b/video/hgdh4foghu_39028870.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..09d3bb7021a0bcc85583fcbc33a15c23bdb646c5 --- /dev/null +++ b/video/hgdh4foghu_39028870.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5dd00ac2ccd710e8d3773d8b08daeb26f472d9f3bf4cf076b4dcffd445ac4aa0 +size 2573611 diff --git a/video/hilGwNabqB_39027301.mp4 b/video/hilGwNabqB_39027301.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4d6e909ce900a8dd2c77ae3944828886113594c3 --- /dev/null +++ b/video/hilGwNabqB_39027301.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a92f5a91a72e2310b8fd521f1ab0c96d474ad4bd937d54eb3016dfb68d26919 +size 2820996 diff --git a/video/hj9ZuNimRl_39017603.mp4 b/video/hj9ZuNimRl_39017603.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ed40521ecb43a839ad5db0e287897a18514af5ae --- /dev/null +++ b/video/hj9ZuNimRl_39017603.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:18c12727b35b6d0cfb4af14cde6d39c282016699140ad82a70f5825446b5f348 +size 2675699 diff --git a/video/hkujvAPVsg_39026223.mp4 b/video/hkujvAPVsg_39026223.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2208447dfdf883494fea5032048301bcad2c91dd --- /dev/null +++ b/video/hkujvAPVsg_39026223.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ab22606443198892aa184228e10782ceb9a2f382e404e4853b4349ccbd5c6acc +size 2817252 diff --git a/video/hoVXLC8vQU_39027621.mp4 b/video/hoVXLC8vQU_39027621.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1af2684a1768ccde62b98cffd9a536d45a74a3a0 --- /dev/null +++ b/video/hoVXLC8vQU_39027621.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a883aba2c11b9f5990495cd775368e5ef4793acc92e67a80747f590f1c62815f +size 1481637 diff --git a/video/hpvJwmzEHX_39026028.mp4 b/video/hpvJwmzEHX_39026028.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..5a386eaa3e1c24359d712b20313c12025b819270 --- /dev/null +++ b/video/hpvJwmzEHX_39026028.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9fcea3d64dd4d261feb2f66ab37145b1889d71e5032f45771fb587fb5d4c8a57 +size 2599494 diff --git a/video/hsgNvC5YM9_39025877.mp4 b/video/hsgNvC5YM9_39025877.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..25276f0375e9c78c4abf299a5ed919a1de3aeb95 --- /dev/null +++ b/video/hsgNvC5YM9_39025877.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7bbc8bccb9bbbe9880d287e8aa4dfec4883f244c59caf8b6d2f3824c97d15761 +size 3011399 diff --git a/video/huGECz8dPp_39018664.mp4 b/video/huGECz8dPp_39018664.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..59ba3491a10e2560587230038cae6ca1d3f1b605 --- /dev/null +++ b/video/huGECz8dPp_39018664.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea5686271e9f175b8bcd3a9738f224c79bd0473810c05c6d7644ffd9c2c6581a +size 2620318 diff --git a/video/hw76X5uWrc_39025293.mp4 b/video/hw76X5uWrc_39025293.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4a2f1f7cdf3bbabfd3be4414c6de760f73c43eac --- /dev/null +++ b/video/hw76X5uWrc_39025293.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51a018fa88dc293bd7ac2ae843083a2781d9d63e85bdc128b98f16100da44c90 +size 3118143 diff --git a/video/i2oacRDF5L_39025120.mp4 b/video/i2oacRDF5L_39025120.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f82b5e2900c813fa7e9693551c75c02cc3afb83c --- /dev/null +++ b/video/i2oacRDF5L_39025120.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:94539e02bc6940ed46f4e32012e5d18ea156020cf1bcf313ec762851cb4db6d4 +size 2152516 diff --git a/video/i8LoWBJf7j_39024956.mp4 b/video/i8LoWBJf7j_39024956.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..773e0a183a2043f84f52f504321282e427c7187b --- /dev/null +++ b/video/i8LoWBJf7j_39024956.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2c9a83babbed566d8795848abbbfb85eb5e6ac6a013ec77cc72f506e53ff9e59 +size 2847442 diff --git a/video/i9wDX850jR_39017592.mp4 b/video/i9wDX850jR_39017592.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..879b678e387cf11cd0f7ba10e42a2f2819324d7a --- /dev/null +++ b/video/i9wDX850jR_39017592.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4dada574aee08decb253feec326f4272de630af6703b4e543ebbf4e62ba185e0 +size 2346953 diff --git a/video/iAW2EQXfwb_39017591.mp4 b/video/iAW2EQXfwb_39017591.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c1a9826715687db66d911228c22ff77a3c917ad8 --- /dev/null +++ b/video/iAW2EQXfwb_39017591.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61e254051c66d74c94ee253c9f58276f3203932bf01e08759f98db7d44d86ca3 +size 2383927 diff --git a/video/iD18l6prA7_39028332.mp4 b/video/iD18l6prA7_39028332.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..94f077f72fcfe9d04e2136d9b30f60a124044342 --- /dev/null +++ b/video/iD18l6prA7_39028332.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af61cc0ef8fe0617f4b6f339e5e50b5fced566bd8907e4f0efc0d4eb91f5981a +size 2549489 diff --git a/video/iEeiZlTbts_39024959.mp4 b/video/iEeiZlTbts_39024959.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..7d43944cd94cc7306e82b4e5e970d4cfb5337b57 --- /dev/null +++ b/video/iEeiZlTbts_39024959.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cda6c8b46dc9fc5add1748ef7cf205408498a03ae7448bfe0803535d4f888235 +size 1168954 diff --git a/video/iEsyRsg6t1_39027739.mp4 b/video/iEsyRsg6t1_39027739.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a8c7a38bfa924c43e369da053886761595258f25 --- /dev/null +++ b/video/iEsyRsg6t1_39027739.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:decca31530a15656976b200a35f536d61f23ff53aa5180217c156799f847d417 +size 3132418 diff --git a/video/iHcTLIor0m_39019271.mp4 b/video/iHcTLIor0m_39019271.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f58d549dfa20b22f450cd84b92dbd13a8f4913d0 --- /dev/null +++ b/video/iHcTLIor0m_39019271.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8593da5ef88c1fa5a98ec8ee74286d307e4191ed74212f6836cdfe0cee2dddf1 +size 2494546 diff --git a/video/iMEAHXDiNP_39026252.mp4 b/video/iMEAHXDiNP_39026252.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7ae1072fc6de709fef36c133ad7ae3319fca70ce --- /dev/null +++ b/video/iMEAHXDiNP_39026252.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:759117f7bedaa27c50493ca28559b24a5b53747bb82105eb92831472139f9bb0 +size 1810468 diff --git a/video/iN43sJoib7_39027182.mp4 b/video/iN43sJoib7_39027182.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5764ce11357a027287651c3c0aaff2ad3342ae12 --- /dev/null +++ b/video/iN43sJoib7_39027182.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7dd541ca07909bc2b076dc06650a46db969336d91abad7f7a93b68c4cfdf4389 +size 2233255 diff --git a/video/iNS3SC949v_39027215.mp4 b/video/iNS3SC949v_39027215.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..182d4ac52357096e74a7052cb08f0a5d37089370 --- /dev/null +++ b/video/iNS3SC949v_39027215.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:35643450e2363668d146a0d9427d70740114b4f805a8b8df0fc23e9de649e7e6 +size 2738968 diff --git a/video/iNUKoLU8xb_39025220.mp4 b/video/iNUKoLU8xb_39025220.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..92590d6dd2f1738083b4815de6773655564cf2bf --- /dev/null +++ b/video/iNUKoLU8xb_39025220.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a9420618996ede7f348136f14d8308636cc61233d9f805a9fb82244cb319b796 +size 2891717 diff --git a/video/iPWxqnt2ke_39017587.mp4 b/video/iPWxqnt2ke_39017587.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a412f8036bf357713543ca1e2d5f7a62aa685408 --- /dev/null +++ b/video/iPWxqnt2ke_39017587.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d920911c50e3b131eb3e7a90d9fdaba613a0d998dd8464b2a6d336fd1a6e7c3 +size 2017697 diff --git a/video/iSfCWhvEGA_39026264.mp4 b/video/iSfCWhvEGA_39026264.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..52ddf0e6b2af79e2e3f69fa34e8b35a1ba5539dd --- /dev/null +++ b/video/iSfCWhvEGA_39026264.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d5cb23c46bdb772281e2420c732a2dfd71ddcad260aee4125d273f9e36af3c4 +size 2231607 diff --git a/video/iSjqTQ5S1f_39028627.mp4 b/video/iSjqTQ5S1f_39028627.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..59eeb0676089f58ea7dd47b85a63fdbcf923b75a --- /dev/null +++ b/video/iSjqTQ5S1f_39028627.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ed24c0a2cdb9e71d822807f5927858f5ed81ced78fc4fc3cb4fe9362102525c5 +size 2964817 diff --git a/video/iYcY7KAkSy_39025055.mp4 b/video/iYcY7KAkSy_39025055.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3cf5d94a3783a7fc9ab6d8055bbca403f7d803e3 --- /dev/null +++ b/video/iYcY7KAkSy_39025055.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:df28f7377478ae7df6247d2f4f124fd4a29d4c699e2cd88533ece789228e95b9 +size 2158901 diff --git a/video/ibKpPabHVn_39024500.mp4 b/video/ibKpPabHVn_39024500.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fb11a86b38382c8886dd2d07b8d3433be0e7befb --- /dev/null +++ b/video/ibKpPabHVn_39024500.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:60bca9287317566ac13bf8920e08e2d18c5a0a9308369e133b0efce259f049ed +size 2333801 diff --git a/video/iiYadgKHwo_39025385.mp4 b/video/iiYadgKHwo_39025385.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fa65f49677b99781e80be9282af082bc38bf0ae7 --- /dev/null +++ b/video/iiYadgKHwo_39025385.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70cdbec737c18f01b03bc09e7d5459eb2323f769b24c2ed3b3cb7de248731766 +size 1790996 diff --git a/video/ijK5hyxs0n_39017578.mp4 b/video/ijK5hyxs0n_39017578.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c67d6f58991c04e5ae637d0178506c67eb94dc9f --- /dev/null +++ b/video/ijK5hyxs0n_39017578.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c864e29e35f0f083f19d09f1b8abd84dbb9a649ad6e7cec5767cdb6d72870501 +size 2255839 diff --git a/video/ijoqFqSC7p_39019079.mp4 b/video/ijoqFqSC7p_39019079.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bd28e288a4ea2f7d5090b279b56de8e4738dc8cd --- /dev/null +++ b/video/ijoqFqSC7p_39019079.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:197b56b21ed85b450ea41620a0499fbe85c5966c519b9da7a15598ccea5a347b +size 1478642 diff --git a/video/ioe66JeCMF_39028311.mp4 b/video/ioe66JeCMF_39028311.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..893b3f1b9dffb2fdc4d39a984d5ef0a4c426f870 --- /dev/null +++ b/video/ioe66JeCMF_39028311.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5a0e1bc00179faad74fb0c26dfe6f933224b454f17f9f517a35d709b3b71acd +size 2540216 diff --git a/video/itGkF993gz_39019199.mp4 b/video/itGkF993gz_39019199.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5bacbaa25772c18225fe6a103be81415231d88e6 --- /dev/null +++ b/video/itGkF993gz_39019199.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a16f04ce89f82ec4655a4c8c9c2c89b6326baa8656f0cedf7e3ea5e1c4b121bc +size 2242347 diff --git a/video/ix7rLVHXyY_39017573.mp4 b/video/ix7rLVHXyY_39017573.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..01863306829069afbd6db84248638cfb3a3ac1bd --- /dev/null +++ b/video/ix7rLVHXyY_39017573.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f97c6ba974ff9516c601677368c1f7e7592fd01c2b48a6837ccd2a0490b99de +size 3118345 diff --git a/video/izrOLJov5y_39017572.mp4 b/video/izrOLJov5y_39017572.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..b2c93ce70b2de28dbd436eb51ef843f12883e6ca --- /dev/null +++ b/video/izrOLJov5y_39017572.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:34823a68d6dad147d1fbef57e7f67e735309a0caa3d17c1d0abf7bb14f0f8f92 +size 2152564 diff --git a/video/j14wStqZni_39028234.mp4 b/video/j14wStqZni_39028234.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..247da50b2102eaa84c6c880816755e4badaa4abc --- /dev/null +++ b/video/j14wStqZni_39028234.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f19cd0bb4713aca5f2470dfcfc8d84237f45b3efdcbdbb990fa40918c4e35d6 +size 3205573 diff --git a/video/j2wCrWmgMX_39027611.mp4 b/video/j2wCrWmgMX_39027611.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bcb89a87182ec0dda2e6d67e5fe45a949c213758 --- /dev/null +++ b/video/j2wCrWmgMX_39027611.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:084458a127d448e7f9b559815fc2eefd4190fd6483892eee884bd5e68b94fb98 +size 1791289 diff --git a/video/j6Zsoj544N_39027289.mp4 b/video/j6Zsoj544N_39027289.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..810029ffb7093d0d1b2ef5bc37e82f8183433766 --- /dev/null +++ b/video/j6Zsoj544N_39027289.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7573a10d594dcd9204d7c26fa28d1ce309f1a7e1c421c2324626b0a61f27f39a +size 2793318 diff --git a/video/j6kJSS9O6I_39026440.mp4 b/video/j6kJSS9O6I_39026440.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..aada5d711bf52aa40c0c67933b2d67dbac90dcb1 --- /dev/null +++ b/video/j6kJSS9O6I_39026440.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24895971dddf2efed859d03856352a2a8c34298708b59fcb033b8a187efcd671 +size 1999638 diff --git a/video/j8hdRqOUhN_39019211.mp4 b/video/j8hdRqOUhN_39019211.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4d83872ea74271ad82e5b792230359228a4eb4fd --- /dev/null +++ b/video/j8hdRqOUhN_39019211.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:faf227eda5e8b8d047aabb1f45fa545c344598d9b6c59c5d7466d03b5e248788 +size 3270263 diff --git a/video/jCMYIUwprx_39025368.mp4 b/video/jCMYIUwprx_39025368.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..efcc41d0de005dc4d9ec6fbf68daef7dab356f2c --- /dev/null +++ b/video/jCMYIUwprx_39025368.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:954be1c1581d748eaee25787d0d0cc73c3f73023127f8f52cd3cb3e86822c421 +size 2868051 diff --git a/video/jFJPd9kIiF_39018824.mp4 b/video/jFJPd9kIiF_39018824.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4413b50db75ea03ec215cc0fb248b4558e446b3d --- /dev/null +++ b/video/jFJPd9kIiF_39018824.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:937032b3b7f41c99bced5f141b2aa936b864a145ae7a01d7ccba594f9496088d +size 2610134 diff --git a/video/jHh804fZ5l_39025774.mp4 b/video/jHh804fZ5l_39025774.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6aefab5fe7594b59256c915736717ae2f18f8231 --- /dev/null +++ b/video/jHh804fZ5l_39025774.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c4c7fa9025a3e9338c7ef14a04bd35b38e8aac91b852ed320626fa28fe06a5e +size 2556786 diff --git a/video/jIabKyXOTt_39025506.mp4 b/video/jIabKyXOTt_39025506.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..5c77787601438cd092768e063ab9d74007f61ccc --- /dev/null +++ b/video/jIabKyXOTt_39025506.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7263de92ec61d02fa7c158d4698ae0337dea9d0b3fdc77416aac73391266db1f +size 2190571 diff --git a/video/jId5PXbBbX_39017565.mp4 b/video/jId5PXbBbX_39017565.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..732dc2a81b900218b541b4a48f1879d258c67e6d --- /dev/null +++ b/video/jId5PXbBbX_39017565.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b082b67f2c26a0381b19ce8bc4e93406959f65cff30cc3cd6f5a70d0f1519bb8 +size 2311540 diff --git a/video/jImXgQEmX3_39026160.mp4 b/video/jImXgQEmX3_39026160.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3c61f06e0a4ce9c158b62b80fae6a811bf5dba96 --- /dev/null +++ b/video/jImXgQEmX3_39026160.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:34e9ad5f59c50c6737be1ed4452d2a37335fb4da8853df66cd4aa4ea73bfd49e +size 2315281 diff --git a/video/jKLyKeZfzv_39028195.mp4 b/video/jKLyKeZfzv_39028195.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a7e3606d2102b2ae8cdc78bc8c4ebcdbddf21b08 --- /dev/null +++ b/video/jKLyKeZfzv_39028195.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d6688936e88b5a6936d902bffa93153a8ea1d6a6526458eece7a0786109db51 +size 2553180 diff --git a/video/jL0EsbfbAV_39024772.mp4 b/video/jL0EsbfbAV_39024772.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f8f054ddf703c7a829b84e68e5cf3f3e18c04d56 --- /dev/null +++ b/video/jL0EsbfbAV_39024772.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dcd019f91542102f70f6e6c88d996ed737a6abccc6385eee6245745cb08e3b09 +size 2642572 diff --git a/video/jODehvtTDx_39017559.mp4 b/video/jODehvtTDx_39017559.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5f1cfedb0b542fef0bed8050c6a5c4aa3f0cd61e --- /dev/null +++ b/video/jODehvtTDx_39017559.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc869c3f1842a9123965e707e05594d3388b345f2f81697ee00af887f78abb26 +size 2192076 diff --git a/video/jRtxzzk0a6_39027921.mp4 b/video/jRtxzzk0a6_39027921.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6ae52df0e3d97e8f34079f59af6709a7c8164582 --- /dev/null +++ b/video/jRtxzzk0a6_39027921.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a81a3f59296b3d6bcce8973cdee16d4f80d7de97aeb27816edd2874a1c33b3f5 +size 2243572 diff --git a/video/jWGGEDYORs_39028422.mp4 b/video/jWGGEDYORs_39028422.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8d58b632dedf11fe4cc57bbade95ff5c1edfbc27 --- /dev/null +++ b/video/jWGGEDYORs_39028422.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4df978080e4e10215587f7919a6b94d5d90cc3c0a88a4700f2d33e25d7db76da +size 765034 diff --git a/video/jXgHEwtXs8_39024874.mp4 b/video/jXgHEwtXs8_39024874.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1d6d6e99abd7206664860fdf1487cc15d8d91008 --- /dev/null +++ b/video/jXgHEwtXs8_39024874.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7a5aff8b5330e37d14943c230355c598030d327ff053753fcd8b2b51000a241 +size 2758961 diff --git a/video/jXsxGt80sv_39027000.mp4 b/video/jXsxGt80sv_39027000.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..6653c6c7d766c79fe7e4ab300492571fe9408fbc --- /dev/null +++ b/video/jXsxGt80sv_39027000.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:85e6c051ca14db10383ff453dbda886048c8303f9501b450a82a5a9685c26adf +size 49196 diff --git a/video/jXxvSkb9HD_39026538.mp4 b/video/jXxvSkb9HD_39026538.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6e60db70df5d635b76d9050e86d4324ee9cbc310 --- /dev/null +++ b/video/jXxvSkb9HD_39026538.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3369f1e49c264460cfa15385b0db2d4893b88bd28868d0a2a54f6f5b6f3a8d3a +size 2815573 diff --git a/video/jd3msHMtTL_39025332.mp4 b/video/jd3msHMtTL_39025332.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4c4b50c6cad05c18afb194922aad4ec14b57dbd6 --- /dev/null +++ b/video/jd3msHMtTL_39025332.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a6b11f2879d73d89477d7ec9a03b40bb08ec2aba3f2dff8de5dff438582af58d +size 2449940 diff --git a/video/jfHkAEgKwH_39026983.mp4 b/video/jfHkAEgKwH_39026983.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ba6f9e1c414c5bfae37967e85e68cd92b630f240 --- /dev/null +++ b/video/jfHkAEgKwH_39026983.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24e103083da83e69c71343e61f0b8936771e58cc69a33101066080907f7bf866 +size 2209188 diff --git a/video/jfkid2HwNr_39025944.mp4 b/video/jfkid2HwNr_39025944.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dc3a44e1750c39a96b142589ca11ce597e865288 --- /dev/null +++ b/video/jfkid2HwNr_39025944.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a49aced4238c02b6d2600b23c3fdf5f5a5761ce335218711d7cbaffecfcdfe35 +size 2461658 diff --git a/video/jgpWXnXdME_39028682.mp4 b/video/jgpWXnXdME_39028682.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e5aa83df3c41ab2e7f5af5a81c68edec72d21a54 --- /dev/null +++ b/video/jgpWXnXdME_39028682.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19724a13b659fb8b2fc7d737dcc61058fb3f0c43a911b4a31c115e599a103b13 +size 2402340 diff --git a/video/jhPvuc7kxB_39017551.mp4 b/video/jhPvuc7kxB_39017551.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..eee0cdc8c4b769a505ba655dcfea05dbea148ea6 --- /dev/null +++ b/video/jhPvuc7kxB_39017551.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cde0622dd5f7d8991fa89863bbaaeda4c7b0ed86e7a7a3dc028eb61ff82c7499 +size 2113714 diff --git a/video/jj5ZjZsWJe_39017549.mp4 b/video/jj5ZjZsWJe_39017549.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ae89dd019e8a115d402425889ff145ca5f81393c --- /dev/null +++ b/video/jj5ZjZsWJe_39017549.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ac0a9de09f57b6dc5b26d637ab16a187877e32e52ba67f86852e034d099f6152 +size 2763668 diff --git a/video/joNPMCzVIi_39025995.mp4 b/video/joNPMCzVIi_39025995.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..049b58370edd2764c2c9fc7001408189ec37819f --- /dev/null +++ b/video/joNPMCzVIi_39025995.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dcfeb350fd2c3e53925ea7371ccc9bb72977a8dcf40c05bbcc67e2a5f1d348c8 +size 3404382 diff --git a/video/jrNlWfor7q_39026947.mp4 b/video/jrNlWfor7q_39026947.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1278c970ea427d97fcdd4933559f85d8eced37e6 
--- /dev/null +++ b/video/jrNlWfor7q_39026947.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1dec4f03a5854359ab6f555da03fe98d48620b6d44bc01e8c5a7e67f0ebc4d6d +size 1606687 diff --git a/video/jsgYYXaSiS_39025338.mp4 b/video/jsgYYXaSiS_39025338.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..70716834809cad567d2968261eee44a6b572abf0 --- /dev/null +++ b/video/jsgYYXaSiS_39025338.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:98f175f2eedf435bae7921a6e2df89e36616a3be8eee9392168364ed99d1220f +size 2152074 diff --git a/video/jzkpwcj200_39027903.mp4 b/video/jzkpwcj200_39027903.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c5cd7c300cb81e52e08dbf114a5217c086fb06f7 --- /dev/null +++ b/video/jzkpwcj200_39027903.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d1dd0a61dcac6a68c49206506f0f2da313f890b89d35ae7bf006ee8455f4398 +size 1932989 diff --git a/video/jzngdJQ2lY_39028736.mp4 b/video/jzngdJQ2lY_39028736.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..83c1df4ed563364da82815bf9e85bff852f8cc9e --- /dev/null +++ b/video/jzngdJQ2lY_39028736.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:40eb3c7a25e81c9263a1eca0126735bbeaf8d96bee36e7834f7affd6c0411c00 +size 1914526 diff --git a/video/jzzEHTBFOT_39018868.mp4 b/video/jzzEHTBFOT_39018868.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dca922369c3c09b68aecde8d28b569156c5a0aa8 --- /dev/null +++ b/video/jzzEHTBFOT_39018868.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c49c33047644ff58246abe2def5bc5a152593a9620896e6ff26f7cc5b7911828 +size 2010697 diff --git a/video/k6ZHvF1vkg_39025076.mp4 b/video/k6ZHvF1vkg_39025076.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f6b259735adfd86f181bb91541c618965eeb84ca --- /dev/null +++ b/video/k6ZHvF1vkg_39025076.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0286504b6b0b1283d91940bb74a9368fb1eeff26dcf56d2983606193a3a1cb8 +size 1995954 diff --git a/video/k8AYft5ED1_39025613.mp4 b/video/k8AYft5ED1_39025613.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d10aa558c2239b7323f81c3e8cfd1d27eb76b24a --- /dev/null +++ b/video/k8AYft5ED1_39025613.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f22d2be2096d3835d2a6994f2670e917e5a5ba9469838decb8ba95986404272a +size 3224306 diff --git a/video/k9SH68MvJs_39024442.mp4 b/video/k9SH68MvJs_39024442.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9143dabedcc17469e013ab3df0e220088658e9ca --- /dev/null +++ b/video/k9SH68MvJs_39024442.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:72c490734c5c3442f2e37beafe914060af99b5216f177f3e4c3480cfcd839e03 +size 2438397 diff --git a/video/kCabCEhQWv_39025331.mp4 b/video/kCabCEhQWv_39025331.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..661ee7a47d2afc4f7dd5d02cd12992e0f03ab80c --- /dev/null +++ b/video/kCabCEhQWv_39025331.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6d7571c9615b8f22e22d0874a2f428aa986b9fd1f15e271d52ba8f9c570a8735 +size 918263 diff --git a/video/kIP0duasBb_39017528.mp4 b/video/kIP0duasBb_39017528.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..91e11334f5cc01837999bb4f9689ff558fc594db --- /dev/null +++ b/video/kIP0duasBb_39017528.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:d7eb22bf106481c4461818d2db6cfd6d211497462fc2bc61f544b019b4e28620 +size 3050894 diff --git a/video/kJ0qp9Xdsh_39017526.mp4 b/video/kJ0qp9Xdsh_39017526.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6636053e81740b388fd2492532316d851737678a --- /dev/null +++ b/video/kJ0qp9Xdsh_39017526.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8ca1b95a9ef9930399dacd4c7440dd9bb9d18913936d2444d42828300d15535 +size 2117633 diff --git a/video/kLiWXUdCEw_39026412.mp4 b/video/kLiWXUdCEw_39026412.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9ef00171936ddc2a6e8d58c5ba05539c9519a08f --- /dev/null +++ b/video/kLiWXUdCEw_39026412.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a7c5d08dc7569f534906003b1ada9c61b4ed5199a8e6a4986fc4b3871fefd34 +size 1368173 diff --git a/video/kOMrm4ZJ3m_39024672.mp4 b/video/kOMrm4ZJ3m_39024672.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5a3ca60ff387e38c8232389b80dfa2c5b0fd6eb2 --- /dev/null +++ b/video/kOMrm4ZJ3m_39024672.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f108721d077c1e68ca5455a7e95df70f5de6a311f6dfa57de5352a190cd18978 +size 2212308 diff --git a/video/kPBEAZU5Nm_39028052.mp4 b/video/kPBEAZU5Nm_39028052.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..feeafd13a1b41ef21f3bc048737310088b4d6d80 --- /dev/null +++ b/video/kPBEAZU5Nm_39028052.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74730b30a6ab4bef1594a79e45936a5d265dfe5e2d039648a7cffeb91e583639 +size 2483034 diff --git a/video/kPmSfhCM5s_39025249.mp4 b/video/kPmSfhCM5s_39025249.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..07052aeeff26a17a33c6e17d80fa70ed688733c9 --- /dev/null +++ b/video/kPmSfhCM5s_39025249.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f1ef8e1b6fae7ffb369d5007a45afe19e8c5d3f4519c5df136c4e673357c14da +size 2942407 diff --git a/video/kQ9LgM2JQT_39026596.mp4 b/video/kQ9LgM2JQT_39026596.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..85ca1d152d3078f9b970e7a11e7b40d0536043d5 --- /dev/null +++ b/video/kQ9LgM2JQT_39026596.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:26bb97bd4d3abf6514c4512f595c23cd2a7252cc54cc815f5a11ca5f320dcb12 +size 2452087 diff --git a/video/kQMyiDWbOG_39025666.mp4 b/video/kQMyiDWbOG_39025666.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3575051d66ca84c7ae7be871878f5237e01d0e6b --- /dev/null +++ b/video/kQMyiDWbOG_39025666.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23940653c2d8bf4f743473b47dd46d31e27facc07f30dfca3dbfb702fa151ed6 +size 2557872 diff --git a/video/kQPzFiwVIu_39026897.mp4 b/video/kQPzFiwVIu_39026897.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..60d8b8f7b22048cc97c969818590e58df9113038 --- /dev/null +++ b/video/kQPzFiwVIu_39026897.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dbe52676dccd383b5f13e5147a53e98fe193a4afcbd25c0f0d2e59b65aabed2c +size 3064442 diff --git a/video/kRwQCAIA7z_39028678.mp4 b/video/kRwQCAIA7z_39028678.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..430bf080d6b263fc7b3fc0bc5ed037f079e1f6fc --- /dev/null +++ b/video/kRwQCAIA7z_39028678.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:1df526565f0402dc679f4a7c25d391575788c61a09ff13a55fa7933d64deeba4 +size 2731388 diff --git a/video/kTtK65vKvD_39028212.mp4 b/video/kTtK65vKvD_39028212.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e4f78dd812136ff7bab072cddfb5d88733dcaec5 --- /dev/null +++ b/video/kTtK65vKvD_39028212.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:edf55a7acd9e8381f40e6836fe74f913c41326a8e0ca5bab0bb9e16ad9ad950c +size 2915533 diff --git a/video/kUCgHbmO11_39017520.mp4 b/video/kUCgHbmO11_39017520.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..54299a39aec7f6aa91f75b93dc4e8f524856b523 --- /dev/null +++ b/video/kUCgHbmO11_39017520.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f59ca62861655a872a7c30fe2ec9727e84effeda4cf985b18c76b97043be4b66 +size 1912020 diff --git a/video/kUuKFW7DIF_39017519.mp4 b/video/kUuKFW7DIF_39017519.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..90162ce25ae651b5cd26b62823b928189f37a792 --- /dev/null +++ b/video/kUuKFW7DIF_39017519.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd25b472543c62a104c5fdd1ae2db3a81baed36e6c8013163c16842771d6fbd1 +size 2182466 diff --git a/video/kVL5rvkqGG_39026692.mp4 b/video/kVL5rvkqGG_39026692.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..39b822eb15ec5b9b40dd9b899e8ad8fd86928a2c --- /dev/null +++ b/video/kVL5rvkqGG_39026692.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d99e129c56d8e0981684685dd93c920d38a10adc7ef3bfd8a0c1b1edf7c4f94a +size 2637057 diff --git a/video/kXKrLsR4aJ_39027047.mp4 b/video/kXKrLsR4aJ_39027047.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3002f22249003ae6e752c702426708575e4d4479 --- /dev/null +++ b/video/kXKrLsR4aJ_39027047.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a155964a5d9db3c2aff985681e58d8ca6f92fa91d3a5ee6c100db4f988e92c91 +size 1424165 diff --git a/video/kamAXSJxGV_39025200.mp4 b/video/kamAXSJxGV_39025200.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a1dc34b7f21a39e4c377063b6ad7fe06ff03d9ce --- /dev/null +++ b/video/kamAXSJxGV_39025200.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83f3232ebce1089b7cb9f667cc3938a5b0f3e4fad050fd76410539d55a84308b +size 2920044 diff --git a/video/kfdEXQu6MC_39028039.mp4 b/video/kfdEXQu6MC_39028039.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..80a6778b729267be2aaed1037f0ca8b851511666 --- /dev/null +++ b/video/kfdEXQu6MC_39028039.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e9ad6e80b6d3d7ad652c1bf16d0590657c612df9425e006325fe5aa43a717aa +size 1903050 diff --git a/video/kk0Eaunc58_39026807.mp4 b/video/kk0Eaunc58_39026807.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9460577b81a41a0194fa9017970ca63cd36d33ea --- /dev/null +++ b/video/kk0Eaunc58_39026807.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b10b1e38e9aeba7b5a57d838e156ae9e485938fa01f10a49d4a1178eb80c62c8 +size 2478001 diff --git a/video/klsyhjLlX5_39024515.mp4 b/video/klsyhjLlX5_39024515.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..38997d8915d4b6f2f33cee164b6d0a7f83f0fb75 --- /dev/null +++ b/video/klsyhjLlX5_39024515.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f81bbb11e1584afccd97b6cf3c67dba70d8bf67fe36adbf4172b979e333f88c5 +size 
2474630 diff --git a/video/kmn0BhQk7p_39017513.mp4 b/video/kmn0BhQk7p_39017513.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..608de571f02813b14b1dbff099f8e4a54237dee2 --- /dev/null +++ b/video/kmn0BhQk7p_39017513.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0266e5a5f127df1593c40b1988101a6706195d962d643493b3c6aaeaed0b720 +size 2455248 diff --git a/video/kngLs5H6l1_39027016.mp4 b/video/kngLs5H6l1_39027016.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3eb68ae550a05b8cca3193f60ff24bd8748d3b9c --- /dev/null +++ b/video/kngLs5H6l1_39027016.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6d456770f0ed75c46e9926c64b2b2ff42b9976c18691968b910b2c56cd3d5f46 +size 1145275 diff --git a/video/kpo6ZCgVZH_39028190.mp4 b/video/kpo6ZCgVZH_39028190.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..017f6538647af977a8d0002fd8f87229da0db91e --- /dev/null +++ b/video/kpo6ZCgVZH_39028190.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2c65825382a499759415f95c89fbda7b63fad33e75f0ad5479ef77ef99915e7b +size 2335619 diff --git a/video/kr7eN85mIT_39027957.mp4 b/video/kr7eN85mIT_39027957.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b65b0d0a0c78f890314b3932939e014904378cf3 --- /dev/null +++ b/video/kr7eN85mIT_39027957.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cb2310f8bc19655ae716fe013e7021e671baff226387046c74e8dbdb89f60081 +size 1671503 diff --git a/video/krx55l2A6G_39017512.mp4 b/video/krx55l2A6G_39017512.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..49b72140216a4f8cf1717d99dd8153f68ff96722 --- /dev/null +++ b/video/krx55l2A6G_39017512.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f703d49ede315bcfbcb02f2e998d4b026f9411828e18b7b3d0611f1361108b0 +size 2617468 diff --git a/video/kvByNnMERu_39017508.mp4 b/video/kvByNnMERu_39017508.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..840dc452f2900d9c0f193081236204916a3c0766 --- /dev/null +++ b/video/kvByNnMERu_39017508.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f43a9e24b2b2350c2b2a8426f46db4c0734feaeff0c60360d89386431397435 +size 1873383 diff --git a/video/kzJ9P7VPnS_39024686.mp4 b/video/kzJ9P7VPnS_39024686.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..688328144309dabfc8c5a7df3f8f8cee375258f8 --- /dev/null +++ b/video/kzJ9P7VPnS_39024686.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5b6e6e306ffaa9f733bc321a6bf497557dcf2d74a12eef6e9313cda9d0abd6d5 +size 2621527 diff --git a/video/l04i6dPMxK_39025277.mp4 b/video/l04i6dPMxK_39025277.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ddb406ad44fe071a7c5b6cfb61507f4ffba5776a --- /dev/null +++ b/video/l04i6dPMxK_39025277.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ea04c12cc97540224a235c93cb55b224d1ac7d6f9999a7ce2be9191b67d4526 +size 2091777 diff --git a/video/l2yvtrz3On_39028024.mp4 b/video/l2yvtrz3On_39028024.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f532ef873d974f3d1a62521e23202b56c8a8ae65 --- /dev/null +++ b/video/l2yvtrz3On_39028024.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:57a53cf9f6fdfa6feed4eee1312bb48ce1e946e2bf77c232085b56592f680b02 +size 3430398 diff --git a/video/l3qtSNsPvC_39018621.mp4 
b/video/l3qtSNsPvC_39018621.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0ddb0d79fe16e71c1495e0a6536ffd0dc94673d4 --- /dev/null +++ b/video/l3qtSNsPvC_39018621.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c48ae72d80d1e7450cab5e1ae6703b737775f24ad17a46523141c81e01420b01 +size 2456839 diff --git a/video/l6iICoILGB_39028815.mp4 b/video/l6iICoILGB_39028815.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4063c2136f24650392f7f66b0c7f7c89c75e33e6 --- /dev/null +++ b/video/l6iICoILGB_39028815.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ccc793285f7e491b3f870b4ff279e2919ba24307d52389b0c9be0e79316160a7 +size 2497042 diff --git a/video/lBh5kuuY1L_39027979.mp4 b/video/lBh5kuuY1L_39027979.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..833bfca4c7c17729a6b5c84720bf45209f1efe0e --- /dev/null +++ b/video/lBh5kuuY1L_39027979.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c30d66199f09c516497d57e353d67ea8623ea6cf952890e0e31a3c21bfcb337 +size 2492406 diff --git a/video/lBp2cda7sp_39026981.mp4 b/video/lBp2cda7sp_39026981.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e51b1bdcdc11d7525daf11323fea62a950d79451 --- /dev/null +++ b/video/lBp2cda7sp_39026981.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:57e68ec1a45053a43f1320e95c449348b0f73acfefe2cf8a3f2fa33f100a656f +size 2523033 diff --git a/video/lF2aip4Scn_39019189.mp4 b/video/lF2aip4Scn_39019189.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e183b2c4b717411cc95cfce18b5203d92e513d36 --- /dev/null +++ b/video/lF2aip4Scn_39019189.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e6136ecb79cdb40c1d4203e7e306d98cc630ba8eec055e0784aeac233dc7db7d +size 1733509 diff --git a/video/lIH6oCdppg_39026848.mp4 b/video/lIH6oCdppg_39026848.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1c7da67cc443ef66b535d67ff6f5085dde51680d --- /dev/null +++ b/video/lIH6oCdppg_39026848.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9cac22aa5c4aba2278c6565f25b125e0c5aba960b3a87936dbbb6c9afeda6a24 +size 1129280 diff --git a/video/lKnl4CLhhS_39026382.mp4 b/video/lKnl4CLhhS_39026382.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a737e1a5d584384d52ec74c806ffc2dcfe0ca29d --- /dev/null +++ b/video/lKnl4CLhhS_39026382.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c8c131a5d736350475288caa137b5a384a3f78ce44202e63a322551cd3143f0 +size 1094394 diff --git a/video/lNCsyA5uS1_39026029.mp4 b/video/lNCsyA5uS1_39026029.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cbf6415b4d88e2d93049002f695ab02f922bcd77 --- /dev/null +++ b/video/lNCsyA5uS1_39026029.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f753b53d5037575f3fe39bb90f12ad9045c92dc152b12e9bb8f3b02d003c9857 +size 1939951 diff --git a/video/lOMHt16T8R_39025641.mp4 b/video/lOMHt16T8R_39025641.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..453be45f81f87ada9c0162db9cdca70082fca5a1 --- /dev/null +++ b/video/lOMHt16T8R_39025641.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3447e9c93c16b98f8c1a49a624595047952f7fb5dca6d7f69d990c62fd5ed00e +size 7752 diff --git a/video/lPDxPVS6ix_39027279.mp4 b/video/lPDxPVS6ix_39027279.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..cbd3b788653963b1b1e3767f66520cf437673db1 --- /dev/null +++ b/video/lPDxPVS6ix_39027279.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:075eadbd1e657f43f6dbf94b96757876e617955b28ff521b5c86454b14f2b8f5 +size 2994482 diff --git a/video/lPTWdyIY4O_39025310.mp4 b/video/lPTWdyIY4O_39025310.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c4d5a6e9a8373b8a765369c94b60ff784b2428bf --- /dev/null +++ b/video/lPTWdyIY4O_39025310.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:72c122d228b42fab7d0755f7784cabad8e2c33a530b4455328062b0a79cc777d +size 2506879 diff --git a/video/lQ45aR8L7D_39025780.mp4 b/video/lQ45aR8L7D_39025780.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ab70c7740e5df5ad7876e3865d2147d42897b1db --- /dev/null +++ b/video/lQ45aR8L7D_39025780.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7f18fd06928e72ab4a2cbabea03068aa52a597d4d5e5b21d05213d00c14c3819 +size 1910257 diff --git a/video/lR3rk7ysXz_39017494.mp4 b/video/lR3rk7ysXz_39017494.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bd4d438595924a2c37316672c74612bdd531c22e --- /dev/null +++ b/video/lR3rk7ysXz_39017494.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d9ed5d21e05f452ae08de41e7a0673c730b33119263c51f4f19b214f815fc462 +size 2267330 diff --git a/video/lV1wGHKd5x_39027684.mp4 b/video/lV1wGHKd5x_39027684.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2d92cca095c159962a45ee61bfec2b589e82fb40 --- /dev/null +++ b/video/lV1wGHKd5x_39027684.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8bb96ae1503069ebcfdf752c350078a7cb5680631a1d90d6b051668f89808134 +size 2519404 diff --git a/video/lYdjzx3DYu_39025130.mp4 b/video/lYdjzx3DYu_39025130.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e93fa73ade87025818ed4929f4326b9eb26ac50e --- /dev/null +++ b/video/lYdjzx3DYu_39025130.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d23609cf7460cb699e99b907a0d29daaf20b7ed9839663af64077ef2aeb069b9 +size 2518465 diff --git a/video/lZJ0WYI5YC_39026498.mp4 b/video/lZJ0WYI5YC_39026498.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fdaa0b9e78ce478ccfda05a7cb8d5fce679102b8 --- /dev/null +++ b/video/lZJ0WYI5YC_39026498.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:618f571ad4510e1c95e33a8d0ba5dbe6bc506b513c5b53c8f7d5d94e3ba92bf2 +size 2040357 diff --git a/video/lbLC5OV9GY_39026455.mp4 b/video/lbLC5OV9GY_39026455.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3651ccd0a14308b9a9c6f671ada35d6fc6b324f3 --- /dev/null +++ b/video/lbLC5OV9GY_39026455.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9196247a2b631d0da9d6de46a84ef4dc1218b68690dba51555faa07909ef3446 +size 1719681 diff --git a/video/lcALCNF2qe_39025678.mp4 b/video/lcALCNF2qe_39025678.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..156af4879c4ca7f5f8a5d2a8fc466daed7fdd552 --- /dev/null +++ b/video/lcALCNF2qe_39025678.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30cd5951e6e8b1a20e19a196a23f5fc109dc4ff57519f57b0d0d6b34c4cb4106 +size 2738735 diff --git a/video/lckAdnVzsT_39024607.mp4 b/video/lckAdnVzsT_39024607.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..208abfa7f67693ccc4fb8d9928142ac4783c7479 --- /dev/null +++ b/video/lckAdnVzsT_39024607.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ffc1b98ef31ef429797f17f0efb16ce6a7a3640d5e47a2b8d09ab9da0a584f77 +size 1564086 diff --git a/video/ldJXXxPE0L_39017491.mp4 b/video/ldJXXxPE0L_39017491.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..19441ce9b1665084eebd10710be1d18fab688d10 --- /dev/null +++ b/video/ldJXXxPE0L_39017491.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88b478ad4ea5bdfaaab0574e0d13c62079d7a4729b6b9efc2364a2ff0d494821 +size 2591241 diff --git a/video/ldvfaYzG35_39028096.mp4 b/video/ldvfaYzG35_39028096.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b2a2180b9eca04683116c3535ca0fda7ddab8d4a --- /dev/null +++ b/video/ldvfaYzG35_39028096.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb5b4e0ed30151edd12a56f33bb112c3ea490fce9cd7c33e0d5aa4a0b573c1f6 +size 2131048 diff --git a/video/leqD3bJ4Ly_39026630.mp4 b/video/leqD3bJ4Ly_39026630.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a5a0e580436c1fc005fde9800b9bf6a92554c7d2 --- /dev/null +++ b/video/leqD3bJ4Ly_39026630.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00346129f581dfd1dff962e0ffc697c416fc975185c1d7500180b12398b3a560 +size 3295058 diff --git a/video/lfY0SUT3m9_39028272.mp4 b/video/lfY0SUT3m9_39028272.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bf95056b8e0ad398a05f1cafe477fcc6a0010b69 --- /dev/null +++ b/video/lfY0SUT3m9_39028272.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:195f682188bff34215a8e37dcb88326760705c82e628ef12f8db7e84e740886e +size 3211532 diff --git a/video/lflwtGE6Vf_39024791.mp4 b/video/lflwtGE6Vf_39024791.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f37930774c7d7b3ed82edcc7f6700ce51d47171e --- /dev/null +++ b/video/lflwtGE6Vf_39024791.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c22cc60468f4aeea5a26dbc6fc9faddf572292749f0996524cdc5b91ec8b2456 +size 2573223 diff --git a/video/lgtsXxk4dF_39024498.mp4 b/video/lgtsXxk4dF_39024498.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..79ae824d28aa952e82143d58219f5573ffc8ef73 --- /dev/null +++ b/video/lgtsXxk4dF_39024498.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa9037d8e4d0180343eee7f02fdaf7f116b7c88c05e8238c2435c837fdd9c14b +size 2689684 diff --git a/video/liHe9iumIi_39026322.mp4 b/video/liHe9iumIi_39026322.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..81361530e6a82434548c067492c24dc43c06a3bf --- /dev/null +++ b/video/liHe9iumIi_39026322.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce42d2a99f980d120152ab8b9ab98e62b9443a3ff86a6029e71046e5dc45d83e +size 2525911 diff --git a/video/likXVjmh3E_39017488.mp4 b/video/likXVjmh3E_39017488.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7aaa8600d0d71ff91b0a56d03e100ff3c5c4a1d5 --- /dev/null +++ b/video/likXVjmh3E_39017488.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e81941294a67c9511c9a8c825a7b52a54beea28b22f239bcc6ab65bf53e4e40 +size 2773414 diff --git a/video/lkx3OpcqSZ_39026557.mp4 b/video/lkx3OpcqSZ_39026557.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..82c7f700b0242faeed9bc68174e28c0c67995ad4 --- /dev/null +++ b/video/lkx3OpcqSZ_39026557.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16614d356b1df19394b87bc522df231f44b0cffffaea50f89f1c1b8855f5519c +size 2747017 diff --git a/video/llTroju97T_39027030.mp4 b/video/llTroju97T_39027030.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d1d2d673b054e9dd366fa88e890be8c0b053b1b2 --- /dev/null +++ b/video/llTroju97T_39027030.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e6f60b4141e9cb2771361f8a034e9aa17af6feae058a8163e5d6000bf3091cb +size 3311656 diff --git a/video/loYSzjSaAK_39018641.mp4 b/video/loYSzjSaAK_39018641.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b9abd0b40680b91756745b0565ff438f2c5d93da --- /dev/null +++ b/video/loYSzjSaAK_39018641.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fab5485d63590b122d9a6b4f7b6603f37a00be52bebab640805b64e086a9d70b +size 3080259 diff --git a/video/lpFDhC91Oj_39025459.mp4 b/video/lpFDhC91Oj_39025459.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f5ac7bbfafd4e388d539048beb0dfe966bde9230 --- /dev/null +++ b/video/lpFDhC91Oj_39025459.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:42eb47849aec59f05d4783ef9611993b57425af27074d4fdbd636474caa44ef2 +size 1985756 diff --git a/video/lpXDZKiAnt_39024720.mp4 b/video/lpXDZKiAnt_39024720.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2f91f25c747c0792363e35bad4a12fa967c4cff2 --- /dev/null +++ b/video/lpXDZKiAnt_39024720.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58315c250ddfb2235c37297ca82aaf615abb839344e34a929ea0d44ee3ee0262 +size 2476194 diff --git a/video/lwpfH9wVkO_39028607.mp4 b/video/lwpfH9wVkO_39028607.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c3c3513650ab56f3510bcbff46527c5ce7915129 --- /dev/null +++ b/video/lwpfH9wVkO_39028607.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6b637794c03948d1fccd521b3f18e9f65c54f6744037ddfc17c31541f5f1942 +size 2475693 diff --git a/video/lxhoVDf1Sw_39027654.mp4 b/video/lxhoVDf1Sw_39027654.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..20da0f96cc8c81657131e9a7e68e62cd8ef02bb5 --- /dev/null +++ b/video/lxhoVDf1Sw_39027654.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ebac0e4a4cafbb6e9712ce444c8136b104dadbbcb79155d9bc136daae282ea7 +size 2153892 diff --git a/video/lxuXvJSOcP_39025072.mp4 b/video/lxuXvJSOcP_39025072.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..63413d7f316e56bb955d53a1e3f00111cbb52f96 --- /dev/null +++ b/video/lxuXvJSOcP_39025072.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93405cd353ba723e9b71e6cad6ef389f201a60a8a6034ed84efa0eb2eea0e055 +size 7756 diff --git a/video/lzfzjYuWgY_39027866.mp4 b/video/lzfzjYuWgY_39027866.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1c12bab9df8b9572bf2eb48fe949d9545334494a --- /dev/null +++ b/video/lzfzjYuWgY_39027866.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39780eb8ac337b6346bfdb006ab87b57c6dd18d91e34383fae1292b33d19ae17 +size 2651392 diff --git a/video/m1PVjNHvtP_39028512.mp4 b/video/m1PVjNHvtP_39028512.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7e60824c0b11b5554ad80153f6fb5988a970b9a0 
--- /dev/null +++ b/video/m1PVjNHvtP_39028512.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7f336a8f71f957f74a7360b4069bd60f1c86fcbb92b485bad9698cef073a269 +size 2906681 diff --git a/video/m296WJXyzQ_39026241.mp4 b/video/m296WJXyzQ_39026241.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..117ee2bd0b0d26c0876780a3badc2e9644c96812 --- /dev/null +++ b/video/m296WJXyzQ_39026241.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:547490d1a2b9bc0e6cc81efb085f96c644a8801c44074b3cc546db6a560bac38 +size 2742723 diff --git a/video/m2NVG4Htxs_39017481.mp4 b/video/m2NVG4Htxs_39017481.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..91a8843adcb986d0a60a09eb4ed8cb6190b04919 --- /dev/null +++ b/video/m2NVG4Htxs_39017481.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ad8a73af2a67257609d1a42805ea36b8985326b2058476eb19d02036aa03a3d +size 2292404 diff --git a/video/m4ZcDrVvid_39028368.mp4 b/video/m4ZcDrVvid_39028368.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e1d3e243698218c8d8246d3d0cea3dbc1f9c7dd7 --- /dev/null +++ b/video/m4ZcDrVvid_39028368.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44d88f88d9e096764544059f487e72172f3310985d3c55ee6bbd0df4abc60aaa +size 2617405 diff --git a/video/m5106RRLgx_39026321.mp4 b/video/m5106RRLgx_39026321.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0182b7690f28eeb545879cc70a79079992e109e3 --- /dev/null +++ b/video/m5106RRLgx_39026321.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b854e1f94ba1b7c6dd91172879631f1ee0b3efa3d2bd395f838192171f9d78d5 +size 2599303 diff --git a/video/m5dyKArVn8_39027880.mp4 b/video/m5dyKArVn8_39027880.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..175335ae9e352cd4f0c1adeabaf9aa08fb738fb9 --- /dev/null +++ b/video/m5dyKArVn8_39027880.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:735c15ad44c0631eadddce304901314ae701bc58a404b86e98e8ff9f8b28f8e1 +size 2408018 diff --git a/video/m906PS5G9x_39024597.mp4 b/video/m906PS5G9x_39024597.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1aaf747720dcbefb0711a17b934e64be89b4f098 --- /dev/null +++ b/video/m906PS5G9x_39024597.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d78439d60474f12786fb30994a29741eeb029fc573fb7926a7d04765a588b53f +size 2540240 diff --git a/video/m9WZrEXWl5_39026774.mp4 b/video/m9WZrEXWl5_39026774.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a557a514914fcc7cf86941e71917eee7c13f3214 --- /dev/null +++ b/video/m9WZrEXWl5_39026774.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58488a55e1bdb3c563e11b46ac15ddb176751e7f05e24131be062f79598ca989 +size 2687559 diff --git a/video/mGHJAyR8w0_39019187.mp4 b/video/mGHJAyR8w0_39019187.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..53fcb0f75c805d7e80df331c01afd6163991b011 --- /dev/null +++ b/video/mGHJAyR8w0_39019187.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:18f10ccbe9aa4337c3212420f9a279ffa6aecf4bbde6294d324304856f0a0edc +size 2109303 diff --git a/video/mH1xtt2bJE_39027229.mp4 b/video/mH1xtt2bJE_39027229.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e4281bf59e0948d6e3760059cb973292b0c07d90 --- /dev/null +++ b/video/mH1xtt2bJE_39027229.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:a3da8659ab31cdd6b696c0d5fb4d285a89e0f612a42d9c8799c61f11e6a88224 +size 2182254 diff --git a/video/mHVmsy9len_39027736.mp4 b/video/mHVmsy9len_39027736.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c28f54a22817deea1846e6d7daee68da9d046ef0 --- /dev/null +++ b/video/mHVmsy9len_39027736.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c55d7238e59f76d90cabd54145198f5988d96f6b0b5a93540774aacf6ae92580 +size 2214854 diff --git a/video/mOK4yD8JFd_39028689.mp4 b/video/mOK4yD8JFd_39028689.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..43644cc5081edef0fb79999facdc0892d9a37117 --- /dev/null +++ b/video/mOK4yD8JFd_39028689.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71b655242c352f14a90b60b9dcbf24fdcd76ec79148ef2e94aaeb85eb0e0e631 +size 3108634 diff --git a/video/mQ72XRfYRZ_39017472.mp4 b/video/mQ72XRfYRZ_39017472.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..da95e7909a0fc11aa8af9555c0c73301768699df --- /dev/null +++ b/video/mQ72XRfYRZ_39017472.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e257bbfaaa0df5bbc8fd163b80c7a5c52e40ec6d0105e974b91655ebe31dda32 +size 2715678 diff --git a/video/mRIQz8Zd6O_39025063.mp4 b/video/mRIQz8Zd6O_39025063.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..23adefba8d05e24c2b6009848a0d55a830ae31a5 --- /dev/null +++ b/video/mRIQz8Zd6O_39025063.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b6df7b97875c2fd57f3e817396846f042406066f5336162eccd299578c2e21d +size 2247194 diff --git a/video/mSHs6C7Nfa_39026080.mp4 b/video/mSHs6C7Nfa_39026080.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..253981adc38b418e6ebd380e6fbe811ab0dd6db5 --- /dev/null +++ b/video/mSHs6C7Nfa_39026080.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:25f4fda618766c2ce5a6e40a97f5158ff97a39bbcc1602c4375983fb9590b52a +size 2448584 diff --git a/video/mSaqxZVZW8_39025296.mp4 b/video/mSaqxZVZW8_39025296.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..00e40bdcfb2b56f6026ffecff11d4f7c8eda6f29 --- /dev/null +++ b/video/mSaqxZVZW8_39025296.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d07d9650bfdb1c995e479249c9a1ff07cbf6ae916f6fc5ea74701fbf2ac8105b +size 2800496 diff --git a/video/mXlR1FLFDc_39024779.mp4 b/video/mXlR1FLFDc_39024779.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..12970b80a5c62c3061087260e60691c9a3ac3825 --- /dev/null +++ b/video/mXlR1FLFDc_39024779.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:42f7d6fba131f36550aeab237ce4f41b8b11bb618c4ef579b42c14d8fe4ebfea +size 3105200 diff --git a/video/mY0ZnS2s9u_39028200.mp4 b/video/mY0ZnS2s9u_39028200.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6ea88676dc28f23b6db7e3d5e78e0d3ee727fb28 --- /dev/null +++ b/video/mY0ZnS2s9u_39028200.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9102c0883c019cb314ca9a187bd3c0d919e7cb1e1efd3f7c2b9bdf279d01db37 +size 2202096 diff --git a/video/mYWsyTuiRp_39017470.mp4 b/video/mYWsyTuiRp_39017470.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..64d2a588cfd0ef55c7c6c93493c1eae3e2fabdff --- /dev/null +++ b/video/mYWsyTuiRp_39017470.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:a416228c6553011dd62044cad0e67e202c4514bb0fb386962382264e76a9a547 +size 1380794 diff --git a/video/mZHbkbYWTp_39028253.mp4 b/video/mZHbkbYWTp_39028253.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..767d6520d0d95418a28b50a1ec00eddee6124120 --- /dev/null +++ b/video/mZHbkbYWTp_39028253.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ac7cb0c1472cf95edb715bfc83b9feac0716de5409b38edd500142bd0826475 +size 2637257 diff --git a/video/manHbkpIW6_39026319.mp4 b/video/manHbkpIW6_39026319.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e371d4ebe2c2366c595d9d1179d2ea7280e654ba --- /dev/null +++ b/video/manHbkpIW6_39026319.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:135dd12ed7b225fb44a2953b8079d68feae4303a08ccf3e5add37a2ba87c967f +size 2964960 diff --git a/video/mfTvNzhsht_39025819.mp4 b/video/mfTvNzhsht_39025819.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..23a7bbeedf4aea51bca4c711fe8c6c502fcc4dda --- /dev/null +++ b/video/mfTvNzhsht_39025819.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f8a6cbe52352d403bcff1aabeca4749e3687fa40dcbace200b2573547a13ce2 +size 2671000 diff --git a/video/mhhlZeAr67_39025905.mp4 b/video/mhhlZeAr67_39025905.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1e3c657651110cd434da1b152dab8fab384e4514 --- /dev/null +++ b/video/mhhlZeAr67_39025905.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3018db9b53c8d7d20cb917eb85487ec7b38dcc2f4eed97718f779c4ffbf67906 +size 1758389 diff --git a/video/mirkQqx6po_39025329.mp4 b/video/mirkQqx6po_39025329.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..61c40e6905145a886497fc06136c9895d926180f --- /dev/null +++ b/video/mirkQqx6po_39025329.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ecf6a67a07d88a327ad244791f216a4bcc4eba885e50a92dad74777a58682997 +size 2371613 diff --git a/video/mkw6x0OExg_39026085.mp4 b/video/mkw6x0OExg_39026085.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..79b874e2ddd2513073453be76facc0e509c341c9 --- /dev/null +++ b/video/mkw6x0OExg_39026085.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4941f0cf16195ce657446e64c1b9e0b76ba019bef3ecd6eb8b5e9ec4e408fea7 +size 2538259 diff --git a/video/ml01XyP698_39024611.mp4 b/video/ml01XyP698_39024611.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e12c0cb627f78ba8235fc2b04d2a680ef19a2509 --- /dev/null +++ b/video/ml01XyP698_39024611.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d2b57d391b627ff3e2c72afab8ad4b7d24c9a80522211e5e6816d4955d4e006 +size 2306895 diff --git a/video/mljDUaQpln_39028517.mp4 b/video/mljDUaQpln_39028517.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..69627fef130cab217ff83ad596bac351ad33a95f --- /dev/null +++ b/video/mljDUaQpln_39028517.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66484d02ba29a1ddf3c9ff8eb5c1d56bf6a4e6fb030a38d57f7da5aaa9cf2896 +size 620797 diff --git a/video/mmSFfib6pI_39024884.mp4 b/video/mmSFfib6pI_39024884.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..335516d7f229052bbedd8aeeebc434b538b047b4 --- /dev/null +++ b/video/mmSFfib6pI_39024884.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aac156006f1a58d5665b664c06d1135080656263c42d912051db3d06265d937e +size 
2492377 diff --git a/video/motImXq3B1_39025744.mp4 b/video/motImXq3B1_39025744.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bfc5090a22e41c7cb55389a7f897297ea34fd91e --- /dev/null +++ b/video/motImXq3B1_39025744.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82aea977cd4affd03a5fa2ccfabeed0a1272ace5f45a650c696c03fea00f0790 +size 380734 diff --git a/video/mp6OWpDIJC_39027931.mp4 b/video/mp6OWpDIJC_39027931.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ea56eb986e1770f4ee6a90fd1d928ef013b2a484 --- /dev/null +++ b/video/mp6OWpDIJC_39027931.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c6fe160bf37975784faf1f5d7f2823e0385a43683728f931bdeb61b9cc905f52 +size 2915403 diff --git a/video/mp8u2Pcmqz_39026575.mp4 b/video/mp8u2Pcmqz_39026575.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a2aec59d2cea7a52b1c57a3db6c1ce515d19c407 --- /dev/null +++ b/video/mp8u2Pcmqz_39026575.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:96a6af87be386dada24e7011063443ae4bcb72afa9e36171a441a5c42b0cd350 +size 2784269 diff --git a/video/mpDbWjLzfT_39025584.mp4 b/video/mpDbWjLzfT_39025584.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7a5462503fcdbdc5576b782b1524a745033efd18 --- /dev/null +++ b/video/mpDbWjLzfT_39025584.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fce51529a1fa8083b876203caa256ac940483a57ae9f203ff7c56cb4e6bd809b +size 2381359 diff --git a/video/mqVgBbNCm9_39018863.mp4 b/video/mqVgBbNCm9_39018863.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..243a0ca93af206afbda358f80f802079a6e9142b --- /dev/null +++ b/video/mqVgBbNCm9_39018863.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c72ad14eca180945d30da065c25f0992307ea6c8288983080b4fbd8775c562b6 +size 2624286 diff --git a/video/ms0VgzSGF2_39017461.mp4 b/video/ms0VgzSGF2_39017461.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a9b3b2561ae0a456c0a317fd3fda936c9a8f316e --- /dev/null +++ b/video/ms0VgzSGF2_39017461.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3bf5fa8e1956c85013e889ef2c129dcf84d11a506afcaea7204983d14f5bc296 +size 2085668 diff --git a/video/msXxrttLOi_39017126.mp4 b/video/msXxrttLOi_39017126.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..19120289f45c7537d255311c7086ad01fe8c665b --- /dev/null +++ b/video/msXxrttLOi_39017126.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea30d3bdcd7dbf2960ab2df34b160d0eb6e1aae20bdcf3c87990c8492afdde92 +size 2470727 diff --git a/video/mtBmKqyqGS_39027814.mp4 b/video/mtBmKqyqGS_39027814.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3fb679016ed0024c96be28239b0d24318b256447 --- /dev/null +++ b/video/mtBmKqyqGS_39027814.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2bbec8fde062a02d8ef7c8ca8471316a738f57119860f2c83fb2335409508622 +size 2235804 diff --git a/video/mw1PWNSWZP_39018956.mp4 b/video/mw1PWNSWZP_39018956.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..47f568a5ab79b2527d06f55539e4ed9011120151 --- /dev/null +++ b/video/mw1PWNSWZP_39018956.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6bf30cb4221229c3476065b76efefae02c3d7b33c76fb44e3795c66850140fca +size 2717032 diff --git a/video/mwN1bbD5DQ_39027129.mp4 
b/video/mwN1bbD5DQ_39027129.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2b9d5ae5195638c6f9b1e9c9156ddcbd30f9fdc6 --- /dev/null +++ b/video/mwN1bbD5DQ_39027129.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2aa8e7b5cf40b802517b27883b330893271c8beb74618d332b6dc2309afc0b3 +size 2944962 diff --git a/video/n0arS0DDot_39025651.mp4 b/video/n0arS0DDot_39025651.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..331be5742fe027cee320b4229ed4c1115913a169 --- /dev/null +++ b/video/n0arS0DDot_39025651.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82401812f97d59be20ab772202b889665207fe92e2c30637dbf997333ed8be1f +size 1962883 diff --git a/video/n60xBFZWrk_39026673.mp4 b/video/n60xBFZWrk_39026673.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c3f67076d1b3aca85118eff05413388f383bbcd9 --- /dev/null +++ b/video/n60xBFZWrk_39026673.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b4296786728ded49cbc8b3044d492c078ed143f8efa112d0d18d9174c2c61e56 +size 2150630 diff --git a/video/nAIhvNy15T_39026339.mp4 b/video/nAIhvNy15T_39026339.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bc741ab1df96e667353cbacf838934bbe9a6d617 --- /dev/null +++ b/video/nAIhvNy15T_39026339.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c67ff37b89e1d3af8ba3fc5ea5e1a602f62328cceb85507a16a8d82023fdf2f +size 2716951 diff --git a/video/nAnEStxyfy_39027390.mp4 b/video/nAnEStxyfy_39027390.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..17843bfd06c1b154fcad14aac2a3bfe2bf609c67 --- /dev/null +++ b/video/nAnEStxyfy_39027390.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19f59317f4ec4e5ab420238cf539bae93091a793481e62c4612aa54787841cda +size 1839345 diff --git a/video/nBOdYBptWW_39026622.mp4 b/video/nBOdYBptWW_39026622.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..71893484fcadb063ba4803c763977133000a61b6 --- /dev/null +++ b/video/nBOdYBptWW_39026622.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19f52a5f7dc9eadc1740cf4bff042407f0464136a9590b31272664d03c8f586a +size 2461864 diff --git a/video/nBhfIcDnRP_39025103.mp4 b/video/nBhfIcDnRP_39025103.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..957c0be92dfaea3f529a5f0a7cb7b8cf5a8c86d3 --- /dev/null +++ b/video/nBhfIcDnRP_39025103.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23947496a45e1c3c2189c00f767712a48b6ea528d2bd8fae16f8ecb1422f3327 +size 2456978 diff --git a/video/nBjmMF2IZU_39028668.mp4 b/video/nBjmMF2IZU_39028668.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7f9279151946ee51244c3445b98174b10b8d55ca --- /dev/null +++ b/video/nBjmMF2IZU_39028668.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7092e8f1f0370a2cd43ca220853f86b9067d6200d34d6aed7dbcb168617e4dc +size 2935195 diff --git a/video/nF34qXcY0b_39025912.mp4 b/video/nF34qXcY0b_39025912.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b801fc37697524d87b199a346806088441a9d4a4 --- /dev/null +++ b/video/nF34qXcY0b_39025912.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff0577b1812ea7473b6e9d687fa0edd0aa84dc10d21281d565aa6f5384339a3f +size 1222909 diff --git a/video/nFI3wFM9yN_39017120.mp4 b/video/nFI3wFM9yN_39017120.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..a31c187dc539b05eb44fdeb87f194a3e2f3f1ac2 --- /dev/null +++ b/video/nFI3wFM9yN_39017120.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7d26c02aca5d35730f6c9eed2f77f4cc5aa2bb23f3f33e6111aa23f7c21c9ca +size 1336469 diff --git a/video/nJnky5K944_39019005.mp4 b/video/nJnky5K944_39019005.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4709ff352d53b9bbc3e21b83957cc8f0c0fa7c5d --- /dev/null +++ b/video/nJnky5K944_39019005.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f48de7550857d6be2b914377975635408b1bad2258a6fb5309ab6ec112c09839 +size 2081034 diff --git a/video/nJvkQSu9Z5_39028677.mp4 b/video/nJvkQSu9Z5_39028677.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5b1c73038fdbc3fd60ce58c4f2543538c213291d --- /dev/null +++ b/video/nJvkQSu9Z5_39028677.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8b0939164f1cb65dd7e54e6cc55d06ac6f74c5d1f9145e0c01feb896182413b +size 1941511 diff --git a/video/nK6OnCpd3n_39027059.mp4 b/video/nK6OnCpd3n_39027059.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e0ebcbcde91a32e7fb1abb82524e6c9026914c26 --- /dev/null +++ b/video/nK6OnCpd3n_39027059.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9afd8613d5316d1d8dd0c9ecfee45470c24d050f77089a514029e6ace11470d2 +size 871375 diff --git a/video/nLQeE8QGGe_39025086.mp4 b/video/nLQeE8QGGe_39025086.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6c7cc01722ef86a130457ac856cfdf4be55025da --- /dev/null +++ b/video/nLQeE8QGGe_39025086.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f355c35588920302685e611246aa7d9ef0d270e7ab79398dab98d1b8e2dd004 +size 2205977 diff --git a/video/nLWiR5P3wr_39017193.mp4 b/video/nLWiR5P3wr_39017193.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..db0a72f5a96cc6616c555b0571aa75493eaf42d0 --- /dev/null +++ b/video/nLWiR5P3wr_39017193.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc9f0b4699fbba62eb836690398f7c299214c749c6a9b443749837198ee71195 +size 2980949 diff --git a/video/nN6NSd1Qds_39028615.mp4 b/video/nN6NSd1Qds_39028615.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..95dd094d7c863f38e995cafdab8e3d3ce02c5b2c --- /dev/null +++ b/video/nN6NSd1Qds_39028615.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c765aacf9d0ca6903ab8166637b6c3213e87ee7be7124a9a829dd9ac0b89832c +size 3389340 diff --git a/video/nO344avRib_39017449.mp4 b/video/nO344avRib_39017449.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..781beea1f9af0b1cbdfa133604ac6ea5ee2b464f --- /dev/null +++ b/video/nO344avRib_39017449.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71fa0f543b9a2fe0345f5de6ebd1bf04b37e0b235643ee0a363d7fdbf3b00663 +size 2663529 diff --git a/video/nQl8EjyMzh_39026907.mp4 b/video/nQl8EjyMzh_39026907.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..00c7e496dbbe2b22e662c149cb1e97ae7ee10304 --- /dev/null +++ b/video/nQl8EjyMzh_39026907.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7f85dfcc9b07d7e9ce868cd39811e33459f4b8ec38d48c215a8a8d49120cfcc8 +size 2179999 diff --git a/video/nRdST1qifJ_39027318.mp4 b/video/nRdST1qifJ_39027318.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..ee2e4be0169a6d9c0fbed6dc9105299283c6c6f8 --- /dev/null +++ b/video/nRdST1qifJ_39027318.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06c7d6429e5c47a5fafe99c06b1ca080fce998e5ea2f4565b2f6eb98a0e1a50d +size 2076827 diff --git a/video/nRp0XhTf61_39025579.mp4 b/video/nRp0XhTf61_39025579.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..52f2302991813df8fee6f4118375d3bee820ab96 --- /dev/null +++ b/video/nRp0XhTf61_39025579.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c0852f85af483c1b11337ab2bd0d21d6dd6036f347653a0af6a60e1ee90208d +size 2148741 diff --git a/video/nTJeOXlWyV_39026256.mp4 b/video/nTJeOXlWyV_39026256.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1d03bf1f00a470d8939280147fb04ebbfecc6529 --- /dev/null +++ b/video/nTJeOXlWyV_39026256.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83341582a84ea0f726dbfa5099ec843ffb6cade7ae1eb61a6d394d9e48c6e0f1 +size 1248183 diff --git a/video/nWMqQHzI3W_39025318.mp4 b/video/nWMqQHzI3W_39025318.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..01e4d19dac78a2f290d4282c7d72890c888350d5 --- /dev/null +++ b/video/nWMqQHzI3W_39025318.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:879ce741633787e2a6c2bcfe59da62c4af00684a36f5589f1e43e3ec709ed1de +size 2280349 diff --git a/video/nXXwYsARXB_39027464.mp4 b/video/nXXwYsARXB_39027464.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1e7d7c61e89a22faebb18f0bdcb9c91ef89196cd --- /dev/null +++ b/video/nXXwYsARXB_39027464.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8234f8fd95b1a4fa88d21598a7d4c2e2454525e7ed7efc346e337873a8cb08c6 +size 1779633 diff --git a/video/nXYedmTf1T_39026861.mp4 b/video/nXYedmTf1T_39026861.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f8144931a949e33d67caff05eccf772026ace9b0 --- /dev/null +++ b/video/nXYedmTf1T_39026861.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a366e75c64d0b986d0e2b513672f4ac6284e2096ea2d17b4b58c3c6a01b73349 +size 2153387 diff --git a/video/nY0BrZdqLt_39026386.mp4 b/video/nY0BrZdqLt_39026386.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7abeab2612c449f6ad31d716428e085993e06eb5 --- /dev/null +++ b/video/nY0BrZdqLt_39026386.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b66ade533ee50bcd822b056791ea084cfb554db7d7bf9ccee50afac009b7566 +size 2386418 diff --git a/video/nY7fGtsspU_39024880.mp4 b/video/nY7fGtsspU_39024880.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3438ae361cdc0b63d395663ca4fb7dbe9f99b377 --- /dev/null +++ b/video/nY7fGtsspU_39024880.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a1785802a8d2aa2241fd1cd35d296643b81120875c97811844c3ef46fa6149d +size 2063448 diff --git a/video/nbqvjkOs6S_39027830.mp4 b/video/nbqvjkOs6S_39027830.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6ece96dd274d96a294a3fa804e4ab39d7ce39306 --- /dev/null +++ b/video/nbqvjkOs6S_39027830.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1c14db3b8bb7b7cfdaac714d3ff62d951688db6878b49f92f4c33fe00f4074c4 +size 2965234 diff --git a/video/ncqauwSyl5_39028774.mp4 b/video/ncqauwSyl5_39028774.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..e53921ea06ddefcf4b3804f35eb92fce5733a143 --- /dev/null +++ b/video/ncqauwSyl5_39028774.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5bdb95271aaa9037bfbc06a40e019ac783aa671e8984061cee0d5dc21b66248d +size 2566394 diff --git a/video/nd8Q4a8aWl_39028090.mp4 b/video/nd8Q4a8aWl_39028090.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0c9927613e69b05c778dedc1693a10c78c05974b --- /dev/null +++ b/video/nd8Q4a8aWl_39028090.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca55f6bed189347a00c4b6b7ee18f5682f4b77ed9e82f802a85bb019c3612c29 +size 2823283 diff --git a/video/nfq3GKfb4h_39025100.mp4 b/video/nfq3GKfb4h_39025100.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b8c10aebcbac066ec747599ea9fcd6057d645309 --- /dev/null +++ b/video/nfq3GKfb4h_39025100.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b9166dab22d3e96dfda6785370210d2e2c705cedd6382bf17388fa6a1df71930 +size 2765963 diff --git a/video/njvPjG0BfK_39026042.mp4 b/video/njvPjG0BfK_39026042.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e8e50a3accde99086184dff285e46b1a9704ab94 --- /dev/null +++ b/video/njvPjG0BfK_39026042.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:518c62dd899edcaa0b13f5531a7a9caad8aec171771518c6d30b64fc41df44a1 +size 309980 diff --git a/video/njwYBFau8E_39028706.mp4 b/video/njwYBFau8E_39028706.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9f48efd8cfcd7060831284d5dc7404217bc6dd0e --- /dev/null +++ b/video/njwYBFau8E_39028706.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39c0dcacfda305e1cade04aa8deacfcfbf9e9bf9072a8cb9120757e75e201122 +size 2484517 diff --git a/video/nnicaG5xiH_39018679.mp4 b/video/nnicaG5xiH_39018679.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..018cba73a4ef5912f44fe9e4a1b388b2c2c17997 --- /dev/null +++ b/video/nnicaG5xiH_39018679.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c643437468f2b04218cdfec2d65306dce3147f9a2d62204ff15151070863f548 +size 2287915 diff --git a/video/nrgyOGU7ZP_39024552.mp4 b/video/nrgyOGU7ZP_39024552.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7d4a32da860853f2573327d0d81b941bdbbe16c3 --- /dev/null +++ b/video/nrgyOGU7ZP_39024552.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b92358167b1db821ed8edc589820b49416ee205710866e3613ea9ee54e485e5 +size 1533837 diff --git a/video/nsNyDvNQTc_39018700.mp4 b/video/nsNyDvNQTc_39018700.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a0ab07eaf7a474356394472da55e7cab5beca41c --- /dev/null +++ b/video/nsNyDvNQTc_39018700.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64be162382b3af40da8e2299a87236c9fdd433dd74d39bca4a7ed83f1d0c1ad9 +size 2655452 diff --git a/video/ntF7D8tAlQ_39027434.mp4 b/video/ntF7D8tAlQ_39027434.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f677a11773859c0d11ec306172bfcdb66d3c2209 --- /dev/null +++ b/video/ntF7D8tAlQ_39027434.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa3babfb5c612746a3c4521f0c80341d65d27f4be747b13888450e09f5fdec5c +size 1000039 diff --git a/video/nv7ox1vd3q_39024655.mp4 b/video/nv7ox1vd3q_39024655.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..8afd5d618f3b8f2f09d5f821d81eaabee9336cfb --- /dev/null +++ b/video/nv7ox1vd3q_39024655.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3c5a41ea9dc8f18efd75f0f0e0478f0320b28eff52accb556f849d8e6970bff +size 1551915 diff --git a/video/nw4TWuEPGx_39028563.mp4 b/video/nw4TWuEPGx_39028563.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..940b0c25a4fbb3a08e34017ff1bc529b19f48694 --- /dev/null +++ b/video/nw4TWuEPGx_39028563.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13fa5b5041faba30b5a55226697c45b02d1ea7e83681f441621e6ab8b401240b +size 1832723 diff --git a/video/nw6ANsC66G_39026304.mp4 b/video/nw6ANsC66G_39026304.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3f3a80f717c7936ceca47f39dfcc976119bd604f --- /dev/null +++ b/video/nw6ANsC66G_39026304.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8774e9e2cdd5057502c2f7cdbe4df8303eb00e94915db02827feb9aa63a0e78 +size 2762768 diff --git a/video/nw8cXoNvep_39026729.mp4 b/video/nw8cXoNvep_39026729.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..42dbdf47c11c56e1d6f1b3fb5a379141b8ecd0b5 --- /dev/null +++ b/video/nw8cXoNvep_39026729.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f5122e6663fdf3cac7aeec22b640018fdbacfd3dbc7ab4938a7858c397424bf +size 2837110 diff --git a/video/nxumYwxJPB_39027547.mp4 b/video/nxumYwxJPB_39027547.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..32284a8df83ecbc4a7053d478c3ef60c3b3ef38c --- /dev/null +++ b/video/nxumYwxJPB_39027547.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a63607ecf8d82963d1dbdf6aa86469087de11928170cfae7e6de83a852ee8525 +size 2724546 diff --git a/video/o4coDIby7e_39028243.mp4 b/video/o4coDIby7e_39028243.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c2badb9ea10ae602346ca93bd12e94fbb34fcba7 --- /dev/null +++ b/video/o4coDIby7e_39028243.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6ce8e5011439754f1e16cbc971661b2f4e6861245c16dd9a4afa022bd2146daa +size 2371015 diff --git a/video/o7DOGbZeyP_39026629.mp4 b/video/o7DOGbZeyP_39026629.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9c791e71d9ebb37a5951a506481caea530d85914 --- /dev/null +++ b/video/o7DOGbZeyP_39026629.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2006b552399d90533fb63de74a71d49a44ad5144058c530a7c8e6079d8532a19 +size 2705270 diff --git a/video/o8m4RM5mBk_39028779.mp4 b/video/o8m4RM5mBk_39028779.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6f371b6bcf0bfd9b7c76c42a70eaaddaa5c625a4 --- /dev/null +++ b/video/o8m4RM5mBk_39028779.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bfa54a1733a3fa8cec609e12ac282eb991e38671f9678fbd3cf238c879d23859 +size 2975080 diff --git a/video/oAMArMMQxb_39017391.mp4 b/video/oAMArMMQxb_39017391.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..457c8d0072b202f6603ad4de3de8888bf874160f --- /dev/null +++ b/video/oAMArMMQxb_39017391.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f287bd57763db84649e359eeca7bc9fecd1999143cc662145948dd3c56a8b7c6 +size 2014256 diff --git a/video/oBvaZJ1C71_39025336.mp4 b/video/oBvaZJ1C71_39025336.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..f8216ca3da0decdf1626c65ce6bfa33db112ce2c --- /dev/null +++ b/video/oBvaZJ1C71_39025336.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5b323e7cdcba47ff75c670038ef366dd09ffd887177609e24da24df0deead1c1 +size 2567107 diff --git a/video/oEF7qExD9F_39018860.mp4 b/video/oEF7qExD9F_39018860.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..52bc9a0447d15d611fff72932d2714eda99a7a5e --- /dev/null +++ b/video/oEF7qExD9F_39018860.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8d2b9a28dc3711cd1b90cfda70b1743214a54eaa884ba980b486896a1634302 +size 2419848 diff --git a/video/oFgTScAsBr_39028529.mp4 b/video/oFgTScAsBr_39028529.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6e4227c10c93c6d3db054789ce74f4cb115c9067 --- /dev/null +++ b/video/oFgTScAsBr_39028529.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a94fdff21a78790c2ecd35ce140ab16159defd72f5c89ec4a9b8eb0cd8d9005d +size 2498766 diff --git a/video/oGNdBvymod_39017389.mp4 b/video/oGNdBvymod_39017389.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bc1b0841dc491c3dc514e60204b438747ace61bb --- /dev/null +++ b/video/oGNdBvymod_39017389.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8cd937a4830c961d6b0e58632bbe36f8d301c789e0a7b755f22d1ca0ff829d56 +size 2816406 diff --git a/video/oLcPadFrY3_39024762.mp4 b/video/oLcPadFrY3_39024762.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e53ff9034dc2c895497d47f295425fdbdb4c5b12 --- /dev/null +++ b/video/oLcPadFrY3_39024762.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1726d128796ec80bd8f9e3c5f5f735de35a7e8d7985fa313e18d27be97b5ec00 +size 2482482 diff --git a/video/oMHpejyGdx_39026233.mp4 b/video/oMHpejyGdx_39026233.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9685a39a48f3e2ad21180eaac96df54dc08cfd76 --- /dev/null +++ b/video/oMHpejyGdx_39026233.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83c7d411d83ec3f21bad158dd69bde450a12d9265d4527894ddfc6fddd00a9ea +size 2173291 diff --git a/video/oMLQB4EZE1_39017386.mp4 b/video/oMLQB4EZE1_39017386.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7edc5ee6c954bb964e31d80b5bfd6e86e2402871 --- /dev/null +++ b/video/oMLQB4EZE1_39017386.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47cb6c6d60efd32b36e85d117da37a71112eaf8a3448340226673356ec4b1d94 +size 2346862 diff --git a/video/oMNkj4ER7V_39017385.mp4 b/video/oMNkj4ER7V_39017385.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c7d65dc6186c24b33ec584bb9eecf642a7923ccb --- /dev/null +++ b/video/oMNkj4ER7V_39017385.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5081961dd011cbf3f02a4e13448897bed39d83bdffc26a562f3201c8e501340 +size 2296646 diff --git a/video/oNMnR0NJ2e_39027041.mp4 b/video/oNMnR0NJ2e_39027041.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..23fd5dc063f165c9802f13cf7adc236d38698906 --- /dev/null +++ b/video/oNMnR0NJ2e_39027041.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1ec366e6084dd224f0fa1b894676136b689a966977c71d031fa788d8e518e27 +size 887955 diff --git a/video/oO6FsMyDBt_39018919.mp4 b/video/oO6FsMyDBt_39018919.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..0d89968b6bdd610ff1af6559f18c603a0b8c7bac --- /dev/null +++ b/video/oO6FsMyDBt_39018919.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5266f9855c306edb3d24844772ad5f484d22af089d52dd508b6143b9ab8d97c0 +size 2595479 diff --git a/video/oOwDQl8haC_39017383.mp4 b/video/oOwDQl8haC_39017383.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b48235eeb65ec036db1252c2299981ef24277003 --- /dev/null +++ b/video/oOwDQl8haC_39017383.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d3adc3d36f2065d744a8365ee2cd862f715a267701e1e4d725f855912f05972 +size 2659260 diff --git a/video/oQ1Zj9iH88_39026407.mp4 b/video/oQ1Zj9iH88_39026407.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..042b09de6b9aa9005c3b5a1a228359aa1a09344b --- /dev/null +++ b/video/oQ1Zj9iH88_39026407.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0eace9e47ce9fdf599fda938b94454ea4d2332826585e66fcea41fb09d220f42 +size 1880816 diff --git a/video/oSOVME9kl2_39026852.mp4 b/video/oSOVME9kl2_39026852.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..edf28825253b51ebc2510adb4945bba5215c5e2b --- /dev/null +++ b/video/oSOVME9kl2_39026852.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:98cdba8ff2e82ee90fd5016dd06b98db0c49713c4be069e0ca521b2ad0f4bac1 +size 1633418 diff --git a/video/oTRwljRgiv_39017045.mp4 b/video/oTRwljRgiv_39017045.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..455b8d20b777332d2c6d1ffe15c05d6ea9cd006d --- /dev/null +++ b/video/oTRwljRgiv_39017045.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d4d240ebe2c49fd568e1726b53d77e8a24c3e8b1c6a3e29f24b4a46e36ed8349 +size 2512930 diff --git a/video/oTzydUKWpq_39026067.mp4 b/video/oTzydUKWpq_39026067.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e90503a4f70ed7fe8484169436df3117bc4de36a --- /dev/null +++ b/video/oTzydUKWpq_39026067.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:125209d08addb0f852d9c0c6b66dddc1606f05f6e88a7f87903ec43654f9d53a +size 3336549 diff --git a/video/oUXiNX5KRm_39026523.mp4 b/video/oUXiNX5KRm_39026523.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4aca28ca3a447fe774499e698eb3717c0b6c1d58 --- /dev/null +++ b/video/oUXiNX5KRm_39026523.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1fe1f0d0092c259e76685d44a2bc374a7676f16b6f553d39a99f37e3337abe12 +size 1963619 diff --git a/video/oX6aIl9f0Y_39024951.mp4 b/video/oX6aIl9f0Y_39024951.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9f04498cd931a959f84c5fc6fa0f3a64e942f374 --- /dev/null +++ b/video/oX6aIl9f0Y_39024951.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c276394bc19a95793871f3cd19d4edf1e2529c727767a9c1816fe7e35d2e84ea +size 2694205 diff --git a/video/oXHyYHp4Zb_39027433.mp4 b/video/oXHyYHp4Zb_39027433.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4a1e086d3c3bfc3b638e6b8ad0f72d63a3a88fbb --- /dev/null +++ b/video/oXHyYHp4Zb_39027433.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2dbd7dacd7c39048ea4ab5a771bf96b8c34a2a5035f579a07a0a7df0604cd258 +size 2780363 diff --git a/video/oYjPk8mqAV_39017381.mp4 b/video/oYjPk8mqAV_39017381.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..75bd930c27d11add6c65d9793aa950cb002b7e05 --- /dev/null +++ b/video/oYjPk8mqAV_39017381.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6be6651af1a11c100d19c13d1c0e3cc19dcf9087d8c8bb4a9084f8dcd8dd8677 +size 1820662 diff --git a/video/oe7MfqFK1M_39028023.mp4 b/video/oe7MfqFK1M_39028023.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d521654dcc302d884bc58f3461a311c12d3048b8 --- /dev/null +++ b/video/oe7MfqFK1M_39028023.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6eda6833b390152cecdc14a82091cd6428fc87a62744ac52d78fa78a9ae6fba +size 2611469 diff --git a/video/ogk236hsJM_39024536.mp4 b/video/ogk236hsJM_39024536.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9ed84109e85bad85fd2f27daee4152a2e77102cf --- /dev/null +++ b/video/ogk236hsJM_39024536.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ffb3488979b4dadcc5ebe71ccab1f76b989be3f39ee10f453c2aca4a58667a13 +size 2330354 diff --git a/video/ohvXBIPV7e_39025695.mp4 b/video/ohvXBIPV7e_39025695.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e172041787d623b4a3b1feeade3615de45aca5ec --- /dev/null +++ b/video/ohvXBIPV7e_39025695.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b89267fba41dd23193f445ed9028188722ab062a1048a2d09d976e0af0647a40 +size 3254129 diff --git a/video/ojIJZDNIBj_39017378.mp4 b/video/ojIJZDNIBj_39017378.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c02ef9dd28fdfd3b31561eac4a333b258a0c02fd --- /dev/null +++ b/video/ojIJZDNIBj_39017378.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b9d1d181bd3abd579af733966e2c7d679a8cf71020d480704aa9021736ee7e7 +size 3252065 diff --git a/video/okYdj8Ysru_39018921.mp4 b/video/okYdj8Ysru_39018921.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bd5fec37aaa7f223635d06ece47740228d67e87c --- /dev/null +++ b/video/okYdj8Ysru_39018921.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5b6c59827f87e8cc6cdf1328741b257035543bfd56f2e9a757f60be29944202 +size 2512008 diff --git a/video/opaRhDvQRD_39025395.mp4 b/video/opaRhDvQRD_39025395.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1cfd2ee2b4f9aa0b421997e28e7903040681ec4b --- /dev/null +++ b/video/opaRhDvQRD_39025395.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c08480984cd213713766732c3989a94734a4aa53d2cc5474f56e0ed1fe0f43a4 +size 2628570 diff --git a/video/orxQccN8Fm_39025467.mp4 b/video/orxQccN8Fm_39025467.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bdc63c42bdcc6057a8bacb94cbc5150b0709595c --- /dev/null +++ b/video/orxQccN8Fm_39025467.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7faee549e18f6d4c74d9f092c8dbabf35b39b06aa0af17d11963a9088b6ef324 +size 1288290 diff --git a/video/ouoBW2PXFQ_39026942.mp4 b/video/ouoBW2PXFQ_39026942.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..da33de11dd1aa6341b7ef6d32451907bdeb10297 --- /dev/null +++ b/video/ouoBW2PXFQ_39026942.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:893d1a9d1f977b8443b6ef22db45644328dda71bf550863c6c5012028a445110 +size 2195540 diff --git a/video/owuEcT6BTl_39027240.mp4 b/video/owuEcT6BTl_39027240.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..0f7036a9d670a777a8c59320efb924e07200b14b --- /dev/null +++ b/video/owuEcT6BTl_39027240.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a03f255965a593c700a328578058036fbe0c43bd206689e0bfaf28cb493bf226 +size 2905011 diff --git a/video/owziuM1nsR_39018836.mp4 b/video/owziuM1nsR_39018836.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6fa2d177eeefac4a5525214d1cfe6b77d3d08c14 --- /dev/null +++ b/video/owziuM1nsR_39018836.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee958722773ff1fa5178f1f68b22d65bd85c3ab7fc3d635369c7ec35210c632f +size 2672073 diff --git a/video/ox2ATRM90I_39017374.mp4 b/video/ox2ATRM90I_39017374.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6c5f29ca7b1932cac311fbb3f326ea25be0ca37d --- /dev/null +++ b/video/ox2ATRM90I_39017374.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b40126388f9e4899a44b7d686d9920b5dd84a19ff5f6b1ec2b751f9f16ad13b7 +size 2529380 diff --git a/video/p3hNrpeWMe_39028336.mp4 b/video/p3hNrpeWMe_39028336.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a20f2556264c38ef0f7b1174f10ef0a78fe40fde --- /dev/null +++ b/video/p3hNrpeWMe_39028336.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77642bf55a222e97f0462fc84a84c7eec43bb4bb90239093093fa8775226b40e +size 3139486 diff --git a/video/p3nPHMpx04_39024801.mp4 b/video/p3nPHMpx04_39024801.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..27b95a7956e11f36dd74a52a7ee1010836dbf622 --- /dev/null +++ b/video/p3nPHMpx04_39024801.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:69353e5f10ad8bc2459bd3e37ceaaa48b8d65733b3c7654ce0ab078cbab9b2d9 +size 1993536 diff --git a/video/p3tSEFMwpG_39024694.mp4 b/video/p3tSEFMwpG_39024694.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3b40080a2ca8e46da06e01b548653e9def962376 --- /dev/null +++ b/video/p3tSEFMwpG_39024694.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05eb776896cb8bdf2c87288d58eba064c093826e80680073f96cce76396e7594 +size 1749245 diff --git a/video/p43ObIwJFW_39025150.mp4 b/video/p43ObIwJFW_39025150.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..568bf2342d96726b8485263187475209e0b31ad9 --- /dev/null +++ b/video/p43ObIwJFW_39025150.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5aab83083f8e772583663df30b65209ac9701d98bad249f66b3f28d6bd22d5cb +size 2943358 diff --git a/video/p50Dyqk0GX_39024862.mp4 b/video/p50Dyqk0GX_39024862.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a5d4e213965e0af12f4cb495b685c950607d5d94 --- /dev/null +++ b/video/p50Dyqk0GX_39024862.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3fb106e8f89580c52e787754688e558b8fb6bfa77f77aafb10a5c1ec5e836ab3 +size 1723256 diff --git a/video/p54CYwdjVP_39025316.mp4 b/video/p54CYwdjVP_39025316.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e60ac473c249e07f0567bbe60d23389add3f930e --- /dev/null +++ b/video/p54CYwdjVP_39025316.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5c7bc835da05614951f75be475a77d1a092892c9aeb03f83fe9d60d749310993 +size 2899268 diff --git a/video/pASJxzMJb7_39027098.mp4 b/video/pASJxzMJb7_39027098.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..5ee720b1e2da80fa183bc2a150ecce025354ad6c --- /dev/null +++ b/video/pASJxzMJb7_39027098.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d38e45bc769221bad7b3e6792047b7b8d8b3a7c7c306929a2947b97a14e8f04 +size 990404 diff --git a/video/pB1FeRSQxh_39019203.mp4 b/video/pB1FeRSQxh_39019203.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..787079108972c97dcc9adb90b2aa4ffbd4fe95f5 --- /dev/null +++ b/video/pB1FeRSQxh_39019203.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b13c8781de43702e38094a408e4a4dbaade97dcce3cdd2c997c306a7de56cc19 +size 2234531 diff --git a/video/pFOoOdaiue_39017358.mp4 b/video/pFOoOdaiue_39017358.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7b2a1cbd6e569d203b21ddc9e6c6039a50bc93fd --- /dev/null +++ b/video/pFOoOdaiue_39017358.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f247f0053f74e398781f9cdc235aa5387ff8e06e2a7f717a06db5d4be5b8219f +size 1385912 diff --git a/video/pG380vLYRU_39028731.mp4 b/video/pG380vLYRU_39028731.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0a559dd921dd9f39e09861ea9c1c57d7c6a1d09b --- /dev/null +++ b/video/pG380vLYRU_39028731.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74d8f2003d2ed98333ed1879d36c0a29bbc7947e121ce077b5ae5e792d6200aa +size 3315390 diff --git a/video/pGEY8JQ3qx_39025413.mp4 b/video/pGEY8JQ3qx_39025413.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e8c2392a3c1c5c70abac5c9047b9413dcf4fa6c4 --- /dev/null +++ b/video/pGEY8JQ3qx_39025413.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84e6870d6c15022d62dce207159355ef12db497f475134760fbe1377bd4e9445 +size 2477067 diff --git a/video/pGOBEYcXzs_39025208.mp4 b/video/pGOBEYcXzs_39025208.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f7b796930ddac0cf6cf64134f89a6424adbd3533 --- /dev/null +++ b/video/pGOBEYcXzs_39025208.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3acd92284f7e38e2a94958d6d0322ed2736394931c256c644067b925f515296d +size 1925998 diff --git a/video/pJlFURyTG5_39026691.mp4 b/video/pJlFURyTG5_39026691.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cbeec190ef60837c86b8f888cba4e6b7261f8bf4 --- /dev/null +++ b/video/pJlFURyTG5_39026691.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:85b07d0fc4075b408f9587a3fee796d2cb97a3cbac3afffd186deffb870eb3db +size 1297352 diff --git a/video/pLoX8Og3bH_39026483.mp4 b/video/pLoX8Og3bH_39026483.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2a964901fd877856289d38ab3e6139bd773a6123 --- /dev/null +++ b/video/pLoX8Og3bH_39026483.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e95781f9d0b2bdafb45932eca99f7189b369987d951920d2b4ee7f8ec9835ac +size 2368932 diff --git a/video/pMaCRgu8GV_39024901.mp4 b/video/pMaCRgu8GV_39024901.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5764b8f1a63c8bbccf410c1e54a3c894124ad128 --- /dev/null +++ b/video/pMaCRgu8GV_39024901.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7c7f3ddf3f5e5d8f45248007b3e4fde157c938922428ffa0a6373aa7bc16e40 +size 1922211 diff --git a/video/pOXgdFEB7q_39027460.mp4 b/video/pOXgdFEB7q_39027460.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..1ec3c96d8e9cf8a7bb2f134a484497700dbeddc5 --- /dev/null +++ b/video/pOXgdFEB7q_39027460.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0eae20adce153e3254651d41529fc94a4749a97c2ff954832fa3d3afb84de3a4 +size 3563154 diff --git a/video/pPSWHsgqRp_39026708.mp4 b/video/pPSWHsgqRp_39026708.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0ea3d857d60edb2f08504ef31d56c65770d949fa --- /dev/null +++ b/video/pPSWHsgqRp_39026708.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c94a2aa3f1c52e632e64a808840b9f3b09553c5f8094b89c46d701c3cd43f055 +size 1349506 diff --git a/video/pPeXYByHNd_39025957.mp4 b/video/pPeXYByHNd_39025957.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..62c7821cc72818ad49ffac1407579a1b6257bb9f --- /dev/null +++ b/video/pPeXYByHNd_39025957.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a714901ab61cc3d13dc4892c77417cb8e8a726ffbf1a11a04e9221ec2f09cef9 +size 2759047 diff --git a/video/pRQmRaonxf_39028663.mp4 b/video/pRQmRaonxf_39028663.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..474657cca44373ebc716870adc40aacdec20d897 --- /dev/null +++ b/video/pRQmRaonxf_39028663.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c10c10ff50b44db4cd617046daacfa81f536004ffc95d9691dd5ce87dcb8cfd3 +size 1501790 diff --git a/video/pU0z2sNM1M_39026362.mp4 b/video/pU0z2sNM1M_39026362.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5c38a1a47adb07d67f7f666382c1bc8a9bcae90d --- /dev/null +++ b/video/pU0z2sNM1M_39026362.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:609aeed36f088e8660f2d788bf4545df755490e7928d136b83c48c0770ff6b31 +size 2374478 diff --git a/video/pW9Jwim918_39026716.mp4 b/video/pW9Jwim918_39026716.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ca4cf97bdf8a8daef12d2da8e51d628b57416b01 --- /dev/null +++ b/video/pW9Jwim918_39026716.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21bccfc039d63cb46ace5238454e873407a8185a4fa582b54e65dc3488c5382e +size 2807792 diff --git a/video/pWowK7jqok_39028636.mp4 b/video/pWowK7jqok_39028636.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cbd1816a47f06b2cf3d40ff0e096c579a820bb94 --- /dev/null +++ b/video/pWowK7jqok_39028636.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2cc1aec3cb02443556463462c9233b1a601099964daf4b00d6529c8141e24642 +size 787386 diff --git a/video/paYwtPBpyZ_39025655.mp4 b/video/paYwtPBpyZ_39025655.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ba2fe599a90a9a8b7ec60677df0cb3386836d34a --- /dev/null +++ b/video/paYwtPBpyZ_39025655.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a4125b833ee7c3d04fbbfed414bdcd5025c08ad2ba0940b3811b249be7c5130 +size 3274905 diff --git a/video/pebP89l4v6_39026186.mp4 b/video/pebP89l4v6_39026186.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..14df6664786b7ad42f15a549e68e96afcf68d068 --- /dev/null +++ b/video/pebP89l4v6_39026186.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:31c55b9cd4c010f5afb9b19ea381adc392d63528bb4a02b56cf56f229390c10e +size 3373873 diff --git a/video/pjD08dtAh0_39026601.mp4 b/video/pjD08dtAh0_39026601.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..a74a7e29195e736f64e59db7dac0110bed15ca9c --- /dev/null +++ b/video/pjD08dtAh0_39026601.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c47ae8f111c11cd28ccf08b297a741136a0931f360ccef5890bdb131981fd790 +size 2309781 diff --git a/video/plH8gW7tPQ_39025253.mp4 b/video/plH8gW7tPQ_39025253.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..eefe600c975b92f7dcca0403e8a84be65eea4b63 --- /dev/null +++ b/video/plH8gW7tPQ_39025253.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:246b7b080fc4b6ffbb5fbe10455cb3fac86fe5a88b143adfa4ae7e2029e44048 +size 2276821 diff --git a/video/pnmUiVAGnv_39025687.mp4 b/video/pnmUiVAGnv_39025687.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bb9a737b9536eca8709197493a15cc9284978787 --- /dev/null +++ b/video/pnmUiVAGnv_39025687.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:55dd483137a7d316727ae2a6bdf20a52c63ec48352c1eb0ea1a234762c3519fc +size 2880218 diff --git a/video/pqD7ckR8AF_39027721.mp4 b/video/pqD7ckR8AF_39027721.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8bb71d84add371554f0e8d97e7ed2ba29675f343 --- /dev/null +++ b/video/pqD7ckR8AF_39027721.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:03c24a6111c5105e8844da54fac06d5a28f23601ebd53a3a145dc85a06931ace +size 3107999 diff --git a/video/prXfM5X2Db_39026095.mp4 b/video/prXfM5X2Db_39026095.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..55b4ed7b170e5438fb6b3503876102fa64e2c5f5 --- /dev/null +++ b/video/prXfM5X2Db_39026095.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f7135727be7ecd6e94e4a09d6e868a8691422a89f5fc0b27ea35a4ebee6f928 +size 2129427 diff --git a/video/prgxz9fYbf_39026726.mp4 b/video/prgxz9fYbf_39026726.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..76318721b2a74f360ac89715a697ea7d598b1c30 --- /dev/null +++ b/video/prgxz9fYbf_39026726.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aff315ead28c46e2312881a1dc134ba11cd9d81afed514284b100967e01ef57e +size 1701081 diff --git a/video/pwKkNSuuEs_39027353.mp4 b/video/pwKkNSuuEs_39027353.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e90674c50b4892c2012cd41ff615be58c5d8ff84 --- /dev/null +++ b/video/pwKkNSuuEs_39027353.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dd0c2191abd819181c1d93aad935ff1cf4432f133e454f691fc80d5f5f445acc +size 2175870 diff --git a/video/pwLdvYIMrF_39027401.mp4 b/video/pwLdvYIMrF_39027401.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7f1dbe15f617ffaa42b9a7fec080e1983207e3bf --- /dev/null +++ b/video/pwLdvYIMrF_39027401.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2c6a5c9eaa48f4da956d32274cc5663486d0a65f7ea7b0d9db10e9091213f36e +size 2166974 diff --git a/video/pwRVGRWtGg_39026294.mp4 b/video/pwRVGRWtGg_39026294.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9d854019261901df8ecd883bb60549fa5b3824fe --- /dev/null +++ b/video/pwRVGRWtGg_39026294.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16b1208abd8dc23149e8093d19f3d95c881754e80a50fd5f193bdc3e4c18db97 +size 2648790 diff --git a/video/pzElnMrgSD_39017338.mp4 b/video/pzElnMrgSD_39017338.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..b3892c83452e3c2c0a6b8e5c4b2914281c678873 --- /dev/null +++ b/video/pzElnMrgSD_39017338.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e34912488564f9204f4e3ecaf3117a87cbbf793814554a6f8dfb015e2d65fed +size 2064022 diff --git a/video/q7TxGUWlhD_39028047.mp4 b/video/q7TxGUWlhD_39028047.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d17ebf5267f2b9b606dbb4e83163435919cb040e --- /dev/null +++ b/video/q7TxGUWlhD_39028047.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:60d02d255e6e6060c38f4a9f4b659821a2e1146de2ee8366df5d04df35b1588f +size 2383998 diff --git a/video/q9dKv1AK6l_39025932.mp4 b/video/q9dKv1AK6l_39025932.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5152c5cad67ad8c249f34769b4948e7bd60d36da --- /dev/null +++ b/video/q9dKv1AK6l_39025932.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4998f79c129d12a3ee32f50a7d3271cab1e92c1fcf68563e723be3e606988e97 +size 944024 diff --git a/video/qAP6RyYIJc_39027296.mp4 b/video/qAP6RyYIJc_39027296.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..63de9c952c68e88f130d021b8f79f396932f8ecf --- /dev/null +++ b/video/qAP6RyYIJc_39027296.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74cdd13273388741f274d4948253c9559e3e11a03afde9c586a89c7e9f30bf18 +size 2651973 diff --git a/video/qCpCy0EQAJ_39026921.mp4 b/video/qCpCy0EQAJ_39026921.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9ed53b6178ef01374d164ee8e6222a727c2553e4 --- /dev/null +++ b/video/qCpCy0EQAJ_39026921.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:361602f3f1bede266b4cccd3ad5fe27b50f4c4fdf996010e5d5fe209efaa96a4 +size 2229431 diff --git a/video/qDuqp1nZZ6_39027480.mp4 b/video/qDuqp1nZZ6_39027480.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..47c38a2640556984622a919ac4b89510878bd0bf --- /dev/null +++ b/video/qDuqp1nZZ6_39027480.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ec14f80690c29c0671a4fef2bb8d5931e45f90d340ae82ec42900eacb22082a +size 1247278 diff --git a/video/qGiZQb1Khm_39028221.mp4 b/video/qGiZQb1Khm_39028221.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ccca8061b6021805601c3e33f8fc0dda5b074d45 --- /dev/null +++ b/video/qGiZQb1Khm_39028221.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:35242ece7da5e8d7db3d760befe15cbf7a4b96673dfeaa955d18186495a9f9df +size 2685105 diff --git a/video/qL9gogRepu_39017328.mp4 b/video/qL9gogRepu_39017328.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7befbd7faeaf80d2c6464def385d447c4cba4a9b --- /dev/null +++ b/video/qL9gogRepu_39017328.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5307706191c710c522137a0092dd67b8223dc448d1078330b80653c9edb63916 +size 8306 diff --git a/video/qLnXPVvwLx_39026243.mp4 b/video/qLnXPVvwLx_39026243.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6196cbfc8a462a11eef1cd316b8aeaf9c6239e8a --- /dev/null +++ b/video/qLnXPVvwLx_39026243.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:672e14569a94acea851f7d696a99a5fe1bc5f1ca6eaf8e263ec3bd240c53e42a +size 2552715 diff --git a/video/qNXRXUC90b_39025779.mp4 b/video/qNXRXUC90b_39025779.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..87e3be14df6b2cfad5d170c76381977d513834e9 
--- /dev/null +++ b/video/qNXRXUC90b_39025779.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6e04869a5c6d482744fa4f4ddb7d66044aa59987cbe59d9e5ecda84b78113fb +size 3370786 diff --git a/video/qOSFiJdVkZ_39027208.mp4 b/video/qOSFiJdVkZ_39027208.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..91a0f0117b3bc179a12220dcc52554b4370ac5d3 --- /dev/null +++ b/video/qOSFiJdVkZ_39027208.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:11e487407db8b6786cdf8caf76b45b360cb480ea2a56dac6a8d3dc16cc6a1ada +size 1888821 diff --git a/video/qPFsIbF3V6_39018781.mp4 b/video/qPFsIbF3V6_39018781.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d60c8d494cf01c156b06d741ef39404cac4ffad8 --- /dev/null +++ b/video/qPFsIbF3V6_39018781.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d464bed6447ab2096dfdfecd47010771f9a198c5eee1c32588a819530f78c479 +size 2672142 diff --git a/video/qTypwXvNJa_39028251.mp4 b/video/qTypwXvNJa_39028251.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4916172e65224e932479fc2501b530a70e3b91a5 --- /dev/null +++ b/video/qTypwXvNJa_39028251.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5fb7245b99811397ce7de115da8679fd68f138ed4e84ff0a1440d3d69530044b +size 2132339 diff --git a/video/qV83K9d5WB_39017324.mp4 b/video/qV83K9d5WB_39017324.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..14c83cd242d649b9931c17f24e7ca5b34b770505 --- /dev/null +++ b/video/qV83K9d5WB_39017324.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b7d871314460aa64f9bd2e78cb44e2bbf7998313af86d6a75bdaab0e1d4b4cd +size 816323 diff --git a/video/qamfjyhPeg_39028333.mp4 b/video/qamfjyhPeg_39028333.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..72e2db3f0893377d71a7213197064ce16836df74 --- /dev/null +++ b/video/qamfjyhPeg_39028333.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1bc1c294f63dd4f38d7b6c5936910367e45cd89805660c5a52f09976a47ca330 +size 2821792 diff --git a/video/qbvt3ocQxB_39028744.mp4 b/video/qbvt3ocQxB_39028744.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6728d8287fe05c4d1344dfc16f306953950cec15 --- /dev/null +++ b/video/qbvt3ocQxB_39028744.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2f053bcc0a50f871c0831007f4b33aaeb70fc0ed9a62a820b22c7d46b860d8b +size 2705368 diff --git a/video/qd8blc0o0F_39026250.mp4 b/video/qd8blc0o0F_39026250.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6bc5fddd6693458708263a0802057338ba4d116b --- /dev/null +++ b/video/qd8blc0o0F_39026250.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23002fa9ce996f2cfc1e07be7d9dc8f018cbe4395ef12c4910432c7081aa3373 +size 2328939 diff --git a/video/qdV1vp1AtL_39027150.mp4 b/video/qdV1vp1AtL_39027150.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..aa6ee24b2f2eb513cf40363fced9b6e93730a6c5 --- /dev/null +++ b/video/qdV1vp1AtL_39027150.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1db6473ae5afa8380932828bf65d29f658eee42d2aa5ecd9ede82cc9e1064766 +size 2718373 diff --git a/video/qf1ncViBr5_39026737.mp4 b/video/qf1ncViBr5_39026737.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f5513056190335d1b85c2f33bfa5b7bb0a5bceb6 --- /dev/null +++ b/video/qf1ncViBr5_39026737.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:f0763121d6e8bdec18a47b89d8fc1e68095c73c381e230fd0e30a8a3edb15301 +size 2887740 diff --git a/video/qlH21Ig1IC_39024866.mp4 b/video/qlH21Ig1IC_39024866.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..eab7c7dfd3471a1eaa91aa373dfb5571499bb001 --- /dev/null +++ b/video/qlH21Ig1IC_39024866.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:031e5b0539ccbb87864cae34d0dfa8b32633603debefb465bb09e4e47db3c1d4 +size 1743858 diff --git a/video/qmXedvwrT1_39017297.mp4 b/video/qmXedvwrT1_39017297.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..41a194ccb1d1a58f3ac0dfb4691219c8a080166c --- /dev/null +++ b/video/qmXedvwrT1_39017297.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:35a2793e850f607605efecffb292c3fae01f3f9ad99b07d0cfb2465255d7ae20 +size 1950470 diff --git a/video/qpeAtfUWOQ_39026112.mp4 b/video/qpeAtfUWOQ_39026112.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..629ea5fd06b2e2dffdbd3267de469492d48559b1 --- /dev/null +++ b/video/qpeAtfUWOQ_39026112.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:97bf08c04cfb8506ffbb3785a2f89d35afea0064723545af1a713f81997024e7 +size 3184364 diff --git a/video/qrfp4eeZ47_39028791.mp4 b/video/qrfp4eeZ47_39028791.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c1978adf0e7585fa8cc93e706a32f0bac8a710d5 --- /dev/null +++ b/video/qrfp4eeZ47_39028791.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f9cece3fb7e0a85f2d9bddec494ba1586de195fbada33256be7953262196edb +size 2660556 diff --git a/video/qwl3EiDi9r_39025858.mp4 b/video/qwl3EiDi9r_39025858.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f7ce882bd9a89855bcd3d117d5e1ec928a7b35b4 --- /dev/null +++ b/video/qwl3EiDi9r_39025858.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0ac5020f1151ff9e08d07c514f7edbe999c7674716a123f9317cb826cb6f806d +size 584434 diff --git a/video/r0eSCJ6qsL_39025204.mp4 b/video/r0eSCJ6qsL_39025204.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ccd0010d7d76b81873a956b462762161f231a4c7 --- /dev/null +++ b/video/r0eSCJ6qsL_39025204.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e890b6741de00bb81c5b6e8f4214069fa4822dcd4ac0fde065a217f855ef84f +size 2842272 diff --git a/video/r5nev2SHtJ_39025982.mp4 b/video/r5nev2SHtJ_39025982.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..db6f08431293a3e171e4db1ad8d3caf53f010674 --- /dev/null +++ b/video/r5nev2SHtJ_39025982.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ac9f37905efa1d78d384eafb0d4205ca13d55a02d11a66ba6095aef0c8c60c05 +size 2274220 diff --git a/video/r6V7EjANUK_39024570.mp4 b/video/r6V7EjANUK_39024570.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a279e44df240f2de0f296aefa0d7d3f5ac6715c6 --- /dev/null +++ b/video/r6V7EjANUK_39024570.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ccd60451ae2c18ee1061591a55aff5c344dc89b4288597d67268d651e3dc4783 +size 2274004 diff --git a/video/rCnZrFikX6_39027964.mp4 b/video/rCnZrFikX6_39027964.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5b793e71e042582d53de9704d7bc33bdb2d61c72 --- /dev/null +++ b/video/rCnZrFikX6_39027964.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:8c3d9f51c3885bd82a299b936d682e9b9ee9ede2fc48d279c8d4e8a57e1d26dc +size 2579433 diff --git a/video/rF1YRtZfoJ_39027891.mp4 b/video/rF1YRtZfoJ_39027891.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..be99522e580164ca0a88219332be6556775482ee --- /dev/null +++ b/video/rF1YRtZfoJ_39027891.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:650329e806a6ec77bab97be0048a7443918a9e7f643797aa79d44ef586906c03 +size 2871007 diff --git a/video/rI7oZj1WMc_39026744.mp4 b/video/rI7oZj1WMc_39026744.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..66f33f6e1ebf36a2a36538ea788b6ebd6fd9d308 --- /dev/null +++ b/video/rI7oZj1WMc_39026744.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06fbc1f48bd2102050f2b4be06998448aa3cd24e4b0fc66ff2a605d1b2249fe1 +size 2952493 diff --git a/video/rI80PHlnFm_39024578.mp4 b/video/rI80PHlnFm_39024578.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b83886b474e745e2b6e3c4bad141fcafd2ad5449 --- /dev/null +++ b/video/rI80PHlnFm_39024578.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a2a632e29c78e47f8d370b0f99a245322ca62e00a25d44f8404f17a7649be30 +size 2827175 diff --git a/video/rIOTceoNc8_39026393.mp4 b/video/rIOTceoNc8_39026393.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..db6e0f7a84cd267c81102cb473fcf6886a142f27 --- /dev/null +++ b/video/rIOTceoNc8_39026393.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6be8d13f7bf74f34d7a2eab6d9914384e8004db743942a42b7db08afbec2c3a7 +size 1631760 diff --git a/video/rIOl7KbSkv_39025872.mp4 b/video/rIOl7KbSkv_39025872.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..455f3dab3c9239479b3baf44565e2af24c4ebbe6 --- /dev/null +++ b/video/rIOl7KbSkv_39025872.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:04bf2f1087c170ad60ebdc0c3cefd1d38480beb1c519420619b7f405f00c95f8 +size 2277401 diff --git a/video/rM24UUgZg8_39024998.mp4 b/video/rM24UUgZg8_39024998.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..50240f41d1f0349b4664e429990b398091bcb950 --- /dev/null +++ b/video/rM24UUgZg8_39024998.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2803a1972257f14d6fc80d56672dc13ceb49994261ba8f39e6e0df2dc0942ea8 +size 2407417 diff --git a/video/rM3FFH1mqk_39025620.mp4 b/video/rM3FFH1mqk_39025620.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a647c1edd25c8d6c00ceb380d78418fa4cf4f12f --- /dev/null +++ b/video/rM3FFH1mqk_39025620.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:525f3f81f0b359d6ec7d3cfadfc47598f8b7fbfe1b8e3b49fc2d85af417c4c1f +size 2566273 diff --git a/video/rPgc5brxmT_39026908.mp4 b/video/rPgc5brxmT_39026908.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..df1b6cec6f7ccb48dc8e580c8f093afcf48593da --- /dev/null +++ b/video/rPgc5brxmT_39026908.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:707168e907df2d0c772f193f2c91949ed0bdb63406e9cfa9b394ddb778fa177b +size 1993089 diff --git a/video/rQYyWGYuzK_39026408.mp4 b/video/rQYyWGYuzK_39026408.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..525d765caf91b8b44b9dc20fe005d663a9f1bd91 --- /dev/null +++ b/video/rQYyWGYuzK_39026408.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d3c156b58a926c6c436134af8fadea246a7a4dbc09bcd14b157015fe3dd06832 +size 
3223573 diff --git a/video/rYjYwuM6yH_39024943.mp4 b/video/rYjYwuM6yH_39024943.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a928615857826710e2980d467d839e7e67dcb4e8 --- /dev/null +++ b/video/rYjYwuM6yH_39024943.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4487c2818de9caf3ca70d8c100d98cdb1cc6faf7d353da6a746340776f2fe95f +size 2300643 diff --git a/video/rYs2Dmn9tD_39028832.mp4 b/video/rYs2Dmn9tD_39028832.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..352bc44fdfe1965e6550f499175568d8da708cc2 --- /dev/null +++ b/video/rYs2Dmn9tD_39028832.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eb395aaefd9307741d84bf81c283b5687a767a9c6a501eea3f4c8f0b7ee199a6 +size 2401143 diff --git a/video/rafVvthuxD_39028886.mp4 b/video/rafVvthuxD_39028886.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bd3fef3b1aa014e08482fd248d56bb959cac999a --- /dev/null +++ b/video/rafVvthuxD_39028886.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:52946b4188b885d9eb7183c29b47c47a8c82f0997830708ab240bdede007313a +size 2993215 diff --git a/video/rajRJ6WKj2_39025551.mp4 b/video/rajRJ6WKj2_39025551.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c7e1e488a9311e67035469d8dd4eabc15c26133e --- /dev/null +++ b/video/rajRJ6WKj2_39025551.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:927ec672ee8b84e08a6414669f8afe56c02a927bc3e22ba9b3ad20af05aa67be +size 3062814 diff --git a/video/rbtnRsiXSN_39028184.mp4 b/video/rbtnRsiXSN_39028184.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..504e10ae86afe081ad335de350e239be0f97105a --- /dev/null +++ b/video/rbtnRsiXSN_39028184.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1629372e12acdf58496ac8f7da937822181a21ed1eea8cb7829dc2087a26b5c8 +size 1348208 diff --git a/video/re2jPCnzkA_39025634.mp4 b/video/re2jPCnzkA_39025634.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cbb88a8e931bf736c37defe4593fb3e568580f30 --- /dev/null +++ b/video/re2jPCnzkA_39025634.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:280bf25644dae6355ab90a323b2e7305806f7b4f8edb36b3d48a7abe982185e0 +size 1893639 diff --git a/video/rjSPDVdUaw_39027066.mp4 b/video/rjSPDVdUaw_39027066.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f9be0420dca23089906ce883dfc413ac3912436f --- /dev/null +++ b/video/rjSPDVdUaw_39027066.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f75ad8ff893623e54c2154be8a9744a93ff99a6bb368d598c6aa7c83ed9b03f4 +size 1896056 diff --git a/video/rkuVYosT2c_39024915.mp4 b/video/rkuVYosT2c_39024915.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6856d5042e1ca1023f3482ac0028a836fed84c53 --- /dev/null +++ b/video/rkuVYosT2c_39024915.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:69b281a5deeb539f166c83489c34637d81b888f059dbe5cfe00c26232e42b3c0 +size 2767475 diff --git a/video/rle9X7DQuH_39026616.mp4 b/video/rle9X7DQuH_39026616.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bfe71c54b5ee129f941edc7e290f8e71a3743c51 --- /dev/null +++ b/video/rle9X7DQuH_39026616.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c1e25909e18e13a919cfa9104b6c23748c46c2d23fd3a5a2b4f5446d20142649 +size 2872878 diff --git a/video/ruGY8v10mK_39019027.mp4 
b/video/ruGY8v10mK_39019027.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b468a88890373e1303f745134aceab8fdd1b6b2b --- /dev/null +++ b/video/ruGY8v10mK_39019027.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b6b019651da8fdb3b3d7d95e81822a8a6164e708b945b577aa1770de5944a96 +size 2974993 diff --git a/video/s1MoH2pACa_39026519.mp4 b/video/s1MoH2pACa_39026519.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2bfba33f07c2fd815cd7ccec17d868ff20156ec1 --- /dev/null +++ b/video/s1MoH2pACa_39026519.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88a264bce333ba2b88b5333bb00c751a578fc37f3d2f8e1ce15329732505a992 +size 3061385 diff --git a/video/s2hA6Bz3LE_39024514.mp4 b/video/s2hA6Bz3LE_39024514.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6e7ecadd7a23e138774c4c8cd56f6c7978c0b6fe --- /dev/null +++ b/video/s2hA6Bz3LE_39024514.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f927b007c8f414b77de7a79c0fd9d879ab75424e8e4d0b6b71a29eb052db6f7 +size 2413306 diff --git a/video/sEpSxteEKJ_39027820.mp4 b/video/sEpSxteEKJ_39027820.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6c208c465f2cda4b24c02566d1bf2732834690d5 --- /dev/null +++ b/video/sEpSxteEKJ_39027820.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f4195034ad85cd856a7cf8e7ec30810bb3c9adc20b3132a65e13058360be4942 +size 2836319 diff --git a/video/sGvZyV2iqN_39025098.mp4 b/video/sGvZyV2iqN_39025098.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..970eece5adc33a5edd401b936eb6331ee7fb4442 --- /dev/null +++ b/video/sGvZyV2iqN_39025098.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:843a17391143ae01a1353fd67033859280f8d8e632e0b02b36e9bf0827866f1e +size 2925098 diff --git a/video/sMoifbuxjB_39017266.mp4 b/video/sMoifbuxjB_39017266.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f58abb33e5338f8307af4e3d21bc81a68bc383fc --- /dev/null +++ b/video/sMoifbuxjB_39017266.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d352a8f98a5e8ce39686d6a026cccda6309868d704834d6ddb648332a400174 +size 2437837 diff --git a/video/sRILMnkkQd_39025341.mp4 b/video/sRILMnkkQd_39025341.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e4502dad9869645209d10ec476480fc670c81809 --- /dev/null +++ b/video/sRILMnkkQd_39025341.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fdd5b73fad9f13b1dec11e6a81cfa8063a1c87a39fce7a8f4ac27dedce548197 +size 2502968 diff --git a/video/sRSjr9SDKR_39024918.mp4 b/video/sRSjr9SDKR_39024918.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..19359ec7f0dd087cfda239bf4468d8eda7c2594f --- /dev/null +++ b/video/sRSjr9SDKR_39024918.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5d1fad94bcdcb1e4118493718eaa797483043045ae4a1319bea6ea7bc4916fe +size 1534292 diff --git a/video/sSyytcewxe_39017035.mp4 b/video/sSyytcewxe_39017035.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a876f8e3438698bf2116e1a57c5618115f3ea049 --- /dev/null +++ b/video/sSyytcewxe_39017035.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e1309d8256b6a5596903e816254ea1596f7ece98e063fe4a09bc3a401f7ac613 +size 2140029 diff --git a/video/samyfu6G93_39017260.mp4 b/video/samyfu6G93_39017260.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..afa7f50ada9b052d3f1bee553e43cef07f5bd0a5 --- /dev/null +++ b/video/samyfu6G93_39017260.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c911e610a487d39dcba2e2276f41ded726bf1e164871661fb9c85d3aca3b5ce +size 2545068 diff --git a/video/satH8Evs2y_39024404.mp4 b/video/satH8Evs2y_39024404.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fae8e7b735cbc4e5c03557afe546589c910edf1c --- /dev/null +++ b/video/satH8Evs2y_39024404.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:419d5283c6e3a57328a0aeefc32528fc2b8c387ff24b3c41a8fcd4ab68e63907 +size 1885562 diff --git a/video/sbsaRj475E_39028525.mp4 b/video/sbsaRj475E_39028525.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3d532b7c539b4a88ebd46f2793f857b7a966b44f --- /dev/null +++ b/video/sbsaRj475E_39028525.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dba106d781e47d3d5539ae505c663a6b8a8253f3f47e64e11f6038764905ae57 +size 2008142 diff --git a/video/scw6Et4pEr_39026918.mp4 b/video/scw6Et4pEr_39026918.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..790b35c94aab27494bd1c70690990cfc29028ae0 --- /dev/null +++ b/video/scw6Et4pEr_39026918.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ccd5cde83fca6fbf6258d8d94f5dec05072855e9d94d8bf57b9ab617b87776e7 +size 3869591 diff --git a/video/sgVOjDqUMT_39025454.mp4 b/video/sgVOjDqUMT_39025454.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8b3f51eeacc080fb243e2da90867e5bcb4b60a8b --- /dev/null +++ b/video/sgVOjDqUMT_39025454.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47b750541604d6f085de79998937125160a7c6598d8d61c5ebc10147b5f2fbf6 +size 2635082 diff --git a/video/shYQXpnBLB_39025593.mp4 b/video/shYQXpnBLB_39025593.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ca3419f059aaf54d6e53d6b9f7ddd57b7e72099d --- /dev/null +++ b/video/shYQXpnBLB_39025593.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e1d4916b1a6538ac1d51c4b3aebc6e08b494289d0251907784d27e97b5eca6c +size 1991399 diff --git a/video/skcTCdJz0f_39017257.mp4 b/video/skcTCdJz0f_39017257.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..277a55ef1c2348577644641675e35111a807b41d --- /dev/null +++ b/video/skcTCdJz0f_39017257.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:38cad5821eab8819191975a52e0fbcce71c01db99636c29785cfa584c5e9308c +size 3221863 diff --git a/video/skeopn3q5Y_39025085.mp4 b/video/skeopn3q5Y_39025085.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f3a0bb026937ec1ac6277194b9dc6d958a03ca1d --- /dev/null +++ b/video/skeopn3q5Y_39025085.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:41027f0212cf90d4d9e2be457226e57dd3c198de62adba5a00dc360030a9f094 +size 2210223 diff --git a/video/slSmYGc8ee_39019112.mp4 b/video/slSmYGc8ee_39019112.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..029fa4180f1d0eca897701193d8c53ff6561e3bd --- /dev/null +++ b/video/slSmYGc8ee_39019112.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b4c53e15131dd793d22af25571c58e7e8c68ad0363b1e9f5e16a86155a58a5d6 +size 2693038 diff --git a/video/sntv8Ac3U2_39025974.mp4 b/video/sntv8Ac3U2_39025974.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..0a6826da2cfc5a03d167404034210a00bfb05fdf --- /dev/null +++ b/video/sntv8Ac3U2_39025974.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6e962bcca64aaca4353120bc0498565d0be7f1a49b75af2f15ee57000abeded +size 3179225 diff --git a/video/snxWD0Q4EI_39028349.mp4 b/video/snxWD0Q4EI_39028349.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f859c9417eb636fdb26d7d3ba4a872e67e08bc55 --- /dev/null +++ b/video/snxWD0Q4EI_39028349.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0bbb996122fa0f80669fa5741757ead61af26697e3256408803a5616bf028c6e +size 2010605 diff --git a/video/soUXmwL5aK_39026261.mp4 b/video/soUXmwL5aK_39026261.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7b706db2f78500a3064eba27e2b52380cef64f5b --- /dev/null +++ b/video/soUXmwL5aK_39026261.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:583d862d0ea7418dc4222018b1a868bc9eb5d4c8c684e9262bf82d6cef553de0 +size 7776 diff --git a/video/spvaV5LELF_39017254.mp4 b/video/spvaV5LELF_39017254.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..211d8cc50a55573c5cb0a29f90301b89486eb9b5 --- /dev/null +++ b/video/spvaV5LELF_39017254.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b209004b12e71d3dd8dd6854cd184a5a42cccc707859714758fb42d41849eb77 +size 2891893 diff --git a/video/suYAAOI5bd_39028432.mp4 b/video/suYAAOI5bd_39028432.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6d612c1bf587d88fa51a23be76653747b743b3e9 --- /dev/null +++ b/video/suYAAOI5bd_39028432.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:786f88f32d1f34ab49e295cea728d44b4b835719586f32cdff086757f76ddbad +size 2948903 diff --git a/video/t3vnnLeajU_39019081.mp4 b/video/t3vnnLeajU_39019081.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0c174e8fa8565909778937ed18e921b2dbbba298 --- /dev/null +++ b/video/t3vnnLeajU_39019081.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:433119a9e626c3c2f2024c40db86c8c42d98f6c963627952cf199d6ba80c90f5 +size 3040843 diff --git a/video/t8eO0CiZJV_39017250.mp4 b/video/t8eO0CiZJV_39017250.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a5eb3bf96beea513573c42c6d6d5ed47681bddb9 --- /dev/null +++ b/video/t8eO0CiZJV_39017250.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:28b8d23a0ab75f8ed436b73ed3bcc30f870ff6cc410b36adebbf0a56bf5a221b +size 2769879 diff --git a/video/t8iosEWoyd_39024706.mp4 b/video/t8iosEWoyd_39024706.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..44076f90e20154790cd38e3ea478831072f098c6 --- /dev/null +++ b/video/t8iosEWoyd_39024706.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7ea031576eae07cb12b173c8d37cece2003e04dcb692b382be8cbf288d3df454 +size 1828572 diff --git a/video/tAOg1HdvGy_39026489.mp4 b/video/tAOg1HdvGy_39026489.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..65941894745d3ab8cd7a40aefee5c64330a09928 --- /dev/null +++ b/video/tAOg1HdvGy_39026489.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4b8a6f7b4256fd34a5a4e884a870bd342fd217a473099c4b8f4cfefaf786ec2 +size 2445768 diff --git a/video/tAlMAcqK9s_39026193.mp4 b/video/tAlMAcqK9s_39026193.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..db8a346efd30dc2582949393c075d1b42cd8809b 
--- /dev/null +++ b/video/tAlMAcqK9s_39026193.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13c439ceaabe8f678c0fc390da70b2ac8528615bcc54c175fd3f8de04a639acf +size 2906883 diff --git a/video/tBRNC6YemY_39024887.mp4 b/video/tBRNC6YemY_39024887.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ee951723bd3c378ca84bb7f5c90a945dfc9c7f6f --- /dev/null +++ b/video/tBRNC6YemY_39024887.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06f93c6a4e4c498190c5685ee19915a8972007fb311e1e4f31e623c44d708509 +size 2473596 diff --git a/video/tDvFa5OJyS_39026349.mp4 b/video/tDvFa5OJyS_39026349.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0beda836d245bae5d3761d96feffdf6d66ece112 --- /dev/null +++ b/video/tDvFa5OJyS_39026349.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3960c94a22bf66686ee1a7fedd2631e7a4d22dd89e7f22f191af736f0bb9c2fd +size 2276462 diff --git a/video/tEEpVPDaRf_39027356.mp4 b/video/tEEpVPDaRf_39027356.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..be0e31af1b0d9dc6894d6f3a7dda24b2f5217b35 --- /dev/null +++ b/video/tEEpVPDaRf_39027356.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4168952b79a1bc6c2e962b61f7b855d393cdea0bf7a01e17d687b650bb11b3b9 +size 2624247 diff --git a/video/tFB5SsabVb_39025690.mp4 b/video/tFB5SsabVb_39025690.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a038d17c88d3dbb730c45285420fcb1adf38f1c1 --- /dev/null +++ b/video/tFB5SsabVb_39025690.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:601cbffd6b0a9fab5a6523679ea9732e9cdeba4dd7ba9efd460097412874ff15 +size 2590641 diff --git a/video/tGQirjzddO_39018989.mp4 b/video/tGQirjzddO_39018989.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b2eb1a085d5ff0dbd7fff560c66b770658da3187 --- /dev/null +++ b/video/tGQirjzddO_39018989.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:10d11ff684038a3a6a5d573e3979b95c430909987cbbcb5793a2655efe3b0beb +size 2115999 diff --git a/video/tKuLgnDWWN_39025302.mp4 b/video/tKuLgnDWWN_39025302.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e5745bdabdf6afabf7391f76d72221c25fb8b6d3 --- /dev/null +++ b/video/tKuLgnDWWN_39025302.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e96847126ba871449f53119b16eaf6a36b90e5225634a9470597d4ac2888bb30 +size 2627756 diff --git a/video/tPgagXpvcV_39027306.mp4 b/video/tPgagXpvcV_39027306.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..768988930eb2442d4f5837fb8b9e78ff817ddb67 --- /dev/null +++ b/video/tPgagXpvcV_39027306.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f90fe1028a850d1217e3b75ea71f538d9cd7544c187d931d2c53d05c4f005bbb +size 2477683 diff --git a/video/tQukGCDaNT_39027277.mp4 b/video/tQukGCDaNT_39027277.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9bab2df0c6574b4c38b951c2174eca13f8538fee --- /dev/null +++ b/video/tQukGCDaNT_39027277.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a521190855f43c57ade106e92fb67fdd42c0b96e4a6c39f8a3bf7112b339fbac +size 2372186 diff --git a/video/tTnFH7D1h4_39028710.mp4 b/video/tTnFH7D1h4_39028710.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..14eaacabe395229a2e7707f7a8557e2225fd9970 --- /dev/null +++ b/video/tTnFH7D1h4_39028710.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:988d40078a097d593a8c767e4cce19df7f2babf168783d74558a0dbe867dad65 +size 2583401 diff --git a/video/tUVG9nGzgE_39017244.mp4 b/video/tUVG9nGzgE_39017244.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..78b957cc28411b1f3558b4dee430013639fcd50f --- /dev/null +++ b/video/tUVG9nGzgE_39017244.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cca1d45411aee74abd1e0778074353f9d555097c55142d497b4612a848857d20 +size 2940496 diff --git a/video/tUpcRQNvVM_39027726.mp4 b/video/tUpcRQNvVM_39027726.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e311eeb56e0f888e77ac004ea609795b8238a010 --- /dev/null +++ b/video/tUpcRQNvVM_39027726.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bcce089356ddabd4fbd43f798f6f156876a864cc79a9c3188b23020eb1d61540 +size 2632846 diff --git a/video/tVConYid20_39025082.mp4 b/video/tVConYid20_39025082.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5cd80b0b3fb7404b1b18d16e5cb597132a62a6e5 --- /dev/null +++ b/video/tVConYid20_39025082.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0c92177b84b98aac9ef494a522ee596f35336e6873f07013c2474c79f9ddedf0 +size 2702017 diff --git a/video/tVMPfEGT2w_39017242.mp4 b/video/tVMPfEGT2w_39017242.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..287280df58ee34702f8e29dbcfa420c15064bf99 --- /dev/null +++ b/video/tVMPfEGT2w_39017242.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:55b78e52b996f037948d5cd22e432bf7de67ca6c7125195168587bb699c7175e +size 2288908 diff --git a/video/tWkL7k1u5v_39025408.mp4 b/video/tWkL7k1u5v_39025408.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c52914bc9ad9f6c2b1c464f735253cb42fe66813 --- /dev/null +++ b/video/tWkL7k1u5v_39025408.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:feb37f33cc05d417c693ea3ff0e1f9a30bff4cb91abebc6e00b3e295dbe8bbac +size 2651718 diff --git a/video/tZtepJBtHg_39024816.mp4 b/video/tZtepJBtHg_39024816.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..aa99254b4acac695a26fe96cde57af17d6499c42 --- /dev/null +++ b/video/tZtepJBtHg_39024816.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3f03270f5c02e3ee4577c87ab6286f1a7f12741198ebadde9a5b4916439b1902 +size 1926139 diff --git a/video/taI8M5DiXj_39026378.mp4 b/video/taI8M5DiXj_39026378.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fb027e9e62f443edc6b2ed4c56c6c0d1fd42ba88 --- /dev/null +++ b/video/taI8M5DiXj_39026378.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:447786995b13609d88a556f4e205ffa274058f525f73e4c2d15a634078b0ea71 +size 2494359 diff --git a/video/tb1MlJCY5g_39026424.mp4 b/video/tb1MlJCY5g_39026424.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6f4e33c8d73241fe20ec224fbf76cb8893769025 --- /dev/null +++ b/video/tb1MlJCY5g_39026424.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:168894dd42ac428d7824526adcd518cf2b213a6c5e395416a3735ca2968644f2 +size 2147994 diff --git a/video/tiiAzqi6Ol_39017237.mp4 b/video/tiiAzqi6Ol_39017237.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..50f01f291055fdfa46a65dbd4692ec6a768b203c --- /dev/null +++ b/video/tiiAzqi6Ol_39017237.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:a206b19765cfb05523ddb03df13ff987c1267afc287b7b8f122f32139705703e +size 2711343 diff --git a/video/tnh4LK72yj_39027802.mp4 b/video/tnh4LK72yj_39027802.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c185b06d667e460ccf727b8ddb12787edcc422e3 --- /dev/null +++ b/video/tnh4LK72yj_39027802.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:681df2d9ca5d69eb0db1ffb3439786a2d863d1f41e1cf8f623b2315f39cac5f7 +size 2198917 diff --git a/video/tplXNcHZs1_39017230.mp4 b/video/tplXNcHZs1_39017230.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dd84e045e285ee211127917b1363e6a98d398c9c --- /dev/null +++ b/video/tplXNcHZs1_39017230.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf1ff589177ade74cd3253380961aa628750b806138eabe8510d02fae1a3bb75 +size 2420258 diff --git a/video/tqh1zdXIra_39018920.mp4 b/video/tqh1zdXIra_39018920.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3f98f7871919f58755289e2a9beba02e49317fac --- /dev/null +++ b/video/tqh1zdXIra_39018920.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:41b9202ae6e2119bf1fc312070bf5e6dff0bf4c48b0c5c15ea96e0bee616d8ab +size 2840970 diff --git a/video/ttUXtV2YrA_39027834.mp4 b/video/ttUXtV2YrA_39027834.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5a7ea7a712c4f5b89cd6f680389216093e1f4b63 --- /dev/null +++ b/video/ttUXtV2YrA_39027834.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:08d0eab84145e46ea6c8bba357282276b8cc7a84c6cfefc2948685609d6e4521 +size 2052704 diff --git a/video/ttXg3SKAg5_39018881.mp4 b/video/ttXg3SKAg5_39018881.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c6e2b9b6f00c1a1532da29268f00b66978ef1892 --- /dev/null +++ b/video/ttXg3SKAg5_39018881.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0db744db3776fe6f3da6308569a83a3e9756041d2cb84cebcc4151e9256d994c +size 3209078 diff --git a/video/tu1oC7zHGW_39026278.mp4 b/video/tu1oC7zHGW_39026278.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0d8ec4dea22fda9d52180309b1c228331f344580 --- /dev/null +++ b/video/tu1oC7zHGW_39026278.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:043a195f685966cc744f20fbfb8e2f4d3240a9707dac9e57e49384efc62193dd +size 1921110 diff --git a/video/tuiqq1G8I5_39026976.mp4 b/video/tuiqq1G8I5_39026976.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e76b1bef97510ded81104a1a88f88467d5fe1cf0 --- /dev/null +++ b/video/tuiqq1G8I5_39026976.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c07e6737fac2c225753add78a76dfd3b03ee67887c6cb05b74503fe51f4552f5 +size 2217596 diff --git a/video/twYE75Mnkt_39025441.mp4 b/video/twYE75Mnkt_39025441.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cedd45c36822ec3accaf5cbd5d632cd805abb11c --- /dev/null +++ b/video/twYE75Mnkt_39025441.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:490f09b3b273726ac0591421400daed4ea9d333b4e9005b2335356ec37c742a1 +size 2642375 diff --git a/video/twpPD9UMUN_39028150.mp4 b/video/twpPD9UMUN_39028150.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a23493e56bd294e47edaa3c72bd1e38168250036 --- /dev/null +++ b/video/twpPD9UMUN_39028150.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0c98fdb65eb2b806fe05e583ee365cd2649411dd2733ae630579edfc8320d651 +size 
2629275 diff --git a/video/tyPcIETPWM_39028058.mp4 b/video/tyPcIETPWM_39028058.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6e16191588e946919d25c102c8310e1f5645c18d --- /dev/null +++ b/video/tyPcIETPWM_39028058.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:871e6d24bd3c5dd01f764fbdc98a287ae1050ce8c56a3d0cc66cbabdabe01af6 +size 2771714 diff --git a/video/tz83Nyb71l_39027021.mp4 b/video/tz83Nyb71l_39027021.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bb535256ad81692a33f7edcbb18d7937d13c480e --- /dev/null +++ b/video/tz83Nyb71l_39027021.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0297cb878c127a42f88153583c855b883621cc24bbb96313a0dd6c6893a73c22 +size 2468650 diff --git a/video/u1Z3HWz4VJ_39026072.mp4 b/video/u1Z3HWz4VJ_39026072.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..48bdc5ae77fc7712766afe502689d81d1197f4e0 --- /dev/null +++ b/video/u1Z3HWz4VJ_39026072.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4542255f57206f4b5965bf4d58221e9493ce6c6583d66e162ca45616673a6924 +size 2331343 diff --git a/video/u3dHl287oB_39019126.mp4 b/video/u3dHl287oB_39019126.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..82d98fffc6425ac5d7f932667283c31a0b666597 --- /dev/null +++ b/video/u3dHl287oB_39019126.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a61d04513e9b2d64de265c9de19dc45ecf76412ac6679710231856d57ce4329 +size 1642564 diff --git a/video/u3mZzd0Pdx_39025237.mp4 b/video/u3mZzd0Pdx_39025237.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7eccddd13ccc6c8d4b6cdb8330bc28fca905096f --- /dev/null +++ b/video/u3mZzd0Pdx_39025237.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:831651c840200dd3cdd6565531d65910bfc80a66841a5465c1d96775e514a588 +size 2233555 diff --git a/video/u6FuiKzT1K_39027332.mp4 b/video/u6FuiKzT1K_39027332.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..18aa31d7634134c97c12e6478037a9dea69512f4 --- /dev/null +++ b/video/u6FuiKzT1K_39027332.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d89fa83bc8d712d41b7386d59b4b62e8d956d87d7019277616edf1ef1ae6803d +size 3121218 diff --git a/video/u6imHU4Ebu_39017228.mp4 b/video/u6imHU4Ebu_39017228.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7eae985c874e65a59ce91bd51eb58dac7b7a5b83 --- /dev/null +++ b/video/u6imHU4Ebu_39017228.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec7aa5d381c6f4fdb2ed1ee1598b22bf996464eba003555dcfa0101e4e85d821 +size 2392391 diff --git a/video/u7JRmrGutT_39028835.mp4 b/video/u7JRmrGutT_39028835.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bf68f8e5fcbcfaadcd39d9ecb9550eaf3d7e8984 --- /dev/null +++ b/video/u7JRmrGutT_39028835.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:15c48897e89d0db0ee0870a13fea5ec65039f90c7d31bc2faa7f67816436c05a +size 8308904 diff --git a/video/u859gX7ADC_39017226.mp4 b/video/u859gX7ADC_39017226.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2c7c3ed1e1ddd3c17a02cbe24ce83767e9e06700 --- /dev/null +++ b/video/u859gX7ADC_39017226.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2102f6714469fb398f9eb05402341932259b2ad9ef93985ecb49f4631f6dee45 +size 2265599 diff --git a/video/u9ShP64FJV_39026121.mp4 
b/video/u9ShP64FJV_39026121.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a35dbd0b9803b2f533286de4ff5b6ac610b14a7f --- /dev/null +++ b/video/u9ShP64FJV_39026121.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74b7c997d62678357c0e524092ed952735a55f3c8fe4ea78f3a6679e15723303 +size 1670580 diff --git a/video/uCZI8gSfD4_39028110.mp4 b/video/uCZI8gSfD4_39028110.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ac6e0ad620853cfd1c1431a8cf279b87e06a5ab0 --- /dev/null +++ b/video/uCZI8gSfD4_39028110.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e73341207d371d05aaf683618a8c8b4aef822ab2807d2c25bb4b1a2504ce6e5 +size 2734939 diff --git a/video/uCvdw0IOuU_39027149.mp4 b/video/uCvdw0IOuU_39027149.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..01414289ac7222762bebb362b2d278de79c37a94 --- /dev/null +++ b/video/uCvdw0IOuU_39027149.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f38ca6195f01e202acd553d3074705435c8eea1f989a9297385b2c9bcd4086a +size 2705206 diff --git a/video/uDD44NROOt_39024783.mp4 b/video/uDD44NROOt_39024783.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..24398e91080e92a1e4f1f184533df21e7215dcf8 --- /dev/null +++ b/video/uDD44NROOt_39024783.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:539f44665376665d2363e00d5f2ba73107388fff3384565095511fbc9f192ba2 +size 2819323 diff --git a/video/uDxhMgjVJB_39024504.mp4 b/video/uDxhMgjVJB_39024504.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..64ba02cb8b983c35e987c52c77d42cebe76e8af5 --- /dev/null +++ b/video/uDxhMgjVJB_39024504.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:31be25b20e6548250a546f3d8a81249c8f09d75bf10b49f35cbaaa2fb0680da2 +size 2805140 diff --git a/video/uFXGsiYkkX_39026300.mp4 b/video/uFXGsiYkkX_39026300.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a75fb46ca0deefba7e80ab15632ab44ab07a286c --- /dev/null +++ b/video/uFXGsiYkkX_39026300.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5c02be3ddbd520d791cc1fe14f5e935a3cb162c2e737cbe2e5e93a36e9f25653 +size 2003855 diff --git a/video/uHml6eyoVF_39027824.mp4 b/video/uHml6eyoVF_39027824.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..532f5e11eb2b5837e5891bb5107e1d06490bed9c --- /dev/null +++ b/video/uHml6eyoVF_39027824.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:907f7f18ea84d64bc51cc76156ef48c6e5e9835f39b98e90566de1bf6bb89e5f +size 2723401 diff --git a/video/uKB4cFNQFg_39017219.mp4 b/video/uKB4cFNQFg_39017219.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..48c4ac52d14e1638241b9176f332c4bb6ebc0906 --- /dev/null +++ b/video/uKB4cFNQFg_39017219.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb77eedda145369d0247d0d81d597866aacaece6e0f088ac4c7510a464277e7e +size 1592472 diff --git a/video/uM3rQ14iex_39026308.mp4 b/video/uM3rQ14iex_39026308.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fbe55fd14ef40d56ec0feb1c2e0b53184481acd7 --- /dev/null +++ b/video/uM3rQ14iex_39026308.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:69ba13ef29ef6ff63a8aa119067cf82457c6943e9f38a6af29d1bb427cd21a8c +size 2530708 diff --git a/video/uNKlTQ8mBD_39027856.mp4 b/video/uNKlTQ8mBD_39027856.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..d6ace10c7a66711db36ccc5e488bc15230574864 --- /dev/null +++ b/video/uNKlTQ8mBD_39027856.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:59e3b521ecf1430eb5ec4f6c1823d2d25365dd7f1fccaa0738d10b02fc398b3c +size 2959916 diff --git a/video/uNrFpDPMyo_39017217.mp4 b/video/uNrFpDPMyo_39017217.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bfd46d6f9bcf4862e3db1567e2cb865bba59984a --- /dev/null +++ b/video/uNrFpDPMyo_39017217.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3488dbc74863944fec912758231736c72522782770c21af3b0c0c9c4fde64052 +size 88220 diff --git a/video/uO53206oLJ_39025405.mp4 b/video/uO53206oLJ_39025405.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d33abdce3fa1efca2db1683a79ebec243edc67f2 --- /dev/null +++ b/video/uO53206oLJ_39025405.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea5cfe039d22168b945293dd2ae48577eaae262f187a18dcfe937890ef98ad6b +size 1932344 diff --git a/video/uRnTYPkF3V_39027698.mp4 b/video/uRnTYPkF3V_39027698.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ff3eb1b0ab4a5e35af165399b7eacd5299d01b95 --- /dev/null +++ b/video/uRnTYPkF3V_39027698.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d429d9b2bbe41fedcf552bf2ccf058ef788d1c2f7006f112ee8ac4f2feb76fc3 +size 1281191 diff --git a/video/uatPOPWzzU_39024700.mp4 b/video/uatPOPWzzU_39024700.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7be99665533400d70237cc9578be0e4a9935c4cc --- /dev/null +++ b/video/uatPOPWzzU_39024700.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cba17c1ef5dddb633cf40126fcd89e511f52dad831ab524028628744fc0077f8 +size 1988775 diff --git a/video/ud0RBkdBfE_39026222.mp4 b/video/ud0RBkdBfE_39026222.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..12108e06639913a9934a6f5b6d3a46cc0c63f91d --- /dev/null +++ b/video/ud0RBkdBfE_39026222.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00790a77f203ad1330912afd8d71c051023f6879f74626f2789571030540d13f +size 1573167 diff --git a/video/udTwwF7tks_39024419.mp4 b/video/udTwwF7tks_39024419.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e545fa9903472f86b6c44961eae5c4164659566d --- /dev/null +++ b/video/udTwwF7tks_39024419.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1887b9322dca4951bf6976980bbe6c689ce0b0ab30b0edccb90311a547e2981 +size 3040053 diff --git a/video/ufKBRvYxtp_39026569.mp4 b/video/ufKBRvYxtp_39026569.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e42a7f19e779191726530ee69bec9e76573790bd --- /dev/null +++ b/video/ufKBRvYxtp_39026569.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da5987e0ec186f23d2db631388316bf56bcb8eeb744ef173ffecba7e632deb35 +size 2094646 diff --git a/video/ufPPf9ghzP_39025950.mp4 b/video/ufPPf9ghzP_39025950.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d225f7682cb500c4206b18d387d8e51271a53d47 --- /dev/null +++ b/video/ufPPf9ghzP_39025950.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7cb0c85936a00c15a2c507da0fab6562cbf526ba575816f136c040e5d901b8f8 +size 2030769 diff --git a/video/uikhNa4wam_39027278.mp4 b/video/uikhNa4wam_39027278.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a24fab7353011d85506cd69ef8c2df8bade7453f 
--- /dev/null +++ b/video/uikhNa4wam_39027278.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e63a71cdd3c28fec83a575db9db5d5faab726cd2415ed010ff251ac8e97b0024 +size 2097165 diff --git a/video/ujk0XrNTQZ_39028376.mp4 b/video/ujk0XrNTQZ_39028376.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..30b626a983fe03e2b9f7ec018d75f867c4253f08 --- /dev/null +++ b/video/ujk0XrNTQZ_39028376.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b2ebad467cd8d738332e338109f102790b9b46ffc4db58e231edaa60929be493 +size 2353122 diff --git a/video/ulaUJFd96G_39019096.mp4 b/video/ulaUJFd96G_39019096.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1310eea86154fd5d18eb1b636216e4330c1832d4 --- /dev/null +++ b/video/ulaUJFd96G_39019096.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dd0788d01b6db5125360a7f8afdd06927338cf18b8f1796465eef2a5afa4515d +size 2110127 diff --git a/video/umukvCdGI6_39025146.mp4 b/video/umukvCdGI6_39025146.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f7248284406be3b3464bbc9e33ef73930855984b --- /dev/null +++ b/video/umukvCdGI6_39025146.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:048008fe85b70325035f3f4696535b9b849b3762b864c790f73c56a424cc8d1b +size 2959168 diff --git a/video/uoJQ9qadjY_39027122.mp4 b/video/uoJQ9qadjY_39027122.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d3c717d6c8f296a7d18c0f592fa57f57bef0c073 --- /dev/null +++ b/video/uoJQ9qadjY_39027122.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:273ac5546dce4db8a73710c672db4d2ac4b24401af6dc438a52a82a0e2e16058 +size 3270473 diff --git a/video/up4tWnwRol_39027613.mp4 b/video/up4tWnwRol_39027613.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f438dfa0fb1f7851dbb58528d897340cac5c7996 --- /dev/null +++ b/video/up4tWnwRol_39027613.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:884ef2c5978f031234340b2c03ce83b58f6584eeba3a4102b21f9248e5f8f00b +size 2103936 diff --git a/video/uqWfLgZpV1_39026863.mp4 b/video/uqWfLgZpV1_39026863.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dd26615f4b2a737ec4d73f5c38008dc986422b8d --- /dev/null +++ b/video/uqWfLgZpV1_39026863.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9283354931f20e4e4fe10c714194d00219b1f7a368aa7f9e58cc3cc7df01a969 +size 2898127 diff --git a/video/uqxBTcWRnj_39019139.mp4 b/video/uqxBTcWRnj_39019139.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..427348335b562d2ba780e34157319d0be65c4b36 --- /dev/null +++ b/video/uqxBTcWRnj_39019139.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4bfe2ab14213af1fb78b0c108a5e5488254ee08a55814da5d0b1d4d8ef73837d +size 2934458 diff --git a/video/uuQQwrjMzb_39028015.mp4 b/video/uuQQwrjMzb_39028015.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8cc383e14018dea65d8cb8608296f4f0bd8b2d19 --- /dev/null +++ b/video/uuQQwrjMzb_39028015.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:52f3eebac6ebb6adf9051d1296e94d5b5244f0cc3ae4ebc51d7f6037ad653fa1 +size 1311554 diff --git a/video/uvFhCUPjtI_39017431.mp4 b/video/uvFhCUPjtI_39017431.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..566c72c0b325a211efba6da3bd4de3ba1049c240 --- /dev/null +++ b/video/uvFhCUPjtI_39017431.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:dd85710158d5666229e1fb96306a90269c405cb1d31bd9b97b037b777c6c212a +size 3095701 diff --git a/video/uyqjpycMbU_39026187.mp4 b/video/uyqjpycMbU_39026187.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e7a276774b41300f76e6ccc4aef1244c4354ac55 --- /dev/null +++ b/video/uyqjpycMbU_39026187.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:38519d9760da325781ccc211beccb6b10848c16ff75bc4130b17ee9572a50052 +size 1667699 diff --git a/video/uzIWqRzjEP_39025215.mp4 b/video/uzIWqRzjEP_39025215.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1cb9f57275b7f68fc64bec82d1d9b1c24f007f15 --- /dev/null +++ b/video/uzIWqRzjEP_39025215.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea85004286450264066e477139c7bb028edd4fbdf6bf4e7f5d53e0e147544811 +size 2944157 diff --git a/video/v07KRLYxDX_39025573.mp4 b/video/v07KRLYxDX_39025573.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8fe47bc08c6593bfdb90afd553cf61ff010d6d78 --- /dev/null +++ b/video/v07KRLYxDX_39025573.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5aeb921952d10550335c7b9af366be15507a573c7cf83361a1c59507a5ee07f3 +size 2608344 diff --git a/video/v1BIm8wESL_39027750.mp4 b/video/v1BIm8wESL_39027750.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..41c6498a886a34a05e7a3002a50e0c6906407486 --- /dev/null +++ b/video/v1BIm8wESL_39027750.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50a12e771e4ea522a2bd1461ce0ee9c4b36f0c2d00a789e406aeb129c73a832b +size 2604373 diff --git a/video/v1VvCWJAL8_39017427.mp4 b/video/v1VvCWJAL8_39017427.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dad469493b1cdb14b8b27371b2cd0d64127444a5 --- /dev/null +++ b/video/v1VvCWJAL8_39017427.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:80684ed38230d0de0666eaac521db2b549cf42b23662ebd87070ed2ca08758d9 +size 2370757 diff --git a/video/v3K5TVP8kZ_39017426.mp4 b/video/v3K5TVP8kZ_39017426.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bcbe9ba41549fa832c2926f3952e76fc7d87e3a5 --- /dev/null +++ b/video/v3K5TVP8kZ_39017426.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dc5874fda2e72c9bd03d248b30101fb5fc700e7ce9c820fa07fdd3d396fcfbe0 +size 2511677 diff --git a/video/v3XXtxWKi6_39018858.mp4 b/video/v3XXtxWKi6_39018858.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..85c668fb71ec40cb55576fe90a083f669379feea --- /dev/null +++ b/video/v3XXtxWKi6_39018858.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce1c146795f59ba8c7d3227219b0d8747467a2b93df929b29abfc214dba2e4dc +size 2221658 diff --git a/video/v4dXL3LsGX_39025552.mp4 b/video/v4dXL3LsGX_39025552.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9dac77f6d1fd1f2d90dbde977dec1f5ed39a1bda --- /dev/null +++ b/video/v4dXL3LsGX_39025552.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e548184a07f931c3a336e0ffc38430b839f67b9d2571f0cef20d700e049a8e14 +size 2487547 diff --git a/video/v7vYVvmfru_39028483.mp4 b/video/v7vYVvmfru_39028483.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..748e6eaf3f786119b4de441e264fe05255aca6d3 --- /dev/null +++ b/video/v7vYVvmfru_39028483.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:de544833f15a814680a829a2dbffd0acae73040d80bb4f7cc3cbf5eefe8278f2 +size 2009084 diff --git a/video/v8RRFNbJ43_39024891.mp4 b/video/v8RRFNbJ43_39024891.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1aa44873b289debbef29472be9314ff0ac306c2d --- /dev/null +++ b/video/v8RRFNbJ43_39024891.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:abe4ce9cef2b3f8204917aa8ec06a8c21c4270ef3c6c6674f615d63d10d41135 +size 2334172 diff --git a/video/v8X70gTodR_39026359.mp4 b/video/v8X70gTodR_39026359.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4ac4aee3f6b24cd9c926e0585b03aded9b9dcca3 --- /dev/null +++ b/video/v8X70gTodR_39026359.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70fdccea308e0c5daf4fac3f583709fa72fd551133d816f0089ee44357d54d8b +size 2539512 diff --git a/video/vA4s3kN4QE_39026421.mp4 b/video/vA4s3kN4QE_39026421.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..893b25f0e6de7fd54555fc30a34ad69562bb28ac --- /dev/null +++ b/video/vA4s3kN4QE_39026421.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ec072e0a6b9c7776f7b6ae783e9ac045a9a1550703639ba6ef8ac76c2b08335 +size 2707710 diff --git a/video/vAOgaPvgYr_39025251.mp4 b/video/vAOgaPvgYr_39025251.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d0eecd44fe5811b4b550105f563afc3eb5286e71 --- /dev/null +++ b/video/vAOgaPvgYr_39025251.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3ab6d6ac1258a83ad768f7f064629ac723ad5124ff634ab52d2b5fd7d16b6316 +size 2109635 diff --git a/video/vBGMbFgvsX_39026894.mp4 b/video/vBGMbFgvsX_39026894.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..54dc4c551ca55721f3dbc7256a391bda68bb9987 --- /dev/null +++ b/video/vBGMbFgvsX_39026894.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:987d23ecf177879038b465222570703e365574d3b11fad00ee5d932e51c9b313 +size 2349209 diff --git a/video/vBah12uVbD_39024438.mp4 b/video/vBah12uVbD_39024438.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2435b85acdc2be95ed4fb75e777554548b094645 --- /dev/null +++ b/video/vBah12uVbD_39024438.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cec8cab089e961ba61e8960446882a3982bd4156ecb4a671100c2ec8f289cb6b +size 2401377 diff --git a/video/vBlzen37i0_39026034.mp4 b/video/vBlzen37i0_39026034.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8c20b92f6408f3f03ad813fd02c32ee85a5aef7b --- /dev/null +++ b/video/vBlzen37i0_39026034.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff5396f5e07b5e415bccc4ef9a05f62bd32b5f82abd7396b9d788c34fb67c786 +size 2809592 diff --git a/video/vCOgjBIZuL_39026505.mp4 b/video/vCOgjBIZuL_39026505.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..41307798184201e08edda1c4f193bc72d27d9a2c --- /dev/null +++ b/video/vCOgjBIZuL_39026505.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2890304c1c51843e13a94de70502054a34adf1c80113b8255ccdd78a95b85ef +size 2425359 diff --git a/video/vE5MyzpP92_39017060.mp4 b/video/vE5MyzpP92_39017060.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f773e66500acca6cecc8fa5894d7a005f7ba3ced --- /dev/null +++ b/video/vE5MyzpP92_39017060.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d69f63c0879dedc68de33ff7afac49decf796d00b3e9d9b7ef98866f78cf4ac2 +size 
2332869 diff --git a/video/vEfmVS5ywF_39019276.mp4 b/video/vEfmVS5ywF_39019276.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b8dffa11444581d514e2062ad548076684345ee6 --- /dev/null +++ b/video/vEfmVS5ywF_39019276.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ed794df720bb572ab8f8dd0f708f886e45411772d1e350bb32c528d3187ddb3e +size 1966521 diff --git a/video/vH7GcaDhAo_39024912.mp4 b/video/vH7GcaDhAo_39024912.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..884b6228daeb94a2be3a05afcb9c29b2d12965ed --- /dev/null +++ b/video/vH7GcaDhAo_39024912.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0714c847f6e368b05ca950890879df77c71c72e57382a495b6bcc3cef2fceefc +size 3184732 diff --git a/video/vJMMdFfL0A_39026163.mp4 b/video/vJMMdFfL0A_39026163.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..514dcc486c6b820e2841ed8e704f354e35c669b3 --- /dev/null +++ b/video/vJMMdFfL0A_39026163.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c507734016b8c93d618346f3c53190fa5fb431b04ffd444d077d3e5f318868c8 +size 2387613 diff --git a/video/vLJcd43U7a_39019032.mp4 b/video/vLJcd43U7a_39019032.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ea351a0cda314379be2e5400e754b69a37dbee93 --- /dev/null +++ b/video/vLJcd43U7a_39019032.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0afef477ddb25b77bd54c39143e7bdad8cad46773ed9b8de9f398bb21213421f +size 2532602 diff --git a/video/vMMzjCr5Zj_39026185.mp4 b/video/vMMzjCr5Zj_39026185.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..25b1adcd807da2fcb697940cea99d20a23c7a59e --- /dev/null +++ b/video/vMMzjCr5Zj_39026185.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:89445732f7900bbc1ea3f446016206a8b1197437f7cb902afd1c9c05862829b3 +size 3275388 diff --git a/video/vP9qAzr2Gw_39026618.mp4 b/video/vP9qAzr2Gw_39026618.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7a06e5423e7fec4d3421ba9ae8a76be6191dcbe3 --- /dev/null +++ b/video/vP9qAzr2Gw_39026618.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d28edff4071fd145ef5b225237427382cd6692199e09a5a8566b14414bfe7ec8 +size 2966777 diff --git a/video/vU1SiBb57j_39026325.mp4 b/video/vU1SiBb57j_39026325.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fd8f31b6d0d5312cdd21417bf6244757f7fc92ed --- /dev/null +++ b/video/vU1SiBb57j_39026325.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b19dab3e0dec8e6b4318284214d0e4e6ba30efb061d3f04f3648dbea7067238 +size 3035807 diff --git a/video/vUrOuc6NR3_39027503.mp4 b/video/vUrOuc6NR3_39027503.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5593d00e971830e4c1ba5d66e4011cfd5b60ccce --- /dev/null +++ b/video/vUrOuc6NR3_39027503.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c0a865e7ee8f92bb6d5cef878cf88a3c719f54c74588d13856f6bf712493b84a +size 2298086 diff --git a/video/vWSll6M9pj_39026865.mp4 b/video/vWSll6M9pj_39026865.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..93aa7f52b92aee263a972d36c16f34b6edf83cd9 --- /dev/null +++ b/video/vWSll6M9pj_39026865.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d3a77eb920a18afe9bec12ef992eab3215b45ed3a3e5dea231a98291c71dd725 +size 2572744 diff --git a/video/vYUx8j5KK2_39027176.mp4 
b/video/vYUx8j5KK2_39027176.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e467e5cc6df6b44406d2e977c41295144df05020 --- /dev/null +++ b/video/vYUx8j5KK2_39027176.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2957ad7c669afce642ab480efc4f3a0555fa21d91bf582fd7ad39df894ab0b2a +size 2016640 diff --git a/video/vZZ4hhniJU_39017413.mp4 b/video/vZZ4hhniJU_39017413.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b56624c714f7bffae8fd2bd33dd968eeaf57b994 --- /dev/null +++ b/video/vZZ4hhniJU_39017413.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d513c4ab2388be71b041457239163b7286f90c2de4d298780b15ad32d5482096 +size 1728668 diff --git a/video/vePdNU3u6n_39017411.mp4 b/video/vePdNU3u6n_39017411.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..4e7bd55f9b4504326140a672a89926eb60c0b677 --- /dev/null +++ b/video/vePdNU3u6n_39017411.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:32c325fcff3029e13a0796b252c1fd8ce15a89d741223a854a6b0269e4845726 +size 2955399 diff --git a/video/viftsX50Rt_39017409.mp4 b/video/viftsX50Rt_39017409.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..71aa8726785db85606e60198d78af8f6a1175655 --- /dev/null +++ b/video/viftsX50Rt_39017409.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:17027a77a1bd5622c3166c62a07f66fbe43e283f74b6df375af76dae7fa066f0 +size 2526855 diff --git a/video/vjsd8Bcipv_39026818.mp4 b/video/vjsd8Bcipv_39026818.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..024dce438b6dacf5144129a9cd47a05492714fe0 --- /dev/null +++ b/video/vjsd8Bcipv_39026818.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e1e68981ff84c76529bea575b8c6a8609a06aaddc924f322244689d5ab2b2e0b +size 2805388 diff --git a/video/vtRotUd539_39025370.mp4 b/video/vtRotUd539_39025370.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a8b31dd9732ebf46819edde0d7c301d737b978be --- /dev/null +++ b/video/vtRotUd539_39025370.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a315002663bbb0d1195fe34e1c12e2f8975df5a635180a6fda9208947ddb1cab +size 2596115 diff --git a/video/vwgWbCxeAQ_39028817.mp4 b/video/vwgWbCxeAQ_39028817.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5ba0a9c3294ff25632d89e53e43f63b4b2ad3ed2 --- /dev/null +++ b/video/vwgWbCxeAQ_39028817.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:76eaf042248a58415cb6a5655dac4b95de77e78c9f33b477df42a984bda44fc7 +size 2387099 diff --git a/video/vymkuBMLlh_39026347.mp4 b/video/vymkuBMLlh_39026347.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d453b8e6c6cf6fc55bea4399c465278c5170d377 --- /dev/null +++ b/video/vymkuBMLlh_39026347.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6ed821287d672456bd6b19fd6a7519d0415693fbe93e4ec639f34268fd9c180 +size 3010976 diff --git a/video/w1JanwReU6_39017399.mp4 b/video/w1JanwReU6_39017399.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..99b363d4b67edaffa2fef5777f071286aefaa380 --- /dev/null +++ b/video/w1JanwReU6_39017399.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6bc4205570b929b140a9c6fffdbf53cf0fcffe8d96a9e937547b2c694cd5c0b +size 3010207 diff --git a/video/w28i9oe9Xr_39024875.mp4 b/video/w28i9oe9Xr_39024875.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..33681a78ec5adfa84f348011b2bd0c5666bce888 --- /dev/null +++ b/video/w28i9oe9Xr_39024875.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dd214d494d9dfb548c971341fabb30aa35804b1d69d1455f0fd761b64c5f309a +size 2432384 diff --git a/video/w2L3Ll1jbV_39026331.mp4 b/video/w2L3Ll1jbV_39026331.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0955fca1353d50730ede4747c4fad7cadb322a73 --- /dev/null +++ b/video/w2L3Ll1jbV_39026331.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:56717fdeeb0aa4f627671a5c93c773543e7b74235805e798e9d63378000389f2 +size 2381861 diff --git a/video/w3JCTBRduf_39027376.mp4 b/video/w3JCTBRduf_39027376.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..21c12184a879e675b9e0976f8dee2d7d378251a7 --- /dev/null +++ b/video/w3JCTBRduf_39027376.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4dfa7df3322dba769baa46f086a6ae1c442df4a1b54b1d5ea347eed397c81065 +size 2582543 diff --git a/video/w50ICQC6QJ_39027349.mp4 b/video/w50ICQC6QJ_39027349.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9e95a1c5b2cb26cc88d19d5592573ca6c2ce759a --- /dev/null +++ b/video/w50ICQC6QJ_39027349.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a9a7368221ab7bb2e294ac9611990a1599ccb19516b7f9544fe22e915444280 +size 2058668 diff --git a/video/w67vRHZF13_39025292.mp4 b/video/w67vRHZF13_39025292.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..84c408c75fab8d59f7a12157958ac661d2716719 --- /dev/null +++ b/video/w67vRHZF13_39025292.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:195b4cf15ee9176b770645b4b929a6965b980dfafe9ba39d0fe33bf211fb354d +size 849631 diff --git a/video/w6vbfSC1y0_39025969.mp4 b/video/w6vbfSC1y0_39025969.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..88a273c38a62db411bc8d869af200896c26eb03c --- /dev/null +++ b/video/w6vbfSC1y0_39025969.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:116230ebb805b905db3d3f30c5eb5bd31262c3716b24ee4120107ed7a0429812 +size 2552133 diff --git a/video/wAqdvcK1Fv_39025084.mp4 b/video/wAqdvcK1Fv_39025084.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..18b41f1b661c5ab158c30407333ceb49c1a7b5c9 --- /dev/null +++ b/video/wAqdvcK1Fv_39025084.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1ee435314ad5f82dcd65c9ec39a43c15fdef48075b9c88419d215600ba74bf7 +size 2338250 diff --git a/video/wBtmN8SZ2B_39025561.mp4 b/video/wBtmN8SZ2B_39025561.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..93cbf34344df617db47f736fe394face38ac1c16 --- /dev/null +++ b/video/wBtmN8SZ2B_39025561.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d3d7338794a44ae56b23ca0987f2a5e627cca55aad73c9345be6c9f1d0f7f9a +size 3424029 diff --git a/video/wBzvYh3PRA_39028836.mp4 b/video/wBzvYh3PRA_39028836.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..125cc520ae61b9188e214be7ef49695d89a77015 --- /dev/null +++ b/video/wBzvYh3PRA_39028836.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a67ad788a4cebad6c78f2e0c518f07b09e401482827cbd01115802c9a11a0848 +size 2104909 diff --git a/video/wDDvJzvvBR_39028760.mp4 b/video/wDDvJzvvBR_39028760.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..3f1be8d5a415f41795e7952e63a18cc6941ac55c --- /dev/null +++ b/video/wDDvJzvvBR_39028760.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c5e16356fccab623ba5a59bf0a2271b5cea80977ad3a7d15839b9a8770267bae +size 2360374 diff --git a/video/wDirCeTIoz_39028447.mp4 b/video/wDirCeTIoz_39028447.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..47c627348fac176264fefef8f0a48d7f6d543136 --- /dev/null +++ b/video/wDirCeTIoz_39028447.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d58c2b7dcef1b2ec50141766202d4187da04970c411d3f84fa841942a5e41576 +size 3706977 diff --git a/video/wG12xUSqrI_39018595.mp4 b/video/wG12xUSqrI_39018595.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ec8edfdefadc31f21bb55f7a53c507718ac822c3 --- /dev/null +++ b/video/wG12xUSqrI_39018595.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad62d3eea2ff70f7ff53e8903e5f4b4346f1e7ca19a6c56102c90049da9ec125 +size 1165037 diff --git a/video/wGP1tBCP1E_39026631.mp4 b/video/wGP1tBCP1E_39026631.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f4b7106c0a8d2aaa33bbd75d8ccb676b89bf3a4c --- /dev/null +++ b/video/wGP1tBCP1E_39026631.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e30959da6baa7b82e47086231f78cae3db7b6bb65331fb01093d855e65ad0756 +size 1554778 diff --git a/video/wGjSbaMsop_39027588.mp4 b/video/wGjSbaMsop_39027588.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8956a429f85c1d78413403fbdc504efa579a217b --- /dev/null +++ b/video/wGjSbaMsop_39027588.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7348a9c9c69105daae76ceca44a7fc2ac48f039e7e2a3dbece032601140b0de4 +size 2877501 diff --git a/video/wISvONp3Kq_39018593.mp4 b/video/wISvONp3Kq_39018593.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2e2c9b264e93c6a348969f4f188b2cd6b3d49109 --- /dev/null +++ b/video/wISvONp3Kq_39018593.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:510044d681a5b8d5806f69662b61b44064dd5c814c247c6bcdf7d6877dde03cb +size 2704711 diff --git a/video/wJAF8TGVUG_39025187.mp4 b/video/wJAF8TGVUG_39025187.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..68873ec9073da5ee733b6e6ec10459e795c47f52 --- /dev/null +++ b/video/wJAF8TGVUG_39025187.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a4e251af7ff173357b6bb21d89f7726014dd2956977e94deafa257923bf34045 +size 1728650 diff --git a/video/wJaCsnT9UE_39027767.mp4 b/video/wJaCsnT9UE_39027767.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3c70932ccf2b2b3115effa88c5eb7da484ae825b --- /dev/null +++ b/video/wJaCsnT9UE_39027767.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:623bce5685388ad2770cbacd69e3b49ef6e3daa08788b5af4c2d10f56ab063b5 +size 2864121 diff --git a/video/wN5AgP0DJ0_39026251.mp4 b/video/wN5AgP0DJ0_39026251.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ec901b4cf8e1a811c3e73cfd0b8f902365e8ce30 --- /dev/null +++ b/video/wN5AgP0DJ0_39026251.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d01eaf5d2e16746bac905e33801920801a2671d001c368d4be68910ae218dfd6 +size 2418056 diff --git a/video/wT5AgMVkaJ_39028088.mp4 b/video/wT5AgMVkaJ_39028088.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..0b2e86ec6c09b2adc52139b2dae10d5840450964 --- /dev/null +++ b/video/wT5AgMVkaJ_39028088.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf8b13e2e411ebed88ad15a03c4ae043430d5c5365b26a8b6049ef27ffc01b10 +size 2524939 diff --git a/video/wT6GHk5ShC_39026477.mp4 b/video/wT6GHk5ShC_39026477.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..73d4f19752dcff565afdeeb882a0093699d9f234 --- /dev/null +++ b/video/wT6GHk5ShC_39026477.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2cfdd288cf2dc14a8ba832e6a21b218a3759d72eb7113d8016d51cfc7e0d72e +size 2965650 diff --git a/video/wTIzpqX121_39024864.mp4 b/video/wTIzpqX121_39024864.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2b98f669814e6ab6b9fbe309a9e9a808f7c2c1a7 --- /dev/null +++ b/video/wTIzpqX121_39024864.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7014ff11dc2f6f09615044c94dd9658b2435d93a7bdb3cb653ec0df93793d03b +size 1774709 diff --git a/video/wWguwYhpAY_39026506.mp4 b/video/wWguwYhpAY_39026506.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8e53275cd3dd36e8d78ac531bc75502b74651132 --- /dev/null +++ b/video/wWguwYhpAY_39026506.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd9246e711f1f07d31e1d475f58253cb4a4a69b733cf9fcfa998b500560ee60b +size 2635425 diff --git a/video/wYvuY60SdD_39018588.mp4 b/video/wYvuY60SdD_39018588.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7befbd7faeaf80d2c6464def385d447c4cba4a9b --- /dev/null +++ b/video/wYvuY60SdD_39018588.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5307706191c710c522137a0092dd67b8223dc448d1078330b80653c9edb63916 +size 8306 diff --git a/video/wZgw4CrxwK_39027622.mp4 b/video/wZgw4CrxwK_39027622.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..966294be3b42f192ad9a60b3556daf96fff348e6 --- /dev/null +++ b/video/wZgw4CrxwK_39027622.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b9e839c11ba885209b886565ca0db7e98a9ae463dc22e50cce292728580a167 +size 1968362 diff --git a/video/wZigMVFURk_39026137.mp4 b/video/wZigMVFURk_39026137.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5abb0b74947f45772e3a5cf368861faef734ae31 --- /dev/null +++ b/video/wZigMVFURk_39026137.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8894272f638a0d1f37786e10be7d92689daa11acbcfca3449732202284435c9 +size 3611151 diff --git a/video/wblxm5zdkE_39028670.mp4 b/video/wblxm5zdkE_39028670.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9636504b27b250221b2d885045961076492a1837 --- /dev/null +++ b/video/wblxm5zdkE_39028670.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ddf291a9b280871ac17ead68b48698316c2b8dbf75422871b099dc977876ff30 +size 10184961 diff --git a/video/weemASPtzg_39024572.mp4 b/video/weemASPtzg_39024572.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d8db8300df2d59ac90742741a11df3ec221937ba --- /dev/null +++ b/video/weemASPtzg_39024572.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e9bfce187888617fb15af417282aabdd403c47b82626c6ffc02edde41a84d5dd +size 2637560 diff --git a/video/wfU2CdgmWt_39025419.mp4 b/video/wfU2CdgmWt_39025419.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e180fc40a8304e7f9ec2a5820f0ec7de654db6f4 
--- /dev/null +++ b/video/wfU2CdgmWt_39025419.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f438ece583d95c35e22cb6e953789de07171d81a819efdec8dee02cbd22beb6b +size 2631782 diff --git a/video/wg8NPfeMF9_39018583.mp4 b/video/wg8NPfeMF9_39018583.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..db6035f4b8d4a9c99a6da40d42e109611c0fe397 --- /dev/null +++ b/video/wg8NPfeMF9_39018583.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b6e65763fc6d1cac0dde30681ecd719dc2973231a6e1de7b38c8e46452a2587 +size 6490672 diff --git a/video/wiK6bwuxjE_39028867.mp4 b/video/wiK6bwuxjE_39028867.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1ea5953a073c52dbb7bc3068a2c06f120f2cdd3a --- /dev/null +++ b/video/wiK6bwuxjE_39028867.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cace2509e125c57ab8ebb969edbd684883576e8593af0ebb071e5a82ad822d43 +size 2077146 diff --git a/video/wiMaws0FWB_39025485.mp4 b/video/wiMaws0FWB_39025485.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..101ac5986e16c8551f8b9587c3a9bc1f7c3e3502 --- /dev/null +++ b/video/wiMaws0FWB_39025485.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6bd34ca221a53face9e6b81d74092a47067bc1a019ab4a2bda987e1eb125e937 +size 837344 diff --git a/video/wjbTHLUSzU_39027517.mp4 b/video/wjbTHLUSzU_39027517.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..56cf1c3062d79d0a5c8c465987c66806ed33c42d --- /dev/null +++ b/video/wjbTHLUSzU_39027517.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6d26e5b453e545c2db16195410fd7e50a6d4566e8e67f1a0f6c4478c945ac3b +size 2355669 diff --git a/video/wlqfOvlTQz_39025123.mp4 b/video/wlqfOvlTQz_39025123.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b80dde7b9b6f6da80df9bd507befafc61f3c4b67 --- /dev/null +++ b/video/wlqfOvlTQz_39025123.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:525c3e35f6c617039261ad4bc2405dff73452124bdd4b23e24d64d47a443387b +size 2458616 diff --git a/video/wqs2RMq4CW_39025835.mp4 b/video/wqs2RMq4CW_39025835.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..6ea5253d4b2bd33a8407cd044ace221049d93332 --- /dev/null +++ b/video/wqs2RMq4CW_39025835.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9cef3b1e59f3cdf8765bd65da53a842e9a7bee1602d856debea43400b6c0e85a +size 3041783 diff --git a/video/wsHMb4J2o9_39028725.mp4 b/video/wsHMb4J2o9_39028725.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..acc2f4bf6bac6b23f29bc62064abe4d9435264dc --- /dev/null +++ b/video/wsHMb4J2o9_39028725.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:901c4f8b5715a58c7dcc2e653d8cbcdc4122650bd79112dfd9aaf3a078bfa71d +size 1379108 diff --git a/video/wsRXwlwx4w_39018612.mp4 b/video/wsRXwlwx4w_39018612.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f10a9816b8b1d40c2b99a5e3d53268d120564705 --- /dev/null +++ b/video/wsRXwlwx4w_39018612.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7fee37e4ba5e0506a38f722b610f41bad6a65e7bd5693feda6ce8f583327695b +size 2819425 diff --git a/video/wsqDJHPUHN_39027738.mp4 b/video/wsqDJHPUHN_39027738.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..20df32a0fdf52853f163bb1fa277d70885dc663c --- /dev/null +++ b/video/wsqDJHPUHN_39027738.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:3e8e5ed37dde046ff6cd9e91e1ad5a68ab8a7cd1a47b55e842cb97c36643e858 +size 2769525 diff --git a/video/wz2KvvEk44_39025169.mp4 b/video/wz2KvvEk44_39025169.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..391117cf4720c7632d22f9f08155966c4789e2af --- /dev/null +++ b/video/wz2KvvEk44_39025169.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d470b65f9a20d6b705e05a30e6a2a284435ff5754bae083dc85236039277afe8 +size 2619982 diff --git a/video/wzof7Y66xs_39024705.mp4 b/video/wzof7Y66xs_39024705.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ccbc170998a13f71c75c15f2d613da5bb392da74 --- /dev/null +++ b/video/wzof7Y66xs_39024705.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5cb4cd544e0eb4dd36e4618b0b16d9471c57dc9ef89561d71dd094f4a70dfc76 +size 2610594 diff --git a/video/x1ptaXpOYa_39018574.mp4 b/video/x1ptaXpOYa_39018574.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..64c6a4185fb5d0343da24993f50216235d1f54c4 --- /dev/null +++ b/video/x1ptaXpOYa_39018574.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f53b49782218be0291ec72da0f68e32f784e778e41dcf44cc4b147e522a31d6d +size 2946568 diff --git a/video/x2780VcMOI_39026657.mp4 b/video/x2780VcMOI_39026657.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fe8dbddea6f984c324a142f2f8d20a72d4f2b5d6 --- /dev/null +++ b/video/x2780VcMOI_39026657.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f08b90a35e6b74444f2d441972b573f80d3cd6302e70d987bdc0ba67f44fde2f +size 2558415 diff --git a/video/x2zY4hZcmg_39026162.mp4 b/video/x2zY4hZcmg_39026162.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8801b0227e3c25fb1d77e1779fe36244fe234e4b --- /dev/null +++ b/video/x2zY4hZcmg_39026162.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:32c40d67715625bec8d1d27bc31a787b5b21d8dc057c324cf046dce68ce6944d +size 2689922 diff --git a/video/x33oWJQyH0_39026982.mp4 b/video/x33oWJQyH0_39026982.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..af2823d7c4c858df9ce547179da29bff8787c75a --- /dev/null +++ b/video/x33oWJQyH0_39026982.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff776bb0cb8c8f044bedbcc798c46c4f999c05a27788b99a03a7857019ee49ee +size 2577085 diff --git a/video/x4EoTQW7ka_39028586.mp4 b/video/x4EoTQW7ka_39028586.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e9b7842c8c5943f9fd9cd866a3729d7c31932a20 --- /dev/null +++ b/video/x4EoTQW7ka_39028586.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da512ba8f1ec0597cc058116815f0b8c21ce59941b48dfdbd452e734da5cac28 +size 2201740 diff --git a/video/x4Kk4FxLs3_39026591.mp4 b/video/x4Kk4FxLs3_39026591.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..08f1815c4bba1a7fe826cf600d85dae63c78a666 --- /dev/null +++ b/video/x4Kk4FxLs3_39026591.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:01142e2e991fc40370d33496fa468532662c75610880a106b6349f238306ced0 +size 2980159 diff --git a/video/x7AD0343Jz_39026935.mp4 b/video/x7AD0343Jz_39026935.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0686a9dabfeca1be082d790cd42c268710bc7f05 --- /dev/null +++ b/video/x7AD0343Jz_39026935.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:874dcde4073ee7574948a9f97a1e0cf9db2b7a49da001ba1a69f4582163b98ad +size 2439279 diff --git a/video/x7d1qXEn1e_39018570.mp4 b/video/x7d1qXEn1e_39018570.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..22530009954ea5c5124997e9ed35c07a1c9abd98 --- /dev/null +++ b/video/x7d1qXEn1e_39018570.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:754c602bf445f6c10872d84398d47f02c69fd6786f483a04ca265f6cb1f6b5e0 +size 2841037 diff --git a/video/x7pjdDod6Z_39027013.mp4 b/video/x7pjdDod6Z_39027013.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8ae668f0d596c4b586e7772095c1c93373ae1413 --- /dev/null +++ b/video/x7pjdDod6Z_39027013.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82e8fac7433ec94acef029297afb16c2e5c55086a19c4aebd11e631c353fd674 +size 1930277 diff --git a/video/x9eFgahVBI_39024759.mp4 b/video/x9eFgahVBI_39024759.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..79645df3b5a1a2f382b58d211b63bc9d19ffb45f --- /dev/null +++ b/video/x9eFgahVBI_39024759.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ef43b48708f9e6a75010471850e92cfd06dd698331e45677a804aa310ff03ba5 +size 2705499 diff --git a/video/xCIbVuXwPM_39028758.mp4 b/video/xCIbVuXwPM_39028758.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..687cdd6f719a12cc1e235d61dadc779b2f27afd7 --- /dev/null +++ b/video/xCIbVuXwPM_39028758.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:27f4eed24e6ebaf156810310a58f8f795c2edf7f1fa739339fc84d7c753142c0 +size 2462354 diff --git a/video/xHmCdSArUC_39018566.mp4 b/video/xHmCdSArUC_39018566.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f122a327282879bf6ca3b0fe7d7267bb2e08fb67 --- /dev/null +++ b/video/xHmCdSArUC_39018566.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d198b48efd33cb5b0bb53fd8786a24c63982bd168920fd3c89de361ba0fa6d98 +size 2853863 diff --git a/video/xJ5N8qrEPl_39017064.mp4 b/video/xJ5N8qrEPl_39017064.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7e503cb68bfe337b976ef65a1d5e20bcd6704f73 --- /dev/null +++ b/video/xJ5N8qrEPl_39017064.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e077a4bcd1f5f9623e425bd44e47ed8b87c728daf5089cefb60655f07703494d +size 2452825 diff --git a/video/xJbsmB8UMx_39018564.mp4 b/video/xJbsmB8UMx_39018564.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c22bba10f0c93a4ae57187e26704eb88d017c65b --- /dev/null +++ b/video/xJbsmB8UMx_39018564.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3fd0575064eef23f8db7a94990ad3c33d265957ba189045cea3bdc971b9099a6 +size 2830843 diff --git a/video/xL7Ve14AHA_39027288.mp4 b/video/xL7Ve14AHA_39027288.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d9f85d8f7ff114a43b3da7c8089d659a2097b015 --- /dev/null +++ b/video/xL7Ve14AHA_39027288.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:912de01a8d043d3943968493fdeee8e7200e37fd784f24375d577456025d6f18 +size 1532204 diff --git a/video/xM5m7J6Lbl_39028053.mp4 b/video/xM5m7J6Lbl_39028053.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7432f4db5b12e90bbdeb2471c2e4549c1544059d --- /dev/null +++ b/video/xM5m7J6Lbl_39028053.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:31c6235c876ee4a1f70f8409bf665efd4a1ad829c7a3bbf8f8a91551f0027fe3 +size 
2319945 diff --git a/video/xRdpCOdghl_39028858.mp4 b/video/xRdpCOdghl_39028858.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c88f85fc69e34c8a0d122a2f85dea3f4cf8c8526 --- /dev/null +++ b/video/xRdpCOdghl_39028858.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1c0102bb6efec6c6fbd9cf911226bb029960404c865eb2809a25630635ac5acf +size 2182674 diff --git a/video/xUzWmFdglP_39018562.mp4 b/video/xUzWmFdglP_39018562.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c07ce508dda0bf14cf85d841a79c5bddb68ea26c --- /dev/null +++ b/video/xUzWmFdglP_39018562.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b9a468a14e54c99895976b88eb4eb6266a48eb5c5e8e54a1fe5d1cf477a9e8a4 +size 1766887 diff --git a/video/xZDWO0oejD_39018561.mp4 b/video/xZDWO0oejD_39018561.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..cd8d310a5e16d35c8bd1ded2e45dbe817fee9e9c --- /dev/null +++ b/video/xZDWO0oejD_39018561.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e3e674ea2c3879ce04c3648927ec127bce498db3922457010001662d0569933 +size 2895193 diff --git a/video/xZKXGvLB0c_39027337.mp4 b/video/xZKXGvLB0c_39027337.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7ca1c7dbf89c52e1ec4acb67feb78ec0b37a3feb --- /dev/null +++ b/video/xZKXGvLB0c_39027337.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a196a75a5d449602d8fa9604316e0adbe46f130211691620a7c1f34afb961549 +size 2376052 diff --git a/video/xZxXNhndXU_39028235.mp4 b/video/xZxXNhndXU_39028235.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fb2bd7c897cbcc79f8393de2331d54784ad33d62 --- /dev/null +++ b/video/xZxXNhndXU_39028235.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d784e7d6ffc75b46c9818544cd41896750ea26a306a19546951ae88e8c6bf512 +size 2847754 diff --git a/video/xavWvnJTST_39028149.mp4 b/video/xavWvnJTST_39028149.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0657632a817479aaae01e47e6f25995736186e9d --- /dev/null +++ b/video/xavWvnJTST_39028149.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06092ef4c230065eaafef99db2e398b35321ebd9ca2dca73961a4c7e67391422 +size 1337611 diff --git a/video/xcF2VbyZts_39027815.mp4 b/video/xcF2VbyZts_39027815.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c9f3c044045e93393bc44f2a80f4763dd51dd36b --- /dev/null +++ b/video/xcF2VbyZts_39027815.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c5036164e8efdc64de52fcd76f3be3df7e6860a70aacb2456a0bf626da20c8c6 +size 2864220 diff --git a/video/xcMmebCT7s_39019083.mp4 b/video/xcMmebCT7s_39019083.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..df12cea50373f8f1089689433d906807afa583c6 --- /dev/null +++ b/video/xcMmebCT7s_39019083.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2c7b782d7ab0b93b6fe65204e548daae68e595db161ec781b5d1dda2bf3282bf +size 2725971 diff --git a/video/xcqSOfHt4g_39024647.mp4 b/video/xcqSOfHt4g_39024647.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..9062699e19573744513c637d8cb2cd73d964891c --- /dev/null +++ b/video/xcqSOfHt4g_39024647.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:96ed5e1146a26c76aa8c2911341463834342556407deb98db469d2fb53eeed43 +size 2731703 diff --git a/video/xkXdE81mOK_39019164.mp4 
b/video/xkXdE81mOK_39019164.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c8955b7e6182b01945ffa429c59a0b13240782dd --- /dev/null +++ b/video/xkXdE81mOK_39019164.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e061c5e3b6ec91d8dc9124baaab5fc8e4538d3c8e55ba0d438b731ea3998b08b +size 2098703 diff --git a/video/xnmm1jThkv_39024893.mp4 b/video/xnmm1jThkv_39024893.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7a8d9276dc8373b8b709fb9d7a0351559e474394 --- /dev/null +++ b/video/xnmm1jThkv_39024893.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c46fe10faab10ee9ad0624ba0b5b40c02ca56ae88e4098ab46f8882415293243 +size 2832762 diff --git a/video/xoCFd1WKpf_39024859.mp4 b/video/xoCFd1WKpf_39024859.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..31d8c5141d1e64cd5588b8686aca4d7a7b08fe78 --- /dev/null +++ b/video/xoCFd1WKpf_39024859.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:33f319d467156a9020474073716f8c043bc0401d4bace58fbb4dd6aa96aa9d05 +size 2641838 diff --git a/video/xqc8yyhScL_39026803.mp4 b/video/xqc8yyhScL_39026803.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..93abc7cf8ae02a1515d3da711205ba1fb99465ab --- /dev/null +++ b/video/xqc8yyhScL_39026803.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d79417dcc171e131d7f165c62b4c3dc84985ccb3abbadffc6e55b4c6893f076 +size 2220665 diff --git a/video/xrbgXJomJp_39027527.mp4 b/video/xrbgXJomJp_39027527.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..70b25aa32261a088329ed033c9e2cf9e369511b7 --- /dev/null +++ b/video/xrbgXJomJp_39027527.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a5d7f347228c53fd17997c7966711b57dcf40606824daab43ac06d1791993dc +size 1542332 diff --git a/video/xt9Bu66rqv_39018557.mp4 b/video/xt9Bu66rqv_39018557.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..26f68d63e40ba22991cdbd5ec68b35b35cf0cfbb --- /dev/null +++ b/video/xt9Bu66rqv_39018557.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66c0d07c9a6eef0957d4e5a98017a27f9f92e61791933669d966f997f808e47f +size 2657974 diff --git a/video/xtK3gZjQDC_39025014.mp4 b/video/xtK3gZjQDC_39025014.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..eb067a21a6d49490c40f14ba279d5ab013768711 --- /dev/null +++ b/video/xtK3gZjQDC_39025014.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce4f48c02450e5adf5ef10170edc2cd25028e93288f2f25ecfa67abdac3b603e +size 1751640 diff --git a/video/xtOydkE1Ku_39019176.mp4 b/video/xtOydkE1Ku_39019176.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..dc66a604012473611f39dc89a64e73a38b6fe7b9 --- /dev/null +++ b/video/xtOydkE1Ku_39019176.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19e5e4bef413bdf0952daf23f95391e067c66a2a2f57de97af1c3f2edbb25d13 +size 2018926 diff --git a/video/xuY33XhEGR_39018743.mp4 b/video/xuY33XhEGR_39018743.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..51b4857a3812c15d9558e2ef5693a587bb62b614 --- /dev/null +++ b/video/xuY33XhEGR_39018743.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6f8c85dcca981f1a5d79aa7b67a65b3e5cb97f869afdebb0648f4de20a0984d3 +size 2218828 diff --git a/video/xutrKezbPF_39027292.mp4 b/video/xutrKezbPF_39027292.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..c8831a573bead6ff600bcaa7462b15ed7d22c519 --- /dev/null +++ b/video/xutrKezbPF_39027292.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1285b257144baa55491cb18b72f2d5e8b797652741b126f89e0cc662fa13b4b +size 2495907 diff --git a/video/xvYI7TCiU6_39024598.mp4 b/video/xvYI7TCiU6_39024598.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3030c276651c771819300d50b2829741c1532ab0 --- /dev/null +++ b/video/xvYI7TCiU6_39024598.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7ef6cabc9a54111c039297e08e05d0256ed3a570110d43db322fb7ec07c551fc +size 2056272 diff --git a/video/xxY8d4rnSb_39026214.mp4 b/video/xxY8d4rnSb_39026214.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a0d997fbddae03362b3b9e025c7121bc315ff2c8 --- /dev/null +++ b/video/xxY8d4rnSb_39026214.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:52775cf06ad0a78897cef04ca95e6f9be452f3a069895d9ff24b0d152e72d780 +size 2109389 diff --git a/video/xyxU99Nutg_39018750.mp4 b/video/xyxU99Nutg_39018750.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fc3d83e2a2fcf52d904164b51f55b943a0321204 --- /dev/null +++ b/video/xyxU99Nutg_39018750.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9dff2ffb024a53f0def0cb24de27a851157e732c3eec2ef2b550f6e0dfc7749e +size 2201501 diff --git a/video/xzCuBjHQbS_39026001.mp4 b/video/xzCuBjHQbS_39026001.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1d81b925497fed2da441e8cd7230e3700b1d978d --- /dev/null +++ b/video/xzCuBjHQbS_39026001.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1402a457824ded7f79503e33843698dc361e710850a9abaecedcd77532ded20b +size 2447326 diff --git a/video/y21ZO6M86t_39017160.mp4 b/video/y21ZO6M86t_39017160.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..80e2461dc25d30fd3fb84e4627466b5c1afbe254 --- /dev/null +++ b/video/y21ZO6M86t_39017160.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3b9a863096ec5e5506d86316051f1022c74846ee0646fb654df80609f418bcf +size 2612993 diff --git a/video/y2fAmldTIf_39025334.mp4 b/video/y2fAmldTIf_39025334.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0e5180551ade23a6be2b00ad3e3c4d2c5c821015 --- /dev/null +++ b/video/y2fAmldTIf_39025334.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dcdf1ecb0af39eaadffac51684ac021a96b557015235d515695230bf43ed4c55 +size 2485437 diff --git a/video/y6qhVtFG77_39028497.mp4 b/video/y6qhVtFG77_39028497.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ccaebcf87539ae6a788f80f971ae574942b77775 --- /dev/null +++ b/video/y6qhVtFG77_39028497.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6dcde5758090e2a8b817f43ba326d86567139c08196167863fec442310c9656c +size 2542562 diff --git a/video/y8P633E5HQ_39026336.mp4 b/video/y8P633E5HQ_39026336.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ba4dafc635457626da2df0cdae9e90885af3c44d --- /dev/null +++ b/video/y8P633E5HQ_39026336.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b469a08751b3af76347b31b7d9bf21512531a39570fbbf1b8209402874edb7a +size 2831612 diff --git a/video/y929esCZNJ_39027105.mp4 b/video/y929esCZNJ_39027105.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..8fc17854652aed944971aa31d04afe4dac761068 --- /dev/null +++ b/video/y929esCZNJ_39027105.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc5079df56466e0e1699c27b5d0c37bf5b83f2cfaaa23f3f8ebd7944cb1d9553 +size 3176401 diff --git a/video/y9huwsnGRJ_39027641.mp4 b/video/y9huwsnGRJ_39027641.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c090257c75c35b68acf82b540bdb02887dc50297 --- /dev/null +++ b/video/y9huwsnGRJ_39027641.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:87a37c72c25a77224d133303d66f5d307ec837ce4e7d426f8a2571a2124da032 +size 2626731 diff --git a/video/y9zIRxshzj_39025847.mp4 b/video/y9zIRxshzj_39025847.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..756fc91f8f0942a1f47d743a6db8ddafd0b10046 --- /dev/null +++ b/video/y9zIRxshzj_39025847.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3dbaf8b23479b2957f7134d544a8478e25f0366f3c118d481434431b8f31a5c6 +size 2443373 diff --git a/video/yAAQWBMGiT_39024640.mp4 b/video/yAAQWBMGiT_39024640.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a6c949949cf6ac68712083b80c9bf36576793aa8 --- /dev/null +++ b/video/yAAQWBMGiT_39024640.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0824b28dc079ba1fafc2bb6c0f88b17cdb1794e1b5da96bd1c8888530071a683 +size 2970835 diff --git a/video/yBHbeSpwYS_39026023.mp4 b/video/yBHbeSpwYS_39026023.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c4d8509585b51ce29334b25aac04534bed29ece9 --- /dev/null +++ b/video/yBHbeSpwYS_39026023.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:229a9bf9507501a139bb7899518ec804bb08e81f3db1b9ac25a38ffe90f1adc9 +size 2326636 diff --git a/video/yBrxziByeG_39028852.mp4 b/video/yBrxziByeG_39028852.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..07c8fd71994d532d8bf08df60ef3ae01e06486b6 --- /dev/null +++ b/video/yBrxziByeG_39028852.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c17832965251b1d773d2b1670e25051854787d4f249962b473670444422b7e6 +size 2177251 diff --git a/video/yCh1z6Dcto_39027087.mp4 b/video/yCh1z6Dcto_39027087.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c81da7cf1496b4984cd2251bdce18e5fe9ae9a76 --- /dev/null +++ b/video/yCh1z6Dcto_39027087.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5cb02989a76be5dbc471bf0892df0bca9ef4ae5d10a9345aae466f31f0ddb27c +size 3282198 diff --git a/video/yN4Wv17ss3_39018548.mp4 b/video/yN4Wv17ss3_39018548.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..fd7e2138882a349a1dd45109931e76abbed0482e --- /dev/null +++ b/video/yN4Wv17ss3_39018548.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b1b9ad1dd6c840f137d0549b88fe60f9edd482822fea766426ea4ae5bad27cf +size 2421433 diff --git a/video/yOe6ajdslI_39028624.mp4 b/video/yOe6ajdslI_39028624.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ac70f48ff96941ddd2b5d93d7ba09fc3697643d1 --- /dev/null +++ b/video/yOe6ajdslI_39028624.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:06fd747c8b200fa2322e5d02e81e2ee76f00b7b6826ce0edca694387b741fc9a +size 705384 diff --git a/video/yQL5tutdaH_39024973.mp4 b/video/yQL5tutdaH_39024973.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..31bff614d00abd99e8ded33342b7839bdc8bfbe6 --- /dev/null +++ b/video/yQL5tutdaH_39024973.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e22060d2fc8b3d416d646105d8ae2d97a16e4601290186dbfc6817391f57bdd1 +size 2538549 diff --git a/video/yRhrVaDOWE_39027026.mp4 b/video/yRhrVaDOWE_39027026.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2610a36c4349ed2fe2ebfb76fcb0251641fd427f --- /dev/null +++ b/video/yRhrVaDOWE_39027026.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:032ca7661354b7c9885aed342a6c20d32270d67e55719f6c86b3aa525a274efe +size 1599951 diff --git a/video/yRuJqoWoCs_39028304.mp4 b/video/yRuJqoWoCs_39028304.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b496649f7d5329094b555e494415c4d1d68d6adc --- /dev/null +++ b/video/yRuJqoWoCs_39028304.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e98eaeadd6baa6ea27322faed5e09f2e57da3aef96759bc8791412487aa1a41 +size 2795310 diff --git a/video/yTBXeXdbMf_39018545.mp4 b/video/yTBXeXdbMf_39018545.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0fe4d261d1fa97c25666eb735ecb3f03a0763773 --- /dev/null +++ b/video/yTBXeXdbMf_39018545.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77e6fe34e9e067465d580319806a0d331712abffedaee485e95f82422b7cbc29 +size 2903966 diff --git a/video/yTTomSJsSW_39026324.mp4 b/video/yTTomSJsSW_39026324.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..eb4606c9300aa6ac7fde9c0959860d277ab7e760 --- /dev/null +++ b/video/yTTomSJsSW_39026324.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:217e270b08e5619fba804194f81559160075a227de3bc365e869123a509f28b6 +size 2283944 diff --git a/video/yUckuDjAE0_39027073.mp4 b/video/yUckuDjAE0_39027073.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..25aff107357c128ca83e13d17c582759c405ce46 --- /dev/null +++ b/video/yUckuDjAE0_39027073.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5cb635e2f29bd84ecefa362b8b9d4687102b53b4e13656c347ceb83641cdc7eb +size 1490710 diff --git a/video/yUqUBGioBG_39027795.mp4 b/video/yUqUBGioBG_39027795.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3d6247ac43c79efc6b9701cae7812dae9d02e3c0 --- /dev/null +++ b/video/yUqUBGioBG_39027795.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19db2dd25f90d63ff66a6910619feb93c53e6c1ca85e5fe016f897f18a688cf0 +size 2920430 diff --git a/video/yV6fD7LYkF_39018691.mp4 b/video/yV6fD7LYkF_39018691.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..02d6d093008d20b0ea7fdf0bb8473f83332f6172 --- /dev/null +++ b/video/yV6fD7LYkF_39018691.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f218af773cbe7823d447260f4db16bee0883b1efcc5d7dd1879c2b845bef6e6 +size 2988250 diff --git a/video/yVzWlFhpRW_39028033.mp4 b/video/yVzWlFhpRW_39028033.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ab6ae205b5ac6ce34a688439240863e539890f33 --- /dev/null +++ b/video/yVzWlFhpRW_39028033.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f2cea5a9c9864d6921a38b5091128603558cb7dc6d63981a02273fe1f377b97 +size 1886103 diff --git a/video/yWq89o19wf_39027640.mp4 b/video/yWq89o19wf_39027640.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..21b0f34ef2caef579832e3e72171c38ed5f846c1 --- /dev/null +++ b/video/yWq89o19wf_39027640.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:171c9bb396264c96c76f37633f2d2706fc4076160c8f26b6b7c659bc641d0566 +size 1778141 diff --git a/video/yXW2dCTQdi_39025703.mp4 b/video/yXW2dCTQdi_39025703.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..b0fb6d47ac7c0fca4a7dcf2e2ca69f7fbcb54db3 --- /dev/null +++ b/video/yXW2dCTQdi_39025703.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:62dd0266346511635e1b41c3ec757d1261c338a4fea8e06d5710cffa5c0e6989 +size 2429790 diff --git a/video/yXpfrLMIr2_39027902.mp4 b/video/yXpfrLMIr2_39027902.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1ced7d2b2c9cf340aef5825be131fe880b42fd14 --- /dev/null +++ b/video/yXpfrLMIr2_39027902.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:043359a980a58338b272038531415a49afec2672c83e96017f1c4c841d290b77 +size 2131789 diff --git a/video/ybHPzL7eYT_39027854.mp4 b/video/ybHPzL7eYT_39027854.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3040d3fd5fb3ace8c40a51a6636038cbc5e77504 --- /dev/null +++ b/video/ybHPzL7eYT_39027854.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2a3f8e988f47df2a72a8ea298adcfa36c2cc48039f540f6ab33abe7d52d2923f +size 1884780 diff --git a/video/ycF7mKfVGO_39019133.mp4 b/video/ycF7mKfVGO_39019133.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..07454e450284c761ad2cba2804b9d3b3b728c569 --- /dev/null +++ b/video/ycF7mKfVGO_39019133.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a47fbe7a69b4309d9107113168124414da5ff701540e0b84522e0768b82e02d6 +size 1297063 diff --git a/video/ygDl8q02gA_39028773.mp4 b/video/ygDl8q02gA_39028773.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2fc94fac24a9f8a614360adb0b4ba0e8fe564b73 --- /dev/null +++ b/video/ygDl8q02gA_39028773.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e210ce3ff7e09fa8a4e3b2930fb557620b3b6accc54541eb5965984398ac2bcd +size 2271877 diff --git a/video/yiXZZC5qDI_39024827.mp4 b/video/yiXZZC5qDI_39024827.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7b0c0ef8faa8ec9550054e5fd24ca48345da070e --- /dev/null +++ b/video/yiXZZC5qDI_39024827.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e220c4a54cc0494e3f3d76fa6b2a17ab8b5e54234b2022d45363ddfb45973341 +size 2846843 diff --git a/video/ykQnxko1cJ_39025598.mp4 b/video/ykQnxko1cJ_39025598.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c803994eb46045331967a160d2acb603352a7d79 --- /dev/null +++ b/video/ykQnxko1cJ_39025598.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:df0d3137410313f20542fa7be2271a9b26f6dbe8e395541ccf8828207ec8bde3 +size 3043613 diff --git a/video/yktQNqtepd_39028470.mp4 b/video/yktQNqtepd_39028470.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..2e038d62657e8f4584cdf8f7540737b4d971daa3 --- /dev/null +++ b/video/yktQNqtepd_39028470.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46ac08f68e25e4d778f85a7882c0c1e7db9622c647bcb003f5723d01a9be8a7e +size 2821705 diff --git a/video/ylceJ2xIw5_39028797.mp4 b/video/ylceJ2xIw5_39028797.mp4 new file mode 100644 index 
0000000000000000000000000000000000000000..2bdfb919a629840cd1016b436c0e0a3daddcb522 --- /dev/null +++ b/video/ylceJ2xIw5_39028797.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:49038da3059ab5142c016905a09070d8bf61da41fa6adf0cdf9b4e715674171d +size 3017395 diff --git a/video/yltJAlwtW9_39024873.mp4 b/video/yltJAlwtW9_39024873.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0c0aac1ae2e04a3b14cfb15f6878afd3a7f190fa --- /dev/null +++ b/video/yltJAlwtW9_39024873.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:099c51939ab138038f075342fd61fb1ebd61ef9b9a70b8be4e9491f58f46e175 +size 1912630 diff --git a/video/ynJr0RW6FR_39024397.mp4 b/video/ynJr0RW6FR_39024397.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..77071dd89edb834222d8ab15acb5b0426454e428 --- /dev/null +++ b/video/ynJr0RW6FR_39024397.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8c49dab979608fadcf98fd9a46cfe2b70af6011fa5f7a8c880be22f872d9bad5 +size 2109371 diff --git a/video/ypEamFKu2O_39025640.mp4 b/video/ypEamFKu2O_39025640.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..935b2df21fb2075611e87597e54a039b19889faa --- /dev/null +++ b/video/ypEamFKu2O_39025640.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d359bffa6b9a3717e1fb79d87a3bbffc31aeffb59ac04183480d9dd6789463d8 +size 2296561 diff --git a/video/ypFgcT147Z_39028263.mp4 b/video/ypFgcT147Z_39028263.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5cb7572a0bc48f6afccb36af3f2e8a3bb081491e --- /dev/null +++ b/video/ypFgcT147Z_39028263.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e9757d87ffd06c8bc354d6e01220201c15094d0a11c64fb233445702f15772a +size 2574005 diff --git a/video/ypaqE8UwsC_39025361.mp4 b/video/ypaqE8UwsC_39025361.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..15f2960ca19d2eedbe4e26bf5d4b8e84203f92e0 --- /dev/null +++ b/video/ypaqE8UwsC_39025361.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b1c33b87edc2e98542b88088ce005fd1e982d0fa4af5de6494f1da2a6ab23d9 +size 2826603 diff --git a/video/yppcLFeZgy_39024896.mp4 b/video/yppcLFeZgy_39024896.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..df104c89a1929d6eac841c5c857ae1efc8e72668 --- /dev/null +++ b/video/yppcLFeZgy_39024896.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b130ae0de086d173be206297aba59c89a28ca68a643f29db135fc9549c9c8ca +size 1759419 diff --git a/video/yxKZGQLzOP_39018537.mp4 b/video/yxKZGQLzOP_39018537.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a6f657542cf418b643e3c7dd902b12a93dea9fe4 --- /dev/null +++ b/video/yxKZGQLzOP_39018537.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b360bc962b9a583e53bbe6bbd13464def4e8766c67bbb649bde56d4a560fbf64 +size 2402166 diff --git a/video/yxOrSmS5wR_39028655.mp4 b/video/yxOrSmS5wR_39028655.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..34efb467afed314ab919ad7049c8795ce09c2858 --- /dev/null +++ b/video/yxOrSmS5wR_39028655.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3264fd1c98c4e33a9ba42bd98e1cca5a8fea03a87a2cf170f772286e45eff23 +size 7773 diff --git a/video/yxjWAJzUyV_39028558.mp4 b/video/yxjWAJzUyV_39028558.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ca4a64408613a296c8d021d364573f59d36f5190 
--- /dev/null +++ b/video/yxjWAJzUyV_39028558.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:189df6590fd62ee09792771025b7284bb63be6bd5aa86d38cad9e541b672b2d8 +size 2583454 diff --git a/video/yySpldUsU2_39025470.mp4 b/video/yySpldUsU2_39025470.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..39f1e1bde661b3bb4ccf0569ff17d41ec6b3e0fe --- /dev/null +++ b/video/yySpldUsU2_39025470.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8420fe8735b0711b747ac528be080bfd167d7de8b284ea87aa72469935817f5e +size 2153453 diff --git a/video/z0I2SbjN0R_39025612.mp4 b/video/z0I2SbjN0R_39025612.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c076cca9942f4c9fb8a623316fbff784fda19f69 --- /dev/null +++ b/video/z0I2SbjN0R_39025612.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2dc33057158441cfef6ebe73d4b63c6607124735755141a376d9ad9760ee9ec6 +size 2509607 diff --git a/video/z4duW3KzlD_39027273.mp4 b/video/z4duW3KzlD_39027273.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7f02c8fa579939eaefd894e23a6cb85d03fe6c85 --- /dev/null +++ b/video/z4duW3KzlD_39027273.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c1fb263df9125784ce8f0297533ade57f41d7e09dc4ada420de5c6fb2bfe3e1 +size 3215455 diff --git a/video/z4eVwH484M_39024756.mp4 b/video/z4eVwH484M_39024756.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e4aa43e509db38bf86ac30968ea91003b9451ea9 --- /dev/null +++ b/video/z4eVwH484M_39024756.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:414265435e072d8aa369a69f97a51395bca4bfaf037bd37d0fb26ab81c8c0ef7 +size 2821942 diff --git a/video/z6KS9D1dxt_39019004.mp4 b/video/z6KS9D1dxt_39019004.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d44cc0d3a6ce7811b4d568a9fc8ccbeff41f7f45 --- /dev/null +++ b/video/z6KS9D1dxt_39019004.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:13116a6644fd13418e102588fb174166fb0aae7e7e2817441b2ac1d589ebd177 +size 1838404 diff --git a/video/z6reLFqv6w_39024542.mp4 b/video/z6reLFqv6w_39024542.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5c23e280d0c4d59b362b26a53111ccb5633808b6 --- /dev/null +++ b/video/z6reLFqv6w_39024542.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb5194aed69c882b591dcbcb6abce64a03e6d23cbeb663f794540a7579f9acf9 +size 2703431 diff --git a/video/z7h7zMgyPJ_39024878.mp4 b/video/z7h7zMgyPJ_39024878.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f76082ae549dc772cbe523e8c0f90b10f1b348b8 --- /dev/null +++ b/video/z7h7zMgyPJ_39024878.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:34e30b0b28988a6835896f5c8a8c4a103ea352e4b6fd9c938ff0b5fede86f943 +size 1399437 diff --git a/video/zApFYcLg6K_39028302.mp4 b/video/zApFYcLg6K_39028302.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..355556216ba542c895c54dbae8768eb04debcc1c --- /dev/null +++ b/video/zApFYcLg6K_39028302.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:59e3e68baef553afd2e857b87b388dede6695ede7bd76cbd76ca0fb34321601a +size 1948656 diff --git a/video/zBG7WogAvm_39027837.mp4 b/video/zBG7WogAvm_39027837.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8c450889d52514f2bf3977e33d334710d51f5304 --- /dev/null +++ b/video/zBG7WogAvm_39027837.mp4 @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:3054b8ccd5f12443b15e7bf33895fd9ca5b3d11522c4ba9133e3060e9940fc72 +size 2327152 diff --git a/video/zDaD8zv8tG_39025073.mp4 b/video/zDaD8zv8tG_39025073.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d58a87e4b84f736a6d6e0ef1c568a7e82c3ed07c --- /dev/null +++ b/video/zDaD8zv8tG_39025073.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a86925a515e49c49658dcf6312a15b6a097d1f514a80e065c8306397d1f720f0 +size 2796631 diff --git a/video/zGN0YWy2he_39025508.mp4 b/video/zGN0YWy2he_39025508.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..e39c5fafd9a9181d9957756cbbaf93602a8a0e14 --- /dev/null +++ b/video/zGN0YWy2he_39025508.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9903e4199de41c268aa89933cdd810cfccc82dfca873d73cb0ef157d0aa5bcaa +size 2374502 diff --git a/video/zJremsKVyh_39024771.mp4 b/video/zJremsKVyh_39024771.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..d193db6417e55c6a535d41407687a1e959eeb79f --- /dev/null +++ b/video/zJremsKVyh_39024771.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f428a2e08865bf07a9e64bbd18c96c0976e84c265e2e9c0b4b8a7a60e00aba59 +size 2375980 diff --git a/video/zLU21oQjD5_39027479.mp4 b/video/zLU21oQjD5_39027479.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..1c6d7632021f13cd801e9f845ec8ba903368eee8 --- /dev/null +++ b/video/zLU21oQjD5_39027479.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:205861ca4e0f3e96cbcce465158833ad63e0f88b898325ba9109e71665972600 +size 3487317 diff --git a/video/zMvMwNvs4R_39018530.mp4 b/video/zMvMwNvs4R_39018530.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bc7c3285f82aa39b9272f5e1e26cc029dfd722a1 --- /dev/null +++ b/video/zMvMwNvs4R_39018530.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1733ed277d820defc7ddf5c4dbce82d9fc39a8a02c94b69823f8ee000e6595fe +size 2635758 diff --git a/video/zNiJZUAlxg_39025775.mp4 b/video/zNiJZUAlxg_39025775.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..0169014d91b76ba6b9ab10c7351b81f611174dfc --- /dev/null +++ b/video/zNiJZUAlxg_39025775.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f5281d5ba6b8ccc2ed024038a220276f03a38ef8c426b9c8fd4f6e24aabe9f0 +size 3062599 diff --git a/video/zO55ovdLJw_39025114.mp4 b/video/zO55ovdLJw_39025114.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..f1edbb8a65c1b367c4b095215a68196486e076aa --- /dev/null +++ b/video/zO55ovdLJw_39025114.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:748f3cfbc128eb10cb8b480eacbd42bf23429bc8b7eefa32444e671d67082666 +size 1738393 diff --git a/video/zTu0QEpvtZ_39026609.mp4 b/video/zTu0QEpvtZ_39026609.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..70e0ed1eef5e903080ffcd49720b08cf2162727b --- /dev/null +++ b/video/zTu0QEpvtZ_39026609.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea52225edaeb4a46fb00abd3065e68dd27198a39882e6401a17538ef5cef346c +size 1882407 diff --git a/video/zWuHSIALBh_39025203.mp4 b/video/zWuHSIALBh_39025203.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..df5e4438621a2253bd3391b52a4ba61c1010968b --- /dev/null +++ b/video/zWuHSIALBh_39025203.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:4864db45a4488bf7a9e6d3f3d0fbf005a7062e0f247338c507fb81bdd568cfff +size 2481699 diff --git a/video/zZVqZRXSao_39027236.mp4 b/video/zZVqZRXSao_39027236.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..81333c7ae0e306729d45d5a3fec4ca01cbaabc94 --- /dev/null +++ b/video/zZVqZRXSao_39027236.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:11e246d395d3cd9e62ef7f897bf6b03c7ab88563837e90207b84d3d87536d647 +size 2849522 diff --git a/video/za9Jx8yqUA_39028601.mp4 b/video/za9Jx8yqUA_39028601.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..a91a1c708fb611ed9295b9d5886447eba84a72da --- /dev/null +++ b/video/za9Jx8yqUA_39028601.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fbeba85788fd431f27b5a2325633d9443a7e21c54d3aedd701cbfe10e3b51be0 +size 1876553 diff --git a/video/ziDFH8TPPK_39019250.mp4 b/video/ziDFH8TPPK_39019250.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..82ee2087ceb4ac8394705ccf455e043325ee23ea --- /dev/null +++ b/video/ziDFH8TPPK_39019250.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61ce8aefa2fa4318c67bc8c1512e1cc43c4f460f1f1f62951a4ccd40e7ccdfbe +size 2348470 diff --git a/video/ziYC4FHRNr_39026075.mp4 b/video/ziYC4FHRNr_39026075.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..24928c7d2f5917c4d4b3de7284a7aac96eb53775 --- /dev/null +++ b/video/ziYC4FHRNr_39026075.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d6ac699714e418fb7edb2dac22dc1a8855e51084460b3b193e9309fdfa622a5 +size 1083846 diff --git a/video/zkfCa4oESF_39026270.mp4 b/video/zkfCa4oESF_39026270.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..84dec8846d1fb662fb820dc79c5ed40e0cdf2849 --- /dev/null +++ b/video/zkfCa4oESF_39026270.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f814aadd9d3edcbfc14b316aeb176f15f3b9eebc8731b89dc1be312710c616a6 +size 2430559 diff --git a/video/zkhyrxlwqH_39026164.mp4 b/video/zkhyrxlwqH_39026164.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..8cf7a18fc7cc39d55899c3ba1f98416fe25f26b3 --- /dev/null +++ b/video/zkhyrxlwqH_39026164.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f5e476f76af81c2c8b97ea1a7d6cbdab61d3263a45619c87d323c04fb8fc848 +size 2727011 diff --git a/video/zlgfRk2CQa_39026368.mp4 b/video/zlgfRk2CQa_39026368.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..99c2da01df28b6f7defb33fe9032cda0dabf1d8d --- /dev/null +++ b/video/zlgfRk2CQa_39026368.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:60d217fed1179dce7b32872896402206f1d4076e9c47206bd91620e5c7397087 +size 1230702 diff --git a/video/zlkXLb3wpF_39018996.mp4 b/video/zlkXLb3wpF_39018996.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..c6ffc55f6ca8e89f5d98635d4c065c52f2e00d5a --- /dev/null +++ b/video/zlkXLb3wpF_39018996.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20bcead83bf30b7ff9c74745d0c58774848fc4cc1b144e402aa5635c76960973 +size 2420093 diff --git a/video/zm1LcgRpHm_39025597.mp4 b/video/zm1LcgRpHm_39025597.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..7950553a647f9e2c02df53dfcc9a2a9f777c3a4b --- /dev/null +++ b/video/zm1LcgRpHm_39025597.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4206f34cddb0b91ca114d4ce8d62e9c82cb6bf3ea7717a974953471ddffd9ca4 +size 
2089690 diff --git a/video/zqLAMwVLkt_39025890.mp4 b/video/zqLAMwVLkt_39025890.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..22fbc9143fde97148a18ca7ef35cc611a0c0d25e --- /dev/null +++ b/video/zqLAMwVLkt_39025890.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a654611407cf036ed4e8a74d1513ce97637553ed0c8ea52c2fb137501f9f1614 +size 2470267 diff --git a/video/ztwl4ubnXV_39024646.mp4 b/video/ztwl4ubnXV_39024646.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..3c5e158bce79d4eb34715a8c93f02d01345c549e --- /dev/null +++ b/video/ztwl4ubnXV_39024646.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:43d797b77daff21fb2b0577adfd40483b838da784321e4e6af5238f0c7728250 +size 2040457 diff --git a/video/zuwLGhgxtQ_39028785.mp4 b/video/zuwLGhgxtQ_39028785.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..5274c2fc89c409e06a8ad856550b533dc46ec327 --- /dev/null +++ b/video/zuwLGhgxtQ_39028785.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5080e5cec128ca9e4bb3ddb0216b5928a5830fc439324bf1f8bb4363e3e61890 +size 2597552 diff --git a/video/zuwpeRkJNH_39025347.mp4 b/video/zuwpeRkJNH_39025347.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..469f12410ee1e40b67c2c64202ddee120998f8db --- /dev/null +++ b/video/zuwpeRkJNH_39025347.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5837cf52e2fdd27c4c3df2a9b85de3bd6164350e14bb96670d86092d99f919c8 +size 2995713 diff --git a/video/zv9gYC3xgF_39027145.mp4 b/video/zv9gYC3xgF_39027145.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..bad88aaa760c980d26f3def4e5f4264ad3ec28fc --- /dev/null +++ b/video/zv9gYC3xgF_39027145.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:65fda63c7fd4572d4c9bdf3a0c2708770cbfc33a5f6e76d0525063719b3b886f +size 2234667 diff --git a/video/zzOOqD6R1b_39024537.mp4 b/video/zzOOqD6R1b_39024537.mp4 new file mode 100644 index 0000000000000000000000000000000000000000..ed9406f3c0248671c956115551bdd76309ba18b8 --- /dev/null +++ b/video/zzOOqD6R1b_39024537.mp4 @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf3a07aaa40dccbec2f56bfeb7b8abebdce24057916974b74a92510b0248c25d +size 2458694
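
The video files above are stored as Git LFS pointers, each recording only a `version` line, an `oid sha256:` digest, and a `size` in bytes. As a minimal sketch (not part of the dataset's tooling), the snippet below shows one way to parse such a pointer and check that a locally downloaded `.mp4` matches the recorded digest and size; the local file paths in the example are hypothetical.

```python
import hashlib
from pathlib import Path


def parse_lfs_pointer(pointer_text: str) -> dict:
    """Parse a Git LFS pointer of the form shown above:
    'version <url>', 'oid sha256:<hex>', 'size <bytes>'."""
    fields = {}
    for line in pointer_text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    sha256_hex = fields["oid"].split(":", 1)[1]  # drop the 'sha256:' prefix
    return {"sha256": sha256_hex, "size": int(fields["size"])}


def verify_video(pointer_text: str, video_path: str) -> bool:
    """Return True if the downloaded video matches the pointer's sha256 and size."""
    expected = parse_lfs_pointer(pointer_text)
    data = Path(video_path).read_bytes()
    return (
        len(data) == expected["size"]
        and hashlib.sha256(data).hexdigest() == expected["sha256"]
    )


# Hypothetical usage: pointer text taken from the diff entry for a given video.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:bf3a07aaa40dccbec2f56bfeb7b8abebdce24057916974b74a92510b0248c25d
size 2458694"""
# verify_video(pointer, "video/zzOOqD6R1b_39024537.mp4")
```

In practice, cloning the repository with Git LFS installed (`git lfs install` followed by `git clone`) resolves these pointers to the actual `.mp4` files automatically; the check above is only useful for confirming a partial or manual download.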