Computer Vision - ECCV 2022

17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XI

Cisse, Moustapha; Hassner, Tal; Brostow, Gabriel; Farinella, Giovanni Maria; Avidan, Shai

Publisher: Springer International Publishing AG

Publication date: 11/2022

Pages: 745

Binding: Softcover

Language: English

ISBN: 9783031200823

Delivery time: 15 to 20 days

Description not available.
Table of contents:
A Simple Approach and Benchmark for 21,000-Category Object Detection
Knowledge Condensation Distillation
Reducing Information Loss for Spiking Neural Networks
Masked Generative Distillation
Fine-Grained Data Distribution Alignment for Post-Training Quantization
Learning with Recoverable Forgetting
Efficient One Pass Self-Distillation with Zipf's Label Smoothing
Prune Your Model before Distill It
Deep Partial Updating: Towards Communication Efficient Updating for On-Device Inference
Patch Similarity Aware Data-Free Quantization for Vision Transformers
L3: Accelerator-Friendly Lossless Image Format for High-Resolution, High-Throughput DNN Training
Streaming Multiscale Deep Equilibrium Models
Symmetry Regularization and Saturating Nonlinearity for Robust Quantization
SP-Net: Slowly Progressing Dynamic Inference Networks
Equivariance and Invariance Inductive Bias for Learning from Insufficient Data
Mixed-Precision Neural Network Quantization via Learned Layer-Wise Importance
Event Neural Networks
EdgeViTs: Competing Light-Weight CNNs on Mobile Devices with Vision Transformers
PalQuant: Accelerating High-Precision Networks on Low-Precision Accelerators
Disentangled Differentiable Network Pruning
IDa-Det: An Information Discrepancy-Aware Distillation for 1-Bit Detectors
Learning to Weight Samples for Dynamic Early-Exiting Networks
AdaBin: Improving Binary Neural Networks with Adaptive Binary Sets
Adaptive Token Sampling for Efficient Vision Transformers
Weight Fixing Networks
Self-Slimmed Vision Transformer
Switchable Online Knowledge Distillation
ℓ∞-Robustness and Beyond: Unleashing Efficient Adversarial Training
Multi-Granularity Pruning for Model Acceleration on Mobile Devices
Deep Ensemble Learning by Diverse Knowledge Distillation for Fine-Grained Object Classification
Helpful or Harmful: Inter-Task Association in Continual Learning
Towards Accurate Binary Neural Networks via Modeling Contextual Dependencies
SPIN: An Empirical Evaluation on Sharing Parameters of Isotropic Networks
Ensemble Knowledge Guided Sub-network Search and Fine-Tuning for Filter Pruning
Network Binarization via Contrastive Learning
Lipschitz Continuity Retained Binary Neural Network
SPViT: Enabling Faster Vision Transformers via Latency-Aware Soft Token Pruning
Soft Masking for Cost-Constrained Channel Pruning
Non-uniform Step Size Quantization for Accurate Post-Training Quantization
SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning
Meta-GF: Training Dynamic-Depth Neural Networks Harmoniously
Towards Ultra Low Latency Spiking Neural Networks for Vision and Sequential Tasks Using Temporal Pruning
Towards Accurate Network Quantization with Equivalent Smooth Regularizer
This title belongs to the subject(s) listed below. To see other titles, click on the desired subject.
artificial intelligence;computer networks;computer systems;computer vision;image analysis;image coding;image compression;image processing;image quality;image segmentation;machine learning;mathematics;network architecture;network protocols;neural networks;signal processing