


18th ECCV Workshops 2024: Milan, Italy - Part XI
- Alessio Del Bue, Cristian Canton, Jordi Pont-Tuset, Tatiana Tommasi (eds.): Computer Vision - ECCV 2024 Workshops - Milan, Italy, September 29-October 4, 2024, Proceedings, Part XI. Lecture Notes in Computer Science 15633, Springer 2025, ISBN 978-3-031-91978-7
- Mona Sheikh Zeinoddin, Chiara Lena, Jiongqi Qu, Luca Carlini, Mattia Magro, Seunghoi Kim, Elena De Momi, Sophia Bano, Matthew Grech-Sollars, Evangelos B. Mazomenos, Daniel C. Alexander, Danail Stoyanov, Matthew J. Clarkson, Mobarakol Islam: DARES: Depth Anything in Robotic Endoscopic Surgery with Self-supervised Vector-LoRA of the Foundation Model. 1-11
- Tao Huang, Xiaohuan Pei, Shan You, Fei Wang, Chen Qian, Chang Xu: LocalMamba: Visual State Space Model with Windowed Selective Scan. 12-22
- Sifei Liu, Weili Nie, An-Chieh Cheng, Morteza Mardani, Chao Liu, Benjamin Eckart, Arash Vahdat: Compositional Text-to-Image Generation with Feedforward Layout Generation. 23-33
- Haoran Xu, Ziqian Liu, Rong Fu, Zhongling Su, Zerui Wang, Zheng Cai, Zhilin Pei, Xingcheng Zhang: PackMamba: Efficient Processing of Variable-Length Sequences in Mamba Training. 34-42
- Edwin Arkel Rios, Femiloye Oyerinde, Min-Chun Hu, Bo-Cheng Lai: Down-Sampling Inter-layer Adapter for Parameter and Computation Efficient Ultra-Fine-Grained Image Recognition. 43-54
- Seyedarmin Azizi, Mahdi Nazemi, Massoud Pedram: Memory-Efficient Vision Transformers: An Activation-Aware Mixed-Rank Compression Strategy. 55-66
- Anthony Sarah, Sharath Nittur Sridhar, Maciej Szankin, Sairam Sundaresan: LLaMA-NAS: Efficient Neural Architecture Search for Large Language Models. 67-74
- Nikhil Mehta, Jonathan Lorraine, Steve Masson, Ramanathan Arunachalam, Zaid Pervaiz Bhat, James Lucas, Arun George Zachariah: Improving Hyperparameter Optimization with Checkpointed Model Weights. 75-96
- Gihwan Kim, Jemin Lee, Sihyeong Park, Yongin Kwon, Hyungshin Kim: Mixed Non-linear Quantization for Vision Transformers. 97-112
- Federico Fontana, Romeo Lanzino, Anxhelo Diko, Gian Luca Foresti, Luigi Cinque: CycleBNN: Cyclic Precision Training in Binary Neural Networks. 113-130
- Jiantao Wu, Shentong Mo, Sara Atito, Zhenhua Feng, Josef Kittler, Muhammad Awais: DailyMAE: Towards Pretraining Masked Autoencoders in One Day. 131-149
- Ofir Gordon, Elad Cohen, Hai Victor Habi, Arnon Netzer: EPTQ: Enhanced Post-training Quantization via Hessian-Guided Network-Wise Optimization. 150-166
- Sota Kato, Hinako Mitsuoka, Kazuhiro Hotta: Generalized SAM: Efficient Fine-Tuning of SAM for Variable Input Image Sizes. 167-182
- Huan Wang, Feitong Tan, Ziqian Bai, Yinda Zhang, Shichen Liu, Qiangeng Xu, Menglei Chai, Anish Prabhu, Rohit Pandey, Sean Fanello, Zeng Huang, Yun Fu: LightAvatar: Efficient Head Avatar as Dynamic Neural Light Field. 183-201
- Richa Upadhyay, Ronald Phlypo, Rajkumar Saini, Marcus Liwicki: Giving Each Task What it Needs - Leveraging Structured Sparsity for Tailored Multi-Task Learning. 202-218
- Xinyi Yu, Runan Yin, Zhihao Lin, Yongtao Wang: ERF-NAS: Efficient Receptive Field-Based Zero-Shot NAS for Object Detection. 219-234
- Gabriele Lagani, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato: CA3D: Convolutional-Attentional 3D Nets for Efficient Video Activity Recognition on the Edge. 235-251
- Maxime Girard, Victor Quétu, Samuel Tardieu, Van-Tam Nguyen, Enzo Tartaglione: Memory-Optimized Once-For-All Network. 252-267
- Hui Shen, Zhongwei Wan, Xin Wang, Mi Zhang: Famba-V: Fast Vision Mamba with Cross-Layer Token Fusion. 268-278
- Francesco Pasti, Marina Ceccon, Davide Dalle Pezze, Francesco Paissan, Elisabetta Farella, Gian Antonio Susto, Nicola Bellotto: Latent Distillation for Continual Object Detection at the Edge. 279-294
- Sudhakar Sah, Darshan C. Ganji, Matteo Grimaldi, Ravish Kumar, Alexander Hoffman, Honnesh Rohmetra, Ehsan Saboori: MCUBench: A Benchmark of Tiny Object Detectors on MCUs. 295-311
- Federico Betti, Lorenzo Baraldi, Lorenzo Baraldi, Rita Cucchiara, Nicu Sebe: Optimizing Resource Consumption in Diffusion Models Through Hallucination Early Detection. 312-326
