


AIware 2025: Seoul, Republic of Korea
2nd IEEE/ACM International Conference on AI-powered Software, AIware 2025, Seoul, Republic of Korea, November 19-20, 2025. IEEE 2025, ISBN 979-8-3315-8269-2

- Emre Dinç, Eray Tüzün: Judge the Votes: A System to Classify Bug Reports and Give Suggestions. 1-10
- Daniel Maninger, Leon Chemnitz, Amir Molzam Sharifloo, Jannis Brugger, Mira Mezini: Benchmarking Web API Integration Code Generation. 240-248
- Amir Molzam Sharifloo, Maedeh Heydari, Parsa Kazerooni, Daniel Maninger, Mira Mezini: Where Do LLMs Still Struggle? An In-Depth Analysis of Code Generation Benchmarks. 249-253
- Adam Bodicoat, Gunel Jahangirova, Valerio Terragni: Understanding LLM-Driven Test Oracle Generation. 29-39
- Wenliang Shan, Michael Fu, Rui Yang, Chakkrit Tantithamthavorn: SEALGuard: Safeguarding the Multilingual Conversations in Southeast Asian Languages for AI-Powered Software. 197-206
- Jiahui Wu, Chengjie Lu, Aitor Arrieta, Shaukat Ali: A Tool for Benchmarking Large Language Models' Robustness in Assessing the Realism of Driving Scenarios. 263-267
- Kevin Lira, Baldoino Fonseca, Wesley K. G. Assunção, Davy Baya, Márcio Ribeiro: Beyond Code Explanations: A Ray of Hope for Cross-Language Vulnerability Repair. 1-9
- Christoph Treude, Margaret-Anne D. Storey: Generative AI and Empirical Software Engineering: A Paradigm Shift. 233-239
- Ítalo Santos, Cleyton V. C. de Magalhães, Ronnie de Souza Santos: Model-Assisted and Human-Guided: Perceptions and Practices of Software Professionals Using LLMs for Coding. 105-112
- Hidetake Tanaka, Haruto Tanaka, Kazumasa Shimari, Kenichi Matsumoto: Understanding the Characteristics of LLM-Generated Property-Based Tests in Exploring Edge Cases. 11-18
- Yung-Shen Hsia, Fang Yu, Jie-Hong Roland Jiang: Neuro-Symbolic Compliance: Integrating LLMs and SMT Solvers for Automated Financial Legal Analysis. 1-10
- Truong Hai Dang, Jingyu Xiao, Yintong Huo: Envisioning Future Interactive Web Development: Editing Webpage with Natural Language. 61-66
- Sanket Mhatre, Yasharth Bajpai, Sumit Gulwani, Emerson R. Murphy-Hill, Gustavo Soares: SWE-Sharp-Bench: A Reproducible Benchmark for C# Software Engineering Tasks. 277-280
- Anamul Haque Mollah, Ahmed Aljohani, Hyunsook Do: Assertion-Aware Test Code Summarization with Large Language Models. 1-9
- Gurbinder Gill, Ritvik Gupta, Denis Lusson, Anand Chandrashekar, Donald Nguyen: From Search to Reasoning: A Five-Level RAG Capability Framework for Enterprise Data. 1-9
- Md Nazmul Haque, Hua Yang, Zhou Yang, Bowen Xu: How Does Quantization Impact Privacy Risk on LLMs for Code? 50-60
- Ximing Dong, Shaowei Wang, Dayi Lin, Gopi Krishnan Rajbahadur, Ahmed E. Hassan: PromptExp: Multi-Granularity Prompt Explanation of Large Language Models. 1-10
- Sivajeet Chand, Melih Kilic, Roland Würsching, Sushant Kumar Pandey, Alexander Pretschner: Automated Extract Method Refactoring with Open-Source LLMs: A Comparative Study. 113-122
- Rufeng Chen, Shuaishuai Jiang, Jiyun Shen, AJung Moon, Lili Wei: Examining the Usage of Generative AI Models in Student Learning Activities for Software Programming. 76-86
- Ruizhen Gu, Jingqiong Zhang, José Miguel Rojas, Donghwan Shin: On the Promises and Challenges of AI-Powered XR Glasses as Embodied Software. 207-212
- Julia Gomez-Rangel, Young Lee, Bozhen Liu: Security in the Wild: An Empirical Analysis of LLM-Powered Applications and Local Inference Frameworks. 149-159
- Varun Bharti, Shashwat Jha, Dhruv Kumar, Pankaj Jalote: Combining Reasoning Optimized LLMs and SMT Solvers for Automated Loop Invariant Synthesis. 192-196
- Rabimba Karanjai, Lei Xu, Weidong Shi: Securing Smart Contract Languages with a Unified Agentic Framework for Vulnerability Repair in Solidity and Move. 1-10
- Cheng Cheng, Jinqiu Yang: CFCEval: Evaluating Security Aspects in Code Generated by Large Language Models. 1-10
- Maria Deolinda Santana, Cleyton V. C. de Magalhães, Ronnie de Souza Santos: Software Testing with Large Language Models: An Interview Study with Practitioners. 96-104
- Takaaki Toda, Tatsuya Mori: Chase: LLM Agents for Dissecting Malicious PyPI Packages. 1-10
- Humphrey O. Obie: A Vision for Value-Aligned AI-Driven Systems. 143-148
- Myron David Lucena Campos Peixoto, Baldoino Fonseca, Davy de Medeiros Baia, Kevin Lira, Márcio Ribeiro, Wesley K. G. Assunção, Nathalia Nascimento, Paulo S. C. Alencar: Turning Manual Tasks Into Actions: Assessing the Effectiveness of Gemini-Generated Selenium Tests. 40-49
- Arup Datta, Ahmed Aljohani, Hyunsook Do: Secure Code Generation at Scale with Reflexion. 1-9
- Rabimba Karanjai, Lei Xu, Weidong Shi: HPCAgentTester: A Multi-Agent LLM Approach for Enhanced HPC Unit Test Generation. 213-222
- Tasha Settewong, Youmei Fan, Raula Gaikovina Kula, Kenichi Matsumoto: Human to Document, AI to Code: Comparing GenAI for Notebook Competitions. 1-9
- Asma Z. Yamani, Malak Baslyman, Moataz A. Ahmed: Are We Aligned? A Preliminary Investigation of the Alignment of Responsible AI Values Between LLMs and Human Judgment. 133-142
- Robin Gröpler, Steffen Klepke, Jack Johns, Andreas Dreschinski, Klaus Schmid, Benedikt Dornauer, Eray Tüzün, Joost Noppen, Mohammad Reza Mousavi, Yongjian Tang, Johannes Viehmann, Selin Sirin Aslangül, Beum Seuk Lee, Adam Ziolkowski, Eric Zie: The Future of Generative AI in Software Engineering: A Vision From Industry and Academia in the European Genius Project. 170-181















