
Faculty

Full-time Faculty

  • YunSeok Choi (최윤석), Assistant Professor (Artificial Intelligence, Natural Language Processing)
  • Lab: Data & Language Intelligence Lab

Research Interests

Natural Language Processing, Multimodal Learning, Generative AI, Recommendation System

Education

  • Ph.D., 2016.03 - 2024.02, Department of Software, Sungkyunkwan University
  • B.S., 2012.03 - 2016.02, Department of Software, Sungkyunkwan University

Career

  • 2024.09 - present, Assistant Professor, Department of Computer Education, Sungkyunkwan University
  • 2024.03 - 2024.08, Assistant Professor, Division of Language & AI, Hankuk University of Foreign Studies

Journal Papers

  • (2024) A Study of Defect Detection using External Knowledge Prompt with Pre-trained Language Models on Code. Journal of Korean Institute of Intelligent Systems, 34(2).
  • (2023) READSUM: Retrieval-Augmented Adaptive Transformer for Source Code Summarization. IEEE Access, 11(1).
  • (2020) Neural Attention Model with Keyword Memory for Abstractive Document Summarization. Concurrency and Computation: Practice and Experience, 32(18).
  • (2017) Detection of Document Modification based on Deep Neural Networks. Journal of Ambient Intelligence and Humanized Computing, 9(1).
  • (2016) Detection of Content Changes based on Deep Neural Networks. Lecture Notes in Electrical Engineering, 421(1).

Conference Papers

  • (2024) Code Defect Detection using Pre-trained Language Models with Encoder-Decoder via Line-Level Defect Localization. International Conference on Computational Linguistics. Italy.
  • (2023) LOAM: Improving Long-tail Session-based Recommendation via Niche Walk Augmentation and Tail Session Mixup. ACM SIGIR Conference on Information Retrieval. Taiwan.
  • (2023) BLOCSUM: Block Scope-based Source Code Summarization via Shared Block Representation. Annual Meeting of the Association for Computational Linguistics. Canada.
  • (2023) CodePrompt: Task-Agnostic Prefix Tuning for Program and Language Generation. Annual Meeting of the Association for Computational Linguistics. Canada.
  • (2023) DIP: Dead code Insertion based Black-box Attack for Programming Language Model. Annual Meeting of the Association for Computational Linguistics. Canada.
  • (2022) TABS: Efficient Textual Adversarial Attack for Pre-trained NL Code Model Using Semantic Beam Search. Empirical Methods in Natural Language Processing. United Arab Emirates.
  • (2022) IA-BERT: Context-aware Sarcasm Detection by Incorporating Incongruity Attention Layer for Feature Extraction. ACM SIGAPP Symposium on Applied Computing. USA.
  • (2021) Learning Sequential and Structural Information for Source Code Summarization. Annual Meeting of the Association for Computational Linguistics. Thailand.
  • (2021) An Embedding Method for Unseen Words Considering Contextual Information and Morphological Information. ACM SIGAPP Symposium on Applied Computing. South Korea.
  • (2020) Attention History-Based Attention for Abstractive Text Summarization. ACM SIGAPP Symposium on Applied Computing. Czech Republic.
  • (2020) Source Code Summarization Using Attention-Based Keyword Memory Networks. IEEE International Conference on Big Data and Smart Computing. South Korea.
  • (2018) Abstractive Summarization by Neural Attention Model with Document Content Memory. Conference on Research in Adaptive and Convergent Systems. USA.
  • (2016) Recurrent Neural Network for Storytelling. International Symposium on Advanced Intelligent Systems. Japan.