
Paper Information

Resource type
Conference paper
Authors
Ignatius Iwan (Hankuk University of Foreign Studies), Sean Yonathan Tanjung (Hankuk University of Foreign Studies), Bernardo Nugroho Yahya (Hankuk University of Foreign Studies), Seok-Lyong Lee (Hankuk University of Foreign Studies)
Journal
Korean Institute of Industrial Engineers (KIIE), Proceedings of the 2023 KIIE Fall Conference
Publication date
November 2023
Pages
2,671–2,691 (21 pages)


Abstract · Keywords

Federated learning (FL) offers a decentralized way to train a generalized global model between a server and a set of clients while respecting the confidentiality of each client's data. For real-life deployments, the server needs a client selection algorithm that selects honest clients, whose labeled data are similar to the server's interest, and eliminates clients whose data differ too much from that interest, called malicious clients. To achieve this, the server usually checks each client's local model performance using a labeled dataset as the test set. Unfortunately, this assumption is a hard constraint, as the server may have data of interest from other sources that are often unlabeled. To overcome this limitation, this study proposes a way to eliminate malicious clients by comparing their data descriptions with the server's data description using a Large Language Model (LLM), where a data description takes the form of text. Since LLMs perform well on semantic tasks over text-based data, they can help separate honest from malicious clients by analyzing and comparing each client's data description with the server's. First, all available clients sample data from their local datasets and transform them into data descriptions to be sent to the server. After the clients send their data descriptions, the LLM compares them and outputs a list of honest clients. For performance evaluation, this study experimented on image datasets such as MNIST and CIFAR-10, using a pretrained image-to-text model to obtain the data descriptions. The study demonstrated that the proposed method outperformed the baselines.
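The selection step described in the abstract — comparing each client's textual data description against the server's and keeping only the similar ones — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the authors use an LLM for the semantic comparison, whereas this sketch substitutes a simple bag-of-words cosine similarity, and the client descriptions, server description, and the threshold value are all hypothetical.

```python
from collections import Counter
import math

def cosine_sim(a: str, b: str) -> float:
    # Bag-of-words cosine similarity: a crude stand-in for the
    # LLM-based semantic comparison used in the paper.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_honest_clients(server_desc: str,
                          client_descs: dict[str, str],
                          threshold: float = 0.3) -> list[str]:
    # Keep clients whose data description is close enough to the
    # server's description; the rest are treated as malicious.
    return [cid for cid, desc in client_descs.items()
            if cosine_sim(server_desc, desc) >= threshold]

# Hypothetical descriptions, as an image-to-text model might produce them.
clients = {
    "c1": "handwritten digits zero to nine on gray background",
    "c2": "black and white images of handwritten digits",
    "c3": "color photos of cats and dogs outdoors",
}
server_desc = "images of handwritten digits zero through nine"

honest = select_honest_clients(server_desc, clients)
print(honest)  # c1 and c2 pass the threshold; c3 is filtered out
```

In the paper's pipeline the similarity judgment is delegated to an LLM prompted with the description texts, which captures semantic similarity (e.g. paraphrases) that token overlap cannot; the filtering structure around it is the same.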

Table of Contents

Abstract
Introduction
Problem & Contribution
Related Works
Methodology
Experiment Setting
Result
Conclusion
Reference



UCI(KEPA) : I410-151-24-02-088370490