Detailed Program

 

[C2] How to Tame Your Large Generative Models?
Code No. : 35
Presenter : Jaejun Yoo
Affiliation : UNIST
Department :
Position : Professor
Session Time : 14:00~15:50
Presenter Bio : 2007-2011 KAIST, B.S.
2011-2013 KAIST, M.S. (Advisor: Jong Chul Ye)
2013-2018 KAIST, Ph.D. (Advisor: Jong Chul Ye)
2018-2019 NAVER Clova AI Lab, Research Scientist
2020-2021 EPFL, Postdoctoral Researcher
2021.07-present UNIST Graduate School of Artificial Intelligence (full-time), Department of Electrical Engineering (joint appointment)
Abstract : We find ourselves in an era dominated by large generative models, which, while demonstrating remarkable performance, demand vast datasets and substantial computational resources. These resources, however, remain out of reach for most researchers and for all but a few well-resourced companies. As we navigate this landscape, the quest for scalable, efficient, and privacy-preserving mechanisms becomes more crucial than ever. In this seminar, I will delve into our recent research efforts aimed at making generative models more accessible, more efficient, and compliant with the stringent requirements of practical applications.