
    Detailed Program

     

    [C3] On-device Generative AI for Video Virtual Try-on
    Code No.: 39
    Presenter: Hyung-Sin Kim
    Affiliation: Seoul National University
    Department:
    Position: Professor
    Session Time: 16:00~17:50
    Presenter Bio: Professor, Graduate School of Data Science, Seoul National University (2020 ~ present)
    Software Engineer, Google (2019 ~ 2020)
    Postdoc, Electrical Engineering and Computer Sciences, UC Berkeley (2016 ~ 2019)
    Ph.D., Department of Electrical and Computer Engineering, Seoul National University (~2016)
    Talk Abstract: We present MIRROR, an on-device video virtual try-on (VTO) system that provides realistic, private, and rapid experiences in mobile clothes shopping. Despite recent advancements in generative adversarial networks (GANs) for VTO, designing MIRROR involves two challenges: (1) data discrepancy due to restricted training data that miss various poses, body sizes, and backgrounds, and (2) computation overhead that uses up 24% of battery for converting only a single video. To alleviate these problems, we propose a generalizable VTO GAN that not only discerns intricate human body semantics but also captures domain-invariant features without requiring additional training data. In addition, we craft lightweight, reliable clothes/pose tracking that generates refined pixel-wise warping flow without neural-net computation. As a holistic system, MIRROR integrates the new VTO GAN and tracking method with meticulous pre/post-processing, operating in two distinct phases (online/offline). Our results on Android smartphones and real-world user videos show that, compared to a cutting-edge VTO GAN, MIRROR achieves 6.5x better accuracy with 20.1x faster video conversion and 16.9x less energy consumption.
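    To make the idea of a "pixel-wise warping flow" more concrete, the sketch below shows how a dense flow field could deform a flat garment image without any neural-network computation, using plain OpenCV resampling. This is only an illustrative example under assumed inputs; the function name warp_garment, the array shapes, and the source of the flow field are hypothetical and not taken from MIRROR itself.

    # Hypothetical sketch: apply a dense per-pixel warping flow to a garment
    # image. In a system like MIRROR, the flow would come from clothes/pose
    # tracking; here it is simply an input array.
    import cv2
    import numpy as np

    def warp_garment(garment: np.ndarray, flow: np.ndarray) -> np.ndarray:
        """Warp a garment image with a dense per-pixel flow field.

        garment: H x W x 3 uint8 image of the clothing item.
        flow:    H x W x 2 float32 array; flow[y, x] = (dx, dy) offset telling
                 where each output pixel samples from in the garment image.
        """
        h, w = flow.shape[:2]
        # Base sampling grid (identity mapping), then offset it by the flow.
        grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                     np.arange(h, dtype=np.float32))
        map_x = grid_x + flow[..., 0]
        map_y = grid_y + flow[..., 1]
        # Bilinear resampling; pixels mapped outside the source stay black.
        return cv2.remap(garment, map_x, map_y,
                         interpolation=cv2.INTER_LINEAR,
                         borderMode=cv2.BORDER_CONSTANT, borderValue=0)

    # Example usage with dummy data (a zero flow leaves the garment unchanged):
    if __name__ == "__main__":
        garment = np.zeros((256, 192, 3), dtype=np.uint8)
        flow = np.zeros((256, 192, 2), dtype=np.float32)
        warped = warp_garment(garment, flow)
        print(warped.shape)  # (256, 192, 3)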

