
Detailed Program

    [I3] DeepSpark: Spark-Based Deep Learning Supporting Asynchronous Updates and Caffe Compatibility
Administrator (krnet) | Posted: 2016-05-04 12:24:14 | Views: 1166
Code No.: 2
Presenter: Sungroh Yoon
Affiliation: Seoul National University
Department: Department of Electrical and Computer Engineering
Position: Professor
Session Time: 16:00~18:00
Presenter Bio:
2012-present: Professor, Department of Electrical and Computer Engineering, Seoul National University
2007-2012: Professor, School of Electrical Engineering, Korea University
2006-2007: Senior Researcher, Intel, USA
2006: Postdoctoral Researcher, Stanford University, USA
2006: Ph.D., Stanford University
1996: B.S., Seoul National University
Talk Abstract: The increasing complexity of deep neural networks (DNNs) has made it challenging to exploit existing large-scale data processing pipelines for handling massive data and parameters involved in DNN training. Distributed computing platforms and GPGPU-based acceleration provide a mainstream solution to this computational challenge. In this paper, we propose DeepSpark, a distributed and parallel deep learning framework that simultaneously exploits Apache Spark for large-scale distributed data management and Caffe for GPU-based acceleration. DeepSpark directly accepts Caffe input specifications, providing seamless compatibility with existing designs and network structures. To support parallel operations, DeepSpark automatically distributes workloads and parameters to Caffe-running nodes using Spark and iteratively aggregates training results by a novel lock-free asynchronous variant of the popular elastic averaging stochastic gradient descent (SGD) update scheme, effectively complementing the synchronized processing capabilities of Spark. DeepSpark is an ongoing project, and the current release is available at http://deepspark.snu.ac.kr.
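For readers unfamiliar with the update scheme the abstract refers to, the sketch below illustrates the elastic averaging SGD (EASGD) exchange in its asynchronous form: each worker trains on its own data shard and, on its own schedule, pulls itself and a shared center variable toward each other by an elastic term. This is a minimal illustration only; the function names, toy loss, and hyperparameters are assumptions made for the example and do not reflect DeepSpark's actual implementation.

# A minimal NumPy sketch of asynchronous elastic averaging SGD (EASGD).
# All names here (worker_step, sync_with_center, the quadratic toy loss)
# are illustrative assumptions; in DeepSpark each worker would instead
# run concurrently on a Caffe-equipped Spark node.
import numpy as np

def grad(x, shard_mean):
    # Gradient of the toy loss 0.5 * ||x - shard_mean||^2 (assumption).
    return x - shard_mean

def worker_step(x, shard_mean, lr=0.05):
    # Plain local SGD step on one worker's data shard.
    return x - lr * grad(x, shard_mean)

def sync_with_center(x_worker, x_center, alpha=0.1):
    # Elastic averaging exchange: worker and center parameters are
    # pulled toward each other by alpha * (x_worker - x_center).
    # In the lock-free asynchronous variant, each worker performs this
    # exchange on its own schedule, without a global barrier.
    diff = alpha * (x_worker - x_center)
    return x_worker - diff, x_center + diff

rng = np.random.default_rng(0)
x_center = np.zeros(4)                   # shared center (driver-side) variable
workers = [rng.normal(size=4) for _ in range(2)]
shard_means = [np.ones(4), -np.ones(4)]  # two disjoint data shards

for step in range(200):
    for i, mean in enumerate(shard_means):
        workers[i] = worker_step(workers[i], mean)
        if step % 10 == i:               # staggered, barrier-free syncs
            workers[i], x_center = sync_with_center(workers[i], x_center)

print(x_center)  # settles near the average of the two shard optima

The staggered synchronization in the loop stands in for the lock-free behavior the abstract describes: no worker ever waits at a barrier for the others, which is how the asynchronous EASGD variant complements Spark's otherwise synchronized processing model.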
