Neural nets

Hyperparameters:
  - Number of neurons per layer
  - Number of layers
  - Optimizers
    - SGD + momentum
    - Adam / Adadelta / Adagrad / …
      - In practice these tend to lead to more overfitting
  - Batch size
  - Learning rate
  - Regularization
    - L2/L1 penalties on the weights
    - Dropout / DropConnect
    - Static DropConnect

Related pages: Hyperparameter tuning

open/neural-nets.txt · Last modified: 2020/07/15 08:27 by 127.0.0.1
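The optimizer bullet points above can be made concrete with the update rules themselves. A minimal sketch in NumPy, assuming a toy quadratic objective f(w) = w² (the function, learning rates, and step counts are illustrative choices, not from the original notes):

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """One SGD-with-momentum update: velocity accumulates past gradients."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

def adam_step(w, grad, m, v, t, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: per-parameter adaptive step sizes."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad**2    # second-moment estimate
    m_hat = m / (1 - beta1**t)               # bias correction
    v_hat = v / (1 - beta2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Minimize f(w) = w^2 (gradient 2w) with each optimizer.
w_sgd, vel = np.array([5.0]), np.zeros(1)
w_adam, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 201):
    w_sgd, vel = sgd_momentum_step(w_sgd, 2 * w_sgd, vel)
    w_adam, m, v = adam_step(w_adam, 2 * w_adam, m, v, t)
```

SGD takes steps proportional to the raw gradient, so it slows smoothly near the minimum; Adam normalizes by the gradient's running magnitude, which gives fast early progress but can oscillate near the optimum — one intuition for the "more overfitting in practice" observation above.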
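The regularization bullets distinguish Dropout (drop activations), DropConnect (drop individual weights), and the "static" variant (sample the weight mask once and keep it fixed). A rough sketch of that distinction, assuming NumPy and a keep/drop probability p chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, train=True):
    """Inverted dropout: zero each activation with prob p and rescale
    survivors by 1/(1-p) so the expected activation is unchanged."""
    if not train:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def static_dropconnect_mask(shape, p=0.5):
    """DropConnect zeroes individual weights instead of activations;
    'static' means this mask is sampled once and reused every step."""
    return (rng.random(shape) >= p).astype(float)

x = np.ones((4, 8))
h = dropout(x, p=0.5)            # surviving entries become 2.0, rest 0.0

W = rng.standard_normal((8, 3))
mask = static_dropconnect_mask(W.shape, p=0.5)
W_sparse = W * mask              # the same weights stay dropped all training
```

L2/L1 penalties, by contrast, are added to the loss (λ·Σw² or λ·Σ|w|) rather than implemented as masks, which is why the notes list them separately.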