
Happy Sisyphe

Posts tagged deep-learning (1)


Why Neural Networks Can Approximate Any Function

Universal Approximation Theorem

Theorem. Let $f: \mathbb{R}^n \to \mathbb{R}$ be a continuous function defined on a compact domain $D \subseteq \mathbb{R}^n$. For any $\epsilon > 0$, there exists a feedforward neural network with a single hidden layer, using a nonlinear, continuous activation function $\phi: \mathbb{R} \to \mathbb{R}$, such that

$$\sup_{x \in D} |f(x) - \hat{f}(x)| < \epsilon,$$

where $\hat{f}$ denotes the function computed by the network.
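The theorem is an existence statement, but its spirit is easy to check numerically: as the hidden layer widens, a one-hidden-layer network can drive the sup-norm error on a compact domain down. A minimal sketch of this idea (not the blog's own code) uses random hidden weights with a tanh activation and solves only the output layer by least squares; the target function, weight scales, and grid are illustrative assumptions.

```python
import numpy as np

# Target: a continuous function on the compact domain [-pi, pi]
# (an arbitrary choice for illustration).
f = lambda x: np.sin(x) + 0.5 * np.cos(3 * x)

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 500)[:, None]  # grid over the compact domain

def approx_error(width):
    """Sup-norm error (on the grid) of a one-hidden-layer tanh network.

    Hidden weights/biases are drawn at random; only the output layer
    is fitted, via least squares (a random-features approximation).
    """
    W = rng.normal(scale=2.0, size=(1, width))    # hidden-layer weights
    b = rng.normal(scale=2.0, size=width)         # hidden-layer biases
    H = np.tanh(x @ W + b)                        # hidden activations phi(Wx + b)
    c, *_ = np.linalg.lstsq(H, f(x), rcond=None)  # output weights
    return float(np.max(np.abs(H @ c - f(x))))    # sup |f - f_hat| on the grid

for width in (5, 20, 100):
    print(f"width={width:4d}  sup-error={approx_error(width):.4f}")
```

Widening the hidden layer gives the network more basis functions $\phi(w_i^\top x + b_i)$ to combine, and the printed sup-norm error shrinks accordingly, which is exactly the quantity the theorem bounds by $\epsilon$.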

Programming/ML&DL 2024. 12. 17. 15:29

Blog is powered by kakao / Designed by Tistory
