
    Gradient descent

Sep 12, 2025 · 1 min read

Tags: Learning Rate, Back propagation Calculus, NN Learning

Gradient Descent vs. Mini-batch Stochastic Gradient Descent

Randomly shuffle your training data and divide it into mini-batches; each parameter update is then computed from a single mini-batch rather than the full dataset. The updates are noisier than full-batch gradient descent, but they are far cheaper and more frequent, so training usually converges faster in practice.
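The post doesn't include code, but a minimal NumPy sketch contrasting the two update schemes on a toy linear-regression problem might look like this (the learning rate, batch size, and synthetic data are illustrative assumptions, not values from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))           # 1000 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=1000)

def grad(w, Xb, yb):
    # Gradient of mean squared error for a linear model on batch (Xb, yb).
    return 2.0 / len(yb) * Xb.T @ (Xb @ w - yb)

# Full-batch gradient descent: one update per epoch, using all the data.
w = np.zeros(3)
for epoch in range(100):
    w -= 0.1 * grad(w, X, y)

# Mini-batch SGD: shuffle each epoch, then update once per mini-batch.
w_sgd = np.zeros(3)
batch_size = 32
for epoch in range(100):
    perm = rng.permutation(len(y))       # randomly shuffle the training data
    for start in range(0, len(y), batch_size):
        idx = perm[start:start + batch_size]
        w_sgd -= 0.1 * grad(w_sgd, X[idx], y[idx])

print(w, w_sgd)                          # both should be close to true_w
```

The trade-off to notice: full-batch descent takes one exact step per pass over the data, while mini-batch SGD takes many cheap, noisy steps per pass, which is what makes it the default for large datasets.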

