Image Layers & Cache
Image layers explain how Docker accumulates filesystem changes and reuses previous build results. Cache is the mechanism that keeps unchanged steps from being repeated.
[Architecture diagram: dashed-line animations indicate the flow direction of data or requests]
If every code change forced a full rebuild, including dependency installation, both local development and CI would slow down badly. If every image push or pull retransferred the entire artifact, registry storage and network costs would climb just as quickly. Teams need step-level reuse.
Before images were split into reusable layers, teams often rebuilt application environments as one large artifact. A small code change could trigger the same OS package installs and dependency downloads again, and even unchanged base files were stored and transferred repeatedly. Layers emerged as the answer: split filesystem changes into reusable steps so unchanged parts do not have to be rebuilt or moved every time.
For example, if `COPY package.json` is followed by `RUN npm ci`, the install layer can be reused until the dependency files change. Placing `COPY . .` later means a source change invalidates only that step and the steps after it, leaving the install layer cached. So layers are not just storage details. They are part of how teams design rebuild cost.
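The ordering described above can be sketched as a minimal Dockerfile. The base image tag, app paths, and start command are illustrative assumptions, not from the source; `npm ci` also needs the lockfile, so it is copied alongside `package.json`:

```dockerfile
# Slow-changing inputs first: only the dependency manifests.
FROM node:20-alpine
WORKDIR /app

# This layer and the install layer below stay cached until
# package.json or package-lock.json changes.
COPY package.json package-lock.json ./
RUN npm ci

# Fast-changing inputs last: editing source code invalidates
# only this step and anything after it.
COPY . .

CMD ["node", "server.js"]
```

With this order, a typical edit-and-rebuild loop reruns only the final `COPY`, not the dependency install.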
Layers and multi-stage builds both matter for image optimization, but they solve different questions. Layers are the caching and accumulation unit inside one stage. A multi-stage build is the pattern for separating build outputs from the final runtime image.
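As a hedged sketch of that distinction, continuing the same hypothetical Node app: layer caching applies within each stage, while the final stage copies in only the built output. The assumption here is a front-end build that emits static files to `dist/`:

```dockerfile
# Build stage: the dependency layers here are cached exactly
# as in a single-stage build.
FROM node:20-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: ships only the build output, leaving the
# toolchain, node_modules, and source tree behind.
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```

Cache decides how much of a stage is rebuilt; the stage split decides how much of it ends up in the shipped image.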
In practice, teams place slow-changing dependency installation early and fast-changing application copies later. When CI becomes slow, the first question is often not machine size but which layers are being invalidated too often.