output.experimentalMinChunkSize
Overview
experimentalMinChunkSize sets a minimum chunk size threshold (in bytes). Rollup will attempt to merge automatic chunks below this threshold into other chunks to reduce the number of output files.
// rollup.config.js
export default {
  input: 'src/index.js',
  output: {
    dir: 'dist',
    format: 'es',
    experimentalMinChunkSize: 1000
  }
};
Scope
- Only applies to automatic chunks; does not rewrite chunks defined by manualChunks.
- Only runs under the default multi-chunk path (inlineDynamicImports = false and preserveModules = false).
- Executes in the final phase of the chunk assignment algorithm, after dynamic entry optimization and re-clustering.
For the position of experimentalMinChunkSize in the overall chunk assignment flow, see Chunk Assignment Algorithm.
Reproducible Baseline
- Source code baseline (current Rollup repository commit): c79e6c201d1f99e126d2e6bfb3f8c5c100ddcebf
- Key file: src/utils/chunkAssignment.ts
Conclusions are strictly based on the above version; not guaranteed to apply to other versions or branches.
Merge Algorithm Details
Min Chunk Merge Flow
1) Algorithm Overview
The merge optimization uses a greedy algorithm aimed at reducing the number of small chunks while ensuring:
- Side effect invariant: Correlated side effects for each entry remain unchanged after merging
- No circular dependencies: Merging does not create circular dependencies between chunks
- Minimize additional loading: Prefer merge targets that introduce the least additional code
The entry function getOptimizedChunks coordinates the entire merge flow:
function getOptimizedChunks(
  chunks: ChunkDescription[],
  minChunkSize: number,
  sideEffectAtoms: bigint,
  sizeByAtom: number[],
  log: LogHandler
): { modules: Module[] }[] {
  timeStart('optimize chunks', 3);
  const chunkPartition = getPartitionedChunks(chunks, minChunkSize);
  if (!chunkPartition) {
    timeEnd('optimize chunks', 3);
    return chunks;
  }
  // ... logging
  mergeChunks(chunkPartition, minChunkSize, sideEffectAtoms, sizeByAtom);
  // ...
  return [...chunkPartition.small, ...chunkPartition.big];
}
- Location: src/utils/chunkAssignment.ts:740-769
2) Core Data Structures
ChunkDescription describes a chunk to be merged:
interface ChunkDescription extends ModulesWithDependentEntries {
  containedAtoms: bigint; // Atoms contained in this chunk (bitmask)
  correlatedAtoms: bigint; // Atoms guaranteed to be loaded whenever this chunk loads
  dependencies: Set<ChunkDescription>;
  dependentChunks: Set<ChunkDescription>;
  pure: boolean; // Whether the chunk is free of side effects
  size: number;
}
ChunkPartition divides chunks into two categories by size:
interface ChunkPartition {
  big: Set<ChunkDescription>; // size >= minChunkSize
  small: Set<ChunkDescription>; // size < minChunkSize
}
- Location: src/utils/chunkAssignment.ts:19-39
BigInt bitmask: Each atom corresponds to a bit position, e.g., atom 0 = 1n, atom 1 = 2n, atom 2 = 4n. Bitwise operations enable efficient set operations.
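To make the bitmask representation concrete, here is a small self-contained sketch (not from the Rollup source; atom contents are invented) of the set operations the algorithm performs with bigint masks:

```typescript
// Hypothetical atoms: atom 0 = 1n, atom 1 = 2n, atom 2 = 4n.
const atom = (index: number): bigint => 1n << BigInt(index);

const chunkA = atom(0) | atom(1); // contains atoms {0, 1} -> 0b011
const chunkB = atom(1) | atom(2); // contains atoms {1, 2} -> 0b110

// Union: all atoms contained in either chunk (used for containedAtoms on merge).
const union = chunkA | chunkB; // 0b111

// Intersection: atoms guaranteed by both (used for correlatedAtoms on merge).
const intersection = chunkA & chunkB; // 0b010

// Subset test: a ⊆ b  <=>  (a & b) === a.
const isSubset = (a: bigint, b: bigint): boolean => (a & b) === a;
```

All operations cost O(number of words in the mask) rather than O(set size), which is why atoms are tracked this way.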
3) Partitioning and Sorting
getPartitionedChunks divides chunks into two categories by the minChunkSize threshold and sorts them by size in ascending order:
function getPartitionedChunks(
  chunks: ChunkDescription[],
  minChunkSize: number
): ChunkPartition | null {
  const smallChunks: ChunkDescription[] = [];
  const bigChunks: ChunkDescription[] = [];
  for (const chunk of chunks) {
    (chunk.size < minChunkSize ? smallChunks : bigChunks).push(chunk);
  }
  if (smallChunks.length === 0) {
    return null; // No small chunks, skip merging
  }
  smallChunks.sort(compareChunkSize);
  bigChunks.sort(compareChunkSize);
  return {
    big: new Set(bigChunks),
    small: new Set(smallChunks)
  };
}
- Location: src/utils/chunkAssignment.ts:771-789
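A minimal illustration of the partition-and-sort step, using plain objects with a size field instead of full ChunkDescription instances (the chunk names and sizes here are invented for the example):

```typescript
interface SizedChunk {
  name: string;
  size: number;
}

function partitionBySize(chunks: SizedChunk[], minChunkSize: number) {
  const small: SizedChunk[] = [];
  const big: SizedChunk[] = [];
  for (const chunk of chunks) {
    (chunk.size < minChunkSize ? small : big).push(chunk);
  }
  // Both halves are sorted ascending so the greedy loop sees the
  // smallest (cheapest to merge) chunks first.
  const bySize = (a: SizedChunk, b: SizedChunk) => a.size - b.size;
  small.sort(bySize);
  big.sort(bySize);
  return { small, big };
}

const { small, big } = partitionBySize(
  [
    { name: 'utils', size: 300 },
    { name: 'vendor', size: 5000 },
    { name: 'icons', size: 120 }
  ],
  1000
);
// small: icons (120), utils (300); big: vendor (5000)
```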
4) Greedy Merge Loop
mergeChunks iterates through small chunks in ascending size order, finding the best merge target for each:
function mergeChunks(
  chunkPartition: ChunkPartition,
  minChunkSize: number,
  sideEffectAtoms: bigint,
  sizeByAtom: number[]
) {
  const { small } = chunkPartition;
  for (const mergedChunk of small) {
    const bestTargetChunk = findBestMergeTarget(
      mergedChunk,
      chunkPartition,
      sideEffectAtoms,
      sizeByAtom,
      // In the default case (minChunkSize <= 1), only merges that add
      // no code at all are accepted
      minChunkSize <= 1 ? 1 : Infinity
    );
    if (bestTargetChunk) {
      // Execute merge...
    }
  }
}
Property update rules during merging:
| Property | Update Method | Reason |
|---|---|---|
| modules | Concatenate | Merge all modules |
| size | Sum | Accumulate code size |
| pure | Logical AND | Only remains pure if both chunks are side-effect-free |
| correlatedAtoms | Intersection | Fewer atoms are guaranteed to be loaded after merging |
| containedAtoms | Union | Contains all atoms from both chunks |
| dependentEntries | Union | Merge dependent entries |
Key point: correlatedAtoms uses intersection to ensure the side effect invariant still holds after merging.
- Location: src/utils/chunkAssignment.ts:798-842
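The update rules in the table can be sketched as a small merge function over a simplified chunk shape (MiniChunk is an illustration of the rules, not the type used in the Rollup source):

```typescript
interface MiniChunk {
  modules: string[];
  size: number;
  pure: boolean;
  correlatedAtoms: bigint;
  containedAtoms: bigint;
  dependentEntries: Set<number>;
}

// Merge `source` into `target`, applying the table's update rules.
function mergeInto(target: MiniChunk, source: MiniChunk): void {
  target.modules.push(...source.modules);           // concatenate
  target.size += source.size;                       // sum
  target.pure &&= source.pure;                      // logical AND
  target.correlatedAtoms &= source.correlatedAtoms; // intersection
  target.containedAtoms |= source.containedAtoms;   // union
  for (const entry of source.dependentEntries) {    // union
    target.dependentEntries.add(entry);
  }
}

const chunkA: MiniChunk = {
  modules: ['a.js'], size: 100, pure: true,
  correlatedAtoms: 0b011n, containedAtoms: 0b001n,
  dependentEntries: new Set([0])
};
const chunkB: MiniChunk = {
  modules: ['b.js'], size: 50, pure: false,
  correlatedAtoms: 0b110n, containedAtoms: 0b010n,
  dependentEntries: new Set([1])
};
mergeInto(chunkA, chunkB);
```

Note how correlatedAtoms shrinks (0b011 ∩ 0b110 = 0b010) while containedAtoms grows (0b001 ∪ 0b010 = 0b011): after merging, fewer atoms are guaranteed to load alongside the chunk, but more atoms live inside it.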
5) Side Effect Constraint Details
The core constraint for merging is not introducing non-correlated side effects. This is checked in getAdditionalSizeIfNoTransitiveDependencyOrNonCorrelatedSideEffect:
const { correlatedAtoms } = dependencyChunk;
let dependencyAtoms = dependentChunk.containedAtoms;
const dependentContainedSideEffects = dependencyAtoms & sideEffectAtoms;
if ((correlatedAtoms & dependentContainedSideEffects) !== dependentContainedSideEffects) {
  return Infinity; // Merge would introduce non-correlated side effects, reject
}
Mathematical expression:
Merge validity condition: dependentChunk.containedSideEffects ⊆ dependencyChunk.correlatedAtoms
Bitwise representation: ((containedAtoms & sideEffectAtoms) & ~correlatedAtoms) === 0n
Explanation: If the side effects contained in dependentChunk are not a subset of dependencyChunk's correlated side effects, merging would cause some entries to unexpectedly execute additional side effects when loaded, violating the side effect invariant.
- Location: src/utils/chunkAssignment.ts:914-918
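The validity condition can be checked directly with the bitmask subset test; here is a standalone sketch (the atom masks below are made up for illustration, not taken from the source):

```typescript
// True if the merge is allowed by the side effect invariant: every side
// effect contained in the dependent chunk must already be correlated
// (guaranteed to load) for the dependency chunk.
function sideEffectsAreCorrelated(
  containedAtoms: bigint,
  sideEffectAtoms: bigint,
  correlatedAtoms: bigint
): boolean {
  const containedSideEffects = containedAtoms & sideEffectAtoms;
  return (correlatedAtoms & containedSideEffects) === containedSideEffects;
}

// Atoms 0..3; atoms 1 and 3 carry side effects.
const sideEffectAtoms = 0b1010n;

// Allowed: the only contained side effect (atom 1) is correlated.
const ok = sideEffectsAreCorrelated(0b0011n, sideEffectAtoms, 0b0010n);

// Rejected: atom 3's side effect is contained but not correlated.
const rejected = sideEffectsAreCorrelated(0b1010n, sideEffectAtoms, 0b0010n);
```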
6) Cycle Detection Mechanism
Merging may introduce circular dependencies between chunks, which must be detected and rejected. BFS traversal is used on the dependency graph:
const chunksToCheck = new Set(dependentChunk.dependencies);
for (const { dependencies, containedAtoms } of chunksToCheck) {
  dependencyAtoms |= containedAtoms;
  const containedSideEffects = containedAtoms & sideEffectAtoms;
  if ((correlatedAtoms & containedSideEffects) !== containedSideEffects) {
    return Infinity; // A transitive dependency contains non-correlated side effects
  }
  for (const dependency of dependencies) {
    if (dependency === dependencyChunk) {
      return Infinity; // Cycle detected!
    }
    chunksToCheck.add(dependency);
  }
}
Principle: If dependencyChunk is reachable through the transitive dependencies of dependentChunk, then merging the two chunks would make the merged chunk depend, along that path, back on itself: mergedChunk → ... → mergedChunk.
- Location: src/utils/chunkAssignment.ts:920-932
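The traversal relies on a JavaScript Set idiom: iterating a Set with for...of also visits elements added during iteration, so the Set doubles as a BFS worklist with built-in deduplication. A standalone sketch of the cycle check over a toy dependency graph (the node names and edges are invented for this example):

```typescript
interface Node {
  name: string;
  dependencies: Set<Node>;
}

// True if `target` is reachable from `start`'s dependencies, i.e.
// merging the two chunks would close a dependency cycle.
function reachesTarget(start: Node, target: Node): boolean {
  const chunksToCheck = new Set(start.dependencies);
  // Dependencies added inside the loop are still visited, and the
  // Set silently skips nodes that were already enqueued.
  for (const { dependencies } of chunksToCheck) {
    for (const dependency of dependencies) {
      if (dependency === target) return true;
      chunksToCheck.add(dependency);
    }
  }
  return false;
}

const a: Node = { name: 'a', dependencies: new Set() };
const b: Node = { name: 'b', dependencies: new Set() };
const c: Node = { name: 'c', dependencies: new Set() };
const isolated: Node = { name: 'isolated', dependencies: new Set() };
a.dependencies.add(b);
b.dependencies.add(c);
c.dependencies.add(a); // a -> b -> c -> a
```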
7) Size Calculation and Target Selection
getAdditionalSizeAfterMerge performs bidirectional merge cost checking:
function getAdditionalSizeAfterMerge(
  mergedChunk: ChunkDescription,
  targetChunk: ChunkDescription,
  currentAdditionalSize: number,
  sideEffectAtoms: bigint,
  sizeByAtom: number[]
): number {
  const firstSize = getAdditionalSizeIfNoTransitiveDependencyOrNonCorrelatedSideEffect(
    mergedChunk,
    targetChunk,
    currentAdditionalSize,
    sideEffectAtoms,
    sizeByAtom
  );
  return firstSize < currentAdditionalSize
    ? firstSize +
        getAdditionalSizeIfNoTransitiveDependencyOrNonCorrelatedSideEffect(
          targetChunk,
          mergedChunk,
          currentAdditionalSize - firstSize,
          sideEffectAtoms,
          sizeByAtom
        )
    : Infinity;
}
Additional size calculation:
return getAtomsSizeIfBelowLimit(
  dependencyAtoms & ~correlatedAtoms, // Non-correlated atoms
  currentAdditionalSize,
  sizeByAtom
);
findBestMergeTarget iterates through all candidates and selects the target with the smallest additional size:
for (const targetChunk of concatLazy([small, big])) {
  if (mergedChunk === targetChunk) continue;
  const additionalSizeAfterMerge = getAdditionalSizeAfterMerge(...);
  if (additionalSizeAfterMerge < smallestAdditionalSize) {
    bestTargetChunk = targetChunk;
    if (additionalSizeAfterMerge === 0) break; // Early exit optimization
    smallestAdditionalSize = additionalSizeAfterMerge;
  }
}
- Location: src/utils/chunkAssignment.ts:844-869, 879-939
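concatLazy here yields candidates from each collection in turn without materializing a combined array, so small chunks are tried before big ones and the early exit can skip the big set entirely. A possible sketch of such a helper (this is an assumed shape for illustration, not the implementation from the Rollup source):

```typescript
// Lazily yield the elements of each iterable in order.
function* concatLazy<T>(iterables: Iterable<T>[]): Generator<T> {
  for (const iterable of iterables) {
    yield* iterable;
  }
}

const small = new Set(['s1', 's2']);
const big = new Set(['b1']);

// Candidates are visited small-first; nothing is copied up front.
const visitOrder = [...concatLazy([small, big])];
```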
Important Edge Cases
- manualChunks unaffected: experimentalMinChunkSize only applies to automatic chunks; it does not rewrite chunks defined by manualChunks.
- Not effective under preserveModules and inlineDynamicImports: Both options bypass the default chunking algorithm, so experimentalMinChunkSize merging does not apply.
- Strict side effect constraints: Even if a chunk is below the threshold, it stays independent when merging would introduce non-correlated side effects or circular dependencies.
- Default behavior (minChunkSize is 1): The default value is 1, which only merges chunks whose additional size is 0 (i.e., merging adds no redundant code).