A 10x Faster Native TypeScript
Related Materials
- A 10x Faster TypeScript - Reddit
- A 10x faster TypeScript - YouTube
Technical Pain Points
- Scalability Bottleneck: As the codebase grows, the existing JavaScript implementation of the TypeScript compiler hits significant performance bottlenecks on very large projects.
- High Resource Consumption: Memory usage and CPU utilization keep climbing during compilation and type checking of large projects.
- Degraded Development Experience: Once a project reaches a certain size, editor responsiveness, type checking, and code navigation slow down noticeably.
- AI Enhancement Requirements: The current architecture struggles to support advanced analysis and new AI-driven development tools.
Limitations of Current Implementation
- Long Editor Startup Time: Loading the VS Code codebase (1.5 million lines of code) takes about 9.6 seconds.
- Inefficient Command-line Compilation: Type checking large projects often takes minutes.
- Poor Memory Utilization: Because of JavaScript runtime limitations, memory usage cannot be fully optimized.
- Technology Stack Limitations: The current JavaScript implementation struggles to break through its performance ceiling, which calls for a rethink at the language level.
Goals
To fundamentally address these performance bottlenecks, the TypeScript team has begun work on a native port of the TypeScript compiler and tools. The native implementation will dramatically improve editor startup time, cut most build times by 10x, and substantially reduce memory usage.
By porting the current codebase, we expect to ship a preview of a native tsc capable of command-line type checking by mid-2025, and a feature-complete solution for project builds and language services by the end of the year.
Reasons for Choosing Go in TypeScript Engine Rewrite
To meet those goals, we've begun work on a native port of the TypeScript compiler and tools.
Language Selection Evaluation
We conducted a comprehensive evaluation of multiple language options, drawing on recent investigations and earlier surveys. We also considered hybrid approaches in which certain components would be written in a native language while the core type-checking algorithms remained in JavaScript. To that end, we built several prototypes in different languages, explored various data representations, and studied the implementation approaches of existing native TypeScript parsers such as swc, oxc, and esbuild.
Go Language Advantages
It's worth noting that many languages would be suitable in a full-rewrite scenario. Weighing multiple criteria against our specific situation, however, Go performed best, for three main reasons:
1. Code Compatibility
The most important consideration is keeping the new codebase highly compatible with the existing one, both in semantics and in code structure. We expect to maintain both codebases for a considerable time. Go allows the two codebases to stay structurally similar, which makes it much easier to carry changes across from one to the other.
By contrast, languages that demand a fundamental rethink of memory management, mutability, data structure design, polymorphism, and lazy evaluation may be better suited to a complete rewrite. This effort, however, is more of a port: the aim is to preserve existing behavior and the key optimizations we've built into TypeScript. Go's idiomatic style closely resembles the coding patterns already used in the TypeScript codebase, which makes the porting work far more feasible, as the sketch below illustrates.
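To make "structurally similar" concrete, here is a minimal sketch of what a port can look like: a small scanner-style helper as it might be written in the TypeScript codebase (shown in the comment) next to a line-for-line Go version. The function and its name are invented for illustration and are not taken from either codebase.

```go
package main

import "fmt"

// The TypeScript original might read:
//
//	function skipWhitespace(text: string, pos: number): number {
//	    while (pos < text.length && isWhitespace(text.charCodeAt(pos))) {
//	        pos++;
//	    }
//	    return pos;
//	}
//
// The Go port keeps the same names, the same control flow, and the same
// traversal order, so a fix in one codebase maps almost mechanically
// onto the other.
func skipWhitespace(text string, pos int) int {
	for pos < len(text) && isWhitespace(text[pos]) {
		pos++
	}
	return pos
}

func isWhitespace(ch byte) bool {
	return ch == ' ' || ch == '\t' || ch == '\n' || ch == '\r'
}

func main() {
	fmt.Println(skipWhitespace("   let x = 1", 0)) // 3
}
```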
2. Memory Management
Go gives us excellent control over memory layout and allocation (at both the object and field level) without forcing the entire codebase to constantly think about memory management. This does mean we rely on a garbage collector, but the drawbacks of GC are not particularly noticeable in our codebase:
- We don't have strict latency constraints that GC pauses or slowdowns would violate.
- Batch compilation can effectively avoid garbage collection altogether, since the process exits when it finishes.
- In non-batch scenarios, most upfront allocations (the AST and so on) live for the entire lifetime of the program, and we have strong domain knowledge about when it is "logically" appropriate to run the GC.
Go's model therefore brings a large reduction in codebase complexity, while the actual runtime cost of garbage collection stays low.
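As a minimal sketch (with an invented node struct, not the port's actual data layout), the following shows the kind of control this refers to: a struct's field layout is explicit and compact, and allocating nodes in bulk means one contiguous block instead of one separately tracked heap object per node.

```go
package main

import (
	"fmt"
	"unsafe"
)

// node is a hypothetical AST node used only for illustration.
// Field sizes are chosen explicitly, so the per-node footprint is
// known and small.
type node struct {
	kind  uint16 // enum-like tag
	flags uint16
	pos   int32 // source positions as plain integers, not boxed objects
	end   int32
}

func main() {
	// One allocation backs a million nodes. The garbage collector sees
	// a single slice rather than a million individual heap objects,
	// which is what keeps GC cost low in practice.
	nodes := make([]node, 1_000_000)
	nodes[0] = node{kind: 1, pos: 0, end: 10}

	perNode := int(unsafe.Sizeof(node{}))
	fmt.Println("bytes per node:", perNode)         // 12
	fmt.Println("total bytes:", perNode*len(nodes)) // 12,000,000
}
```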
3. Graph Processing Capabilities
Our codebase does a great deal of graph processing, in particular traversing trees of polymorphic nodes both upward and downward. Go excels here, especially given that we want the code to stay close to its JavaScript counterpart.
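Here is a rough illustration, with invented node types rather than the real AST, of how polymorphic tree traversal tends to look in Go: concrete node types share a small interface, and dispatch is a type switch, which maps naturally onto the kind-based switches used throughout the JavaScript compiler.

```go
package main

import "fmt"

// Hypothetical polymorphic AST nodes, invented for illustration.
type Node interface {
	Children() []Node
}

type Identifier struct{ Name string }

type BinaryExpr struct{ Left, Right Node }

func (Identifier) Children() []Node   { return nil }
func (b BinaryExpr) Children() []Node { return []Node{b.Left, b.Right} }

// walk visits a node and all of its descendants, top down.
func walk(n Node, visit func(Node)) {
	visit(n)
	for _, c := range n.Children() {
		walk(c, visit)
	}
}

func main() {
	expr := BinaryExpr{
		Left:  Identifier{Name: "a"},
		Right: BinaryExpr{Left: Identifier{Name: "b"}, Right: Identifier{Name: "c"}},
	}

	var names []string
	walk(expr, func(n Node) {
		// A type switch plays the role of the kind-based dispatch
		// (node.kind) in the JavaScript version.
		switch v := n.(type) {
		case Identifier:
			names = append(names, v.Name)
		}
	})
	fmt.Println(names) // [a b c]
}
```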
Weaknesses and Solutions
Go's in-process JS interoperability is weaker than some of the alternatives. We have plans to mitigate this and are committed to providing a fast, ergonomic JS API. Under the current API model, consumers can access (and even modify) almost anything, which limits the optimizations we can make; we want the new codebase to be free to change its internal representations without worrying about breaking every API consumer. Moving to a more intentional API design (while keeping interoperability in mind) will let us push the ecosystem forward while delivering these large performance gains.
Why Not Speed Up the JavaScript Code?
To speed up TypeScript, you need a language that supports multithreading. JavaScript effectively runs on a single core; although there are upcoming features (such as shared structs) that would allow data to be shared across threads, they are not ready yet.
Languages like Go and Rust have built-in multithreading support, so they can spread work across multiple CPU cores and parallelize as much as possible. That is a key reason they are faster.
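As a sketch of what multithreading buys, the Go program below fans per-file work out across all available CPU cores using goroutines; checkFile here is a stand-in for real parsing or checking work, not the compiler's actual API.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// checkFile stands in for per-file work such as parsing or binding.
// It is invented for illustration only.
func checkFile(name string) int {
	return len(name) // placeholder "work"
}

func main() {
	files := []string{"a.ts", "b.ts", "c.ts", "d.ts"}
	results := make([]int, len(files))

	// A bounded worker pool: goroutines spread the per-file work across
	// however many CPU cores are available, which a single-threaded
	// JavaScript process cannot do in-process.
	var wg sync.WaitGroup
	sem := make(chan struct{}, runtime.NumCPU())
	for i, f := range files {
		wg.Add(1)
		sem <- struct{}{}
		go func(i int, f string) {
			defer wg.Done()
			defer func() { <-sem }()
			results[i] = checkFile(f)
		}(i, f)
	}
	wg.Wait()

	fmt.Println(results)
}
```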
Performance Improvements
Our native implementation is already able to load many popular TypeScript projects, including the TypeScript compiler itself. Here's a comparison of running tsc on popular GitHub codebases of different sizes:
| Codebase | Lines of Code | Current Version Compile Time | Native Version Compile Time | Speed Improvement |
|---|---|---|---|---|
| VS Code | 1,505,000 | 77.8s | 7.5s | 10.4x |
| Playwright | 356,000 | 11.1s | 1.1s | 10.1x |
| TypeORM | 270,000 | 17.5s | 1.3s | 13.5x |
| date-fns | 104,000 | 6.5s | 0.7s | 9.5x |
| tRPC (server + client) | 18,000 | 5.5s | 0.6s | 9.1x |
| rxjs (observable) | 2,100 | 1.1s | 0.1s | 11.0x |
While we haven't implemented all features yet, these numbers represent the order of magnitude performance improvement you'll see when checking most codebases.
Overhead of compiling the TypeScript compiler itself:
| Diagnostics | TypeScript | Native Single Thread | Native Multi Thread (default) |
|---|---|---|---|
| Files | 217 | 217 | 217 |
| Lines | 252851 | - | - |
| Identifiers | 425459 | - | - |
| Symbols | 260977 | - | - |
| Types | 106419 | 111785 | 205491 |
| Instantiations | 188269 | - | - |
| Memory used | 487981K | 270506K | 332759K |
| I/O read | 0.07s | - | - |
| I/O write | 0.06s | - | - |
| Parse time | 0.98s | 0.241s | 0.078s |
| Bind time | 0.37s | 0.061s | 0.014s |
| Check time | 5.58s | 1.547s | 0.805s |
| Emit time | 0.39s | 0.182s | 0.068s |
| Total time | 7.31s | 2.031s | 0.966s |
Why Not a 10x Improvement?
The type checker is the most important part of TypeScript: it is responsible for finding type errors in code. Not all of the checker's work can be parallelized across files, so checking takes proportionally more time, and the self-compilation above shows roughly an 8x improvement rather than 10x.
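To make that arithmetic concrete, the short Go program below computes per-phase speedups from the self-compilation table above (the current JavaScript compiler versus the default multi-threaded native run); the numbers are copied directly from that table.

```go
package main

import "fmt"

func main() {
	// Seconds taken from the self-compilation table above.
	phases := []struct {
		name       string
		js, native float64
	}{
		{"parse", 0.98, 0.078},
		{"bind", 0.37, 0.014},
		{"check", 5.58, 0.805},
		{"emit", 0.39, 0.068},
		{"total", 7.31, 0.966},
	}
	for _, p := range phases {
		fmt.Printf("%-6s %4.1fx\n", p.name, p.js/p.native)
	}
	// Parsing and binding speed up by well over 10x, but checking is
	// only about 7x faster because parts of it cannot run in parallel,
	// which pulls the overall figure down to roughly 7.6x.
}
```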
This native version will be able to provide instant, comprehensive error lists for entire projects, support more advanced refactorings, and enable deep insights that were previously too computationally expensive. This new foundation goes beyond today's developer experience and will also power the next generation of AI tools that can learn, adapt, and improve the coding experience.
Editor Speed
Most development time is spent in the editor, which is where performance matters most. We want editors to load large projects quickly and respond quickly in all situations. Modern editors like Visual Studio and Visual Studio Code can provide excellent performance as long as the underlying language service is also fast enough. With our native implementation, we'll be able to provide an extremely fast editor experience.
Using the Visual Studio Code codebase as a benchmark again, the current time to load the entire project into the editor on a high-performance computer is about 9.6 seconds. With the native language service, this will be reduced to about 1.2 seconds, an 8x improvement in project loading time in the editor scenario. This means the working experience will be much quicker from opening the editor to typing the first character in any TypeScript codebase. We expect to see this level of improvement in loading time for all projects.
Overall memory usage also appears to be about half of the current implementation, although we haven't actively researched optimizing this aspect and expect to achieve further improvements. Editor responsiveness for all language service operations (including completion lists, quick info, go to definition, and find all references) will also see significant speed improvements. We'll also be moving to the Language Server Protocol (LSP), a long-term infrastructure work item aimed at better aligning our implementation with other languages.
Version Roadmap
- Mid-2025 Milestone: Provide a preview of native implementation capable of command-line type checking
- End of 2025 Full Release: Deliver full-featured TypeScript 7.0 native implementation, including project build and language service
- Seamless Migration Guarantee: Maintain parallel development of JS version (6.x) and native version (7.x), ensuring smooth ecosystem transition
- TypeScript 5.8/5.9: current releases
- TypeScript 6.x series: JavaScript-based implementation, will introduce API changes aligned with the native version
- TypeScript 7.0: official release of the native implementation
Our most recent TypeScript version is TypeScript 5.8, with TypeScript 5.9 coming soon. The JS-based codebase will continue to be developed into the 6.x series, with TypeScript 6.0 introducing some deprecations and breaking changes to align with the upcoming native codebase.
When the native codebase reaches sufficient parity with current TypeScript, we'll release it as TypeScript 7.0. This is still in development, and we'll make announcements when we reach stability and feature milestones.
For clarity, we'll simply call them TypeScript 6 (JS version) and TypeScript 7 (native version), as this will be the naming convention for the foreseeable future. You might also see us refer to "Strada" (original TypeScript codename) and "Corsa" (codename for this work) in internal discussions or code comments.
While some projects might be able to switch to TypeScript 7 upon release, others might depend on certain API features, legacy configurations, or other constraints requiring continued use of TypeScript 6. Recognizing TypeScript's critical role in the JS development ecosystem, we'll continue to maintain the 6.x line of the JS codebase until TypeScript 7+ reaches sufficient maturity and adoption.
Our long-term goal is to keep these versions as closely aligned as possible, so you can upgrade to TypeScript 7 as soon as it meets your needs, or fall back to TypeScript 6 if necessary.
Impact on the Ecosystem
Some plugins that exist mainly to speed up TypeScript will no longer be needed once the native version ships.
Impact on webpack
Changes to ts-loader
ts-loader currently relies on TypeScript's JavaScript API to perform type checking and transpilation. With the introduction of typescript-go, it faces the following changes:
- Need to adapt to a new compiler API: TypeScript 7 will provide a new compiler API, so ts-loader will need to be refactored to support the new interface.
- Performance improvement potential: one of ts-loader's main bottlenecks today is TypeScript compilation speed. If it can integrate the native implementation, the most time-consuming part of the webpack build will be significantly accelerated.
- Dual-mode support: since TypeScript 6 (the JS version) will continue to be maintained for some time, ts-loader may need to support both modes at once, with configuration along these lines:

```js
{
  "test": /\.tsx?$/,
  "use": "ts-loader",
  "options": {
    // New option
    "compilerImplementation": "native" // or "javascript"
  }
}
```
Transformation of fork-ts-checker-webpack-plugin
The core value of fork-ts-checker-webpack-plugin lies in moving TypeScript type checking into a separate process to speed up builds. With typescript-go's 10x performance improvement:
- Value reassessment: the plugin exists primarily to work around slow TypeScript type checking. Once native TypeScript is 10x faster, running the checker in a separate process becomes far less necessary.
- Possible transformation directions:
  - Become an adaptation layer for TypeScript 7.
  - Provide more advanced features, such as incremental compilation.
  - Integrate into a more streamlined workflow.
Impact on Native Tools
For performance reasons, downstream toolchains mostly separate TypeScript type checking from transpilation: transpilation is handled by native tools, while type checking is left to tsc.
Impact on esbuild
esbuild has built-in TypeScript transpilation support (without type checking), and downstream toolchains use esbuild for this transpilation step.
For example, vite relies on esbuild's fast transpilation, through its built-in vite:esbuild plugin sketched below, to turn TypeScript modules into JS modules.
```ts
// Simplified from Vite's built-in vite:esbuild plugin; the trailing
// arguments to transformWithEsbuild mirror Vite's internal call.
import type { ESBuildOptions, Plugin, ResolvedConfig, ViteDevServer } from 'vite';
import { createFilter, transformWithEsbuild } from 'vite';

const cleanUrl = (url: string): string => url.replace(/[?#].*$/, '');

export function esbuildPlugin(config: ResolvedConfig): Plugin {
  const options = (config.esbuild || {}) as ESBuildOptions;
  const { include, exclude, ...transformOptions } = options;

  // Only transform TypeScript/JSX modules; plain .js files pass through.
  const filter = createFilter(
    include || /\.(m?ts|[jt]sx)$/,
    exclude || /\.js$/
  );

  let server: ViteDevServer | undefined;

  return {
    name: 'vite:esbuild',
    configureServer(_server) {
      server = _server;
    },
    async transform(code, id) {
      if (filter(id) || filter(cleanUrl(id))) {
        // Fast TS -> JS transpilation via esbuild (no type checking).
        const result = await transformWithEsbuild(
          code,
          id,
          transformOptions,
          undefined,
          config,
          server?.watcher
        );
        return {
          code: result.code,
          map: result.map,
        };
      }
    },
  };
}
```
Type checking is left to tsc, typically wired into the project's build script:
```json
{
  "name": "vite-react-typescript-starter",
  "private": true,
  "version": "0.0.0",
  "type": "module",
  "scripts": {
    "dev": "vite",
    "build": "tsc -b && vite build",
    "lint": "eslint .",
    "preview": "vite preview"
  },
  "dependencies": {
    "react": "^19.0.0",
    "react-dom": "^19.0.0"
  },
  "devDependencies": {
    "@eslint/js": "^9.22.0",
    "@types/react": "^19.0.10",
    "@types/react-dom": "^19.0.4",
    "@vitejs/plugin-react": "^4.3.4",
    "eslint": "^9.22.0",
    "eslint-plugin-react-hooks": "^5.2.0",
    "eslint-plugin-react-refresh": "^0.4.19",
    "globals": "^16.0.0",
    "typescript": "~5.7.2",
    "typescript-eslint": "^8.26.0",
    "vite": "^6.2.1"
  }
}
```
How vite Handles TypeScript Modules
vite's job is to turn source modules into something that can run in the browser as quickly as possible. To that end, vite recommends keeping static analysis checks out of its transform pipeline; the same principle applies to other static checks such as eslint. vite uses esbuild to transpile TypeScript to JavaScript, which is roughly 20-30x faster than tsc, with HMR updates reflected in the browser in under 50ms.
esbuild is a build tool written in Go. Since esbuild and typescript-go are both implemented in Go, the similar technology stacks make deep integration more feasible.
Impact on swc
swc has built-in TypeScript transpilation support (without type checking). Once TypeScript itself provides a high-performance implementation, swc's speed advantage when transpiling TypeScript will be much less pronounced.
Impact on oxc
oxc may need to rethink how it integrates TypeScript. Today, oxc implements its own TypeScript parser, which risks diverging from the official TypeScript implementation. With typescript-go providing a high-performance official implementation, oxc may consider integrating it directly.