Description
Webpack version:
1.10.x and 2.x
Please tell us about your environment:
OSX 10.x / Linux / Windows 10 [all]
Expected/desired behavior:
Highlight at build time any JavaScript chunks or bundles that are over a size threshold and can negatively impact web performance: load times, parse/compile time and time to interactivity. A default performance budget could indicate whether total chunk sizes for a page are over a limit (e.g. 250KB).
I'd love to see if we could make this a default and offer a config option for opting out 🏃‍♀️ My concern with an opt-in is that the folks with the worst perf issues may not know to enable it, or to use this via a plugin if the suggestion were deferred to one. These folks may also not be testing on mobile devices.
Optionally, highlight where better performance patterns might be helpful:
Current behaviour:
Many of the apps bundled with Webpack that we trace ship a large, single bundle that ends up pegging the main thread and taking longer than it should for webapps to be interactive:
This isn't Webpack's fault, just the culture around shipping large monolithic bundles. This situation gets particularly bad on mobile when trying these apps out on real devices.
If we could fix this, it would also make it way more feasible for the Webpack + React (or Angular) stacks to be good candidates for building fast web apps and Progressive Web Apps 🔥
What is the motivation / use case for changing the behavior?
I recently dove into profiling a large set (180+) of React apps in the wild, based on 470 responses we got from a developer survey. This included a mix of small to large scale apps.
I noted a few common characteristics:
- 83+% of them use Webpack for module bundling (17% are on Webpack 2)
- Many ship large monolithic JS bundles down to their users (0.5-1MB+). In most cases, this makes apps interactive in well over 12.4 seconds on real mobile devices (see table 1) compared to the 4 seconds we see for apps properly using code-splitting and keeping their chunks small. They’re also far slower on desktop than they should be - more JS = more parse/execution time spent at a JS engine level.
- Developers either don’t use code-splitting or use it while still shipping down large chunks of JS in many cases. More on this soon.
Table 1: Summary of time-to-interactive scoring (individual TTI was computed by Lighthouse)
Condition | Network latency | Download throughput | Upload throughput | TTI: average React+Webpack app | TTI: smaller bundles + code-splitting |
---|---|---|---|---|---|
Regular 2G | 150ms | 450kbps | 150kbps | 14.7s | 5.1s |
Regular 3G | 150ms | 1.6Mbps | 750kbps | 12.4s | 4s |
Regular 4G | 20ms | 4Mbps | 3Mbps | 8.8s | 3.8s |
Wifi | 2ms | 30Mbps | 15Mbps | 6s | 3.4s |
We generally believe that splitting up your work into smaller chunks can get you closer to being interactive sooner, in particular when using HTTP/2. Only serving down the code a user needs for a given route is just one pattern here (e.g. PRPL) that we’ve seen help a great deal.
Examples of this include the great work done by Housing.com and Flipkart.com. They use Webpack and are getting those real nice numbers in the last column thanks to diligence with perf budgets and code-splitting 👍.
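For illustration, here's a minimal sketch of the route-level splitting pattern described above, using a webpack 2 `System.import()` split point (the route path and its `render` export are hypothetical):

```js
// Hypothetical router hook: the profile route's code is only fetched when the user
// navigates there. webpack turns the System.import() call into a separate chunk
// that is lazy-loaded on demand instead of being shipped in the main bundle.
function onNavigateToProfile() {
  System.import('./routes/profile')
    .then(function (profile) {
      profile.render(document.getElementById('app'));
    })
    .catch(function (err) {
      console.error('Failed to load the profile chunk', err);
    });
}
```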
What impacts a user's ability to interact with an app?
A slow time to being interactive can be attributed to a few things:
- Client is slow, i.e. the main thread is kept busy 😓 Large JS bundles will take longer to compile and run. There may be other issues at play, but large JS bundles will definitely peg the main thread. Staying fast by shipping the smallest amount of JS needed to get a route/page interactive is a good pattern, especially on mobile, where large bundles take even longer to load/parse/execute ⏳
- Server/backend may be slow to respond
- Suboptimal back and forth between the server and client (lots of waterfall requests) that are a sequence of network busy -> CPU idle -> CPU busy -> network idle and so on.
If we look at tools like performancebudget.io, targeting RAIL’s <3s load on 3G would place our total JS budget at a far more conservative ~106KB once you factor in the other resources a typical page includes (like stylesheets and images). The less conservative 250KB figure is an upper-bound estimate.
Code-splitting and confusion
A surprising 58%+ of respondents said they were using code-splitting. We also profiled just this subset and found that their average time to being interactive was 12.3 seconds (remember that overall, the average TTI we saw was 12.4s with or without splitting). So, what happened?
Digging into this data further, we discovered two things.
- Folks who thought they were code-splitting actually weren't, and there was a terminology breakdown somewhere (e.g. maybe they thought using `CommonsChunkPlugin` to 'split' vendor code from main chunks was code-splitting?) 🤔
- Folks who definitely were code-splitting had zero insight into the impact of chunk size on web performance. We saw lots of people with chunk sizes of 400, 500...all the way up to 1200KB of script who were then also lazy-loading in even more script 😢
Keep in mind: it's entirely possible to ship JS-heavy apps with Webpack that become interactive quickly - if Flipkart can hit interactivity in under 5 seconds, we can definitely bring this number down for the average Webpack user too.
Note: if you absolutely need a large bundle of JS for a route/page to be useful at all, our advice is to just serve it in one bundle rather than code-split. At an engine level this is cheaper to parse. In most cases, devs aren't going to need all that JS for just one page/view/route so splitting helps.
What device was used in our lab profiling?
A Nexus 5X with a real network connection. We also ran tests on emulated setups with throttled CPU and network (2G, 3G, 4G, Wifi). One key thing to note is that if this proposal were implemented, it could benefit load times for webapps on all hardware, regardless of whether it's Android or iOS. Fewer bytes shipped down the line = a little more ❤️ for users' data plans.
The time-to-interactive definition we use in Lighthouse is the moment after `DOMContentLoaded` where the main thread is available enough to handle user input ✋. We look for the first 500ms window where estimated input latency is <50ms at the 90th percentile.
Suppressing the feature
Users could opt out of the feature through their Webpack configuration (we can 🚲 🏠 over that). If a problem is that most devs don't run their dev configs optimized for production, it may be worth considering enabling this feature when the `-p` production flag is used; however, I'm unsure how often that flag is passed. Right now it's unclear whether we want to define a top-level `performanceHints` config vs. a `performance` object:
```js
performance: {
  hints: true,
  maxBundleSize: 250,
  warnAtPercent: 80
}
```
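For context, here's a sketch of where the proposed object might sit in a full `webpack.config.js`. The option names (`hints`, `maxBundleSize`, `warnAtPercent`) are the RFC's illustrative proposal rather than a finalized API, and the entry/output values are placeholders:

```js
// Sketch only: `performance` and its option names follow the RFC's example above,
// not a shipped webpack API; entry/output are placeholder values.
module.exports = {
  entry: './src/index.js',
  output: {
    path: __dirname + '/dist',
    filename: '[name].[chunkhash].js'
  },
  performance: {
    hints: true,         // emit build-time warnings when a budget is exceeded
    maxBundleSize: 250,  // per-bundle budget in KB (the 250KB default suggested above)
    warnAtPercent: 80    // start warning once a bundle reaches 80% of the budget
  }
};
```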
Optional additions to proposal
Going further, we could also consider informing you if:
- You weren't taking advantage of patterns like code-splitting (e.g. not using `require.ensure()` or `System.import()` — see the sketch after this list). This could be expanded to also provide suggestions on other perf plugins (like `CommonsChunkPlugin`).
- What if Webpack opted for code-splitting by default as long as you were using `System.import()` or `require.ensure()`? The minimum config would then just be the minimum requirements, i.e. the entry and output we have today.
- What if it could guide you through setting up code-splitting and patterns like PRPL if it detected perf issues? i.e. at least install the right Webpack plugins and get your config set up, or point you to the docs to finish getting set up.
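For reference, a minimal sketch of the two split-point forms mentioned in the first item, roughly as they looked at the time of this RFC (the `./charting` module is hypothetical):

```js
// webpack 1/2 style: require.ensure() declares a split point; the callback runs
// once the chunk containing './charting' has been fetched.
require.ensure(['./charting'], function (require) {
  var charting = require('./charting');
  charting.drawDashboard();
});

// webpack 2 style: System.import() returns a promise for the lazily loaded module
// and likewise produces a separate chunk.
System.import('./charting').then(function (charting) {
  charting.drawDashboard();
});
```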
Thanks to Sean Larkin, Shubhie Panicker, Gray Norton and Rob Wormald for reviewing this RFC before I submitted it.
Activity
rryter commented on Oct 31, 2016
I like the idea very much. And I would like to stress that, depending on the situation, the performance budget could be 500KB or maybe just 100KB. This way people can set their own performance budget and be notified as soon as they come close to the limit. Maybe already notify them when they're at about 80% of the budget.
mxstbr commented on Oct 31, 2016
Love it!
Just wanted to note a nitpick: I'm red-green colorblind, and have a really hard time seeing the difference between the green and the yellow above. Would be great to have some more-different colors there!
addyosmani commented on Oct 31, 2016
More than happy for us to figure out an alternative color scheme :) Also, glad you like the proposal!
NekR commented on Oct 31, 2016
Great work @addyosmani (and your team)!
@rryter I can imagine people setting a 5MB perf budget; that just doesn't change anything from how things are now. People just assume "this is okay". Performance is the same for everyone and we should be firm about pointing it out, instead of allowing incremental escapes: "Oh, I'll add another 100KB of budget because I need this lib right now." -- three months later -- "Oh, it's a 3MB bundle now 😱".
kostasmanionis commented on Oct 31, 2016
Great initiative! A few thoughts.
Ideally I would like to get feedback on my bundle sizes when in development mode, but like Addy stated - most of us probably don't really develop in production mode. E.g. I don't use a minifier in development mode, because it slows down builds and I like them to be as fast as possible.
But a minifier can shed a lot of code off your bundle, especially with webpack 2's tree-shaking feature. AFAIK webpack creates dead code that's removed during the minification process. So if I want to get an accurate idea of what my bundle size is, I need to minify my code.
Oh and there's gzip too...
kenwheeler commented on Oct 31, 2016
👍
This is awesome. I would love to see more optimization warnings like this, things like how much savings you're getting from tree shaking and how much more you could save, so that devs would have a path for reducing those bundle sizes. Projected production metrics while in dev mode would be dope too.
asolove commented on Oct 31, 2016
Lots of excellent ideas here and I'm excited for the possibilities.
I just want to remind everyone that for the 1% of people who have super-advanced configs, and the 19% of people who have done the config themselves, there's a much more important 80% of people who hardly think about webpack at all and are shipping apps with the default settings they found in some blog post. Tree-shaking and code-splitting might eventually help them, but they have other work to do first.
I would therefore think about the simplest, easiest-to-get-wrong things we could warn them about. These are probably things people remotely familiar with webpack would never do, and so they likely aren't at the top of our minds. Some examples:
- Using `DefinePlugin` for `process.env === 'production'` (a minimal sketch follows below)
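For example, a minimal sketch of that `DefinePlugin` setting (standard webpack usage; the rest of the config is omitted):

```js
// Replaces process.env.NODE_ENV with "production" at build time, letting the
// minifier strip development-only branches from libraries like React.
var webpack = require('webpack');

module.exports = {
  // ...entry, output and loaders as usual...
  plugins: [
    new webpack.DefinePlugin({
      'process.env.NODE_ENV': JSON.stringify('production')
    })
  ]
};
```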
Unlike tree-shaking and code-splitting, which may require difficult code changes to work well, these are very simple changes anyone can make for an immediate benefit. And helping non-experts get easy wins ("Wow, I paid attention to this and reduced my load time by half!") might be a great way to get them excited about thinking about performance more regularly. (I can see a future where, after webpack sees them respond to these initial problems and get a win, it links them to Lighthouse or some other tool where they can learn more about performance practices.)
A lot of the biggest-impact checks should only run in a production build. Which raises another question: since production builds may only run in CI or on a build server, where people rarely look at the logs, should obvious problems like this return an exit code? It could prompt the developer to either fix the problem or modify their performance options to accept the issue.
rryter commented on Oct 31, 2016
@NekR It depends on what the target audience is. Also if you want to achieve a goal lower than the 250KB, let's say 100KB, that should be configurable. Having a 250KB performance budget or no budget at all isn't going to cut it. I do agree that this is a solid default value though.
NekR commented on Oct 31, 2016
@kostasmanionis
While download time is important, most of the problem is executing/parsing the script, which runs at its full size regardless of how hard you compress it with gzip.
But yeah, I agree it should be measured against production builds but somehow be visible in dev builds too.
@asolove That sounds good, but the problem is that it's hard to know whether it's a production build or not, i.e. I rarely see people use the `-p` CLI option, and there is actually no way to tell about production intentions when you use webpack's Node API.
lili21 commented on Aug 17, 2018
Is it minified size or compressed size? Can I configure it to use compressed size, like gzip?
nilanjansiromani commented on Nov 2, 2019
@TheLarkInn @addyosmani
Great feature, but is there a way to set budgets for more than one bundle?
The problem we faced was that we set the budget at 1.5MB for the biggest bundle;
meanwhile, the two smaller bundles we had crept up from 350 to 600KB with no error (because: < 1.5MB).
I understand it's a developer's responsibility to keep an eye on this, but it would be a great feature to have. (Assuming it's not there and I'm not aware of how to implement it.)
grgur commented on Nov 7, 2019
@nilanjansiromani Give Gimbal a try: https://github.com/ModusCreateOrg/gimbal/
We've had a lot of success using it for performance budgeting. It expands on what Webpack budgets support by utilizing Lighthouse, Axe accessibility checks, memory and CPU budgeting, sourcemap checks, etc.
asennoussi commented on Jul 9, 2020
This didn't age well for Flipkart and Housing.com :)
nyngwang commented on Apr 4, 2022
@addyosmani I'm using the built-in plugin `SplitChunksPlugin` in Webpack 5; it generates many `bundle.js` files with sizes all under 150KiB, so each of them satisfies the suggested limit, i.e. 250KiB. But why are you suggesting using total chunk sizes? I got this annoying error: the size of `react-dom.production.min.js` is 116KiB, which is something I cannot control, but it already occupies almost half of the limit. What can I do in this case to really resolve this warning? Dynamic import is definitely not a solution here.
Update: I also left detailed feedback on the current problem with this "warning". See #3486 (comment).
nyngwang commented on Apr 6, 2024
alexander-akait commented on Apr 8, 2024
We are working together with GitHub support on this problem, so don't worry about it.