
Why remove time slicing from vue3? #89

Closed
yisar opened this issue Oct 28, 2019 · 25 comments

@yisar

yisar commented Oct 28, 2019

I saw that time slicing has been deleted from vue-next here, and I couldn't find the reason anywhere.

Can I find the answer here? Is it because time slicing is no longer needed, or something else?

@Akryum
Member

Akryum commented Oct 28, 2019

This is not the right place for this kind of question, but the gist of it is:

  • Too much complexity
  • Little gain
  • Vue 3 is so fast we may not need it

@Akryum Akryum closed this as completed Oct 28, 2019
@LinusBorg
Member

Also note that this doesn't mean it's dead forever. Rather consider it postponed until we can re-evaluate what good it can actually bring to Vue if we add it in a later version, and at what cost-benefit ratio.

@yisar
Author

yisar commented Oct 28, 2019

@LinusBorg
I read the time-slicing source code and found that the slicing unit may be a function, while the unit of the block tree is a block, so I think the gains from slicing are limited.

But even so, it still makes sense at the source level.

@yyx990803
Member

yyx990803 commented Oct 28, 2019

In web apps, "janky" updates are typically caused by a combination of synchronous heavy CPU time + raw DOM updates. Time slicing is an attempt at keeping the app responsive during the CPU work, but it affects only CPU work - the flush of the DOM updates must still be synchronous to ensure consistency of the final DOM state.

So, imagine two types of janky updates:

  1. The CPU work is within 16ms but the amount of raw DOM updates is huge (e.g. mounting a large amount of new DOM content). The app will still feel "janky" with or without time slicing.

  2. The CPU work is so heavy that it takes longer than 16ms. This is where time slicing theoretically starts to become beneficial - however, HCI research shows that unless it's driving an animation, for normal user interactions most humans won't feel the difference unless the update takes longer than 100ms.

    That is to say - time slicing only becomes practically beneficial when there will be frequent updates that would require longer than 100ms spent in pure CPU time. This is where the interesting part comes in: such a scenario would happen much more often in React because -

    1. React's Virtual DOM reconciliation is inherently slower because of the heavy fiber architecture;
    2. React using JSX makes its render functions inherently difficult to optimize compared to templates, which are more statically analyzable;
    3. React hooks leave most of the component-tree-level optimization (i.e. preventing unnecessary re-renders of child components) to the developer, requiring explicit usage of useMemo in most cases. Also, whenever a React component receives the children prop, it almost always has to re-render, because the children prop will always be a fresh vdom tree on each render. This means a React app using hooks will be over-re-rendering by default. What's worse, optimizations like useMemo cannot easily be auto-applied because (1) they require a correct deps array and (2) blindly adding them everywhere may block updates that should happen, similar to PureComponent. Unfortunately, most developers are lazy and will not aggressively optimize their apps everywhere, so most React apps using hooks will be doing a lot of unnecessary CPU work.

    In comparison, Vue addresses the above problem with:

    1. Inherently simpler and therefore faster Virtual DOM reconciliation (no time-slicing -> no fiber -> less overhead)
    2. Heavy AOT optimization by analyzing templates, solving the fundamental overhead of Virtual DOM reconciliation. Benchmarks show that for a typical piece of DOM content with approximately a 1:4 dynamic-to-static content ratio, Vue 3's raw reconciliation is even faster than Svelte's and spends less than 1/10 of the CPU time of the React equivalent.
    3. Smart component-tree level optimization via Reactivity tracking, compiling slot to functions (avoids children causing re-render), and auto-caching inline handlers (avoids inline function props causing re-render). A child component never re-renders unless it has to, without any manual optimization needed from the developer. This means for the same update, in a React app it may cause multiple components to re-render, but in Vue it most likely causes only 1 component to re-render.

    So by default, a Vue 3 app will be spending so much less time CPU-bound compared to a React app, and the chance of 100+ms spent in CPU land is drastically reduced and would only be encountered in extreme cases, where the DOM will likely become the more important bottleneck anyway.
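The `children` over-rendering point above can be illustrated without React at all. Below is a minimal plain-JavaScript sketch of memoization by reference equality; `memoByProps` is a hypothetical stand-in for the idea behind `React.memo`, not React's actual implementation:

```javascript
// Memoize a render function by shallow reference equality on props.
// This mirrors why React's `children` defeats memoization: the cache
// only hits when every prop is identical (===), but a fresh vdom tree
// (here, a fresh array literal) is created on every parent render.
function memoByProps(render) {
  let lastProps, lastResult;
  return (props) => {
    if (lastProps && Object.keys(props).every((k) => props[k] === lastProps[k])) {
      return lastResult; // cache hit: skip the re-render
    }
    lastProps = props;
    return (lastResult = render(props));
  };
}

let renders = 0;
const Child = memoByProps(({ children }) => { renders++; return children; });

// The parent "re-renders" twice with seemingly identical children…
Child({ children: ['item'] });
Child({ children: ['item'] }); // …but each array literal is a new reference
console.log(renders); // 2 - memoization never kicks in
```

Compiling slots to functions, as Vue 3 does, sidesteps exactly this identity problem.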


Now, time slicing, or concurrent mode brings along another problem: because the framework now schedules and coordinates all the updates, it creates a ton of extra complexity regarding priority, invalidation, re-entry etc. All the logic handling these can never be tree-shaken and this causes the runtime baseline size to bloat up. Even with Suspense and all tree-shakable features included, Vue 3's runtime is still only 1/4 the size of current React + React DOM.

Note this isn't saying concurrent mode as a whole is a bad idea. It does provide interesting new ways of dealing with a certain category of problems (in particular related to coordinating async state transitions), but time-slicing (as a sub feature of concurrent) specifically addresses a problem that is much more prominent in React than in other frameworks, at the same time creating its own costs. The trade-offs simply don't seem worthwhile for Vue 3.

@yisar
Author

yisar commented Oct 28, 2019

@yyx990803 Thanks for your reply. I think it's also a great summary.

It is true that time slicing solves very few problems - perhaps only a few scenarios, such as animation and visualization. 99% of scenarios don't need it, and it slows down the total time.

There are many problems in React. In addition to what you said, the use of fiber linked-list traversal also constrains the diff algorithm and rules out many optimizations.

In conclusion, Vue's tradeoff is persuasive. 👍

@CyberAP
Contributor

CyberAP commented Oct 28, 2019

HCI research shows that unless it's doing animation, for normal user interactions most human won't feel the difference unless the update takes longer than 100ms

But there are different kinds of interaction. Clicking a button is very different from typing text or using tab navigation. And for the latter 100ms would feel like stutter. I doubt it's appropriate to measure all interactions the same.

@yisar
Author

yisar commented Oct 28, 2019

Clicking a button is very different from typing text or using tab navigation.

In fact, apart from animation, other interactions can keep the browser responsive through throttling and debouncing.

For the past few months I have been researching React fiber. So far, I have not found convincing use cases.

@yyx990803
Member

yyx990803 commented Oct 28, 2019

@CyberAP of course. But

  1. For most of the typical update responses to typing / tab navigations, it's NOT going to take 16ms of CPU time, period. The demo React team showcased is so contrived that it will most likely never happen in an actual app.
  2. In common cases where these high frequency interactions do cause heavy CPU load, the work is also often not time-sliceable - for example, a live compiler playground where the compilation on keypress is synchronous by itself. Time slicing won't help there. A good old debounce/throttle will though. In addition, if the typing will trigger network side effects, a debounce/throttle is still needed regardless of rendering technology used.
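The "good old debounce/throttle" mentioned above can be sketched in a few lines of plain JavaScript; `filterList` here is a hypothetical expensive handler, not code from any real app:

```javascript
// Trailing-edge debounce: collapses a burst of calls into one call,
// fired `wait` ms after the last call in the burst.
function debounce(fn, wait) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// Simulated usage: a burst of keystrokes triggers one expensive filter run.
let runs = 0;
const filterList = debounce(() => { runs++; }, 50);
filterList(); filterList(); filterList(); // three keystrokes in quick succession

setTimeout(() => {
  console.log(runs); // 1 - the burst collapsed into a single run
}, 100);
```

Note that this bounds how often the heavy work runs at all, whereas time slicing would still run it on every keystroke, only in smaller pieces.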

@CyberAP
Contributor

CyberAP commented Oct 28, 2019

For most of the typical update responses to typing / tab navigations, it's NOT going to take 16ms of CPU time, period. The demo React team showcased is so contrived that it will most likely never happen in an actual app.

I've had an issue in Vue 2 with updating on text input for a simple (and, I assume, common) task: a nested list filtered by the query you enter. In my case the filtering itself was very fast, but the huge amount of VNode creation, patching (a recursive item list, where each item also contains multiple components) and DOM operations made it too slow to react quickly enough to input, so of course I had to debounce it. The sluggishness didn't go anywhere, though; it was just disguised by updating the list at a slower pace. I'm not sure time slicing would help here, but it's absolutely a real case where your business logic is fast but the framework runtime is slow, and you have to deal with it. This is going to vastly improve with Vue 3, but maybe having some extra tools to improve UX would be even better (not necessarily time slicing, though).

@nek4life

Now, time slicing, or concurrent mode brings along another problem

Just looking for a quick clarification. Are time slicing and concurrent mode the same thing?

@yisar
Author

yisar commented Oct 28, 2019

Are time slicing and concurrent mode the same thing?

They can be considered one thing: whether it's called time slicing or concurrent mode, it refers to the scheduler, which queues tasks by priority.
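As a rough illustration of "a scheduler that queues tasks by priority", here is a toy synchronous sketch. It only shows priority ordering; it omits everything that makes React's real scheduler complex (time budgets, expiration, interruption, re-entry):

```javascript
// Toy priority scheduler: tasks carry a numeric priority and the
// highest-priority pending task always runs first.
const queue = [];

function schedule(priority, task) {
  queue.push({ priority, task });
  queue.sort((a, b) => b.priority - a.priority); // highest priority first
}

function flush() {
  const order = [];
  while (queue.length) order.push(queue.shift().task());
  return order;
}

schedule(1, () => 'render list');
schedule(3, () => 'handle input');   // user input outranks rendering
schedule(2, () => 'run transition');
console.log(flush()); // [ 'handle input', 'run transition', 'render list' ]
```

The hard part in a real implementation is not this ordering but interleaving it with rendering while keeping the DOM consistent - which is exactly the complexity cost discussed above.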

@marvinhagemeister

Just wanted to chime in here to say that we (the Preact team) feel the same way about time-slicing and concurrent rendering. Both concepts are extremely complex to implement with only a very tiny fraction of apps benefiting from that.

There are many ways we can speed up rendering instead. More AOT is a big area VDOM libraries can tap into, and so are lots of other optimizations.

In scenarios where that can't be done, it's usually because of side-effects (network requests, synchronous compilation, etc) like Evan mentioned. In those cases time-slicing won't really help and we believe that other solutions like a simple debounce are more appropriate.

Disclaimer: I work on Preact.

@yisar
Author

yisar commented Oct 29, 2019

@yyx990803 It should be clarified that Vue's time slicing split work at the component level. If blocking logic inside a single component exceeds 16ms, it still blocks. For example:

  render (props) {
    const start = performance.now()
    while (performance.now() - start < 16) { // a synchronous loop inside one render still blocks
    }
    return h('li', props.msg)
  }

In React, the unit of slicing is the fiber node, so blocking logic inside components won't block UI rendering.
I think that's one of the main reasons why time slicing is not needed in Vue 3.

@marvinhagemeister Preact is different: its diff and patch are carried out at the same time.

@marvinhagemeister

marvinhagemeister commented Oct 29, 2019

@132yse I know that, I wrote a good portion of that code in Preact 😉

Nonetheless we had a lot of discussions on our team as to what our future direction should be. Time slicing and concurrent rendering came up a few times and we could implement that given a change in our architecture, but we're not convinced that it's worthwhile for the same reasons Evan shared in detail here.

@nin-jin

nin-jin commented Dec 6, 2019

Vue 3 is so fast we may not need it

No matter how fast you are, while a large page is rendering, any animations slow down. Progressive and/or virtualized rendering can solve this problem independently of page size.

@JSerZANP

No intention to start a debate here, but I landed on this issue out of curiosity to learn Vue's perspective on concurrent mode.

I'm still learning React's concurrent mode, but I have to say Concurrent Mode is about more than just "being fast": the ability to prioritize work is the foundation for all kinds of innovations, and keeping things responsive is just one part of it.

Speaking of throttle/debounce, as mentioned in the React homepage https://reactjs.org/docs/concurrent-mode-patterns.html, it cannot beat concurrent mode because we cannot set the perfect delay for devices of different performance.

Well, I agree that it might not be needed for most web apps today, and new features like transitions could be implemented in other ways. But the attitude in the issues here feels like "OK, it's cool but we don't need it", which personally sounds a bit frustrating as a developer: if we can make something even better, I think we should give it a try; maybe we can achieve something different but also awesome.

Again, no intention to start a debate, just share some of my thoughts.

@boonyasukd

@JSerZANP
Not intending to start a debate either, but those who are in favor of concurrent mode, please be reminded that the scheduler/reconciler itself is not free: once integrated into the rendering pipeline, it inflicts its own overhead, which hurts performance.

The best way to demonstrate why scheduling/concurrent mode isn't a silver bullet is to read through this twitter thread from 2019, where one React enthusiast attempted to talk smack about other frameworks, boasting of the React scheduler's superiority. He was proven wrong, quite embarrassingly, by Svelte running in dev mode. And, to rub salt into the wound, his source code was picked apart to expose the fact that his underperforming demo was already cheating by bypassing React to update the 3D scene as well as its fps counter, because if he had let React update all the props, the demo would have become unresponsive. If I recall correctly, someone also whipped up a Vue version back then, which performed better than its React counterpart as well. So it's no wonder that the original tweet was later deleted to hide the embarrassment.

To quote Rich Harris' tweet, which rightfully serves as the key takeaway of the thread:

Just remember that 'scheduling' doesn't let you break the laws of physics. The best way to get better performance: do less work.

And if you still feel that Vue might be a bit "too slow" to your liking, you can always consult krausest's JS framework benchmark, to see where things stand for yourself. And, hopefully, you will finally realize that there's nothing for Vue users to be frustrated about.

@nin-jin

nin-jin commented Apr 28, 2022

And if you still feel that Vue might be a bit "too slow" to your liking, you can always consult krausest's JS framework benchmark, to see where things stand for yourself. And, hopefully, you will finally realize that there's nothing for Vue users to be frustrated about.

Well, how to say, there's nothing to get upset about..


Online results
App sources - 150 sloc
App online
How it works

                                  JS Heap   Browser Tab Heap
VueJS: 170 comments               40 MB     150 MB
$mol: article + 2500 comments     40 MB     90 MB


And yes, the best way to get better performance: do work lazily, and clean up after yourself.

@Akryum
Member

Akryum commented Apr 28, 2022

@nin-jin Virtualizing is sure a great way to get better performance on big lists.

@nin-jin

nin-jin commented Apr 28, 2022

@nin-jin Virtualizing is sure a great way to get better performance on big lists.

So it's not about lists at all, but about arbitrary layouts. The virtualization is hidden from the application developer.

@Akryum
Member

Akryum commented Apr 28, 2022

The topic of this thread is about time-slicing though 🐈

@boonyasukd

@nin-jin
I still hold the same opinion that there's nothing to be frustrated about: from what we've seen, Svelte, Preact and Vue perform no worse than React with its scheduler/concurrent mode. And, as multiple accounts have stated, the scheduler/concurrent mode incurs an overhead of its own.

The purpose of this thread has been the scheduler/concurrent mode inspired by React. Unless $mol employs the exact same concurrent mode as React does, I don't really see how what I said has anything to do with what you presented. From my perspective, you can have virtualization without needing a scheduler/concurrent mode in the framework, and rejecting the time-slicing feature doesn't prevent Vue from benefiting from virtualization.

@nin-jin

nin-jin commented Apr 28, 2022

We had an implementation of time slicing at all levels, but we removed it because it improves responsiveness at the cost of slowing down visible work - and not even reliably, because on a really large page the recalculation of layout, styles and painting takes a lot of time and cannot be time-sliced. In addition, it introduces non-determinism into the application's behavior, which sometimes causes hard-to-debug problems. When we implemented virtualization, it turned out that rendering the visible part of the application usually takes time comparable to one step of time slicing, which made its use completely pointless.
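For context, the core of the virtualization mentioned above is just windowing math: render only the rows intersecting the viewport. A minimal sketch, assuming a fixed row height (real implementations such as $mol's handle variable heights and arbitrary layout):

```javascript
// Compute which rows of a long list intersect the viewport.
// Only these rows need to be materialized in the DOM; scrolling
// merely recomputes the visible slice.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows) {
  const first = Math.floor(scrollTop / rowHeight);
  const count = Math.ceil(viewportHeight / rowHeight) + 1; // +1 for a partially visible row
  return { first, last: Math.min(first + count, totalRows) };
}

// 100,000 rows of 20px in a 600px viewport, scrolled to 10,000px:
const r = visibleRange(10000, 600, 20, 100000);
console.log(r.first, r.last); // 500 531 - only ~31 rows are ever rendered
```

This is why the visible work ends up comparable to a single time-slicing step: the amount of rendering no longer depends on the total page size.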

@jods4

jods4 commented Apr 28, 2022

Some thoughts:

  • Vue and modern browsers are plenty fast for most applications.
  • Time-slicing increases the size of Vue and adds a lot of complexity. It would slow down all Vue apps. Not only because slicing itself is slower but also because it will trigger side-effects such as browser style and layout computations more times.
  • It creates new, tricky problems. You must always keep in mind that your DOM might not be in sync with your state, even after a Vue flush. That can create timing-dependent bugs that are a whole lot of fun to debug.
  • It might make some apps feel more responsive, but not faster. If you time slice an update that should take 3s, it's gonna take 3.5s and although the user can interact with the app in the meantime, he'll see bits updating over time and your app will not magically feel fast.
  • It follows that there is a hard cap on how much more time slicing would enable you to do. At the extreme, if the frequency of your updates is higher than their duration, you're headed for catastrophic failure anyway.
  • The technique for interacting with massive amounts of data is well known: it's virtualization. Sure, it's not easy but there is no magic silver-bullet. Virtualization uses a lot less resources and can scale orders of magnitude higher than time-slicing ever will.
  • If you have a scenario where time-slicing does make sense, you can do it in user-land. For example if you're loading a very long list, you totally can add 500 items at a time to the list and continue adding more on requestIdleCallback (or co.) until you're done.
  • Some things might be better done outside of Vue. If you're plotting 1M points on a webgl canvas, maybe you should handle the drawing/updating yourself and not rely on a watch that observes every single point. Yeah, those last 2 examples are less "easy" but remember: there's no silver-bullet for super-high perf code.
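The user-land chunking idea above can be sketched as follows. `renderItem` is a placeholder for whatever appends one item, and the `setTimeout` fallback stands in for `requestIdleCallback` where it isn't available (e.g. when running this sketch in Node):

```javascript
// User-land "time slicing": process items in chunks, yielding between
// chunks so the browser can handle input and paint.
const defer = typeof requestIdleCallback === 'function'
  ? requestIdleCallback
  : (cb) => setTimeout(cb, 0);

function renderInChunks(items, renderItem, chunkSize = 500, done = () => {}) {
  let i = 0;
  function work() {
    const end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) renderItem(items[i]); // process one chunk synchronously
    if (i < items.length) defer(work);         // yield, then continue
    else done();
  }
  work();
}

// Usage: "render" 2000 items in chunks of 500.
const out = [];
renderInChunks(
  Array.from({ length: 2000 }, (_, n) => n),
  (x) => out.push(x),
  500,
  () => console.log(out.length) // logs 2000 once the last chunk is flushed
);
```

The point is that this opt-in pattern covers the long-list scenario without the framework carrying a scheduler for every app.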

@boonyasukd

@nin-jin
Ah, so you're part of $mol. At first I was rather perplexed as to why $mol stats (in tabular form, no less) even appeared in this Vue RFC thread in the first place. It looks so off-topic, for a second there I almost thought you were trying to discuss which framework is the fastest. 😉

Coming back to the topic at hand, I think it's great that your first-hand experience in developing $mol aligns with what other framework creators (Svelte, Preact, Vue) have been saying all along: the time-slicing feature is not a flawless silver bullet, and it is something many frameworks consciously choose to do without. So in this thread we now have people from four different frameworks reaching the same conclusion after thorough investigations of their own. To me, that carries quite a bit of weight.
