I’m running late again with the March edition, so this might be turning into a pattern. The last month was pretty exciting: We finally turned on Ignition and TurboFan by default for Chrome M59. It took two attempts, but it seems to stick now. Overall it went surprisingly well, despite the complexity and impact of this change; as my site director used to put it: “Changing from Crankshaft to TurboFan in Chrome was like changing the engine of an F1 car at 250 km/h.”
Note that with the switch, we also removed the enable-v8-future setting from chrome://flags, and instead added a kill switch disable-v8-ignition-turbo for the new configuration in Chrome M59. This kill switch has three possible settings: Default, Disabled and Enabled. If you leave it at Default, there’s a certain chance that you get the old compiler pipeline even on the Canary or Dev channel due to A/B testing. Set it to Disabled to ensure that you get the new pipeline, or set it to Enabled to go back to Crankshaft and Full-Codegen.
The instructions for Chrome M58 (currently in the Beta channel) didn’t change: Ignition and TurboFan are still controlled by the enable-v8-future setting in chrome://flags.
Turning on the new configuration by default uncovered a couple of issues that weren’t known yet, so I was pretty busy helping to address those. But as far as I can tell, things look very promising now.
At the end of last month I gave a presentation, TurboFan: A new code generation architecture for V8, at the Munich Node.js User Group meetup, talking a bit about what will change with the new pipeline. Most importantly, the new pipeline greatly reduces the overall complexity of V8 and makes it easier to port to new architectures. Even more important from a user’s perspective, it completely eliminates the need to worry about so-called optimization killers, since TurboFan is able to handle every language construct. This doesn’t mean that a weird language construct like with is all of a sudden blazingly fast; but it means that the presence of one of these optimization killers in a function no longer disables the optimizing compiler completely for that particular function.
Source: TurboFan: A new code generation architecture for V8, MNUG March '17 Meetup, @bmeurer.
Probably the most important changes are fully optimizable try…finally statements, generators and async functions, and of course you can now use const without the risk of hitting the infamous Unsupported phi use of const or let variable bailout.
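To make this concrete, here’s a small made-up example (the function and its resource.close() call are purely illustrative): with Crankshaft, the generator, the try…finally, and the const binding inside the loop would each have kept this function out of the optimizing compiler; TurboFan handles all of them.

```js
// All of these used to be "optimization killers"; TurboFan handles them.
function* readChunks(resource) {
  try {
    for (const chunk of resource) { // const in a loop: no more
      yield chunk;                  // "Unsupported phi use" bailout
    }
  } finally {
    resource.close();               // try...finally no longer disables
  }                                 // the optimizing compiler
}
```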
First of all, it seems that various people were confused by the terminology. Using the terms “explicit vs. implicit” instead of “declarative vs. obscure” seems to be easier to understand for many people, so just substitute those terms when looking at the slide above. The main controversy, however, was around whether it makes sense to follow this advice at all, or to assume that the engine will optimize all of this under the hood.
Given infinite resources, V8 could generate better code for the right-hand side as well (and Crankshaft used to do this in the past), where it only checks for undefined and objects, and deoptimizes on all other inputs. That reduces the number of checks in optimized code. However, it comes at a price: the optimized code can deoptimize as soon as new kinds of values show up, and the engine needs to track what kinds of values were seen for obj in the baseline case, which costs time and memory.
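To illustrate what such a speculative check could look like, here’s a sketch in plain JavaScript; isHeapObject and deoptimize are made-up stand-ins for engine internals, and the real logic would of course live in generated machine code:

```js
// Made-up stand-ins for engine internals, just for illustration:
const isHeapObject = (v) => typeof v === "object" && v !== null;
const deoptimize = () => { throw new Error("deopt: unexpected input"); };

// What speculatively optimized code for `if (obj)` might check when the
// collected feedback says obj was only ever undefined or an object:
function toBooleanSpeculative(obj) {
  if (obj === undefined) return false; // the only falsy value seen so far
  if (isHeapObject(obj)) return true;  // objects are always truthy
  deoptimize();                        // anything else bails back to baseline
}
```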
But the reality is: V8 doesn’t have infinite resources. In fact, with more and more low-end Android devices entering the mobile web, and less and less traffic coming from high-end mobile devices or desktops, it seems that V8 (and Chrome) will have even fewer resources available in the future. Reducing overhead and optimizing less aggressively has helped a lot over the past two years to significantly improve the experience on mobile.
I sometimes hear developers say that these startup issues don’t matter to Node.js, and that V8 shouldn’t penalize what they call server performance, but then the next thing I hear is them complaining that TypeScript, UglifyJS, Webpack, Babel, etc. run way too slowly. These tools also run on V8 and also suffer a lot from overhead during warmup, or from overly aggressive optimizations that lead to frequent deoptimizations. I’ve seen a lot of evidence that reducing over-optimization has helped server workloads significantly, as for example measured by the Node.js AcmeAir benchmark.
The throughput improved by over 45%, mostly by reducing overhead in V8, accomplished via work guided largely by client-side workloads. So for typical server-side workloads, things don’t look too different.
Speaking of the example above, whether or not ToBoolean is heavily optimized by the engine, and specifically what kind of feedback is tracked and consumed about the incoming values, is and should remain somewhat irrelevant. If you can write your hot code in a way that is optimizable independent of certain speculative optimizations, you should favor that way. For example, when you have a value obj that can either be undefined or an object, consider ruling out undefined explicitly, as in the sketches below.
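The code snippets for this didn’t survive here, so the following is a reconstruction of the idea rather than the post’s original code (obj is the value from the prose):

```js
// Explicitly rule out undefined; a single comparison, no generic ToBoolean:
if (obj !== undefined) {
  // ...obj is known to be an object here...
}
```

or even better:

```js
// State the expected case positively (note: null would also pass this
// typeof check, but here obj is assumed to be undefined or an object):
if (typeof obj === "object") {
  // ...
}
```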
This might also help your future self understand this code in two years: with a bare truthiness check like the one below, it’s not clear that all you intended to do back then was to rule out undefined.
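Reconstructing the snippet that belongs here, the bare check would read something like:

```js
// Truthy check: besides undefined, this also filters out null, false,
// 0, -0, NaN, "", and other falsy values. Was that intended?
if (obj) {
  // ...
}
```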
This is especially useful to keep in mind when using || to implement default parameters like this:
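The snippet is missing here as well; a sketch of the pattern, with a hypothetical function render and parameters a and b matching the discussion below:

```js
// Defaults via ||: EVERY falsy argument is replaced, not just missing ones.
function render(a, b) {
  a = a || "(no title)"; // the empty string "" would be replaced too
  b = b || 100;          // 0 would be replaced too
  // ...
}
```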
When asked whether the empty string is a valid input for a, or whether 0 is a valid input for b, developers were quite surprised to notice that they would rule out valid inputs this way. So don’t do this! Instead use the ES2015 default parameters feature:
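Again as a reconstruction, the ES2015 version of the same sketch would read:

```js
// ES2015 default parameters: only a missing argument (or an explicit
// undefined) triggers the default; "" and 0 now pass through unchanged.
function render(a = "(no title)", b = 100) {
  // ...
}
```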
This only replaces undefined values with defaults, which is sane. There’s also an interesting performance aspect to using default parameters instead of the || pattern.
My usual disclaimer applies: I’m not a front-end engineer. I have different priorities than you might have, and there might be good reasons for you to ignore any kind of advice that I give here. That’s perfectly fine. Also be careful with over-optimizing!