I've been happily using the Jekyll static site generator for this website for almost 7½ years now, initially motivated by the excellent GitHub Pages service. Jekyll is awesome, since it gets you started easily without having to worry about setting up lots of different things.
I spent some time today and hacked together a simple Jekyll plugin to automatically generate a service worker with Google Workbox, with minimal overhead and effort. I had previously used the jekyll-pwa-plugin, which is awesome, but it does a bit too much for my taste.
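To give a rough idea of what such a plugin emits, here's a minimal sketch of a Workbox-based service worker (assuming Workbox v4 loaded from the CDN; the precache entries below are made-up placeholders, not the plugin's actual output):

```js
// sw.js: a sketch of a generated service worker; the plugin would fill in
// the precache manifest from the files Jekyll produced during the build.
importScripts('https://storage.googleapis.com/workbox-cdn/releases/4.3.1/workbox-sw.js');

// Precache the generated pages and assets (URLs and revisions are placeholders).
workbox.precaching.precacheAndRoute([
  { url: '/index.html', revision: '48d0e' },
  { url: '/assets/main.css', revision: '9f2c1' },
]);

// Example runtime strategy: serve images cache-first.
workbox.routing.registerRoute(
  ({ request }) => request.destination === 'image',
  new workbox.strategies.CacheFirst()
);
```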
Asynchronous processing in JavaScript traditionally had a reputation for not being particularly fast. To make matters worse, debugging live JavaScript applications — in particular Node.js servers — is no easy task, especially when it comes to async programming. Luckily the times, they are a-changin’. This article explores how we optimized async functions and promises in V8 (and to some extent in other JavaScript engines as well), and describes how we improved the debugging experience for async code.
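For context, here's the kind of code the article is about: a tiny async function next to a roughly equivalent promise chain (the fetchJson helper is made up for illustration):

```js
// Made-up helper, used only for illustration.
function fetchJson(url) {
  return fetch(url).then((response) => response.json());
}

// async/await: each `await` suspends the function until the promise resolves.
async function loadUser(id) {
  const user = await fetchJson(`/api/users/${id}`);
  const posts = await fetchJson(`/api/users/${id}/posts`);
  return { user, posts };
}

// Roughly equivalent formulation with explicit promise chaining.
function loadUserWithPromises(id) {
  return fetchJson(`/api/users/${id}`).then((user) =>
    fetchJson(`/api/users/${id}/posts`).then((posts) => ({ user, posts }))
  );
}
```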
DataViews are one of the two possible ways to do low-level memory accesses in JavaScript, the other one being TypedArrays. Up until now, DataViews were much less optimized than TypedArrays in V8, resulting in lower performance on tasks such as graphics-intensive workloads or when decoding/encoding binary data. The reasons for this were mostly historical choices, like the fact that asm.js chose TypedArrays instead of DataViews, so engines were incentivized to focus on the performance of TypedArrays.
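As a quick illustration of the difference, here's a small made-up example: a TypedArray view implicitly uses the platform's byte order and a fixed element type, while a DataView takes an explicit byte offset and endianness on every access:

```js
// An 8-byte buffer holding two 32-bit values.
const buffer = new ArrayBuffer(8);

// TypedArray view: fixed element type, platform byte order
// (little-endian on practically all platforms).
const u32 = new Uint32Array(buffer);
u32[0] = 0xdeadbeef;

// DataView: explicit byte offset and explicit endianness per access.
const view = new DataView(buffer);
view.setUint32(4, 0xcafebabe, /* littleEndian */ true);

console.log(view.getUint32(0, true).toString(16)); // 'deadbeef'
console.log(u32[1].toString(16));                  // 'cafebabe'
```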
This article describes some key fundamentals that are common to all JavaScript engines — and not just V8, the engine the authors (Mathias and Benedikt) work on. As a JavaScript developer, having a deeper understanding of how JavaScript engines work helps you reason about the performance characteristics of your code.
Before the conference, the YGLF crew wanted to learn more about the speakers, so they asked them #3FrontendQuestions related to their YGLF topics and engineering experience. These are the answers I gave during the interview.
I just realized that this is going to be the very first blog post of 2018 that I write myself, rather than bugging someone else to write one. It's been quite a busy year for me already, plus I was sick a lot, and so was my family. Anyways, here's something I've been meaning to send out for a while. And while the title mentions React explicitly, this is by no means limited to React; it probably affects a lot of code out there, including many Node.js code bases, where the impact is even more severe.
Following up on my talk "A Tale of TurboFan" (slides) at JS Kongress, I wanted to give some additional context on how TurboFan, V8's optimizing compiler, works and how V8 turns your JavaScript into highly-optimized machine code. For the talk I had to be brief and leave out several details, so I'll use this opportunity to fill in the gaps, especially around how V8 collects and uses profiling information to perform speculative optimizations.
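To make the idea of speculative optimization a bit more concrete, here's a hypothetical example (not from the talk): TurboFan relies on the type feedback collected while a function runs unoptimized, specializes the optimized code to that feedback, and deoptimizes when the assumption is violated:

```js
function add(x, y) {
  return x + y;
}

// While `add` runs unoptimized, V8 records feedback for the `+` operation:
// so far it has only ever seen small integers.
for (let i = 0; i < 100000; i++) add(i, i + 1);

// Based on that feedback, TurboFan can compile `add` down to a plain integer
// addition guarded by a cheap check of the speculation.

// A call that violates the assumption triggers a deoptimization: execution
// falls back to unoptimized code, which handles string concatenation fine.
add('hello', 'world');
```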
The last few months have been a hectic time for me. I hosted my first intern at Google, Juliana Franco, who worked on the Deoptimizer during her internship on laziness. Then I was diagnosed with articular gout and almost couldn't walk for a week. And we finally moved into our new house, with a lot of help from my awesome colleagues on the V8 team. On the Node.js front, I became the tech lead for Node performance in the V8 team, joined the Node.js benchmarking working group, and am now officially a Node.js collaborator. But I also had some time to close gaps in V8 performance now that Ignition and TurboFan have finally launched everywhere.