# v8 Compiler

## Base Compiler

- is used for all code initially
- generates code for any JavaScript
- all code starts unoptimized
- performs the initial (quick) JIT compile needed to start executing code ASAP
- generates code quickly without heavy optimizations, so compilation is very fast and produces little code
- the generated code is not great and knows (almost) nothing about types
- uses Inline Caches (ICs) to refine knowledge about types at runtime

## Optimizing Compiler

- a runtime profiler monitors the running system and identifies hot code
- recompiles and optimizes hot code identified by the runtime profiler
- uses static single assignment (SSA) form to perform optimizations, e.g.
  - loop-invariant code motion
  - linear-scan register allocation
  - inlining
- optimization decisions are based on type information collected while running the code produced by the base compiler
  - this allows the optimizing compiler to be optimistic in the assumptions it makes when generating code
- deoptimization support allows bailing out to the code generated by the base compiler if the assumptions in the optimized code turn out to be too optimistic
## Inline Caches (ICs)

- implemented in JavaScript
- gather knowledge about types while the program runs
- consist of type-dependent code for operations, specialized for specific hidden classes as inputs
  - first validate the type assumptions (are the hidden classes as expected?)
  - then do the actual work
- change at runtime via backpatching as more types are discovered, generating new ICs
Inline Caches alone, without the optimizing compiler step, make a huge performance difference (20x speedup).
- operations are monomorphic if the hidden classes of their arguments are always the same
- all others are polymorphic at best and megamorphic at worst
- monomorphic operations are easier to optimize
- prefer monomorphic over polymorphic functions wherever possible
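The distinction can be sketched as follows; `magnitude` and the point objects are hypothetical examples, not anything V8-specific:

```javascript
// magnitude stays monomorphic as long as every call site passes objects
// with the same hidden class ({ x, y } created the same way).
function magnitude(p) {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}

// monomorphic: both objects share one hidden class
const a = { x: 3, y: 4 };
const b = { x: 6, y: 8 };
magnitude(a); // 5
magnitude(b); // 10

// polymorphic: a second shape (different property order, extra field)
// reaching the same call site forces the IC to track two hidden classes
const c = { y: 4, x: 3, label: 'point' };
magnitude(c); // still 5, but the operation is no longer monomorphic
```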
- if a function executes a lot it becomes hot
- a hot function is re-compiled with the optimizing compiler
  - optimistically
  - lots of assumptions are made based on the calls made to that function so far
  - type information is taken from the ICs
- operations get inlined speculatively using this historic information
  - monomorphic functions/constructors can be inlined entirely
  - inlining allows even further optimizations
- the optimizations are speculative and rely on the assumptions that were made
- if an assumption is violated
  - the function is deoptimized
  - execution resumes in the code generated by the base compiler
  - in the short term execution slows down
  - this is normal and expected to occur
  - more info about the function is collected and a better optimization is attempted
  - if the assumptions are violated again, the function is deoptimized again and the process starts over
- too many deoptimizations cause a function to be sent to deoptimization hell
  - such a function is considered not optimizable and no optimization is ever attempted again
- certain constructs like `try/catch` are considered not optimizable; functions containing them go straight to deoptimization hell due to bailout
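One common workaround, sketched here with hypothetical functions, is to keep the `try/catch` in a small wrapper so the hot inner function stays optimizable:

```javascript
// The hot loop lives in its own function with no try/catch,
// so the optimizing compiler can handle it.
function processItems(items) {
  let total = 0;
  for (let i = 0; i < items.length; i++) total += items[i].value;
  return total;
}

// The try/catch lives in a small, cold wrapper that never needs
// to be optimized.
function safeProcessItems(items) {
  try {
    return processItems(items);
  } catch (e) {
    return 0;
  }
}
```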
None of this can be diagnosed with Chrome DevTools at this point.
## Hidden Classes

- the fields added to an object (order matters) determine the id of its hidden class
- adding more fields later on generates a new hidden class id; code optimized for `Point` that now gets a `Point'` is deoptimized
```js
function Point(x, y) {
  this.x = x;
  this.y = y;
}

var p = new Point(1, 2); // => hidden Point class created

// ....

p.z = 3; // => another hidden class (Point') created
```
- `Point` class created, code still unoptimized
- functions that take a `Point` argument are optimized
- `z` property added, which causes the `Point'` class to be created
- functions that get passed a `Point'` but were optimized for `Point` get deoptimized
- later those functions get optimized again, this time supporting both `Point` and `Point'` as argument
- avoid hidden class changes
- initialize all members in the constructor function, and always in the same order
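Applying that advice to the Point example above might look like the following sketch; the `z` parameter is a hypothetical extension:

```javascript
// Initialize every member in the constructor, always in the same order,
// so all instances share a single hidden class.
function Point(x, y, z) {
  this.x = x;
  this.y = y;
  this.z = z === undefined ? 0 : z; // set z up front instead of patching it on later
}

const p1 = new Point(1, 2);
const p2 = new Point(4, 5, 6); // same hidden class as p1, no deoptimization
```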
## Tagged Values

- v8 passes around 32-bit numbers to represent all values
- the bottom bit is reserved as a tag that signifies whether the value is a SMI (small integer) or a pointer to an object

  ```
  |       object pointer        | 1 |
  ```

  or

  ```
  | 31-bit-signed integer (SMI) | 0 |
  ```

- numbers bigger than 31 bits are boxed
  - stored inside an object referenced via a pointer
  - this adds extra overhead (at a minimum an extra lookup)
- prefer SMIs for numeric values whenever possible
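Assuming the 32-bit tagging scheme above, the resulting SMI range can be sketched as follows (64-bit V8 builds use a different tagging scheme with a wider SMI range):

```javascript
// With one bit used as the tag, 31 bits remain for the signed integer
// payload, so the SMI range under this scheme is [-2^30, 2^30 - 1].
const SMI_MAX = Math.pow(2, 30) - 1; // 1073741823
const SMI_MIN = -Math.pow(2, 30);    // -1073741824

const fitsInSmi = SMI_MAX; // representable as a SMI
const boxed = SMI_MAX + 1; // one too big: stored as a boxed heap number
```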
## Arrays

v8 has two methods for storing arrays:

- linear storage buffer (fast)
  - compact keysets
  - contiguous (non-sparse)
  - `0`-based
  - smaller than 64K elements
- hash table storage (slow access)
  - used for sparse and very large arrays
- an Array's hidden class tracks its element types
- if all elements are doubles, the array is unboxed
  - values are laid out in a linear buffer of doubles instead of as pointers to wrapped objects
  - each element slot is 64-bit to hold a double
  - SMIs that are currently in the Array are converted to doubles
  - access is very efficient
  - storing a value requires no allocation, as is the case for boxed doubles
- unboxing causes a hidden class change
- careless array manipulation may cause extra overhead due to boxing/unboxing
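The advice above can be illustrated with a sketch; element kinds are not observable from plain JavaScript, so the arrays here are hypothetical examples of the pattern:

```javascript
// All-double array: V8 can use an unboxed linear buffer of doubles.
const prices = [1.5, 2.25, 3.75];
const sum = prices.reduce((total, p) => total + p, 0); // 7.5

// Mixing in a non-numeric value changes the hidden class and forces
// the elements to be boxed again:
const mixed = [1.5, 2.25];
mixed.push('3.75'); // now a mixed array; the fast double path is lost
```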
- the difference between Arrays and typed arrays lies in the semantics of indexed properties
- v8 uses unboxed backing stores for such typed arrays (e.g. `Float64Array`)
  - each element gets 64 bits allocated to it
- don't pre-allocate large arrays (`>64K` elements); instead grow as needed, to avoid them being considered sparse
- do pre-allocate small arrays to the correct size to avoid allocations due to resizing
- don't delete elements
- don't load uninitialized or deleted elements
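A short sketch of why `delete` is harmful, using plain array semantics:

```javascript
const a = [1, 2, 3];
delete a[1]; // leaves a hole; the array may fall back to a slower storage mode

// The length is unchanged but index 1 no longer exists:
// a.length === 3, a[1] === undefined, (1 in a) === false

// Prefer splice (or overwriting the slot) to keep the array contiguous:
const b = [1, 2, 3];
b.splice(1, 1); // b is now [1, 3]
```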
- use a literal initializer for Arrays with mixed values
- don't store non-numeric values in numeric arrays
  - this causes boxing, and the efficient code that was generated for manipulating the values can no longer be used
- use typed arrays whenever possible
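As a minimal example of the last point, a `Float64Array` provides a fixed-length, unboxed buffer of 64-bit doubles:

```javascript
const samples = new Float64Array(4); // fixed length, zero-initialized
samples[0] = 1.5;
samples[1] = 2.5;

// Writes are coerced to doubles instead of boxing the stored value;
// a non-numeric string becomes NaN:
samples[2] = 'not a number';
```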