Reminder of goals / non-negotiable requirements:
Density - specifically, RAM has traditionally been fixed and preallocated in cloud environments, and it tends to be the limiting factor on how many applications can be hosted. Our plans for density include: (i) being able to park both "not running" and "running" apps on disk, and quickly reinstantiate (resume) running apps from disk to memory; (ii) being able to measure and limit CPU and RAM utilization in real time at a per-app level, and reclaim all RAM instantly when an app is paged out (a hypothetical metering interface is sketched after this list); (iii) being able to monitor and manage storage and network throughput, and storage capacity, on an app-by-app basis in real time.
Security - no ability to escape a container; all "context" and "I/O" are virtualized and injected.
Manageability / Serviceability at the container level, including support for debugging, upgrading, life cycle management, etc.
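As a purely illustrative aside, the per-app metering and parking described in density item (ii) might surface as an interface along these lines. All names here (ResourceQuota, AppContainer, park/resume) are hypothetical and not part of the actual design:

```java
import java.time.Duration;

/** Hypothetical per-app resource limits, measured and enforced in real time. */
record ResourceQuota(long maxHeapBytes, double maxCpuCores,
                     long maxStorageBytes, long maxNetworkBytesPerSec) {}

/** Hypothetical handle for a running or parked application inside a container. */
interface AppContainer {
    ResourceQuota quota();

    /** Current real-time usage, as sampled by the runtime. */
    long heapBytesInUse();
    double cpuCoresInUse();

    /** Write the app's live state to flash and release all of its RAM immediately. */
    void park();

    /** Reinstantiate the app from its parked image, ideally within the given budget. */
    void resume(Duration budget);
}
```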
Ideal deployment environment:
Shared storage for centralized logs and journals (for recovering from total server failures)
App hosting servers with a large core count (OTOO 1,000), large RAM (OTOO 10TB), large local flash (OTOO 100TB?), and OTOO 10Gb-100Gb networking
Technical Requirements:
Fiber based runtime
Our own memory management and GC (one reason: GC can operate at both the service level and the container level, and each container will also have a separate "shared immutable data" space, which can be concurrently collected without pauses or fences)
Adaptive profile-driven compilation is desirable: (i) start with a naive JIT pass with extremely lightweight profile collection, just to identify potential targets (0 vs. 1 vs. n); (ii) a second pass (probably applying to OTOO 5% of the code) to refine statistics (e.g. type data incidence); (iii) additional passes (applying to some subset of (ii)) for actual optimizations. A minimal profiling sketch follows this list. Undecided whether we would retain an interpreter "slow path".
While "everything is a reference" (combination of (i) a type and (ii) an object identity), in reality, our reference design (Ref.x) is intended to satisfy the needs for value type optimizations. For example, MapSet.x with its Nullable value needs neither a type nor an object identity for the value at runtime (type is known to be Null type; identity is known to be Null value)
Initial prototype plan (in progress):
Target JVM v.latest with fiber support (a virtual-thread sketch follows this list)
Produce JVM byte code (some combination of AOT and JIT and WDT)
Rely on invokedynamic (dynamic call sites) to allow a level of code modification below a full recompile / reload, e.g. an inline type cache for monomorphic invocation optimizations (see the inline-cache sketch after this list)
Collapse virtualization where (i) it is provably absent, (ii) there is a small (obviously known) set of vtables, or (iii) intrinsics are expected
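On current JVMs, "fiber support" maps to virtual threads (JDK 21+). The sketch below only illustrates the host-VM primitive a fiber-based runtime would lean on; the demo class and workload are invented for the example:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Minimal illustration of JVM fiber support via virtual threads (JDK 21+). */
public class FiberDemo {
    public static void main(String[] args) {
        // One cheap, parkable virtual thread per task; blocking parks the fiber,
        // not an OS thread, so tens of thousands of concurrent tasks are fine.
        try (ExecutorService fibers = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                int id = i;
                fibers.submit(() -> {
                    Thread.sleep(10);   // parks only this virtual thread
                    return id;
                });
            }
        } // close() waits for submitted tasks to complete
    }
}
```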
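The invokedynamic-based inline type cache mentioned above can be sketched with the same machinery an indy bootstrap method returns: a MutableCallSite retargeted after the first observed receiver class, guarded by a class-identity check. This is an assumption-laden illustration (class and method names are invented), not the prototype's actual linkage code:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.invoke.MutableCallSite;

/** Monomorphic inline cache sketch; every name here is invented for illustration. */
public class InlineCacheSketch {
    private static final MethodHandles.Lookup LOOKUP = MethodHandles.lookup();
    // Call shape of the cached invocation: (receiver) -> String
    private static final MethodType TYPE = MethodType.methodType(String.class, Object.class);

    private final MutableCallSite callSite = new MutableCallSite(TYPE);
    private final MethodHandle slowPath;

    public InlineCacheSketch() throws ReflectiveOperationException {
        // Initial target is the slow path, which installs the cache on first use.
        slowPath = LOOKUP.bind(this, "slowPath", TYPE);
        callSite.setTarget(slowPath);
    }

    /** What an invokedynamic bootstrap would hand back to the generated code. */
    public MethodHandle dynamicInvoker() {
        return callSite.dynamicInvoker();
    }

    /** Resolve virtually once, then install a class-guarded monomorphic fast path. */
    String slowPath(Object receiver) throws Throwable {
        Class<?> seen = receiver.getClass();
        MethodHandle resolved = LOOKUP.findVirtual(seen, "toString",
                MethodType.methodType(String.class)).asType(TYPE);
        MethodHandle guard = CHECK_CLASS.bindTo(seen);
        // Fast path when the receiver class matches; otherwise fall back to the slow path.
        callSite.setTarget(MethodHandles.guardWithTest(guard, resolved, slowPath));
        return (String) resolved.invokeExact(receiver);
    }

    static boolean sameClass(Class<?> expected, Object receiver) {
        return receiver.getClass() == expected;
    }

    private static final MethodHandle CHECK_CLASS;
    static {
        try {
            CHECK_CLASS = LOOKUP.findStatic(InlineCacheSketch.class, "sameClass",
                    MethodType.methodType(boolean.class, Class.class, Object.class));
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }
}
```

An invokedynamic call site in generated bytecode would be linked to callSite.dynamicInvoker() by the bootstrap method, so retargeting the call site changes behavior without recompiling or reloading the class.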
Big questions for next step design:
Shape of an object, and in-memory storage of an object / "heap" (they're bytes at the end of the day, and we need to roll them in and out of flash storage)
Shape of a stack frame, and its in-memory storage (again, bytes at the end of the day that we need to roll in and out of flash storage)
Shape(s) of a reference: (i) the purely generic reference case (an actual combination of a type and an identity), (ii) a known type subtree, (iii) compressed (a compressed-encoding sketch follows)
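For shape (iii), one way to picture a compressed reference is a single 64-bit word packing a type id and an identity, with Null given a reserved encoding so that cases like the MapSet.x Nullable value need no heap object at all. The bit layout below is an arbitrary assumption for illustration, not the actual design:

```java
/**
 * Illustrative sketch of a "compressed" reference: a 64-bit word packing a
 * type id and an object identity, with a reserved encoding for Null.
 * Bit widths are arbitrary assumptions for the example.
 */
final class PackedRef {
    private static final int  TYPE_BITS = 20;                  // up to ~1M loaded types
    private static final long TYPE_MASK = (1L << TYPE_BITS) - 1;
    private static final long NULL_REF  = 0L;                  // type 0, identity 0 == Null

    static long pack(int typeId, long identity) {
        return (identity << TYPE_BITS) | (typeId & TYPE_MASK);
    }

    static int  typeId(long ref)    { return (int) (ref & TYPE_MASK); }
    static long identity(long ref)  { return ref >>> TYPE_BITS; }
    static boolean isNull(long ref) { return ref == NULL_REF; }

    public static void main(String[] args) {
        long ref = pack(42, 123_456);
        System.out.println(typeId(ref) + " / " + identity(ref) + " / " + isNull(ref));
        System.out.println("Null is just bits: " + isNull(NULL_REF));
    }
}
```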
(To be continued)