Replies: 3 comments 8 replies
-
For 1, are you considering dropping checked ints entirely, or just defaulting to unchecked? Given that there is hardware support for fast checked ints, I don't think we need to drop them, just evaluate whether they should be the default. As for 3, I think the item you're missing is the inheritance relationship between checked and unchecked ints. Having one extend the other introduces a long-term, heavy performance hit, and that is unrelated to being on the JVM. But if checked and unchecked can be peers that share a base `Number` class, for instance, then we should be in good shape, and only take the slow path when people code generically against `Number` rather than against an explicit checked or unchecked type.
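A minimal Java sketch of the peer-types idea described above (all class names here are hypothetical, not from the XVM code base): the checked and unchecked 64-bit ints are sibling `final` leaf classes under a shared abstract base, rather than one extending the other. Code written against a concrete type stays monomorphic and can be inlined; only code written generically against the base pays for virtual dispatch. `Math.addExact` is a real JDK method that throws `ArithmeticException` on overflow.

```java
// Hypothetical sketch: checked and unchecked 64-bit ints as peers under a
// shared base class, rather than one extending the other.
abstract class XNumber {
    public abstract XNumber add(XNumber that);
    public abstract long toLong();
}

// Unchecked variant: silent two's-complement wraparound on overflow.
final class UncheckedInt64 extends XNumber {
    private final long value;
    UncheckedInt64(long value) { this.value = value; }
    @Override public XNumber add(XNumber that) {
        return new UncheckedInt64(value + that.toLong());
    }
    @Override public long toLong() { return value; }
}

// Checked variant: Math.addExact throws ArithmeticException on overflow.
final class CheckedInt64 extends XNumber {
    private final long value;
    CheckedInt64(long value) { this.value = value; }
    @Override public XNumber add(XNumber that) {
        return new CheckedInt64(Math.addExact(value, that.toLong()));
    }
    @Override public long toLong() { return value; }
}
```

Because both leaf classes are `final`, a call site that only ever sees `UncheckedInt64` can be devirtualized; the slow path is confined to code that genuinely mixes the two through `XNumber`.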
-
My thought was that if they aren't related to each other, then we can always re-introduce the checked functionality later. Forcing an opt-in for checked ranges seems a bit weird to me conceptually; not sure what anyone else thinks on the topic.
-
Just as an update: this work is all done and in, with one small issue that one of my changes uncovered, which Gene is working on fixing.
-
In the process of changing the plan for the production-capable XVM implementation (as of now: emitting Java byte code, using the JVM as the back end), a number of topics have bubbled back to the surface. Part of the reason these are topics of discussion is related to the complexity of delivering these features vs. the perceived benefits of the features, but most of the concern raised thus far is related to runtime efficiency. Additionally, we are always conscious of the overall complexity budget, so removing capabilities that have not already proven to be absolutely essential or otherwise amazing is a lot easier to accept than it might seem.
The issues:

1. Checked integer math. Ecstasy supports fixed-size integers from 8 (`Int8`) to 128 (`Int128`) bits, as well as a variable-length integer type (`IntN`), in both signed (`Int_`) and unsigned (`UInt_`) variants. By default, math operations (add, multiply, etc.) are checked for overflow, so any operation that could overflow requires a runtime check. The rationale is simple: an unexpected overflow is potentially a "very bad thing"; several of us have actually encountered these situations in the wild in the past, and seen the cost. As a result, we were willing to accept some amount of execution cost in order to enforce a runtime overflow check, and we had hoped to leverage hardware support for doing so -- not zero cost, but at least relatively low cost. There are three points to consider here: (1) the incidence of the error is extremely low, especially with 64-bit integers readily available; (2) Java does not provide checked overflow support, so the check would have to be performed completely in software; and (3) even though the checked/unchecked capability is well baked into the core library via a single `@Unchecked` annotation, the few times that we have had to deal with the capability have been a pain, with hashcodes coming to mind (they often overflow because of the use of multiplication in a loop; see e.g. `String`). The proposal is to adopt the "silent integer overflow" that is the de facto norm for most modern languages and hardware, and to drop the "checked" concept and the `@Unchecked` annotation.
2. `Int`. In Ecstasy, until 6 months or so ago, the name `Int` was just an alias for `Int64`. We then changed it (at significant expense) to be its own independent type, with up to a 128-bit range but no fixed size at compile time; this was referred to as an automatically-sized type:

   > "Automatically-sized" does not mean "variably sized" at runtime; a variably sized value means that the class for that value decides on an instance-by-instance basis what to use for the storage of the value that the instance represents. "Automatically-sized" means that the runtime is responsible for handling all values in the largest possible range, but is permitted to select a far smaller space for the storage of the integer value than the largest possible range would indicate, so long as the runtime can adjust that storage to handle larger values when necessary. An expected implementation for properties, for example, would utilize runtime statistics to determine the actual ranges of values encountered in classes with large numbers of instantiations, and would then reshape the storage layout of those classes, reducing the memory footprint of each instance of those classes; this operation would likely be performed during a service-level garbage collection, or when a service is idle. The reverse is not true, though: when a value is encountered that is larger than the storage size that was previously assumed to be sufficient, the runtime must immediately provide for the storage of that value in some manner, so that information is not lost. As with all statistics-driven optimizations that require real-time de-optimization to handle unexpected and otherwise-unsupported conditions, there will be significant and unavoidable one-time costs every time such a scenario is encountered.

   There were more complexities related to the new definition of this automatically-sized `Int` type as well, and frankly, we should be able to "automatically size" anything at runtime, so long as we maintain the programming contracts. In conclusion, making this explicit wasn't particularly brilliant in retrospect. The proposal is to return the `Int` type to a 64-bit definition.

I mainly wanted to capture the basics of the topic, so that we can consider the implications and any alternative approaches. While it will be disappointing to lose the check for integer overflow, we have decent answers for that (e.g. `Int128`, `IntN`, `Dec`, etc.). I'm willing to trade off some performance for important capabilities -- enforced immutability comes to mind -- but these proposed changes seem reasonable, and potentially complexity-lowering.
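To make the trade-off concrete, here is a small JVM-side sketch (illustrative only; `checkedAdd`, `hashOf`, and `exactProduct` are hypothetical helper names, while `Math.addExact` and `java.math.BigInteger` are real JDK APIs): the JVM has no checked-arithmetic bytecode, so a checked add must be a software test such as `Math.addExact`; a `String.hashCode`-style loop shows where silent wraparound is the desired behavior; and arbitrary precision stands in as the `IntN`-style escape hatch when the exact result matters.

```java
import java.math.BigInteger;

class IntSemantics {
    // Software-checked add: there is no checked-arithmetic instruction on
    // the JVM, so Math.addExact performs the overflow test in software and
    // throws ArithmeticException instead of wrapping.
    static long checkedAdd(long a, long b) {
        return Math.addExact(a, b);
    }

    // A String.hashCode-style loop: the repeated multiply-by-31 overflows
    // routinely and relies on silent two's-complement wraparound.
    static int hashOf(String s) {
        int h = 0;
        for (int i = 0; i < s.length(); i++) {
            h = 31 * h + s.charAt(i);
        }
        return h;
    }

    // IntN-style escape hatch: widen to arbitrary precision when the exact
    // (non-wrapped) result is required.
    static BigInteger exactProduct(long a, long b) {
        return BigInteger.valueOf(a).multiply(BigInteger.valueOf(b));
    }
}
```

The hash loop matches the algorithm `java.lang.String` documents for `hashCode`, which is exactly the "overflow in a loop" case that made the checked default painful.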