remove instance NFData (a -> b) #16
tbh, I'd like the @haskell/core-libraries-committee to sign off on this. Personally, I'd prefer an unimplementable instance, as otherwise all it takes is one orphan instance hidden somewhere in a popular package to thwart this change. And even worse, if more than one orphan instance appears, we would risk ending up in an even worse situation than the one we're in now :-/ (/cc'ing @RyanGlScott as he wasn't yet part of the @haskell/core-libraries-committee GitHub team at the time of writing) |
I have no objection to removing the instance, but I'm not a fan of the idea Yes, one popular package could bring it back from the dead, or bring into We have an open world. Orphan instances are deeply unpopular and -Edward On Mon, May 23, 2016 at 6:29 AM, Herbert Valerio Riedel <
|
On Mon, 23 May 2016, Edward Kmett wrote:
There needs to be only one package in any of hundred packages I import |
@amigalemming afaik you suggested implementing some warning facility to annotate undesirable/questionable instances; can you think of a variant which would help here? |
And you can just as easily implement an orphan instance module that class Don'tExportMe where boom :: a and import that to know that the other isn't being used in your code, I'd, however, treat whatever package exported this instance as just as NFData provides no guarantees that it "really" does anything. There are For the scenario you fear to come to pass you have to make two lapses in That seems well balanced against the fact that someone using criterion -Edward On Mon, May 23, 2016 at 7:00 AM, amigalemming [email protected]
|
I agree with Edward here, my vote goes with his stance. On Mon, May 23, 2016, 3:33 PM Edward Kmett [email protected] wrote:
|
I'm also for no instance at this point. If you think the existing instance is bad, it's better for no instance to exist, so that you get an error if you accidentally try to use it. The other instances I can think of have roughly the same negatives as the existing one. You don't get a compile-time complaint, you just have to figure out what went wrong if your program ever exercises the instance (and either blows up or loops uselessly for a very long time). And the instance that doesn't just blow up ideally requires introducing a new class, and then getting people to implement it. |
On Mon, 23 May 2016, dolio wrote:
I definitely want a compile-time complaint, and I think this can be better
class NotImplementable a where -- not exported
This instance tells any programmer that the instance cannot be defined |
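For readers following this thread today: one way to get the compile-time complaint asked for above is a `TypeError` context, a GHC feature that postdates this comment. The sketch below is hedged: it uses a stand-in class `NFData'` (not the real `deepseq` class) to stay self-contained, and the error message text is made up.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE UndecidableInstances #-}
module Main where

import GHC.TypeLits (ErrorMessage (..), TypeError)

-- Stand-in for deepseq's NFData, to keep the sketch self-contained.
class NFData' a where
  rnf' :: a -> ()

instance NFData' Int where
  rnf' x = x `seq` ()

-- An intentionally unusable instance: any attempt to select it fails at
-- compile time, and its existence blocks competing orphan instances.
instance TypeError ('Text "NFData is deliberately undefined for functions")
      => NFData' (a -> b) where
  rnf' _ = ()

main :: IO ()
main = print (rnf' (42 :: Int))  -- ordinary instances still work
```

Writing `rnf' (+ 1)` anywhere in client code would then be rejected with the custom message instead of silently doing a WHNF-only evaluation.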
On Mon, 23 May 2016, Herbert Valerio Riedel wrote:
I proposed this one: It would solve the issue here, too, but it is not implemented in GHC, it |
My point about that instance is that it is something that you are just as Just to be clear, I was not advocating that the instance I mentioned be I was mentioning that if you wanted to rule out the instance in question, So you can either, simply check to make sure that your code still compiles Only in the latter case do you break users who just want some instance -Edward On Mon, May 23, 2016 at 9:44 AM, amigalemming [email protected]
|
On Mon, 23 May 2016, Edward Kmett wrote:
I understood it this way.
Is this a frequent use case? How about using a newtype wrapper there? The |
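The newtype-wrapper route suggested here could look like the following sketch. It uses the real `Control.DeepSeq` module; the wrapper name `WHNF` is hypothetical, chosen to make the WHNF-only semantics explicit at each use site.

```haskell
module Main where

import Control.DeepSeq (NFData (..))

-- Hypothetical opt-in wrapper: forcing it evaluates the wrapped value
-- only to weak head normal form, which is all one can do for functions.
newtype WHNF a = WHNF a

instance NFData (WHNF a) where
  rnf (WHNF a) = a `seq` ()

main :: IO ()
main = print (rnf (WHNF (\x -> x + (1 :: Int))))
```

With the `NFData (a -> b)` instance removed, code that genuinely wants WHNF-only forcing of a function field could wrap it in `WHNF` instead of relying on the blanket instance.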
The criterion example was one usecase. One that enumerates all of the But if you can build with any version of your dependencies that doesn't You'll only run into this problem when you explicitly call rnf on a -Edward On Mon, May 23, 2016 at 10:38 AM, amigalemming [email protected]
|
On Mon, 23 May 2016, Edward Kmett wrote:
The existence of different possible implementations is an argument pro
I want to be sure that I do not accidentally call 'rnf' or that a library |
They can do that anyway if they just define their instance for any composite structure in the middle by hand. Even your newtype solution is evidence of this. You get no such transitive guarantee.
|
I think the main issue is the removal of the instance. It seems that this instance is not widely useful, and can be implemented as an orphan (or perhaps a newtype) for those who need it. Making it unimplementable seems to be a more controversial stretch, and tacking it on to the issue of the instance's removal makes it more likely that nothing will get done at all. I think we should just remove the instance, make a changelog entry, and do a major version bump. |
On Mon, 24 Jun 2019, chessai wrote:
I think we should just remove the instance, make a changelog entry, and
do a major version bump.
Yes, please.
|
I agree that this instance just needs to be removed. |
see #47 |
What it actually should do is use seq to turn it to WHNF, then use unpackClosure# to make sure everything inside it is evaluated as well. |
On Sat, 2 Nov 2019, Zemyla wrote:
What it actually should do is use seq to turn it to WHNF, then use
unpackClosure# to make sure everything inside it is evaluated as well.
The function can still be partial and then `rnf` should diverge, too,
shouldn't it?
|
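To illustrate the point under debate: `seq` on a function forces only the closure itself to WHNF; thunks captured in its environment stay unevaluated, which is why `unpackClosure#`-style traversal was proposed. A small self-contained demonstration (the names `captured` and `f` are made up for this sketch):

```haskell
module Main where

import Control.Exception (evaluate)

main :: IO ()
main = do
  let captured = error "boom" :: Int  -- a thunk that diverges if forced
      f = \y -> y + captured          -- a closure capturing that thunk
  -- Forcing the function to WHNF does not force what it captured:
  _ <- evaluate f
  putStrLn "forced f without touching its captured thunk"
```

If `rnf` really traversed the closure's payload, the `error "boom"` thunk would have to be forced and the program would crash instead of printing.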
Basically, imagine a subset of Haskell where only named functions exist and can be passed to things. To represent a partially evaluated function of some sort, you'd need an existential datatype, and functions to manipulate it.
data Closure a = forall u. Closure u (u -> a) -- Here, the (u -> a) function has to be able to be named at compile-time.
closeFmap :: (u -> a, a -> b, u) -> b
closeFmap (g, f, u) = f (g u)
instance Functor Closure where
fmap f (Closure u g) = Closure (g, f, u) closeFmap
closeLiftA2 :: (u -> a, v -> b, a -> b -> c, u, v) -> c
closeLiftA2 (gu, gv, f, u, v) = f (gu u) (gv v)
closeAp :: (u -> a -> b, v -> a, u, v) -> b
closeAp (gu, gv, u, v) = gu u (gv v)
instance Applicative Closure where
pure a = Closure a id
liftA2 f (Closure u gu) (Closure v gv) = Closure (gu, gv, f, u, v) closeLiftA2
Closure u gu <*> Closure v gv = Closure (gu, gv, u, v) closeAp
closeBind :: (u -> a, u, a -> Closure b) -> b
closeBind (gu, u, f) = case f (gu u) of
Closure v gv -> gv v
instance Monad Closure where
Closure u gu >>= f = Closure (gu, u, f) closeBind This may seem silly at first, but it's pretty much what the stackless, tagless G-machine behind GHC does when you write code with closed variables, and it's also what you need to do when you work with some forms of distributed Haskell. |
On Sat, 2 Nov 2019, Zemyla wrote:
unpackClosure# doesn't actually evaluate the function. It simply takes
it apart into the component pieces where any data needed for actually
calling the function are stored.
I think I understand what you are suggesting, but I doubt that it is a
useful definition.
|
I agree with @Zemyla that
I disagree @amigalemming. This definition would be very much in spirit of the goals of
Here, it is safe because
If, say, Thus the definition offered by @Zemyla would make the pattern above safe in the general case. |
Although I'm not sure how @Zemyla imagines running |
yeah, as @int-index says, forcing the captured thunks in a closure requires RTS hooks to work, and closures are existentials, and thus the RTS would have to know about how to |
or maybe I'm misunderstanding this thread :) |
So, the proposal here isn't just "Remove Removing the instance already removes the ability to do an Whether or not We can simply sidestep this whole "problem" by saying "A function |
(I'm not advocating in favor or against the change or in favor or against any specific migration strategy. This is just to clarify that no one will be busted immediately.) |
True. It will come as a major surprise to everyone at the time they upgrade their compiler. Hence I'll advocate for this to be a deprecation warning for at least two major GHC releases (at the current cadence of two per year), so that users have enough time to start thinking about (and mitigating) it before the final break happens. The current GHC release is 9.6, stable(?) is 9.4? Ecosystem-ready is 9.2? (It certainly was ~4mo ago). There is the awkward 9.0 in there as well. And lots of code is still on 8.10. People today on 8.10 are >2 years behind. They wouldn't even know this breakage is coming their way. |
Full disclosure, I work with Moritz. We have a huge and complex code base that is still mostly in |
Giving The root of the issue seems to be the contradiction between "NFData is for types that can be forced to normal form/fully evaluated" and "there is an instance (NFData (a -> b))". The proposal is to drop the instance. But the contradiction could just as well be resolved by changing the spec of NFData, with zero breakage. There seems to be a consensus that this spec is obtusely worded at the very least. As @parsonsmatt pointed out, "apply And as far as I can tell, whatever your interpretation, for every type there is at most one reasonable instance of |
I'm not sure about it. Imagine I'm writing |
@Bodigrim this was also mentioned in the mailing list. Again, the issue becomes: do we want This does not preclude manual instance declarations of I'm of the opinion that |
AFAIK there is no way to deprecate an instance, which is quite unfortunate. |
That’s right. That makes 2 things for ghc-proposals, I guess. |
Let's start with a proposal to deprecate instances first. And then deprecate this one.
Actually, we can work kinda haphazardly around this today, with an orphan instance:
-- Instance.hs
module Instance {-# DEPRECATED "The X Int instance will be deprecated" #-} where
import Class
f :: Int -> Int
f = (+1)
instance X Int where x = f
-- Class.hs
module Class where
class X a where x :: a -> a
-- Lib.hs
module Lib ( module Class, module Instance) where
import Class
import Instance
-- Test.hs
module Test where
import Lib
main = print $ x (1 :: Int)
This will result in the following:
$ runghc Test.hs
Lib.hs:3:1: warning: [-Wdeprecations]
Module ‘Instance’ is deprecated:
The X Int instance will be deprecated
|
3 | import Instance
| ^^^^^^^^^^^^^^^
2
Maybe I've missed something which makes this inapplicable here. Having a proper instance deprecation mechanism, however, seems like a good idea nonetheless. |
That's a neat hack :) I'll go to sleep less stupid this evening |
... I guess this won't work so well here. It would complain while building deepseq, but not propagate that to consumers of that library. |
...unless the instance is kept in its own quarantined module. But as mentioned above, that only creates a different problem, wherein The ideal migration path, in my view, is one of the following:
|
@mixphix to be clear here, I only care (independent of merits, motivation, or anything else), that this does not result in existing code breaking from one GHC to the next release. I hope this makes sense. I really want GHC N+2 (possibly with warnings) to accept the same code as GHC N (without warnings). |
(DISCLAIMER: I'm aware that Thank you, @angerman, for putting so concisely in this comment your only concern about this change. I think the Haskell community's nigh-religious devotion to backwards compatibility is admirable, and has allowed many extremely handy tools to thrive for many years with minimal upkeep. This spirit has also permitted reference material to remain up-to-date even while the compiler continually gains new features. Devotion is fickle. The passion it fuels can bring great things to the world and unite large groups of people to work toward a common goal. Yet it can often lead genuinely well-meaning and concerned individuals to do and advocate for things that are harmful to themselves and their communities, while simultaneously reinforcing their faith in their cause and their hostility toward vacillators and rivals in those same communities. (I trust the reader to be able to think of many such examples in today's political climate.) Of the many great things a devotion to backwards compatibility and long-term support has brought to Haskell, Stack and Stackage top the list. These are excellent at what they do: they provide sets of mutually-compatible libraries, with fixed versions, that are trusted to work well in tandem and whose existence is maintained over extended periods of time. The Stack team also does their best to keep up with new compiler releases, even going as far as to submit compatibility patches when particularly crucial libraries need fixing. The Haskell Stack is the number-one tool to date for ensuring long-term support of a Haskell program. On the other hand, there are many exciting things brought to us by the GHC development team with each release. I personally am ecstatic about Maintainers, when they choose to update their repositories, also strive to provide better tools for the community. 
As a recent example, This gets to what I think is the heart of what the backwards-compatibility camp is saying: bumping the compiler version on a project often leads to project-wide breakage. ... No shit? Call me inexperienced, but I haven't worked on a Haskell project large enough that the cost of such a change is measured in developer-months rather than developer-days. I have read anecdotes of some in the community who experience this when updating, and I will admit that it sounds much more tedious than enjoyable.
But I ask you: do you want to have your cake, or eat it? Because how many GHC versions are enough? This issue was opened in 2016, just after the release of GHC 8.0.1. The instance was added in 2011 (
Do you, dear reader, whose time I have already wasted, dream of a world where all of the new language pragmas, parsing adjustments, and library functions are magically compatible with your code as-written? Do you imagine the code you build as part of an eternal creation, written once and lasting through the ages, immune to any nefarious influence? Allow me to show you the path.
Or do you submit to the crushing reality that maintenance requires effort? Do you agree that in attempting progress, there must necessarily be things that cease to exist or function as they once did? If so, then I implore you to accept the cost of rectifying the things that we now view as mistakes, lest they continue to eat away at the trust we place in our software.
Wow, that turned out way longer than I thought. Sorry to make you the scapegoat, Moritz, but this is an attitude I see stalling easily-achievable goals with clear {performance, integrity, safety, ...} wins across the entire Haskell ecosystem. Long-term support via Stack has been around forever. New features of the compiler and libraries will continue to get released. They are mutually exclusive things to desire: you can't have both. I prefer semantics over convenience.
TL;DR: Haskell is dead; long live Haskell. |
I have! You want to know why it is usually somewhat easy to upgrade to a new GHC? Other people have been putting a lot of work into upgrading the ecosystem. I've personally spent months of work-time on the ecosystem for GHC 9.2 and 9.4, and I'm gearing up for 9.6's ecosystem work. I try to keep my company as close to the most recent version of GHC as possible, and one reason for that is so we can provide feedback to GHC as an industrial codebase spanning 500+kloc with a massive dependency footprint. 41% of all Haskell users are still on GHC 8.10.7 as of the '22 Haskell survey. Of folks that report that they're in industry in some way, I see that 254 out of 483 that volunteer a response say they're using GHC 8.10 - about 53%. Of folks that exclusively work in industry, 83/143 report using GHC 8 in some capacity - 58%.
I know I'm spoiled having only briefly worked in other ecosystems, but Haskell releases breaking changes far more often than what I saw in other languages. Moreover, despite having fantastic facilities for providing smooth upgrade paths and deprecation warnings, GHC and many important libraries do not take advantage of them, instead just deleting functions and instances. Users are left with a message like
I'm not a "no breaking changes" evangelist. Breaking changes are an important and necessary aspect of life as a software developer. However, we need to weigh the costs and benefits, and figure out ways to solve the given problems in the best way possible. The "problem" with the
In practice, the ecosystem has been operating under the assumption that
Why is "breaking people's code" a better solution than providing a new class that has the implications some of y'all want? |
In case anyone's interested in one way to achieve smooth upgrade paths I wrote an article about it. |
I had. I assumed that In any case, I'm going to back @angerman on this one. Breaking changes in GHC and its boot libraries should be preceded by a deprecation period when the problematic code triggers a warning. No warning – no breaking change, even though I'd personally prefer this instance gone. If we don't have the ability to attach deprecation warnings to instances, it shouldn't be hard to add this feature to GHC. It's not rocket science, it's a warning pragma. |
@mixphix thank you for taking the time to write out that response, and don't worry, I'm not taking any offence, or even having a problem with being a scapegoat. If that is what it takes to get this moving, I'm here for that.
You and I seem to have significantly different experience with backwards compatibility with respect to GHC. GHC is for us by far one of the least backwards-compatible compilers/ecosystems. I am however not focused on reference material; this applies to large and complex production codebases.
Maybe you are, I won't know, and I won't judge. I can tell you that a live migration of a codebase from 8.10 -> 9.2, while ensuring compatibility with 8.10, is now a multi-month endeavour. Why didn't we go straight for 9.4? I wish we could have, but large parts of the dependency tree were not ready for 9.4; even 9.2 wasn't, but that was a more manageable chunk. Coming back to your previous point:
Yes, this is the primary complaint, and I'd like to turn this on its head and ask: why does it have to be? What are the fundamental reasons that almost every GHC upgrade completely breaks existing code that a previous GHC release accepted perfectly fine?
Well, you might not know who I am, and I don't expect you to. But, yes, I do want to have my cake. My team and I have contributed to GHC: ghc-bignum, aarch64-ncg, and soon the JS backend, as well as various other fixes, including support for unboxed tuples and sums in the interpreter, significant work on the iOS and Android front, ... If I could, I'd use a bleeding-edge GHC all the time; it would make my team's and my own life and work tremendously easier. Fun fact: patches we write against 8.10 have virtually no chance of getting integrated anywhere. There won't be an 8.10.8, and forward-porting those patches to master is a lot of work and can't be tested against the codebase (because GHC is incompatible with that codebase). Breakage inhibits GHC contributions. So, I see myself porting our patches to 9.2, and maybe trying to find some time to see if 9.2 and master are close enough to make it worthwhile to try and port them into master. This all costs significant time, and I'd like to make sure we can have contributors who have full-time jobs, families and hobbies; but right now we are remarkably hostile to those.
I understand your frustration here fully. And as I outlined, I'd say two, but with the current cadence, I'd prefer four; yet two would already be a world of improvement (it would mean code that compiled warning-free would compile with 9.2 as well, just with additional warnings). But it needs to have deprecation warnings. And should have migration strategies pointed out. Just randomly failing to compile, potentially non-obviously, in some remote dependency busting your whole codebase is just terrible. Let me make this abundantly clear, because it appears as if I'm here to prevent innovation and moving us forward. I absolutely do not. As I've outlined in this discourse.haskell.org post:
I don't like writing patches, I don't like cutting compatibility releases, I don't like extra work. And I somehow doubt most library maintainers like any of these either. A breaking change in the surface language of GHC Haskell, or in the boot libraries, means that everyone downstream of that now has to adapt, and cut releases. To that point:
To that point, it's laudable you guys do this work, but do you enjoy doing this; patching old releases, just so that they work with a newer compiler? Why does this need to be done in the first place? The old code doesn't use any new language features, so the surface language shouldn't break. Then the only breakage would be from changes to the boot libraries, and maybe the solution here is to just make them reinstallable instead. Again, what I'm arguing for is that we get proper deprecation notices (in front of the people who actually work on the code). I do not believe that we can expect everyone to follow all issues (hell, I didn't know of this issue until ~1wk ago) on all core libraries (and dependencies, ...), follow the Steering Committee's proposals, ... just to figure out how to prepare for the next GHC release. Adapting codebases to new compilers has near zero business value. |
@int-index could you coordinate with @hsyl20 on this? I've asked him to look into this as well. I believe if you two team up this could be implemented fairly fast, and give us a much better way ahead with these kinds of issues. |
@mixphix wrote:
Compared to what? If you want to talk about backwards compatibility, look at C, where legal C code from the 1980s should still compile with the latest versions of GCC and Clang. |
@angerman Good idea. I started by finding and reopening the relevant GHC ticket #17485 and will write a GHC Proposal as the next step. I'm happy to assist with the implementation, too. The ticket is probably a good place to discuss this further. EDIT: The proposal is up for discussion at ghc-proposals/ghc-proposals#575. |
Here's an alternative idea that I think will address another pain point of this library, namely the confusion around what The purpose of this library is to fully evaluate thunks in nested data structures. About that, there's no question. The issue is when data structures contain functions: we can't fully evaluate them, and often we don't want to. But that's not really the issue: we already know we can't "fully evaluate" an arbitrary function, but we still want to know that the function itself isn't a thunk (is this even possible?). The class isn't really about "normal form" at all: it's about removing thunks and strictifying data.
-- | appropriate haddocks, etc
class Unthunk x where
unthunk :: x -> ()
type NFData x = Unthunk x
{-# DEPRECATED NFData "Please use `Unthunk` instead" #-}
rnf :: (Unthunk x) => x -> ()
rnf = unthunk
{-# DEPRECATED rnf "Please use `unthunk` instead" #-}
{- ... -}
-- | explaining in detail what this does & why it's here
instance Unthunk (a -> b) where
unthunk = (`seq` ())
Someone using |
That doesn't fix anything, it just changes the name from And |
Here's an idea for a path forward. In the next release of deepseq, we add a warning on generic derived instances that make use of The idea is that Then in the release after, the |
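For context on what a "generic derived instance that makes use of" the function instance looks like in practice, here is a hedged sketch using the real `deepseq` library with `DeriveAnyClass`; the `Handler` type and its fields are made up for illustration.

```haskell
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveGeneric #-}
module Main where

import Control.DeepSeq (NFData, rnf)
import GHC.Generics (Generic)

-- A made-up record with a function field. The Generic-derived NFData
-- instance silently relies on NFData (a -> b): the String is fully
-- evaluated, but the function is only forced to WHNF.
data Handler = Handler
  { handlerName :: String
  , handlerRun  :: Int -> Int
  } deriving (Generic, NFData)

main :: IO ()
main = print (rnf (Handler "inc" (+ 1)))
```

Under the plan above, deriving this instance would first emit a warning, and in a later release the `a -> b` case would stop resolving, turning the silent WHNF-only behaviour into an explicit compile error.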
I suggested to remove the instance NFData (a -> b) because it cannot be implemented properly. I think the proposal got broad support:
https://mail.haskell.org/libraries/2016-May/026961.html
We still have to decide whether to drop the instance or replace it with an unimplementable one. The latter would have the advantage that people can see the instance is omitted intentionally. We would need a major version bump. I can set up a pull request if you like.