{"size":2364,"ext":"lhs","lang":"Literate Haskell","max_stars_count":4.0,"content":"---\ntitle: Parallelizing algorithms in Haskell\ntags: haskell\ndescription: I demonstrate parallelizing existing code in Haskell\n---\n\nI've been working through [*Parallel and Concurrent Programming in\nHaskell*](http:\/\/chimera.labs.oreilly.com\/books\/1230000000929\/index.html). In\nmy [last\npost](\/posts\/2014-08-26-concurrent-implementation-of-the-daytime-protocol-in-haskell.html),\nI demonstrated the facilities Haskell provides for lightweight\nconcurrency. In this post, let's take a look at Haskell facilities for\nparallelism.\n\nAs a brief example, let's parallelize\n[Quicksort](https:\/\/en.wikipedia.org\/wiki\/Quicksort)[^1].\n\n[^1]: This isn't the best algorithm to parallelize, nor is this an\nefficient implementation, but it shows how to add parallelism to your\ncode.\n\n\n> import Control.Parallel.Strategies\n\n[Strategies](http:\/\/hackage.haskell.org\/package\/parallel-3.2.0.4\/docs\/Control-Parallel-Strategies.html)\nprovide a means to tell the run-time system how to evaluate\nobjects. 
We'll be using `rseq`, the sequential evaluation strategy,\nand `parList`, which takes a strategy for list items and applies that\nstrategy to each list element in parallel.\n\nHere's our non-parallelized Quicksort implementation:\n\n> quicksort :: Ord a => [a] -> [a]\n> quicksort [] = []\n> quicksort (x:xs) =\n> let leftPartition = [y | y <- xs, y < x]\n> rightPartition = [y | y <- xs, y >= x]\n> left = quicksort leftPartition\n> right = quicksort rightPartition\n> in left ++ [x] ++ right\n\nQuicksort partitions a list around a pivot, sorts each partition, and\nthen combines the partitions and the pivot.\n\nOur parallelized version is almost the same:\n\n> parallelsort :: Ord a => [a] -> [a]\n> parallelsort [] = []\n> parallelsort (x:xs) =\n> let leftPartition = [y | y <- xs, y < x] `using` parList rseq\n> rightPartition = [y | y <- xs, y >= x] `using` parList rseq\n> left = parallelsort leftPartition\n> right = parallelsort rightPartition\n> in left ++ [x] ++ right\n\nWe simply tell the run-time system what strategy to use for the list\ncomprehensions.\n\nThis doesn't really improve much in this case, but when used\njudiciously, extending your existing code with parallelism is\nstraightforward in Haskell.\n\nThis post is also [available](\/files\/parallelsort.lhs) as a [literate\nHaskell](http:\/\/www.haskell.org\/haskellwiki\/Literate_programming) file.\n","avg_line_length":36.3692307692,"max_line_length":103,"alphanum_fraction":0.7292724196} {"size":4040,"ext":"lhs","lang":"Literate Haskell","max_stars_count":1.0,"content":"\n\n> {-# LANGUAGE QuasiQuotes,OverloadedStrings #-}\n>\n> module Database.HsSqlPpp.Tests.Parsing.Joins (joins) where\n>\n> --import Database.HsSqlPpp.Utils.Here\n>\n> import Database.HsSqlPpp.Ast\n\n> import Database.HsSqlPpp.Tests.Parsing.Utils\n\n> joins :: Item\n> joins =\n> Group \"joins\"\n> [q \"select a from t1,t2\"\n> stbl {selTref = [tref \"t1\", tref \"t2\"]}\n> ,q \"select a from t1 natural inner join t2\"\n> stbl {selTref = 
[naturalInnerJoin (tref \"t1\") (tref \"t2\")]}\n> ,q \"select a from t1 inner join t2 using (a)\"\n> stbl {selTref = [usingInnerJoin (tref \"t1\") (tref \"t2\") [\"a\"]]}\n\ntodo: these aren't quite right: any join which isn't natural requires\na using or on clause. maybe the syntax should be fixed to represent\nthis?\n\n> ,q \"select a from t1 left outer join t2\"\n> stbl {selTref = [join (tref \"t1\") LeftOuter (tref \"t2\") Nothing]}\n> ,q \"select a from t1 right outer join t2\"\n> stbl {selTref = [join (tref \"t1\") RightOuter (tref \"t2\") Nothing]}\n> ,q \"select a from t1 full outer join t2\"\n> stbl {selTref = [join (tref \"t1\") FullOuter (tref \"t2\") Nothing]}\n> ,q \"select a from t1 cross join t2\"\n> stbl {selTref = [join (tref \"t1\") Cross (tref \"t2\") Nothing]}\n> ,q \"select a from t1 join t2\"\n> stbl {selTref = [join (tref \"t1\") Inner (tref \"t2\") Nothing]}\n> ,q \"select a from (b natural join c);\"\n> stbl {selTref = [tfp $ naturalInnerJoin (tref \"b\") (tref \"c\")]}\n\n> ,q \"select a from a cross join b cross join c;\"\n> stbl {selTref = [join\n> (join (tref \"a\") Cross (tref \"b\") Nothing)\n> Cross\n> (tref \"c\") Nothing]}\n> ,q \"select a from (a cross join b) cross join c;\"\n> stbl {selTref = [join\n> (tfp $ join (tref \"a\") Cross (tref \"b\") Nothing)\n> Cross\n> (tref \"c\") Nothing]}\n> ,q \"select a from ((a cross join b) cross join c);\"\n> stbl {selTref = [tfp $ join\n> (tfp $ join (tref \"a\") Cross (tref \"b\") Nothing)\n> Cross\n> (tref \"c\") Nothing]}\n\n> ,q \"select a from a cross join (b cross join c);\"\n> stbl {selTref = [join\n> (tref \"a\") Cross\n> (tfp $ join (tref \"b\") Cross (tref \"c\") Nothing)\n> Nothing]}\n\n> ,q \"select a from (a cross join (b cross join c));\"\n> stbl {selTref = [tfp $ join\n> (tref \"a\") Cross\n> (tfp $ join (tref \"b\") Cross (tref \"c\") Nothing)\n> Nothing]}\n\n> ,q \"select a from ((a cross join b) cross join c) cross join d;\"\n> stbl {selTref = [join\n> (tfp $ join\n> (tfp 
$ join (tref \"a\") Cross (tref \"b\") Nothing)\n> Cross\n> (tref \"c\") Nothing)\n> Cross\n> (tref \"d\") Nothing]}\n\n> ,q \"select a from a cross join b cross join c cross join d;\"\n> stbl {selTref = [join\n> (join\n> (join (tref \"a\") Cross (tref \"b\") Nothing)\n> Cross\n> (tref \"c\") Nothing)\n> Cross\n> (tref \"d\") Nothing]}\n> {-,q \"select a from (t cross join u) x\"\n> stbl {selTref = [TableAlias ea (Nmc \"x\") $ tfp\n> $ (join (tref \"t\") Cross (tref \"u\") Nothing)]}\n> ,q \"select a from (t as t(a, b) cross join u as t(c, d)) as t(a, b, c, d);\"\n> stbl {selTref = [TableAlias ea (Nmc \"x\") $ tfp\n> $ (join (tref \"t\") Cross (tref \"u\") Nothing)]}-}\n\n\n> ,q \"select a from b\\n\\\n> \\ inner join c\\n\\\n> \\ on true\\n\\\n> \\ inner join d\\n\\\n> \\ on 1=1;\"\n> stbl {selTref = [innerJoin\n> (innerJoin (tref \"b\") (tref \"c\") (Just lTrue))\n> (tref \"d\") (Just $ binop \"=\" (num \"1\") (num \"1\"))]}\n> ]\n\n> where\n> stbl = makeSelect\n> {selSelectList = sl [si $ ei \"a\"]\n> ,selTref = [tref \"tbl\"]}\n> q = QueryExpr\n","avg_line_length":37.4074074074,"max_line_length":79,"alphanum_fraction":0.4903465347} {"size":6638,"ext":"lhs","lang":"Literate Haskell","max_stars_count":248.0,"content":"> {-# LANGUAGE CPP #-}\n#if __GLASGOW_HASKELL__ > 708\n> {-# LANGUAGE EmptyCase #-}\n#else\n> import Unsafe.Coerce\n#endif\n\n> import AbstractFOL\n\n== Short technical note ==\nFor these exercises, you might find it useful to take a look at typed holes, a\nfeature which is enabled by default in GHC and available (the same way as the\nlanguage extension above EmptyCase) version 7.8.1 onwards:\n https:\/\/wiki.haskell.org\/GHC\/Typed_holes\n\nIf you are familiar with Agda, these will be familiar to use. In summary, when\ntrying to code up the definition of some expression (which you have already\ntyped) you can get GHC's type checker to help you out a little in seeing how far\nyou might be from forming the expression you want. 
That is, how far you are from\nconstructing something of the appropriate type.\n\nTake example0 below, and say you are writing:\n\n< example0 e = andIntro (_ e) _\n\nWhen loading the module, GHC will tell you which types your holes \"_\" should\nhave for the expression to be type correct.\n\n==========================\n\n\nExercises for DSLsofMath week 2 (2017)\n--------------------------------------\n\nThe propositional fragment of FOL is given by the rules for \u2227, \u2192, \u27f7,\n\u00ac, \u2228.\n\nWe can use the Haskell type checker to check proofs in this fragment,\nusing the functional models for introduction and elimination rules.\nExamine the file [AbstractFOL.lhs](AbstractFOL.lhs), which introduces\nan empty datatype for every connective (except \u27f7 ), and corresponding\ntypes for the introduction and elimination rules. The introduction\nand elimination rules are explicitly left \"undefined\", but we can\nstill combine them and type check the results. For example:\n\n> example0 :: And p q -> And q p\n> example0 evApq = andIntro (andElimR evApq) (andElimL evApq)\n\nNotice that Haskell will not accept\n\n< example0 evApq = andIntro (andElimL evApq) (andElimR evApq)\n\nunless we change the type.\n\nAnother example:\n\n> example1 :: And q (Not q) -> p\n> example1 evAqnq = notElim (notIntro (\\ hyp_p -> evAqnq))\n\nOn to the exercises.\n\n1. Prove\n\n< Impl (And p q) q\n< Or p q -> Or q p\n< Or p (Not p)\n\n2. Translate to Haskell and prove the De Morgan laws:\n\n< \u00ac (p \u2228 q) \u27f7 \u00acp \u2227 \u00acq\n< \u00ac (p \u2227 q) \u27f7 \u00acp \u2228 \u00acq\n\n(translate equivalence to conjunction of two implications).\n\n3. So far, the implementation of the datatypes has played no role.\nTo make this clearer: define the types for connectives in AbstractFOL\nin any way you wish, e.g.:\n\n< And p q = A ()\n< Not p = B p\n\netc., as long as you still export only the data types, and not the\nconstructors. 
Convince yourself that the proofs given above still\nwork and that the type checker can indeed be used as a poor man's\nproof checker.\n\n4. The introduction and elimination rules suggest that some\nimplementations of the datatypes for connectives might be more\nreasonable than others. We have seen that the type of evidence for\n\"p \u2192 q\" is very similar to the type of functions \"p -> q\", so it would\nmake sense to define\n\n< type Impl p q = (p -> q)\n\nSimilarly, \u2227-ElimL and \u2227-ElimR behave like the functions fst and snd\non pairs, so we can take\n\n< type And p q = (p, q)\n\nwhile the notion of proof by cases is very similar to that of writing\nfunctions by pattern-matching on the various clauses, making p \u2228 q\nsimilar to Either:\n\n< type Or p q = Either p q\n\n a. Define and implement the corresponding introduction and\n elimination rules as functions.\n\n b. Compare proving the distributivity laws\n\n< (p \u2227 q) \u2228 r \u27f7 (p \u2228 r) \u2227 (q \u2228 r)\n< (p \u2228 q) \u2227 r \u27f7 (p \u2227 r) \u2228 (q \u2227 r)\n\n using the \"undefined\" introduction and elimination rules, with\n writing the corresponding functions with the given implementations\n of the datatypes. The first law, for example, requires a pair of\n functions:\n\n< (Either (p, q) r -> (Either p r, Either q r),\n< (Either p r, Either q r) -> Either (p, q) r)\n\n**Moral:** The natural question is: is it true that every time we find\nan implementation using the \"pairs, ->, Either\" translation of\nsentences, we can also find one using the \"undefined\" introduction and\nelimination rules? The answer, perhaps surprisingly, is *yes*, as\nlong as the functions we write are total. This result is known as\n*the Curry\u2013Howard isomorphism*.\n\n7. Can we extend the Curry\u2013Howard isomorphism to formulas with \u00ac? 
In\nother words, is there a type that we could use to define Not p, which\nwould work together with pairs, ->, and Either to give a full\ntranslation of sentential logic?\n\nUnfortunately, we cannot. The best that can be done is to define an\nempty type\n\n> data Empty\n\nand define Not as\n\n< type Not p = p -> Empty\n\nThe reason for this definition is: when p is Empty, the type Not p is\nnot empty: it contains the identity\n\n< id :: Empty -> Empty\n\nWhen p is not Empty (and therefore is true), there is no (total,\ndefined) function of type p -> Empty, and therefore Not p is false.\n\nMoreover, mathematically, an empty set acts as a contradiction:\nthere is exactly one function from the empty set to any other set,\nnamely the empty function. Thus, if we had an element of the empty\nset, we could obtain an element of any other set.\n\nNow to the exercise:\n\nImplement notIntro using the definition of Not above, i.e., find a function\n\n< notIntro :: (p -> (q, q -> Empty)) -> (p -> Empty)\n\nUsing\n\n> contraHey :: Empty -> p\n#if __GLASGOW_HASKELL__ > 708\n> contraHey x = case x of {}\n#else\n> contraHey x = unsafeCoerce x\n#endif\n\n\nprove\n\n< q \u2227 \u00ac q \u2192 p\n\nYou will, however, not be able to prove p \u2228 \u00ac p (try it!).\n\nProve\n\n< \u00ac p \u2228 \u00ac q \u2192 \u00ac (p \u2227 q)\n\nbut you will not be able to prove the converse.\n\n8. The implementation Not p = p -> Empty is not adequate for\nrepresenting the sentential fragment of FOL, but it is adequate for\n*constructive logic* (also known as *intuitionistic*). In\nconstructive logic, the \u00ac p is *defined* as p -> \u22a5, and the following\nelimination rule is given for \u22a5\n\n< ...\n< i. \u22a5\n< ...\n< j. 
p (\u22a5-Elim: i)\n\ncorresponding to the principle that everything follows from a\ncontradiction (\"if you believe \u22a5, you believe everything\").\n\nEvery sentence provable in constructive logic is provable in classical\nlogic, but the converse, as we have seen in the previous exercise,\ndoes not hold. On the other hand, there is no sentence in classical\nlogic which would be contradicted in constructive logic. In\nparticular, while we cannot prove p \u2228 \u00ac p, we *can* prove\n(constructively!) that there is no p for which \u00ac (p \u2228 \u00ac p), i.e., that\nthe sentence \u00ac \u00ac (p \u2228 \u00acp) is always true.\n\nShow this by implementing the following function:\n\n< noContra :: (Either p (p -> Empty) -> Empty) -> Empty\n","avg_line_length":31.7607655502,"max_line_length":80,"alphanum_fraction":0.7033745104} {"size":2105,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"--- Day 1: Not Quite Lisp ---\n\nSanta was hoping for a white Christmas, but his weather machine's \"snow\"\nfunction is powered by stars, and he's fresh out! To save Christmas, he\nneeds you to collect fifty stars by December 25th.\n\nCollect stars by helping Santa solve puzzles. Two puzzles will be made\navailable on each day in the advent calendar; the second puzzle is unlocked\nwhen you complete the first. Each puzzle grants one star. 
Good luck!\n\nHere's an easy puzzle to warm you up.\n\nSanta is trying to deliver presents in a large apartment building, but he\ncan't find the right floor - the directions he got are a little confusing.\nHe starts on the ground floor (floor 0) and then follows the instructions\none character at a time.\n\nAn opening parenthesis, (, means he should go up one floor, and a closing\nparenthesis, ), means he should go down one floor.\n\nThe apartment building is very tall, and the basement is very deep; he will\nnever find the top or bottom floors.\n\nFor example:\n\n- (()) and ()() both result in floor 0.\n- ((( and (()(()( both result in floor 3.\n- ))((((( also results in floor 3.\n- ()) and ))( both result in floor -1 (the first basement level).\n- ))) and )())()) both result in floor -3.\n\nTo what floor do the instructions take Santa?\n\n> import Helpers\n>\n> level :: Num a => Char -> a\n> level '(' = 1\n> level ')' = -1\n> level _ = 0\n>\n> day01 = solve \"input-day01.txt\" (sum . map level)\n\n\n--- Part Two ---\n\nNow, given the same instructions, find the position of the first character\nthat causes him to enter the basement (floor -1). The first character in the\ninstructions has position 1, the second character has position 2, and so on.\n\nFor example:\n\n- ) causes him to enter the basement at character position 1.\n- ()()) causes him to enter the basement at character position 5.\n\nWhat is the position of the character that causes Santa to first enter the\nbasement?\n\nAlthough it hasn't changed, you can still get your puzzle input.\n\n> countTillBasement = length . takeWhile (\/= -1) . scanl (+) 0\n>\n> day01p2 = solve \"input-day01.txt\" (countTillBasement . 
map level)\n","avg_line_length":33.4126984127,"max_line_length":76,"alphanum_fraction":0.7135391924} {"size":51873,"ext":"lhs","lang":"Literate Haskell","max_stars_count":2.0,"content":"%\n% (c) The University of Glasgow 2006\n% (c) The GRASP\/AQUA Project, Glasgow University, 1992-1998\n%\n\n\\begin{code}\nmodule TcValidity (\n Rank, UserTypeCtxt(..), checkValidType, checkValidMonoType,\n expectedKindInCtxt, \n checkValidTheta, checkValidFamPats,\n checkValidInstance, validDerivPred,\n checkInstTermination, checkValidTyFamInst, checkTyFamFreeness, \n checkConsistentFamInst,\n arityErr, badATErr\n ) where\n\n#include \"HsVersions.h\"\n\n-- friends:\nimport TcUnify ( tcSubType )\nimport TcSimplify ( simplifyAmbiguityCheck )\nimport TypeRep\nimport TcType\nimport TcMType\nimport TysWiredIn ( coercibleClass )\nimport Type\nimport Unify( tcMatchTyX )\nimport Kind\nimport CoAxiom\nimport Class\nimport TyCon\n\n-- others:\nimport HsSyn -- HsType\nimport TcRnMonad -- TcType, amongst others\nimport FunDeps\nimport Name\nimport VarEnv\nimport VarSet\nimport ErrUtils\nimport PrelNames\nimport DynFlags\nimport Util\nimport Maybes\nimport ListSetOps\nimport SrcLoc\nimport Outputable\nimport FastString\nimport BasicTypes ( Arity )\n\nimport Control.Monad\nimport Data.List ( (\\\\) )\n\\end{code}\n \n\n%************************************************************************\n%* *\n Checking for ambiguity\n%* *\n%************************************************************************\n\n\n\\begin{code}\ncheckAmbiguity :: UserTypeCtxt -> Type -> TcM ()\ncheckAmbiguity ctxt ty\n | GhciCtxt <- ctxt -- Allow ambiguous types in GHCi's :kind command\n = return () -- E.g. type family T a :: * -- T :: forall k. 
k -> *\n -- Then :k T should work in GHCi, not complain that\n -- (T k) is ambiguous!\n\n | otherwise\n = do { traceTc \"Ambiguity check for\" (ppr ty)\n ; (subst, _tvs) <- tcInstSkolTyVars (varSetElems (tyVarsOfType ty))\n ; let ty' = substTy subst ty\n -- The type might have free TyVars,\n -- so we skolemise them as TcTyVars\n -- Tiresome; but the type inference engine expects TcTyVars\n\n -- Solve the constraints eagerly because an ambiguous type\n -- can cause a cascade of further errors. Since the free\n -- tyvars are skolemised, we can safely use tcSimplifyTop\n ; (_wrap, wanted) <- addErrCtxtM (mk_msg ty') $\n captureConstraints $\n tcSubType (AmbigOrigin ctxt) ctxt ty' ty'\n ; simplifyAmbiguityCheck ty wanted\n\n ; traceTc \"Done ambiguity check for\" (ppr ty) }\n where\n mk_msg ty tidy_env\n = do { allow_ambiguous <- xoptM Opt_AllowAmbiguousTypes\n ; return (tidy_env', msg $$ ppWhen (not allow_ambiguous) ambig_msg) }\n where\n (tidy_env', tidy_ty) = tidyOpenType tidy_env ty\n msg = hang (ptext (sLit \"In the ambiguity check for:\"))\n 2 (ppr tidy_ty)\n ambig_msg = ptext (sLit \"To defer the ambiguity check to use sites, enable AllowAmbiguousTypes\")\n\\end{code}\n\n\n%************************************************************************\n%* *\n Checking validity of a user-defined type\n%* *\n%************************************************************************\n\nWhen dealing with a user-written type, we first translate it from an HsType\nto a Type, performing kind checking, and then check various things that should \nbe true about it. We don't want to perform these checks at the same time\nas the initial translation because (a) they are unnecessary for interface-file\ntypes and (b) when checking a mutually recursive group of type and class decls,\nwe can't \"look\" at the tycons\/classes yet. Also, the checks are rather\ndiverse, and used to really mess up the other code.\n\nOne thing we check for is 'rank'. 
\n\n Rank 0: monotypes (no foralls)\n Rank 1: foralls at the front only, Rank 0 inside\n Rank 2: foralls at the front, Rank 1 on left of fn arrow,\n\n basic ::= tyvar | T basic ... basic\n\n r2 ::= forall tvs. cxt => r2a\n r2a ::= r1 -> r2a | basic\n r1 ::= forall tvs. cxt => r0\n r0 ::= r0 -> r0 | basic\n \nAnother thing is to check that type synonyms are saturated. \nThis might not necessarily show up in kind checking.\n type A i = i\n data T k = MkT (k Int)\n f :: T A -- BAD!\n\n \n\\begin{code}\ncheckValidType :: UserTypeCtxt -> Type -> TcM ()\n-- Checks that the type is valid for the given context\n-- Not used for instance decls; checkValidInstance instead\ncheckValidType ctxt ty \n = do { traceTc \"checkValidType\" (ppr ty <+> text \"::\" <+> ppr (typeKind ty))\n ; rankn_flag <- xoptM Opt_RankNTypes\n ; let gen_rank :: Rank -> Rank\n gen_rank r | rankn_flag = ArbitraryRank\n | otherwise = r\n\n rank1 = gen_rank r1\n rank0 = gen_rank r0\n\n r0 = rankZeroMonoType\n r1 = LimitedRank True r0\n\n rank\n = case ctxt of\n DefaultDeclCtxt-> MustBeMonoType\n ResSigCtxt -> MustBeMonoType\n LamPatSigCtxt -> rank0\n BindPatSigCtxt -> rank0\n RuleSigCtxt _ -> rank1\n TySynCtxt _ -> rank0\n\n ExprSigCtxt -> rank1\n FunSigCtxt _ -> rank1\n InfSigCtxt _ -> ArbitraryRank -- Inferred type\n ConArgCtxt _ -> rank1 -- We are given the type of the entire\n -- constructor, hence rank 1\n\n ForSigCtxt _ -> rank1\n SpecInstCtxt -> rank1\n ThBrackCtxt -> rank1\n GhciCtxt -> ArbitraryRank\n _ -> panic \"checkValidType\"\n -- Can't happen; not used for *user* sigs\n\n -- Check the internal validity of the type itself\n ; check_type ctxt rank ty\n\n -- Check that the thing has kind Type, and is lifted if necessary\n -- Do this second, because we can't usefully take the kind of an \n -- ill-formed type such as (a~Int)\n ; check_kind ctxt ty\n ; traceTc \"checkValidType done\" (ppr ty <+> text \"::\" <+> ppr (typeKind ty)) }\n\ncheckValidMonoType :: Type -> TcM ()\ncheckValidMonoType ty = 
check_mono_type SigmaCtxt MustBeMonoType ty\n\n\ncheck_kind :: UserTypeCtxt -> TcType -> TcM ()\n-- Check that the type's kind is acceptable for the context\ncheck_kind ctxt ty\n | TySynCtxt {} <- ctxt\n = do { ck <- xoptM Opt_ConstraintKinds\n ; unless ck $\n checkTc (not (returnsConstraintKind actual_kind)) \n (constraintSynErr actual_kind) }\n\n | Just k <- expectedKindInCtxt ctxt\n = checkTc (tcIsSubKind actual_kind k) (kindErr actual_kind)\n\n | otherwise\n = return () -- Any kind will do\n where\n actual_kind = typeKind ty\n\n-- Depending on the context, we might accept any kind (for instance, in a TH\n-- splice), or only certain kinds (like in type signatures).\nexpectedKindInCtxt :: UserTypeCtxt -> Maybe Kind\nexpectedKindInCtxt (TySynCtxt _) = Nothing -- Any kind will do\nexpectedKindInCtxt ThBrackCtxt = Nothing\nexpectedKindInCtxt GhciCtxt = Nothing\nexpectedKindInCtxt (ForSigCtxt _) = Just liftedTypeKind\nexpectedKindInCtxt InstDeclCtxt = Just constraintKind\nexpectedKindInCtxt SpecInstCtxt = Just constraintKind\nexpectedKindInCtxt _ = Just openTypeKind\n\\end{code}\n\nNote [Higher rank types]\n~~~~~~~~~~~~~~~~~~~~~~~~\nTechnically \n Int -> forall a. a->a\nis still a rank-1 type, but it's not Haskell 98 (Trac #5957). 
So the\nvalidity checker allows a forall after an arrow only if we allow it\nbefore -- that is, with Rank2Types or RankNTypes\n\n\\begin{code}\ndata Rank = ArbitraryRank -- Any rank ok\n\n | LimitedRank -- Note [Higher rank types]\n Bool -- Forall ok at top\n Rank -- Use for function arguments\n\n | MonoType SDoc -- Monotype, with a suggestion of how it could be a polytype\n \n | MustBeMonoType -- Monotype regardless of flags\n\nrankZeroMonoType, tyConArgMonoType, synArgMonoType :: Rank\nrankZeroMonoType = MonoType (ptext (sLit \"Perhaps you intended to use RankNTypes or Rank2Types\"))\ntyConArgMonoType = MonoType (ptext (sLit \"Perhaps you intended to use ImpredicativeTypes\"))\nsynArgMonoType = MonoType (ptext (sLit \"Perhaps you intended to use LiberalTypeSynonyms\"))\n\nfunArgResRank :: Rank -> (Rank, Rank) -- Function argument and result\nfunArgResRank (LimitedRank _ arg_rank) = (arg_rank, LimitedRank (forAllAllowed arg_rank) arg_rank)\nfunArgResRank other_rank = (other_rank, other_rank)\n\nforAllAllowed :: Rank -> Bool\nforAllAllowed ArbitraryRank = True\nforAllAllowed (LimitedRank forall_ok _) = forall_ok\nforAllAllowed _ = False\n\n----------------------------------------\ncheck_mono_type :: UserTypeCtxt -> Rank\n -> KindOrType -> TcM () -- No foralls anywhere\n -- No unlifted types of any kind\ncheck_mono_type ctxt rank ty\n | isKind ty = return () -- IA0_NOTE: Do we need to check kinds?\n | otherwise\n = do { check_type ctxt rank ty\n ; checkTc (not (isUnLiftedType ty)) (unliftedArgErr ty) }\n\ncheck_type :: UserTypeCtxt -> Rank -> Type -> TcM ()\n-- The args say what the *type context* requires, independent\n-- of *flag* settings. You test the flag settings at usage sites.\n-- \n-- Rank is allowed rank for function args\n-- Rank 0 means no for-alls anywhere\n\ncheck_type ctxt rank ty\n | not (null tvs && null theta)\n = do { checkTc (forAllAllowed rank) (forAllTyErr rank ty)\n -- Reject e.g. 
(Maybe (?x::Int => Int)), \n -- with a decent error message\n ; check_valid_theta ctxt theta\n ; check_type ctxt rank tau -- Allow foralls to right of arrow\n ; checkAmbiguity ctxt ty }\n where\n (tvs, theta, tau) = tcSplitSigmaTy ty\n \ncheck_type _ _ (TyVarTy _) = return ()\n\ncheck_type ctxt rank (FunTy arg_ty res_ty)\n = do { check_type ctxt arg_rank arg_ty\n ; check_type ctxt res_rank res_ty }\n where\n (arg_rank, res_rank) = funArgResRank rank\n\ncheck_type ctxt rank (AppTy ty1 ty2)\n = do { check_arg_type ctxt rank ty1\n ; check_arg_type ctxt rank ty2 }\n\ncheck_type ctxt rank ty@(TyConApp tc tys)\n | isSynTyCon tc = check_syn_tc_app ctxt rank ty tc tys\n | isUnboxedTupleTyCon tc = check_ubx_tuple ctxt ty tys\n | otherwise = mapM_ (check_arg_type ctxt rank) tys\n\ncheck_type _ _ (LitTy {}) = return ()\n\ncheck_type _ _ ty = pprPanic \"check_type\" (ppr ty)\n\n----------------------------------------\ncheck_syn_tc_app :: UserTypeCtxt -> Rank -> KindOrType \n -> TyCon -> [KindOrType] -> TcM ()\ncheck_syn_tc_app ctxt rank ty tc tys\n | tc_arity <= n_args -- Saturated\n -- Check that the synonym has enough args\n -- This applies equally to open and closed synonyms\n -- It's OK to have an *over-applied* type synonym\n -- data Tree a b = ...\n -- type Foo a = Tree [a]\n -- f :: Foo a b -> ...\n = do { -- See Note [Liberal type synonyms]\n ; liberal <- xoptM Opt_LiberalTypeSynonyms\n ; if not liberal || isSynFamilyTyCon tc then\n -- For H98 and synonym families, do check the type args\n mapM_ check_arg tys\n\n else -- In the liberal case (only for closed syns), expand then check\n case tcView ty of \n Just ty' -> check_type ctxt rank ty' \n Nothing -> pprPanic \"check_tau_type\" (ppr ty) }\n\n | GhciCtxt <- ctxt -- Accept under-saturated type synonyms in \n -- GHCi :kind commands; see Trac #7586\n = mapM_ check_arg tys\n\n | otherwise\n = failWithTc (arityErr \"Type synonym\" (tyConName tc) tc_arity n_args)\n where\n n_args = length tys\n tc_arity = tyConArity tc\n 
check_arg | isSynFamilyTyCon tc = check_arg_type ctxt rank\n | otherwise = check_mono_type ctxt synArgMonoType\n \n----------------------------------------\ncheck_ubx_tuple :: UserTypeCtxt -> KindOrType \n -> [KindOrType] -> TcM ()\ncheck_ubx_tuple ctxt ty tys\n = do { ub_tuples_allowed <- xoptM Opt_UnboxedTuples\n ; checkTc ub_tuples_allowed (ubxArgTyErr ty)\n\n ; impred <- xoptM Opt_ImpredicativeTypes \n ; let rank' = if impred then ArbitraryRank else tyConArgMonoType\n -- c.f. check_arg_type\n -- However, args are allowed to be unlifted, or\n -- more unboxed tuples, so can't use check_arg_ty\n ; mapM_ (check_type ctxt rank') tys }\n \n----------------------------------------\ncheck_arg_type :: UserTypeCtxt -> Rank -> KindOrType -> TcM ()\n-- The sort of type that can instantiate a type variable,\n-- or be the argument of a type constructor.\n-- Not an unboxed tuple, but now *can* be a forall (since impredicativity)\n-- Other unboxed types are very occasionally allowed as type\n-- arguments depending on the kind of the type constructor\n-- \n-- For example, we want to reject things like:\n--\n-- instance Ord a => Ord (forall s. 
T s a)\n-- and\n-- g :: T s (forall b.b)\n--\n-- NB: unboxed tuples can have polymorphic or unboxed args.\n-- This happens in the workers for functions returning\n-- product types with polymorphic components.\n-- But not in user code.\n-- Anyway, they are dealt with by a special case in check_tau_type\n\ncheck_arg_type ctxt rank ty\n | isKind ty = return () -- IA0_NOTE: Do we need to check a kind?\n | otherwise\n = do { impred <- xoptM Opt_ImpredicativeTypes\n ; let rank' = case rank of -- Predictive => must be monotype\n MustBeMonoType -> MustBeMonoType -- Monotype, regardless\n _other | impred -> ArbitraryRank\n | otherwise -> tyConArgMonoType\n -- Make sure that MustBeMonoType is propagated, \n -- so that we don't suggest -XImpredicativeTypes in\n -- (Ord (forall a.a)) => a -> a\n -- and so that if it Must be a monotype, we check that it is!\n\n ; check_type ctxt rank' ty\n ; checkTc (not (isUnLiftedType ty)) (unliftedArgErr ty) }\n -- NB the isUnLiftedType test also checks for \n -- T State#\n -- where there is an illegal partial application of State# (which has\n -- kind * -> #); see Note [The kind invariant] in TypeRep\n\n----------------------------------------\nforAllTyErr :: Rank -> Type -> SDoc\nforAllTyErr rank ty \n = vcat [ hang (ptext (sLit \"Illegal polymorphic or qualified type:\")) 2 (ppr ty)\n , suggestion ]\n where\n suggestion = case rank of\n LimitedRank {} -> ptext (sLit \"Perhaps you intended to use RankNTypes or Rank2Types\")\n MonoType d -> d\n _ -> empty -- Polytype is always illegal\n\nunliftedArgErr, ubxArgTyErr :: Type -> SDoc\nunliftedArgErr ty = sep [ptext (sLit \"Illegal unlifted type:\"), ppr ty]\nubxArgTyErr ty = sep [ptext (sLit \"Illegal unboxed tuple type as function argument:\"), ppr ty]\n\nkindErr :: Kind -> SDoc\nkindErr kind = sep [ptext (sLit \"Expecting an ordinary type, but found a type of kind\"), ppr kind]\n\\end{code}\n\nNote [Liberal type synonyms]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIf -XLiberalTypeSynonyms is on, expand 
closed type synonyms *before*\ndoing validity checking. This allows us to instantiate a synonym defn\nwith a for-all type, or with a partially-applied type synonym.\n e.g. type T a b = a\n type S m = m ()\n f :: S (T Int)\nHere, T is partially applied, so it's illegal in H98. But if you\nexpand S first, and then T, we get just\n f :: Int\nwhich is fine.\n\nIMPORTANT: suppose T is a type synonym. Then we must do validity\nchecking on an application (T ty1 ty2)\n\n *either* before expansion (i.e. check ty1, ty2)\n *or* after expansion (i.e. expand T ty1 ty2, and then check)\n BUT NOT BOTH\n\nIf we do both, we get exponential behaviour!!\n\n data TIACons1 i r c = c i ::: r c\n type TIACons2 t x = TIACons1 t (TIACons1 t x)\n type TIACons3 t x = TIACons2 t (TIACons1 t x)\n type TIACons4 t x = TIACons2 t (TIACons2 t x)\n type TIACons7 t x = TIACons4 t (TIACons3 t x)\n\n\n%************************************************************************\n%* *\n\\subsection{Checking a theta or source type}\n%* *\n%************************************************************************\n\n\\begin{code}\ncheckValidTheta :: UserTypeCtxt -> ThetaType -> TcM ()\ncheckValidTheta ctxt theta \n = addErrCtxt (checkThetaCtxt ctxt theta) (check_valid_theta ctxt theta)\n\n-------------------------\ncheck_valid_theta :: UserTypeCtxt -> [PredType] -> TcM ()\ncheck_valid_theta _ []\n = return ()\ncheck_valid_theta ctxt theta\n = do { dflags <- getDynFlags\n ; warnTc (wopt Opt_WarnDuplicateConstraints dflags &&\n notNull dups) (dupPredWarn dups)\n ; mapM_ (check_pred_ty dflags ctxt) theta }\n where\n (_,dups) = removeDups cmpPred theta\n\n-------------------------\ncheck_pred_ty :: DynFlags -> UserTypeCtxt -> PredType -> TcM ()\n-- Check the validity of a predicate in a signature\n-- We look through any type synonyms; any constraint kinded\n-- type synonyms have been checked at their definition site\n\ncheck_pred_ty dflags ctxt pred\n | Just (tc,tys) <- tcSplitTyConApp_maybe pred\n = case () of \n _ | 
Just cls <- tyConClass_maybe tc\n -> check_class_pred dflags ctxt cls tys\n\n | tc `hasKey` eqTyConKey\n , let [_, ty1, ty2] = tys\n -> check_eq_pred dflags ctxt ty1 ty2\n\n | isTupleTyCon tc\n -> check_tuple_pred dflags ctxt pred tys\n \n | otherwise -- X t1 t2, where X is presumably a\n -- type\/data family returning ConstraintKind\n -> check_irred_pred dflags ctxt pred tys\n\n | (TyVarTy _, arg_tys) <- tcSplitAppTys pred\n = check_irred_pred dflags ctxt pred arg_tys\n\n | otherwise\n = badPred pred\n\nbadPred :: PredType -> TcM ()\nbadPred pred = failWithTc (ptext (sLit \"Malformed predicate\") <+> quotes (ppr pred))\n\ncheck_class_pred :: DynFlags -> UserTypeCtxt -> Class -> [TcType] -> TcM ()\ncheck_class_pred dflags ctxt cls tys\n = do { -- Class predicates are valid in all contexts\n ; checkTc (arity == n_tys) arity_err\n\n -- Check the form of the argument types\n ; mapM_ checkValidMonoType tys\n ; checkTc (check_class_pred_tys dflags ctxt tys)\n (predTyVarErr (mkClassPred cls tys) $$ how_to_allow)\n }\n where\n class_name = className cls\n arity = classArity cls\n n_tys = length tys\n arity_err = arityErr \"Class\" class_name arity n_tys\n how_to_allow = parens (ptext (sLit \"Use FlexibleContexts to permit this\"))\n\n\ncheck_eq_pred :: DynFlags -> UserTypeCtxt -> TcType -> TcType -> TcM ()\ncheck_eq_pred dflags _ctxt ty1 ty2\n = do { -- Equational constraints are valid in all contexts if type\n -- families are permitted\n ; checkTc (xopt Opt_TypeFamilies dflags || xopt Opt_GADTs dflags) \n (eqPredTyErr (mkEqPred ty1 ty2))\n\n -- Check the form of the argument types\n ; checkValidMonoType ty1\n ; checkValidMonoType ty2\n }\n\ncheck_tuple_pred :: DynFlags -> UserTypeCtxt -> PredType -> [PredType] -> TcM ()\ncheck_tuple_pred dflags ctxt pred ts\n = do { checkTc (xopt Opt_ConstraintKinds dflags)\n (predTupleErr pred)\n ; mapM_ (check_pred_ty dflags ctxt) ts }\n -- This case will not normally be executed because \n -- without -XConstraintKinds tuple types are 
only kind-checked as *\n\ncheck_irred_pred :: DynFlags -> UserTypeCtxt -> PredType -> [TcType] -> TcM ()\ncheck_irred_pred dflags ctxt pred arg_tys\n -- The predicate looks like (X t1 t2) or (x t1 t2) :: Constraint\n -- But X is not a synonym; that's been expanded already\n --\n -- Allowing irreducible predicates in class superclasses is somewhat dangerous\n -- because we can write:\n --\n -- type family Fooish x :: * -> Constraint\n -- type instance Fooish () = Foo\n -- class Fooish () a => Foo a where\n --\n -- This will cause the constraint simplifier to loop because every time we canonicalise a\n -- (Foo a) class constraint we add a (Fooish () a) constraint which will be immediately\n -- solved to add+canonicalise another (Foo a) constraint.\n --\n -- It is equally dangerous to allow them in instance heads because in that case the\n -- Paterson conditions may not detect duplication of a type variable or size change.\n = do { checkTc (xopt Opt_ConstraintKinds dflags)\n (predIrredErr pred)\n ; mapM_ checkValidMonoType arg_tys\n ; unless (xopt Opt_UndecidableInstances dflags) $\n -- Make sure it is OK to have an irred pred in this context\n checkTc (case ctxt of ClassSCCtxt _ -> False; InstDeclCtxt -> False; _ -> True)\n (predIrredBadCtxtErr pred) }\n\n-------------------------\ncheck_class_pred_tys :: DynFlags -> UserTypeCtxt -> [KindOrType] -> Bool\ncheck_class_pred_tys dflags ctxt kts\n = case ctxt of\n SpecInstCtxt -> True -- {-# SPECIALISE instance Eq (T Int) #-} is fine\n InstDeclCtxt -> flexible_contexts || undecidable_ok || all tcIsTyVarTy tys\n -- Further checks on head and theta in\n -- checkInstTermination\n _ -> flexible_contexts || all tyvar_head tys\n where\n (_, tys) = span isKind kts -- see Note [Kind polymorphic type classes]\n flexible_contexts = xopt Opt_FlexibleContexts dflags\n undecidable_ok = xopt Opt_UndecidableInstances dflags\n\n-------------------------\ntyvar_head :: Type -> Bool\ntyvar_head ty -- Haskell 98 allows predicates of form \n 
| tcIsTyVarTy ty = True -- C (a ty1 .. tyn)\n | otherwise -- where a is a type variable\n = case tcSplitAppTy_maybe ty of\n Just (ty, _) -> tyvar_head ty\n Nothing -> False\n\\end{code}\n\nNote [Kind polymorphic type classes]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nMultiParam check:\n\n class C f where... -- C :: forall k. k -> Constraint\n instance C Maybe where...\n\n The dictionary gets type [C * Maybe] even if it's not a MultiParam\n type class.\n\nFlexibility check:\n\n class C f where... -- C :: forall k. k -> Constraint\n data D a = D a\n instance C D where\n\n The dictionary gets type [C * (D *)]. IA0_TODO it should be\n generalized actually.\n\nNote [The ambiguity check for type signatures]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\ncheckAmbiguity is a check on user-supplied type signatures. It is\n*purely* there to report functions that cannot possibly be called. So for\nexample we want to reject:\n f :: C a => Int\nThe idea is there can be no legal calls to 'f' because every call will\ngive rise to an ambiguous constraint. We could soundly omit the\nambiguity check on type signatures entirely, at the expense of\ndelaying ambiguity errors to call sites. Indeed, the flag \n-XAllowAmbiguousTypes switches off the ambiguity check.\n\nWhat about things like this:\n class D a b | a -> b where ..\n h :: D Int b => Int \nThe Int may well fix 'b' at the call site, so that signature should\nnot be rejected. Moreover, using *visible* fundeps is too\nconservative. Consider\n class X a b where ...\n class D a b | a -> b where ...\n instance D a b => X [a] b where...\n h :: X a b => a -> a\nHere h's type looks ambiguous in 'b', but here's a legal call:\n ...(h [True])...\nThat gives rise to a (X [Bool] beta) constraint, and using the\ninstance means we need (D Bool beta) and that fixes 'beta' via D's\nfundep!\n\nBehind all these special cases there is a simple guiding principle. 
\nConsider\n\n f :: <type>\n f = ...blah...\n\n g :: <type>\n g = f\n\nYou would think that the definition of g would surely typecheck!\nAfter all f has exactly the same type, and g=f. But in fact f's type\nis instantiated and the instantiated constraints are solved against\nthe originals, so in the case of an ambiguous type it won't work.\nConsider our earlier example f :: C a => Int. Then in g's definition,\nwe'll instantiate to (C alpha) and try to deduce (C alpha) from (C a),\nand fail. \n\nSo in fact we use this as our *definition* of ambiguity. We use a\nvery similar test for *inferred* types, to ensure that they are\nunambiguous. See Note [Impedence matching] in TcBinds.\n\nThis test is very conveniently implemented by calling\n tcSubType \nThis neatly takes account of the functional dependency stuff above, \nand implicit parameters (see Note [Implicit parameters and ambiguity]).\n\nWhat about this, though?\n g :: C [a] => Int\nIs every call to 'g' ambiguous? After all, we might have\n instance C [a] where ...\nat the call site. So maybe that type is ok! Indeed even f's\nquintessentially ambiguous type might, just possibly be callable: \nwith -XFlexibleInstances we could have\n instance C a where ...\nand now a call could be legal after all! Well, we'll reject this\nunless the instance is available *here*.\n\nSide note: the ambiguity check is only used for *user* types, not for\ntypes coming from interface files. The latter can legitimately have\nambiguous types. Example\n\n class S a where s :: a -> (Int,Int)\n instance S Char where s _ = (1,1)\n f :: S a => [a] -> Int -> (Int,Int)\n f (_::[a]) x = (a*x,b)\n where (a,b) = s (undefined::a)\n\nHere the worker for f gets the type\n fw :: forall a. S a => Int -> (# Int, Int #)\n\nNote [Implicit parameters and ambiguity] \n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nOnly a *class* predicate can give rise to ambiguity.\nAn *implicit parameter* cannot. For example:\n foo :: (?x :: [a]) => Int\n foo = length ?x\nis fine. 
The call site will supply a particular 'x'\n\nFurthermore, the type variables fixed by an implicit parameter\npropagate to the others. E.g.\n foo :: (Show a, ?x::[a]) => Int\n foo = show (?x++?x)\nThe type of foo looks ambiguous. But it isn't, because at a call site\nwe might have\n let ?x = 5::Int in foo\nand all is well. In effect, implicit parameters are, well, parameters,\nso we can take their type variables into account as part of the\n"tau-tvs" stuff. This is done in the function 'FunDeps.grow'.\n\begin{code}\ncheckThetaCtxt :: UserTypeCtxt -> ThetaType -> SDoc\ncheckThetaCtxt ctxt theta\n = vcat [ptext (sLit "In the context:") <+> pprTheta theta,\n ptext (sLit "While checking") <+> pprUserTypeCtxt ctxt ]\n\neqPredTyErr, predTyVarErr, predTupleErr, predIrredErr, predIrredBadCtxtErr :: PredType -> SDoc\neqPredTyErr pred = ptext (sLit "Illegal equational constraint") <+> pprType pred\n $$\n parens (ptext (sLit "Use GADTs or TypeFamilies to permit this"))\npredTyVarErr pred = hang (ptext (sLit "Non type-variable argument"))\n 2 (ptext (sLit "in the constraint:") <+> pprType pred)\npredTupleErr pred = hang (ptext (sLit "Illegal tuple constraint:") <+> pprType pred)\n 2 (parens (ptext (sLit "Use ConstraintKinds to permit this")))\npredIrredErr pred = hang (ptext (sLit "Illegal constraint:") <+> pprType pred)\n 2 (parens (ptext (sLit "Use ConstraintKinds to permit this")))\npredIrredBadCtxtErr pred = hang (ptext (sLit "Illegal constraint") <+> quotes (pprType pred)\n <+> ptext (sLit "in a superclass\/instance context")) \n 2 (parens (ptext (sLit "Use UndecidableInstances to permit this")))\n\nconstraintSynErr :: Type -> SDoc\nconstraintSynErr kind = hang (ptext (sLit "Illegal constraint synonym of kind:") <+> quotes (ppr kind))\n 2 (parens (ptext (sLit "Use ConstraintKinds to permit this")))\n\ndupPredWarn :: [[PredType]] -> SDoc\ndupPredWarn dups = ptext (sLit "Duplicate constraint(s):") <+> pprWithCommas pprType (map head 
dups)\n\narityErr :: Outputable a => String -> a -> Int -> Int -> SDoc\narityErr kind name n m\n = hsep [ text kind, quotes (ppr name), ptext (sLit "should have"),\n n_arguments <> comma, text "but has been given", \n if m==0 then text "none" else int m]\n where\n n_arguments | n == 0 = ptext (sLit "no arguments")\n | n == 1 = ptext (sLit "1 argument")\n | True = hsep [int n, ptext (sLit "arguments")]\n\end{code}\n\n%************************************************************************\n%* *\n\subsection{Checking for a decent instance head type}\n%* *\n%************************************************************************\n\n@checkValidInstHead@ checks the type {\em and} its syntactic constraints:\nit must normally look like: @instance Foo (Tycon a b c ...) ...@\n\nThe exceptions to this syntactic checking: (1)~if the @GlasgowExts@\nflag is on, or (2)~the instance is imported (they must have been\ncompiled elsewhere). In these cases, we let them go through anyway.\n\nWe can also have instances for functions: @instance Foo (a -> b) ...@.\n\n\begin{code}\ncheckValidInstHead :: UserTypeCtxt -> Class -> [Type] -> TcM ()\ncheckValidInstHead ctxt clas cls_args\n = do { dflags <- getDynFlags\n\n ; checkTc (clas `notElem` abstractClasses)\n (instTypeErr clas cls_args abstract_class_msg)\n\n -- Check language restrictions; \n -- but not for SPECIALISE instance pragmas\n ; let ty_args = dropWhile isKind cls_args\n ; unless spec_inst_prag $\n do { checkTc (xopt Opt_TypeSynonymInstances dflags ||\n all tcInstHeadTyNotSynonym ty_args)\n (instTypeErr clas cls_args head_type_synonym_msg)\n ; checkTc (xopt Opt_FlexibleInstances dflags ||\n all tcInstHeadTyAppAllTyVars ty_args)\n (instTypeErr clas cls_args head_type_args_tyvars_msg)\n ; checkTc (xopt Opt_NullaryTypeClasses dflags ||\n not (null ty_args))\n (instTypeErr clas cls_args head_no_type_msg)\n ; checkTc (xopt Opt_MultiParamTypeClasses dflags ||\n length ty_args <= 1) -- Only count type arguments\n 
(instTypeErr clas cls_args head_one_type_msg) }\n\n -- May not contain type family applications\n ; mapM_ checkTyFamFreeness ty_args\n\n ; mapM_ checkValidMonoType ty_args\n -- For now, I only allow tau-types (not polytypes) in \n -- the head of an instance decl. \n -- E.g. instance C (forall a. a->a) is rejected\n -- One could imagine generalising that, but I'm not sure\n -- what all the consequences might be\n }\n\n where\n spec_inst_prag = case ctxt of { SpecInstCtxt -> True; _ -> False }\n\n head_type_synonym_msg = parens (\n text "All instance types must be of the form (T t1 ... tn)" $$\n text "where T is not a synonym." $$\n text "Use TypeSynonymInstances if you want to disable this.")\n\n head_type_args_tyvars_msg = parens (vcat [\n text "All instance types must be of the form (T a1 ... an)",\n text "where a1 ... an are *distinct type variables*,",\n text "and each type variable appears at most once in the instance head.",\n text "Use FlexibleInstances if you want to disable this."])\n\n head_one_type_msg = parens (\n text "Only one type can be given in an instance head." $$\n text "Use MultiParamTypeClasses if you want to allow more.")\n\n head_no_type_msg = parens (\n text "No parameters in the instance head." $$\n text "Use NullaryTypeClasses if you want to allow this.")\n\n abstract_class_msg =\n text "The class is abstract, manual instances are not permitted."\n\nabstractClasses :: [ Class ]\nabstractClasses = [ coercibleClass ] -- See Note [Coercible Instances]\n\ninstTypeErr :: Class -> [Type] -> SDoc -> SDoc\ninstTypeErr cls tys msg\n = hang (hang (ptext (sLit "Illegal instance declaration for"))\n 2 (quotes (pprClassPred cls tys)))\n 2 msg\n\end{code}\n\nvalidDerivPred checks for OK 'deriving' context. See Note [Exotic\nderived instance contexts] in TcSimplify. 
However the predicate is\nhere because it uses sizeTypes, fvTypes.\n\nAlso check for a bizarre corner case, when the derived instance decl \nwould look like\n instance C a b => D (T a) where ...\nNote that 'b' isn't a parameter of T. This gives rise to all sorts of\nproblems; in particular, it's hard to compare solutions for equality\nwhen finding the fixpoint, and that means the inferContext loop does\nnot converge. See Trac #5287.\n\n\begin{code}\nvalidDerivPred :: TyVarSet -> PredType -> Bool\nvalidDerivPred tv_set pred\n = case classifyPredType pred of\n ClassPred _ tys -> hasNoDups fvs \n && sizeTypes tys == length fvs\n && all (`elemVarSet` tv_set) fvs\n TuplePred ps -> all (validDerivPred tv_set) ps\n _ -> True -- Non-class predicates are ok\n where\n fvs = fvType pred\n\end{code}\n\n\n%************************************************************************\n%* *\n\subsection{Checking instance for termination}\n%* *\n%************************************************************************\n\n\begin{code}\ncheckValidInstance :: UserTypeCtxt -> LHsType Name -> Type\n -> TcM ([TyVar], ThetaType, Class, [Type])\ncheckValidInstance ctxt hs_type ty\n | Just (clas,inst_tys) <- getClassPredTys_maybe tau\n , inst_tys `lengthIs` classArity clas\n = do { setSrcSpan head_loc (checkValidInstHead ctxt clas inst_tys)\n ; checkValidTheta ctxt theta\n\n -- The Termination and Coverage Conditions\n -- Check that instance inference will terminate (if we care)\n -- For Haskell 98 this will already have been done by checkValidTheta,\n -- but as we may be using other extensions we need to check.\n -- \n -- Note that the Termination Condition is *more conservative* than \n -- the checkAmbiguity test we do on other type signatures\n -- e.g. 
Bar a => Bar Int is ambiguous, but it also fails\n -- the termination condition, because 'a' appears more often\n -- in the constraint than in the head\n ; undecidable_ok <- xoptM Opt_UndecidableInstances\n ; if undecidable_ok \n then checkAmbiguity ctxt ty\n else checkInstTermination inst_tys theta\n\n ; case (checkInstCoverage undecidable_ok clas theta inst_tys) of\n Nothing -> return () -- Check succeeded\n Just msg -> addErrTc (instTypeErr clas inst_tys msg)\n \n ; return (tvs, theta, clas, inst_tys) } \n\n | otherwise \n = failWithTc (ptext (sLit "Malformed instance head:") <+> ppr tau)\n where\n (tvs, theta, tau) = tcSplitSigmaTy ty\n\n -- The location of the "head" of the instance\n head_loc = case hs_type of\n L _ (HsForAllTy _ _ _ (L loc _)) -> loc\n L loc _ -> loc\n\end{code}\n\nNote [Paterson conditions]\n~~~~~~~~~~~~~~~~~~~~~~~~~~\nTermination test: the so-called "Paterson conditions" (see Section 5 of\n"Understanding functional dependencies via Constraint Handling Rules", \nJFP Jan 2007).\n\nWe check that each assertion in the context satisfies:\n (1) no variable has more occurrences in the assertion than in the head, and\n (2) the assertion has fewer constructors and variables (taken together\n and counting repetitions) than the head.\nThis is only needed with -fglasgow-exts, as Haskell 98 restrictions\n(which have already been checked) guarantee termination. 
\n\nThe underlying idea is that \n\n for any ground substitution, each assertion in the\n context has fewer type constructors than the head.\n\n\n\\begin{code}\ncheckInstTermination :: [TcType] -> ThetaType -> TcM ()\n-- See Note [Paterson conditions]\ncheckInstTermination tys theta\n = check_preds theta\n where\n fvs = fvTypes tys\n size = sizeTypes tys\n\n check_preds :: [PredType] -> TcM ()\n check_preds preds = mapM_ check preds\n\n check :: PredType -> TcM ()\n check pred \n = case classifyPredType pred of\n TuplePred preds -> check_preds preds -- Look inside tuple predicates; Trac #8359\n EqPred {} -> return () -- You can't get from equalities\n -- to class predicates, so this is safe\n _other -- ClassPred, IrredPred\n | not (null bad_tvs)\n -> addErrTc (predUndecErr pred (nomoreMsg bad_tvs) $$ parens undecidableMsg)\n | sizePred pred >= size\n -> addErrTc (predUndecErr pred smallerMsg $$ parens undecidableMsg)\n | otherwise\n -> return ()\n where\n bad_tvs = filterOut isKindVar (fvType pred \\\\ fvs)\n -- Rightly or wrongly, we only check for\n -- excessive occurrences of *type* variables.\n -- e.g. 
type instance Demote {T k} a = T (Demote {k} (Any {k}))\n\npredUndecErr :: PredType -> SDoc -> SDoc\npredUndecErr pred msg = sep [msg,\n nest 2 (ptext (sLit "in the constraint:") <+> pprType pred)]\n\nnomoreMsg :: [TcTyVar] -> SDoc\nnomoreMsg tvs \n = sep [ ptext (sLit "Variable") <> plural tvs <+> quotes (pprWithCommas ppr tvs) \n , (if isSingleton tvs then ptext (sLit "occurs")\n else ptext (sLit "occur"))\n <+> ptext (sLit "more often than in the instance head") ]\n\nsmallerMsg, undecidableMsg :: SDoc\nsmallerMsg = ptext (sLit "Constraint is no smaller than the instance head")\nundecidableMsg = ptext (sLit "Use UndecidableInstances to permit this")\n\end{code}\n\n\n\nNote [Associated type instances]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWe allow this:\n class C a where\n type T x a\n instance C Int where\n type T (S y) Int = y\n type T Z Int = Char\n\nNote that \n a) The variable 'x' is not bound by the class decl\n b) 'x' is instantiated to a non-type-variable in the instance\n c) There are several type instance decls for T in the instance\n\nAll this is fine. Of course, you can't give any *more* instances\nfor (T ty Int) elsewhere, because it's an *associated* type.\n\nNote [Checking consistent instantiation]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n class C a b where\n type T a x b\n\n instance C [p] Int\n type T [p] y Int = (p,y,y) -- Induces the family instance TyCon\n -- type TR p y = (p,y,y)\n\nSo we \n * Form the mini-env from the class type variables a,b\n to the instance decl types [p],Int: [a->[p], b->Int]\n\n * Look at the tyvars a,x,b of the type family constructor T\n (it shares tyvars with the class C)\n\n * Apply the mini-env to them, and check that the result is\n consistent with the instance types [p] y Int\n\nWe do *not* assume (at this point) that the bound variables of \nthe associated type instance decl are the same as for the parent\ninstance decl. 
So, for example,\n\n instance C [p] Int\n type T [q] y Int = ...\n\nwould work equally well. Reason: making the *kind* variables line\nup is much harder. Example (Trac #7282):\n class Foo (xs :: [k]) where\n type Bar xs :: *\n\n instance Foo '[] where\n type Bar '[] = Int\nHere the instance decl really looks like\n instance Foo k ('[] k) where\n type Bar k ('[] k) = Int\nbut the k's are not scoped, and hence won't match Uniques.\n\nSo instead we just match structure, with tcMatchTyX, and check\nthat distinct type variables match 1-1 with distinct type variables.\n\nHOWEVER, we *still* make the instance type variables scope over the\ntype instances, to pick up non-obvious kinds. Eg\n class Foo (a :: k) where\n type F a\n instance Foo (b :: k -> k) where\n type F b = Int\nHere the instance is kind-indexed and really looks like\n type F (k->k) (b::k->k) = Int\nBut if the 'b' didn't scope, we would make F's instance too\npoly-kinded.\n\n\\begin{code}\ncheckConsistentFamInst \n :: Maybe ( Class\n , VarEnv Type ) -- ^ Class of associated type\n -- and instantiation of class TyVars\n -> TyCon -- ^ Family tycon\n -> [TyVar] -- ^ Type variables of the family instance\n -> [Type] -- ^ Type patterns from instance\n -> TcM ()\n-- See Note [Checking consistent instantiation]\n\ncheckConsistentFamInst Nothing _ _ _ = return ()\ncheckConsistentFamInst (Just (clas, mini_env)) fam_tc at_tvs at_tys\n = do { -- Check that the associated type indeed comes from this class\n checkTc (Just clas == tyConAssoc_maybe fam_tc)\n (badATErr (className clas) (tyConName fam_tc))\n\n -- See Note [Checking consistent instantiation] in TcTyClsDecls\n -- Check right to left, so that we spot type variable\n -- inconsistencies before (more confusing) kind variables\n ; discardResult $ foldrM check_arg emptyTvSubst $\n tyConTyVars fam_tc `zip` at_tys }\n where\n at_tv_set = mkVarSet at_tvs\n\n check_arg :: (TyVar, Type) -> TvSubst -> TcM TvSubst\n check_arg (fam_tc_tv, at_ty) subst\n | Just inst_ty <- 
lookupVarEnv mini_env fam_tc_tv\n = case tcMatchTyX at_tv_set subst at_ty inst_ty of\n Just subst | all_distinct subst -> return subst\n _ -> failWithTc $ wrongATArgErr at_ty inst_ty\n -- No need to instantiate here, because the axiom\n -- uses the same type variables as the associated class\n | otherwise\n = return subst -- Allow non-type-variable instantiation\n -- See Note [Associated type instances]\n\n all_distinct :: TvSubst -> Bool\n -- True if all the variables mapped by the substitution \n -- map to *distinct* type *variables*\n all_distinct subst = go [] at_tvs\n where\n go _ [] = True\n go acc (tv:tvs) = case lookupTyVar subst tv of\n Nothing -> go acc tvs\n Just ty | Just tv' <- tcGetTyVar_maybe ty\n , tv' `notElem` acc\n -> go (tv' : acc) tvs\n _other -> False\n\nbadATErr :: Name -> Name -> SDoc\nbadATErr clas op\n = hsep [ptext (sLit "Class"), quotes (ppr clas), \n ptext (sLit "does not have an associated type"), quotes (ppr op)]\n\nwrongATArgErr :: Type -> Type -> SDoc\nwrongATArgErr ty instTy =\n sep [ ptext (sLit "Type indexes must match class instance head")\n , ptext (sLit "Found") <+> quotes (ppr ty)\n <+> ptext (sLit "but expected") <+> quotes (ppr instTy)\n ]\n\end{code}\n\n\n%************************************************************************\n%* *\n Checking type instance well-formedness and termination\n%* *\n%************************************************************************\n\n\begin{code}\n-- Check that a "type instance" is well-formed (which includes decidability\n-- unless -XUndecidableInstances is given).\n--\ncheckValidTyFamInst :: Maybe ( Class, VarEnv Type )\n -> TyCon -> CoAxBranch -> TcM ()\ncheckValidTyFamInst mb_clsinfo fam_tc \n (CoAxBranch { cab_tvs = tvs, cab_lhs = typats\n , cab_rhs = rhs, cab_loc = loc })\n = setSrcSpan loc $ \n do { checkValidFamPats fam_tc tvs typats\n\n -- The right-hand side is a tau type\n ; checkValidMonoType rhs\n\n -- We have a decidable instance unless otherwise permitted\n ; 
undecidable_ok <- xoptM Opt_UndecidableInstances\n ; unless undecidable_ok $\n mapM_ addErrTc (checkFamInstRhs typats (tcTyFamInsts rhs))\n\n -- Check that type patterns match the class instance head\n ; checkConsistentFamInst mb_clsinfo fam_tc tvs typats }\n\n-- Make sure that each type family application is \n-- (1) strictly smaller than the lhs,\n-- (2) mentions no type variable more often than the lhs, and\n-- (3) does not contain any further type family instances.\n--\ncheckFamInstRhs :: [Type] -- lhs\n -> [(TyCon, [Type])] -- type family instances\n -> [MsgDoc]\ncheckFamInstRhs lhsTys famInsts\n = mapCatMaybes check famInsts\n where\n size = sizeTypes lhsTys\n fvs = fvTypes lhsTys\n check (tc, tys)\n | not (all isTyFamFree tys)\n = Just (famInstUndecErr famInst nestedMsg $$ parens undecidableMsg)\n | not (null bad_tvs)\n = Just (famInstUndecErr famInst (nomoreMsg bad_tvs) $$ parens undecidableMsg)\n | size <= sizeTypes tys\n = Just (famInstUndecErr famInst smallerAppMsg $$ parens undecidableMsg)\n | otherwise\n = Nothing\n where\n famInst = TyConApp tc tys\n bad_tvs = filterOut isKindVar (fvTypes tys \\\\ fvs)\n -- Rightly or wrongly, we only check for\n -- excessive occurrences of *type* variables.\n -- e.g. type instance Demote {T k} a = T (Demote {k} (Any {k}))\n\ncheckValidFamPats :: TyCon -> [TyVar] -> [Type] -> TcM ()\n-- Patterns in a 'type instance' or 'data instance' decl should\n-- a) contain no type family applications\n-- (vanilla synonyms are fine, though)\n-- b) properly bind all their free type variables\n-- e.g. we disallow (Trac #7536)\n-- type T a = Int\n-- type instance F (T a) = a\n-- c) Have the right number of patterns\ncheckValidFamPats fam_tc tvs ty_pats\n = do { -- A family instance must have exactly the same number of type\n -- parameters as the family declaration. 
You can't write\n -- type family F a :: * -> *\n -- type instance F Int y = y\n -- because then the type (F Int) would be like (\\y.y)\n checkTc (length ty_pats == fam_arity) $\n wrongNumberOfParmsErr (fam_arity - length fam_kvs) -- report only types\n ; mapM_ checkTyFamFreeness ty_pats\n ; let unbound_tvs = filterOut (`elemVarSet` exactTyVarsOfTypes ty_pats) tvs\n ; checkTc (null unbound_tvs) (famPatErr fam_tc unbound_tvs ty_pats) }\n where fam_arity = tyConArity fam_tc\n (fam_kvs, _) = splitForAllTys (tyConKind fam_tc)\n\nwrongNumberOfParmsErr :: Arity -> SDoc\nwrongNumberOfParmsErr exp_arity\n = ptext (sLit \"Number of parameters must match family declaration; expected\")\n <+> ppr exp_arity\n\n-- Ensure that no type family instances occur in a type.\n--\ncheckTyFamFreeness :: Type -> TcM ()\ncheckTyFamFreeness ty\n = checkTc (isTyFamFree ty) $\n tyFamInstIllegalErr ty\n\n-- Check that a type does not contain any type family applications.\n--\nisTyFamFree :: Type -> Bool\nisTyFamFree = null . 
tcTyFamInsts\n\n-- Error messages\n\ntyFamInstIllegalErr :: Type -> SDoc\ntyFamInstIllegalErr ty\n = hang (ptext (sLit \"Illegal type synonym family application in instance\") <> \n colon) 2 $\n ppr ty\n\nfamInstUndecErr :: Type -> SDoc -> SDoc\nfamInstUndecErr ty msg \n = sep [msg, \n nest 2 (ptext (sLit \"in the type family application:\") <+> \n pprType ty)]\n\nfamPatErr :: TyCon -> [TyVar] -> [Type] -> SDoc\nfamPatErr fam_tc tvs pats\n = hang (ptext (sLit \"Family instance purports to bind type variable\") <> plural tvs\n <+> pprQuotedList tvs)\n 2 (hang (ptext (sLit \"but the real LHS (expanding synonyms) is:\"))\n 2 (pprTypeApp fam_tc (map expandTypeSynonyms pats) <+> ptext (sLit \"= ...\")))\n\nnestedMsg, smallerAppMsg :: SDoc\nnestedMsg = ptext (sLit \"Nested type family application\")\nsmallerAppMsg = ptext (sLit \"Application is no smaller than the instance head\")\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Auxiliary functions}\n%* *\n%************************************************************************\n\n\\begin{code}\n-- Free variables of a type, retaining repetitions, and expanding synonyms\nfvType :: Type -> [TyVar]\nfvType ty | Just exp_ty <- tcView ty = fvType exp_ty\nfvType (TyVarTy tv) = [tv]\nfvType (TyConApp _ tys) = fvTypes tys\nfvType (LitTy {}) = []\nfvType (FunTy arg res) = fvType arg ++ fvType res\nfvType (AppTy fun arg) = fvType fun ++ fvType arg\nfvType (ForAllTy tyvar ty) = filter (\/= tyvar) (fvType ty)\n\nfvTypes :: [Type] -> [TyVar]\nfvTypes tys = concat (map fvType tys)\n\nsizeType :: Type -> Int\n-- Size of a type: the number of variables and constructors\nsizeType ty | Just exp_ty <- tcView ty = sizeType exp_ty\nsizeType (TyVarTy {}) = 1\nsizeType (TyConApp _ tys) = sizeTypes tys + 1\nsizeType (LitTy {}) = 1\nsizeType (FunTy arg res) = sizeType arg + sizeType res + 1\nsizeType (AppTy fun arg) = sizeType fun + sizeType arg\nsizeType (ForAllTy _ ty) = sizeType 
ty\n\nsizeTypes :: [Type] -> Int\n-- IA0_NOTE: Avoid kinds.\nsizeTypes xs = sum (map sizeType tys)\n where tys = filter (not . isKind) xs\n\n-- Size of a predicate\n--\n-- We are considering whether class constraints terminate.\n-- Equality constraints and constraints for the implicit\n-- parameter class always terminate so it is safe to say "size 0".\n-- (Implicit parameter constraints always terminate because\n-- there are no instances for them---they are only solved by\n-- "local instances" in expressions).\n-- See Trac #4200.\nsizePred :: PredType -> Int\nsizePred ty = goClass ty\n where\n goClass p | isIPPred p = 0\n | otherwise = go (classifyPredType p)\n\n go (ClassPred _ tys') = sizeTypes tys'\n go (EqPred {}) = 0\n go (TuplePred ts) = sum (map goClass ts)\n go (IrredPred ty) = sizeType ty\n\end{code}\n\nNote [Paterson conditions on PredTypes]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWe are considering whether *class* constraints terminate\n(see Note [Paterson conditions]). Precisely, the Paterson conditions\nwould have us check that "the constraint has fewer constructors and variables\n(taken together and counting repetitions) than the head.".\n\nHowever, we can be a bit more refined by looking at which kind of constraint\nthis actually is. There are two main tricks:\n\n 1. It seems like it should be OK not to count the tuple type constructor\n for a PredType like (Show a, Eq a) :: Constraint, since we don't\n count the "implicit" tuple in the ThetaType itself.\n\n In fact, the Paterson test just checks *each component* of the top level\n ThetaType against the size bound, one at a time. By analogy, it should be\n OK to return the size of the *largest* tuple component as the size of the\n whole tuple.\n\n 2. Once we get into an implicit parameter or equality we\n can't get back to a class constraint, so it's safe\n to say "size 0". 
See Trac #4200.\n\nNB: we don't want to detect PredTypes in sizeType (and then call \nsizePred on them), or we might get an infinite loop if that PredType\nis irreducible. See Trac #5581.\n","avg_line_length":40.0563706564,"max_line_length":104,"alphanum_fraction":0.6088138338} {"size":14234,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"%\n% (c) The University of Glasgow 2006\n% (c) The GRASP\/AQUA Project, Glasgow University, 1992-1998\n%\n\n\\begin{code}\n{-# LANGUAGE CPP #-}\n{-# OPTIONS_GHC -fno-warn-tabs #-}\n-- The above warning supression flag is a temporary kludge.\n-- While working on this module you are encouraged to remove it and\n-- detab the module (please do the detabbing in a separate patch). See\n-- http:\/\/ghc.haskell.org\/trac\/ghc\/wiki\/Commentary\/CodingStyle#TabsvsSpaces\n-- for details\n\nmodule BuildTyCl (\n buildSynTyCon,\n buildAlgTyCon, \n buildDataCon,\n buildPatSyn, mkPatSynMatcherId, mkPatSynWrapperId,\n TcMethInfo, buildClass,\n distinctAbstractTyConRhs, totallyAbstractTyConRhs,\n mkNewTyConRhs, mkDataTyConRhs, \n newImplicitBinder\n ) where\n\n#include \"HsVersions.h\"\n\nimport IfaceEnv\nimport FamInstEnv( FamInstEnvs )\nimport DataCon\nimport PatSyn\nimport Var\nimport VarSet\nimport BasicTypes\nimport Name\nimport MkId\nimport Class\nimport TyCon\nimport Type\nimport TypeRep\nimport TcType\nimport Id\nimport Coercion\n\nimport DynFlags\nimport TcRnMonad\nimport UniqSupply\nimport Util\nimport Outputable\n\\end{code}\n\t\n\n\\begin{code}\n------------------------------------------------------\nbuildSynTyCon :: Name -> [TyVar] -> [Role] \n -> SynTyConRhs\n -> Kind -- ^ Kind of the RHS\n -> TyConParent\n -> TcRnIf m n TyCon\nbuildSynTyCon tc_name tvs roles rhs rhs_kind parent \n = return (mkSynTyCon tc_name kind tvs roles rhs parent)\n where kind = mkPiKinds tvs rhs_kind\n\n\n------------------------------------------------------\ndistinctAbstractTyConRhs, totallyAbstractTyConRhs :: 
AlgTyConRhs\ndistinctAbstractTyConRhs = AbstractTyCon True\ntotallyAbstractTyConRhs = AbstractTyCon False\n\nmkDataTyConRhs :: [DataCon] -> AlgTyConRhs\nmkDataTyConRhs cons\n = DataTyCon {\n data_cons = cons,\n is_enum = not (null cons) && all is_enum_con cons\n\t\t -- See Note [Enumeration types] in TyCon\n }\n where\n is_enum_con con\n | (_tvs, theta, arg_tys, _res) <- dataConSig con\n = null theta && null arg_tys\n\n\nmkNewTyConRhs :: Name -> TyCon -> DataCon -> TcRnIf m n AlgTyConRhs\n-- ^ Monadic because it makes a Name for the coercion TyCon\n-- We pass the Name of the parent TyCon, as well as the TyCon itself,\n-- because the latter is part of a knot, whereas the former is not.\nmkNewTyConRhs tycon_name tycon con \n = do\t{ co_tycon_name <- newImplicitBinder tycon_name mkNewTyCoOcc\n\t; let co_tycon = mkNewTypeCo co_tycon_name tycon etad_tvs etad_roles etad_rhs\n\t; traceIf (text \"mkNewTyConRhs\" <+> ppr co_tycon)\n\t; return (NewTyCon { data_con = con, \n\t\t \t nt_rhs = rhs_ty,\n\t\t \t nt_etad_rhs = (etad_tvs, etad_rhs),\n \t\t \t nt_co \t = co_tycon } ) }\n -- Coreview looks through newtypes with a Nothing\n -- for nt_co, or uses explicit coercions otherwise\n where\n tvs = tyConTyVars tycon\n roles = tyConRoles tycon\n inst_con_ty = applyTys (dataConUserType con) (mkTyVarTys tvs)\n rhs_ty = ASSERT( isFunTy inst_con_ty ) funArgTy inst_con_ty\n\t-- Instantiate the data con with the \n\t-- type variables from the tycon\n\t-- NB: a newtype DataCon has a type that must look like\n\t-- forall tvs. 
-> T tvs\n\t-- Note that we *can't* use dataConInstOrigArgTys here because\n\t-- the newtype arising from class Foo a => Bar a where {}\n \t-- has a single argument (Foo a) that is a *type class*, so\n\t-- dataConInstOrigArgTys returns [].\n\n etad_tvs :: [TyVar] -- Matched lazily, so that mkNewTypeCo can\n etad_roles :: [Role] -- return a TyCon without pulling on rhs_ty\n etad_rhs :: Type -- See Note [Tricky iface loop] in LoadIface\n (etad_tvs, etad_roles, etad_rhs) = eta_reduce (reverse tvs) (reverse roles) rhs_ty\n \n eta_reduce :: [TyVar]\t-- Reversed\n -> [Role] -- also reversed\n\t -> Type\t\t-- Rhs type\n\t -> ([TyVar], [Role], Type) -- Eta-reduced version\n -- (tyvars in normal order)\n eta_reduce (a:as) (_:rs) ty | Just (fun, arg) <- splitAppTy_maybe ty,\n\t\t\t Just tv <- getTyVar_maybe arg,\n\t\t\t tv == a,\n\t\t\t not (a `elemVarSet` tyVarsOfType fun)\n\t\t\t = eta_reduce as rs fun\n eta_reduce tvs rs ty = (reverse tvs, reverse rs, ty)\n\t\t\t\t\n\n------------------------------------------------------\nbuildDataCon :: FamInstEnvs \n -> Name -> Bool\n\t -> [HsBang] \n\t -> [Name]\t\t\t-- Field labels\n\t -> [TyVar] -> [TyVar]\t-- Univ and ext \n -> [(TyVar,Type)] -- Equality spec\n\t -> ThetaType\t\t-- Does not include the \"stupid theta\"\n\t\t\t\t\t-- or the GADT equalities\n\t -> [Type] -> Type\t\t-- Argument and result types\n\t -> TyCon\t\t\t-- Rep tycon\n\t -> TcRnIf m n DataCon\n-- A wrapper for DataCon.mkDataCon that\n-- a) makes the worker Id\n-- b) makes the wrapper Id if necessary, including\n--\tallocating its unique (hence monadic)\nbuildDataCon fam_envs src_name declared_infix arg_stricts field_lbls\n\t univ_tvs ex_tvs eq_spec ctxt arg_tys res_ty rep_tycon\n = do\t{ wrap_name <- newImplicitBinder src_name mkDataConWrapperOcc\n\t; work_name <- newImplicitBinder src_name mkDataConWorkerOcc\n\t-- This last one takes the name of the data constructor in the source\n\t-- code, which (for Haskell source anyway) will be in the DataName name\n\t-- 
space, and puts it into the VarName name space\n\n ; us <- newUniqueSupply\n ; dflags <- getDynFlags\n\t; let\n\t\tstupid_ctxt = mkDataConStupidTheta rep_tycon arg_tys univ_tvs\n\t\tdata_con = mkDataCon src_name declared_infix\n\t\t\t\t arg_stricts field_lbls\n\t\t\t\t univ_tvs ex_tvs eq_spec ctxt\n\t\t\t\t arg_tys res_ty rep_tycon\n\t\t\t\t stupid_ctxt dc_wrk dc_rep\n dc_wrk = mkDataConWorkId work_name data_con\n dc_rep = initUs_ us (mkDataConRep dflags fam_envs wrap_name data_con)\n\n\t; return data_con }\n\n\n-- The stupid context for a data constructor should be limited to\n-- the type variables mentioned in the arg_tys\n-- ToDo: Or functionally dependent on? \n--\t This whole stupid theta thing is, well, stupid.\nmkDataConStupidTheta :: TyCon -> [Type] -> [TyVar] -> [PredType]\nmkDataConStupidTheta tycon arg_tys univ_tvs\n | null stupid_theta = []\t-- The common case\n | otherwise \t = filter in_arg_tys stupid_theta\n where\n tc_subst\t = zipTopTvSubst (tyConTyVars tycon) (mkTyVarTys univ_tvs)\n stupid_theta = substTheta tc_subst (tyConStupidTheta tycon)\n\t-- Start by instantiating the master copy of the \n\t-- stupid theta, taken from the TyCon\n\n arg_tyvars = tyVarsOfTypes arg_tys\n in_arg_tys pred = not $ isEmptyVarSet $ \n\t\t tyVarsOfType pred `intersectVarSet` arg_tyvars\n\n\n------------------------------------------------------\nbuildPatSyn :: Name -> Bool -> Bool\n -> [Var]\n -> [TyVar] -> [TyVar] -- Univ and ext\n -> ThetaType -> ThetaType -- Prov and req\n -> Type -- Result type\n -> TyVar\n -> TcRnIf m n PatSyn\nbuildPatSyn src_name declared_infix has_wrapper args univ_tvs ex_tvs prov_theta req_theta pat_ty tv\n = do\t{ (matcher, _, _) <- mkPatSynMatcherId src_name args\n univ_tvs ex_tvs\n prov_theta req_theta\n pat_ty tv\n ; wrapper <- case has_wrapper of\n False -> return Nothing\n True -> fmap Just $\n mkPatSynWrapperId src_name args\n (univ_tvs ++ ex_tvs) (prov_theta ++ req_theta)\n pat_ty\n ; return $ mkPatSyn src_name declared_infix\n 
args\n univ_tvs ex_tvs\n prov_theta req_theta\n pat_ty\n matcher\n wrapper }\n\nmkPatSynMatcherId :: Name\n -> [Var]\n -> [TyVar]\n -> [TyVar]\n -> ThetaType -> ThetaType\n -> Type\n -> TyVar\n -> TcRnIf n m (Id, Type, Type)\nmkPatSynMatcherId name args univ_tvs ex_tvs prov_theta req_theta pat_ty res_tv\n = do { matcher_name <- newImplicitBinder name mkMatcherOcc\n\n ; let res_ty = TyVarTy res_tv\n cont_ty = mkSigmaTy ex_tvs prov_theta $\n mkFunTys (map varType args) res_ty\n\n ; let matcher_tau = mkFunTys [pat_ty, cont_ty, res_ty] res_ty\n matcher_sigma = mkSigmaTy (res_tv:univ_tvs) req_theta matcher_tau\n matcher_id = mkVanillaGlobal matcher_name matcher_sigma\n ; return (matcher_id, res_ty, cont_ty) }\n\nmkPatSynWrapperId :: Name\n -> [Var]\n -> [TyVar]\n -> ThetaType\n -> Type\n -> TcRnIf n m Id\nmkPatSynWrapperId name args qtvs theta pat_ty\n = do { wrapper_name <- newImplicitBinder name mkDataConWrapperOcc\n\n ; let wrapper_tau = mkFunTys (map varType args) pat_ty\n wrapper_sigma = mkSigmaTy qtvs theta wrapper_tau\n\n ; let wrapper_id = mkVanillaGlobal wrapper_name wrapper_sigma\n ; return wrapper_id }\n\n\\end{code}\n\n\n------------------------------------------------------\n\\begin{code}\ntype TcMethInfo = (Name, DefMethSpec, Type) \n -- A temporary intermediate, to communicate between \n -- tcClassSigs and buildClass.\n\nbuildClass :: Name -> [TyVar] -> [Role] -> ThetaType\n\t -> [FunDep TyVar]\t\t -- Functional dependencies\n\t -> [ClassATItem]\t\t -- Associated types\n\t -> [TcMethInfo] -- Method info\n\t -> ClassMinimalDef -- Minimal complete definition\n\t -> RecFlag\t\t\t -- Info for type constructor\n\t -> TcRnIf m n Class\n\nbuildClass tycon_name tvs roles sc_theta fds at_items sig_stuff mindef tc_isrec\n = fixM $ \\ rec_clas -> \t-- Only name generation inside loop\n do\t{ traceIf (text \"buildClass\")\n\n\t; datacon_name <- newImplicitBinder tycon_name mkClassDataConOcc\n\t\t-- The class name is the 'parent' for this datacon, not its 
tycon,\n\t\t-- because one should import the class to get the binding for \n\t\t-- the datacon\n\n\n\t; op_items <- mapM (mk_op_item rec_clas) sig_stuff\n\t \t\t-- Build the selector id and default method id\n\n\t -- Make selectors for the superclasses \n\t; sc_sel_names <- mapM (newImplicitBinder tycon_name . mkSuperDictSelOcc) \n\t\t\t\t[1..length sc_theta]\n ; let sc_sel_ids = [ mkDictSelId sc_name rec_clas \n | sc_name <- sc_sel_names]\n\t -- We number off the Dict superclass selectors, 1, 2, 3 etc so that we \n\t -- can construct names for the selectors. Thus\n\t -- class (C a, C b) => D a b where ...\n\t -- gives superclass selectors\n\t -- D_sc1, D_sc2\n\t -- (We used to call them D_C, but now we can have two different\n\t -- superclasses both called C!)\n\t\n\t; let use_newtype = isSingleton arg_tys\n\t\t-- Use a newtype if the data constructor \n\t\t-- (a) has exactly one value field\n\t\t-- i.e. exactly one operation or superclass taken together\n -- (b) that value is of lifted type (which they always are, because\n -- we box equality superclasses)\n\t\t-- See note [Class newtypes and equality predicates]\n\n\t\t-- We treat the dictionary superclasses as ordinary arguments. 
\n -- That means that in the case of\n\t\t-- class C a => D a\n\t\t-- we don't get a newtype with no arguments!\n\t args = sc_sel_names ++ op_names\n\t op_tys\t= [ty | (_,_,ty) <- sig_stuff]\n\t op_names = [op | (op,_,_) <- sig_stuff]\n\t arg_tys = sc_theta ++ op_tys\n rec_tycon = classTyCon rec_clas\n \n\t; dict_con <- buildDataCon (panic \"buildClass: FamInstEnvs\")\n datacon_name\n\t\t\t\t False \t-- Not declared infix\n\t\t\t\t (map (const HsNoBang) args)\n\t\t\t\t [{- No fields -}]\n\t\t\t\t tvs [{- no existentials -}]\n [{- No GADT equalities -}] \n [{- No theta -}]\n arg_tys\n\t\t\t\t (mkTyConApp rec_tycon (mkTyVarTys tvs))\n\t\t\t\t rec_tycon\n\n\t; rhs <- if use_newtype\n\t\t then mkNewTyConRhs tycon_name rec_tycon dict_con\n\t\t else return (mkDataTyConRhs [dict_con])\n\n\t; let {\tclas_kind = mkPiKinds tvs constraintKind\n\n \t ; tycon = mkClassTyCon tycon_name clas_kind tvs roles\n \t rhs rec_clas tc_isrec\n\t\t-- A class can be recursive, and in the case of newtypes \n\t\t-- this matters. 
For example\n\t\t-- \tclass C a where { op :: C b => a -> b -> Int }\n\t\t-- Because C has only one operation, it is represented by\n\t\t-- a newtype, and it should be a *recursive* newtype.\n\t\t-- [If we don't make it a recursive newtype, we'll expand the\n\t\t-- newtype like a synonym, but that will lead to an infinite\n\t\t-- type]\n\n\t ; result = mkClass tvs fds \n\t\t\t sc_theta sc_sel_ids at_items\n\t\t\t\t op_items mindef tycon\n\t }\n\t; traceIf (text \"buildClass\" <+> ppr tycon) \n\t; return result }\n where\n mk_op_item :: Class -> TcMethInfo -> TcRnIf n m ClassOpItem\n mk_op_item rec_clas (op_name, dm_spec, _) \n = do { dm_info <- case dm_spec of\n NoDM -> return NoDefMeth\n GenericDM -> do { dm_name <- newImplicitBinder op_name mkGenDefMethodOcc\n\t\t\t \t ; return (GenDefMeth dm_name) }\n VanillaDM -> do { dm_name <- newImplicitBinder op_name mkDefaultMethodOcc\n\t\t\t \t ; return (DefMeth dm_name) }\n ; return (mkDictSelId op_name rec_clas, dm_info) }\n\\end{code}\n\nNote [Class newtypes and equality predicates]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nConsider\n\tclass (a ~ F b) => C a b where\n\t op :: a -> b\n\nWe cannot represent this by a newtype, even though it's not\nexistential, because there are two value fields (the equality\npredicate and op. 
See Trac #2238\n\nMoreover, \n\t class (a ~ F b) => C a b where {}\nHere we can't use a newtype either, even though there is only\none field, because equality predicates are unboxed, and classes\nare boxed.\n","avg_line_length":38.2634408602,"max_line_length":99,"alphanum_fraction":0.6038358859} {"size":1723,"ext":"lhs","lang":"Literate Haskell","max_stars_count":2133.0,"content":"\\section{IDS types}\n\n\\begin{code}\n{-# LANGUAGE MagicHash #-}\n\nmodule Types where\n\nimport GHC.Exts\n\ndata F a = FN | F1 a | F2 a a | F3 a a a \n | F4 a a a a \n | F5 a a a a a (F a) \n\ndata FI = FIN | FI1 Int# | FI2 Int# Int# | FI3 Int# Int# Int# \n | FI4 Int# Int# Int# Int# \n | FI5 Int# Int# Int# Int# Int# FI\n\ndata FC = FCN | FC1 Char# | FC2 Char# Char# \n | FC3 Char# Char# Char# \n | FC4 Char# Char# Char# Char# \n | FC5 Char# Char# Char# Char# Char# FC\n\\end{code}\n\n\\begin{code}\ndata F2 a b = F2N | F21 a b | F22 a b a b | F23 a b a b a b \n | F24 a b a b a b a b \n | F25 a b a b a b a b a b (F2 a b) \n\ndata F3 a b c = F3N | F31 a b c | F32 a b c a b c \n | F33 a b c a b c a b c\n | F34 a b c a b c a b c a b c\n | F35 a b c a b c a b c a b c a b c (F3 a b c) \n\ndata F3I = F3IN \n | F3I1 Int# Int# Int# \n | F3I2 Int# Int# Int# Int# Int# Int# \n | F3I3 Int# Int# Int# Int# Int# Int# Int# Int# Int#\n | F3I4 Int# Int# Int# Int# Int# Int# Int# Int# Int# \n Int# Int# Int#\n | F3I5 Int# Int# Int# Int# Int# Int# Int# Int# Int# \n Int# Int# Int# Int# Int# Int# F3I\n\\end{code}\n\n\\begin{code}\ndata S a = SN | S1 a (S a) | S2 a a (S a) | S3 a a a (S a)\n | S4 a a a a (S a)\n | S5 a a a a a (S a) \n\ndata SI = SIN | SI1 Int# SI | SI2 Int# Int# SI \n | SI3 Int# Int# Int# SI\n | SI4 Int# Int# Int# Int# SI\n | SI5 Int# Int# Int# Int# Int# SI\n\n\ndata SC = SCN | SC1 Char# SC | SC2 Char# Char# SC \n | SC3 Char# Char# Char# SC\n | SC4 Char# Char# Char# Char# SC\n | SC5 Char# Char# Char# Char# Char# 
SC\n\\end{code}\n\n\n\n\n","avg_line_length":26.921875,"max_line_length":62,"alphanum_fraction":0.4776552525} {"size":1025,"ext":"lhs","lang":"Literate Haskell","max_stars_count":1.0,"content":">{-# LANGUAGE DataKinds #-}\n>{-# LANGUAGE TypeFamilies #-}\n>{-# LANGUAGE BangPatterns #-}\n>import HLearn.Algebra\n>import HLearn.Models.Distributions\n>import HLearn.Models.Classifiers.Bayes\n\n>data Person = Person\n> { age :: !Double\n> , workclass :: !String\n> , fnlwgt :: !Double\n> , education :: !String\n> , educationNum :: !Double\n> , maritalStatus :: !String\n> , occupation :: !String\n> , relationship :: !String\n> , race :: !String\n> , capitalGain :: !Double\n> , capitalLoss :: !Double\n> , hoursPerWeek :: !Double\n> , nativeCountry :: !String\n> , income :: !String\n> }\n\n>instance Trainable Person where\n> type GetHList Person = HList '[Double,String,Double,String,Double,String,String,String,String,Double,Double,Double,String]\n> getHList p = age p:::workclass p:::fnlwgt p:::education p:::educationNum p:::maritalStatus p:::occupation p:::relationship p:::race p:::capitalGain p:::capitalLoss p:::hoursPerWeek p:::nativeCountry p:::HNil\n","avg_line_length":36.6071428571,"max_line_length":211,"alphanum_fraction":0.636097561} {"size":998,"ext":"lhs","lang":"Literate Haskell","max_stars_count":41.0,"content":"HUnitTestExc.lhs -- test for HUnit, using Haskell language system \"Exc\"\n\n> module Main (main) where\n\n> import Test.HUnit\n> import HUnitTestBase\n\n import qualified Control.Exception (assert)\n\n assertionMessage = \"HUnitTestExc.lhs:13: Assertion failed\\n\"\n assertion = Control.Exception.assert False (return ())\n\n\n> main :: IO Counts\n> main = runTestTT (test [baseTests, excTests])\n\n> excTests :: Test\n> excTests = test [\n\n -- Hugs and GHC don't currently catch arithmetic exceptions.\n \"div by 0\" ~:\n expectUnspecifiedError (TestCase ((3 `div` 0) `seq` return ())),\n\n -- GHC doesn't currently catch array-related exceptions.\n \"array ref 
out of bounds\" ~:\n expectUnspecifiedError (TestCase (... `seq` return ())),\n\n> \"error\" ~:\n> expectError \"error\" (TestCase (error \"error\")),\n\n> \"tail []\" ~:\n> expectUnspecifiedError (TestCase (tail [] `seq` return ()))\n\n -- Hugs doesn't provide `assert`.\n \"assert\" ~:\n expectError assertionMessage (TestCase assertion)\n\n> ]\n","avg_line_length":25.5897435897,"max_line_length":73,"alphanum_fraction":0.6683366733} {"size":10645,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"%\n% (c) The University of Glasgow 2000-2006\n%\nByteCodeLink: Bytecode assembler and linker\n\n\\begin{code}\n{-# LANGUAGE BangPatterns #-}\n{-# OPTIONS -optc-DNON_POSIX_SOURCE #-}\n\nmodule ByteCodeLink (\n HValue,\n ClosureEnv, emptyClosureEnv, extendClosureEnv,\n linkBCO, lookupStaticPtr, lookupName\n ,lookupIE\n ) where\n\n#include \"HsVersions.h\"\n\nimport ByteCodeItbls\nimport ByteCodeAsm\nimport ObjLink\n\nimport DynFlags\nimport Name\nimport NameEnv\nimport PrimOp\nimport Module\nimport FastString\nimport Panic\nimport Outputable\nimport Util\n\n-- Standard libraries\n\nimport Data.Array.Base\n\nimport Control.Monad\nimport Control.Monad.ST ( stToIO )\n\nimport GHC.Arr ( Array(..), STArray(..) )\nimport GHC.IO ( IO(..) 
)\nimport GHC.Exts\nimport GHC.Ptr ( castPtr )\n\\end{code}\n\n\n%************************************************************************\n%* *\n\\subsection{Linking interpretables into something we can run}\n%* *\n%************************************************************************\n\n\\begin{code}\ntype ClosureEnv = NameEnv (Name, HValue)\nnewtype HValue = HValue Any\n\nemptyClosureEnv :: ClosureEnv\nemptyClosureEnv = emptyNameEnv\n\nextendClosureEnv :: ClosureEnv -> [(Name,HValue)] -> ClosureEnv\nextendClosureEnv cl_env pairs\n = extendNameEnvList cl_env [ (n, (n,v)) | (n,v) <- pairs]\n\\end{code}\n\n\n%************************************************************************\n%* *\n\\subsection{Linking interpretables into something we can run}\n%* *\n%************************************************************************\n\n\\begin{code}\n{-\ndata BCO# = BCO# ByteArray# -- instrs :: Array Word16#\n ByteArray# -- literals :: Array Word32#\n PtrArray# -- ptrs :: Array HValue\n ByteArray# -- itbls :: Array Addr#\n-}\n\nlinkBCO :: DynFlags -> ItblEnv -> ClosureEnv -> UnlinkedBCO -> IO HValue\nlinkBCO dflags ie ce ul_bco\n = do BCO bco# <- linkBCO' dflags ie ce ul_bco\n -- SDM: Why do we need mkApUpd0 here? I *think* it's because\n -- otherwise top-level interpreted CAFs don't get updated\n -- after evaluation. 
A top-level BCO will evaluate itself and\n -- return its value when entered, but it won't update itself.\n -- Wrapping the BCO in an AP_UPD thunk will take care of the\n -- update for us.\n --\n -- Update: the above is true, but now we also have extra invariants:\n -- (a) An AP thunk *must* point directly to a BCO\n -- (b) A zero-arity BCO *must* be wrapped in an AP thunk\n -- (c) An AP is always fully saturated, so we *can't* wrap\n -- non-zero arity BCOs in an AP thunk.\n --\n if (unlinkedBCOArity ul_bco > 0)\n then return (HValue (unsafeCoerce# bco#))\n else case mkApUpd0# bco# of { (# final_bco #) -> return (HValue final_bco) }\n\n\nlinkBCO' :: DynFlags -> ItblEnv -> ClosureEnv -> UnlinkedBCO -> IO BCO\nlinkBCO' dflags ie ce (UnlinkedBCO _ arity insns_barr bitmap literalsSS ptrsSS)\n -- Raises an IO exception on failure\n = do let literals = ssElts literalsSS\n ptrs = ssElts ptrsSS\n\n linked_literals <- mapM (lookupLiteral dflags ie) literals\n\n let n_literals = sizeSS literalsSS\n n_ptrs = sizeSS ptrsSS\n\n ptrs_arr <- mkPtrsArray dflags ie ce n_ptrs ptrs\n\n let\n !ptrs_parr = case ptrs_arr of Array _lo _hi _n parr -> parr\n\n litRange\n | n_literals > 0 = (0, fromIntegral n_literals - 1)\n | otherwise = (1, 0)\n literals_arr :: UArray Word Word\n literals_arr = listArray litRange linked_literals\n !literals_barr = case literals_arr of UArray _lo _hi _n barr -> barr\n\n !(I# arity#) = arity\n\n newBCO insns_barr literals_barr ptrs_parr arity# bitmap\n\n\n-- we recursively link any sub-BCOs while making the ptrs array\nmkPtrsArray :: DynFlags -> ItblEnv -> ClosureEnv -> Word -> [BCOPtr] -> IO (Array Word HValue)\nmkPtrsArray dflags ie ce n_ptrs ptrs = do\n let ptrRange = if n_ptrs > 0 then (0, n_ptrs-1) else (1, 0)\n marr <- newArray_ ptrRange\n let\n fill (BCOPtrName n) i = do\n ptr <- lookupName ce n\n unsafeWrite marr i ptr\n fill (BCOPtrPrimOp op) i = do\n ptr <- lookupPrimOp op\n unsafeWrite marr i ptr\n fill (BCOPtrBCO ul_bco) i = do\n BCO bco# <- 
linkBCO' dflags ie ce ul_bco\n writeArrayBCO marr i bco#\n fill (BCOPtrBreakInfo brkInfo) i =\n unsafeWrite marr i (HValue (unsafeCoerce# brkInfo))\n fill (BCOPtrArray brkArray) i =\n unsafeWrite marr i (HValue (unsafeCoerce# brkArray))\n zipWithM_ fill ptrs [0..]\n unsafeFreeze marr\n\nnewtype IOArray i e = IOArray (STArray RealWorld i e)\n\ninstance MArray IOArray e IO where\n getBounds (IOArray marr) = stToIO $ getBounds marr\n getNumElements (IOArray marr) = stToIO $ getNumElements marr\n newArray lu init = stToIO $ do\n marr <- newArray lu init; return (IOArray marr)\n newArray_ lu = stToIO $ do\n marr <- newArray_ lu; return (IOArray marr)\n unsafeRead (IOArray marr) i = stToIO (unsafeRead marr i)\n unsafeWrite (IOArray marr) i e = stToIO (unsafeWrite marr i e)\n\n-- XXX HACK: we should really have a new writeArray# primop that takes a BCO#.\nwriteArrayBCO :: IOArray Word a -> Int -> BCO# -> IO ()\nwriteArrayBCO (IOArray (STArray _ _ _ marr#)) (I# i#) bco# = IO $ \\s# ->\n case (unsafeCoerce# writeArray#) marr# i# bco# s# of { s# ->\n (# s#, () #) }\n\n{-\nwriteArrayMBA :: IOArray Int a -> Int -> MutableByteArray# a -> IO ()\nwriteArrayMBA (IOArray (STArray _ _ marr#)) (I# i#) mba# = IO $ \\s# ->\n case (unsafeCoerce# writeArray#) marr# i# bco# s# of { s# ->\n (# s#, () #) }\n-}\n\ndata BCO = BCO BCO#\n\nnewBCO :: ByteArray# -> ByteArray# -> Array# a -> Int# -> ByteArray# -> IO BCO\nnewBCO instrs lits ptrs arity bitmap\n = IO $ \\s -> case newBCO# instrs lits ptrs arity bitmap s of\n (# s1, bco #) -> (# s1, BCO bco #)\n\n\nlookupLiteral :: DynFlags -> ItblEnv -> BCONPtr -> IO Word\nlookupLiteral _ _ (BCONPtrWord lit) = return lit\nlookupLiteral _ _ (BCONPtrLbl sym) = do Ptr a# <- lookupStaticPtr sym\n return (W# (int2Word# (addr2Int# a#)))\nlookupLiteral dflags ie (BCONPtrItbl nm) = do Ptr a# <- lookupIE dflags ie nm\n return (W# (int2Word# (addr2Int# a#)))\n\nlookupStaticPtr :: FastString -> IO (Ptr ())\nlookupStaticPtr addr_of_label_string\n = do let 
label_to_find = unpackFS addr_of_label_string\n m <- lookupSymbol label_to_find\n case m of\n Just ptr -> return ptr\n Nothing -> linkFail \"ByteCodeLink: can't find label\"\n label_to_find\n\nlookupPrimOp :: PrimOp -> IO HValue\nlookupPrimOp primop\n = do let sym_to_find = primopToCLabel primop \"closure\"\n m <- lookupSymbol sym_to_find\n case m of\n Just (Ptr addr) -> case addrToAny# addr of\n (# a #) -> return (HValue a)\n Nothing -> linkFail \"ByteCodeLink.lookupCE(primop)\" sym_to_find\n\nlookupName :: ClosureEnv -> Name -> IO HValue\nlookupName ce nm\n = case lookupNameEnv ce nm of\n Just (_,aa) -> return aa\n Nothing\n -> ASSERT2(isExternalName nm, ppr nm)\n do let sym_to_find = nameToCLabel nm \"closure\"\n m <- lookupSymbol sym_to_find\n case m of\n Just (Ptr addr) -> case addrToAny# addr of\n (# a #) -> return (HValue a)\n Nothing -> linkFail \"ByteCodeLink.lookupCE\" sym_to_find\n\nlookupIE :: DynFlags -> ItblEnv -> Name -> IO (Ptr a)\nlookupIE dflags ie con_nm\n = case lookupNameEnv ie con_nm of\n Just (_, a) -> return (castPtr (itblCode dflags a))\n Nothing\n -> do -- try looking up in the object files.\n let sym_to_find1 = nameToCLabel con_nm \"con_info\"\n m <- lookupSymbol sym_to_find1\n case m of\n Just addr -> return addr\n Nothing\n -> do -- perhaps a nullary constructor?\n let sym_to_find2 = nameToCLabel con_nm \"static_info\"\n n <- lookupSymbol sym_to_find2\n case n of\n Just addr -> return addr\n Nothing -> linkFail \"ByteCodeLink.lookupIE\"\n (sym_to_find1 ++ \" or \" ++ sym_to_find2)\n\nlinkFail :: String -> String -> IO a\nlinkFail who what\n = throwGhcExceptionIO (ProgramError $\n unlines [ \"\",who\n , \"During interactive linking, GHCi couldn't find the following symbol:\"\n , ' ' : ' ' : what\n , \"This may be due to you not asking GHCi to load extra object files,\"\n , \"archives or DLLs needed by your current session. 
Restart GHCi, specifying\"\n , \"the missing library using the -L\/path\/to\/object\/dir and -lmissinglibname\"\n , \"flags, or simply by naming the relevant files on the GHCi command line.\"\n , \"Alternatively, this link failure might indicate a bug in GHCi.\"\n , \"If you suspect the latter, please send a bug report to:\"\n , \" glasgow-haskell-bugs@haskell.org\"\n ])\n\n-- HACKS!!! ToDo: cleaner\nnameToCLabel :: Name -> String{-suffix-} -> String\nnameToCLabel n suffix\n = if pkgid \/= mainPackageId\n then package_part ++ '_': qual_name\n else qual_name\n where\n pkgid = modulePackageId mod\n mod = ASSERT( isExternalName n ) nameModule n\n package_part = zString (zEncodeFS (packageIdFS (modulePackageId mod)))\n module_part = zString (zEncodeFS (moduleNameFS (moduleName mod)))\n occ_part = zString (zEncodeFS (occNameFS (nameOccName n)))\n qual_name = module_part ++ '_':occ_part ++ '_':suffix\n\n\nprimopToCLabel :: PrimOp -> String{-suffix-} -> String\nprimopToCLabel primop suffix\n = let str = \"ghczmprim_GHCziPrimopWrappers_\" ++ zString (zEncodeFS (occNameFS (primOpOcc primop))) ++ '_':suffix\n in --trace (\"primopToCLabel: \" ++ str)\n str\n\\end{code}\n\n","avg_line_length":38.2913669065,"max_line_length":115,"alphanum_fraction":0.5685298262} {"size":1309,"ext":"lhs","lang":"Literate Haskell","max_stars_count":3.0,"content":"> module Astro.Functions where\n\n> import qualified Prelude\n> import Control.Monad.Reader\n> import Astro\n> import Astro.Coords.PosVel\n> import Numeric.Units.Dimensional.Prelude\n> import Numeric.Units.Dimensional.NonSI (revolution)\n\n\n| The length of a sidereal day.\n\n> siderealDay :: Floating a => Astro a (Time a)\n> siderealDay = do\n> phi <- asks phi\n> return $ 1 *~ revolution \/ phi\n\n\n| Calculates the potential energy per unit mass of a body at the given distance from the center of Earth.\n\n> potentialEnergyPerUnitMass :: Floating a => Length a -> Astro a (EnergyPerUnitMass a)\n> potentialEnergyPerUnitMass r = do\n> mu 
<- asks mu\n> return $ negate mu \/ r\n\n| Calculates the total orbital energy per unit mass of a body with the\ngiven @PosVel@. The @PosVel@ must be in an inertial reference frame.\n\n> orbitalEneryPerUnitMass :: Floating a => PosVel s a -> Astro a (EnergyPerUnitMass a)\n> orbitalEneryPerUnitMass pv = do\n> mu <- asks mu\n> return $ (dotProduct v v) \/ _2 + mu \/ r\n> where \n> r = radius (spos pv)\n> v = (cvel pv)\n\n\n> longitudeToRA :: Fractional a => Epoch -> Longitude a -> Astro a (RightAscension a)\n> longitudeToRA t l = do\n> ra0 <- asks greenwichRefRA\n> t0 <- asks greenwichRefEpoch\n> phi <- asks phi\n> return $ ra0 + phi * (diffTime t t0) + l\n\n\n","avg_line_length":28.4565217391,"max_line_length":105,"alphanum_fraction":0.6829640947} {"size":1854,"ext":"lhs","lang":"Literate Haskell","max_stars_count":92.0,"content":"\nForward the public part of CatalogInternal.\n\n> {- | This module contains the database catalog data types and helper\n> functions.\n>\n> The catalog data type holds the catalog information needed to type\n> check sql code, and a catalog value is produced after typechecking sql\n> which represents the catalog that would be produced (e.g. 
for sql\n> containing ddl)\n>\n\n> You can create a catalog using the 'CatalogUpdate' type, and there\n> is example and util in the repo which reads a catalog from\n> an existing database in postgres.\n>\n> -}\n>\n> module Database.HsSqlPpp.Catalog\n> (\n> -- * Data types\n> Catalog\n> -- ** Updates\n> ,CatalogUpdate(..)\n> --,ppCatUpdate\n> -- ** bits and pieces\n> ,CastContext(..)\n> ,CompositeFlavour(..)\n> ,CatName\n> ,CatNameExtra(..)\n> ,mkCatNameExtra\n> ,mkCatNameExtraNN\n> --,CompositeDef\n> --,FunctionPrototype\n> --,DomainDefinition\n> --,FunFlav(..)\n> -- -- * Catalog values\n> --,emptyCatalog\n> --,defaultCatalog\n> --,ansiCatalog\n> --,defaultTemplate1Catalog\n> --,defaultTSQLCatalog\n> --,odbcCatalog\n> -- -- * Catalog comparison\n> --,CatalogDiff(..)\n> --,compareCatalogs\n> --,ppCatDiff\n> -- * Functions\n> ,updateCatalog\n> ,deconstructCatalog\n> -- * testing support\n> ,Environment\n> ,brokeEnvironment\n> ,envSelectListEnvironment\n> ) where\n>\n> import Database.HsSqlPpp.Internals.Catalog.CatalogBuilder\n> import Database.HsSqlPpp.Internals.Catalog.CatalogTypes\n> --import Database.HsSqlPpp.Internals.Catalog.DefaultTemplate1Catalog\n> --import Database.HsSqlPpp.Internals.Catalog.DefaultTSQLCatalog\n> --import Database.HsSqlPpp.Internals.Catalog.OdbcCatalog\n> --import Database.HsSqlPpp.Internals.Catalog.AnsiCatalog\n> import Database.HsSqlPpp.Internals.TypeChecking.Environment\n","avg_line_length":28.96875,"max_line_length":72,"alphanum_fraction":0.6790722762} {"size":3544,"ext":"lhs","lang":"Literate Haskell","max_stars_count":1.0,"content":"%\n% (c) The University of Glasgow 2006\n% (c) The GRASP\/AQUA Project, Glasgow University, 1992-1998\n%\n\n\\begin{code}\nmodule Maybes (\n module Data.Maybe,\n\n MaybeErr(..), -- Instance of Monad\n failME, isSuccess,\n\n fmapM_maybe,\n orElse,\n mapCatMaybes,\n allMaybes,\n firstJust, firstJusts,\n expectJust,\n maybeToBool,\n\n MaybeT(..)\n ) where\n\nimport Data.Maybe\n\ninfixr 4 
`orElse`\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection[Maybe type]{The @Maybe@ type}\n%* *\n%************************************************************************\n\n\\begin{code}\nmaybeToBool :: Maybe a -> Bool\nmaybeToBool Nothing = False\nmaybeToBool (Just _) = True\n\n-- | Collects a list of @Justs@ into a single @Just@, returning @Nothing@ if\n-- there are any @Nothings@.\nallMaybes :: [Maybe a] -> Maybe [a]\nallMaybes [] = Just []\nallMaybes (Nothing : _) = Nothing\nallMaybes (Just x : ms) = case allMaybes ms of\n Nothing -> Nothing\n Just xs -> Just (x:xs)\n\nfirstJust :: Maybe a -> Maybe a -> Maybe a\nfirstJust (Just a) _ = Just a\nfirstJust Nothing b = b\n\n-- | Takes a list of @Maybes@ and returns the first @Just@ if there is one, or\n-- @Nothing@ otherwise.\nfirstJusts :: [Maybe a] -> Maybe a\nfirstJusts = foldr firstJust Nothing\n\\end{code}\n\n\\begin{code}\nexpectJust :: String -> Maybe a -> a\n{-# INLINE expectJust #-}\nexpectJust _ (Just x) = x\nexpectJust err Nothing = error (\"expectJust \" ++ err)\n\\end{code}\n\n\\begin{code}\nmapCatMaybes :: (a -> Maybe b) -> [a] -> [b]\nmapCatMaybes _ [] = []\nmapCatMaybes f (x:xs) = case f x of\n Just y -> y : mapCatMaybes f xs\n Nothing -> mapCatMaybes f xs\n\\end{code}\n\n\\begin{code}\n\norElse :: Maybe a -> a -> a\n(Just x) `orElse` _ = x\nNothing `orElse` y = y\n\\end{code}\n\n\\begin{code}\nfmapM_maybe :: Monad m => (a -> m b) -> Maybe a -> m (Maybe b)\nfmapM_maybe _ Nothing = return Nothing\nfmapM_maybe f (Just x) = do\n x' <- f x\n return $ Just x'\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\\subsection[MaybeT type]{The @MaybeT@ monad transformer}\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\n\nnewtype MaybeT m a = MaybeT {runMaybeT :: m (Maybe a)}\n\ninstance Functor m => Functor (MaybeT m) 
where\n fmap f x = MaybeT $ fmap (fmap f) $ runMaybeT x\n\ninstance Monad m => Monad (MaybeT m) where\n return = MaybeT . return . Just\n x >>= f = MaybeT $ runMaybeT x >>= maybe (return Nothing) (runMaybeT . f)\n fail _ = MaybeT $ return Nothing\n\n\\end{code}\n\n\n%************************************************************************\n%* *\n\\subsection[MaybeErr type]{The @MaybeErr@ type}\n%* *\n%************************************************************************\n\n\\begin{code}\ndata MaybeErr err val = Succeeded val | Failed err\n\ninstance Monad (MaybeErr err) where\n return v = Succeeded v\n Succeeded v >>= k = k v\n Failed e >>= _ = Failed e\n\nisSuccess :: MaybeErr err val -> Bool\nisSuccess (Succeeded {}) = True\nisSuccess (Failed {}) = False\n\nfailME :: err -> MaybeErr err val\nfailME e = Failed e\n\\end{code}\n","avg_line_length":27.0534351145,"max_line_length":78,"alphanum_fraction":0.4844808126} {"size":7463,"ext":"lhs","lang":"Literate Haskell","max_stars_count":2.0,"content":"\\begin{code}\n{-# LANGUAGE Arrows #-}\n{-# LANGUAGE TypeFamilies #-}\n{-# LANGUAGE GADTs #-}\n{-# LANGUAGE DataKinds #-}\n{-# LANGUAGE GeneralizedNewtypeDeriving #-}\n{-# LANGUAGE RankNTypes #-}\n\nmodule Main where\nimport Data.Traversable\nimport qualified Data.Map.Strict as M\nimport Control.Arrow\nimport FreeExamples\nimport NaiveExamples\nimport Control.Monad.Writer\nimport Control.Monad.State\nimport Data.Text.Prettyprint.Doc\nimport Data.Text.Prettyprint.Doc.Render.String\n\n-- | Width of the vector instruction\nnewtype Width = Width Int\n\n-- | Identifier\ntype Id = String\n\n-- | Index expression\ntype Ix = Exp\n\n-- | Length expression\ntype Length = Exp\n\n\n-- | Eliminate superfluous skip expressions in the code.\nelimSkip :: Code -> Code\nelimSkip (Skip :>>: c) = elimSkip c\nelimSkip (c :>>: Skip) = elimSkip c\nelimSkip (c :>>: c') = elimSkip c :>>: elimSkip c'\nelimSkip c = c\n\ndata Value =\n IntVal Int\n | FloatVal Float\n | BoolVal Bool\n deriving(Eq, 
Ord)\n\ninstance Pretty Value where\n pretty (IntVal i) = pretty i\n pretty (FloatVal f) = pretty f\n pretty (BoolVal b) = pretty b\n\ninstance Show Value where\n showsPrec _ = renderShowS . layoutPretty defaultLayoutOptions . pretty\n\ndata Exp =\n Var Id\n | Literal Value\n | Index Id Ix\n | Exp :+: Exp\n | Exp :-: Exp\n | Exp :*: Exp\n | Mod Exp Exp\n | Div Exp Exp\n | Eq Exp Exp\n | Gt Exp Exp\n | LEq Exp Exp\n | Min Exp Exp\n | IfThenElse Exp Exp Exp \n deriving(Eq, Ord)\n\nprettySexp :: [Doc a] -> Doc a\nprettySexp as = nest 2 . parens . hsep $ as\n\ninstance Pretty Exp where\n pretty (Var id) = pretty id\n pretty (Literal l) = pretty l\n pretty (Index id ix) = pretty id <> brackets (pretty ix)\n pretty (e :+: e') = prettySexp $ [pretty \"+\", pretty e, pretty e']\n pretty (e :*: e') = prettySexp $ [pretty \"*\", pretty e, pretty e']\n pretty (Mod e e') = prettySexp $ [pretty \"%\", pretty e, pretty e']\n pretty (Div e e') = prettySexp $ [pretty \"\/\", pretty e, pretty e']\n pretty (Eq e e') = prettySexp $ [pretty \"==\", pretty e, pretty e']\n pretty (LEq e e') = prettySexp $ [pretty \"<=\", pretty e, pretty e']\n pretty (Min e e') = prettySexp $ [pretty \"min\", pretty e, pretty e']\n pretty (IfThenElse i t e) = \n prettySexp $ [pretty \"if\", pretty i, pretty t, pretty e]\n\ninstance Show Exp where\n showsPrec _ = renderShowS . layoutPretty defaultLayoutOptions . pretty\n\ninstance Num Exp where\n (+) = (:+:)\n (-) = (:-:)\n (*) = (:*:)\n fromInteger = Literal . IntVal . 
fromInteger\n abs = error \"no abs on Exp\"\n signum = error \"no signum on Exp\"\n\ndata Code =\n Skip\n | Code :>>: Code\n | For Id Exp Code\n | Allocate Id Length\n | Write Id Ix Exp\n deriving(Eq, Ord)\n\ninstance Pretty Code where\n pretty Skip = pretty \"skip\"\n pretty (c :>>: c') = vsep $ [pretty c, pretty c']\n pretty (For id lim body) =\n prettySexp [pretty \"for\", prettySexp [pretty id, pretty \"<=\", pretty lim], pretty body]\n pretty (Allocate id len) = prettySexp $ [pretty \"alloc\", pretty id, pretty len]\n pretty (Write id ix exp) = prettySexp $ [pretty id <> brackets (pretty ix), pretty \":=\", pretty exp]\n\ninstance Show Code where\n showsPrec _ = renderShowS . layoutPretty defaultLayoutOptions . pretty\n\n\ninstance Semigroup Code where\n (<>) = (:>>:)\n\ninstance Monoid Code where\n mempty = Skip\n\nnewtype CM a = CM {runCM :: Integer -> (Integer, Code, a) }\n\ninstance Functor CM where\n fmap f cm =\n CM $ \\i ->\n let (i', c, a) = runCM cm i\n in (i', c, f a)\n\ninstance Applicative CM where\n pure = return\n\n cma2b <*> cma = do\n a <- cma\n a2b <- cma2b\n return $ a2b a\n\ninstance Monad CM where\n return a = CM $ \\i -> (i, mempty, a)\n cm >>= f = CM $ \\i ->\n let (i', c', a') = runCM cm i\n (i'', c'', a'') = runCM (f a') i'\n in (i'', c' <> c'', a'')\n\n-- | Generate a new ID\nnewID :: String -> CM Id\nnewID name = CM $ \\i -> (i+1, mempty, (name <> \"-\" <> show i))\n\n-- | Append a section of code\nappendCode :: Code -> CM ()\nappendCode c = CM $ \\i -> (i, c, ())\n\n-- | Run a CM to extract out the code. Useful to generate code\n-- | and then transplant to another location while ensuring we\n-- | do not create overlapping IDs\nextractCMCode :: CM () -> CM Code\nextractCMCode cm =\n CM $ \\i ->\n let (i', c, _) = runCM cm i\n in (i', mempty, c)\n\n-- | Generate code from the CM\ngenCMCode :: CM () -> Code\ngenCMCode cm = let (_, c, _) = runCM cm 0 in c \n\n--- for loop\nfor_ :: Exp -- ^ Limit of the loop. 
Variable goes from 0 <= v <= limit\n -> (Exp -> CM ()) -- ^ Function that receives the loop induction variable and generates the loop body\n -> CM ()\nfor_ lim f = do\n id <- newID \"%iv\"\n code <- extractCMCode $ f (Var id)\n appendCode $ For id lim code\n\n-- | A chunk of linear memory with an ID and a length attached to it\ndata CMMem = CMMem Id Length\n\n-- | Generate an index expression into the CMMem\ncmIndex :: CMMem -> Ix -> Exp\ncmIndex (CMMem name _) ix = Index name ix\n\n-- | Generate a write statement into the CMMem\ncmWrite :: CMMem -- ^ Array to be written\n -> Ix -- ^ Index to write to\n -> Exp -- ^ Value to write\n -> CM ()\ncmWrite (CMMem name _) ix v =\n appendCode $ Write name ix v\n\n-- | Defunctionalized push array\ndata PushT where\n Generate :: Length -> (Ix -> Exp) -> PushT\n Use :: CMMem -> PushT\n Map :: (Exp -> Exp) -> PushT -> PushT\n Append :: Length -> PushT -> PushT -> PushT\n\n-- | Compute the length of a PushT\npushTLength :: PushT -> Length\npushTLength (Generate l _ ) = l\npushTLength (Use (CMMem _ l)) = l\npushTLength (Map _ p) = pushTLength p\npushTLength (Append l p1 p2) = pushTLength p1 + pushTLength p2\n\n-- | Code to index into the PushT\nindex :: PushT -> Ix -> Exp\nindex (Generate n ixf) ix = ixf ix\nindex (Use (CMMem id _)) ix = Index id ix\nindex (Map f p) ix = f (index p ix)\nindex (Append l p p') ix =\n IfThenElse\n (Gt ix l)\n (index p' (ix - l))\n (index p ix)\n\n-- | Generate code from a PushT given an index and an expression for\n-- | the value at that index\napply :: PushT -> (Ix -> Exp -> CM ()) -> CM ()\napply (Generate l ix2v) k = \n for_ l (\\ix -> k ix (ix2v ix))\napply (Use cmem@(CMMem _ n)) k = for_ n $ \\ix -> k ix (cmIndex cmem ix)\napply (Map f p) k = apply p (\\i a -> k i (f a))\napply (Append l p1 p2) k =\n apply p1 k >>\n apply p2 (\\i a -> k (l + i) a)\n\n-- | Generate code to allocate an array and return a handle\n-- | to the allocated array\nallocate :: Length -> CM (CMMem)\nallocate l = do\n
id <- newID \"#arr\"\n appendCode $ Allocate id l\n return (CMMem id l)\n\nmainArr :: IO ()\n-- mainArr = mapM_ (print) (traceProcessor (initProcessor program))\nmainArr = do\n runNaiveExamples\n runFreeExamples\n\n-- | Materialize an array, and return a handle to the materialized array\ntoVector :: PushT -> CM (CMMem)\ntoVector p = do\n -- | How do I get the length of the array I need to materialize?\n writeloc <- allocate (pushTLength p)\n apply p $ \\ix val -> (cmWrite writeloc ix val)\n return $ writeloc\n\n-- | Materialize an array and ignore the handle\ntoVector_ :: PushT -> CM ()\ntoVector_ p = toVector p >> pure ()\n\npushTZipWith :: (Exp -> Exp -> Exp) -> PushT -> PushT -> PushT\npushTZipWith f a1 a2 =\n Generate\n (min (pushTLength a1) (pushTLength a2))\n (\\i -> f (index a1 i) (index a2 i))\n\n\nsaxpy :: Exp -- ^ a\n -> PushT -- ^ x\n -> PushT -- ^ b\n -> PushT\nsaxpy a x b = pushTZipWith (\\x b -> a * x + b) x b\n\nmain :: IO ()\nmain = do\n let vec1 = CMMem \"src1\" 10\n let vec2 = CMMem \"src2\" 10\n let code = elimSkip $ genCMCode $ toVector_ $ saxpy 10 (Use vec1) (Generate 100 (\\ix -> Div ix 2))\n print code\n \n\\end{code}\n","avg_line_length":27.1381818182,"max_line_length":103,"alphanum_fraction":0.6288355889} {"size":7328,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"\\documentclass{article}\n%-------------------------------------------------------------------------------\n% SETUP\n%-------------------------------------------------------------------------------\n\n\\usepackage{verbatim}\n\\newenvironment{code}{\\verbatim}{\\endverbatim\\normalsize}\n\\usepackage[margin=1in]{geometry}\n\n\\newcommand{\\tc}[1]{\\texttt{#1}}\n\n%-------------------------------------------------------------------------------\n\\begin{document}\n%-------------------------------------------------------------------------------\n\n\\begin{center}\n\\Huge{Translator 
Interpreter}\n\\\\[0.75cm]\n\\end{center}\n\n%\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\n\\begin{code}\n\nmodule TranslatorInterpreter\n( interpret_translator\n) where\n\nimport Translator\nimport Utilities\nimport Debug\n\ninterpret_translator :: TransCode -> IO Translator\ninterpret_translator transcode =\n let helper :: [String] -> Translator -> IO Translator\n helper words trans = case words of\n [] -> return trans\n (\"(#)\":ws) -> helper (extract_comment ws) trans\n (\"\" :ws) -> helper ws trans\n (\" \" :ws) -> helper ws trans\n (\"\\n\":ws) -> helper ws trans\n (\"filetype\":\" \":filetype : ws) ->\n helper ws $ set_filetype filetype trans\n (title:\" \":nests_str:\" \":args_str : ws) ->\n let does_nest = string_to_bool nests_str\n args_type = string_to_argstype args_str\n in case args_type of\n ArgsNumber args_count ->\n let (block_format, rest) =\n make_count_formatter args_count ws\n trans_new =\n add_block title block_format does_nest trans\n in do\n -- putStrLn $ \"title : \" ++ title\n -- putStrLn $ \"before : \" ++ (show ws)\n -- putStrLn $ \"rest : \" ++ (show rest)\n helper rest trans_new\n ArgsStar ->\n let formatter = make_star_formatter\n (block_format, rest) = formatter ws\n trans_new =\n add_block title block_format does_nest trans\n in helper rest trans_new\n _ -> error\n $ \"couldn't interpret as translator code: \" ++\n (if length words < 10\n then show words\n else show $ take 10 words) ++ \"...\"\n splitted_transcode = transcode `splitted_with`\n [\" \", \"\\n\", \"<|\", \"|>\",\"(#)\"]\n empty_translator = Translator\n [] (Block \"root\" [] (\\xs -> join xs) True) (\\fp -> fp)\n in do\n foldl (>>) (putStr \"\") (map (debug . 
show) splitted_transcode)\n helper splitted_transcode empty_translator\n\nset_filetype :: String -> Translator -> Translator\nset_filetype s (Translator blocks root _) =\n Translator blocks root (\\fp -> fp ++ \".\" ++ s)\n\nextract_comment :: [String] -> [String]\nextract_comment ws = case ws of\n (\"\\n\":rest) -> rest\n (_:rest) -> extract_comment rest\n\n-- gets the next <|...|> enclosed text\nextract_next_text :: [String] -> Maybe (String, [String])\nextract_next_text strings =\n let extract_start :: [String] -> Maybe (String, [String])\n extract_start ss = case ss of\n [] -> Nothing\n (\"<|\":rest) -> extract_end rest\n (_ :rest) -> extract_start rest\n extract_end :: [String] -> Maybe (String, [String])\n extract_end ss = case ss of\n [] -> Nothing\n (\"|>\":rest) -> Just (\"\", rest)\n (s:rest) -> case extract_end rest of\n Nothing -> Nothing\n Just (ss, rest) -> Just (s ++ ss, rest)\n in extract_start strings\n\ndata ArgsType\n = ArgsNumber Int\n | ArgsStar\n\nstring_to_argstype :: String -> ArgsType\nstring_to_argstype s = case s of\n \"*\" -> ArgsStar\n int_str -> ArgsNumber $ string_to_int int_str\n\ndata ArgReference = ArgRefIndex Int\n\ninstance Show ArgReference where\n show (ArgRefIndex i) = show i\n\nbreak_text :: String -> [Either String ArgReference]\nbreak_text string =\n let helper :: String -> String -> [Either String ArgReference]\n helper str work = case str of\n \"\" -> case work of\n \"\" -> []\n _ -> [Left work]\n -- ('\\\\': x : xs) -> error $ \"got escape for: \" ++ [x]\n ('$' : x : xs) -> case (readMaybe [x] :: Maybe Int) of\n Nothing -> helper xs (work ++ ['$'] ++ [x])\n Just i -> case work of\n \"\" -> (Right $ ArgRefIndex i)\n : helper xs \"\"\n _ -> (Left work)\n : (Right $ ArgRefIndex i)\n : helper xs \"\"\n (x : xs) -> helper xs (work ++ [x])\n in helper string \"\"\n\ninterpret_star_items :: String -> String -> String -> ([TargetCode]->TargetCode)\ninterpret_star_items begin item end =\n let helper :: [Either String 
ArgReference] -> (TargetCode -> TargetCode)\n helper [] _ = \"\"\n helper (Left s : xs) ts = s ++ helper xs ts\n helper (Right (ArgRefIndex 1) : xs) ts = ts ++ helper xs ts\n splitted_item = break_text item\n in \\ts -> begin ++ (join $ map (helper splitted_item) ts) ++ end\n\ninterpret_count_item :: String -> ([TargetCode] -> TargetCode)\ninterpret_count_item item =\n let helper :: [Either String ArgReference] -> [TargetCode] -> TargetCode\n helper [] _ = \"\"\n helper (Left s : xs) ts = s ++ helper xs ts\n helper (Right (ArgRefIndex i) : xs) ts = if i <= length ts\n then (ts `at` (i-1)) ++ helper xs ts\n else helper xs ts\n splitted_item = break_text item\n in \\ts -> helper splitted_item ts\n\njust :: Maybe a -> a\njust mb_x = case mb_x of\n Just x -> x\n _ -> error \"tried to get something from nothing\"\n\nmake_count_formatter :: Int -> [String] -> ([TargetCode]->TargetCode, [String])\nmake_count_formatter count ss =\n let (item, rest) = just $ extract_next_text ss\n in (interpret_count_item item, rest)\n\nmake_star_formatter :: [String] -> ([TargetCode]->TargetCode, [String])\nmake_star_formatter ss =\n let (item1, rest1) = just $ extract_next_text ss\n (item2, rest2) = just $ extract_next_text rest1\n (item3, rest3) = just $ extract_next_text rest2\n in (interpret_star_items item1 item2 item3, rest3)\n\nmake_block :: String -> ([TargetCode] -> TargetCode) -> Bool -> Block\nmake_block title block_format does_nest = Block title [] block_format does_nest\n\nadd_block :: String->([TargetCode]->TargetCode)->Bool->Translator->Translator\nadd_block title block_format does_nest (Translator blocks root convert_fp) =\n let new_block = make_block title block_format does_nest\n in case title of\n \"root\" -> Translator blocks new_block convert_fp\n _ -> Translator (blocks ++ [new_block]) root 
convert_fp\n\n\\end{code}\n%\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n\n%-------------------------------------------------------------------------------\n\\end{document}\n%-------------------------------------------------------------------------------\n","avg_line_length":38.9787234043,"max_line_length":80,"alphanum_fraction":0.5111899563} {"size":112439,"ext":"lhs","lang":"Literate Haskell","max_stars_count":6.0,"content":"%\n% (c) The University of Glasgow, 2006\n%\n\\section[HscTypes]{Types for the per-module compiler}\n\n\\begin{code}\n{-# LANGUAGE CPP, DeriveDataTypeable, ScopedTypeVariables #-}\n\n-- | Types for the per-module compiler\nmodule HscTypes (\n -- * compilation state\n HscEnv(..), hscEPS,\n FinderCache, FindResult(..), ModLocationCache,\n Target(..), TargetId(..), pprTarget, pprTargetId,\n ModuleGraph, emptyMG,\n HscStatus(..),\n\n -- * Hsc monad\n Hsc(..), runHsc, runInteractiveHsc,\n\n -- * Information about modules\n ModDetails(..), emptyModDetails,\n ModGuts(..), CgGuts(..), ForeignStubs(..), appendStubC,\n ImportedMods, ImportedModsVal,\n\n ModSummary(..), ms_imps, ms_mod_name, showModMsg, isBootSummary,\n msHsFilePath, msHiFilePath, msObjFilePath,\n SourceModified(..),\n\n -- * Information about the module being compiled\n HscSource(..), isHsBoot, hscSourceString, -- Re-exported from DriverPhases\n\n -- * State relating to modules in this package\n HomePackageTable, HomeModInfo(..), emptyHomePackageTable,\n hptInstances, hptRules, hptVectInfo, pprHPT,\n hptObjs,\n\n -- * State relating to known packages\n ExternalPackageState(..), EpsStats(..), addEpsInStats,\n PackageTypeEnv, PackageIfaceTable, emptyPackageIfaceTable,\n lookupIfaceByModule, emptyModIface,\n\n PackageInstEnv, PackageFamInstEnv, PackageRuleBase,\n\n mkSOName, mkHsSOName, soExt,\n\n -- * Annotations\n prepareAnnotations,\n\n -- * Interactive context\n InteractiveContext(..), emptyInteractiveContext,\n icPrintUnqual, 
icInScopeTTs, icExtendGblRdrEnv,\n extendInteractiveContext, substInteractiveContext,\n setInteractivePrintName, icInteractiveModule,\n InteractiveImport(..), setInteractivePackage,\n mkPrintUnqualified, pprModulePrefix,\n mkQualPackage, mkQualModule, pkgQual,\n\n -- * Interfaces\n ModIface(..), mkIfaceWarnCache, mkIfaceHashCache, mkIfaceFixCache,\n emptyIfaceWarnCache, \n\n -- * Fixity\n FixityEnv, FixItem(..), lookupFixity, emptyFixityEnv,\n\n -- * TyThings and type environments\n TyThing(..), tyThingAvailInfo,\n tyThingTyCon, tyThingDataCon,\n tyThingId, tyThingCoAxiom, tyThingParent_maybe, tyThingsTyVars,\n implicitTyThings, implicitTyConThings, implicitClassThings,\n isImplicitTyThing,\n\n TypeEnv, lookupType, lookupTypeHscEnv, mkTypeEnv, emptyTypeEnv,\n typeEnvFromEntities, mkTypeEnvWithImplicits,\n extendTypeEnv, extendTypeEnvList,\n extendTypeEnvWithIds, \n lookupTypeEnv,\n typeEnvElts, typeEnvTyCons, typeEnvIds, typeEnvPatSyns,\n typeEnvDataCons, typeEnvCoAxioms, typeEnvClasses,\n\n -- * MonadThings\n MonadThings(..),\n\n -- * Information on imports and exports\n WhetherHasOrphans, IsBootInterface, Usage(..),\n Dependencies(..), noDependencies,\n NameCache(..), OrigNameCache,\n IfaceExport,\n\n -- * Warnings\n Warnings(..), WarningTxt(..), plusWarns,\n\n -- * Linker stuff\n Linkable(..), isObjectLinkable, linkableObjs,\n Unlinked(..), CompiledByteCode,\n isObject, nameOfObject, isInterpretable, byteCodeOfObject,\n\n -- * Program coverage\n HpcInfo(..), emptyHpcInfo, isHpcUsed, AnyHpcUsage,\n\n -- * Breakpoints\n ModBreaks (..), BreakIndex, emptyModBreaks,\n\n -- * Vectorisation information\n VectInfo(..), IfaceVectInfo(..), noVectInfo, plusVectInfo,\n noIfaceVectInfo, isNoIfaceVectInfo,\n\n -- * Safe Haskell information\n IfaceTrustInfo, getSafeMode, setSafeMode, noIfaceTrustInfo,\n trustInfoToNum, numToTrustInfo, IsSafeImport,\n\n -- * result of the parser\n HsParsedModule(..),\n\n -- * Compilation errors and warnings\n SourceError, GhcApiError, mkSrcErr, 
srcErrorMessages, mkApiErr,\n throwOneError, handleSourceError,\n handleFlagWarnings, printOrThrowWarnings,\n ) where\n\n#include \"HsVersions.h\"\n\n#ifdef GHCI\nimport ByteCodeAsm ( CompiledByteCode )\nimport InteractiveEvalTypes ( Resume )\n#endif\n\nimport HsSyn\nimport RdrName\nimport Avail\nimport Module\nimport InstEnv ( InstEnv, ClsInst )\nimport FamInstEnv\nimport Rules ( RuleBase )\nimport CoreSyn ( CoreProgram )\nimport Name\nimport NameEnv\nimport NameSet\nimport VarEnv\nimport VarSet\nimport Var\nimport Id\nimport IdInfo ( IdDetails(..) )\nimport Type\n\nimport Annotations ( Annotation, AnnEnv, mkAnnEnv, plusAnnEnv )\nimport Class\nimport TyCon\nimport CoAxiom\nimport ConLike\nimport DataCon\nimport PatSyn\nimport PrelNames ( gHC_PRIM, ioTyConName, printName, mkInteractiveModule )\nimport Packages hiding ( Version(..) )\nimport DynFlags\nimport DriverPhases ( Phase, HscSource(..), isHsBoot, hscSourceString )\nimport BasicTypes\nimport IfaceSyn\nimport CoreSyn ( CoreRule, CoreVect )\nimport Maybes\nimport Outputable\nimport BreakArray\nimport SrcLoc\n-- import Unique\nimport UniqFM\nimport UniqSupply\nimport FastString\nimport StringBuffer ( StringBuffer )\nimport Fingerprint\nimport MonadUtils\nimport Bag\nimport Binary\nimport ErrUtils\nimport Platform\nimport Util\n\nimport Control.Monad ( guard, liftM, when, ap )\nimport Data.Array ( Array, array )\nimport Data.IORef\nimport Data.Time\nimport Data.Word\nimport Data.Typeable ( Typeable )\nimport Exception\nimport System.FilePath\n\n-- -----------------------------------------------------------------------------\n-- Compilation state\n-- -----------------------------------------------------------------------------\n\n-- | Status of a compilation to hard-code\ndata HscStatus\n = HscNotGeneratingCode\n | HscUpToDate\n | HscUpdateBoot\n | HscRecomp CgGuts ModSummary\n\n-- -----------------------------------------------------------------------------\n-- The Hsc monad: Passing an environment and warning 
state\n\nnewtype Hsc a = Hsc (HscEnv -> WarningMessages -> IO (a, WarningMessages))\n\ninstance Functor Hsc where\n fmap = liftM\n\ninstance Applicative Hsc where\n pure = return\n (<*>) = ap\n\ninstance Monad Hsc where\n return a = Hsc $ \\_ w -> return (a, w)\n Hsc m >>= k = Hsc $ \\e w -> do (a, w1) <- m e w\n case k a of\n Hsc k' -> k' e w1\n\ninstance MonadIO Hsc where\n liftIO io = Hsc $ \\_ w -> do a <- io; return (a, w)\n\ninstance HasDynFlags Hsc where\n getDynFlags = Hsc $ \\e w -> return (hsc_dflags e, w)\n\nrunHsc :: HscEnv -> Hsc a -> IO a\nrunHsc hsc_env (Hsc hsc) = do\n (a, w) <- hsc hsc_env emptyBag\n printOrThrowWarnings (hsc_dflags hsc_env) w\n return a\n\nrunInteractiveHsc :: HscEnv -> Hsc a -> IO a\n-- A variant of runHsc that switches in the DynFlags from the\n-- InteractiveContext before running the Hsc computation.\nrunInteractiveHsc hsc_env\n = runHsc (hsc_env { hsc_dflags = interactive_dflags })\n where\n interactive_dflags = ic_dflags (hsc_IC hsc_env)\n\n-- -----------------------------------------------------------------------------\n-- Source Errors\n\n-- When the compiler (HscMain) discovers errors, it throws an\n-- exception in the IO monad.\n\nmkSrcErr :: ErrorMessages -> SourceError\nmkSrcErr = SourceError\n\nsrcErrorMessages :: SourceError -> ErrorMessages\nsrcErrorMessages (SourceError msgs) = msgs\n\nmkApiErr :: DynFlags -> SDoc -> GhcApiError\nmkApiErr dflags msg = GhcApiError (showSDoc dflags msg)\n\nthrowOneError :: MonadIO m => ErrMsg -> m ab\nthrowOneError err = liftIO $ throwIO $ mkSrcErr $ unitBag err\n\n-- | A source error is an error that is caused by one or more errors in the\n-- source code. A 'SourceError' is thrown by many functions in the\n-- compilation pipeline. Inside GHC these errors are merely printed via\n-- 'log_action', but API clients may treat them differently, for example,\n-- insert them into a list box. 
If you want the default behaviour, use the\n-- idiom:\n--\n-- > handleSourceError printExceptionAndWarnings $ do\n-- > ... api calls that may fail ...\n--\n-- The 'SourceError's error messages can be accessed via 'srcErrorMessages'.\n-- This list may be empty if the compiler failed due to @-Werror@\n-- ('Opt_WarnIsError').\n--\n-- See 'printExceptionAndWarnings' for more information on what to take care\n-- of when writing a custom error handler.\nnewtype SourceError = SourceError ErrorMessages\n deriving Typeable\n\ninstance Show SourceError where\n show (SourceError msgs) = unlines . map show . bagToList $ msgs\n\ninstance Exception SourceError\n\n-- | Perform the given action and call the exception handler if the action\n-- throws a 'SourceError'. See 'SourceError' for more information.\nhandleSourceError :: (ExceptionMonad m) =>\n (SourceError -> m a) -- ^ exception handler\n -> m a -- ^ action to perform\n -> m a\nhandleSourceError handler act =\n gcatch act (\\(e :: SourceError) -> handler e)\n\n-- | An error thrown if the GHC API is used in an incorrect fashion.\nnewtype GhcApiError = GhcApiError String\n deriving Typeable\n\ninstance Show GhcApiError where\n show (GhcApiError msg) = msg\n\ninstance Exception GhcApiError\n\n-- | Given a bag of warnings, turn them into an exception if\n-- -Werror is enabled, or print them out otherwise.\nprintOrThrowWarnings :: DynFlags -> Bag WarnMsg -> IO ()\nprintOrThrowWarnings dflags warns\n | gopt Opt_WarnIsError dflags\n = when (not (isEmptyBag warns)) $ do\n throwIO $ mkSrcErr $ warns `snocBag` warnIsErrorMsg dflags\n | otherwise\n = printBagOfErrors dflags warns\n\nhandleFlagWarnings :: DynFlags -> [Located String] -> IO ()\nhandleFlagWarnings dflags warns\n = when (wopt Opt_WarnDeprecatedFlags dflags) $ do\n -- It would be nicer if warns :: [Located MsgDoc], but that\n -- has circular import problems.\n let bag = listToBag [ mkPlainWarnMsg dflags loc (text warn)\n | L loc warn <- warns ]\n\n printOrThrowWarnings 
dflags bag\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{HscEnv}\n%* *\n%************************************************************************\n\n\\begin{code}\n\n-- | 'HscEnv' is like 'Session', except that some of the fields are immutable.\n-- An HscEnv is used to compile a single module from plain Haskell source\n-- code (after preprocessing) to either C, assembly or C--. Things like\n-- the module graph don't change during a single compilation.\n--\n-- Historical note: \\\"hsc\\\" used to be the name of the compiler binary,\n-- when there was a separate driver and compiler. To compile a single\n-- module, the driver would invoke hsc on the source code... so nowadays\n-- we think of hsc as the layer of the compiler that deals with compiling\n-- a single module.\ndata HscEnv\n = HscEnv {\n hsc_dflags :: DynFlags,\n -- ^ The dynamic flag settings\n\n hsc_targets :: [Target],\n -- ^ The targets (or roots) of the current session\n\n hsc_mod_graph :: ModuleGraph,\n -- ^ The module graph of the current session\n\n hsc_IC :: InteractiveContext,\n -- ^ The context for evaluating interactive statements\n\n hsc_HPT :: HomePackageTable,\n -- ^ The home package table describes already-compiled\n -- home-package modules, \/excluding\/ the module we\n -- are compiling right now.\n -- (In one-shot mode the current module is the only\n -- home-package module, so hsc_HPT is empty. 
All other\n -- modules count as \\\"external-package\\\" modules.\n -- However, even in GHCi mode, hi-boot interfaces are\n -- demand-loaded into the external-package table.)\n --\n -- 'hsc_HPT' is not mutable because we only demand-load\n -- external packages; the home package is eagerly\n -- loaded, module by module, by the compilation manager.\n --\n -- The HPT may contain modules compiled earlier by @--make@\n -- but not actually below the current module in the dependency\n -- graph.\n --\n -- (This changes a previous invariant: changed Jan 05.)\n\n hsc_EPS :: {-# UNPACK #-} !(IORef ExternalPackageState),\n -- ^ Information about the currently loaded external packages.\n -- This is mutable because packages will be demand-loaded during\n -- a compilation run as required.\n\n hsc_NC :: {-# UNPACK #-} !(IORef NameCache),\n -- ^ As with 'hsc_EPS', this is side-effected by compiling to\n -- reflect sucking in interface files. They cache the state of\n -- external interface files, in effect.\n\n hsc_FC :: {-# UNPACK #-} !(IORef FinderCache),\n -- ^ The cached result of performing finding in the file system\n hsc_MLC :: {-# UNPACK #-} !(IORef ModLocationCache),\n -- ^ This caches the location of modules, so we don't have to\n -- search the filesystem multiple times. See also 'hsc_FC'.\n\n hsc_type_env_var :: Maybe (Module, IORef TypeEnv)\n -- ^ Used for one-shot compilation only, to initialise\n -- the 'IfGblEnv'. See 'TcRnTypes.tcg_type_env_var' for\n -- 'TcRunTypes.TcGblEnv'\n }\n\ninstance ContainsDynFlags HscEnv where\n extractDynFlags env = hsc_dflags env\n replaceDynFlags env dflags = env {hsc_dflags = dflags}\n\n-- | Retrieve the ExternalPackageState cache.\nhscEPS :: HscEnv -> IO ExternalPackageState\nhscEPS hsc_env = readIORef (hsc_EPS hsc_env)\n\n-- | A compilation target.\n--\n-- A target may be supplied with the actual text of the\n-- module. 
If so, use this instead of the file contents (this\n-- is for use in an IDE where the file hasn't been saved by\n-- the user yet).\ndata Target\n = Target {\n targetId :: TargetId, -- ^ module or filename\n targetAllowObjCode :: Bool, -- ^ object code allowed?\n targetContents :: Maybe (StringBuffer,UTCTime)\n -- ^ in-memory text buffer?\n }\n\ndata TargetId\n = TargetModule ModuleName\n -- ^ A module name: search for the file\n | TargetFile FilePath (Maybe Phase)\n -- ^ A filename: preprocess & parse it to find the module name.\n -- If specified, the Phase indicates how to compile this file\n -- (which phase to start from). Nothing indicates the starting phase\n -- should be determined from the suffix of the filename.\n deriving Eq\n\npprTarget :: Target -> SDoc\npprTarget (Target id obj _) =\n (if obj then char '*' else empty) <> pprTargetId id\n\ninstance Outputable Target where\n ppr = pprTarget\n\npprTargetId :: TargetId -> SDoc\npprTargetId (TargetModule m) = ppr m\npprTargetId (TargetFile f _) = text f\n\ninstance Outputable TargetId where\n ppr = pprTargetId\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Package and Module Tables}\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | Helps us find information about modules in the home package\ntype HomePackageTable = ModuleNameEnv HomeModInfo\n -- Domain = modules in the home package that have been fully compiled\n -- \"home\" package key cached here for convenience\n\n-- | Helps us find information about modules in the imported packages\ntype PackageIfaceTable = ModuleEnv ModIface\n -- Domain = modules in the imported packages\n\n-- | Constructs an empty HomePackageTable\nemptyHomePackageTable :: HomePackageTable\nemptyHomePackageTable = emptyUFM\n\n-- | Constructs an empty PackageIfaceTable\nemptyPackageIfaceTable :: PackageIfaceTable\nemptyPackageIfaceTable = emptyModuleEnv\n\npprHPT :: 
HomePackageTable -> SDoc\n-- A bit arbitrary for now\npprHPT hpt\n = vcat [ hang (ppr (mi_module (hm_iface hm)))\n 2 (ppr (md_types (hm_details hm)))\n | hm <- eltsUFM hpt ]\n\nlookupHptByModule :: HomePackageTable -> Module -> Maybe HomeModInfo\n-- The HPT is indexed by ModuleName, not Module, so\n-- we must check for a hit on the right Module\nlookupHptByModule hpt mod\n = case lookupUFM hpt (moduleName mod) of\n Just hm | mi_module (hm_iface hm) == mod -> Just hm\n _otherwise -> Nothing\n\n-- | Information about modules in the package being compiled\ndata HomeModInfo\n = HomeModInfo {\n hm_iface :: !ModIface,\n -- ^ The basic loaded interface file: every loaded module has one of\n -- these, even if it is imported from another package\n hm_details :: !ModDetails,\n -- ^ Extra information that has been created from the 'ModIface' for\n -- the module, typically during typechecking\n hm_linkable :: !(Maybe Linkable)\n -- ^ The actual artifact we would like to link to access things in\n -- this module.\n --\n -- 'hm_linkable' might be Nothing:\n --\n -- 1. If this is an .hs-boot module\n --\n -- 2. 
Temporarily during compilation if we pruned away\n -- the old linkable because it was out of date.\n --\n -- After a complete compilation ('GHC.load'), all 'hm_linkable' fields\n -- in the 'HomePackageTable' will be @Just@.\n --\n -- When re-linking a module ('HscMain.HscNoRecomp'), we construct the\n -- 'HomeModInfo' by building a new 'ModDetails' from the old\n -- 'ModIface' (only).\n }\n\n-- | Find the 'ModIface' for a 'Module', searching in both the loaded home\n-- and external package module information\nlookupIfaceByModule\n :: DynFlags\n -> HomePackageTable\n -> PackageIfaceTable\n -> Module\n -> Maybe ModIface\nlookupIfaceByModule _dflags hpt pit mod\n = case lookupHptByModule hpt mod of\n Just hm -> Just (hm_iface hm)\n Nothing -> lookupModuleEnv pit mod\n\n-- If the module does come from the home package, why do we look in the PIT as well?\n-- (a) In OneShot mode, even home-package modules accumulate in the PIT\n-- (b) Even in Batch (--make) mode, there is *one* case where a home-package\n-- module is in the PIT, namely GHC.Prim when compiling the base package.\n-- We could eliminate (b) if we wanted, by making GHC.Prim belong to a package\n-- of its own, but it doesn't seem worth the bother.\n\n\n-- | Find all the instance declarations (of classes and families) from\n-- the Home Package Table filtered by the provided predicate function.\n-- Used in @tcRnImports@, to select the instances that are in the\n-- transitive closure of imports from the currently compiled module.\nhptInstances :: HscEnv -> (ModuleName -> Bool) -> ([ClsInst], [FamInst])\nhptInstances hsc_env want_this_module\n = let (insts, famInsts) = unzip $ flip hptAllThings hsc_env $ \\mod_info -> do\n guard (want_this_module (moduleName (mi_module (hm_iface mod_info))))\n let details = hm_details mod_info\n return (md_insts details, md_fam_insts details)\n in (concat insts, concat famInsts)\n\n-- | Get the combined VectInfo of all modules in the home package table. 
In\n-- contrast to instances and rules, we don't care whether the modules are\n-- \"below\" us in the dependency sense. The VectInfo of those modules not \"below\"\n-- us does not affect the compilation of the current module.\nhptVectInfo :: HscEnv -> VectInfo\nhptVectInfo = concatVectInfo . hptAllThings ((: []) . md_vect_info . hm_details)\n\n-- | Get rules from modules \"below\" this one (in the dependency sense)\nhptRules :: HscEnv -> [(ModuleName, IsBootInterface)] -> [CoreRule]\nhptRules = hptSomeThingsBelowUs (md_rules . hm_details) False\n\n\n-- | Get annotations from modules \"below\" this one (in the dependency sense)\nhptAnns :: HscEnv -> Maybe [(ModuleName, IsBootInterface)] -> [Annotation]\nhptAnns hsc_env (Just deps) = hptSomeThingsBelowUs (md_anns . hm_details) False hsc_env deps\nhptAnns hsc_env Nothing = hptAllThings (md_anns . hm_details) hsc_env\n\nhptAllThings :: (HomeModInfo -> [a]) -> HscEnv -> [a]\nhptAllThings extract hsc_env = concatMap extract (eltsUFM (hsc_HPT hsc_env))\n\n-- | Get things from modules \"below\" this one (in the dependency sense)\n-- C.f Inst.hptInstances\nhptSomeThingsBelowUs :: (HomeModInfo -> [a]) -> Bool -> HscEnv -> [(ModuleName, IsBootInterface)] -> [a]\nhptSomeThingsBelowUs extract include_hi_boot hsc_env deps\n | isOneShot (ghcMode (hsc_dflags hsc_env)) = []\n\n | otherwise\n = let hpt = hsc_HPT hsc_env\n in\n [ thing\n | -- Find each non-hi-boot module below me\n (mod, is_boot_mod) <- deps\n , include_hi_boot || not is_boot_mod\n\n -- unsavoury: when compiling the base package with --make, we\n -- sometimes try to look up RULES etc for GHC.Prim. GHC.Prim won't\n -- be in the HPT, because we never compile it; it's in the EPT\n -- instead. 
ToDo: clean up, and remove this slightly bogus filter:\n , mod \/= moduleName gHC_PRIM\n\n -- Look it up in the HPT\n , let things = case lookupUFM hpt mod of\n Just info -> extract info\n Nothing -> pprTrace \"WARNING in hptSomeThingsBelowUs\" msg []\n msg = vcat [ptext (sLit \"missing module\") <+> ppr mod,\n ptext (sLit \"Probable cause: out-of-date interface files\")]\n -- This really shouldn't happen, but see Trac #962\n\n -- And get its dfuns\n , thing <- things ]\n\nhptObjs :: HomePackageTable -> [FilePath]\nhptObjs hpt = concat (map (maybe [] linkableObjs . hm_linkable) (eltsUFM hpt))\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Dealing with Annotations}\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | Deal with gathering annotations in from all possible places\n-- and combining them into a single 'AnnEnv'\nprepareAnnotations :: HscEnv -> Maybe ModGuts -> IO AnnEnv\nprepareAnnotations hsc_env mb_guts = do\n eps <- hscEPS hsc_env\n let -- Extract annotations from the module being compiled if supplied one\n mb_this_module_anns = fmap (mkAnnEnv . mg_anns) mb_guts\n -- Extract dependencies of the module if we are supplied one,\n -- otherwise load annotations from all home package table\n -- entries regardless of dependency ordering.\n home_pkg_anns = (mkAnnEnv . hptAnns hsc_env) $ fmap (dep_mods . mg_deps) mb_guts\n other_pkg_anns = eps_ann_env eps\n ann_env = foldl1' plusAnnEnv $ catMaybes [mb_this_module_anns,\n Just home_pkg_anns,\n Just other_pkg_anns]\n return ann_env\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{The Finder cache}\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | The 'FinderCache' maps home module names to the result of\n-- searching for that module. 
It records the results of searching for\n-- modules along the search path. On @:load@, we flush the entire\n-- contents of this cache.\n--\n-- Although the @FinderCache@ range is 'FindResult' for convenience,\n-- in fact it will only ever contain 'Found' or 'NotFound' entries.\n--\ntype FinderCache = ModuleNameEnv FindResult\n\n-- | The result of searching for an imported module.\ndata FindResult\n = Found ModLocation Module\n -- ^ The module was found\n | NoPackage PackageKey\n -- ^ The requested package was not found\n | FoundMultiple [(Module, ModuleOrigin)]\n -- ^ _Error_: both in multiple packages\n\n -- | Not found\n | NotFound\n { fr_paths :: [FilePath] -- Places where I looked\n\n , fr_pkg :: Maybe PackageKey -- Just p => module is in this package's\n -- manifest, but couldn't find\n -- the .hi file\n\n , fr_mods_hidden :: [PackageKey] -- Module is in these packages,\n -- but the *module* is hidden\n\n , fr_pkgs_hidden :: [PackageKey] -- Module is in these packages,\n -- but the *package* is hidden\n\n , fr_suggestions :: [ModuleSuggestion] -- Possible mis-spelled modules\n }\n\n-- | Cache that remembers where we found a particular module. Contains both\n-- home modules and package modules. On @:load@, only home modules are\n-- purged from this cache.\ntype ModLocationCache = ModuleEnv ModLocation\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Symbol tables and Module details}\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | A 'ModIface' plus a 'ModDetails' summarises everything we know\n-- about a compiled module. The 'ModIface' is the stuff *before* linking,\n-- and can be written out to an interface file. 
The 'ModDetails' is after\n-- linking and can be completely recovered from just the 'ModIface'.\n--\n-- When we read an interface file, we also construct a 'ModIface' from it,\n-- except that we explicitly make the 'mi_decls' and a few other fields empty;\n-- as when reading we consolidate the declarations etc. into a number of indexed\n-- maps and environments in the 'ExternalPackageState'.\ndata ModIface\n = ModIface {\n mi_module :: !Module, -- ^ Name of the module we are for\n mi_iface_hash :: !Fingerprint, -- ^ Hash of the whole interface\n mi_mod_hash :: !Fingerprint, -- ^ Hash of the ABI only\n mi_flag_hash :: !Fingerprint, -- ^ Hash of the important flags\n -- used when compiling this module\n\n mi_orphan :: !WhetherHasOrphans, -- ^ Whether this module has orphans\n mi_finsts :: !WhetherHasFamInst, -- ^ Whether this module has family instances\n mi_boot :: !IsBootInterface, -- ^ Read from an hi-boot file?\n\n mi_deps :: Dependencies,\n -- ^ The dependencies of the module. This is\n -- consulted for directly-imported modules, but not\n -- for anything else (hence lazy)\n\n mi_usages :: [Usage],\n -- ^ Usages; kept sorted so that it's easy to decide\n -- whether to write a new iface file (changing usages\n -- doesn't affect the hash of this module)\n -- NOT STRICT! we read this field lazily from the interface file\n -- It is *only* consulted by the recompilation checker\n\n mi_exports :: ![IfaceExport],\n -- ^ Exports\n -- Kept sorted by (mod,occ), to make version comparisons easier\n -- Records the modules that are the declaration points for things\n -- exported by this module, and the 'OccName's of those things\n\n mi_exp_hash :: !Fingerprint,\n -- ^ Hash of export list\n\n mi_used_th :: !Bool,\n -- ^ Module required TH splices when it was compiled.\n -- This disables recompilation avoidance (see #481).\n\n mi_fixities :: [(OccName,Fixity)],\n -- ^ Fixities\n -- NOT STRICT! 
we read this field lazily from the interface file\n\n mi_warns :: Warnings,\n -- ^ Warnings\n -- NOT STRICT! we read this field lazily from the interface file\n\n mi_anns :: [IfaceAnnotation],\n -- ^ Annotations\n -- NOT STRICT! we read this field lazily from the interface file\n\n\n mi_decls :: [(Fingerprint,IfaceDecl)],\n -- ^ Type, class and variable declarations\n -- The hash of an Id changes if its fixity or deprecations change\n -- (as well as its type of course)\n -- Ditto data constructors, class operations, except that\n -- the hash of the parent class\/tycon changes\n\n mi_globals :: !(Maybe GlobalRdrEnv),\n -- ^ Binds all the things defined at the top level in\n -- the \/original source\/ code for this module, which\n -- is NOT the same as mi_exports, nor mi_decls (which\n -- may contain declarations for things not actually\n -- defined by the user). Used for GHCi and for inspecting\n -- the contents of modules via the GHC API only.\n --\n -- (We need the source file to figure out the\n -- top-level environment, if we didn't compile this module\n -- from source then this field contains @Nothing@).\n --\n -- Strictly speaking this field should live in the\n -- 'HomeModInfo', but that leads to more plumbing.\n\n -- Instance declarations and rules\n mi_insts :: [IfaceClsInst], -- ^ Sorted class instances\n mi_fam_insts :: [IfaceFamInst], -- ^ Sorted family instances\n mi_rules :: [IfaceRule], -- ^ Sorted rules\n mi_orphan_hash :: !Fingerprint, -- ^ Hash for orphan rules, class and family\n -- instances, and vectorise pragmas combined\n\n mi_vect_info :: !IfaceVectInfo, -- ^ Vectorisation information\n\n -- Cached environments for easy lookup\n -- These are computed (lazily) from other fields\n -- and are not put into the interface file\n mi_warn_fn :: Name -> Maybe WarningTxt, -- ^ Cached lookup for 'mi_warns'\n mi_fix_fn :: OccName -> Fixity, -- ^ Cached lookup for 'mi_fixities'\n mi_hash_fn :: OccName -> Maybe (OccName, Fingerprint),\n -- ^ Cached lookup 
for 'mi_decls'.\n -- The @Nothing@ in 'mi_hash_fn' means that the thing\n -- isn't in decls. It's useful to know that when\n -- seeing if we are up to date wrt. the old interface.\n -- The 'OccName' is the parent of the name, if it has one.\n\n mi_hpc :: !AnyHpcUsage,\n -- ^ True if this program uses Hpc at any point in the program.\n\n mi_trust :: !IfaceTrustInfo,\n -- ^ Safe Haskell Trust information for this module.\n\n mi_trust_pkg :: !Bool\n -- ^ Do we require the package this module resides in be trusted\n -- to trust this module? This is used for the situation where a\n -- module is Safe (so doesn't require the package be trusted\n -- itself) but imports some trustworthy modules from its own\n -- package (which does require its own package be trusted).\n -- See Note [RnNames . Trust Own Package]\n }\n\ninstance Binary ModIface where\n put_ bh (ModIface {\n mi_module = mod,\n mi_boot = is_boot,\n mi_iface_hash= iface_hash,\n mi_mod_hash = mod_hash,\n mi_flag_hash = flag_hash,\n mi_orphan = orphan,\n mi_finsts = hasFamInsts,\n mi_deps = deps,\n mi_usages = usages,\n mi_exports = exports,\n mi_exp_hash = exp_hash,\n mi_used_th = used_th,\n mi_fixities = fixities,\n mi_warns = warns,\n mi_anns = anns,\n mi_decls = decls,\n mi_insts = insts,\n mi_fam_insts = fam_insts,\n mi_rules = rules,\n mi_orphan_hash = orphan_hash,\n mi_vect_info = vect_info,\n mi_hpc = hpc_info,\n mi_trust = trust,\n mi_trust_pkg = trust_pkg }) = do\n put_ bh mod\n put_ bh is_boot\n put_ bh iface_hash\n put_ bh mod_hash\n put_ bh flag_hash\n put_ bh orphan\n put_ bh hasFamInsts\n lazyPut bh deps\n lazyPut bh usages\n put_ bh exports\n put_ bh exp_hash\n put_ bh used_th\n put_ bh fixities\n lazyPut bh warns\n lazyPut bh anns\n put_ bh decls\n put_ bh insts\n put_ bh fam_insts\n lazyPut bh rules\n put_ bh orphan_hash\n put_ bh vect_info\n put_ bh hpc_info\n put_ bh trust\n put_ bh trust_pkg\n\n get bh = do\n mod_name <- get bh\n is_boot <- get bh\n iface_hash <- get bh\n mod_hash <- get bh\n 
flag_hash <- get bh\n orphan <- get bh\n hasFamInsts <- get bh\n deps <- lazyGet bh\n usages <- {-# SCC \"bin_usages\" #-} lazyGet bh\n exports <- {-# SCC \"bin_exports\" #-} get bh\n exp_hash <- get bh\n used_th <- get bh\n fixities <- {-# SCC \"bin_fixities\" #-} get bh\n warns <- {-# SCC \"bin_warns\" #-} lazyGet bh\n anns <- {-# SCC \"bin_anns\" #-} lazyGet bh\n decls <- {-# SCC \"bin_tycldecls\" #-} get bh\n insts <- {-# SCC \"bin_insts\" #-} get bh\n fam_insts <- {-# SCC \"bin_fam_insts\" #-} get bh\n rules <- {-# SCC \"bin_rules\" #-} lazyGet bh\n orphan_hash <- get bh\n vect_info <- get bh\n hpc_info <- get bh\n trust <- get bh\n trust_pkg <- get bh\n return (ModIface {\n mi_module = mod_name,\n mi_boot = is_boot,\n mi_iface_hash = iface_hash,\n mi_mod_hash = mod_hash,\n mi_flag_hash = flag_hash,\n mi_orphan = orphan,\n mi_finsts = hasFamInsts,\n mi_deps = deps,\n mi_usages = usages,\n mi_exports = exports,\n mi_exp_hash = exp_hash,\n mi_used_th = used_th,\n mi_anns = anns,\n mi_fixities = fixities,\n mi_warns = warns,\n mi_decls = decls,\n mi_globals = Nothing,\n mi_insts = insts,\n mi_fam_insts = fam_insts,\n mi_rules = rules,\n mi_orphan_hash = orphan_hash,\n mi_vect_info = vect_info,\n mi_hpc = hpc_info,\n mi_trust = trust,\n mi_trust_pkg = trust_pkg,\n -- And build the cached values\n mi_warn_fn = mkIfaceWarnCache warns,\n mi_fix_fn = mkIfaceFixCache fixities,\n mi_hash_fn = mkIfaceHashCache decls })\n\n-- | The original names declared of a certain module that are exported\ntype IfaceExport = AvailInfo\n\n-- | Constructs an empty ModIface\nemptyModIface :: Module -> ModIface\nemptyModIface mod\n = ModIface { mi_module = mod,\n mi_iface_hash = fingerprint0,\n mi_mod_hash = fingerprint0,\n mi_flag_hash = fingerprint0,\n mi_orphan = False,\n mi_finsts = False,\n mi_boot = False,\n mi_deps = noDependencies,\n mi_usages = [],\n mi_exports = [],\n mi_exp_hash = fingerprint0,\n mi_used_th = False,\n mi_fixities = [],\n mi_warns = NoWarnings,\n mi_anns = [],\n 
mi_insts = [],\n mi_fam_insts = [],\n mi_rules = [],\n mi_decls = [],\n mi_globals = Nothing,\n mi_orphan_hash = fingerprint0,\n mi_vect_info = noIfaceVectInfo,\n mi_warn_fn = emptyIfaceWarnCache,\n mi_fix_fn = emptyIfaceFixCache,\n mi_hash_fn = emptyIfaceHashCache,\n mi_hpc = False,\n mi_trust = noIfaceTrustInfo,\n mi_trust_pkg = False }\n\n\n-- | Constructs cache for the 'mi_hash_fn' field of a 'ModIface'\nmkIfaceHashCache :: [(Fingerprint,IfaceDecl)]\n -> (OccName -> Maybe (OccName, Fingerprint))\nmkIfaceHashCache pairs\n = \\occ -> lookupOccEnv env occ\n where\n env = foldr add_decl emptyOccEnv pairs\n add_decl (v,d) env0 = foldr add env0 (ifaceDeclFingerprints v d)\n where\n add (occ,hash) env0 = extendOccEnv env0 occ (occ,hash)\n\nemptyIfaceHashCache :: OccName -> Maybe (OccName, Fingerprint)\nemptyIfaceHashCache _occ = Nothing\n\n\n-- | The 'ModDetails' is essentially a cache for information in the 'ModIface'\n-- for home modules only. Information relating to packages will be loaded into\n-- global environments in 'ExternalPackageState'.\ndata ModDetails\n = ModDetails {\n -- The next two fields are created by the typechecker\n md_exports :: [AvailInfo],\n md_types :: !TypeEnv, -- ^ Local type environment for this particular module\n -- Includes Ids, TyCons, PatSyns\n md_insts :: ![ClsInst], -- ^ 'DFunId's for the instances in this module\n md_fam_insts :: ![FamInst],\n md_rules :: ![CoreRule], -- ^ Domain may include 'Id's from other modules\n md_anns :: ![Annotation], -- ^ Annotations present in this module: currently\n -- they only annotate things also declared in this module\n md_vect_info :: !VectInfo -- ^ Module vectorisation information\n }\n\n-- | Constructs an empty ModDetails\nemptyModDetails :: ModDetails\nemptyModDetails\n = ModDetails { md_types = emptyTypeEnv,\n md_exports = [],\n md_insts = [],\n md_rules = [],\n md_fam_insts = [],\n md_anns = [],\n md_vect_info = noVectInfo }\n\n-- | Records the modules directly imported by a module for 
extracting e.g. usage information\ntype ImportedMods = ModuleEnv [ImportedModsVal]\ntype ImportedModsVal = (ModuleName, Bool, SrcSpan, IsSafeImport)\n\n-- | A ModGuts is carried through the compiler, accumulating stuff as it goes\n-- There is only one ModGuts at any time, the one for the module\n-- being compiled right now. Once it is compiled, a 'ModIface' and\n-- 'ModDetails' are extracted and the ModGuts is discarded.\ndata ModGuts\n = ModGuts {\n mg_module :: !Module, -- ^ Module being compiled\n mg_boot :: IsBootInterface, -- ^ Whether it's an hs-boot module\n mg_exports :: ![AvailInfo], -- ^ What it exports\n mg_deps :: !Dependencies, -- ^ What it depends on, directly or\n -- otherwise\n mg_dir_imps :: !ImportedMods, -- ^ Directly-imported modules; used to\n -- generate initialisation code\n mg_used_names:: !NameSet, -- ^ What the module needed (used in 'MkIface.mkIface')\n\n mg_used_th :: !Bool, -- ^ Did we run a TH splice?\n mg_rdr_env :: !GlobalRdrEnv, -- ^ Top-level lexical environment\n\n -- These fields all describe the things **declared in this module**\n mg_fix_env :: !FixityEnv, -- ^ Fixities declared in this module.\n -- Used for creating interface files.\n mg_tcs :: ![TyCon], -- ^ TyCons declared in this module\n -- (includes TyCons for classes)\n mg_insts :: ![ClsInst], -- ^ Class instances declared in this module\n mg_fam_insts :: ![FamInst],\n -- ^ Family instances declared in this module\n mg_patsyns :: ![PatSyn], -- ^ Pattern synonyms declared in this module\n mg_rules :: ![CoreRule], -- ^ Before the core pipeline starts, contains\n -- See Note [Overall plumbing for rules] in Rules.lhs\n mg_binds :: !CoreProgram, -- ^ Bindings for this module\n mg_foreign :: !ForeignStubs, -- ^ Foreign exports declared in this module\n mg_warns :: !Warnings, -- ^ Warnings declared in the module\n mg_anns :: [Annotation], -- ^ Annotations declared in this module\n mg_hpc_info :: !HpcInfo, -- ^ Coverage tick boxes in the module\n mg_modBreaks :: !ModBreaks, -- ^ 
Breakpoints for the module\n mg_vect_decls:: ![CoreVect], -- ^ Vectorisation declarations in this module\n -- (produced by desugarer & consumed by vectoriser)\n mg_vect_info :: !VectInfo, -- ^ Pool of vectorised declarations in the module\n\n -- The next two fields are unusual, because they give instance\n -- environments for *all* modules in the home package, including\n -- this module, rather than for *just* this module.\n -- Reason: when looking up an instance we don't want to have to\n -- look at each module in the home package in turn\n mg_inst_env :: InstEnv,\n -- ^ Class instance environment from \/home-package\/ modules (including\n -- this one); c.f. 'tcg_inst_env'\n mg_fam_inst_env :: FamInstEnv,\n -- ^ Type-family instance environment for \/home-package\/ modules\n -- (including this one); c.f. 'tcg_fam_inst_env'\n mg_safe_haskell :: SafeHaskellMode,\n -- ^ Safe Haskell mode\n mg_trust_pkg :: Bool,\n -- ^ Do we need to trust our own package for Safe Haskell?\n -- See Note [RnNames . Trust Own Package]\n mg_dependent_files :: [FilePath] -- ^ dependencies from addDependentFile\n }\n\n-- The ModGuts takes on several slightly different forms:\n--\n-- After simplification, the following fields change slightly:\n-- mg_rules Orphan rules only (local ones now attached to binds)\n-- mg_binds With rules attached\n\n\n---------------------------------------------------------\n-- The Tidy pass forks the information about this module:\n-- * one lot goes to interface file generation (ModIface)\n-- and later compilations (ModDetails)\n-- * the other lot goes to code generation (CgGuts)\n\n-- | A restricted form of 'ModGuts' for code generation purposes\ndata CgGuts\n = CgGuts {\n cg_module :: !Module,\n -- ^ Module being compiled\n\n cg_tycons :: [TyCon],\n -- ^ Algebraic data types (including ones that started\n -- life as classes); generate constructors and info\n -- tables. 
Includes newtypes, just for the benefit of\n -- External Core\n\n cg_binds :: CoreProgram,\n -- ^ The tidied main bindings, including\n -- previously-implicit bindings for record and class\n -- selectors, and data constructor wrappers. But *not*\n -- data constructor workers; reason: we regard them\n -- as part of the code-gen of tycons\n\n cg_foreign :: !ForeignStubs, -- ^ Foreign export stubs\n cg_dep_pkgs :: ![PackageKey], -- ^ Dependent packages, used to\n -- generate #includes for C code gen\n cg_hpc_info :: !HpcInfo, -- ^ Program coverage tick box information\n cg_modBreaks :: !ModBreaks -- ^ Module breakpoints\n }\n\n-----------------------------------\n-- | Foreign export stubs\ndata ForeignStubs\n = NoStubs\n -- ^ We don't have any stubs\n | ForeignStubs SDoc SDoc\n -- ^ There are some stubs. Parameters:\n --\n -- 1) Header file prototypes for\n -- \"foreign exported\" functions\n --\n -- 2) C stubs to use when calling\n -- \"foreign exported\" functions\n\nappendStubC :: ForeignStubs -> SDoc -> ForeignStubs\nappendStubC NoStubs c_code = ForeignStubs empty c_code\nappendStubC (ForeignStubs h c) c_code = ForeignStubs h (c $$ c_code)\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{The interactive context}\n%* *\n%************************************************************************\n\nNote [The interactive package]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nType, class, and value declarations at the command prompt are treated \nas if they were defined in modules\n interactive:Ghci1\n interactive:Ghci2\n ...etc...\nwith each bunch of declarations using a new module, all sharing a\ncommon package 'interactive' (see Module.interactivePackageKey, and\nPrelNames.mkInteractiveModule).\n\nThis scheme deals well with shadowing. 
For example:\n\n ghci> data T = A\n ghci> data T = B\n ghci> :i A\n data Ghci1.T = A -- Defined at <interactive>:2:10\n\nHere we must display info about constructor A, but its type T has been\nshadowed by the second declaration. But it has a respectable\nqualified name (Ghci1.T), and its source location says where it was\ndefined.\n\nSo the main invariant continues to hold, that in any session an\noriginal name M.T only refers to one unique thing. (In a previous\niteration both the T's above were called :Interactive.T, albeit with\ndifferent uniques, which gave rise to all sorts of trouble.)\n\nThe details are a bit tricky though:\n\n * The field ic_mod_index counts which Ghci module we've got up to.\n It is incremented when extending ic_tythings\n\n * ic_tythings contains only things from the 'interactive' package.\n\n * Modules from the 'interactive' package (Ghci1, Ghci2 etc) never go\n in the Home Package Table (HPT). When you say :load, that's when we\n extend the HPT.\n\n * The 'thisPackage' field of DynFlags is *not* set to 'interactive'.\n It stays as 'main' (or whatever -this-package-key says), and is the\n package to which :load'ed modules are added.\n\n * So how do we arrange that declarations at the command prompt get\n to be in the 'interactive' package? Simply by setting the tcg_mod\n field of the TcGblEnv to \"interactive:Ghci1\". This is done by the\n call to initTc in initTcInteractive, initTcForLookup, which in \n turn get the module from the 'icInteractiveModule' field of the \n interactive context.\n\n The 'thisPackage' field stays as 'main' (or whatever -this-package-key says).\n\n * The main trickiness is that the type environment (tcg_type_env) and\n fixity envt (tcg_fix_env), and instances (tcg_insts, tcg_fam_insts)\n now contain entities from all the interactive-package modules\n (Ghci1, Ghci2, ...) together, rather than just a single module as\n is usually the case. 
So you can't use \"nameIsLocalOrFrom\" to\n decide whether to look in the TcGblEnv vs the HPT\/PTE. This is a\n change, but not a problem provided you know.\n\n\nNote [Interactively-bound Ids in GHCi]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe Ids bound by previous Stmts in GHCi are currently\n a) GlobalIds\n b) with an Internal Name (not External)\n c) and a tidied type\n\n (a) They must be GlobalIds (not LocalIds) otherwise when we come to\n compile an expression using these ids later, the byte code\n generator will consider the occurrences to be free rather than\n global.\n\n (b) They start with an Internal Name because a Stmt is a local\n construct, so the renamer naturally builds an Internal name for\n each of its binders. It would be possible subsequently to give\n them an External Name (in a GhciN module) but then we'd have\n to substitute it out. So for now they stay Internal.\n\n (c) Their types are tidied. This is important, because :info may ask\n to look at them, and :info expects the things it looks up to have\n tidy types\n\nHowever note that TyCons, Classes, and even Ids bound by other top-level\ndeclarations in GHCi (eg foreign import, record selectors) currently get\nExternal Names, with Ghci9 (or 8, or 7, etc) as the module name. 
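The most-recent-first behaviour of ic_tythings and the shadowing it induces can be sketched with a toy model. This is illustrative only (plain strings stand in for OccNames and TyThings; `ToyEnv`, `extend` and `lookupOcc` are invented names, not GHC's actual representation): extending puts new things at the front, and lookup returns the front-most match, so the latest definition wins.

```haskell
import Data.List (find)

-- A toy stand-in for ic_tythings: (occurrence name, definition site),
-- most recent definition first.
type ToyEnv = [(String, String)]

-- Extending the environment puts new things at the front, so they
-- shadow earlier definitions with the same occurrence name.
extend :: [(String, String)] -> ToyEnv -> ToyEnv
extend new old = new ++ old

-- Lookup finds the front-most (most recent) match, mirroring how the
-- GlobalRdrEnv behaves after repeated extension with shadowing.
lookupOcc :: String -> ToyEnv -> Maybe String
lookupOcc occ = fmap snd . find ((== occ) . fst)

main :: IO ()
main = do
  let env1 = extend [("T", "Ghci1.T")] []   -- ghci> data T = A
      env2 = extend [("T", "Ghci2.T")] env1 -- ghci> data T = B
  print (lookupOcc "T" env2)                -- the shadowing definition wins
```

The shadowed entry is still present further back in the list, which is why :i can still report Ghci1.T by its qualified name even though an unqualified T now means Ghci2.T.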
\n\n\nNote [ic_tythings]\n~~~~~~~~~~~~~~~~~~\nThe ic_tythings field contains\n * The TyThings declared by the user at the command prompt\n (eg Ids, TyCons, Classes)\n\n * The user-visible Ids that arise from such things, which \n *don't* come from 'implicitTyThings', notably:\n - record selectors\n - class ops\n The implicitTyThings are readily obtained from the TyThings\n but record selectors etc are not\n\nIt does *not* contain\n * DFunIds (they can be gotten from ic_instances)\n * CoAxioms (ditto)\n\nSee also Note [Interactively-bound Ids in GHCi]\n\n\n\\begin{code}\n-- | Interactive context, recording information about the state of the\n-- context in which statements are executed in a GHC session.\ndata InteractiveContext\n = InteractiveContext {\n ic_dflags :: DynFlags,\n -- ^ The 'DynFlags' used to evaluate interactive expressions\n -- and statements.\n\n ic_mod_index :: Int,\n -- ^ Each GHCi stmt or declaration brings some new things into\n -- scope. We give them names like interactive:Ghci9.T,\n -- where the ic_mod_index is the '9'. The ic_mod_index is\n -- incremented whenever we add something to ic_tythings\n -- See Note [The interactive package]\n\n ic_imports :: [InteractiveImport],\n -- ^ The GHCi top-level scope (ic_rn_gbl_env) is extended with\n -- these imports\n --\n -- This field is only stored here so that the client\n -- can retrieve it with GHC.getContext. GHC itself doesn't\n -- use it, but does reset it to empty sometimes (such\n -- as before a GHC.load). 
The context is set with GHC.setContext.\n\n ic_tythings :: [TyThing],\n -- ^ TyThings defined by the user, in reverse order of\n -- definition (ie most recent at the front)\n -- See Note [ic_tythings]\n\n ic_rn_gbl_env :: GlobalRdrEnv,\n -- ^ The cached 'GlobalRdrEnv', built by\n -- 'InteractiveEval.setContext' and updated regularly\n -- It contains everything in scope at the command line,\n -- including everything in ic_tythings\n\n ic_instances :: ([ClsInst], [FamInst]),\n -- ^ All instances and family instances created during\n -- this session. These are grabbed en masse after each\n -- update to be sure that proper overlapping is retained.\n -- That is, rather than re-check the overlapping each\n -- time we update the context, we just take the results\n -- from the instance code that already does that.\n\n ic_fix_env :: FixityEnv,\n -- ^ Fixities declared in let statements\n\n ic_default :: Maybe [Type],\n -- ^ The current default types, set by a 'default' declaration\n\n#ifdef GHCI\n ic_resume :: [Resume],\n -- ^ The stack of breakpoint contexts\n#endif\n\n ic_monad :: Name,\n -- ^ The monad that GHCi is executing in\n\n ic_int_print :: Name,\n -- ^ The function that is used for printing results\n -- of expressions in ghci and -e mode.\n\n ic_cwd :: Maybe FilePath\n -- ^ Virtual CWD of the program\n }\n\ndata InteractiveImport\n = IIDecl (ImportDecl RdrName)\n -- ^ Bring the exports of a particular module\n -- (filtered by an import decl) into scope\n\n | IIModule ModuleName\n -- ^ Bring into scope the entire top-level envt\n -- of this module, including the things imported\n -- into it.\n\n\n-- | Constructs an empty InteractiveContext.\nemptyInteractiveContext :: DynFlags -> InteractiveContext\nemptyInteractiveContext dflags\n = InteractiveContext {\n ic_dflags = dflags,\n ic_imports = [],\n ic_rn_gbl_env = emptyGlobalRdrEnv,\n ic_mod_index = 1,\n ic_tythings = [],\n ic_instances = ([],[]),\n ic_fix_env = emptyNameEnv,\n ic_monad = ioTyConName, -- IO monad 
by default\n ic_int_print = printName, -- System.IO.print by default\n ic_default = Nothing,\n#ifdef GHCI\n ic_resume = [],\n#endif\n ic_cwd = Nothing }\n\nicInteractiveModule :: InteractiveContext -> Module\nicInteractiveModule (InteractiveContext { ic_mod_index = index }) \n = mkInteractiveModule index\n\n-- | This function returns the list of visible TyThings (useful for\n-- e.g. showBindings)\nicInScopeTTs :: InteractiveContext -> [TyThing]\nicInScopeTTs = ic_tythings\n\n-- | Get the PrintUnqualified function based on the flags and this InteractiveContext\nicPrintUnqual :: DynFlags -> InteractiveContext -> PrintUnqualified\nicPrintUnqual dflags InteractiveContext{ ic_rn_gbl_env = grenv } =\n mkPrintUnqualified dflags grenv\n\n-- | This function is called with new TyThings recently defined to update the\n-- InteractiveContext to include them. Ids are easily removed when shadowed,\n-- but Classes and TyCons are not. Some work could be done to determine\n-- whether they are entirely shadowed, but as you could still have references\n-- to them (e.g. instances for classes or values of the type for TyCons), it's\n-- not clear whether removing them is even the appropriate behavior.\nextendInteractiveContext :: InteractiveContext -> [TyThing] -> InteractiveContext\nextendInteractiveContext ictxt new_tythings\n | null new_tythings\n = ictxt\n | otherwise\n = ictxt { ic_mod_index = ic_mod_index ictxt + 1\n , ic_tythings = new_tythings ++ old_tythings\n , ic_rn_gbl_env = ic_rn_gbl_env ictxt `icExtendGblRdrEnv` new_tythings\n }\n where\n old_tythings = filter (not . shadowed) (ic_tythings ictxt)\n\n shadowed (AnId id) = ((`elem` new_names) . nameOccName . 
idName) id\n shadowed _ = False\n\n new_names = [ nameOccName (getName id) | AnId id <- new_tythings ]\n\nsetInteractivePackage :: HscEnv -> HscEnv\n-- Set the 'thisPackage' DynFlag to 'interactive'\nsetInteractivePackage hsc_env\n = hsc_env { hsc_dflags = (hsc_dflags hsc_env) { thisPackage = interactivePackageKey } }\n\nsetInteractivePrintName :: InteractiveContext -> Name -> InteractiveContext\nsetInteractivePrintName ic n = ic{ic_int_print = n}\n\n -- ToDo: should not add Ids to the gbl env here\n\n-- | Add TyThings to the GlobalRdrEnv, earlier ones in the list shadowing\n-- later ones, and shadowing existing entries in the GlobalRdrEnv.\nicExtendGblRdrEnv :: GlobalRdrEnv -> [TyThing] -> GlobalRdrEnv\nicExtendGblRdrEnv env tythings\n = foldr add env tythings -- Foldr makes things in the front of\n -- the list shadow things at the back\n where\n add thing env = extendGlobalRdrEnv True {- Shadowing please -} env\n [tyThingAvailInfo thing]\n -- One at a time, to ensure each shadows the previous ones\n\nsubstInteractiveContext :: InteractiveContext -> TvSubst -> InteractiveContext\nsubstInteractiveContext ictxt@InteractiveContext{ ic_tythings = tts } subst\n | isEmptyTvSubst subst = ictxt\n | otherwise = ictxt { ic_tythings = map subst_ty tts }\n where\n subst_ty (AnId id) = AnId $ id `setIdType` substTy subst (idType id)\n subst_ty tt = tt\n\ninstance Outputable InteractiveImport where\n ppr (IIModule m) = char '*' <> ppr m\n ppr (IIDecl d) = ppr d\n\\end{code}\n\n%************************************************************************\n%* *\n Building a PrintUnqualified\n%* *\n%************************************************************************\n\nNote [Printing original names]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nDeciding how to print names is pretty tricky. 
We are given a name\nP:M.T, where P is the package name, M is the defining module, and T is\nthe occurrence name, and we have to decide in which form to display\nthe name given a GlobalRdrEnv describing the current scope.\n\nIdeally we want to display the name in the form in which it is in\nscope. However, the name might not be in scope at all, and that's\nwhere it gets tricky. Here are the cases:\n\n 1. T uniquely maps to P:M.T ---> \"T\" NameUnqual\n 2. There is an X for which X.T\n uniquely maps to P:M.T ---> \"X.T\" NameQual X\n 3. There is no binding for \"M.T\" ---> \"M.T\" NameNotInScope1\n 4. Otherwise ---> \"P:M.T\" NameNotInScope2\n\n(3) and (4) apply when the entity P:M.T is not in the GlobalRdrEnv at\nall. In these cases we still want to refer to the name as \"M.T\", *but*\n\"M.T\" might mean something else in the current scope (e.g. if there's\nan \"import X as M\"), so to avoid confusion we avoid using \"M.T\" if\nthere's already a binding for it. Instead we write P:M.T.\n\nThere's one further subtlety: in case (3), what if there are two\nthings around, P1:M.T and P2:M.T? Then we don't want to print both of\nthem as M.T! However only one of the modules P1:M and P2:M can be\nexposed (say P2), so we use M.T for that, and P1:M.T for the other one.\nThis is handled by the qual_mod component of PrintUnqualified, inside\nthe (ppr mod) of case (3), in Name.pprModulePrefix.\n\nNote [Printing package keys]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIn the old days, original names were tied to PackageIds, which directly\ncorresponded to the entities that users wrote in Cabal files, and were perfectly\nsuitable for printing when we need to disambiguate packages. However, with\nPackageKey, the situation is different. First, the key is not human readable\nat all, so we need to consult the package database to find the appropriate\nPackageId to display. Second, there may be multiple copies of a library visible\nwith the same PackageId, in which case we need to disambiguate. 
For now,\nwe just emit the actual package key (which the user can go look up); however,\nanother scheme is to (recursively) say which dependencies are different.\n\nNB: When we extend package keys to also have holes, we will have to disambiguate\nthose as well.\n\n\\begin{code}\n-- | Creates some functions that work out the best ways to format\n-- names for the user according to a set of heuristics.\nmkPrintUnqualified :: DynFlags -> GlobalRdrEnv -> PrintUnqualified\nmkPrintUnqualified dflags env = QueryQualify qual_name\n (mkQualModule dflags)\n (mkQualPackage dflags)\n where\n qual_name mod occ\n | [gre] <- unqual_gres\n , right_name gre\n = NameUnqual\n -- If there's a unique entity that's in scope unqualified with 'occ'\n -- AND that entity is the right one, then we can use the unqualified name\n\n | [gre] <- qual_gres\n = NameQual (get_qual_mod (gre_prov gre))\n\n | null qual_gres\n = if null (lookupGRE_RdrName (mkRdrQual (moduleName mod) occ) env)\n then NameNotInScope1\n else NameNotInScope2\n\n | otherwise\n = NameNotInScope1 -- Can happen if 'f' is bound twice in the module\n -- Eg f = True; g = 0; f = False\n where\n right_name gre = nameModule_maybe (gre_name gre) == Just mod\n\n unqual_gres = lookupGRE_RdrName (mkRdrUnqual occ) env\n qual_gres = filter right_name (lookupGlobalRdrEnv env occ)\n\n get_qual_mod LocalDef = moduleName mod\n get_qual_mod (Imported is) = ASSERT( not (null is) ) is_as (is_decl (head is))\n\n -- we can mention a module P:M without the P: qualifier iff\n -- \"import M\" would resolve unambiguously to P:M. 
(if P is the\n -- current package we can just assume it is unqualified).\n\n-- | Creates a function for formatting modules based on two heuristics:\n-- (1) if the module is the current module, don't qualify, and (2) if there\n-- is only one exposed package which exports this module, don't qualify.\nmkQualModule :: DynFlags -> QueryQualifyModule\nmkQualModule dflags mod\n | modulePackageKey mod == thisPackage dflags = False\n\n | [(_, pkgconfig)] <- lookup,\n packageConfigId pkgconfig == modulePackageKey mod\n -- this says: we are given a module P:M, is there just one exposed package\n -- that exposes a module M, and is it package P?\n = False\n\n | otherwise = True\n where lookup = lookupModuleInAllPackages dflags (moduleName mod)\n\n-- | Creates a function for formatting packages based on two heuristics:\n-- (1) don't qualify if the package in question is \"main\", and (2) only qualify\n-- with a package key if the package ID would be ambiguous.\nmkQualPackage :: DynFlags -> QueryQualifyPackage\nmkQualPackage dflags pkg_key\n | pkg_key == mainPackageKey\n -- Skip the lookup if it's main, since it won't be in the package\n -- database!\n = False\n | searchPackageId dflags pkgid `lengthIs` 1\n -- this says: we are given a package pkg-0.1@MMM, is there only one\n -- exposed package whose package ID is pkg-0.1?\n = False\n | otherwise\n = True\n where pkg = fromMaybe (pprPanic \"qual_pkg\" (ftext (packageKeyFS pkg_key)))\n (lookupPackage dflags pkg_key)\n pkgid = sourcePackageId pkg\n\n-- | A function which only qualifies package names if necessary; but\n-- qualifies all other identifiers.\npkgQual :: DynFlags -> PrintUnqualified\npkgQual dflags = alwaysQualify {\n queryQualifyPackage = mkQualPackage dflags\n }\n\n\\end{code}\n\n\n%************************************************************************\n%* *\n Implicit TyThings\n%* *\n%************************************************************************\n\nNote [Implicit TyThings]\n~~~~~~~~~~~~~~~~~~~~~~~~\n 
DEFINITION: An \"implicit\" TyThing is one that does not have its own\n IfaceDecl in an interface file. Instead, its binding in the type\n environment is created as part of typechecking the IfaceDecl for\n some other thing.\n\nExamples:\n * All DataCons are implicit, because they are generated from the\n IfaceDecl for the data\/newtype. Ditto class methods.\n\n * Record selectors are *not* implicit, because they get their own\n free-standing IfaceDecl.\n\n * Associated data\/type families are implicit because they are\n included in the IfaceDecl of the parent class. (NB: the\n IfaceClass decl happens to use IfaceDecl recursively for the\n associated types, but that's irrelevant here.)\n\n * Dictionary function Ids are not implicit.\n\n * Axioms for newtypes are implicit (same as above), but axioms\n for data\/type family instances are *not* implicit (like DFunIds).\n\n\\begin{code}\n-- | Determine the 'TyThing's brought into scope by another 'TyThing'\n-- \/other\/ than itself. For example, Id's don't have any implicit TyThings\n-- as they just bring themselves into scope, but classes bring their\n-- dictionary datatype, type constructor and some selector functions into\n-- scope, just for a start!\n\n-- N.B. 
the set of TyThings returned here *must* match the set of\n-- names returned by LoadIface.ifaceDeclImplicitBndrs, in the sense that\n-- TyThing.getOccName should define a bijection between the two lists.\n-- This invariant is used in LoadIface.loadDecl (see note [Tricky iface loop])\n-- The order of the list does not matter.\nimplicitTyThings :: TyThing -> [TyThing]\nimplicitTyThings (AnId _) = []\nimplicitTyThings (ACoAxiom _cc) = []\nimplicitTyThings (ATyCon tc) = implicitTyConThings tc\nimplicitTyThings (AConLike cl) = implicitConLikeThings cl\n\nimplicitConLikeThings :: ConLike -> [TyThing]\nimplicitConLikeThings (RealDataCon dc)\n = map AnId (dataConImplicitIds dc)\n -- For data cons add the worker and (possibly) wrapper\n\nimplicitConLikeThings (PatSynCon {})\n = [] -- Pattern synonyms have no implicit Ids; the wrapper and matcher\n -- are not \"implicit\"; they are simply new top-level bindings,\n -- and they have their own declaration in an interface file\n\nimplicitClassThings :: Class -> [TyThing]\nimplicitClassThings cl\n = -- Does not include default methods, because those Ids may have\n -- their own pragmas, unfoldings etc, not derived from the Class object\n -- associated types\n -- No extras_plus (recursive call) for the classATs, because they\n -- are only the family decls; they have no implicit things\n map ATyCon (classATs cl) ++\n -- superclass and operation selectors\n map AnId (classAllSelIds cl)\n\nimplicitTyConThings :: TyCon -> [TyThing]\nimplicitTyConThings tc\n = class_stuff ++\n -- fields (names of selectors)\n\n -- (possibly) implicit newtype coercion\n implicitCoTyCon tc ++\n\n -- for each data constructor in order,\n -- the constructor, worker, and (possibly) wrapper\n concatMap (extras_plus . AConLike . RealDataCon) (tyConDataCons tc)\n -- NB. 
record selectors are *not* implicit, they have fully-fledged\n -- bindings that pass through the compilation pipeline as normal.\n where\n class_stuff = case tyConClass_maybe tc of\n Nothing -> []\n Just cl -> implicitClassThings cl\n\n-- add a thing and recursive call\nextras_plus :: TyThing -> [TyThing]\nextras_plus thing = thing : implicitTyThings thing\n\n-- For newtypes and closed type families (only) add the implicit coercion tycon\nimplicitCoTyCon :: TyCon -> [TyThing]\nimplicitCoTyCon tc\n | Just co <- newTyConCo_maybe tc = [ACoAxiom $ toBranchedAxiom co]\n | Just co <- isClosedSynFamilyTyCon_maybe tc\n = [ACoAxiom co]\n | otherwise = []\n\n-- | Returns @True@ if there should be no interface-file declaration\n-- for this thing on its own: either it is built-in, or it is part\n-- of some other declaration, or it is generated implicitly by some\n-- other declaration.\nisImplicitTyThing :: TyThing -> Bool\nisImplicitTyThing (AConLike cl) = case cl of\n RealDataCon {} -> True\n PatSynCon {} -> False\nisImplicitTyThing (AnId id) = isImplicitId id\nisImplicitTyThing (ATyCon tc) = isImplicitTyCon tc\nisImplicitTyThing (ACoAxiom ax) = isImplicitCoAxiom ax\n\n-- | tyThingParent_maybe x returns (Just p)\n-- when pprTyThingInContext should print a declaration for p\n-- (albeit with some \"...\" in it) when asked to show x\n-- It returns the *immediate* parent. 
So a datacon returns its tycon\n-- but the tycon could be the associated type of a class, so it in turn\n-- might have a parent.\ntyThingParent_maybe :: TyThing -> Maybe TyThing\ntyThingParent_maybe (AConLike cl) = case cl of\n RealDataCon dc -> Just (ATyCon (dataConTyCon dc))\n PatSynCon{} -> Nothing\ntyThingParent_maybe (ATyCon tc) = case tyConAssoc_maybe tc of\n Just cls -> Just (ATyCon (classTyCon cls))\n Nothing -> Nothing\ntyThingParent_maybe (AnId id) = case idDetails id of\n RecSelId { sel_tycon = tc } -> Just (ATyCon tc)\n ClassOpId cls -> Just (ATyCon (classTyCon cls))\n _other -> Nothing\ntyThingParent_maybe _other = Nothing\n\ntyThingsTyVars :: [TyThing] -> TyVarSet\ntyThingsTyVars tts =\n unionVarSets $ map ttToVarSet tts\n where\n ttToVarSet (AnId id) = tyVarsOfType $ idType id\n ttToVarSet (AConLike cl) = case cl of\n RealDataCon dc -> tyVarsOfType $ dataConRepType dc\n PatSynCon{} -> emptyVarSet\n ttToVarSet (ATyCon tc)\n = case tyConClass_maybe tc of\n Just cls -> (mkVarSet . fst . classTvsFds) cls\n Nothing -> tyVarsOfType $ tyConKind tc\n ttToVarSet _ = emptyVarSet\n\n-- | The Names that a TyThing should bring into scope. 
Used to build\n-- the GlobalRdrEnv for the InteractiveContext.\ntyThingAvailInfo :: TyThing -> AvailInfo\ntyThingAvailInfo (ATyCon t)\n = case tyConClass_maybe t of\n Just c -> AvailTC n (n : map getName (classMethods c)\n ++ map getName (classATs c))\n where n = getName c\n Nothing -> AvailTC n (n : map getName dcs ++\n concatMap dataConFieldLabels dcs)\n where n = getName t\n dcs = tyConDataCons t\ntyThingAvailInfo t\n = Avail (getName t)\n\\end{code}\n\n%************************************************************************\n%* *\n TypeEnv\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | A map from 'Name's to 'TyThing's, constructed by typechecking\n-- local declarations or interface files\ntype TypeEnv = NameEnv TyThing\n\nemptyTypeEnv :: TypeEnv\ntypeEnvElts :: TypeEnv -> [TyThing]\ntypeEnvTyCons :: TypeEnv -> [TyCon]\ntypeEnvCoAxioms :: TypeEnv -> [CoAxiom Branched]\ntypeEnvIds :: TypeEnv -> [Id]\ntypeEnvPatSyns :: TypeEnv -> [PatSyn]\ntypeEnvDataCons :: TypeEnv -> [DataCon]\ntypeEnvClasses :: TypeEnv -> [Class]\nlookupTypeEnv :: TypeEnv -> Name -> Maybe TyThing\n\nemptyTypeEnv = emptyNameEnv\ntypeEnvElts env = nameEnvElts env\ntypeEnvTyCons env = [tc | ATyCon tc <- typeEnvElts env]\ntypeEnvCoAxioms env = [ax | ACoAxiom ax <- typeEnvElts env]\ntypeEnvIds env = [id | AnId id <- typeEnvElts env]\ntypeEnvPatSyns env = [ps | AConLike (PatSynCon ps) <- typeEnvElts env]\ntypeEnvDataCons env = [dc | AConLike (RealDataCon dc) <- typeEnvElts env]\ntypeEnvClasses env = [cl | tc <- typeEnvTyCons env,\n Just cl <- [tyConClass_maybe tc]]\n\nmkTypeEnv :: [TyThing] -> TypeEnv\nmkTypeEnv things = extendTypeEnvList emptyTypeEnv things\n\nmkTypeEnvWithImplicits :: [TyThing] -> TypeEnv\nmkTypeEnvWithImplicits things =\n mkTypeEnv things\n `plusNameEnv`\n mkTypeEnv (concatMap implicitTyThings things)\n\ntypeEnvFromEntities :: [Id] -> [TyCon] -> [FamInst] -> TypeEnv\ntypeEnvFromEntities ids tcs famInsts =\n mkTypeEnv ( map 
AnId ids\n ++ map ATyCon all_tcs\n ++ concatMap implicitTyConThings all_tcs\n ++ map (ACoAxiom . toBranchedAxiom . famInstAxiom) famInsts\n )\n where\n all_tcs = tcs ++ famInstsRepTyCons famInsts\n\nlookupTypeEnv = lookupNameEnv\n\n-- Extend the type environment\nextendTypeEnv :: TypeEnv -> TyThing -> TypeEnv\nextendTypeEnv env thing = extendNameEnv env (getName thing) thing\n\nextendTypeEnvList :: TypeEnv -> [TyThing] -> TypeEnv\nextendTypeEnvList env things = foldl extendTypeEnv env things\n\nextendTypeEnvWithIds :: TypeEnv -> [Id] -> TypeEnv\nextendTypeEnvWithIds env ids\n = extendNameEnvList env [(getName id, AnId id) | id <- ids]\n\\end{code}\n\n\\begin{code}\n-- | Find the 'TyThing' for the given 'Name' by using all the resources\n-- at our disposal: the compiled modules in the 'HomePackageTable' and the\n-- compiled modules in other packages that live in 'PackageTypeEnv'. Note\n-- that this does NOT look up the 'TyThing' in the module being compiled: you\n-- have to do that yourself, if desired\nlookupType :: DynFlags\n -> HomePackageTable\n -> PackageTypeEnv\n -> Name\n -> Maybe TyThing\n\nlookupType dflags hpt pte name\n | isOneShot (ghcMode dflags) -- in one-shot, we don't use the HPT\n = lookupNameEnv pte name\n | otherwise\n = case lookupHptByModule hpt mod of\n Just hm -> lookupNameEnv (md_types (hm_details hm)) name\n Nothing -> lookupNameEnv pte name\n where\n mod = ASSERT2( isExternalName name, ppr name ) nameModule name\n\n-- | As 'lookupType', but with a marginally easier-to-use interface\n-- if you have a 'HscEnv'\nlookupTypeHscEnv :: HscEnv -> Name -> IO (Maybe TyThing)\nlookupTypeHscEnv hsc_env name = do\n eps <- readIORef (hsc_EPS hsc_env)\n return $! lookupType dflags hpt (eps_PTE eps) name\n where\n dflags = hsc_dflags hsc_env\n hpt = hsc_HPT hsc_env\n\\end{code}\n\n\\begin{code}\n-- | Get the 'TyCon' from a 'TyThing' if it is a type constructor thing. 
Panics otherwise\ntyThingTyCon :: TyThing -> TyCon\ntyThingTyCon (ATyCon tc) = tc\ntyThingTyCon other = pprPanic \"tyThingTyCon\" (pprTyThing other)\n\n-- | Get the 'CoAxiom' from a 'TyThing' if it is a coercion axiom thing. Panics otherwise\ntyThingCoAxiom :: TyThing -> CoAxiom Branched\ntyThingCoAxiom (ACoAxiom ax) = ax\ntyThingCoAxiom other = pprPanic \"tyThingCoAxiom\" (pprTyThing other)\n\n-- | Get the 'DataCon' from a 'TyThing' if it is a data constructor thing. Panics otherwise\ntyThingDataCon :: TyThing -> DataCon\ntyThingDataCon (AConLike (RealDataCon dc)) = dc\ntyThingDataCon other = pprPanic \"tyThingDataCon\" (pprTyThing other)\n\n-- | Get the 'Id' from a 'TyThing' if it is an id *or* data constructor thing. Panics otherwise\ntyThingId :: TyThing -> Id\ntyThingId (AnId id) = id\ntyThingId (AConLike (RealDataCon dc)) = dataConWrapId dc\ntyThingId other = pprPanic \"tyThingId\" (pprTyThing other)\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{MonadThings and friends}\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | Class that abstracts out the common ability of the monads in GHC\n-- to look up a 'TyThing' in the monadic environment by 'Name'. Provides\n-- a number of related convenience functions for accessing particular\n-- kinds of 'TyThing'\nclass Monad m => MonadThings m where\n lookupThing :: Name -> m TyThing\n\n lookupId :: Name -> m Id\n lookupId = liftM tyThingId . lookupThing\n\n lookupDataCon :: Name -> m DataCon\n lookupDataCon = liftM tyThingDataCon . lookupThing\n\n lookupTyCon :: Name -> m TyCon\n lookupTyCon = liftM tyThingTyCon . 
lookupThing\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Auxiliary types}\n%* *\n%************************************************************************\n\nThese types are defined here because they are mentioned in ModDetails,\nbut they are mostly elaborated elsewhere\n\n\\begin{code}\n------------------ Warnings -------------------------\n-- | Warning information for a module\ndata Warnings\n = NoWarnings -- ^ Nothing deprecated\n | WarnAll WarningTxt -- ^ Whole module deprecated\n | WarnSome [(OccName,WarningTxt)] -- ^ Some specific things deprecated\n\n -- Only an OccName is needed because\n -- (1) a deprecation always applies to a binding\n -- defined in the module in which the deprecation appears.\n -- (2) deprecations are only reported outside the defining module.\n -- this is important because, otherwise, if we saw something like\n --\n -- {-# DEPRECATED f \"\" #-}\n -- f = ...\n -- h = f\n -- g = let f = undefined in f\n --\n -- we'd need more information than an OccName to know to say something\n -- about the use of f in h but not the use of the locally bound f in g\n --\n -- however, because we only report about deprecations from the outside,\n -- and a module can only export one value called f,\n -- an OccName suffices.\n --\n -- this is in contrast with fixity declarations, where we need to map\n -- a Name to its fixity declaration.\n deriving( Eq )\n\ninstance Binary Warnings where\n put_ bh NoWarnings = putByte bh 0\n put_ bh (WarnAll t) = do\n putByte bh 1\n put_ bh t\n put_ bh (WarnSome ts) = do\n putByte bh 2\n put_ bh ts\n\n get bh = do\n h <- getByte bh\n case h of\n 0 -> return NoWarnings\n 1 -> do aa <- get bh\n return (WarnAll aa)\n _ -> do aa <- get bh\n return (WarnSome aa)\n\n-- | Constructs the cache for the 'mi_warn_fn' field of a 'ModIface'\nmkIfaceWarnCache :: Warnings -> Name -> Maybe WarningTxt\nmkIfaceWarnCache NoWarnings = \\_ -> Nothing\nmkIfaceWarnCache (WarnAll t) 
= \\_ -> Just t\nmkIfaceWarnCache (WarnSome pairs) = lookupOccEnv (mkOccEnv pairs) . nameOccName\n\nemptyIfaceWarnCache :: Name -> Maybe WarningTxt\nemptyIfaceWarnCache _ = Nothing\n\nplusWarns :: Warnings -> Warnings -> Warnings\nplusWarns d NoWarnings = d\nplusWarns NoWarnings d = d\nplusWarns _ (WarnAll t) = WarnAll t\nplusWarns (WarnAll t) _ = WarnAll t\nplusWarns (WarnSome v1) (WarnSome v2) = WarnSome (v1 ++ v2)\n\\end{code}\n\n\\begin{code}\n-- | Creates cached lookup for the 'mi_fix_fn' field of 'ModIface'\nmkIfaceFixCache :: [(OccName, Fixity)] -> OccName -> Fixity\nmkIfaceFixCache pairs\n = \\n -> lookupOccEnv env n `orElse` defaultFixity\n where\n env = mkOccEnv pairs\n\nemptyIfaceFixCache :: OccName -> Fixity\nemptyIfaceFixCache _ = defaultFixity\n\n-- | Fixity environment mapping names to their fixities\ntype FixityEnv = NameEnv FixItem\n\n-- | Fixity information for an 'Name'. We keep the OccName in the range\n-- so that we can generate an interface from it\ndata FixItem = FixItem OccName Fixity\n\ninstance Outputable FixItem where\n ppr (FixItem occ fix) = ppr fix <+> ppr occ\n\nemptyFixityEnv :: FixityEnv\nemptyFixityEnv = emptyNameEnv\n\nlookupFixity :: FixityEnv -> Name -> Fixity\nlookupFixity env n = case lookupNameEnv env n of\n Just (FixItem _ fix) -> fix\n Nothing -> defaultFixity\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{WhatsImported}\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | Records whether a module has orphans. 
An \\\"orphan\\\" is one of:\n--\n-- * An instance declaration in a module other than the definition\n-- module for one of the type constructors or classes in the instance head\n--\n-- * A transformation rule in a module other than the one defining\n-- the function in the head of the rule\n--\n-- * A vectorisation pragma\ntype WhetherHasOrphans = Bool\n\n-- | Does this module define family instances?\ntype WhetherHasFamInst = Bool\n\n-- | Did this module originate from a *-boot file?\ntype IsBootInterface = Bool\n\n-- | Dependency information about ALL modules and packages below this one\n-- in the import hierarchy.\n--\n-- Invariant: the dependencies of a module @M@ never include @M@.\n--\n-- Invariant: none of the lists contain duplicates.\ndata Dependencies\n = Deps { dep_mods :: [(ModuleName, IsBootInterface)]\n -- ^ All home-package modules transitively below this one\n -- I.e. modules that this one imports, or that are in the\n -- dep_mods of those directly-imported modules\n\n , dep_pkgs :: [(PackageKey, Bool)]\n -- ^ All packages transitively below this module\n -- I.e. packages to which this module's direct imports belong,\n -- or that are in the dep_pkgs of those modules\n -- The bool indicates if the package is required to be\n -- trusted when the module is imported as a safe import\n -- (Safe Haskell). See Note [RnNames . 
Tracking Trust Transitively]\n\n , dep_orphs :: [Module]\n -- ^ Orphan modules (whether home or external pkg),\n -- *not* including family instance orphans as they\n -- are anyway included in 'dep_finsts'\n\n , dep_finsts :: [Module]\n -- ^ Modules that contain family instances (whether the\n -- instances are from the home or an external package)\n }\n deriving( Eq )\n -- Equality used only for old\/new comparison in MkIface.addFingerprints\n -- See 'TcRnTypes.ImportAvails' for details on dependencies.\n\ninstance Binary Dependencies where\n put_ bh deps = do put_ bh (dep_mods deps)\n put_ bh (dep_pkgs deps)\n put_ bh (dep_orphs deps)\n put_ bh (dep_finsts deps)\n\n get bh = do ms <- get bh\n ps <- get bh\n os <- get bh\n fis <- get bh\n return (Deps { dep_mods = ms, dep_pkgs = ps, dep_orphs = os,\n dep_finsts = fis })\n\nnoDependencies :: Dependencies\nnoDependencies = Deps [] [] [] []\n\n-- | Records modules for which changes may force recompilation of this module\n-- See wiki: http:\/\/ghc.haskell.org\/trac\/ghc\/wiki\/Commentary\/Compiler\/RecompilationAvoidance\n--\n-- This differs from Dependencies. A module X may be in the dep_mods of this\n-- module (via an import chain) but if we don't use anything from X it won't\n-- appear in our Usage\ndata Usage\n -- | Module from another package\n = UsagePackageModule {\n usg_mod :: Module,\n -- ^ External package module depended on\n usg_mod_hash :: Fingerprint,\n -- ^ Cached module fingerprint\n usg_safe :: IsSafeImport\n -- ^ Was this module imported as a safe import\n }\n -- | Module from the current package\n | UsageHomeModule {\n usg_mod_name :: ModuleName,\n -- ^ Name of the module\n usg_mod_hash :: Fingerprint,\n -- ^ Cached module fingerprint\n usg_entities :: [(OccName,Fingerprint)],\n -- ^ Entities we depend on, sorted by occurrence name and fingerprinted.\n -- NB: usages are for parent names only, e.g. 
type constructors\n -- but not the associated data constructors.\n usg_exports :: Maybe Fingerprint,\n -- ^ Fingerprint for the export list of this module,\n -- if we directly imported it (and hence we depend on its export list)\n usg_safe :: IsSafeImport\n -- ^ Was this module imported as a safe import\n } -- ^ Module from the current package\n -- | A file upon which the module depends, e.g. a CPP #include, or using TH's\n -- 'addDependentFile'\n | UsageFile {\n usg_file_path :: FilePath,\n -- ^ External file dependency. From a CPP #include or TH\n -- addDependentFile. Should be absolute.\n usg_file_hash :: Fingerprint\n -- ^ 'Fingerprint' of the file contents.\n\n -- Note: We don't consider things like modification timestamps\n -- here, because there's no reason to recompile if the actual\n -- contents don't change. This previously led to odd\n -- recompilation behaviors; see #8114\n }\n deriving( Eq )\n -- The export list field is (Just v) if we depend on the export list:\n -- i.e. we imported the module directly, whether or not we\n -- enumerated the things we imported, or just imported\n -- everything\n -- We need to recompile if M's exports change, because\n -- if the import was import M, we might now have a name clash\n -- in the importing module.\n -- if the import was import M(x) M might no longer export x\n -- The only way we don't depend on the export list is if we have\n -- import M()\n -- And of course, for modules that aren't imported directly we don't\n -- depend on their export lists\n\ninstance Binary Usage where\n put_ bh usg@UsagePackageModule{} = do\n putByte bh 0\n put_ bh (usg_mod usg)\n put_ bh (usg_mod_hash usg)\n put_ bh (usg_safe usg)\n\n put_ bh usg@UsageHomeModule{} = do\n putByte bh 1\n put_ bh (usg_mod_name usg)\n put_ bh (usg_mod_hash usg)\n put_ bh (usg_exports usg)\n put_ bh (usg_entities usg)\n put_ bh (usg_safe usg)\n\n put_ bh usg@UsageFile{} = do\n putByte bh 2\n put_ bh (usg_file_path usg)\n put_ bh (usg_file_hash usg)\n\n get 
bh = do\n h <- getByte bh\n case h of\n 0 -> do\n nm <- get bh\n mod <- get bh\n safe <- get bh\n return UsagePackageModule { usg_mod = nm, usg_mod_hash = mod, usg_safe = safe }\n 1 -> do\n nm <- get bh\n mod <- get bh\n exps <- get bh\n ents <- get bh\n safe <- get bh\n return UsageHomeModule { usg_mod_name = nm, usg_mod_hash = mod,\n usg_exports = exps, usg_entities = ents, usg_safe = safe }\n 2 -> do\n fp <- get bh\n hash <- get bh\n return UsageFile { usg_file_path = fp, usg_file_hash = hash }\n i -> error (\"Binary.get(Usage): \" ++ show i)\n\n\\end{code}\n\n\n%************************************************************************\n%* *\n The External Package State\n%* *\n%************************************************************************\n\n\\begin{code}\ntype PackageTypeEnv = TypeEnv\ntype PackageRuleBase = RuleBase\ntype PackageInstEnv = InstEnv\ntype PackageFamInstEnv = FamInstEnv\ntype PackageVectInfo = VectInfo\ntype PackageAnnEnv = AnnEnv\n\n-- | Information about other packages that we have slurped in by reading\n-- their interface files\ndata ExternalPackageState\n = EPS {\n eps_is_boot :: !(ModuleNameEnv (ModuleName, IsBootInterface)),\n -- ^ In OneShot mode (only), home-package modules\n -- accumulate in the external package state, and are\n -- sucked in lazily. 
For these home-pkg modules\n -- (only) we need to record which are boot modules.\n -- We set this field after loading all the\n -- explicitly-imported interfaces, but before doing\n -- anything else\n --\n -- The 'ModuleName' part is not necessary, but it's useful for\n -- debug prints, and it's convenient because this field comes\n -- direct from 'TcRnTypes.imp_dep_mods'\n\n eps_PIT :: !PackageIfaceTable,\n -- ^ The 'ModIface's for modules in external packages\n -- whose interfaces we have opened.\n -- The declarations in these interface files are held in the\n -- 'eps_decls', 'eps_inst_env', 'eps_fam_inst_env' and 'eps_rules'\n -- fields of this record, not in the 'mi_decls' fields of the\n -- interface we have sucked in.\n --\n -- What \/is\/ in the PIT is:\n --\n -- * The Module\n --\n -- * Fingerprint info\n --\n -- * Its exports\n --\n -- * Fixities\n --\n -- * Deprecations and warnings\n\n eps_PTE :: !PackageTypeEnv,\n -- ^ Result of typechecking all the external package\n -- interface files we have sucked in. 
The domain of\n -- the mapping is external-package modules\n\n eps_inst_env :: !PackageInstEnv, -- ^ The total 'InstEnv' accumulated\n -- from all the external-package modules\n eps_fam_inst_env :: !PackageFamInstEnv,-- ^ The total 'FamInstEnv' accumulated\n -- from all the external-package modules\n eps_rule_base :: !PackageRuleBase, -- ^ The total 'RuleEnv' accumulated\n -- from all the external-package modules\n eps_vect_info :: !PackageVectInfo, -- ^ The total 'VectInfo' accumulated\n -- from all the external-package modules\n eps_ann_env :: !PackageAnnEnv, -- ^ The total 'AnnEnv' accumulated\n -- from all the external-package modules\n\n eps_mod_fam_inst_env :: !(ModuleEnv FamInstEnv), -- ^ The family instances accumulated from external\n -- packages, keyed off the module that declared them\n\n eps_stats :: !EpsStats -- ^ Stastics about what was loaded from external packages\n }\n\n-- | Accumulated statistics about what we are putting into the 'ExternalPackageState'.\n-- \\\"In\\\" means stuff that is just \/read\/ from interface files,\n-- \\\"Out\\\" means actually sucked in and type-checked\ndata EpsStats = EpsStats { n_ifaces_in\n , n_decls_in, n_decls_out\n , n_rules_in, n_rules_out\n , n_insts_in, n_insts_out :: !Int }\n\naddEpsInStats :: EpsStats -> Int -> Int -> Int -> EpsStats\n-- ^ Add stats for one newly-read interface\naddEpsInStats stats n_decls n_insts n_rules\n = stats { n_ifaces_in = n_ifaces_in stats + 1\n , n_decls_in = n_decls_in stats + n_decls\n , n_insts_in = n_insts_in stats + n_insts\n , n_rules_in = n_rules_in stats + n_rules }\n\\end{code}\n\nNames in a NameCache are always stored as a Global, and have the SrcLoc\nof their binding locations.\n\nActually that's not quite right. When we first encounter the original\nname, we might not be at its binding site (e.g. we are reading an\ninterface file); so we give it 'noSrcLoc' then. 
Later, when we find\nits binding site, we fix it up.\n\n\\begin{code}\n-- | The NameCache makes sure that there is just one Unique assigned for\n-- each original name; i.e. (module-name, occ-name) pair and provides\n-- something of a lookup mechanism for those names.\ndata NameCache\n = NameCache { nsUniqs :: !UniqSupply,\n -- ^ Supply of uniques\n nsNames :: !OrigNameCache\n -- ^ Ensures that one original name gets one unique\n }\n\n-- | Per-module cache of original 'OccName's given 'Name's\ntype OrigNameCache = ModuleEnv (OccEnv Name)\n\\end{code}\n\n\n\\begin{code}\nmkSOName :: Platform -> FilePath -> FilePath\nmkSOName platform root\n = case platformOS platform of\n OSDarwin -> (\"lib\" ++ root) <.> \"dylib\"\n OSMinGW32 -> root <.> \"dll\"\n _ -> (\"lib\" ++ root) <.> \"so\"\n\nmkHsSOName :: Platform -> FilePath -> FilePath\nmkHsSOName platform root = (\"lib\" ++ root) <.> soExt platform\n\nsoExt :: Platform -> FilePath\nsoExt platform\n = case platformOS platform of\n OSDarwin -> \"dylib\"\n OSMinGW32 -> \"dll\"\n _ -> \"so\"\n\\end{code}\n\n\n%************************************************************************\n%* *\n The module graph and ModSummary type\n A ModSummary is a node in the compilation manager's\n dependency graph, and it's also passed to hscMain\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | A ModuleGraph contains all the nodes from the home package (only).\n-- There will be a node for each source module, plus a node for each hi-boot\n-- module.\n--\n-- The graph is not necessarily stored in topologically-sorted order. Use\n-- 'GHC.topSortModuleGraph' and 'Digraph.flattenSCC' to achieve this.\ntype ModuleGraph = [ModSummary]\n\nemptyMG :: ModuleGraph\nemptyMG = []\n\n-- | A single node in a 'ModuleGraph'. 
The nodes of the module graph\n-- are one of:\n--\n-- * A regular Haskell source module\n-- * A hi-boot source module\n-- * An external-core source module\n--\ndata ModSummary\n = ModSummary {\n ms_mod :: Module,\n -- ^ Identity of the module\n ms_hsc_src :: HscSource,\n -- ^ The module source either plain Haskell, hs-boot or external core\n ms_location :: ModLocation,\n -- ^ Location of the various files belonging to the module\n ms_hs_date :: UTCTime,\n -- ^ Timestamp of source file\n ms_obj_date :: Maybe UTCTime,\n -- ^ Timestamp of object, if we have one\n ms_srcimps :: [Located (ImportDecl RdrName)],\n -- ^ Source imports of the module\n ms_textual_imps :: [Located (ImportDecl RdrName)],\n -- ^ Non-source imports of the module from the module *text*\n ms_hspp_file :: FilePath,\n -- ^ Filename of preprocessed source file\n ms_hspp_opts :: DynFlags,\n -- ^ Cached flags from @OPTIONS@, @INCLUDE@ and @LANGUAGE@\n -- pragmas in the modules source code\n ms_hspp_buf :: Maybe StringBuffer\n -- ^ The actual preprocessed source, if we have it\n }\n\nms_mod_name :: ModSummary -> ModuleName\nms_mod_name = moduleName . ms_mod\n\nms_imps :: ModSummary -> [Located (ImportDecl RdrName)]\nms_imps ms =\n ms_textual_imps ms ++\n map mk_additional_import (dynFlagDependencies (ms_hspp_opts ms))\n where\n -- This is a not-entirely-satisfactory means of creating an import\n -- that corresponds to an import that did not occur in the program\n -- text, such as those induced by the use of plugins (the -plgFoo\n -- flag)\n mk_additional_import mod_nm = noLoc $ ImportDecl {\n ideclName = noLoc mod_nm,\n ideclPkgQual = Nothing,\n ideclSource = False,\n ideclImplicit = True, -- Maybe implicit because not \"in the program text\"\n ideclQualified = False,\n ideclAs = Nothing,\n ideclHiding = Nothing,\n ideclSafe = False\n }\n\n-- The ModLocation contains both the original source filename and the\n-- filename of the cleaned-up source file after all preprocessing has been\n-- done. 
The point is that the summariser will have to cpp\/unlit\/whatever\n-- all files anyway, and there's no point in doing this twice -- just\n-- park the result in a temp file, put the name of it in the location,\n-- and let @compile@ read from that file on the way back up.\n\n-- The ModLocation is stable over successive up-sweeps in GHCi, whereas\n-- the ms_hs_date and imports can, of course, change\n\nmsHsFilePath, msHiFilePath, msObjFilePath :: ModSummary -> FilePath\nmsHsFilePath ms = expectJust \"msHsFilePath\" (ml_hs_file (ms_location ms))\nmsHiFilePath ms = ml_hi_file (ms_location ms)\nmsObjFilePath ms = ml_obj_file (ms_location ms)\n\n-- | Did this 'ModSummary' originate from a hs-boot file?\nisBootSummary :: ModSummary -> Bool\nisBootSummary ms = isHsBoot (ms_hsc_src ms)\n\ninstance Outputable ModSummary where\n ppr ms\n = sep [text \"ModSummary {\",\n nest 3 (sep [text \"ms_hs_date = \" <> text (show (ms_hs_date ms)),\n text \"ms_mod =\" <+> ppr (ms_mod ms)\n <> text (hscSourceString (ms_hsc_src ms)) <> comma,\n text \"ms_textual_imps =\" <+> ppr (ms_textual_imps ms),\n text \"ms_srcimps =\" <+> ppr (ms_srcimps ms)]),\n char '}'\n ]\n\nshowModMsg :: DynFlags -> HscTarget -> Bool -> ModSummary -> String\nshowModMsg dflags target recomp mod_summary\n = showSDoc dflags $\n hsep [text (mod_str ++ replicate (max 0 (16 - length mod_str)) ' '),\n char '(', text (normalise $ msHsFilePath mod_summary) <> comma,\n case target of\n HscInterpreted | recomp\n -> text \"interpreted\"\n HscNothing -> text \"nothing\"\n _ -> text (normalise $ msObjFilePath mod_summary),\n char ')']\n where\n mod = moduleName (ms_mod mod_summary)\n mod_str = showPpr dflags mod ++ hscSourceString (ms_hsc_src mod_summary)\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Recompilation}\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | Indicates whether a given module's source has 
been modified since it\n-- was last compiled.\ndata SourceModified\n = SourceModified\n -- ^ the source has been modified\n | SourceUnmodified\n -- ^ the source has not been modified. Compilation may or may\n -- not be necessary, depending on whether any dependencies have\n -- changed since we last compiled.\n | SourceUnmodifiedAndStable\n -- ^ the source has not been modified, and furthermore all of\n -- its (transitive) dependencies are up to date; it definitely\n -- does not need to be recompiled. This is important for two\n -- reasons: (a) we can omit the version check in checkOldIface,\n -- and (b) if the module used TH splices we don't need to force\n -- recompilation.\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Hpc Support}\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | Information about a module's use of Haskell Program Coverage\ndata HpcInfo\n = HpcInfo\n { hpcInfoTickCount :: Int\n , hpcInfoHash :: Int\n }\n | NoHpcInfo\n { hpcUsed :: AnyHpcUsage -- ^ Is hpc used anywhere on the module \\*tree\\*?\n }\n\n-- | This is used to signal if one of my imports used HPC instrumentation\n-- even if there is no module-local HPC usage\ntype AnyHpcUsage = Bool\n\nemptyHpcInfo :: AnyHpcUsage -> HpcInfo\nemptyHpcInfo = NoHpcInfo\n\n-- | Find out if HPC is used by this module or any of the modules\n-- it depends upon\nisHpcUsed :: HpcInfo -> AnyHpcUsage\nisHpcUsed (HpcInfo {}) = True\nisHpcUsed (NoHpcInfo { hpcUsed = used }) = used\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Vectorisation Support}\n%* *\n%************************************************************************\n\nThe following information is generated and consumed by the vectorisation\nsubsystem. 
It communicates the vectorisation status of declarations from one\nmodule to another.\n\nWhy do we need both f and f_v in the ModGuts\/ModDetails\/EPS version VectInfo\nbelow? We need to know `f' when converting to IfaceVectInfo. However, during\nvectorisation, we need to know `f_v', whose `Var' we cannot lookup based\non just the OccName easily in a Core pass.\n\n\\begin{code}\n-- |Vectorisation information for 'ModGuts', 'ModDetails' and 'ExternalPackageState'; see also\n-- documentation at 'Vectorise.Env.GlobalEnv'.\n--\n-- NB: The following tables may also include 'Var's, 'TyCon's and 'DataCon's from imported modules,\n-- which have been subsequently vectorised in the current module.\n--\ndata VectInfo\n = VectInfo\n { vectInfoVar :: VarEnv (Var , Var ) -- ^ @(f, f_v)@ keyed on @f@\n , vectInfoTyCon :: NameEnv (TyCon , TyCon) -- ^ @(T, T_v)@ keyed on @T@\n , vectInfoDataCon :: NameEnv (DataCon, DataCon) -- ^ @(C, C_v)@ keyed on @C@\n , vectInfoParallelVars :: VarSet -- ^ set of parallel variables\n , vectInfoParallelTyCons :: NameSet -- ^ set of parallel type constructors\n }\n\n-- |Vectorisation information for 'ModIface'; i.e, the vectorisation information propagated\n-- across module boundaries.\n--\n-- NB: The field 'ifaceVectInfoVar' explicitly contains the workers of data constructors as well as\n-- class selectors \u2014 i.e., their mappings are \/not\/ implicitly generated from the data types.\n-- Moreover, whether the worker of a data constructor is in 'ifaceVectInfoVar' determines\n-- whether that data constructor was vectorised (or is part of an abstractly vectorised type\n-- constructor).\n--\ndata IfaceVectInfo\n = IfaceVectInfo\n { ifaceVectInfoVar :: [Name] -- ^ All variables in here have a vectorised variant\n , ifaceVectInfoTyCon :: [Name] -- ^ All 'TyCon's in here have a vectorised variant;\n -- the name of the vectorised variant and those of its\n -- data constructors are determined by\n -- 'OccName.mkVectTyConOcc' and\n -- 
'OccName.mkVectDataConOcc'; the names of the\n -- isomorphisms are determined by 'OccName.mkVectIsoOcc'\n , ifaceVectInfoTyConReuse :: [Name] -- ^ The vectorised form of all the 'TyCon's in here\n -- coincides with the unconverted form; the name of the\n -- isomorphisms is determined by 'OccName.mkVectIsoOcc'\n , ifaceVectInfoParallelVars :: [Name] -- iface version of 'vectInfoParallelVar'\n , ifaceVectInfoParallelTyCons :: [Name] -- iface version of 'vectInfoParallelTyCon'\n }\n\nnoVectInfo :: VectInfo\nnoVectInfo\n = VectInfo emptyVarEnv emptyNameEnv emptyNameEnv emptyVarSet emptyNameSet\n\nplusVectInfo :: VectInfo -> VectInfo -> VectInfo\nplusVectInfo vi1 vi2 =\n VectInfo (vectInfoVar vi1 `plusVarEnv` vectInfoVar vi2)\n (vectInfoTyCon vi1 `plusNameEnv` vectInfoTyCon vi2)\n (vectInfoDataCon vi1 `plusNameEnv` vectInfoDataCon vi2)\n (vectInfoParallelVars vi1 `unionVarSet` vectInfoParallelVars vi2)\n (vectInfoParallelTyCons vi1 `unionNameSets` vectInfoParallelTyCons vi2)\n\nconcatVectInfo :: [VectInfo] -> VectInfo\nconcatVectInfo = foldr plusVectInfo noVectInfo\n\nnoIfaceVectInfo :: IfaceVectInfo\nnoIfaceVectInfo = IfaceVectInfo [] [] [] [] []\n\nisNoIfaceVectInfo :: IfaceVectInfo -> Bool\nisNoIfaceVectInfo (IfaceVectInfo l1 l2 l3 l4 l5)\n = null l1 && null l2 && null l3 && null l4 && null l5\n\ninstance Outputable VectInfo where\n ppr info = vcat\n [ ptext (sLit \"variables :\") <+> ppr (vectInfoVar info)\n , ptext (sLit \"tycons :\") <+> ppr (vectInfoTyCon info)\n , ptext (sLit \"datacons :\") <+> ppr (vectInfoDataCon info)\n , ptext (sLit \"parallel vars :\") <+> ppr (vectInfoParallelVars info)\n , ptext (sLit \"parallel tycons :\") <+> ppr (vectInfoParallelTyCons info)\n ]\n\ninstance Binary IfaceVectInfo where\n put_ bh (IfaceVectInfo a1 a2 a3 a4 a5) = do\n put_ bh a1\n put_ bh a2\n put_ bh a3\n put_ bh a4\n put_ bh a5\n get bh = do\n a1 <- get bh\n a2 <- get bh\n a3 <- get bh\n a4 <- get bh\n a5 <- get bh\n return (IfaceVectInfo a1 a2 a3 a4 
a5)\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Safe Haskell Support}\n%* *\n%************************************************************************\n\nThis stuff here is related to supporting the Safe Haskell extension,\nprimarily about storing under what trust type a module has been compiled.\n\n\\begin{code}\n-- | Is an import a safe import?\ntype IsSafeImport = Bool\n\n-- | Safe Haskell information for 'ModIface'\n-- Simply a wrapper around SafeHaskellMode to sepperate iface and flags\nnewtype IfaceTrustInfo = TrustInfo SafeHaskellMode\n\ngetSafeMode :: IfaceTrustInfo -> SafeHaskellMode\ngetSafeMode (TrustInfo x) = x\n\nsetSafeMode :: SafeHaskellMode -> IfaceTrustInfo\nsetSafeMode = TrustInfo\n\nnoIfaceTrustInfo :: IfaceTrustInfo\nnoIfaceTrustInfo = setSafeMode Sf_None\n\ntrustInfoToNum :: IfaceTrustInfo -> Word8\ntrustInfoToNum it\n = case getSafeMode it of\n Sf_None -> 0\n Sf_Unsafe -> 1\n Sf_Trustworthy -> 2\n Sf_Safe -> 3\n\nnumToTrustInfo :: Word8 -> IfaceTrustInfo\nnumToTrustInfo 0 = setSafeMode Sf_None\nnumToTrustInfo 1 = setSafeMode Sf_Unsafe\nnumToTrustInfo 2 = setSafeMode Sf_Trustworthy\nnumToTrustInfo 3 = setSafeMode Sf_Safe\nnumToTrustInfo 4 = setSafeMode Sf_Safe -- retained for backwards compat, used\n -- to be Sf_SafeInfered but we no longer\n -- differentiate.\nnumToTrustInfo n = error $ \"numToTrustInfo: bad input number! (\" ++ show n ++ \")\"\n\ninstance Outputable IfaceTrustInfo where\n ppr (TrustInfo Sf_None) = ptext $ sLit \"none\"\n ppr (TrustInfo Sf_Unsafe) = ptext $ sLit \"unsafe\"\n ppr (TrustInfo Sf_Trustworthy) = ptext $ sLit \"trustworthy\"\n ppr (TrustInfo Sf_Safe) = ptext $ sLit \"safe\"\n\ninstance Binary IfaceTrustInfo where\n put_ bh iftrust = putByte bh $ trustInfoToNum iftrust\n get bh = getByte bh >>= (return . 
numToTrustInfo)\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Parser result}\n%* *\n%************************************************************************\n\n\\begin{code}\ndata HsParsedModule = HsParsedModule {\n hpm_module :: Located (HsModule RdrName),\n hpm_src_files :: [FilePath]\n -- ^ extra source files (e.g. from #includes). The lexer collects\n -- these from '# ' pragmas, which the C preprocessor\n -- leaves behind. These files and their timestamps are stored in\n -- the .hi file, so that we can force recompilation if any of\n -- them change (#3589)\n }\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Linkable stuff}\n%* *\n%************************************************************************\n\nThis stuff is in here, rather than (say) in Linker.lhs, because the Linker.lhs\nstuff is the *dynamic* linker, and isn't present in a stage-1 compiler\n\n\\begin{code}\n-- | Information we can use to dynamically link modules into the compiler\ndata Linkable = LM {\n linkableTime :: UTCTime, -- ^ Time at which this linkable was built\n -- (i.e. when the bytecodes were produced,\n -- or the mod date on the files)\n linkableModule :: Module, -- ^ The linkable module itself\n linkableUnlinked :: [Unlinked]\n -- ^ Those files and chunks of code we have yet to link.\n --\n -- INVARIANT: A valid linkable always has at least one 'Unlinked' item.\n -- If this list is empty, the Linkable represents a fake linkable, which\n -- is generated in HscNothing mode to avoid recompiling modules.\n --\n -- ToDo: Do items get removed from this list when they get linked?\n }\n\nisObjectLinkable :: Linkable -> Bool\nisObjectLinkable l = not (null unlinked) && all isObject unlinked\n where unlinked = linkableUnlinked l\n -- A linkable with no Unlinked's is treated as a BCO. 
We can\n -- generate a linkable with no Unlinked's as a result of\n -- compiling a module in HscNothing mode, and this choice\n -- happens to work well with checkStability in module GHC.\n\nlinkableObjs :: Linkable -> [FilePath]\nlinkableObjs l = [ f | DotO f <- linkableUnlinked l ]\n\ninstance Outputable Linkable where\n ppr (LM when_made mod unlinkeds)\n = (text \"LinkableM\" <+> parens (text (show when_made)) <+> ppr mod)\n $$ nest 3 (ppr unlinkeds)\n\n-------------------------------------------\n\n-- | Objects which have yet to be linked by the compiler\ndata Unlinked\n = DotO FilePath -- ^ An object file (.o)\n | DotA FilePath -- ^ Static archive file (.a)\n | DotDLL FilePath -- ^ Dynamically linked library file (.so, .dll, .dylib)\n | BCOs CompiledByteCode ModBreaks -- ^ A byte-code object, lives only in memory\n\n#ifndef GHCI\ndata CompiledByteCode = CompiledByteCodeUndefined\n_unused :: CompiledByteCode\n_unused = CompiledByteCodeUndefined\n#endif\n\ninstance Outputable Unlinked where\n ppr (DotO path) = text \"DotO\" <+> text path\n ppr (DotA path) = text \"DotA\" <+> text path\n ppr (DotDLL path) = text \"DotDLL\" <+> text path\n#ifdef GHCI\n ppr (BCOs bcos _) = text \"BCOs\" <+> ppr bcos\n#else\n ppr (BCOs _ _) = text \"No byte code\"\n#endif\n\n-- | Is this an actual file on disk we can link in somehow?\nisObject :: Unlinked -> Bool\nisObject (DotO _) = True\nisObject (DotA _) = True\nisObject (DotDLL _) = True\nisObject _ = False\n\n-- | Is this a bytecode linkable with no file on disk?\nisInterpretable :: Unlinked -> Bool\nisInterpretable = not . isObject\n\n-- | Retrieve the filename of the linkable if possible. Panic if it is a byte-code object\nnameOfObject :: Unlinked -> FilePath\nnameOfObject (DotO fn) = fn\nnameOfObject (DotA fn) = fn\nnameOfObject (DotDLL fn) = fn\nnameOfObject other = pprPanic \"nameOfObject\" (ppr other)\n\n-- | Retrieve the compiled byte-code if possible. 
Panic if it is a file-based linkable\nbyteCodeOfObject :: Unlinked -> CompiledByteCode\nbyteCodeOfObject (BCOs bc _) = bc\nbyteCodeOfObject other = pprPanic \"byteCodeOfObject\" (ppr other)\n\end{code}\n\n%************************************************************************\n%* *\n\subsection{Breakpoint Support}\n%* *\n%************************************************************************\n\n\begin{code}\n-- | Breakpoint index\ntype BreakIndex = Int\n\n-- | All the information about the breakpoints for a given module\ndata ModBreaks\n = ModBreaks\n { modBreaks_flags :: BreakArray\n -- ^ The array of flags, one per breakpoint,\n -- indicating which breakpoints are enabled.\n , modBreaks_locs :: !(Array BreakIndex SrcSpan)\n -- ^ An array giving the source span of each breakpoint.\n , modBreaks_vars :: !(Array BreakIndex [OccName])\n -- ^ An array giving the names of the free variables at each breakpoint.\n , modBreaks_decls :: !(Array BreakIndex [String])\n -- ^ An array giving the names of the declarations enclosing each breakpoint.\n }\n\n-- | Construct an empty ModBreaks\nemptyModBreaks :: ModBreaks\nemptyModBreaks = ModBreaks\n { modBreaks_flags = error \"ModBreaks.modBreaks_array not initialised\"\n -- ToDo: can we avoid this?\n , modBreaks_locs = array (0,-1) []\n , modBreaks_vars = array (0,-1) []\n , modBreaks_decls = array (0,-1) []\n }\n\end{code}\n\n\begin{code}\n{-# LANGUAGE CPP #-}\n\nmodule TcSimplify(\n simplifyInfer, quantifyPred,\n simplifyAmbiguityCheck,\n simplifyDefault,\n simplifyRule, simplifyTop, simplifyInteractive,\n solveWantedsTcM\n ) where\n\n#include \"HsVersions.h\"\n\nimport TcRnTypes\nimport TcRnMonad\nimport TcErrors\nimport TcMType as TcM\nimport TcType\nimport TcSMonad as TcS\nimport TcInteract\nimport Kind ( isKind, defaultKind_maybe )\nimport 
Inst\nimport FunDeps ( growThetaTyVars )\nimport Type ( classifyPredType, PredTree(..), getClassPredTys_maybe )\nimport Class ( Class )\nimport Var\nimport Unique\nimport VarSet\nimport VarEnv\nimport TcEvidence\nimport Name\nimport Bag\nimport ListSetOps\nimport Util\nimport PrelInfo\nimport PrelNames\nimport Control.Monad ( unless )\nimport DynFlags ( ExtensionFlag( Opt_AllowAmbiguousTypes ) )\nimport Class ( classKey )\nimport BasicTypes ( RuleName )\nimport Outputable\nimport FastString\nimport TrieMap () -- DV: for now\n\\end{code}\n\n\n*********************************************************************************\n* *\n* External interface *\n* *\n*********************************************************************************\n\n\\begin{code}\nsimplifyTop :: WantedConstraints -> TcM (Bag EvBind)\n-- Simplify top-level constraints\n-- Usually these will be implications,\n-- but when there is nothing to quantify we don't wrap\n-- in a degenerate implication, so we do that here instead\nsimplifyTop wanteds\n = do { traceTc \"simplifyTop {\" $ text \"wanted = \" <+> ppr wanteds\n ; ev_binds_var <- newTcEvBinds\n ; zonked_final_wc <- solveWantedsTcMWithEvBinds ev_binds_var wanteds simpl_top\n ; binds1 <- TcRnMonad.getTcEvBinds ev_binds_var\n ; traceTc \"End simplifyTop }\" empty\n\n ; traceTc \"reportUnsolved {\" empty\n ; binds2 <- reportUnsolved zonked_final_wc\n ; traceTc \"reportUnsolved }\" empty\n\n ; return (binds1 `unionBags` binds2) }\n\nsimpl_top :: WantedConstraints -> TcS WantedConstraints\n -- See Note [Top-level Defaulting Plan]\nsimpl_top wanteds\n = do { wc_first_go <- nestTcS (solve_wanteds_and_drop wanteds)\n -- This is where the main work happens\n ; try_tyvar_defaulting wc_first_go }\n where\n try_tyvar_defaulting :: WantedConstraints -> TcS WantedConstraints\n try_tyvar_defaulting wc\n | isEmptyWC wc\n = return wc\n | otherwise\n = do { free_tvs <- TcS.zonkTyVarsAndFV (tyVarsOfWC wc)\n ; let meta_tvs = varSetElems (filterVarSet 
isMetaTyVar free_tvs)\n -- zonkTyVarsAndFV: the wc_first_go is not yet zonked\n -- filter isMetaTyVar: we might have runtime-skolems in GHCi,\n -- and we definitely don't want to try to assign to those!\n\n ; meta_tvs' <- mapM defaultTyVar meta_tvs -- Has unification side effects\n ; if meta_tvs' == meta_tvs -- No defaulting took place;\n -- (defaulting returns fresh vars)\n then try_class_defaulting wc\n else do { wc_residual <- nestTcS (solve_wanteds_and_drop wc)\n -- See Note [Must simplify after defaulting]\n ; try_class_defaulting wc_residual } }\n\n try_class_defaulting :: WantedConstraints -> TcS WantedConstraints\n try_class_defaulting wc\n | isEmptyWC wc \n = return wc\n | otherwise -- See Note [When to do type-class defaulting]\n = do { something_happened <- applyDefaultingRules (approximateWC wc)\n -- See Note [Top-level Defaulting Plan]\n ; if something_happened\n then do { wc_residual <- nestTcS (solve_wanteds_and_drop wc)\n ; try_class_defaulting wc_residual }\n else return wc }\n\\end{code}\n\nNote [When to do type-class defaulting]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIn GHC 7.6 and 7.8.2, we did type-class defaulting only if insolubleWC\nwas false, on the grounds that defaulting can't help solve insoluble\nconstraints. But if we *don't* do defaulting we may report a whole\nlot of errors that would be solved by defaulting; these errors are\nquite spurious because fixing the single insoluble error means that\ndefaulting happens again, which makes all the other errors go away.\nThis is jolly confusing: Trac #9033.\n\nSo it seems better to always do type-class defaulting.\n\nHowever, always doing defaulting does mean that we'll do it in\nsituations like this (Trac #5934):\n run :: (forall s. GenST s) -> Int\n run = fromInteger 0 \nWe don't unify the return type of fromInteger with the given function\ntype, because the latter involves foralls. So we're left with\n (Num alpha, alpha ~ (forall s. 
GenST s) -> Int)\nNow we do defaulting, get alpha := Integer, and report that we can't \nmatch Integer with (forall s. GenST s) -> Int. That's not totally \nstupid, but perhaps a little strange.\n\nAnother potential alternative would be to suppress *all* non-insoluble\nerrors if there are *any* insoluble errors, anywhere, but that seems\ntoo drastic.\n\nNote [Must simplify after defaulting]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWe may have a deeply buried constraint\n (t:*) ~ (a:Open)\nwhich we couldn't solve because of the kind incompatibility, and 'a' is free.\nThen when we default 'a' we can solve the constraint. And we want to do\nthat before starting in on type classes. We MUST do it before reporting\nerrors, because it isn't an error! Trac #7967 was due to this.\n\nNote [Top-level Defaulting Plan]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWe have considered two design choices for where\/when to apply defaulting.\n (i) Do it in SimplCheck mode only \/whenever\/ you try to solve some\n flat constraints, maybe deep inside the context of implications.\n This used to be the case in GHC 7.4.1.\n (ii) Do it in a tight loop at simplifyTop, once all other constraint solving has\n finished. This is the current story.\n\nOption (i) had many disadvantages:\n a) First it was deep inside the actual solver,\n b) Second it was dependent on the context (Infer a type signature,\n or Check a type signature, or Interactive) since we did not want\n to always start defaulting when inferring (though there is an exception to\n this; see Note [Default while Inferring])\n c) It plainly did not work. Consider typecheck\/should_compile\/DfltProb2.hs:\n f :: Int -> Bool\n f x = const True (\\y -> let w :: a -> a\n w a = const a (y+1)\n in w y)\n We will get an implication constraint (for beta the type of y):\n [untch=beta] forall a. 
0 => Num beta\n which we really cannot default \/while solving\/ the implication, since beta is\n untouchable.\n\nInstead our new defaulting story is to pull defaulting out of the solver loop and\ngo with option (ii), implemented at SimplifyTop. Namely:\n - First have a go at solving the residual constraint of the whole program\n - Try to approximate it with a flat constraint\n - Figure out derived defaulting equations for that flat constraint\n - Go round the loop again if you did manage to get some equations\n\nNow, that has to do with class defaulting. However, there exists type variable \/kind\/\ndefaulting. Again this is done at the top-level and the plan is:\n - At the top-level, once you had a go at solving the constraint, do\n figure out \/all\/ the touchable unification variables of the wanted constraints.\n - Apply defaulting to their kinds\n\nMore details in Note [DefaultTyVar].\n\n\begin{code}\n------------------\nsimplifyAmbiguityCheck :: Type -> WantedConstraints -> TcM ()\nsimplifyAmbiguityCheck ty wanteds\n = do { traceTc \"simplifyAmbiguityCheck {\" (text \"type = \" <+> ppr ty $$ text \"wanted = \" <+> ppr wanteds)\n ; ev_binds_var <- newTcEvBinds\n ; zonked_final_wc <- solveWantedsTcMWithEvBinds ev_binds_var wanteds simpl_top\n ; traceTc \"End simplifyAmbiguityCheck }\" empty\n\n -- Normally report all errors; but with -XAllowAmbiguousTypes\n -- report only insoluble ones, since they represent genuinely\n -- inaccessible code\n ; allow_ambiguous <- xoptM Opt_AllowAmbiguousTypes\n ; traceTc \"reportUnsolved(ambig) {\" empty\n ; unless (allow_ambiguous && not (insolubleWC zonked_final_wc))\n (discardResult (reportUnsolved zonked_final_wc))\n ; traceTc \"reportUnsolved(ambig) }\" empty\n\n ; return () }\n\n------------------\nsimplifyInteractive :: WantedConstraints -> TcM (Bag EvBind)\nsimplifyInteractive wanteds\n = traceTc \"simplifyInteractive\" empty >>\n simplifyTop wanteds\n\n------------------\nsimplifyDefault :: ThetaType -- Wanted; has no 
type variables in it\n -> TcM () -- Succeeds iff the constraint is soluble\nsimplifyDefault theta\n = do { traceTc \"simplifyDefault\" empty\n ; wanted <- newFlatWanteds DefaultOrigin theta\n ; (unsolved, _binds) <- solveWantedsTcM (mkFlatWC wanted)\n\n ; traceTc \"reportUnsolved {\" empty\n -- See Note [Deferring coercion errors to runtime]\n ; reportAllUnsolved unsolved\n -- Postcondition of solveWantedsTcM is that returned\n -- constraints are zonked. So the precondition of reportUnsolved\n -- is true.\n ; traceTc \"reportUnsolved }\" empty\n\n ; return () }\n\end{code}\n\n\n*********************************************************************************\n* *\n* Inference\n* *\n***********************************************************************************\n\n\begin{code}\nsimplifyInfer :: Bool\n -> Bool -- Apply monomorphism restriction\n -> [(Name, TcTauType)] -- Variables to be generalised,\n -- and their tau-types\n -> WantedConstraints\n -> TcM ([TcTyVar], -- Quantify over these type variables\n [EvVar], -- ... and these constraints\n Bool, -- The monomorphism restriction did something\n -- so the result type is not as general as\n -- it could be\n TcEvBinds) -- ... 
binding these evidence variables\nsimplifyInfer _top_lvl apply_mr name_taus wanteds\n | isEmptyWC wanteds\n = do { gbl_tvs <- tcGetGlobalTyVars\n ; qtkvs <- quantifyTyVars gbl_tvs (tyVarsOfTypes (map snd name_taus))\n ; traceTc \"simplifyInfer: empty WC\" (ppr name_taus $$ ppr qtkvs)\n ; return (qtkvs, [], False, emptyTcEvBinds) }\n\n | otherwise\n = do { traceTc \"simplifyInfer {\" $ vcat\n [ ptext (sLit \"binds =\") <+> ppr name_taus\n , ptext (sLit \"closed =\") <+> ppr _top_lvl\n , ptext (sLit \"apply_mr =\") <+> ppr apply_mr\n , ptext (sLit \"(unzonked) wanted =\") <+> ppr wanteds\n ]\n\n -- Historical note: Before step 2 we used to have a\n -- HORRIBLE HACK described in Note [Avoid unnecessary\n -- constraint simplification] but, as described in Trac\n -- #4361, we have taken it out now. That's why we start\n -- with step 2!\n\n -- Step 2) First try full-blown solving\n\n -- NB: we must gather up all the bindings from doing\n -- this solving; hence (runTcSWithEvBinds ev_binds_var).\n -- And note that since there are nested implications,\n -- calling solveWanteds will side-effect their evidence\n -- bindings, so we can't just revert to the input\n -- constraint.\n\n ; ev_binds_var <- newTcEvBinds\n ; wanted_transformed_incl_derivs\n <- solveWantedsTcMWithEvBinds ev_binds_var wanteds solve_wanteds\n -- Post: wanted_transformed_incl_derivs are zonked\n\n -- Step 4) Candidates for quantification are an approximation of wanted_transformed\n -- NB: Already the fixpoint of any unifications that may have happened\n -- NB: We do not do any defaulting when inferring a type, this can lead\n -- to less polymorphic types, see Note [Default while Inferring]\n\n ; tc_lcl_env <- TcRnMonad.getLclEnv\n ; let untch = tcl_untch tc_lcl_env\n wanted_transformed = dropDerivedWC wanted_transformed_incl_derivs\n ; quant_pred_candidates -- Fully zonked\n <- if insolubleWC wanted_transformed_incl_derivs\n then return [] -- See Note [Quantification with errors]\n -- NB: must include 
derived errors in this test, \n -- hence \"incl_derivs\"\n\n else do { let quant_cand = approximateWC wanted_transformed\n meta_tvs = filter isMetaTyVar (varSetElems (tyVarsOfCts quant_cand))\n ; gbl_tvs <- tcGetGlobalTyVars\n ; null_ev_binds_var <- newTcEvBinds\n -- Minimise quant_cand. We are not interested in any evidence\n -- produced, because we are going to simplify wanted_transformed\n -- again later. All we want here is the predicates over which to\n -- quantify. \n --\n -- If any meta-tyvar unifications take place (unlikely), we'll\n -- pick that up later.\n\n ; (flats, _insols) <- runTcSWithEvBinds null_ev_binds_var $\n do { mapM_ (promoteAndDefaultTyVar untch gbl_tvs) meta_tvs\n -- See Note [Promote _and_ default when inferring]\n ; _implics <- solveInteract quant_cand\n ; getInertUnsolved }\n\n ; flats' <- zonkFlats null_ev_binds_var untch $\n filterBag isWantedCt flats\n -- The quant_cand were already fully zonked, so this zonkFlats\n -- really only unflattens the flattening that solveInteract\n -- may have done (Trac #8889). \n -- E.g. 
quant_cand = F a, where F :: * -> Constraint\n -- We'll flatten to (alpha, F a ~ alpha)\n -- fail to make any further progress and must unflatten again \n\n ; return (map ctPred $ bagToList flats') }\n\n -- NB: quant_pred_candidates is already the fixpoint of any\n -- unifications that may have happened\n ; gbl_tvs <- tcGetGlobalTyVars\n ; zonked_tau_tvs <- TcM.zonkTyVarsAndFV (tyVarsOfTypes (map snd name_taus))\n ; let poly_qtvs = growThetaTyVars quant_pred_candidates zonked_tau_tvs\n `minusVarSet` gbl_tvs\n pbound = filter (quantifyPred poly_qtvs) quant_pred_candidates\n\n -- Monomorphism restriction\n constrained_tvs = tyVarsOfTypes pbound `unionVarSet` gbl_tvs\n mr_bites = apply_mr && not (null pbound)\n\n ; (qtvs, bound) <- if mr_bites\n then do { qtvs <- quantifyTyVars constrained_tvs zonked_tau_tvs\n ; return (qtvs, []) }\n else do { qtvs <- quantifyTyVars gbl_tvs poly_qtvs\n ; return (qtvs, pbound) }\n\n ; traceTc \"simplifyWithApprox\" $\n vcat [ ptext (sLit \"quant_pred_candidates =\") <+> ppr quant_pred_candidates\n , ptext (sLit \"gbl_tvs=\") <+> ppr gbl_tvs\n , ptext (sLit \"zonked_tau_tvs=\") <+> ppr zonked_tau_tvs\n , ptext (sLit \"pbound =\") <+> ppr pbound\n , ptext (sLit \"bbound =\") <+> ppr bound\n , ptext (sLit \"poly_qtvs =\") <+> ppr poly_qtvs\n , ptext (sLit \"constrained_tvs =\") <+> ppr constrained_tvs\n , ptext (sLit \"mr_bites =\") <+> ppr mr_bites\n , ptext (sLit \"qtvs =\") <+> ppr qtvs ]\n\n ; if null qtvs && null bound\n then do { traceTc \"} simplifyInfer\/no implication needed\" empty\n ; emitConstraints wanted_transformed\n -- Includes insolubles (if -fdefer-type-errors)\n -- as well as flats and implications\n ; return ([], [], mr_bites, TcEvBinds ev_binds_var) }\n else do\n\n { -- Step 7) Emit an implication\n let minimal_flat_preds = mkMinimalBySCs bound\n -- See Note [Minimize by Superclasses]\n skol_info = InferSkol [ (name, mkSigmaTy [] minimal_flat_preds ty)\n | (name, ty) <- name_taus ]\n -- Don't add the quantified 
variables here, because\n -- they are also bound in ic_skols and we want them to be\n -- tidied uniformly\n\n ; minimal_bound_ev_vars <- mapM TcM.newEvVar minimal_flat_preds\n ; let implic = Implic { ic_untch = pushUntouchables untch\n , ic_skols = qtvs\n , ic_no_eqs = False\n , ic_fsks = [] -- wanted_transformed arose only from solveWanteds\n -- hence no flatten-skolems (which come from givens)\n , ic_given = minimal_bound_ev_vars\n , ic_wanted = wanted_transformed\n , ic_insol = False\n , ic_binds = ev_binds_var\n , ic_info = skol_info\n , ic_env = tc_lcl_env }\n ; emitImplication implic\n\n ; traceTc \"} simplifyInfer\/produced residual implication for quantification\" $\n vcat [ ptext (sLit \"implic =\") <+> ppr implic\n -- ic_skols, ic_given give rest of result\n , ptext (sLit \"qtvs =\") <+> ppr qtvs\n , ptext (sLit \"spb =\") <+> ppr quant_pred_candidates\n , ptext (sLit \"bound =\") <+> ppr bound ]\n\n ; return ( qtvs, minimal_bound_ev_vars\n , mr_bites, TcEvBinds ev_binds_var) } }\n\nquantifyPred :: TyVarSet -- Quantifying over these\n -> PredType -> Bool -- True <=> quantify over this wanted\nquantifyPred qtvs pred\n | isIPPred pred = True -- Note [Inheriting implicit parameters]\n | otherwise = tyVarsOfType pred `intersectsVarSet` qtvs\n\end{code}\n\nNote [Inheriting implicit parameters]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nConsider this:\n\n f x = (x::Int) + ?y\n\nwhere f is *not* a top-level binding.\nFrom the RHS of f we'll get the constraint (?y::Int).\nThere are two types we might infer for f:\n\n f :: Int -> Int\n\n(so we get ?y from the context of f's definition), or\n\n f :: (?y::Int) => Int -> Int\n\nAt first you might think the first was better, because then\n?y behaves like a free variable of the definition, rather than\nhaving to be passed at each call site. 
But of course, the WHOLE\nIDEA is that ?y should be passed at each call site (that's what\ndynamic binding means) so we'd better infer the second.\n\nBOTTOM LINE: when *inferring types* you must quantify over implicit\nparameters, *even if* they don't mention the bound type variables.\nReason: because implicit parameters, uniquely, have local instance\ndeclarations. See the predicate quantifyPred.\n\nNote [Quantification with errors]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIf we find that the RHS of the definition has some absolutely-insoluble\nconstraints, we abandon all attempts to find a context to quantify\nover, and instead make the function fully-polymorphic in whatever\ntype we have found. For two reasons\n a) Minimise downstream errors\n b) Avoid spurious errors from this function\n\nBut NB that we must include *derived* errors in the check. Example:\n (a::*) ~ Int#\nWe get an insoluble derived error *~#, and we don't want to discard\nit before doing the isInsolubleWC test! (Trac #8262)\n\nNote [Default while Inferring]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nOur current plan is that defaulting only happens at simplifyTop and\nnot simplifyInfer. This may lead to some insoluble deferred constraints\nExample:\n\ninstance D g => C g Int b\n\nconstraint inferred = (forall b. 0 => C gamma alpha b) \/\\ Num alpha\ntype inferred = gamma -> gamma\n\nNow, if we try to default (alpha := Int) we will be able to refine the implication to\n (forall b. 0 => C gamma Int b)\nwhich can then be simplified further to\n (forall b. 0 => D gamma)\nFinally we \/can\/ approximate this implication with (D gamma) and infer the quantified\ntype: forall g. D g => g -> g\n\nInstead what will currently happen is that we will get a quantified type\n(forall g. g -> g) and an implication:\n forall g. 0 => (forall b. 0 => C g alpha b) \/\\ Num alpha\n\nwhich, even if the simplifyTop defaults (alpha := Int) we will still be left with an\nunsolvable implication:\n forall g. 0 => (forall b. 
0 => D g)\n\nThe concrete example would be:\n h :: C g a s => g -> a -> ST s a\n f (x::gamma) = (\\_ -> x) (runST (h x (undefined::alpha)) + 1)\n\nBut it is quite tedious to do defaulting and resolve the implication constraints and\nwe have not observed code breaking because of the lack of defaulting in inference so\nwe don't do it for now.\n\n\n\nNote [Minimize by Superclasses]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWhen we quantify over a constraint, in simplifyInfer we need to\nquantify over a constraint that is minimal in some sense: For\ninstance, if the final wanted constraint is (Eq alpha, Ord alpha),\nwe'd like to quantify over Ord alpha, because we can just get Eq alpha\nfrom superclass selection from Ord alpha. This minimization is what\nmkMinimalBySCs does. Then, simplifyInfer uses the minimal constraint\nto check the original wanted.\n\n\nNote [Avoid unnecessary constraint simplification]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n -------- NB NB NB (Jun 12) -------------\n This note no longer applies; see the notes with Trac #4361.\n (But I'm leaving it in here so we remember the issue.)\n ----------------------------------------\nWhen inferring the type of a let-binding, with simplifyInfer,\ntry to avoid unnecessarily simplifying class constraints.\nDoing so aids sharing, but it also helps with delicate\nsituations like\n\n instance C t => C [t] where ..\n\n f :: C [t] => ....\n f x = let g y = ...(constraint C [t])...\n in ...\nWhen inferring a type for 'g', we don't want to apply the\ninstance decl, because then we can't satisfy (C t). 
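The superclass minimisation performed by mkMinimalBySCs, described in Note [Minimize by Superclasses] above, can be sketched as a standalone toy (all names here are invented for illustration; this is not GHC's representation of constraints):

```haskell
-- Toy model of mkMinimalBySCs: keep only the constraints that are not
-- obtainable by superclass selection from another constraint we keep.
data C = EqC | OrdC deriving (Eq, Show)

-- Ord implies Eq, mirroring `class Eq a => Ord a`.
superclasses :: C -> [C]
superclasses OrdC = [EqC]
superclasses EqC  = []

-- Drop any constraint that is a superclass of some other constraint.
minimalBySCs :: [C] -> [C]
minimalBySCs cs = [ c | c <- cs, c `notElem` concatMap superclasses cs ]
```

For instance, `minimalBySCs [EqC, OrdC]` keeps only `OrdC`, matching the (Eq alpha, Ord alpha) example above.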
So we\njust notice that g isn't quantified over 't' and partition\nthe constraints before simplifying.\n\nThis only half-works, but then let-generalisation only half-works.\n\n\n*********************************************************************************\n* *\n* RULES *\n* *\n***********************************************************************************\n\nSee Note [Simplifying RULE constraints] in TcRule\n\nNote [RULE quantification over equalities]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nDeciding which equalities to quantify over is tricky:\n * We do not want to quantify over insoluble equalities (Int ~ Bool)\n (a) because we prefer to report a LHS type error\n (b) because if such things end up in 'givens' we get a bogus\n \"inaccessible code\" error\n\n * But we do want to quantify over things like (a ~ F b), where\n F is a type function.\n\nThe difficulty is that it's hard to tell what is insoluble!\nSo we see whether the simplification step yielded any type errors,\nand if so refrain from quantifying over *any* equalities.\n\n\begin{code}\nsimplifyRule :: RuleName\n -> WantedConstraints -- Constraints from LHS\n -> WantedConstraints -- Constraints from RHS\n -> TcM ([EvVar], WantedConstraints) -- LHS evidence variables\n-- See Note [Simplifying RULE constraints] in TcRule\nsimplifyRule name lhs_wanted rhs_wanted\n = do { -- We allow ourselves to unify environment\n -- variables: runTcS runs with NoUntouchables\n (resid_wanted, _) <- solveWantedsTcM (lhs_wanted `andWC` rhs_wanted)\n -- Post: these are zonked and unflattened\n\n ; zonked_lhs_flats <- zonkCts (wc_flat lhs_wanted)\n ; let (q_cts, non_q_cts) = partitionBag quantify_me zonked_lhs_flats\n quantify_me -- Note [RULE quantification over equalities]\n | insolubleWC resid_wanted = quantify_insol\n | otherwise = quantify_normal\n\n quantify_insol ct = not (isEqPred (ctPred ct))\n\n quantify_normal ct\n | EqPred t1 t2 <- classifyPredType (ctPred ct)\n = not (t1 `tcEqType` t2)\n | otherwise\n 
= True\n\n ; traceTc \"simplifyRule\" $\n vcat [ ptext (sLit \"LHS of rule\") <+> doubleQuotes (ftext name)\n , text \"zonked_lhs_flats\" <+> ppr zonked_lhs_flats\n , text \"q_cts\" <+> ppr q_cts\n , text \"non_q_cts\" <+> ppr non_q_cts ]\n\n ; return ( map (ctEvId . ctEvidence) (bagToList q_cts)\n , lhs_wanted { wc_flat = non_q_cts }) }\n\\end{code}\n\n\n*********************************************************************************\n* *\n* Main Simplifier *\n* *\n***********************************************************************************\n\nNote [Deferring coercion errors to runtime]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWhile developing, sometimes it is desirable to allow compilation to succeed even\nif there are type errors in the code. Consider the following case:\n\n module Main where\n\n a :: Int\n a = 'a'\n\n main = print \"b\"\n\nEven though `a` is ill-typed, it is not used in the end, so if all that we're\ninterested in is `main` it is handy to be able to ignore the problems in `a`.\n\nSince we treat type equalities as evidence, this is relatively simple. Whenever\nwe run into a type mismatch in TcUnify, we normally just emit an error. But it\nis always safe to defer the mismatch to the main constraint solver. If we do\nthat, `a` will get transformed into\n\n co :: Int ~ Char\n co = ...\n\n a :: Int\n a = 'a' `cast` co\n\nThe constraint solver would realize that `co` is an insoluble constraint, and\nemit an error with `reportUnsolved`. But we can also replace the right-hand side\nof `co` with `error \"Deferred type error: Int ~ Char\"`. This allows the program\nto compile, and it will run fine unless we evaluate `a`. This is what\n`deferErrorsToRuntime` does.\n\nIt does this by keeping track of which errors correspond to which coercion\nin TcErrors (with ErrEnv). TcErrors.reportTidyWanteds does not print the errors\nand does not fail if -fdefer-type-errors is on, so that we can continue\ncompilation. 
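As a miniature model of this deferral (a sketch with invented names, not GHC's actual machinery), the unsolved evidence can be played by a binding that only errors when it is forced:

```haskell
import Control.Exception (ErrorCall (..), evaluate, try)

-- Hypothetical stand-in for the evidence `co`: forcing it raises the
-- error that type checking would otherwise have reported at compile time.
deferredEvidence :: Int
deferredEvidence = error "Deferred type error: Int ~ Char"

-- Force a value, observing any deferred error as a runtime exception.
forceEvidence :: Int -> IO (Either ErrorCall Int)
forceEvidence x = try (evaluate x)

-- Like the example above: the program runs fine as long as nothing
-- demands the ill-typed `a`; demanding it surfaces the deferred error.
demo :: IO ()
demo = do
  putStrLn "b"
  r <- forceEvidence deferredEvidence
  case r of
    Left (ErrorCall msg) -> putStrLn ("caught: " ++ msg)
    Right v              -> print v
```

Running `demo` prints "b" and then the caught deferred-error message, mirroring how a program compiled with -fdefer-type-errors only fails when the ill-typed binding is evaluated.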
The errors are turned into warnings in `reportUnsolved`.\n\nNote [Zonk after solving]\n~~~~~~~~~~~~~~~~~~~~~~~~~\nWe zonk the result immediately after constraint solving, mainly\nbecause zonkWC generates evidence, and this is the moment when we\nhave a suitable evidence variable to hand.\n\nNote that *after* solving the constraints are typically small, so the\noverhead is not great.\n\n\\begin{code}\nsolveWantedsTcMWithEvBinds :: EvBindsVar\n -> WantedConstraints\n -> (WantedConstraints -> TcS WantedConstraints)\n -> TcM WantedConstraints\n-- Returns a *zonked* result\n-- We zonk when we finish primarily to un-flatten out any\n-- flatten-skolems etc introduced by canonicalisation of\n-- types involving type functions. Happily the result\n-- is typically much smaller than the input, indeed it is\n-- often empty.\nsolveWantedsTcMWithEvBinds ev_binds_var wc tcs_action\n = do { traceTc \"solveWantedsTcMWithEvBinds\" $ text \"wanted=\" <+> ppr wc\n ; wc2 <- runTcSWithEvBinds ev_binds_var (tcs_action wc)\n ; zonkWC ev_binds_var wc2 }\n -- See Note [Zonk after solving]\n\nsolveWantedsTcM :: WantedConstraints -> TcM (WantedConstraints, Bag EvBind)\n-- Zonk the input constraints, and simplify them\n-- Return the evidence binds in the BagEvBinds result\n-- Discards all Derived stuff in result\n-- Postcondition: fully zonked and unflattened constraints\nsolveWantedsTcM wanted\n = do { ev_binds_var <- newTcEvBinds\n ; wanteds' <- solveWantedsTcMWithEvBinds ev_binds_var wanted solve_wanteds_and_drop\n ; binds <- TcRnMonad.getTcEvBinds ev_binds_var\n ; return (wanteds', binds) }\n\nsolve_wanteds_and_drop :: WantedConstraints -> TcS WantedConstraints\n-- Since solve_wanteds returns the residual WantedConstraints,\n-- it should always be called within a runTcS or something similar,\n-- so that the inert set doesn't mindlessly propagate.\nsolve_wanteds_and_drop wanted = do { wc <- solve_wanteds wanted\n ; return (dropDerivedWC wc) }\n\nsolve_wanteds :: WantedConstraints -> TcS WantedConstraints\n-- NB: wc_flats may be wanted \/or\/ derived now\nsolve_wanteds wanted@(WC { wc_flat = flats, wc_impl = implics, wc_insol = insols })\n = do { traceTcS \"solveWanteds {\" (ppr wanted)\n\n -- Try the flat bit, including insolubles. Solving insolubles a\n -- second time round is a bit of a waste; but the code is simple\n -- and the program is wrong anyway, and we don't run the danger\n -- of adding Derived insolubles twice; see\n -- TcSMonad Note [Do not add duplicate derived insolubles]\n ; traceTcS \"solveFlats {\" empty\n ; let all_flats = flats `unionBags` insols\n ; impls_from_flats <- solveInteract all_flats\n ; traceTcS \"solveFlats end }\" (ppr impls_from_flats)\n\n -- solve_wanteds iterates when it is able to float equalities\n -- out of one or more of the implications.\n ; unsolved_implics <- simpl_loop 1 (implics `unionBags` impls_from_flats)\n\n ; (unsolved_flats, insoluble_flats) <- getInertUnsolved\n\n -- We used to unflatten here but now we only do it once at top-level\n -- during zonking -- see Note [Unflattening while zonking] in TcMType\n ; let wc = WC { wc_flat = unsolved_flats\n , wc_impl = unsolved_implics\n , wc_insol = insoluble_flats }\n\n ; bb <- getTcEvBindsMap\n ; tb <- getTcSTyBindsMap\n ; traceTcS \"solveWanteds }\" $\n vcat [ text \"unsolved_flats =\" <+> ppr unsolved_flats\n , text \"unsolved_implics =\" <+> ppr unsolved_implics\n , text \"current evbinds =\" <+> ppr (evBindMapBinds bb)\n , text \"current tybinds =\" <+> vcat (map ppr (varEnvElts tb))\n , text \"final wc =\" <+> ppr wc ]\n\n ; return wc }\n\nsimpl_loop :: Int\n -> Bag Implication\n -> TcS (Bag Implication)\nsimpl_loop n implics\n | n > 10\n = traceTcS \"solveWanteds: loop!\" empty >> return implics\n | otherwise\n = do { traceTcS \"simpl_loop, iteration\" (int n)\n ; (floated_eqs, unsolved_implics) <- solveNestedImplications implics\n ; if isEmptyBag floated_eqs\n then return unsolved_implics\n else\n do { -- Put floated_eqs into the 
current inert set before looping\n (unifs_happened, impls_from_eqs) <- reportUnifications $\n solveInteract floated_eqs\n ; if -- See Note [Cutting off simpl_loop]\n isEmptyBag impls_from_eqs &&\n not unifs_happened && -- (a)\n not (anyBag isCFunEqCan floated_eqs) -- (b)\n then return unsolved_implics\n else simpl_loop (n+1) (unsolved_implics `unionBags` impls_from_eqs) } }\n\nsolveNestedImplications :: Bag Implication\n -> TcS (Cts, Bag Implication)\n-- Precondition: the TcS inerts may contain unsolved flats which have\n-- to be converted to givens before we go inside a nested implication.\nsolveNestedImplications implics\n | isEmptyBag implics\n = return (emptyBag, emptyBag)\n | otherwise\n = do { inerts <- getTcSInerts\n ; let thinner_inerts = prepareInertsForImplications inerts\n -- See Note [Preparing inert set for implications]\n\n ; traceTcS \"solveNestedImplications starting {\" $\n vcat [ text \"original inerts = \" <+> ppr inerts\n , text \"thinner_inerts = \" <+> ppr thinner_inerts ]\n\n ; (floated_eqs, unsolved_implics)\n <- flatMapBagPairM (solveImplication thinner_inerts) implics\n\n -- ... 
and we are back in the original TcS inerts\n -- Notice that the original includes the _insoluble_flats so it was safe to ignore\n -- them in the beginning of this function.\n ; traceTcS \"solveNestedImplications end }\" $\n vcat [ text \"all floated_eqs =\" <+> ppr floated_eqs\n , text \"unsolved_implics =\" <+> ppr unsolved_implics ]\n\n ; return (floated_eqs, unsolved_implics) }\n\nsolveImplication :: InertSet\n -> Implication -- Wanted\n -> TcS (Cts, -- All wanted or derived floated equalities: var = type\n Bag Implication) -- Unsolved rest (always empty or singleton)\n-- Precondition: The TcS monad contains an empty worklist and given-only inerts\n-- which after trying to solve this implication we must restore to their original value\nsolveImplication inerts\n imp@(Implic { ic_untch = untch\n , ic_binds = ev_binds\n , ic_skols = skols\n , ic_fsks = old_fsks\n , ic_given = givens\n , ic_wanted = wanteds\n , ic_info = info\n , ic_env = env })\n = do { traceTcS \"solveImplication {\" (ppr imp)\n\n -- Solve the nested constraints\n ; (no_given_eqs, new_fsks, residual_wanted)\n <- nestImplicTcS ev_binds untch inerts $\n do { (no_eqs, new_fsks) <- solveInteractGiven (mkGivenLoc info env)\n old_fsks givens\n\n ; residual_wanted <- solve_wanteds wanteds\n -- solve_wanteds, *not* solve_wanteds_and_drop, because\n -- we want to retain derived equalities so we can float\n -- them out in floatEqualities\n\n ; return (no_eqs, new_fsks, residual_wanted) }\n\n ; (floated_eqs, final_wanted)\n <- floatEqualities (skols ++ new_fsks) no_given_eqs residual_wanted\n\n ; let res_implic | isEmptyWC final_wanted && no_given_eqs\n = emptyBag -- Reason for the no_given_eqs: we don't want to\n -- lose the \"inaccessible code\" error message\n | otherwise\n = unitBag (imp { ic_fsks = new_fsks\n , ic_no_eqs = no_given_eqs\n , ic_wanted = dropDerivedWC final_wanted\n , ic_insol = insolubleWC final_wanted })\n\n ; evbinds <- getTcEvBindsMap\n ; traceTcS \"solveImplication end }\" $ vcat\n [ 
text \"no_given_eqs =\" <+> ppr no_given_eqs\n , text \"floated_eqs =\" <+> ppr floated_eqs\n , text \"new_fsks =\" <+> ppr new_fsks\n , text \"res_implic =\" <+> ppr res_implic\n , text \"implication evbinds = \" <+> ppr (evBindMapBinds evbinds) ]\n\n ; return (floated_eqs, res_implic) }\n\\end{code}\n\nNote [Cutting off simpl_loop]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIt is very important not to iterate in simpl_loop unless there is a chance\nof progress. Trac #8474 is a classic example:\n\n * There's a deeply-nested chain of implication constraints.\n ?x:alpha => ?y1:beta1 => ... ?yn:betan => [W] ?x:Int\n\n * From the innermost one we get a [D] alpha ~ Int,\n but alpha is untouchable until we get out to the outermost one\n\n * We float [D] alpha~Int out (it is in floated_eqs), but since alpha\n is untouchable, the solveInteract in simpl_loop makes no progress\n\n * So there is no point in attempting to re-solve\n ?yn:betan => [W] ?x:Int\n because we'll just get the same [D] again\n\n * If we *do* re-solve, we'll get an infinite loop. It is cut off by\n the fixed bound of 10, but solving the next one takes 10*10*...*10 (i.e.\n exponentially many) iterations!\n\nConclusion: we should iterate simpl_loop iff we will get more 'givens'\nin the inert set when solving the nested implications. That is, the\nresult of prepareInertsForImplications is larger. How can we tell\nthis?\n\nConsider floated_eqs (all wanted or derived):\n\n(a) [W\/D] CTyEqCan (a ~ ty). This can give rise to a new given only by causing\n a unification. So we count those unifications.\n\n(b) [W] CFunEqCan (F tys ~ xi). Even though these are wanted, they\n are pushed in as givens by prepareInertsForImplications. See Note\n [Preparing inert set for implications] in TcSMonad. But because\n of that very fact, we won't generate another copy if we iterate\n simpl_loop. 
So we iterate if there are any of these\n\n\\begin{code}\nfloatEqualities :: [TcTyVar] -> Bool -> WantedConstraints\n -> TcS (Cts, WantedConstraints)\n-- Post: The returned floated constraints (Cts) are only Wanted or Derived\n-- and come from the input wanted ev vars or deriveds\n-- Also performs some unifications, adding to monadically-carried ty_binds\n-- These will be used when processing floated_eqs later\nfloatEqualities skols no_given_eqs wanteds@(WC { wc_flat = flats })\n | not no_given_eqs -- There are some given equalities, so don't float\n = return (emptyBag, wanteds) -- Note [Float Equalities out of Implications]\n | otherwise\n = do { let (float_eqs, remaining_flats) = partitionBag is_floatable flats\n ; untch <- TcS.getUntouchables\n ; mapM_ (promoteTyVar untch) (varSetElems (tyVarsOfCts float_eqs))\n -- See Note [Promoting unification variables]\n ; ty_binds <- getTcSTyBindsMap\n ; traceTcS \"floatEqualities\" (vcat [ text \"Flats =\" <+> ppr flats\n , text \"Floated eqs =\" <+> ppr float_eqs\n , text \"Ty binds =\" <+> ppr ty_binds])\n ; return (float_eqs, wanteds { wc_flat = remaining_flats }) }\n where\n -- See Note [Float equalities from under a skolem binding]\n skol_set = fixVarSet mk_next (mkVarSet skols)\n mk_next tvs = foldrBag grow_one tvs flats\n grow_one (CFunEqCan { cc_tyargs = xis, cc_rhs = rhs }) tvs\n | intersectsVarSet tvs (tyVarsOfTypes xis)\n = tvs `unionVarSet` tyVarsOfType rhs\n grow_one _ tvs = tvs\n\n is_floatable :: Ct -> Bool\n is_floatable ct = isEqPred pred && skol_set `disjointVarSet` tyVarsOfType pred\n where\n pred = ctPred ct\n\npromoteTyVar :: Untouchables -> TcTyVar -> TcS ()\n-- When we float a constraint out of an implication we must restore\n-- invariant (MetaTvInv) in Note [Untouchable type variables] in TcType\n-- See Note [Promoting unification variables]\npromoteTyVar untch tv\n | isFloatedTouchableMetaTyVar untch tv\n = do { cloned_tv <- TcS.cloneMetaTyVar tv\n ; let rhs_tv = setMetaTyVarUntouchables cloned_tv 
untch\n ; setWantedTyBind tv (mkTyVarTy rhs_tv) }\n | otherwise\n = return ()\n\npromoteAndDefaultTyVar :: Untouchables -> TcTyVarSet -> TyVar -> TcS ()\n-- See Note [Promote _and_ default when inferring]\npromoteAndDefaultTyVar untch gbl_tvs tv\n = do { tv1 <- if tv `elemVarSet` gbl_tvs\n then return tv\n else defaultTyVar tv\n ; promoteTyVar untch tv1 }\n\ndefaultTyVar :: TcTyVar -> TcS TcTyVar\n-- Precondition: MetaTyVars only\n-- See Note [DefaultTyVar]\ndefaultTyVar the_tv\n | Just default_k <- defaultKind_maybe (tyVarKind the_tv)\n = do { tv' <- TcS.cloneMetaTyVar the_tv\n ; let new_tv = setTyVarKind tv' default_k\n ; traceTcS \"defaultTyVar\" (ppr the_tv <+> ppr new_tv)\n ; setWantedTyBind the_tv (mkTyVarTy new_tv)\n ; return new_tv }\n -- Why not directly derived_pred = mkTcEqPred k default_k?\n -- See Note [DefaultTyVar]\n -- We keep the same Untouchables on tv'\n\n | otherwise = return the_tv -- The common case\n\napproximateWC :: WantedConstraints -> Cts\n-- Postcondition: Wanted or Derived Cts\n-- See Note [ApproximateWC]\napproximateWC wc\n = float_wc emptyVarSet wc\n where\n float_wc :: TcTyVarSet -> WantedConstraints -> Cts\n float_wc trapping_tvs (WC { wc_flat = flats, wc_impl = implics })\n = filterBag is_floatable flats `unionBags`\n do_bag (float_implic new_trapping_tvs) implics\n where\n new_trapping_tvs = fixVarSet grow trapping_tvs\n is_floatable ct = tyVarsOfCt ct `disjointVarSet` new_trapping_tvs\n\n grow tvs = foldrBag grow_one tvs flats\n grow_one ct tvs | ct_tvs `intersectsVarSet` tvs = tvs `unionVarSet` ct_tvs\n | otherwise = tvs\n where\n ct_tvs = tyVarsOfCt ct\n\n float_implic :: TcTyVarSet -> Implication -> Cts\n float_implic trapping_tvs imp\n | ic_no_eqs imp -- No equalities, so float\n = float_wc new_trapping_tvs (ic_wanted imp)\n | otherwise -- Don't float out of equalities\n = emptyCts -- See Note [ApproximateWC]\n where\n new_trapping_tvs = trapping_tvs `extendVarSetList` ic_skols imp\n `extendVarSetList` ic_fsks imp\n do_bag :: 
(a -> Bag c) -> Bag a -> Bag c\n do_bag f = foldrBag (unionBags.f) emptyBag\n\\end{code}\n\nNote [ApproximateWC]\n~~~~~~~~~~~~~~~~~~~~\napproximateWC takes a constraint, typically arising from the RHS of a\nlet-binding whose type we are *inferring*, and extracts from it some\n*flat* constraints that we might plausibly abstract over. Of course\nthe top-level flat constraints are plausible, but we also float constraints\nout from inside, if they are not captured by skolems.\n\nThe same function is used when doing type-class defaulting (see the call\nto applyDefaultingRules) to extract constraints that might be defaulted.\n\nThere are two caveats:\n\n1. We do *not* float anything out if the implication binds equality\n constraints, because that defeats the OutsideIn story. Consider\n data T a where\n TInt :: T Int\n MkT :: T a\n\n f TInt = 3::Int\n\n We get the implication (a ~ Int => res ~ Int), where so far we've decided\n f :: T a -> res\n We don't want to float (res~Int) out because then we'll infer\n f :: T a -> Int\n which is only one of the possible types. (GHC 7.6 accidentally *did*\n float out of such implications, which meant it would happily infer\n non-principal types.)\n\n2. We do not float out an inner constraint that shares a type variable\n (transitively) with one that is trapped by a skolem. Eg\n forall a. F a ~ beta, Integral beta\n We don't want to float out (Integral beta). Doing so would be bad\n when defaulting, because then we'll default beta:=Integer, and that\n makes the error message much worse; we'd get\n Can't solve F a ~ Integer\n rather than\n Can't solve Integral (F a)\n\n Moreover, floating out these \"contaminated\" constraints doesn't help\n when generalising either. If we generalise over (Integral b), we still\n can't solve the retained implication (forall a. F a ~ b). 
Indeed,\narguably that too would be a harder error to understand.\n\nNote [DefaultTyVar]\n~~~~~~~~~~~~~~~~~~~\ndefaultTyVar is used on any un-instantiated meta type variables to\ndefault the kind of OpenKind and ArgKind etc to *. This is important\nto ensure that instance declarations match. For example consider\n\n instance Show (a->b)\n foo x = show (\\_ -> True)\n\nThen we'll get a constraint (Show (p ->q)) where p has kind ArgKind,\nand that won't match the typeKind (*) in the instance decl. See tests\ntc217 and tc175.\n\nWe look only at touchable type variables. No further constraints\nare going to affect these type variables, so it's time to do it by\nhand. However we aren't ready to default them fully to () or\nwhatever, because the type-class defaulting rules have yet to run.\n\nAn important point is that if the type variable tv has kind k and the\ndefault is default_k we do not simply generate [D] (k ~ default_k) because:\n\n (1) k may be ArgKind and default_k may be * so we will fail\n\n (2) We need to rewrite all occurrences of the tv to be a type\n variable with the right kind and we choose to do this by rewriting\n the type variable \/itself\/ by a new variable which does have the\n right kind.\n\nNote [Promote _and_ default when inferring]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWhen we are inferring a type, we simplify the constraint, and then use\napproximateWC to produce a list of candidate constraints. Then we MUST\n\n a) Promote any meta-tyvars that have been floated out by\n approximateWC, to restore invariant (MetaTvInv) described in\n Note [Untouchable type variables] in TcType.\n\n b) Default the kind of any meta-tyvars that are not mentioned in\n the environment.\n\nTo see (b), suppose the constraint is (C ((a :: OpenKind) -> Int)), and we\nhave an instance (C ((x:*) -> Int)). The instance doesn't match -- but it\nshould! 
If we don't solve the constraint, we'll stupidly quantify over\n(C (a->Int)) and, worse, in doing so zonkQuantifiedTyVar will quantify over\n(b:*) instead of (a:OpenKind), which can lead to disaster; see Trac #7332.\nTrac #7641 is a simpler example.\n\nNote [Float Equalities out of Implications]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nFor ordinary pattern matches (including existentials) we float\nequalities out of implications, for instance:\n data T where\n MkT :: Eq a => a -> T\n f x y = case x of MkT _ -> (y::Int)\nWe get the implication constraint (x::T) (y::alpha):\n forall a. [untouchable=alpha] Eq a => alpha ~ Int\nWe want to float out the equality into a scope where alpha is no\nlonger untouchable, to solve the implication!\n\nBut we cannot float equalities out of implications whose givens may\nyield or contain equalities:\n\n data T a where\n T1 :: T Int\n T2 :: T Bool\n T3 :: T a\n\n h :: T a -> a -> Int\n\n f x y = case x of\n T1 -> y::Int\n T2 -> y::Bool\n T3 -> h x y\n\nWe generate constraint, for (x::T alpha) and (y :: beta):\n [untouchables = beta] (alpha ~ Int => beta ~ Int) -- From 1st branch\n [untouchables = beta] (alpha ~ Bool => beta ~ Bool) -- From 2nd branch\n (alpha ~ beta) -- From 3rd branch\n\nIf we float the equality (beta ~ Int) outside of the first implication and\nthe equality (beta ~ Bool) out of the second we get an insoluble constraint.\nBut if we just leave them inside the implications we unify alpha := beta and\nsolve everything.\n\nPrinciple:\n We do not want to float equalities out which may\n need the given *evidence* to become soluble.\n\nConsequence: classes with functional dependencies don't matter (since there is\nno evidence for a fundep equality), but equality superclasses do matter (since\nthey carry evidence).\n\nNote [When does an implication have given equalities?]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nConsider an implication\n beta => alpha ~ Int\nwhere beta is a unification variable that 
has already been unified\nto () in an outer scope. Then we can float the (alpha ~ Int) out\njust fine. So when deciding whether the givens contain an equality,\nwe should canonicalise first, rather than just looking at the original\ngivens (Trac #8644).\n\nThis is the entire reason for the inert_no_eqs field in InertCans.\nWe initialise it to False before processing the Givens of an implication,\nand set it to True when adding an inert equality in addInertCan.\n\nHowever, when flattening givens, we generate given equalities like\n : F [a] ~ f,\nwith Refl evidence, and we *don't* want those to count as an equality\nin the givens! After all, the entire flattening business is just an\ninternal matter, and the evidence does not mention any of the 'givens'\nof this implication.\n\nSo we set the flag to False when adding an equality\n(TcSMonad.addInertCan) whose evidence has CtOrigin\nFlatSkolOrigin; see TcSMonad.isFlatSkolEv. Note that we may transform\nthe original flat-skol equality before adding it to the inerts, so\nit's important that the transformation preserves origin (which\nxCtEvidence and rewriteEvidence both do). Example\n type instance F [a] = Maybe a\n implication: C (F [a]) => blah\n We flatten (C (F [a])) to C fsk, with : F [a] ~ fsk\n Then we reduce the F [a] LHS, giving\n g22 = ax7 ; \n g22 : Maybe a ~ fsk\n And before adding g22 we'll re-orient it to an ordinary tyvar\n equality. None of this should count as \"adding a given equality\".\n This really happens (Trac #8651).\n\nAn alternative we considered was to\n * Accumulate the new inert equalities (in TcSMonad.addInertCan)\n * In solveInteractGiven, check whether the evidence for the new\n equalities mentions any of the ic_givens of this implication.\nThis seems like the Right Thing, but it's more code, and more work\nat runtime, so we are using the FlatSkolOrigin idea instead. 
It's less\nobvious that it works, but I think it does, and it's simple and efficient.\n\n\nNote [Float equalities from under a skolem binding]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nYou might worry about skolem escape with all this floating.\nFor example, consider\n [2] forall a. (a ~ F beta[2] delta,\n Maybe beta[2] ~ gamma[1])\n\nThe (Maybe beta ~ gamma) doesn't mention 'a', so we float it, and\nsolve with gamma := beta. But what if later delta:=Int, and\n F b Int = b.\nThen we'd get a ~ beta[2], and solve to get beta:=a, and now the\nskolem has escaped!\n\nBut it's ok: when we float (Maybe beta[2] ~ gamma[1]), we promote beta[2]\nto beta[1], and that means the (a ~ beta[1]) will be stuck, as it should be.\n\nPreviously we tried to \"grow\" the skol_set with the constraints, to get\nall the tyvars that could *conceivably* unify with the skolems, but that\nwas far too conservative (Trac #7804). Example: this should be fine:\n f :: (forall a. a -> Proxy x -> Proxy (F x)) -> Int\n f = error \"Urk\" :: (forall a. a -> Proxy x -> Proxy (F x)) -> Int\n\nBUT (sigh) we have to be careful. Here are some edge cases:\n\na) [2]forall a. (F a delta[1] ~ beta[2], delta[1] ~ Maybe beta[2])\nb) [2]forall a. (F b ty ~ beta[2], G beta[2] ~ gamma[2])\nc) [2]forall a. (F a ty ~ beta[2], delta[1] ~ Maybe beta[2])\n\nIn (a) we *must* float out the second equality,\n else we can't solve at all (Trac #7804).\n\nIn (b) we *must not* float out the second equality.\n It will ultimately be solved (by flattening) in situ, but if we\n float it we'll promote beta, gamma, and render the first equality insoluble.\n\nIn (c) it would be OK to float the second equality but better not to.\n If we flatten we see (delta[1] ~ Maybe (F a ty)), which is a\n skolem-escape problem. 
If we float the second equality we'll\n end up with (F a ty ~ beta'[1]), which is a less explicable error.\n\nHence we start with the skolems, grow them by the CFunEqCans, and\nfloat ones that don't mention the grown variables. Seems very ad hoc.\n\nNote [Promoting unification variables]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWhen we float an equality out of an implication we must \"promote\" free\nunification variables of the equality, in order to maintain Invariant\n(MetaTvInv) from Note [Untouchable type variables] in TcType, for the\nleftover implication.\n\nThis is absolutely necessary. Consider the following example. We start\nwith two implications and a class with a functional dependency.\n\n class C x y | x -> y\n instance C [a] [a]\n\n (I1) [untch=beta]forall b. 0 => F Int ~ [beta]\n (I2) [untch=beta]forall c. 0 => F Int ~ [[alpha]] \/\\ C beta [c]\n\nWe float (F Int ~ [beta]) out of I1, and we float (F Int ~ [[alpha]]) out of I2.\nThey may react to yield that (beta := [alpha]) which can then be pushed inwards\ninto the leftover of I2 to get (C [alpha] [a]) which, using the FunDep, will mean that\n(alpha := a). In the end we will have the skolem 'b' escaping in the untouchable\nbeta! A concrete example is in indexed_types\/should_fail\/ExtraTcsUntch.hs:\n\n class C x y | x -> y where\n op :: x -> y -> ()\n\n instance C [a] [a]\n\n type family F a :: *\n\n h :: F Int -> ()\n h = undefined\n\n data TEx where\n TEx :: a -> TEx\n\n\n f (x::beta) =\n let g1 :: forall b. 
b -> ()\n g1 _ = h [x]\n g2 z = case z of TEx y -> (h [[undefined]], op x [y])\n in (g1 '3', g2 undefined)\n\n\n\nNote [Solving Family Equations]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nAfter we are done with simplification we may be left with constraints of the form:\n [Wanted] F xis ~ beta\nIf 'beta' is a touchable unification variable not already bound in the TyBinds\nthen we'd like to create a binding for it, effectively \"defaulting\" it to be 'F xis'.\n\nWhen is it ok to do so?\n 1) 'beta' must not already be defaulted to something. Example:\n\n [Wanted] F Int ~ beta <~ Will default [beta := F Int]\n [Wanted] F Char ~ beta <~ Already defaulted, can't default again. We\n have to report this as unsolved.\n\n 2) However, we must still do an occurs check when defaulting (F xis ~ beta), to\n set [beta := F xis] only if beta is not among the free variables of xis.\n\n 3) Notice that 'beta' can't be bound in ty binds already because we rewrite RHS\n of type family equations. See Inert Set invariants in TcInteract.\n\nThis solving is now happening during zonking, see Note [Unflattening while zonking]\nin TcMType.\n\n\n*********************************************************************************\n* *\n* Defaulting and disambiguation *\n* *\n*********************************************************************************\n\n\\begin{code}\napplyDefaultingRules :: Cts -> TcS Bool\n -- True <=> I did some defaulting, reflected in ty_binds\n\n-- Return some extra derived equalities, which express the\n-- type-class default choice.\napplyDefaultingRules wanteds\n | isEmptyBag wanteds\n = return False\n | otherwise\n = do { traceTcS \"applyDefaultingRules { \" $\n text \"wanteds =\" <+> ppr wanteds\n\n ; info@(default_tys, _) <- getDefaultInfo\n ; let groups = findDefaultableGroups info wanteds\n ; traceTcS \"findDefaultableGroups\" $ vcat [ text \"groups=\" <+> ppr groups\n , text \"info=\" <+> ppr info ]\n ; something_happeneds <- mapM (disambigGroup default_tys) 
groups\n\n ; traceTcS \"applyDefaultingRules }\" (ppr something_happeneds)\n\n ; return (or something_happeneds) }\n\\end{code}\n\n\n\n\\begin{code}\nfindDefaultableGroups\n :: ( [Type]\n , (Bool,Bool) ) -- (Overloaded strings, extended default rules)\n -> Cts -- Unsolved (wanted or derived)\n -> [[(Ct,Class,TcTyVar)]]\nfindDefaultableGroups (default_tys, (ovl_strings, extended_defaults)) wanteds\n | null default_tys = []\n | otherwise = defaultable_groups\n where\n defaultable_groups = filter is_defaultable_group groups\n groups = equivClasses cmp_tv unaries\n unaries :: [(Ct, Class, TcTyVar)] -- (C tv) constraints\n non_unaries :: [Ct] -- and *other* constraints\n\n (unaries, non_unaries) = partitionWith find_unary (bagToList wanteds)\n -- Finds unary type-class constraints\n -- But take account of polykinded classes like Typeable,\n -- which may look like (Typeable * (a:*)) (Trac #8931)\n find_unary cc\n | Just (cls,tys) <- getClassPredTys_maybe (ctPred cc)\n , Just (kinds, ty) <- snocView tys\n , all isKind kinds\n , Just tv <- tcGetTyVar_maybe ty\n , isMetaTyVar tv -- We might have runtime-skolems in GHCi, and\n -- we definitely don't want to try to assign to those!\n = Left (cc, cls, tv)\n find_unary cc = Right cc -- Non unary or non dictionary\n\n bad_tvs :: TcTyVarSet -- TyVars mentioned by non-unaries\n bad_tvs = foldr (unionVarSet . 
tyVarsOfCt) emptyVarSet non_unaries\n\n cmp_tv (_,_,tv1) (_,_,tv2) = tv1 `compare` tv2\n\n is_defaultable_group ds@((_,_,tv):_)\n = let b1 = isTyConableTyVar tv -- Note [Avoiding spurious errors]\n b2 = not (tv `elemVarSet` bad_tvs)\n b4 = defaultable_classes [cls | (_,cls,_) <- ds]\n in (b1 && b2 && b4)\n is_defaultable_group [] = panic \"defaultable_group\"\n\n defaultable_classes clss\n | extended_defaults = any isInteractiveClass clss\n | otherwise = all is_std_class clss && (any is_num_class clss)\n\n -- In interactive mode, or with -XExtendedDefaultRules,\n -- we default Show a to Show () to avoid gratuitous errors on \"show []\"\n isInteractiveClass cls\n = is_num_class cls || (classKey cls `elem` [showClassKey, eqClassKey, ordClassKey])\n\n is_num_class cls = isNumericClass cls || (ovl_strings && (cls `hasKey` isStringClassKey))\n -- is_num_class adds IsString to the standard numeric classes,\n -- when -foverloaded-strings is enabled\n\n is_std_class cls = isStandardClass cls || (ovl_strings && (cls `hasKey` isStringClassKey))\n -- Similarly is_std_class\n\n------------------------------\ndisambigGroup :: [Type] -- The default types\n -> [(Ct, Class, TcTyVar)] -- All classes of the form (C a)\n -- sharing same type variable\n -> TcS Bool -- True <=> something happened, reflected in ty_binds\n\ndisambigGroup [] _grp\n = return False\ndisambigGroup (default_ty:default_tys) group\n = do { traceTcS \"disambigGroup {\" (ppr group $$ ppr default_ty)\n ; success <- tryTcS $ -- Why tryTcS? 
If this attempt fails, we want to\n -- discard all side effects from the attempt\n do { setWantedTyBind the_tv default_ty\n ; implics_from_defaulting <- solveInteract wanteds\n ; MASSERT(isEmptyBag implics_from_defaulting)\n -- I am not certain if any implications can be generated\n -- but I am letting this fail aggressively if this ever happens.\n\n ; checkAllSolved }\n\n ; if success then\n -- Success: record the type variable binding, and return\n do { setWantedTyBind the_tv default_ty\n ; wrapWarnTcS $ warnDefaulting wanteds default_ty\n ; traceTcS \"disambigGroup succeeded }\" (ppr default_ty)\n ; return True }\n else\n -- Failure: try with the next type\n do { traceTcS \"disambigGroup failed, will try other default types }\"\n (ppr default_ty)\n ; disambigGroup default_tys group } }\n where\n ((_,_,the_tv):_) = group\n wanteds = listToBag (map fstOf3 group)\n\\end{code}\n\nNote [Avoiding spurious errors]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWhen doing the unification for defaulting, we check for skolem\ntype variables, and simply don't default them. For example:\n f = (*) -- Monomorphic\n g :: Num a => a -> a\n g x = f x x\nHere, we get a complaint when checking the type signature for g,\nthat g isn't polymorphic enough; but then we get another one when\ndealing with the (Num a) context arising from f's definition;\nwe try to unify a with Int (to default it), but find that it's\nalready been unified with the rigid variable from g's type sig.\n\n\nSUDOKU IN HASKELL\nGraham Hutton, January 2021\nBased upon notes by Richard Bird\n\nThe program developed in this note is a good example of what\nhas been termed \"wholemeal programming\", the idea of focusing\non entire data structures rather than on their elements. 
This\napproach naturally leads to a compositional style, and relies\non lazy evaluation for efficiency. Our development proceeds\nby starting with a simple but impractical program, which is\nthen refined in a series of steps, ending up with a program\nthat can solve any newspaper Sudoku puzzle in an instant. \n\n> module Hutton where\n\nLibrary file\n-------------\n\nWe use a few things from the list library:\n\n> import Data.List \n\n\nBasic declarations\n------------------\n\nWe begin with some basic declarations, using the terminology of\nsudoku.org.uk. Although the declarations do not enforce it,\nwe will only consider non-empty square matrices with a multiple\nof boxsize (defined in the next section) rows. This assumption\nis important for various properties that we rely on.\n\n> type Grid = Matrix Value\n>\n> type Matrix a = [Row a]\n>\n> type Row a = [a]\n>\n> type Value = Char\n\n\nBasic definitions \n-----------------\n\n> boxsize :: Int\n> boxsize = 3\n>\n> values :: [Value]\n> values = ['1'..'9']\n>\n> empty :: Value -> Bool\n> empty = (== '.')\n>\n> single :: [a] -> Bool\n> single [_] = True\n> single _ = False\n\n\nExample grids\n-------------\n\nSolvable only using the basic rules:\n\n> easy :: Grid\n> easy = [\"2....1.38\",\n> \"........5\",\n> \".7...6...\",\n> \".......13\",\n> \".981..257\",\n> \"31....8..\",\n> \"9..8...2.\",\n> \".5..69784\",\n> \"4..25....\"]\n\nFirst gentle example from sudoku.org.uk:\n\n> gentle :: Grid\n> gentle = [\".1.42...5\",\n> \"..2.71.39\",\n> \".......4.\",\n> \"2.71....6\",\n> \"....4....\",\n> \"6....74.3\",\n> \".7.......\",\n> \"12.73.5..\",\n> \"3...82.7.\"]\n\nFirst diabolical example:\n\n> diabolical :: Grid\n> diabolical = [\".9.7..86.\",\n> \".31..5.2.\",\n> \"8.6......\",\n> \"..7.5...6\",\n> \"...3.7...\",\n> \"5...1.7..\",\n> \"......1.9\",\n> \".2.6..35.\",\n> \".54..8.7.\"]\n\nFirst \"unsolvable\" (requires backtracking) example:\n\n> unsolvable :: Grid\n> unsolvable = [\"1..9.7..3\",\n> 
\".8.....7.\",\n> \"..9...6..\",\n> \"..72.94..\",\n> \"41.....95\",\n> \"..85.43..\",\n> \"..3...7..\",\n> \".5.....4.\",\n> \"2..8.6..9\"]\n\nMinimal sized grid (17 values) with a unique solution:\n\n> minimal :: Grid\n> minimal = [\".98......\",\n> \"....7....\",\n> \"....15...\",\n> \"1........\",\n> \"...2....9\",\n> \"...9.6.82\",\n> \".......3.\",\n> \"5.1......\",\n> \"...4...2.\"]\n\nEmpty grid:\n\n> blank :: Grid\n> blank = replicate n (replicate n '.')\n> where n = boxsize ^ 2\n\n\nExtracting rows, columns and boxes\n----------------------------------\n\nExtracting rows is trivial:\n\n> rows :: Matrix a -> [Row a]\n> rows = id\n\nWe also have, trivially, that rows . rows = id. This property (and\nsimilarly for cols and boxs) will be important later on.\n\nExtracting columns is just matrix transposition:\n\n> cols :: Matrix a -> [Row a]\n> cols = transpose\n\nExample: cols [[1,2],[3,4]] = [[1,3],[2,4]]. Exercise: define\ntranspose, without looking at the library definition.\n\nWe also have that cols . cols = id.\n\nExtracting boxes is more complicated:\n\n> boxs :: Matrix a -> [Row a]\n> boxs = unpack . map cols . pack\n> where\n> pack = split . map split\n> split = chop boxsize\n> unpack = map concat . concat\n>\n> chop :: Int -> [a] -> [[a]]\n> chop n [] = []\n> chop n xs = take n xs : chop n (drop n xs)\n\nExample: if boxsize = 2, then we have \n\n [[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]\n\n |\n pack\n |\n v\n\n [[[[1,2],[3,4]],[[5,6],[7,8]]],[[[9,10],[11,12]],[[13,14],[15,16]]]]\n\n |\n map cols\n |\n v\n\n [[[[1,2],[5,6]],[[3,4],[7,8]]],[[[9,10],[13,14]],[[11,12],[15,16]]]]\n\n | \n unpack\n |\n v\n\n [[1,2,5,6],[3,4,7,8],[9,10,13,14],[11,12,15,16]]\n\nNote that concat . split = id, and moreover, boxs . boxs = id.\n\n\nValidity checking\n-----------------\n\nNow let us turn our attention from matrices to Sudoku grids. 
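Before moving on to validity, the box-extraction pipeline from the previous section can be checked as a standalone program. This is a self-contained copy of `chop` and `boxs` specialised to boxsize 2 (only `transpose` comes from the library), so the 4x4 worked example can be verified mechanically:

```haskell
import Data.List (transpose)

-- Self-contained copy of the box-extraction pipeline, specialised
-- to boxsize 2 so the 4x4 example is easy to check by hand.
boxsize :: Int
boxsize = 2

chop :: Int -> [a] -> [[a]]
chop _ [] = []
chop n xs = take n xs : chop n (drop n xs)

boxs :: [[a]] -> [[a]]
boxs = unpack . map transpose . pack
  where
    pack   = split . map split
    split  = chop boxsize
    unpack = map concat . concat

example :: [[Int]]
example = [[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]
```

Evaluating `boxs example` gives `[[1,2,5,6],[3,4,7,8],[9,10,13,14],[11,12,15,16]]`, matching the diagram, and `boxs (boxs example) == example` confirms that boxs is its own inverse.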
A grid\nis valid if there are no duplicates in any row, column or box:\n\n> valid :: Grid -> Bool\n> valid g = all nodups (rows g) &&\n> all nodups (cols g) &&\n> all nodups (boxs g)\n>\n> nodups :: Eq a => [a] -> Bool\n> nodups [] = True\n> nodups (x:xs) = x `notElem` xs && nodups xs\n\n\nA basic solver\n--------------\n\nThe function choices replaces blank squares in a grid by all possible\nvalues for that square, giving a matrix of choices:\n\n> type Choices = [Value]\n>\n> choices :: Grid -> Matrix Choices\n> choices = map (map choice)\n> where\n> choice v = if empty v then values else [v]\n\nReducing a matrix of choices to a choice of matrices can be defined \nin terms of the normal cartesian product of a list of lists, which\ngeneralises the cartesian product of two lists:\n\n> cp :: [[a]] -> [[a]]\n> cp [] = [[]]\n> cp (xs:xss) = [y:ys | y <- xs, ys <- cp xss]\n\nFor example, cp [[1,2],[3,4],[5,6]] gives:\n\n [[1,3,5],[1,3,6],[1,4,5],[1,4,6],[2,3,5],[2,3,6],[2,4,5],[2,4,6]]\n\nIt is now simple to collapse a matrix of choices:\n\n> collapse :: Matrix [a] -> [Matrix a]\n> collapse = cp . map cp\n\nFinally, we can now specify a sudoku solver:\n\n> solve :: Grid -> [Grid]\n> solve = filter valid . collapse . choices\n\nFor the easy example grid, there are 51 empty squares, which means\nthat this function will consider 9^51 possible grids, which is:\n\n 4638397686588101979328150167890591454318967698009\n\nSearching this space isn't feasible; we need to think further.\n\n\nPruning the search space\n------------------------\n\nOur first step to making things better is to introduce the idea\nof \"pruning\" the choices that are considered for each square.\nPrunes go well with wholemeal programming! In particular, from\nthe set of all possible choices for each square, we can prune\nout any choices that already occur as single entries in the\nassociated row, column, and box, as otherwise the resulting\ngrid will be invalid. 
Here is the code for this:\n\n> prune :: Matrix Choices -> Matrix Choices\n> prune = pruneBy boxs . pruneBy cols . pruneBy rows\n> where pruneBy f = f . map reduce . f\n>\n> reduce :: Row Choices -> Row Choices\n> reduce xss = [xs `minus` singles | xs <- xss]\n> where singles = concat (filter single xss)\n>\n> minus :: Choices -> Choices -> Choices\n> xs `minus` ys = if single xs then xs else xs \\\\ ys\n\nNote that pruneBy relies on the fact that rows . rows = id, and\nsimilarly for the functions cols and boxs, in order to decompose\na matrix into its rows, operate upon the rows in some way, and\nthen reconstruct the matrix. Now we can write a new solver:\n\n> solve2 :: Grid -> [Grid]\n> solve2 = filter valid . collapse . prune . choices\n\nFor example, for the easy grid, pruning leaves an average of around\n2.4 choices for each of the 81 squares, or 1027134771639091200000000\npossible grids. A much smaller number, but still not feasible.\n\n\nRepeatedly pruning\n------------------\n\nAfter pruning, there may now be new single entries, for which pruning\nagain may further reduce the search space. More generally, we can \niterate the pruning process until this has no further effect, which\nin mathematical terms means that we have found a \"fixpoint\". The\nsimplest Sudoku puzzles can be solved in this way.\n\n> solve3 :: Grid -> [Grid]\n> solve3 = filter valid . collapse . fix prune . choices\n>\n> fix :: Eq a => (a -> a) -> a -> a\n> fix f x = if x == x' then x else fix f x'\n> where x' = f x\n\nFor example, for our easy grid, the pruning process leaves precisely\none choice for each square, and solve3 terminates immediately. However,\nfor the gentle grid we still get around 2.8 choices for each square, or\n154070215745863680000000000000 possible grids.\n\nBack to the drawing board...\n\n\nProperties of matrices\n----------------------\n\nIn this section we introduce a number of properties that may hold of\na matrix of choices. 
First of all, let us say that such a matrix is\n\"complete\" if each square contains a single choice:\n\n> complete :: Matrix Choices -> Bool\n> complete = all (all single)\n\nSimilarly, a matrix is \"void\" if some square contains no choices:\n\n> void :: Matrix Choices -> Bool\n> void = any (any null)\n\nIn turn, we use the term \"safe\" for a matrix for which all rows,\ncolumns and boxes are consistent, in the sense that they do not\ncontain more than one occurrence of the same single choice:\n\n> safe :: Matrix Choices -> Bool\n> safe cm = all consistent (rows cm) &&\n> all consistent (cols cm) &&\n> all consistent (boxs cm)\n>\n> consistent :: Row Choices -> Bool\n> consistent = nodups . concat . filter single\n\nFinally, a matrix is \"blocked\" if it is void or unsafe:\n\n> blocked :: Matrix Choices -> Bool\n> blocked m = void m || not (safe m)\n\n\nMaking choices one at a time\n----------------------------\n\nClearly, a blocked matrix cannot lead to a solution. However, our\nprevious solver does not take account of this. More importantly,\na choice that leads to a blocked matrix can be duplicated many\ntimes by the collapse function, because this function simply\nconsiders all possible combinations of choices. This is the \nprimary source of inefficiency in our previous solver.\n\nThis problem can be addressed by expanding choices one square at\na time, and filtering out any resulting matrices that are blocked\nbefore considering any further choices. Implementing this idea\nis straightforward, and gives our final Sudoku solver:\n\n> solve4 :: Grid -> [Grid]\n> solve4 = search . prune . 
choices\n>\n> search :: Matrix Choices -> [Grid]\n> search m\n> | blocked m = []\n> | complete m = collapse m\n> | otherwise = [g | m' <- expand m\n> , g <- search (prune m')]\n\nThe function expand behaves in the same way as collapse, except that\nit only collapses the first square with more than one choice:\n\n> expand :: Matrix Choices -> [Matrix Choices]\n> expand m =\n> [rows1 ++ [row1 ++ [c] : row2] ++ rows2 | c <- cs]\n> where\n> (rows1,row:rows2) = span (all single) m\n> (row1,cs:row2) = span single row\n\nNote that there is no longer any need to check for valid grids at \nthe end, because the process by which solutions are constructed \nguarantees that this will always be the case. There also doesn't\nseem to be any benefit in using \"fix prune\" rather than \"prune\"\nabove; the program is faster without using fix. In fact, our\nprogram now solves any newspaper Sudoku puzzle in an instant!\n\nExercise: modify the expand function to collapse a square with the\nsmallest number of choices greater than one, and see what effect \nthis change has on the performance of the solver.\n\n\nTesting\n-------\n\n> main :: IO ()\n> main = putStrLn (unlines (head (solve4 easy)))\n","avg_line_length":33.3408521303,"max_line_length":72,"alphanum_fraction":0.5099601594} {"size":6298,"ext":"lhs","lang":"Literate Haskell","max_stars_count":2.0,"content":"% -*- LaTeX -*-\n% $Id: TypeSubst.lhs 3243 2016-06-19 12:37:10Z wlux $\n%\n% Copyright (c) 2003-2015, Wolfgang Lux\n% See LICENSE for the full license.\n%\n\\nwfilename{TypeSubst.lhs}\n\\section{Type Substitutions}\nThis module implements substitutions on types.\n\\begin{verbatim}\n\n> module TypeSubst(TypeSubst, SubstType(..), bindVar, substVar,\n> InstTypeScheme(..), normalize, instanceType, typeExpansions,\n> idSubst, bindSubst, compose) where\n> import Ident\n> import List\n> import Maybe\n> import Subst\n> import TopEnv\n> import Types\n> import ValueInfo\n\n> type TypeSubst = Subst Int Type\n\n> class SubstType a 
where\n> subst :: TypeSubst -> a -> a\n\n> bindVar :: Int -> Type -> TypeSubst -> TypeSubst\n> bindVar tv ty = compose (bindSubst tv ty idSubst)\n\n> substVar :: TypeSubst -> Int -> Type\n> substVar = substVar' TypeVariable subst\n\n> instance SubstType a => SubstType [a] where\n> subst sigma = map (subst sigma)\n\n> instance SubstType Type where\n> subst sigma ty = substTypeApp sigma ty []\n\n> substTypeApp :: TypeSubst -> Type -> [Type] -> Type\n> substTypeApp _ (TypeConstructor tc) = foldl TypeApply (TypeConstructor tc)\n> substTypeApp sigma (TypeVariable tv) = applyType (substVar sigma tv)\n> substTypeApp sigma (TypeConstrained tys tv) =\n> case substVar sigma tv of\n> TypeVariable tv -> foldl TypeApply (TypeConstrained tys tv)\n> ty -> foldl TypeApply ty\n> substTypeApp _ (TypeSkolem k) = foldl TypeApply (TypeSkolem k)\n> substTypeApp sigma (TypeApply ty1 ty2) =\n> substTypeApp sigma ty1 . (subst sigma ty2 :)\n> substTypeApp sigma (TypeArrow ty1 ty2) =\n> foldl TypeApply (TypeArrow (subst sigma ty1) (subst sigma ty2))\n\n> instance SubstType TypePred where\n> subst sigma (TypePred cls ty) = TypePred cls (subst sigma ty)\n\n> instance SubstType QualType where\n> subst sigma (QualType cx ty) = QualType (subst sigma cx) (subst sigma ty)\n\n> instance SubstType TypeScheme where\n> subst sigma (ForAll n ty) = ForAll n (subst sigma' ty)\n> where sigma' = foldr unbindSubst sigma [0..n-1]\n\n> instance SubstType ValueInfo where\n> subst _ (DataConstructor c ls ci ty) = DataConstructor c ls ci ty\n> subst _ (NewtypeConstructor c l ty) = NewtypeConstructor c l ty\n> subst sigma (Value v n ty) = Value v n (subst sigma ty)\n\n> instance SubstType a => SubstType (TopEnv a) where\n> subst = fmap . subst\n\n\\end{verbatim}\nThe class method \\texttt{instTypeScheme} instantiates a type scheme by\nsubstituting the given types for the universally quantified type\nvariables in a type (scheme). 
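The index-based lookup at the heart of `instTypeApp` can be illustrated with a toy model. The `Ty` type below is hypothetical (a small stand-in for the compiler's `Type`, for illustration only), but the instantiation rule is the same: a non-negative variable index selects the corresponding argument type, mirroring the `tys !! n` lookup:

```haskell
-- Toy stand-in for the compiler's Type (hypothetical, illustration only).
data Ty = TVar Int | TCon String | TArrow Ty Ty
  deriving (Eq, Show)

-- Instantiate a scheme: non-negative indices are quantified variables
-- and are looked up in the argument list, mirroring `tys !! n`.
instTy :: [Ty] -> Ty -> Ty
instTy tys (TVar n)
  | n >= 0                = tys !! n
instTy tys (TArrow t1 t2) = TArrow (instTy tys t1) (instTy tys t2)
instTy _   t              = t   -- constructors and free (negative) variables
```

For instance, `instTy [TCon "Int", TCon "Bool"] (TArrow (TVar 0) (TVar 1))` yields `TArrow (TCon "Int") (TCon "Bool")`.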
After a substitution the compiler must\nrecompute the type indices for all type variables. Otherwise,\nexpanding a type synonym like \\verb|type Pair' a b = (b,a)| could\nbreak the invariant that the universally quantified type variables are\nassigned indices in the order of their occurrence. This is handled by\nfunction \\texttt{normalize}. The function has a threshold parameter\nthat allows preserving the indices of type variables bound on the left\nhand side of a type declaration and in the head of a type class\ndeclaration, respectively.\n\\begin{verbatim}\n\n> class InstTypeScheme t where\n> instTypeScheme :: [Type] -> t -> t\n\n> instance InstTypeScheme Type where\n> instTypeScheme tys ty = instTypeApp tys ty []\n\n> instTypeApp :: [Type] -> Type -> [Type] -> Type\n> instTypeApp _ (TypeConstructor tc) = foldl TypeApply (TypeConstructor tc)\n> instTypeApp tys (TypeVariable n)\n> | n >= 0 = applyType (tys !! n)\n> | otherwise = foldl TypeApply (TypeVariable n)\n> instTypeApp _ (TypeConstrained tys n) =\n> foldl TypeApply (TypeConstrained tys n)\n> instTypeApp _ (TypeSkolem k) = foldl TypeApply (TypeSkolem k)\n> instTypeApp tys (TypeApply ty1 ty2) =\n> instTypeApp tys ty1 . 
(instTypeScheme tys ty2 :)\n> instTypeApp tys (TypeArrow ty1 ty2) =\n> foldl TypeApply\n> (TypeArrow (instTypeScheme tys ty1) (instTypeScheme tys ty2))\n\n> instance InstTypeScheme TypePred where\n> instTypeScheme tys (TypePred cls ty) = TypePred cls (instTypeScheme tys ty)\n\n> instance InstTypeScheme QualType where\n> instTypeScheme tys (QualType cx ty) =\n> QualType (map (instTypeScheme tys) cx) (instTypeScheme tys ty)\n\n> normalize :: Int -> QualType -> QualType\n> normalize n ty =\n> canonType (instTypeScheme [TypeVariable (occur tv) | tv <- [0..]] ty)\n> where tvs' = zip (nub (filter (>= n) (typeVars ty))) [n..]\n> occur tv = fromMaybe tv (lookup tv tvs')\n\n\\end{verbatim}\nThe function \\texttt{instanceType} computes an instance of a\npolymorphic type by substituting the first type argument for all\noccurrences of the type variable with index 0 in the second argument.\nThe function carefully assigns new indices to all other type variables\nof the second argument so that they do not conflict with the type\nvariables of the first argument.\n\\begin{verbatim}\n\n> instanceType :: InstTypeScheme a => Type -> a -> a\n> instanceType ty = instTypeScheme (ty : map TypeVariable [n..])\n> where ForAll n _ = polyType ty\n\n\\end{verbatim}\nThe function \\texttt{typeExpansions} recursively expands a type\n$T\\,t_1\\,\\dots\\,t_n$ as long as the type constructor $T$ denotes a\ntype synonym. While the right hand side type of a type synonym\ndeclaration is already fully expanded when it is saved in the type\nenvironment, a recursive expansion may be necessary to resolve renamed\ntypes after making newtype constructors transparent during desugaring\n(see Sect.~\\ref{sec:newtype}). Since renaming types can be (mutually)\nrecursive, the expansion may not terminate. 
Therefore,\n\texttt{typeExpansions} stops expansion after 50 iterations even if the\nresulting type is still a synonym.\n\n\ToDo{Make the limit configurable.}\n\begin{verbatim}\n\n> typeExpansions :: (QualIdent -> Maybe (Int,Type)) -> Type -> [Type]\n> typeExpansions typeAlias ty = take 50 (ty : unfoldr expandType ty)\n> where expandType ty =\n> case unapplyType False ty of\n> (TypeConstructor tc,tys) ->\n> case typeAlias tc of\n> Just (n,ty')\n> | n <= length tys ->\n> Just (dup (applyType (instTypeScheme tys' ty') tys''))\n> | otherwise -> Nothing\n> where (tys',tys'') = splitAt n tys\n> Nothing -> Nothing\n> _ -> Nothing\n> dup x = (x,x)\n\n\end{verbatim}","avg_line_length":39.1180124224,"max_line_length":79,"alphanum_fraction":0.706732296} {"size":15421,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"> import Data.List\n> import Data.Char\n> import Data.Ord\n\nPARAMETERS\n\n> rows = 6\n> cols = 7\n> win = 4\n> depth = 4\n\nDEFINITIONS\n\n> type Board = [Row]\n> type Row = [Player]\n\n> data Player = O | B | X deriving (Ord, Eq, Show)\n\nUSER INTERFACE: INITIALISATION, SHOWING & LOOPING\n\nPlayer O is passed into `nextPly` as the AI starts by default because the AI is easily exploitable when it has the disadvantage of starting second. To give the human the starting ply, pass X instead of O into `nextPly`. \"blankBoard\" is defined according to the board dimension parameters.\n\n> main :: IO ()\n> main = nextPly O blankBoard\n\n> blankBoard :: Board \n> blankBoard = replicate rows (replicate cols B)\n\nThe game loop, which, for every ply: starts drawing from the top-left corner of a cleared screen, checks for game over, calls the AI to search its pruned tree and allows the human to input their desired column. The \"else\" case is only reachable when the player is X (i.e., human), with an opportunity to ply. 
\n\n> nextPly :: Player -> Board -> IO ()\n> nextPly p b = do putStr \"\\ESC[2J\\ESC[1;1H\" -- comment out if running on Windows\n> showBoard b\n> if anyHasWon b then putStrLn (showPlayer (winner b) : \" has won.\")\n> else if isFull b then putStrLn \"Draw\"\n> else if p == O then (nextPly X (idealPath O b))\n> else do\n> c <- getCheckCol \"O was placed by AI.\" b\n> nextPly O (ply X b ((lowestRow b c (rows-1)),c)) \n\nUtility for `nextPly` to retrieve and validate the human's desired column. It will prompt the human indefinitely unless a column in range with at least one blank entry is entered.\n\n> getCheckCol :: String -> Board -> IO Int\n> getCheckCol s b = do putStrLn (s ++ \" Enter column for X: \")\n> c <- getLine\n> if c \/= [] && all isDigit c && (read c::Int) >= 0 && (read c::Int) < cols && any isBlank ((transpose b) !! (read c::Int)) then \n> return (read c)\n> else getCheckCol \"Column is invalid or full.\" b\n\nWriting the board to the screen.\n\n> showBoard :: Board -> IO ()\n> showBoard b =\n> putStrLn (unlines (map showRow b ++ [line] ++ [nums]))\n> where\n> showRow = map showPlayer\n> line = replicate cols '-'\n> nums = take cols ['0'..]\n \n> showPlayer :: Player -> Char\n> showPlayer O = 'O'\n> showPlayer B = '.'\n> showPlayer X = 'X'\n\nTREES AND THEIR UTILITIES\n\n> data Tree a = Node a [Tree a] deriving Show\n\n> tree :: Player -> Board -> Tree Board\n> tree p b = Node b [tree (changePlayer p) neighbour | neighbour <- neighbourhood p b]\n\nChildren after the desired depth are not considered and thus the base case of `prune` represents such potential children as an empty list. These children are hence considered as leaves. 
The recursive case generates a list of all children and calls `prune` on each with one less level in the tree to go (by decrementing \"d\").\n\n> prune :: Int -> Tree a -> Tree a\n> prune 0 (Node x _) = Node x []\n> prune d (Node x ts) = Node x [prune (d-1) t | t <- ts]\n\nLABELLING\n\nReturns a triple containing the board, the player label, and the \"reverse depth\" (\"rd\"), which is defined as the number of boards from a leaf and is calculated by labeling the \"reverse depth\" of leaves as 0; each successive ancestor recursively increments this value. `byOrdering` is determined by a long case analysis which involves threading through the current player to determine how to compare \"rd\" values and is explained further below.\n\n> labelTree :: Tree Board -> Tree (Board,Player,Int)\n> labelTree (Node b []) = Node (b,winner b,0) []\n> labelTree (Node b ts) = Node (b, wp, rd + 1) ts'\n> where\n> ts' = map labelTree ts\n> (wp,rd) = byOrdering (whosePly b) [(player,rd) | Node (_, player, rd) _ <- ts']\n\nThese two functions compute an Ordering and then use it to choose between `maximumBy` and `minimumBy` according to the objective of the given player. In the minimax paradigm, O is the minimiser and X is the maximiser; accordingly, the ordering of the \"reverse depths\" (\"rd\") integers must be deliberately flipped for each player. 
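The effect of the derived Ord instance can be seen in isolation. The sketch below is simplified (it compares player labels only and ignores the "rd" tie-breaking of `dependentOrdering`): because the constructors are declared in the order O, B, X, player O is the minimum and X the maximum, so one comparison function serves both the minimiser and the maximiser.

```haskell
import Data.List (minimumBy, maximumBy)
import Data.Ord (comparing)

-- Same constructor order as the game, so O < B < X under the derived Ord.
data Player = O | B | X deriving (Ord, Eq, Show)

-- Simplified byOrdering: compares player labels only, ignoring the
-- reverse-depth tie-breaking performed by dependentOrdering.
bestFor :: Player -> [(Player, Int)] -> (Player, Int)
bestFor p = (if p == O then minimumBy else maximumBy) (comparing fst)

labels :: [(Player, Int)]
labels = [(X, 1), (B, 3), (O, 2)]
```

Here `bestFor O labels` picks `(O,2)` (the minimiser prefers an O outcome) while `bestFor X labels` picks `(X,1)`.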
The board which is best for the given player based on the derived ordering is then extracted from all the boards (also based on which player is extracting) so that the `labelTree` function above can label \"winning player\" (\"wp\") and \"rd\" appropriately.\n\n> byOrdering :: Player -> [(Player, Int)] -> (Player, Int)\n> byOrdering p = (if p == O then minimumBy else maximumBy) (dependentOrdering p)\n\n> dependentOrdering :: Player -> (Player, Int) -> (Player, Int) -> Ordering\n> dependentOrdering p (p1, rd1) (p2, rd2) = compare p1 p2 <> case (p, p1) of\n> (_, O) -> compare rd1 rd2\n> (_, X) -> compare rd2 rd1\n> (_, B) -> EQ\n\nReturns the player who is due to ply based on the constitution of the board, on the logic that whichever player has fewer of their pieces on the board is owed the subsequent ply. If the board is empty, the AI should by default ply first, and the AI player is by default assigned player O, thus it is the turn of O on an empty board.\n\n> whosePly :: Board -> Player\n> whosePly b = if os <= xs then O else X\n> where\n> os = length (filter (== O) flat)\n> xs = length (filter (== X) flat)\n> flat = concat b\n\nPATH CHOOSING\n\nSelects the board which is closest to a leaf on a winning path, or, the furthest board (requiring the most plies to reach) if all paths end in a loss. The board is extracted from the pair according to its integer label (see `labelTree`).\n\n> idealPath :: Player -> Board -> Board\n> idealPath p b = fst (minimumBy (comparing snd) (nonLosingPaths p b))\n\nKeeps only boards which are labeled with \"this\" player passed as an argument. The named argument \"p\" is only used to determine the corresponding neighbourhood of boards which can be reached from that player plying, but it is effectively the same as the player which is given the name, \"this\". The predicates in the list comprehension ensure that only the desired player is selected and that boards strictly from each subsequent \"reverse depth\" are selected. 
The selections come from the tree which is generated and labeled in the where clause. The function returns a list of doubles instead of triples because it is redundant to store the same player in all of them.\n\n> nonLosingPaths :: Player -> Board -> [(Board,Int)]\n> nonLosingPaths p b = [(b',rd') | Node (b',p',rd') ts' <- ts, p' == this, rd == rd' + 1]\n> where\n> Node (_,this,rd) ts = labelTree (prune depth (tree p b)) \n\nSUBSEQUENT BOARDS\n\nReturns a list of all possible ways to place the given player into the given board. The first case returns an empty list if the game is over. The second case generates a list of boards where each has the given player placed in the lowest blank cell of each column.\n\n> neighbourhood :: Player -> Board -> [Board]\n> neighbourhood p b | anyHasWon b || isFull b = []\n> | otherwise = [ply p b ((lowestRow b c (rows-1)),c) | c <- [0..cols-1], any isBlank ((transpose b) !! c)]\n\nPlaces the given player into the given cell into the given board and return the new board. This is achieved through surgery on the rows and columns to remove the previous entry and insert the new one.\n\n> ply :: Player -> Board -> (Int,Int) -> Board\n> ply p b (r,c) = take r b ++ newRow ++ drop (r+1) b\n> where\n> newRow = [(take c (b !! r) ++ [p] ++ drop (c+1) (b !! r))]\n\nReturns the bottom-most row in the given column of the given board with a blank cell. Usage: to be called with the maximum-possible rows (thus starting at the lowest row in the board) and decrease this number (thus moving up to the top of the board) until a blank is found. The row value is recursively decreased (to move up the board) at each encounter with a non-blank cell.\n\n> lowestRow :: Board -> Int -> Int -> Int\n> lowestRow b c r | isBlank (transpose b !! c !! 
r) = r\n> | r < 0 = -1\n> | otherwise = lowestRow b c (r-1)\n\nDIAGONALS\n\nReturns (left & right) diagonals (above & below) the main (without forgetting the original main) (4 cases) as rows by calculating each list of diagonals through manipulating the given board differently for each case. Each of the four concatenations handles each case. The first retrieves the \"left & below\" diagonals, the second retrieves the \"left & above\" diagonals, the third retrieves the \"right & above\" diagonals and the fourth retrieves the \"right & below\" diagonals. \"horizontal\" is defined with (+ 1) to save us from off-by-one errors related to having \/ not having the main diagonal of each recursively-smaller board.\n\n> allDiagonals :: Board -> [Row]\n> allDiagonals b =\n> take vertical (belowDiagonals (tail b))\n> ++ take horizontal (belowDiagonals transposed)\n> ++ take vertical (belowDiagonals (tail (reverse b)))\n> ++ take horizontal (belowDiagonals (transpose (reverse b)))\n> where\n> transposed = transpose b\n> vertical = rows - win\n> horizontal = cols - win + 1\n\nReturns, as rows, the left diagonals below the main left diagonal from the given board. The recursive case names the board so that its main diagonal can be added to the list. The first row is discarded and the resulting board (which is now one row smaller and thus has a new main diagonal) is recursively processed.\n\n> belowDiagonals :: Board -> [Row]\n> belowDiagonals [] = []\n> belowDiagonals b@(_:rs) = mainDiagonal b : belowDiagonals rs\n\nExtracts the main diagonal from a given board. The first base case is matched when the board either: runs out of rows but there are still more columns, or runs out of rows and columns simultaneously (implying that the board is square). The second base case is matched when the board runs out of columns but there are still more rows. 
The recursive case extracts the first player from the first row, drops the first player from all rows except the first and recursively processes this new board (with the first row excluded).\n\n> mainDiagonal :: Board -> Row\n> mainDiagonal [] = []\n> mainDiagonal ([]:rs) = []\n> mainDiagonal ((p:r):rs) = p : mainDiagonal (map tail rs)\n\nGAME OVER CHECKING\n\nWrapper for the hasWon function to return the winning player.\n\n> winner :: Board -> Player\n> winner b | hasWon X b = X\n> | hasWon O b = O\n> | otherwise = B\n\nThe following two functions determine whether the game is over and are thus called before each ply and in the process of determining neighbourhoods.\n\n> isFull :: Board -> Bool\n> isFull = all (\/=B) . concat\n\n> anyHasWon :: Board -> Bool\n> anyHasWon b = hasWon X b || hasWon O b\n\nThe three cases in which a player can win are checked in this order as separate lines in the definition: case 1: no transformation is required to inspect horizontal lines; case 2: transposition is required to represent all columns as rows; case 3: left and right diagonals are transformed into rows using `allDiagonals`.\n\n> hasWon :: Player -> Board -> Bool\n> hasWon p b = connections p b\n> || connections p (transpose b)\n> || connections p (allDiagonals b)\n\nLooks for a connection using naive string matching. 
Its type is declared as taking [Row] instead of Board to emphasise that it is used on lists of players which may have been represented vertically, horizontally or diagonally in the actual board.\n\n> connections :: Player -> [Row] -> Bool\n> connections _ [] = False\n> connections p (r:b) = connection p r || connections p b\n\nGiven a player and list of players, looks for a contiguous series of that player in that list.\n\n> connection :: Player -> [Player] -> Bool\n> connection _ [] = False\n> connection p ps = winningLine == candidateLine || connection p (tail ps)\n> where \n> winningLine = replicate win p\n> candidateLine = take win ps\n\nPLAYER UTILITIES\n\n> changePlayer :: Player -> Player\n> changePlayer O = X\n> changePlayer X = O\n\n> isBlank :: Player -> Bool\n> isBlank p = p == B\n\nTESTING UTILITIES\n\n> showBoards :: [Board] -> IO ()\n> showBoards [] = putStrLn \"\"\n> showBoards (b:bs) =\n> do\n> showBoard b\n> showBoards bs\n\nAPPENDIX: PRE-DEFINED BOARDS AND ROWS FOR TESTING, ASSUMING PARAMETER \"win\" == 4\n\nCases 1 - 4 for the `allDiagonals` function:\n\nCase 1: X wins diagonally from left top to right bottom below main\n\"X LEFT DIAGONAL BELOW\"\n\n> xldb :: Board\n> xldb = [[B,B,B,B,B,B,B],\n> [B,B,B,B,B,B,B],\n> [X,B,B,B,X,B,B],\n> [B,X,B,O,O,B,B],\n> [B,B,X,O,X,B,O],\n> [B,O,O,X,O,X,O]]\n\nCase 2: X wins diagonally from left top to right bottom above main\n\"X LEFT DIAGONAL ABOVE\"\n\n> xlda :: Board\n> xlda = [[B,B,B,X,B,B,B],\n> [B,B,B,B,X,B,B],\n> [B,B,B,B,X,X,B],\n> [B,B,B,O,O,B,X],\n> [B,B,X,O,X,B,O],\n> [B,O,O,X,O,X,O]]\n\nCase 3: X wins diagonally from right top to left bottom above main\n\"X RIGHT DIAGONAL ABOVE\"\n\n> xrda :: Board\n> xrda = [[B,B,B,X,B,B,B],\n> [B,B,X,B,B,B,B],\n> [O,X,B,B,X,B,X],\n> [X,X,B,O,O,O,B],\n> [B,B,O,O,O,B,O],\n> [B,O,O,X,O,X,O]]\n\nCase 4: X wins diagonally from right top to left bottom below main \n\"X RIGHT DIAGONAL BELOW\"\n\n> xrdb :: Board \n> xrdb = [[B,B,B,B,B,B,B],\n> [B,B,B,B,B,B,B],\n> 
[O,B,B,B,X,B,X],\n> [B,X,B,O,O,X,B],\n> [B,B,O,O,X,B,O],\n> [B,O,O,X,O,X,O]]\n\nCould the AI be made to choose to make the second vertical connection (which blocks the human) instead of the first?\n\n> idealPathTest :: Board\n> idealPathTest = [[B,B,B,B,B,B,B],\n> [B,B,B,B,B,B,B],\n> [B,B,B,B,B,B,B],\n> [B,B,B,B,X,B,B],\n> [B,O,B,X,X,O,B],\n> [B,O,X,X,X,O,B]] \n\n> fullDraw :: Board\n> fullDraw = [[O,X,O,X,O,X,O],\n> [X,O,X,O,X,O,X],\n> [O,O,X,X,O,X,X],\n> [O,X,O,X,X,O,O],\n> [O,X,O,O,X,O,X],\n> [X,O,O,X,X,X,O]]\n\n> arbitraryTest :: Board\n> arbitraryTest = [[B,B,B,B,B,B,B],\n> [B,B,B,B,B,B,B],\n> [B,B,B,B,B,B,B],\n> [B,B,B,X,X,B,B],\n> [B,B,O,O,X,B,B],\n> [B,O,O,X,X,X,O]] \n\n> xHorizontalWin :: Board\n> xHorizontalWin = [[B,B,B,B,B,B,B],\n> [B,B,B,B,B,B,B],\n> [B,B,B,B,B,B,B],\n> [B,B,B,O,X,B,B],\n> [B,B,O,O,X,B,B],\n> [B,O,X,X,X,X,O]] \n\n> xVerticalWin :: Board\n> xVerticalWin = [[B,B,B,B,B,B,B],\n> [B,B,B,B,B,B,B],\n> [B,B,B,B,X,B,B],\n> [B,B,B,O,X,B,B],\n> [B,B,O,O,X,B,B],\n> [B,O,O,X,X,X,O]]\n\n\nX wins in a row\n\n> xr :: Row\n> xr = [B,X,X,X,X,O,O]\n\nO wins in a row\n\n> or :: Row\n> or = [B,O,O,O,O,X,X]\n","avg_line_length":46.7303030303,"max_line_length":663,"alphanum_fraction":0.6127358796} {"size":3975,"ext":"lhs","lang":"Literate Haskell","max_stars_count":4.0,"content":"% Logic Variables and Narrowing\n% Sebastian Fischer (sebf@informatik.uni-kiel.de)\n\n> {-# LANGUAGE\n> FlexibleContexts,\n> FlexibleInstances,\n> MultiParamTypeClasses\n> #-}\n>\n> module CFLP.Data.Narrowing (\n>\n> unknown, Narrow(..), -- Binding(..), narrowVar, \n>\n> (?), oneOf\n>\n> ) where\n>\n> import Data.Supply\n>\n> import Control.Monad\n>\n> import CFLP.Data.Types\n>\n> import CFLP.Control.Monad.Update\n> import CFLP.Control.Strategy\n\nThe application of `unknown` to a constraint store and a unique\nidentifier represents a logic variable of an arbitrary type. 
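As a rough analogy (deliberately using the plain list monad rather than the CFLP API), a logic variable can be viewed as the generator of all values of its type, and a constraint as a filter over the generated combinations:

```haskell
-- Rough list-monad analogy of a logic variable: unknownBool stands
-- for a variable that can be narrowed to either Boolean value.
unknownBool :: [Bool]
unknownBool = [False, True]

-- "Solving" the disequality x /= y enumerates the satisfying assignments.
solutions :: [(Bool, Bool)]
solutions = [ (x, y) | x <- unknownBool, y <- unknownBool, x /= y ]
```

`solutions` evaluates to `[(False,True),(True,False)]`; the real `unknown`/`oneOf` machinery additionally threads a constraint store and delays the choice until it is demanded.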
\n\n> unknown :: (Monad s, Strategy c s, MonadUpdate c s, Update c s s, Narrow c a)\n> => ID -> Nondet c s a\n> unknown u = freeVar u (delayed (isNarrowedID u) (`narrow`u))\n>\n> isNarrowedID :: Strategy c s => ID -> Context c -> s Bool\n> isNarrowedID (ID us) (Context c) = isNarrowed c (supplyValue us)\n\nLogic variables of type `a` can be narrowed to head-normal form if\nthere is an instance of the type class `Narrow`. A constraint store\nmay be used to find the possible results which are returned in a monad\nthat supports choices. Usually, `narrow` will be implemented as a\nnon-deterministic generator using `oneOf`, but for specific types\ndifferent strategies may be implemented.\n\n> class Narrow c a\n> where\n> narrow :: (Monad s, Strategy c s, MonadUpdate c s, Update c s s)\n> => Context c -> ID -> Nondet c s a\n\nThe operator `(?)` wraps the combinator `oneOf` to generate a delayed\nnon-deterministic choice that is executed whenever it is\ndemanded. Although the choice itself is reexecuted according to the\ncurrent constraint store, the arguments of `(?)` are shared among all\nexecutions and *not* reexecuted.\n\n> (?) :: (Monad s, Strategy c s, MonadUpdate c s)\n> => Nondet c s a -> Nondet c s a -> ID -> Nondet c s a\n> (x ? y) u = delayed (isNarrowedID u) (\\c -> oneOf [x,y] c u)\n\nThe operation `oneOf` takes a list of non-deterministic values and\nreturns a non-deterministic value that yields one of the elements in\nthe given list.\n\n> oneOf :: (Strategy c s, MonadUpdate c s)\n> => [Nondet c s a] -> Context c -> ID -> Nondet c s a\n> oneOf xs (Context c) (ID us)\n> = Typed (choose c (supplyValue us) (map untyped xs))\n\n\nConstraint Solving\n------------------\n\nWe provide a type class for constraint stores that support branching\non variables.\n\n class Solver c\n where\n branchOn :: MonadPlus m => c -> Int -> c -> m c\n\nThe first argument of `branchOn` is only used to support the type\nchecker. 
It must be ignored when instantiating the `Solver` class.\n\nThe type class `Binding` defines a function that looks up a variable\nbinding in a constraint store.\n\n class Binding c a\n where\n binding :: (Monad m, Update cs m m) => c -> Int -> Maybe (Nondet cs m a)\n\nThe result of the `binding` function is optional in order to allow for\ncomposing results when transforming evaluation contexts.\n\nThe base instance for the unit context always returns `Nothing`:\n\n instance Binding () a where binding _ _ = Nothing\n\nWe provide a default implementation of the `narrow` function for\nconstraint solvers that support variable lookups. This function can be\nused to define the `Narrow` instance for types that are handled by\nconstraint solvers.\n\n narrowVar :: (Monad s, Strategy c s, MonadUpdate c s, \n Update c s s, Narrow a, Solver c, Binding c a)\n => Context c -> ID -> Nondet c s a\n narrowVar (Context c) u@(ID us) = joinNondet $ do\n let v = supplyValue us\n isn <- isNarrowed c v\n if isn\n then maybe (error \"no binding for narrowed variable\") return (binding c v)\n else do\n update (branchOn c v)\n return (unknown u)\n\nAn evaluation context that is a `Solver` or supports `Binding` can be\ntransformed without losing this functionality.\n\n instance (Solver c, Transformer t) => Solver (t c)\n where\n branchOn _ = inside . branchOn undefined\n\n instance (Binding c a, Transformer t) => Binding (t c) a\n where\n binding = binding . 
project\n\n","avg_line_length":32.0564516129,"max_line_length":79,"alphanum_fraction":0.6973584906} {"size":23749,"ext":"lhs","lang":"Literate Haskell","max_stars_count":6.0,"content":"%\n% (c) The University of Glasgow 2006\n% (c) The GRASP\/AQUA Project, Glasgow University, 1992-1998\n%\n\\section[Name]{@Name@: to transmit name info from renamer to typechecker}\n\n\\begin{code}\n{-# LANGUAGE DeriveDataTypeable #-}\n\n-- |\n-- #name_types#\n-- GHC uses several kinds of name internally:\n--\n-- * 'OccName.OccName': see \"OccName#name_types\"\n--\n-- * 'RdrName.RdrName': see \"RdrName#name_types\"\n--\n-- * 'Name.Name' is the type of names that have had their scoping and binding resolved. They\n-- have an 'OccName.OccName' but also a 'Unique.Unique' that disambiguates Names that have\n-- the same 'OccName.OccName' and indeed is used for all 'Name.Name' comparison. Names\n-- also contain information about where they originated from, see \"Name#name_sorts\"\n--\n-- * 'Id.Id': see \"Id#name_types\"\n--\n-- * 'Var.Var': see \"Var#name_types\"\n--\n-- #name_sorts#\n-- Names are one of:\n--\n-- * External, if they name things declared in other modules. Some external\n-- Names are wired in, i.e. they name primitives defined in the compiler itself\n--\n-- * Internal, if they name things in the module being compiled. 
Some internal\n-- Names are system names, if they are names manufactured by the compiler\n\nmodule Name (\n -- * The main types\n Name, -- Abstract\n BuiltInSyntax(..),\n\n -- ** Creating 'Name's\n mkSystemName, mkSystemNameAt,\n mkInternalName, mkClonedInternalName, mkDerivedInternalName,\n mkSystemVarName, mkSysTvName,\n mkFCallName,\n mkExternalName, mkWiredInName,\n\n -- ** Manipulating and deconstructing 'Name's\n nameUnique, setNameUnique,\n nameOccName, nameModule, nameModule_maybe,\n tidyNameOcc,\n hashName, localiseName,\n mkLocalisedOccName,\n\n nameSrcLoc, nameSrcSpan, pprNameDefnLoc, pprDefinedAt,\n\n -- ** Predicates on 'Name's\n isSystemName, isInternalName, isExternalName,\n isTyVarName, isTyConName, isDataConName,\n isValName, isVarName,\n isWiredInName, isBuiltInSyntax,\n wiredInNameTyThing_maybe,\n nameIsLocalOrFrom, stableNameCmp,\n\n -- * Class 'NamedThing' and overloaded friends\n NamedThing(..),\n getSrcLoc, getSrcSpan, getOccString,\n\n pprInfixName, pprPrefixName, pprModulePrefix,\n\n -- Re-export the OccName stuff\n module OccName\n ) where\n\nimport {-# SOURCE #-} TypeRep( TyThing )\nimport {-# SOURCE #-} PrelNames( liftedTypeKindTyConKey )\n\nimport OccName\nimport Module\nimport SrcLoc\nimport Unique\nimport Util\nimport Maybes\nimport Binary\nimport DynFlags\nimport FastTypes\nimport FastString\nimport Outputable\n\nimport Data.Data\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection[Name-datatype]{The @Name@ datatype, and name construction}\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | A unique, unambigious name for something, containing information about where\n-- that thing originated.\ndata Name = Name {\n n_sort :: NameSort, -- What sort of name it is\n n_occ :: !OccName, -- Its occurrence name\n n_uniq :: FastInt, -- UNPACK doesn't work, recursive type\n--(note later when changing Int# -> FastInt: is that still true 
about UNPACK?)\n n_loc :: !SrcSpan -- Definition site\n }\n deriving Typeable\n\n-- NOTE: we make the n_loc field strict to eliminate some potential\n-- (and real!) space leaks, due to the fact that we don't look at\n-- the SrcLoc in a Name all that often.\n\ndata NameSort\n = External Module\n\n | WiredIn Module TyThing BuiltInSyntax\n -- A variant of External, for wired-in things\n\n | Internal -- A user-defined Id or TyVar\n -- defined in the module being compiled\n\n | System -- A system-defined Id or TyVar. Typically the\n -- OccName is very uninformative (like 's')\n\n-- | BuiltInSyntax is for things like @(:)@, @[]@ and tuples,\n-- which have special syntactic forms. They aren't in scope\n-- as such.\ndata BuiltInSyntax = BuiltInSyntax | UserSyntax\n\\end{code}\n\nNotes about the NameSorts:\n\n1. Initially, top-level Ids (including locally-defined ones) get External names,\n and all other local Ids get Internal names\n\n2. In any invocation of GHC, an External Name for \"M.x\" has one and only one\n unique. This unique association is ensured via the Name Cache; \n see Note [The Name Cache] in IfaceEnv.\n\n3. Things with a External name are given C static labels, so they finally\n appear in the .o file's symbol table. They appear in the symbol table\n in the form M.n. If originally-local things have this property they\n must be made @External@ first.\n\n4. In the tidy-core phase, a External that is not visible to an importer\n is changed to Internal, and a Internal that is visible is changed to External\n\n5. 
A System Name differs in the following ways:\n a) has unique attached when printing dumps\n b) unifier eliminates sys tyvars in favour of user provs where possible\n\n Before anything gets printed in interface files or output code, it's\n fed through a 'tidy' processor, which zaps the OccNames to have\n unique names; and converts all sys-locals to user locals\n If any desugarer sys-locals have survived that far, they get changed to\n \"ds1\", \"ds2\", etc.\n\nBuilt-in syntax => It's a syntactic form, not \"in scope\" (e.g. [])\n\nWired-in thing => The thing (Id, TyCon) is fully known to the compiler,\n not read from an interface file.\n E.g. Bool, True, Int, Float, and many others\n\nAll built-in syntax is for wired-in things.\n\n\\begin{code}\ninstance HasOccName Name where\n occName = nameOccName\n\nnameUnique :: Name -> Unique\nnameOccName :: Name -> OccName\nnameModule :: Name -> Module\nnameSrcLoc :: Name -> SrcLoc\nnameSrcSpan :: Name -> SrcSpan\n\nnameUnique name = mkUniqueGrimily (iBox (n_uniq name))\nnameOccName name = n_occ name\nnameSrcLoc name = srcSpanStart (n_loc name)\nnameSrcSpan name = n_loc name\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Predicates on names}\n%* *\n%************************************************************************\n\n\\begin{code}\nnameIsLocalOrFrom :: Module -> Name -> Bool\nisInternalName :: Name -> Bool\nisExternalName :: Name -> Bool\nisSystemName :: Name -> Bool\nisWiredInName :: Name -> Bool\n\nisWiredInName (Name {n_sort = WiredIn _ _ _}) = True\nisWiredInName _ = False\n\nwiredInNameTyThing_maybe :: Name -> Maybe TyThing\nwiredInNameTyThing_maybe (Name {n_sort = WiredIn _ thing _}) = Just thing\nwiredInNameTyThing_maybe _ = Nothing\n\nisBuiltInSyntax :: Name -> Bool\nisBuiltInSyntax (Name {n_sort = WiredIn _ _ BuiltInSyntax}) = True\nisBuiltInSyntax _ = False\n\nisExternalName (Name {n_sort = External _}) = True\nisExternalName (Name {n_sort = 
WiredIn _ _ _}) = True\nisExternalName _ = False\n\nisInternalName name = not (isExternalName name)\n\nnameModule name = nameModule_maybe name `orElse` pprPanic \"nameModule\" (ppr name)\nnameModule_maybe :: Name -> Maybe Module\nnameModule_maybe (Name { n_sort = External mod}) = Just mod\nnameModule_maybe (Name { n_sort = WiredIn mod _ _}) = Just mod\nnameModule_maybe _ = Nothing\n\nnameIsLocalOrFrom from name\n | isExternalName name = from == nameModule name\n | otherwise = True\n\nisTyVarName :: Name -> Bool\nisTyVarName name = isTvOcc (nameOccName name)\n\nisTyConName :: Name -> Bool\nisTyConName name = isTcOcc (nameOccName name)\n\nisDataConName :: Name -> Bool\nisDataConName name = isDataOcc (nameOccName name)\n\nisValName :: Name -> Bool\nisValName name = isValOcc (nameOccName name)\n\nisVarName :: Name -> Bool\nisVarName = isVarOcc . nameOccName\n\nisSystemName (Name {n_sort = System}) = True\nisSystemName _ = False\n\\end{code}\n\n\n%************************************************************************\n%* *\n\\subsection{Making names}\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | Create a name which is (for now at least) local to the current module and hence\n-- does not need a 'Module' to disambiguate it from other 'Name's\nmkInternalName :: Unique -> OccName -> SrcSpan -> Name\nmkInternalName uniq occ loc = Name { n_uniq = getKeyFastInt uniq\n , n_sort = Internal\n , n_occ = occ\n , n_loc = loc }\n -- NB: You might worry that after lots of huffing and\n -- puffing we might end up with two local names with distinct\n -- uniques, but the same OccName. 
Indeed we can, but that's ok\n -- * the insides of the compiler don't care: they use the Unique\n -- * when printing for -ddump-xxx you can switch on -dppr-debug to get the\n -- uniques if you get confused\n -- * for interface files we tidyCore first, which makes\n -- the OccNames distinct when they need to be\n\nmkClonedInternalName :: Unique -> Name -> Name\nmkClonedInternalName uniq (Name { n_occ = occ, n_loc = loc })\n = Name { n_uniq = getKeyFastInt uniq, n_sort = Internal\n , n_occ = occ, n_loc = loc }\n\nmkDerivedInternalName :: (OccName -> OccName) -> Unique -> Name -> Name\nmkDerivedInternalName derive_occ uniq (Name { n_occ = occ, n_loc = loc })\n = Name { n_uniq = getKeyFastInt uniq, n_sort = Internal\n , n_occ = derive_occ occ, n_loc = loc }\n\n-- | Create a name which definitely originates in the given module\nmkExternalName :: Unique -> Module -> OccName -> SrcSpan -> Name\n-- WATCH OUT! External Names should be in the Name Cache\n-- (see Note [The Name Cache] in IfaceEnv), so don't just call mkExternalName\n-- with some fresh unique without populating the Name Cache\nmkExternalName uniq mod occ loc\n = Name { n_uniq = getKeyFastInt uniq, n_sort = External mod,\n n_occ = occ, n_loc = loc }\n\n-- | Create a name which is actually defined by the compiler itself\nmkWiredInName :: Module -> OccName -> Unique -> TyThing -> BuiltInSyntax -> Name\nmkWiredInName mod occ uniq thing built_in\n = Name { n_uniq = getKeyFastInt uniq,\n n_sort = WiredIn mod thing built_in,\n n_occ = occ, n_loc = wiredInSrcSpan }\n\n-- | Create a name brought into being by the compiler\nmkSystemName :: Unique -> OccName -> Name\nmkSystemName uniq occ = mkSystemNameAt uniq occ noSrcSpan\n\nmkSystemNameAt :: Unique -> OccName -> SrcSpan -> Name\nmkSystemNameAt uniq occ loc = Name { n_uniq = getKeyFastInt uniq, n_sort = System\n , n_occ = occ, n_loc = loc }\n\nmkSystemVarName :: Unique -> FastString -> Name\nmkSystemVarName uniq fs = mkSystemName uniq (mkVarOccFS fs)\n\nmkSysTvName :: 
Unique -> FastString -> Name\nmkSysTvName uniq fs = mkSystemName uniq (mkOccNameFS tvName fs)\n\n-- | Make a name for a foreign call\nmkFCallName :: Unique -> String -> Name\nmkFCallName uniq str = mkInternalName uniq (mkVarOcc str) noSrcSpan\n -- The encoded string completely describes the ccall\n\\end{code}\n\n\\begin{code}\n-- When we renumber\/rename things, we need to be\n-- able to change a Name's Unique to match the cached\n-- one in the thing it's the name of. If you know what I mean.\nsetNameUnique :: Name -> Unique -> Name\nsetNameUnique name uniq = name {n_uniq = getKeyFastInt uniq}\n\ntidyNameOcc :: Name -> OccName -> Name\n-- We set the OccName of a Name when tidying\n-- In doing so, we change System --> Internal, so that when we print\n-- it we don't get the unique by default. It's tidy now!\ntidyNameOcc name@(Name { n_sort = System }) occ = name { n_occ = occ, n_sort = Internal}\ntidyNameOcc name occ = name { n_occ = occ }\n\n-- | Make the 'Name' into an internal name, regardless of what it was to begin with\nlocaliseName :: Name -> Name\nlocaliseName n = n { n_sort = Internal }\n\\end{code}\n\n\\begin{code}\n-- |Create a localised variant of a name.\n--\n-- If the name is external, encode the original's module name to disambiguate.\n--\nmkLocalisedOccName :: Module -> (Maybe String -> OccName -> OccName) -> Name -> OccName\nmkLocalisedOccName this_mod mk_occ name = mk_occ origin (nameOccName name)\n where\n origin\n | nameIsLocalOrFrom this_mod name = Nothing\n | otherwise = Just (moduleNameColons . moduleName . 
nameModule $ name)\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Hashing and comparison}\n%* *\n%************************************************************************\n\n\\begin{code}\nhashName :: Name -> Int -- ToDo: should really be Word\nhashName name = getKey (nameUnique name) + 1\n -- The +1 avoids keys with lots of zeros in the ls bits, which\n -- interacts badly with the cheap and cheerful multiplication in\n -- hashExpr\n\ncmpName :: Name -> Name -> Ordering\ncmpName n1 n2 = iBox (n_uniq n1) `compare` iBox (n_uniq n2)\n\nstableNameCmp :: Name -> Name -> Ordering\n-- Compare lexicographically\nstableNameCmp (Name { n_sort = s1, n_occ = occ1 })\n (Name { n_sort = s2, n_occ = occ2 })\n = (s1 `sort_cmp` s2) `thenCmp` (occ1 `compare` occ2)\n -- The ordinary compare on OccNames is lexicographic\n where\n -- Later constructors are bigger\n sort_cmp (External m1) (External m2) = m1 `stableModuleCmp` m2\n sort_cmp (External {}) _ = LT\n sort_cmp (WiredIn {}) (External {}) = GT\n sort_cmp (WiredIn m1 _ _) (WiredIn m2 _ _) = m1 `stableModuleCmp` m2\n sort_cmp (WiredIn {}) _ = LT\n sort_cmp Internal (External {}) = GT\n sort_cmp Internal (WiredIn {}) = GT\n sort_cmp Internal Internal = EQ\n sort_cmp Internal System = LT\n sort_cmp System System = EQ\n sort_cmp System _ = GT\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection[Name-instances]{Instance declarations}\n%* *\n%************************************************************************\n\n\\begin{code}\ninstance Eq Name where\n a == b = case (a `compare` b) of { EQ -> True; _ -> False }\n a \/= b = case (a `compare` b) of { EQ -> False; _ -> True }\n\ninstance Ord Name where\n a <= b = case (a `compare` b) of { LT -> True; EQ -> True; GT -> False }\n a < b = case (a `compare` b) of { LT -> True; EQ -> False; GT -> False }\n a >= b = case (a `compare` b) of { LT -> False; EQ -> True; GT -> 
True }\n a > b = case (a `compare` b) of { LT -> False; EQ -> False; GT -> True }\n compare a b = cmpName a b\n\ninstance Uniquable Name where\n getUnique = nameUnique\n\ninstance NamedThing Name where\n getName n = n\n\ninstance Data Name where\n -- don't traverse?\n toConstr _ = abstractConstr \"Name\"\n gunfold _ _ = error \"gunfold\"\n dataTypeOf _ = mkNoRepType \"Name\"\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Binary}\n%* *\n%************************************************************************\n\n\\begin{code}\ninstance Binary Name where\n put_ bh name =\n case getUserData bh of\n UserData{ ud_put_name = put_name } -> put_name bh name\n\n get bh =\n case getUserData bh of\n UserData { ud_get_name = get_name } -> get_name bh\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Pretty printing}\n%* *\n%************************************************************************\n\n\\begin{code}\ninstance Outputable Name where\n ppr name = pprName name\n\ninstance OutputableBndr Name where\n pprBndr _ name = pprName name\n pprInfixOcc = pprInfixName\n pprPrefixOcc = pprPrefixName\n\n\npprName :: Name -> SDoc\npprName (Name {n_sort = sort, n_uniq = u, n_occ = occ})\n = getPprStyle $ \\ sty ->\n case sort of\n WiredIn mod _ builtin -> pprExternal sty uniq mod occ True builtin\n External mod -> pprExternal sty uniq mod occ False UserSyntax\n System -> pprSystem sty uniq occ\n Internal -> pprInternal sty uniq occ\n where uniq = mkUniqueGrimily (iBox u)\n\npprExternal :: PprStyle -> Unique -> Module -> OccName -> Bool -> BuiltInSyntax -> SDoc\npprExternal sty uniq mod occ is_wired is_builtin\n | codeStyle sty = ppr mod <> char '_' <> ppr_z_occ_name occ\n -- In code style, always qualify\n -- ToDo: maybe we could print all wired-in things unqualified\n -- in code style, to reduce symbol table bloat?\n | debugStyle sty = pp_mod <> ppr_occ_name 
occ\n <> braces (hsep [if is_wired then ptext (sLit \"(w)\") else empty,\n pprNameSpaceBrief (occNameSpace occ),\n pprUnique uniq])\n | BuiltInSyntax <- is_builtin = ppr_occ_name occ -- Never qualify builtin syntax\n | otherwise = pprModulePrefix sty mod occ <> ppr_occ_name occ\n where\n pp_mod = sdocWithDynFlags $ \\dflags ->\n if gopt Opt_SuppressModulePrefixes dflags\n then empty\n else ppr mod <> dot\n\npprInternal :: PprStyle -> Unique -> OccName -> SDoc\npprInternal sty uniq occ\n | codeStyle sty = pprUnique uniq\n | debugStyle sty = ppr_occ_name occ <> braces (hsep [pprNameSpaceBrief (occNameSpace occ),\n pprUnique uniq])\n | dumpStyle sty = ppr_occ_name occ <> ppr_underscore_unique uniq\n -- For debug dumps, we're not necessarily dumping\n -- tidied code, so we need to print the uniques.\n | otherwise = ppr_occ_name occ -- User style\n\n-- Like Internal, except that we only omit the unique in Iface style\npprSystem :: PprStyle -> Unique -> OccName -> SDoc\npprSystem sty uniq occ\n | codeStyle sty = pprUnique uniq\n | debugStyle sty = ppr_occ_name occ <> ppr_underscore_unique uniq\n <> braces (pprNameSpaceBrief (occNameSpace occ))\n | otherwise = ppr_occ_name occ <> ppr_underscore_unique uniq\n -- If the tidy phase hasn't run, the OccName\n -- is unlikely to be informative (like 's'),\n -- so print the unique\n\n\npprModulePrefix :: PprStyle -> Module -> OccName -> SDoc\n-- Print the \"M.\" part of a name, based on whether it's in scope or not\n-- See Note [Printing original names] in HscTypes\npprModulePrefix sty mod occ = sdocWithDynFlags $ \\dflags ->\n if gopt Opt_SuppressModulePrefixes dflags\n then empty\n else\n case qualName sty mod occ of -- See Outputable.QualifyName:\n NameQual modname -> ppr modname <> dot -- Name is in scope\n NameNotInScope1 -> ppr mod <> dot -- Not in scope\n NameNotInScope2 -> ppr (modulePackageKey mod) <> colon -- Module not in\n <> ppr (moduleName mod) <> dot -- scope either\n _otherwise -> empty\n\nppr_underscore_unique :: 
Unique -> SDoc\n-- Print an underscore separating the name from its unique\n-- But suppress it if we aren't printing the uniques anyway\nppr_underscore_unique uniq\n = sdocWithDynFlags $ \\dflags ->\n if gopt Opt_SuppressUniques dflags\n then empty\n else char '_' <> pprUnique uniq\n\nppr_occ_name :: OccName -> SDoc\nppr_occ_name occ = ftext (occNameFS occ)\n -- Don't use pprOccName; instead, just print the string of the OccName;\n -- we print the namespace in the debug stuff above\n\n-- In code style, we Z-encode the strings. The results of Z-encoding each FastString are\n-- cached behind the scenes in the FastString implementation.\nppr_z_occ_name :: OccName -> SDoc\nppr_z_occ_name occ = ztext (zEncodeFS (occNameFS occ))\n\n-- Prints (if mod information is available) \"Defined at \" or\n-- \"Defined in \" information for a Name.\npprDefinedAt :: Name -> SDoc\npprDefinedAt name = ptext (sLit \"Defined\") <+> pprNameDefnLoc name\n\npprNameDefnLoc :: Name -> SDoc\n-- Prints \"at \" or\n-- or \"in \" depending on what info is available\npprNameDefnLoc name\n = case nameSrcLoc name of\n -- nameSrcLoc rather than nameSrcSpan\n -- It seems less cluttered to show a location\n -- rather than a span for the definition point\n RealSrcLoc s -> ptext (sLit \"at\") <+> ppr s\n UnhelpfulLoc s\n | isInternalName name || isSystemName name\n -> ptext (sLit \"at\") <+> ftext s\n | otherwise\n -> ptext (sLit \"in\") <+> quotes (ppr (nameModule name))\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Overloaded functions related to Names}\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | A class allowing convenient access to the 'Name' of various datatypes\nclass NamedThing a where\n getOccName :: a -> OccName\n getName :: a -> Name\n\n getOccName n = nameOccName (getName n) -- Default method\n\\end{code}\n\n\\begin{code}\ngetSrcLoc :: NamedThing a => a -> 
SrcLoc\ngetSrcSpan :: NamedThing a => a -> SrcSpan\ngetOccString :: NamedThing a => a -> String\n\ngetSrcLoc = nameSrcLoc . getName\ngetSrcSpan = nameSrcSpan . getName\ngetOccString = occNameString . getOccName\n\npprInfixName, pprPrefixName :: (Outputable a, NamedThing a) => a -> SDoc\n-- See Outputable.pprPrefixVar, pprInfixVar;\n-- add parens or back-quotes as appropriate\npprInfixName n = pprInfixVar (isSymOcc (getOccName n)) (ppr n)\n\npprPrefixName thing \n | name `hasKey` liftedTypeKindTyConKey \n = ppr name -- See Note [Special treatment for kind *]\n | otherwise\n = pprPrefixVar (isSymOcc (nameOccName name)) (ppr name)\n where\n name = getName thing\n\\end{code}\n\nNote [Special treatment for kind *]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nDo not put parens around the kind '*'. Even though it looks like\nan operator, it is really a special case.\n\nThis pprPrefixName stuff is really only used when printing HsSyn,\nwhich has to be polymorphic in the name type, and hence has to go via\nthe overloaded function pprPrefixOcc. It's easier where we know the\ntype being pretty printed; eg the pretty-printing code in TypeRep.\n\nSee Trac #7645, which led to this.\n\n","avg_line_length":39.5816666667,"max_line_length":98,"alphanum_fraction":0.5861299423} {"size":14006,"ext":"lhs","lang":"Literate Haskell","max_stars_count":1.0,"content":"%\n% Copyright 2014, General Dynamics C4 Systems\n%\n% This software may be distributed and modified according to the terms of\n% the GNU General Public License version 2. 
Note that NO WARRANTY is provided.\n% See \"LICENSE_GPLv2.txt\" for details.\n%\n% @TAG(GD_GPL)\n%\n\nThis module contains the data structure and operations for the physical memory model.\n\n> module SEL4.Model.PSpace (\n> PSpace, newPSpace, initPSpace,\n> PSpaceStorable,\n> objBits, injectKO, projectKO, makeObject, loadObject, updateObject,\n> getObject, setObject, deleteObjects, reserveFrame,\n> typeError, alignError, alignCheck, sizeCheck,\n> loadWordUser, storeWordUser, placeNewObject\n> ) where\n\n\\begin{impdetails}\n\n% {-# BOOT-IMPORTS: Data.Map SEL4.Object.Structures SEL4.Machine.RegisterSet #-}\n% {-# BOOT-EXPORTS: PSpace #PRegion newPSpace #-}\n\n> import SEL4.Model.StateData\n> import SEL4.Object.Structures\n\n> import qualified Data.Map\n> import Data.Bits\n> import SEL4.Machine.RegisterSet\n> import SEL4.Machine.Hardware\n\n\n\\end{impdetails}\n\n\\subsection{Physical Address Space}\n\nThe physical address space is represented by a map from physical addresses to objects. The objects themselves are wrapped in the \"KernelObject\" type.\n\n> newtype PSpace = PSpace { psMap :: Data.Map.Map Word KernelObject }\n\n\\subsection{Storable Objects}\n\nThe type class \"PSpaceStorable\" defines a set of operations that may be performed on any object that is storable in physical memory.\n\n> class PSpaceStorable a where\n\n\\begin{impdetails}\nFor a \\emph{pure} kernel object --- one which is only accessed by the kernel itself, and may therefore be stored in the Haskell \"PSpace\" structure --- it is usually sufficient to define the \"objBits\", \"injectKO\" and \"projectKO\" functions for an instance of this class.\n\nSome kernel objects, such as capability table entries (\"CTE\"s), may be stored either alone in the \"PSpace\" or encapsulated inside another kernel object. 
Instances for such objects must override the default \"loadObject\" and \"updateObject\" definitions, as \"injectKO\" and \"projectKO\" are not sufficient.\n\nObjects such as virtual memory pages or hardware-defined page tables must be accessed in the hardware monad. Instances for these objects must override \"getObject\" and \"setObject\".\n\nAll instances must either define \"injectKO\" and \"makeObject\" or override \"placeNewObject\" to allow new objects to be created.\n\\end{impdetails}\n\n\\subsubsection{Object Properties}\n\nThe size and alignment of the physical region occupied by objects of type \"a\". This is the logarithm base 2 of the object's size (i.e., the number of low-order bits of the physical address that are not necessary to locate the object).\n\nThe default contents of a kernel object of this type.\n\n> makeObject :: a\n\n\\subsubsection{Storing Objects in PSpace}\n\nThe \"loadObject\" and \"updateObject\" functions are used to insert or extract an object from a \"KernelObject\" wrapper, given any remaining unresolved physical address bits. Normally these bits must all be zero, and the number of bits must equal the result of \"objBits\"; this ensures that the alignment is correct.\n\n\\begin{impdetails}\nThe default definitions are sufficient for most kernel objects. 
There is one exception in the platform-independent code, for \"CTE\" objects; it can be found in \\autoref{sec:object.instances}.\n\\end{impdetails}\n\n> loadObject :: (Monad m) => Word -> Word -> Maybe Word ->\n> KernelObject -> m a\n> loadObject ptr ptr' next obj = do\n> unless (ptr == ptr') $ fail $ \"no object at address given in pspace,target=\" ++ (show ptr) ++\",lookup=\" ++ (show ptr')\n> val <- projectKO obj \n> alignCheck ptr (objBits val)\n> sizeCheck ptr next (objBits val)\n> return val\n\n> updateObject :: (Monad m) => a -> KernelObject -> Word ->\n> Word -> Maybe Word -> m KernelObject\n> updateObject val oldObj ptr ptr' next = do \n> unless (ptr == ptr') $ fail $ \"no object at address given in pspace,target=\" ++ (show ptr) ++\",lookup=\" ++ (show ptr')\n> liftM (asTypeOf val) $ projectKO oldObj -- for the type error\n> alignCheck ptr (objBits val)\n> sizeCheck ptr next (objBits val)\n> return (injectKO val)\n\nThe \"injectKO\" and \"projectKO\" functions convert to and from a \"KernelObject\", which is the type used to encapsulate all objects stored in the \"PSpace\" structure.\n\n> injectKO :: a -> KernelObject\n> projectKO :: (Monad m) => KernelObject -> m a\n\n> objBits :: PSpaceStorable a => a -> Int\n> objBits a = objBitsKO (injectKO a)\n\n\\subsection{Functions}\n\n\\subsubsection{Initialisation}\n\nA new physical address space has an empty object map.\n\n> newPSpace :: PSpace\n> newPSpace = PSpace { psMap = Data.Map.empty }\n\nThe \"initPSpace\" function currently does nothing. In earlier versions of the Haskell model, it was used to configure the \"PSpace\" model to signal a bus error if an invalid physical address was accessed. 
This is useful only for debugging of the Haskell model, and is not strictly necessary; it has no equivalent in a real implementation.\n% FIXME maybe check that the arguments are OK\n\n> initPSpace :: [(PPtr (), PPtr ())] -> Kernel ()\n> initPSpace _ = return ()\n\n\\subsubsection{Accessing Objects}\n\nGiven a pointer into physical memory, an attempt may be made to fetch\nor update an object of any storable type from the address space. The caller is\nassumed to have checked that the address is correctly aligned for the\nrequested object type and that it actually contains an object of the\nrequested type.\n\n> getObject :: PSpaceStorable a => PPtr a -> Kernel a\n> getObject ptr = do\n> map <- gets $ psMap . ksPSpace\n> let (before, after) = lookupAround2 (fromPPtr ptr) map\n> (ptr', val) <- maybeToMonad before\n> loadObject (fromPPtr ptr) ptr' after val\n\n> setObject :: PSpaceStorable a => PPtr a -> a -> Kernel ()\n> setObject ptr val = do\n> ps <- gets ksPSpace\n> let map = psMap ps\n> let (before, after) = lookupAround2 (fromPPtr ptr) map\n> (ptr', obj) <- maybeToMonad before\n> obj' <- updateObject val obj (fromPPtr ptr) ptr' after\n> let map' = Data.Map.insert ptr' obj' map\n> let ps' = ps { psMap = map' }\n> modify (\\ks -> ks { ksPSpace = ps'})\n\n> lookupAround :: Ord k => k -> Data.Map.Map k a ->\n> (Maybe (k, a), Maybe a, Maybe (k, a))\n> lookupAround ptr map = (nullProtect Data.Map.findMax before,\n> at, nullProtect Data.Map.findMin after)\n> where\n> (before, at, after) = Data.Map.splitLookup ptr map\n> nullProtect f m\n> | Data.Map.null m = Nothing\n> | otherwise = Just (f m)\n\n> lookupAround2 :: Ord k => k -> Data.Map.Map k a -> (Maybe (k, a), Maybe k)\n> lookupAround2 ptr mp = case at of\n> Just v -> (Just (ptr, v), after')\n> Nothing -> (before, after')\n> where\n> (before, at, after) = lookupAround ptr mp\n> after' = maybe Nothing (Just . 
fst) after\n\n> maybeToMonad :: Monad m => Maybe a -> m a\n> maybeToMonad (Just x) = return x\n> maybeToMonad Nothing = fail \"maybeToMonad: got Nothing\"\n\n\\subsubsection{Creating Objects}\n\nCreate a new object, and place it in memory. Some objects (such as page\ndirectories) are actually arrays of smaller objects. To handle these cases, the\nfunction `placeNewObject' accepts an argument allowing multiple objects of the\nsame type to be created as a group. For standard object types, this argument\nwill always be set to create only a single object.\n\nThe arguments to \"placeNewObject\" are a pointer to the start of the region\n(which must be aligned to the object's size); the value used to initialise the\ncreated objects; and the number of copies that will be created (the input\nrepresent as a power-of-two).\n\n> placeNewObject :: PSpaceStorable a => PPtr () -> a -> Int -> Kernel ()\n> placeNewObject ptr val groupSizeBits =\n> placeNewObject' ptr (injectKO val) groupSizeBits\n>\n> placeNewObject' :: PPtr () -> KernelObject -> Int -> Kernel ()\n> placeNewObject' ptr val groupSizeBits = do\n\nCalculate the size (as a power of two) of the region the new object will be placed in.\n\n> let objSizeBits = objBitsKO val\n> let totalBits = objSizeBits + groupSizeBits\n\nCheck the alignment of the specified region.\n\n> unless (fromPPtr ptr .&. mask totalBits == 0) $\n> alignError totalBits\n\nFetch the \"PSpace\" structure from the current state, and search the region for existing objects; fail if any are found.\n\n> ps <- gets ksPSpace\n> let end = fromPPtr ptr + ((1 `shiftL` totalBits) - 1)\n> let (before, _) = lookupAround2 end (psMap ps)\n> case before of\n> Nothing -> return ()\n> Just (x, _) -> assert (x < fromPPtr ptr)\n> \"Object creation would destroy an existing object\"\n\nMake a list of addresses at which to create objects.\n\n> let addresses = map\n> (\\n -> fromPPtr ptr + n `shiftL` objSizeBits)\n> [0 .. 
(1 `shiftL` groupSizeBits) - 1]\n\nInsert the objects into the \"PSpace\" map.\n\n> let map' = foldr\n> (\\addr map -> Data.Map.insert addr val map)\n> (psMap ps) addresses\n\nUpdate the state with the new \"PSpace\" map.\n\n> let ps' = ps { psMap = map' }\n> modify (\\ks -> ks { ksPSpace = ps'})\n\n\\subsubsection{Deleting Objects}\n\nNo type checks are performed when deleting objects; \"deleteObjects\" simply deletes every object in the given region. If an object partially overlaps with the given region but is not completely inside it, this function's behaviour is undefined.\n\n> deleteObjects :: PPtr a -> Int -> Kernel ()\n> deleteObjects ptr bits = do\n> unless (fromPPtr ptr .&. mask bits == 0) $\n> alignError bits\n> stateAssert (deletionIsSafe ptr bits)\n> \"Object deletion would leave dangling pointers\"\n> doMachineOp $ freeMemory (PPtr (fromPPtr ptr)) bits\n> ps <- gets ksPSpace\n> let inRange = (\\x -> x .&. ((- mask bits) - 1) == fromPPtr ptr)\n> let map' = Data.Map.filterWithKey\n> (\\x _ -> not (inRange x))\n> (psMap ps)\n> let ps' = ps { psMap = map' }\n> modify (\\ks -> ks { ksPSpace = ps'})\n\ndelete the ghost state just in case the deleted objects have been user pages or cnodes.\n\n\n> modify (\\ks -> ks { gsUserPages = (\\x -> if inRange x\n> then Nothing else gsUserPages ks x) })\n> modify (\\ks -> ks { gsCNodes = (\\x -> if inRange x\n> then Nothing else gsCNodes ks x) })\n> stateAssert ksASIDMapSafe \"Object deletion would leave dangling PD pointers\"\n\nIn \"deleteObjects\" above, we assert \"deletionIsSafe\"; that is, that there are no pointers to these objects remaining elsewhere in the kernel state. 
Since we cannot easily check this in the Haskell model, we assume that it is always true; the assertion is strengthened during translation into Isabelle.\n\n> deletionIsSafe :: PPtr a -> Int -> KernelState -> Bool\n> deletionIsSafe _ _ _ = True\n\nAfter deletion, we assert ksASIDMapSafe which states that there are page directories at the addresses in the asid map. Again, the real assertion is only inserted in the translation to the theorem prover. \n\n> ksASIDMapSafe :: KernelState -> Bool\n> ksASIDMapSafe _ = True\n\n\n\\subsubsection{Reserving Memory Regions}\n\nThe \"reserveFrame\" function marks a page-sized physical region as being in use for kernel or user data. This prevents any new kernel objects being created in these regions, but does not store any real data in the \"PSpace\".\n\n\\begin{impdetails}\nThis is intended for use by alternate implementations of \"placeNewObject\", for objects that are stored outside the \"PSpace\" structure.\n\\end{impdetails}\n\n> reserveFrame :: PPtr a -> Bool -> Kernel ()\n> reserveFrame ptr isKernel = do\n> let val = if isKernel then KOKernelData else KOUserData\n> placeNewObject' (PPtr (fromPPtr ptr)) val 0\n> return ()\n\n\\subsubsection{Access Failures}\n\nThese two functions halt the kernel with an error message when a memory access is performed with incorrect type or alignment.\n\n> typeError :: Monad m => String -> KernelObject -> m a\n> typeError t1 t2 = fail (\"Wrong object type - expected \" ++ t1 ++ \n> \", found \" ++ (kernelObjectTypeName t2))\n\n> alignError :: Monad m => Int -> m a\n> alignError n = fail (\"Unaligned access - lowest \" ++\n> (show n) ++ \" bits must be 0\")\n\n> alignCheck :: Monad m => Word -> Int -> m ()\n> alignCheck x n = unless (x .&. 
mask n == 0) $ alignError n\n\n> sizeCheck :: Monad m => Word -> Maybe Word -> Int -> m ()\n> sizeCheck _ Nothing _ = return ()\n> sizeCheck start (Just end) n =\n> when (end - start < 1 `shiftL` n)\n> (fail (\"Object must be at least 2^\" ++ (show n) ++ \" bytes long.\"))\n\n\\subsubsection{Accessing user data}\n\nThe following functions are used to access words in user-accessible data pages. They are equivalent to \"loadWord\" and \"storeWord\", except that they also assert that the pointer is in a user data page.\n\n> loadWordUser :: PPtr Word -> Kernel Word\n> loadWordUser p = do\n> stateAssert (pointerInUserData p)\n> \"loadWordUser needs a user data page\"\n> doMachineOp $ loadWord p\n\n> storeWordUser :: PPtr Word -> Word -> Kernel ()\n> storeWordUser p w = do\n> stateAssert (pointerInUserData p)\n> \"storeWordUser needs a user data page\"\n> doMachineOp $ storeWord p w\n\nThe following predicate is used above to assert that the pointer is a valid pointer to user data. It is always \"True\" here, but is replaced with a stronger assertion in the Isabelle translation. 
% FIXME: this can probably actually be stronger here too\n\n> pointerInUserData :: PPtr Word -> KernelState -> Bool\n> pointerInUserData _ _ = True\n\n\n","avg_line_length":44.0440251572,"max_line_length":336,"alphanum_fraction":0.6744252463} {"size":55,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"> module Main where\n>\n> import Lib\n>\n> main = someFunc\n","avg_line_length":9.1666666667,"max_line_length":19,"alphanum_fraction":0.6545454545} {"size":6541,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"% -*- LaTeX -*-\n% $Id: CurryUtils.lhs 3049 2011-10-02 15:07:27Z wlux $\n%\n% Copyright (c) 1999-2011, Wolfgang Lux\n% See LICENSE for the full license.\n%\n\\nwfilename{CurryUtils.lhs}\n\\section{Utilities for the Syntax Tree}\nThe module \\texttt{CurryUtils} provides definitions that are useful\nfor analyzing and constructing abstract syntax trees of Curry modules\nand goals.\n\\begin{verbatim}\n\n> module CurryUtils where\n> import Curry\n\n\\end{verbatim}\nHere is a list of predicates identifying various kinds of\ndeclarations.\n\\begin{verbatim}\n\n> isTypeDecl, isBlockDecl :: TopDecl a -> Bool\n> isTypeDecl (DataDecl _ _ _ _) = True\n> isTypeDecl (NewtypeDecl _ _ _ _) = True\n> isTypeDecl (TypeDecl _ _ _ _) = True\n> isTypeDecl (BlockDecl _) = False\n> isBlockDecl (BlockDecl _) = True\n> isBlockDecl _ = False\n\n> isInfixDecl, isTypeSig, isFunDecl, isFreeDecl :: Decl a -> Bool\n> isTrustAnnot, isValueDecl :: Decl a -> Bool\n> isInfixDecl (InfixDecl _ _ _ _) = True\n> isInfixDecl _ = False\n> isTypeSig (TypeSig _ _ _) = True\n> isTypeSig (ForeignDecl _ _ _ _ _) = True\n> isTypeSig _ = False\n> isFunDecl (FunctionDecl _ _ _ _) = True\n> isFunDecl (ForeignDecl _ _ _ _ _) = True\n> isFunDecl _ = False\n> isFreeDecl (FreeDecl _ _) = True\n> isFreeDecl _ = False\n> isTrustAnnot (TrustAnnot _ _ _) = True\n> isTrustAnnot _ = False\n> isValueDecl (FunctionDecl _ _ _ _) = True\n> isValueDecl (ForeignDecl _ _ _ _ _) = 
True\n> isValueDecl (PatternDecl _ _ _) = True\n> isValueDecl (FreeDecl _ _) = True\n> isValueDecl _ = False\n\n\\end{verbatim}\nThe function \\texttt{isVarPattern} returns true if its argument is\nsemantically equivalent to a variable pattern. Note that in particular\nthis function returns \\texttt{True} for lazy patterns.\n\\begin{verbatim}\n\n> isVarPattern :: ConstrTerm a -> Bool\n> isVarPattern (LiteralPattern _ _) = False\n> isVarPattern (NegativePattern _ _ _) = False\n> isVarPattern (VariablePattern _ _) = True\n> isVarPattern (ConstructorPattern _ _ _) = False\n> isVarPattern (FunctionPattern _ _ _) = False\n> isVarPattern (InfixPattern _ _ _ _) = False\n> isVarPattern (ParenPattern t) = isVarPattern t\n> isVarPattern (TuplePattern _) = False\n> isVarPattern (ListPattern _ _) = False\n> isVarPattern (AsPattern _ t) = isVarPattern t\n> isVarPattern (LazyPattern _) = True\n\n\\end{verbatim}\nThe functions \\texttt{constr} and \\texttt{nconstr} return the\nconstructor name of a data constructor and newtype constructor\ndeclaration, respectively.\n\\begin{verbatim}\n\n> constr :: ConstrDecl -> Ident\n> constr (ConstrDecl _ _ c _) = c\n> constr (ConOpDecl _ _ _ op _) = op\n> constr (RecordDecl _ _ c _) = c\n\n> nconstr :: NewConstrDecl -> Ident\n> nconstr (NewConstrDecl _ c _) = c\n> nconstr (NewRecordDecl _ c _ _) = c\n\n\\end{verbatim}\nThe functions \\texttt{labels} and \\texttt{nlabel} return the field\nlabel identifiers of a data constructor and newtype constructor\ndeclaration, respectively.\n\\begin{verbatim}\n\n> labels :: ConstrDecl -> [Ident]\n> labels (ConstrDecl _ _ _ _) = []\n> labels (ConOpDecl _ _ _ _ _) = []\n> labels (RecordDecl _ _ _ fs) = [l | FieldDecl _ ls _ <- fs, l <- ls]\n\n> nlabel :: NewConstrDecl -> [Ident]\n> nlabel (NewConstrDecl _ _ _) = []\n> nlabel (NewRecordDecl _ _ l _) = [l]\n\n\\end{verbatim}\nThe function \\texttt{eqnArity} returns the (syntactic) arity of a\nfunction equation and \\texttt{flatLhs} returns the function name 
and\nthe list of arguments from the left hand side of a function equation.\n\\begin{verbatim}\n\n> eqnArity :: Equation a -> Int\n> eqnArity (Equation _ lhs _) = length (snd (flatLhs lhs))\n\n> flatLhs :: Lhs a -> (Ident,[ConstrTerm a])\n> flatLhs lhs = flat lhs []\n> where flat (FunLhs f ts) ts' = (f,ts ++ ts')\n> flat (OpLhs t1 op t2) ts = (op,t1:t2:ts)\n> flat (ApLhs lhs ts) ts' = flat lhs (ts ++ ts')\n\n\\end{verbatim}\nThe function \\texttt{infixOp} converts an infix operator into an\nexpression and the function \\texttt{opName} returns the operator's\nname.\n\\begin{verbatim}\n\n> infixOp :: InfixOp a -> Expression a\n> infixOp (InfixOp a op) = Variable a op\n> infixOp (InfixConstr a op) = Constructor a op\n\n> opName :: InfixOp a -> QualIdent\n> opName (InfixOp _ op) = op\n> opName (InfixConstr _ c) = c\n\n\\end{verbatim}\nThe function \\texttt{orderFields} sorts the arguments of a record\npattern or expression into a fixed order, which usually is the order\nin which the labels appear in the record's declaration.\n\\begin{verbatim}\n\n> orderFields :: [Field a] -> [Ident] -> [Maybe a]\n> orderFields fs ls = map (flip lookup [(unqualify l,x) | Field l x <- fs]) ls\n\n\\end{verbatim}\nThe function \\texttt{entity} returns the qualified name of the entity\ndefined by an interface declaration.\n\\begin{verbatim}\n\n> entity :: IDecl -> QualIdent\n> entity (IInfixDecl _ _ _ op) = op\n> entity (HidingDataDecl _ tc _) = tc\n> entity (IDataDecl _ tc _ _ _) = tc\n> entity (INewtypeDecl _ tc _ _ _) = tc\n> entity (ITypeDecl _ tc _ _) = tc\n> entity (IFunctionDecl _ f _ _) = f\n\n\\end{verbatim}\nThe function \\texttt{unhide} makes interface declarations transparent,\ni.e., it replaces hidden data type declarations by standard data type\ndeclarations and removes all hiding specifications from interface\ndeclarations.\n\\begin{verbatim}\n\n> unhide :: IDecl -> IDecl\n> unhide (IInfixDecl p fix pr op) = IInfixDecl p fix pr op\n> unhide (HidingDataDecl p tc tvs) = 
IDataDecl p tc tvs [] []\n> unhide (IDataDecl p tc tvs cs _) = IDataDecl p tc tvs cs []\n> unhide (INewtypeDecl p tc tvs nc _) = INewtypeDecl p tc tvs nc []\n> unhide (ITypeDecl p tc tvs ty) = ITypeDecl p tc tvs ty\n> unhide (IFunctionDecl p f n ty) = IFunctionDecl p f n ty\n\n\\end{verbatim}\nHere are a few convenience functions for constructing (elements of)\nabstract syntax trees.\n\\begin{verbatim}\n\n> funDecl :: Position -> a -> Ident -> [ConstrTerm a] -> Expression a -> Decl a\n> funDecl p a f ts e = FunctionDecl p a f [funEqn p f ts e]\n\n> funEqn :: Position -> Ident -> [ConstrTerm a] -> Expression a -> Equation a\n> funEqn p f ts e = Equation p (FunLhs f ts) (SimpleRhs p e [])\n\n> patDecl :: Position -> ConstrTerm a -> Expression a -> Decl a\n> patDecl p t e = PatternDecl p t (SimpleRhs p e [])\n\n> varDecl :: Position -> a -> Ident -> Expression a -> Decl a\n> varDecl p ty = patDecl p . VariablePattern ty\n\n> caseAlt :: Position -> ConstrTerm a -> Expression a -> Alt a\n> caseAlt p t e = Alt p t (SimpleRhs p e [])\n\n> mkLet :: [Decl a] -> Expression a -> Expression a\n> mkLet ds e = if null ds then e else Let ds e\n\n> apply :: Expression a -> [Expression a] -> Expression a\n> apply = foldl Apply\n\n> mkVar :: a -> Ident -> Expression a\n> mkVar ty = Variable ty . 
qualify\n\n\\end{verbatim}\n","avg_line_length":33.5435897436,"max_line_length":79,"alphanum_fraction":0.7064669011} {"size":2588,"ext":"lhs","lang":"Literate Haskell","max_stars_count":1.0,"content":"> module Chase where\n\n> import Parsley\n> import HaLay\n> import Data.Maybe\n> import Control.Applicative\n> import Control.Monad\n> import Control.Exception\n> import System.FilePath\n> import System.Directory\n> import Distribution.Simple.PreProcess.Unlit\n> import System.IO.Error\n> import Data.Foldable\n> import System.Exit\n\n> type Import = [String]\n> type Module = [String]\n\n> pPath :: P Tok [String]\n> pPath = (pSep (teq (Sym \".\")) uid) <|> pure <$> uid\n\n> pModule :: P Tok Module\n> pModule = reverse <$> (spc *> teq (KW \"module\") *> spc *> pPath <* spc <* teq (L \"where\" []) <* pRest)\n\n> pImport :: P Tok Import\n> pImport = (spc *> teq (KW \"import\") *> spc *> pPath <* spc <* pEnd) where\n\n> imports :: FilePath -> [[Tok]] -> IO [(FilePath, String)]\n> imports cur [] = return []\n> imports cur (ts : tts) = case parse pModule ts of\n> Nothing -> imports cur tts\n> Just ms -> imports' cur tts ms where\n> imports' :: FilePath -> [[Tok]] -> Module -> IO [(FilePath, String)]\n> imports' cur [] _ = return []\n> imports' cur (ts : tts) ms = let tail = imports' cur tts ms in\n> case (parse pImport ts) of\n> Just is -> isLocal cur is ms >>=\n> maybe tail (\\ fp -> tail >>= (\\ fps -> return $ fp : fps))\n> Nothing -> tail\n\n> backUpPath :: FilePath -> Module -> FilePath\n> backUpPath pre (m : []) = pre\n> backUpPath pre (m : ms) = backUpPath (takeDirectory pre) ms\n> backUpPath pre _ = pre\n\n> isLocal :: FilePath -> Import -> Module -> IO (Maybe (FilePath, String))\n> isLocal curAll imps ms = do\n> let cur = backUpPath curAll ms\n> let path = foldl' (\\ pb -> ((pb ++ \"\/\") ++ )) cur imps\n> let pathHs = path ++ \".hs\"\n> f <- tryJust (guard . 
isDoesNotExistError) $ readFile pathHs\n> case f of\n> Right f -> do\n> isOld <- isOutDate pathHs\n> return $ if isOld then Just $ (pathHs, f) else Nothing\n> Left e -> do\n> let pathLhs = path ++ \".lhs\"\n> f <- tryJust (guard . isDoesNotExistError) $ readFile pathLhs\n> case f of\n> Right f -> case unlit pathLhs f of\n> Left f -> do\n> isOld <- isOutDate pathLhs\n> return $ if isOld then Just $ (pathLhs, f) else Nothing\n> Right e -> do\n> putStrLn e\n> exitFailure\n> Left e -> return Nothing\n\n> isOutDate :: FilePath -> IO Bool\n> isOutDate fps = do\n> let fph = replaceExtension fps \".hers\"\n> srcT <- getModificationTime fps\n> herT <- tryJust (guard . isDoesNotExistError) (getModificationTime fph)\n> case herT of\n> Left e -> return True\n> Right herT -> return $ srcT > herT\n\n","avg_line_length":33.1794871795,"max_line_length":104,"alphanum_fraction":0.5935085008} {"size":1523,"ext":"lhs","lang":"Literate Haskell","max_stars_count":14.0,"content":"> {-# OPTIONS_HADDOCK show-extensions #-}\n> {-|\n> Module : LTK.Porters.Corpus\n> Copyright : (c) 2019 Dakotah Lambert\n> LICENSE : MIT\n> \n> This module provides methods to construct\n> prefix-trees of corpora.\n>\n> @since 0.3\n> -}\n> module LTK.Porters.Corpus (readCorpus) where\n\n> import Data.Set (Set)\n> import qualified Data.Set as Set\n\n> import LTK.FSA\n\n> -- |Construct a prefix-tree of a (finite) corpus.\n> readCorpus :: Ord a => [[a]] -> FSA [a] a\n> readCorpus = f . 
foldr addWord (empty, empty, empty)\n> where f (alpha, trans, fin)\n> = FSA\n> { sigma = alpha\n> , transitions = trans\n> , initials = singleton $ State []\n> , finals = fin\n> , isDeterministic = False\n> }\n\n> addWord :: (Ord a) =>\n> [a] -> (Set a, Set (Transition [a] a), Set (State [a])) ->\n> (Set a, Set (Transition [a] a), Set (State [a]))\n> addWord w (alpha, trans, fin)\n> = ( collapse insert alpha w\n> , union trans trans'\n> , insert (State w) fin\n> )\n> where trans' = Set.fromList $ f (inits w) w\n> f (x:y:xs) (z:zs)\n> = Transition\n> { edgeLabel = Symbol z\n> , source = State x\n> , destination = State y\n> } : f (y:xs) zs\n> f _ _ = []\n> inits xs = [] :\n> case xs\n> of [] -> []\n> (a:as) -> map (a :) (inits as)\n","avg_line_length":29.862745098,"max_line_length":71,"alphanum_fraction":0.4629021668} {"size":48965,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"> module MakeTree (Tree(..),Elem(..),SmartTree(..),maketree,imaketree,iimaketree,readtree,treestats,readmodels,readtree') where\n\n> import DataProp\n> import DataTree\n> import Data.List\n> import GPLIparser\n> import GPLIprinter\n> import GPLIevaluator\n> import PrintModels\n> import Control.Concurrent\n> import System.Console.ANSI\n\nGENERAL FUNCTIONS\n\nHere is a function which adds a list of elements to all open paths on a tree.\n\n\n> addnonbranching :: [Elem] -> Tree -> Tree\n> addnonbranching center (Branch [] [] ) = Branch [] []\n> addnonbranching center (Branch ys []) = Branch (ys++center) []\n> addnonbranching center (Branch ys ts) = Branch ys (map (addnonbranching center) ts)\n\nHere is a function which adds a left list and a right list to new branches on all open paths on a tree.\n\n> addbranching :: ([Elem],[Elem]) -> Tree -> Tree\n> addbranching (left, right) (Branch [] []) = Branch [] []\n> addbranching (left, right) (Branch ys []) = Branch ys [Branch right [], Branch left []]\n> addbranching (left, right) (Branch ys ts) = Branch ys ((map (addbranching 
(right, left)) ts))\n\nCONJUNCTION\n\n> hasconj :: [Elem] -> Bool\n> hasconj xs = any isconj xs\n> where isconj (Elem (Conj _ _) _ False) = True\n> isconj (Elem _ _ _ ) = False\n\n> checkconj :: [Elem] -> [Elem]\n> checkconj ((Elem (Conj left right) subs False ):xs) = ((Elem (Conj left right) subs True ):xs) \n> checkconj (x:xs) = x : checkconj xs\n> checkconj [] = []\n\n> getconjuncts :: [Elem] -> [Elem]\n> getconjuncts ((Elem (Conj left right) subs False ):xs) = [Elem left [] False, Elem right [] False] \n> getconjuncts (x:xs) = getconjuncts xs\n> getconjuncts [] = []\n\n> applyconj :: Tree -> Tree\n> applyconj (Branch elems xs) = if hasconj elems\n> then addnonbranching (getconjuncts elems) (Branch (checkconj elems) xs)\n> else (Branch elems (map applyconj xs))\n\nDOUBLE NEG\n\n> hasdneg :: [Elem] -> Bool\n> hasdneg xs = any isdneg xs\n> where isdneg (Elem (Neg (Neg _ )) _ False) = True\n> isdneg (Elem _ _ _ ) = False\n\n> checkdneg :: [Elem] -> [Elem]\n> checkdneg ((Elem (Neg (Neg scope )) subs False ):xs) = ((Elem (Neg (Neg scope )) subs True ):xs) \n> checkdneg (x:xs) = x : checkdneg xs\n> checkdneg [] = []\n\n> getdneg :: [Elem] -> [Elem]\n> getdneg ((Elem (Neg (Neg scope )) subs False ):xs) = [Elem scope [] False] \n> getdneg (x:xs) = getdneg xs\n> getdneg [] = []\n\n> applydneg :: Tree -> Tree\n> applydneg (Branch elems xs) = if hasdneg elems\n> then addnonbranching (getdneg elems) (Branch (checkdneg elems) xs)\n> else (Branch elems (map applydneg xs))\n\nNEGATED CONJUNCTION\n\n> hasnegconj :: [Elem] -> Bool\n> hasnegconj xs = any isnegconj xs\n> where isnegconj (Elem (Neg (Conj _ _)) _ False) = True\n> isnegconj (Elem _ _ _ ) = False\n\n> checknegconj :: [Elem] -> [Elem]\n> checknegconj ((Elem (Neg (Conj left right)) subs False ):xs) = ((Elem (Neg (Conj left right)) subs True ):xs) \n> checknegconj (x:xs) = x : checknegconj xs\n> checknegconj [] = []\n\n> getnegconjuncts :: [Elem] -> ([Elem],[Elem])\n> getnegconjuncts ((Elem (Neg (Conj left right)) subs 
False ):xs) = ([Elem (Neg left) [] False], [Elem (Neg right) [] False]) \n> getnegconjuncts (x:xs) = getnegconjuncts xs\n> getnegconjuncts [] = ([],[])\n\n> applynegconj :: Tree -> Tree\n> applynegconj (Branch elems xs) = if hasnegconj elems\n> then addbranching (getnegconjuncts elems) (Branch (checknegconj elems) xs)\n> else (Branch elems (map applynegconj xs))\n\nDISJUNCTION\n\n> hasdisj :: [Elem] -> Bool\n> hasdisj xs = any isdisj xs\n> where isdisj (Elem (Disj _ _) _ False) = True\n> isdisj (Elem _ _ _ ) = False\n\n> checkdisj :: [Elem] -> [Elem]\n> checkdisj ((Elem (Disj left right) subs False ):xs) = ((Elem (Disj left right) subs True ):xs) \n> checkdisj (x:xs) = x : checkdisj xs\n> checkdisj [] = []\n\n> getdisjuncts :: [Elem] -> ([Elem],[Elem])\n> getdisjuncts ((Elem (Disj left right) subs False ):xs) = ([Elem left [] False], [Elem right [] False])\n> getdisjuncts (x:xs) = getdisjuncts xs\n> getdisjuncts [] = ([],[])\n\n> applydisj :: Tree -> Tree\n> applydisj (Branch elems xs) = if hasdisj elems\n> then addbranching (getdisjuncts elems) (Branch (checkdisj elems) xs)\n> else (Branch elems (map applydisj xs))\n\nNEG DISJUNCTION\n\n> hasnegdisj :: [Elem] -> Bool\n> hasnegdisj xs = any isnegdisj xs\n> where isnegdisj (Elem (Neg (Disj _ _)) _ False) = True\n> isnegdisj (Elem _ _ _ ) = False\n\n> checknegdisj :: [Elem] -> [Elem]\n> checknegdisj ((Elem (Neg (Disj left right)) subs False ):xs) = ((Elem (Neg (Disj left right)) subs True ):xs) \n> checknegdisj (x:xs) = x : checknegdisj xs\n> checknegdisj [] = []\n\n> getnegdisjuncts :: [Elem] -> [Elem]\n> getnegdisjuncts ((Elem (Neg (Disj left right)) subs False ):xs) = [Elem (Neg left) [] False, Elem (Neg right) [] False] \n> getnegdisjuncts (x:xs) = getnegdisjuncts xs\n> getnegdisjuncts [] = []\n\n> applynegdisj :: Tree -> Tree\n> applynegdisj (Branch elems xs) = if hasnegdisj elems\n> then addnonbranching (getnegdisjuncts elems) (Branch (checknegdisj elems) xs)\n> else (Branch elems (map applynegdisj 
xs))\n\nCONDITIONAL\n\n> hascond :: [Elem] -> Bool\n> hascond xs = any iscond xs\n> where iscond (Elem (Cond _ _) _ False) = True\n> iscond (Elem _ _ _ ) = False\n\n> checkcond :: [Elem] -> [Elem]\n> checkcond ((Elem (Cond left right) subs False ):xs) = ((Elem (Cond left right) subs True ):xs) \n> checkcond (x:xs) = x : checkcond xs\n> checkcond [] = []\n\n> getconelements :: [Elem] -> ([Elem],[Elem])\n> getconelements ((Elem (Cond left right) subs False ):xs) = ([Elem (Neg left) [] False], [Elem right [] False]) \n> getconelements (x:xs) = getconelements xs\n> getconelements [] = ([],[])\n\n> applycond :: Tree -> Tree\n> applycond (Branch elems xs) = if hascond elems\n> then addbranching (getconelements elems) (Branch (checkcond elems) xs)\n> else (Branch elems (map applycond xs))\n\nNEGCONDITIONAL\n\n> hasnegcond :: [Elem] -> Bool\n> hasnegcond xs = any isnegcond xs\n> where isnegcond (Elem (Neg (Cond _ _)) _ False) = True\n> isnegcond (Elem _ _ _ ) = False\n\n> checknegcond :: [Elem] -> [Elem]\n> checknegcond ((Elem (Neg (Cond left right)) subs False ):xs) = ((Elem (Neg (Cond left right)) subs True ):xs) \n> checknegcond (x:xs) = x : checknegcond xs\n> checknegcond [] = []\n\n> getnegconelements :: [Elem] -> [Elem]\n> getnegconelements ((Elem (Neg (Cond left right)) subs False ):xs) = [Elem left [] False, Elem (Neg right) [] False] \n> getnegconelements (x:xs) = getnegconelements xs\n> getnegconelements [] = []\n\n> applynegcond :: Tree -> Tree\n> applynegcond (Branch elems xs) = if hasnegcond elems\n> then addnonbranching (getnegconelements elems) (Branch (checknegcond elems) xs)\n> else (Branch elems (map applynegcond xs))\n\nBICONDITIONAL\n\n> hasbicond :: [Elem] -> Bool\n> hasbicond xs = any isbicond xs\n> where isbicond (Elem (Bicon _ _) _ False) = True\n> isbicond (Elem _ _ _ ) = False\n\n> checkbicond :: [Elem] -> [Elem]\n> checkbicond ((Elem (Bicon left right) subs False ):xs) = ((Elem (Bicon left right) subs True ):xs) \n> checkbicond (x:xs) = x : 
checkbicond xs\n> checkbicond [] = []\n\n> getbiconelements :: [Elem] -> ([Elem],[Elem])\n> getbiconelements ((Elem (Bicon left right) subs False ):xs) = ([Elem left [] False, Elem right [] False],[Elem (Neg left) [] False, Elem (Neg right) [] False]) \n> getbiconelements (x:xs) = getbiconelements xs\n> getbiconelements [] = ([],[])\n\n> applybicond :: Tree -> Tree\n> applybicond (Branch elems xs) = if hasbicond elems\n> then addbranching (getbiconelements elems) (Branch (checkbicond elems) xs)\n> else (Branch elems (map applybicond xs))\n\nNEGBICONDITIONAL\n\n> hasnegbicond :: [Elem] -> Bool\n> hasnegbicond xs = any isnegbicond xs\n> where isnegbicond (Elem (Neg (Bicon _ _)) _ False) = True\n> isnegbicond (Elem _ _ _ ) = False\n\n> checknegbicond :: [Elem] -> [Elem]\n> checknegbicond ((Elem (Neg (Bicon left right)) subs False ):xs) = ((Elem (Neg (Bicon left right)) subs True ):xs) \n> checknegbicond (x:xs) = x : checknegbicond xs\n> checknegbicond [] = []\n\n> getnegbiconelements :: [Elem] -> ([Elem],[Elem])\n> getnegbiconelements ((Elem (Neg (Bicon left right)) subs False ):xs) = ([Elem left [] False, Elem (Neg right) [] False], [Elem (Neg left) [] False, Elem right [] False]) \n> getnegbiconelements (x:xs) = getnegbiconelements xs\n> getnegbiconelements [] = ([],[])\n\n> applynegbicond :: Tree -> Tree\n> applynegbicond (Branch elems xs) = if hasnegbicond elems\n> then addbranching (getnegbiconelements elems) (Branch (checknegbicond elems) xs)\n> else (Branch elems (map applynegbicond xs))\n\nDOUBLENEGATION\n\n> hasdoublenegj :: [Elem] -> Bool\n> hasdoublenegj xs = any isdoublenegj xs\n> where isdoublenegj (Elem (Neg (Neg _ )) _ False) = True\n> isdoublenegj (Elem _ _ _ ) = False\n\n> checkdoublenegj :: [Elem] -> [Elem]\n> checkdoublenegj ((Elem (Neg (Neg center)) subs False ):xs) = ((Elem (Neg (Neg center)) subs True ):xs) \n> checkdoublenegj (x:xs) = x : checkdoublenegj xs\n> checkdoublenegj [] = []\n\n> getdoubleneg :: [Elem] -> [Elem]\n> getdoubleneg ((Elem 
(Neg (Neg center)) subs False ):xs) = [Elem center [] False] \n> getdoubleneg (x:xs) = getdoubleneg xs\n> getdoubleneg [] = []\n\n> applydoublenegj :: Tree -> Tree\n> applydoublenegj (Branch elems xs) = if hasdoublenegj elems\n> then addnonbranching (getdoubleneg elems) (Branch (checkdoublenegj elems) xs)\n> else (Branch elems (map applydoublenegj xs))\n\nNEGATED UNIVERSAL\n\n> hasneguni :: [Elem] -> Bool\n> hasneguni xs = any isneguni xs\n> where isneguni (Elem (Neg (Uni _ _)) _ False) = True\n> isneguni (Elem _ _ _ ) = False\n\n> checkneguni :: [Elem] -> [Elem]\n> checkneguni ((Elem (Neg (Uni var scope)) subs False ):xs) = ((Elem (Neg (Uni var scope)) subs True ):xs) \n> checkneguni (x:xs) = x : checkneguni xs\n> checkneguni [] = []\n\n> getneguni :: [Elem] -> [Elem]\n> getneguni ((Elem (Neg (Uni var scope)) subs False ):xs) = [Elem (Exi var (Neg scope)) [] False] \n> getneguni (x:xs) = getneguni xs\n> getneguni [] = []\n\n> applyneguni :: Tree -> Tree\n> applyneguni (Branch elems xs) = if hasneguni elems\n> then addnonbranching (getneguni elems) (Branch (checkneguni elems) xs)\n> else (Branch elems (map applyneguni xs))\n\nNEGATED EXISTENTIAL\n\n> hasnegexe :: [Elem] -> Bool\n> hasnegexe xs = any isnegexe xs\n> where isnegexe (Elem (Neg (Exi _ _)) _ False) = True\n> isnegexe (Elem _ _ _ ) = False\n\n> checknegexe :: [Elem] -> [Elem]\n> checknegexe ((Elem (Neg (Exi var scope)) subs False ):xs) = ((Elem (Neg (Exi var scope)) subs True ):xs) \n> checknegexe (x:xs) = x : checknegexe xs\n> checknegexe [] = []\n\n> getnegexe :: [Elem] -> [Elem]\n> getnegexe ((Elem (Neg (Exi var scope)) subs False ):xs) = [Elem (Uni var (Neg scope)) [] False] \n> getnegexe (x:xs) = getnegexe xs\n> getnegexe [] = []\n\n> applynegexe :: Tree -> Tree\n> applynegexe (Branch elems xs) = if hasnegexe elems\n> then addnonbranching (getnegexe elems) (Branch (checknegexe elems) xs)\n> else (Branch elems (map applynegexe xs))\n\nEXISTENTIAL and UNIVERSAL\n\nFirst, two little helper 
functions.\n\n> getnameselems :: String -> [Elem] -> String\n> getnameselems ys xs = sort $ nub $ (ys ++ (concatMap getnameselem xs))\n> where getnameselem (Elem p _ _) = getnames p \n\n> getprops :: [Prop] -> [Elem] -> [Prop]\n> getprops ys xs = nub $ ys ++ (map getp xs)\n> where getp (Elem p _ _) = p\n\nNow a new data-type. A smart-tree is a tree whose terminals on open paths carry information about all the names on the path on which they occur and all the propositions which occur on that path. (The former is for uni and exe, the latter is for id) \n\nThis turns an ordinary tree into a smart tree:\n\n dumbtosmart :: String -> [Prop] -> Tree -> SmartTree\n dumbtosmart xs ys (Branch es ts) = (SmartBranch es (map (dumbtosmart xs ys) ts) (getnameselems xs es) (getprops ys es)) \n\n> dumbtosmart :: String -> [Prop] -> Tree -> SmartTree\n> dumbtosmart xs ys (Branch [] []) = SmartBranch [] [] xs ys \n> dumbtosmart xs ys (Branch es []) = SmartBranch es [] ((getnameselems xs es)) (getprops ys es)\n> dumbtosmart xs ys (Branch es ts) = SmartBranch es (map (dumbtosmart (nub(xs ++ (getnameselems xs es))) (nub(ys ++ (getprops ys es )))) ts) (getnameselems xs es) (getprops ys es)\n\nThis converts it back:\n\n> smarttodumb :: SmartTree -> Tree\n> smarttodumb (SmartBranch es sts xs ys) = (Branch es (map smarttodumb sts))\n\nThis gets all names on open paths.\n\n> namesonopenpaths :: SmartTree -> String \n> namesonopenpaths (SmartBranch em [] n _) = n\n> namesonopenpaths (SmartBranch em ts _ _) = concatMap namesonopenpaths ts \n\nThis gets all propositions on open paths.\n\n> propsonopenpaths :: SmartTree -> [Prop]\n> propsonopenpaths (SmartBranch em [] _ p ) = p\n> propsonopenpaths (SmartBranch em ts _ _ ) = concatMap propsonopenpaths ts\n\nWe need a general substitution function. 
We'll write it in GPLIparser (proposition to change -> target of the substitution -> substitution -> output prop).\n\nEXISTENTIAL\n\nFor an existential, we substitute the next new name.\n\n> nnames = ['a'..'t'] ++ ['\\128' ..]\n\n> nextnewname :: String -> Char\n> nextnewname x = head $ filter (\\y -> notElem y x) nnames \n\n> newonpaths :: SmartTree -> Char\n> newonpaths = nextnewname . namesonopenpaths\n\nThe standard functions:\n\nChecking to see whether a list of elements has an existential on it is straightforward:\n\n> hasexi :: [Elem] -> Bool\n> hasexi xs = any isexi xs\n> where isexi (Elem (Exi _ _) _ False) = True\n> isexi (Elem _ _ _ ) = False\n\nChecking off the existential now involves feeding the function a Char to be entered into subs.\n\n> checkexi :: Char -> [Elem] -> [Elem]\n> checkexi y ((Elem (Exi var scope) subs False ):xs) = ((Elem (Exi var scope) (y: subs) True ):xs) \n> checkexi y (x:xs) = x : checkexi y xs\n> checkexi y [] = []\n\nThe function for getting the element to append to the tree also takes the relevant substitution as an argument.\n\n> getexi :: Char -> [Elem] -> [Elem]\n> getexi y ((Elem (Exi var scope) subs False ):xs) = [Elem (substitute scope var y) [] False] \n> getexi y (x:xs) = getexi y xs\n> getexi y [] = []\n\nModify the nonbranching rule for SmartTrees:\n\n> addnonbranchingsmart :: [Elem] -> SmartTree -> SmartTree\n> addnonbranchingsmart center (SmartBranch [] [] u v) = SmartBranch [] [] u v\n> addnonbranchingsmart center (SmartBranch ys [] u v) = SmartBranch (ys++center) [] u v\n> addnonbranchingsmart center (SmartBranch ys ts u v) = SmartBranch ys (map (addnonbranchingsmart center) ts) u v\n\nApply the existential rule to a SmartTree:\n\n> applyexismart :: SmartTree -> SmartTree\n> applyexismart (SmartBranch elems xs u v) = if hasexi elems\n> then addnonbranchingsmart (getexi (newonpaths(SmartBranch elems xs u v)) elems) (SmartBranch (checkexi (newonpaths(SmartBranch elems xs u v)) elems) xs u v)\n> else (SmartBranch 
elems (map applyexismart xs) u v)\n\n> applyexi :: Tree -> Tree\n> applyexi x = smarttodumb $ applyexismart $ dumbtosmart [] [] x\n\nUNIVERSAL\n\nOkay, we need a function which tests whether the path a universal is on is saturated or not, relative to that universal.\n\n> checknotsatuni :: SmartTree -> Elem -> Bool\n> checknotsatuni t (Elem (Uni x y) subs False) = null subs || allfstinsnd subs (namesonopenpaths t)\n> checknotsatuni t (Elem _ _ _) = False\n\n> allfstinsnd :: String -> String -> Bool\n> allfstinsnd x y = not (all (\\x -> x == True) (map (\\z -> z `elem` x) y))\n\nOkay, so now \"hasuni\" checks for unsaturated universals.\n\n> hasuni :: SmartTree -> [Elem] -> Bool\n> hasuni t xs = any (checknotsatuni t) xs\n\nA universal is never permanently checked off; instead, checkuni takes the SmartTree and records the next substituted name in the universal's subs list.\n\n> checkuni :: SmartTree -> [Elem] -> [Elem]\n> checkuni y ((Elem (Uni var scope) subs False ):xs) = if checknotsatuni y (Elem (Uni var scope) subs False)\n> then ((Elem (Uni var scope) (subs ++ [(nextname subs y)]) False ):xs)\n> else [(Elem (Uni var scope) subs False)] ++ checkuni y xs \n> checkuni y (x:xs) = x : checkuni y xs\n> checkuni y [] = []\n\nThe function for getting the element to append to the tree likewise takes the SmartTree, from which the relevant substitution is computed.\n\n> getuni :: SmartTree -> [Elem] -> [Elem]\n> getuni y ((Elem (Uni var scope) subs False ):xs) = if checknotsatuni y (Elem (Uni var scope) subs False)\n> then [Elem (substitute scope var (nextname subs y)) [] False] \n> else getuni y xs\n> getuni y (x:xs) = getuni y xs\n> getuni y [] = []\n\n> nextname :: String -> SmartTree -> Char\n> nextname x t = head $ [ y | y <- (nub ((namesonopenpaths t) ++ nnames)), (y `notElem` x)]\n\nApply the universal rule to a SmartTree:\n\n> applyunismart :: SmartTree -> SmartTree\n> applyunismart (SmartBranch elems xs u v) = if hasuni (SmartBranch elems xs u v) elems\n> then addnonbranchingsmart (getuni (SmartBranch elems xs u v) elems) (SmartBranch 
(checkuni (SmartBranch elems xs u v) elems) xs u v)\n> else (SmartBranch elems (map applyunismart xs) u v)\n\n> applyuni :: Tree -> Tree\n> applyuni x = smarttodumb $ applyunismart $ dumbtosmart [] [] x\n\nSUBSTITUTION OF IDENTICALS!!!!!!\n\nAt some point we are going to need a function which takes the names in Iab (for identity) and a list of atomic propositions and negated atomics and generates the next new proposition in the list by substituting an a for a b or b for an a. Let's model it on nextname and nextnewname above\n\nOkay, so the following will get us all the atomic propositions on the open paths of a tree.\n\n> atomicsonpath :: SmartTree -> [Prop]\n> atomicsonpath xs = concatMap isatomic (propsonopenpaths xs)\n> where isatomic (Atom x y) = [Atom x y]\n> isatomic (Neg (Atom x y)) = [Neg (Atom x y)]\n> isatomic _ = [] \n\n> genonesubs :: String -> [Prop] -> [Prop]\n> genonesubs xs ps = map (genonesub xs) ps\n\n> genonesub :: String -> Prop -> Prop \n> genonesub (fst:snd:[]) (Atom (Pred1 x) y) \n> | y == [fst] = (Atom (Pred1 x) [snd])\n> | y == [snd] = (Atom (Pred1 x) [fst])\n> | otherwise = (Atom (Pred1 x) y) \n> genonesub (fst:snd:[]) (Neg (Atom (Pred1 x) y))\n> | y == [fst] = Neg (Atom (Pred1 x) [snd])\n> | y == [snd] = Neg (Atom (Pred1 x) [fst])\n> | otherwise = Neg (Atom (Pred1 x) y) \n> genonesub zx (Atom x y) = (Atom x y)\n> genonesub xz (Neg (Atom x y)) = Neg (Atom x y)\n\n> gentwosubs :: String -> [Prop] -> [Prop]\n> gentwosubs xs ps = concatMap (gentwosub' xs) ps\n\n> gentwosub' :: String -> Prop -> [Prop]\n> gentwosub' y (Atom (Pred2 x) z) = gentwosub y (Atom (Pred2 x) z)\n> gentwosub' y (Neg (Atom (Pred2 x) z)) = makenegs $ gentwosub y (Atom (Pred2 x) z)\n> gentwosub' z (Atom x y) = [(Atom x y)]\n> gentwosub' z (Neg (Atom x y)) = [Neg (Atom x y)]\n\n\n\n\n> gentwosub :: String -> Prop -> [Prop] \n> gentwosub (fst:snd:[]) (Atom (Pred2 x) (fs:sn:[])) \n> | fst == fs && fst == sn = [(Atom (Pred2 x) (fst:snd:[])), (Atom (Pred2 x) (snd:fst:[])),(Atom 
(Pred2 x) (snd:snd:[]))] \n> | fst == fs && fst \/= sn = [(Atom (Pred2 x) (snd:sn:[]))]\n> | fst \/= fs && fst == sn = [(Atom (Pred2 x) (fs:snd:[]))]\n> | snd == fs && snd == sn = [(Atom (Pred2 x) (snd:fst:[])), (Atom (Pred2 x) (fst:snd:[])),(Atom (Pred2 x) (fst:fst:[]))]\n> | snd == fs && snd \/= sn = [(Atom (Pred2 x) (fst:sn:[]))]\n> | snd \/= fs && snd == sn = [(Atom (Pred2 x) (fs:fst:[]))]\n> gentwosub zx (Atom x y) = [(Atom x y)]\n\n> genthreesubs :: String -> [Prop] -> [Prop]\n> genthreesubs xs ps = concatMap (genthreesub' xs) ps\n\n> genthreesub' :: String -> Prop -> [Prop]\n> genthreesub' y (Atom (Pred3 x) z) = genthreesub y (Atom (Pred3 x) z)\n> genthreesub' y (Neg (Atom (Pred3 x) z)) = makenegs $ genthreesub y (Atom (Pred3 x) z)\n> genthreesub' z (Atom x y) = [(Atom x y)]\n> genthreesub' z (Neg (Atom x y)) = [Neg (Atom x y)]\n\n> makenegs :: [Prop] -> [Prop]\n> makenegs xs = [ Neg x | x <- xs] \n\n> genthreesub :: String -> Prop -> [Prop]\n> genthreesub (fst:snd:[]) (Atom (Pred3 x) (fs:sn:tr:[]))\n> | fst == fs && fst == sn && fst == tr = [(Atom (Pred3 x) (snd:snd:snd:[])),(Atom (Pred3 x) (snd:fst:fst:[])),(Atom (Pred3 x) (snd:snd:fst:[])),(Atom (Pred3 x) (fst:fst:snd:[])),(Atom (Pred3 x) (snd:fst:snd:[]))] \n> | fst \/= fs && fst == sn && fst == tr = [(Atom (Pred3 x) (snd:snd:snd:[])),(Atom (Pred3 x) (fs:snd:fst:[])),(Atom (Pred3 x) (fs:fst:snd:[]))] \n> | fst \/= fs && fst \/= sn && fst == tr = [(Atom (Pred3 x) (fs:sn:snd:[]))] \n> | fst == fs && fst \/= sn && fst == tr = [(Atom (Pred3 x) (snd:sn:snd:[])),(Atom (Pred3 x) (snd:sn:fst:[])),(Atom (Pred3 x) (fst:sn:snd:[]))] \n> | fst == fs && fst \/= sn && fst \/= tr = [(Atom (Pred3 x) (snd:sn:tr:[]))] \n> | fst == fs && fst == sn && fst \/= tr = [(Atom (Pred3 x) (snd:snd:tr:[])),(Atom (Pred3 x) (fst:snd:tr:[])),(Atom (Pred3 x) (snd:fst:tr:[]))] \n> | fst \/= fs && fst == sn && fst \/= tr = [(Atom (Pred3 x) (fs:snd:tr:[]))]\n> | snd == fs && snd == sn && snd == tr = [(Atom (Pred3 x) (fst:fst:fst:[])),(Atom 
(Pred3 x) (fst:fst:snd:[])),(Atom (Pred3 x) (fst:snd:snd:[])),(Atom (Pred3 x) (snd:fst:snd:[])),(Atom (Pred3 x) (snd:fst:fst:[]))]\n> | snd \/= fs && snd == sn && snd == tr = [(Atom (Pred3 x) (fs:fst:fst:[])),(Atom (Pred3 x) (fs:fst:snd:[])),(Atom (Pred3 x) (fs:snd:fst:[]))]\n> | snd \/= fs && snd \/= sn && snd == tr = [(Atom (Pred3 x) (fs:sn:fst:[]))]\n> | snd == fs && snd \/= sn && snd == tr = [(Atom (Pred3 x) (fst:sn:fst:[])),(Atom (Pred3 x) (fst:sn:snd:[])),(Atom (Pred3 x) (snd:sn:fst:[]))]\n> | snd == fs && snd \/= sn && snd \/= tr = [(Atom (Pred3 x) (fst:sn:tr:[]))]\n> | snd == fs && snd == sn && snd \/= tr = [(Atom (Pred3 x) (fst:fst:tr:[])),(Atom (Pred3 x) (fst:snd:tr:[])),(Atom (Pred3 x) (snd:fst:tr:[]))]\n> | snd \/= fs && snd == sn && snd \/= tr = [(Atom (Pred3 x) (fs:fst:tr:[]))]\n> genthreesub zx (Atom x y) = [(Atom x y)]\n\nNow we can put all of these together to give us:\n\n> gensubs :: String -> SmartTree -> [Prop]\n> gensubs x y = ( genthreesubs x (atomicsonpath y) ++ gentwosubs x (atomicsonpath y) ++ genonesubs x (atomicsonpath y)) \n\nNow we need a function which checks if a path is saturated relative to a formula. 
To put it another way, we need a function that returns True if an Elem has an ID such that gensubs generates a proposition which is not generated by atomicsonpath.\n\n> allfstinsnd' :: [Prop] -> [Prop] -> Bool\n> allfstinsnd' x y = not (all (\\z -> z `elem` x) y)\n\n> hasid :: SmartTree -> [Elem] -> Bool\n> hasid t xs = any (checknotsatid t) xs\n\n> checknotsatid :: SmartTree -> Elem -> Bool\n> checknotsatid t (Elem (Atom (Pred2 'I') y) subs False) = allfstinsnd' (nub (atomicsonpath t)) (nub (gensubs y t))\n> checknotsatid t (Elem _ _ _) = False\n\nOkay, here's the rule at work:\n\nChecking off the identity statement now involves feeding the function a Char to be entered into subs.\n\nWe have the function for getting the element to append to the tree take the relevant substitution as an argument too.\n\n> getid :: SmartTree -> [Elem] -> [Elem]\n> getid y ((Elem (Atom (Pred2 'I') t) subs False ):xs) = if checknotsatid y (Elem (Atom (Pred2 'I') t) subs False)\n> then [Elem (nextnewprop t y) [] False] \n> else getid y xs\n> getid y (x:xs) = getid y xs\n> getid y [] = []\n\n> nextnewprop :: String -> SmartTree -> Prop \n> nextnewprop y t = head ((nub (gensubs y t) ) \\\\ (nub (atomicsonpath t)))\n\nApply the identity rule to a SmartTree\n\n> applyidsmart :: SmartTree -> SmartTree\n> applyidsmart (SmartBranch elems xs u v) = if hasid (SmartBranch elems xs u v) elems\n> then addnonbranchingsmart (getid (SmartBranch elems xs u v) elems) (SmartBranch elems xs u v)\n> else (SmartBranch elems (map applyidsmart xs) u v)\n\n> applyid :: Tree -> Tree\n> applyid x = smarttodumb $ applyidsmart $ dumbtosmart [] [] x\n\n\n\nCHECK FOR CONTRADICTIONS\n\n> type Path = [[Elem]]\n\n> getpaths :: Tree -> [Path]\n> getpaths (Branch [] []) = []\n> getpaths (Branch es []) = [[es]]\n> getpaths (Branch es ts) = map ([es]++) (concatMap getpaths ts)\n\n> getpropsonpath :: Path -> [Prop]\n> getpropsonpath es = map getprop (concat es)\n> where getprop (Elem prop subs bool) = prop \n\n> 
newclosure :: [Prop] -> [Bool]\n> newclosure xs = map isnegid xs\n> where isnegid (Neg (Atom (Pred2 'I') (x:y:[]))) = x == y\n> isnegid _ = False\n\n\n> checkpaths :: [Path] -> [Path]\n> checkpaths xs = [ x | x <- xs, (checkpath (getpropsonpath x))]\n\n\n> checkpath :: [Prop] -> Bool\n> checkpath ps = (or (map (\\x -> (x `elem` ps) && ((Neg x) `elem` ps)) ps)) || or (newclosure ps)\n\n> killpath :: Path -> Tree -> Tree\n> killpath (z:zs) (Branch [] []) = (Branch [] [])\n> killpath [] (Branch [] []) = (Branch [] [])\n> killpath (z:zs) (Branch es []) = if es == z\n> then (Branch es [Branch [] []])\n> else Branch es []\n> killpath [] (Branch es []) = (Branch es [])\n> killpath (z:zs) (Branch es ts) = if es == z\n> then Branch es (map (killpath zs) ts)\n> else Branch es ts \n> killpath [] (Branch es ts) = Branch es ts\n\n> killpaths :: [Path] -> Tree -> Tree\n> killpaths (y:ys) x = killpaths ys (killpath y x)\n> killpaths [] x = x\n\n> oldcfc :: Tree -> Tree\n> oldcfc x = killpaths (checkpaths (getpaths x)) x \n\n> cfc = smartercfc\n\nSMARTCFC\n\nProfiling suggests that this method of checking for contradictions is not very efficient. Let's re-implement cfc in terms of smart-trees. After all, a smart tree carries with it information about all propositions on the path. We can just check these and change the path to a closed one. It should be more efficient this way. 
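As a minimal, standalone illustration of what the closure test checks (using a simplified, hypothetical Prop type with just two constructors; the real Prop type in this file is richer), a path of propositions closes when it contains some proposition together with its negation, or a denial of a self-identity:

```haskell
-- Simplified sketch of the closure test; this cut-down Prop type is
-- illustrative only, not the one defined elsewhere in this file.
data Prop = Atom Char String   -- e.g. Atom 'F' "a" for F(a), Atom 'I' "ab" for a=b
          | Neg Prop
          deriving (Eq, Show)

-- A path closes if it contains both p and Neg p for some p,
-- or a negated self-identity such as Neg (Atom 'I' "aa"), i.e. ~(a=a).
closed :: [Prop] -> Bool
closed ps = any (\p -> Neg p `elem` ps) ps || any negSelfId ps
  where
    negSelfId (Neg (Atom 'I' [x, y])) = x == y
    negSelfId _                       = False
```

For instance, closed [Atom 'F' "a", Neg (Atom 'F' "a")] and closed [Neg (Atom 'I' "aa")] are both True, while a path containing neither kind of contradiction stays open.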
\n\n> smartcfc :: Tree -> Tree\n> smartcfc t = smarttodumb (closepaths (dumbtosmart [] [] t))\n\n> closepaths :: SmartTree -> SmartTree\n> closepaths (SmartBranch [] [] ns ps) = SmartBranch [] [] ns ps \n> closepaths (SmartBranch es [] ns ps) = if checkpath ps\n> then SmartBranch es [SmartBranch [] [] ns ps] ns ps\n> else SmartBranch es [] ns ps \n> closepaths (SmartBranch es ts ns ps) = SmartBranch es (map closepaths ts) ns ps\n\n> smartercfc :: (Tree -> Tree) -> Tree -> Tree\n> smartercfc rule tree = if tree == (rule tree)\n> then tree\n> else smartcfc (rule tree) \n\n\nALL RULES\n\nBranching\n\n> disj xs = cfc applydisj xs\n> negconj xs = cfc applynegconj xs\n> con xs = cfc applycond xs\n> bicon xs = cfc applybicond xs\n> negbicon xs = cfc applynegbicond xs\n\nNonBranching\n\n> conj xs = cfc applyconj xs\n> negdisj xs = cfc applynegdisj xs\n> negcon xs = cfc applynegcond xs\n> negsome xs = cfc applynegexe xs\n> negall xs = cfc applyneguni xs\n\nDouble Negation\n\n> dneg xs = cfc applydneg xs\n\nQuantifiers\n\n> exi xs = cfc applyexi xs\n> uni xs = cfc applyuni xs\n\n\n> loopdneg x | dneg x == dneg (dneg x) = dneg x\n> | otherwise = loopdneg (dneg x)\n\n> allnon xs = loopdneg $ negall $ loopdneg $ negsome $ loopdneg $ negcon $ loopdneg $ negdisj $ loopdneg $ conj $ loopdneg xs\n\n> loopnon x | allnon x == allnon (allnon x) = allnon x\n> | otherwise = loopnon (allnon x)\n\n> alls' xs = allnon $ exi $ allnon $ negbicon $ allnon $ bicon $ allnon $ con $ allnon $ negconj $ allnon $ disj $ allnon xs\n\n> idd x = cfc applyid x\n\n> loopid x | idd x == idd (idd x) = idd x\n> | otherwise = loopid (idd x)\n\n\n> allrules xs = uni $ alls' xs\n\nMAKETREE\n\n> maketree' :: Tree -> Tree\n> maketree' x | allrules x == allrules (allrules x) = allrules x\n> | otherwise = maketree' (allrules x)\n\n> maketree :: String -> Tree\n> maketree x = loopid $ maketree' (gentree (lines x))\n\nEND OF ORDINARY MAKETREE\n\nINCREMENTAL MAKETREE I\n\nOkay, the idea for the incremental maketree 
function is to generalise the above so that each rule takes a list of trees as input and produces a list of trees as output. If the rule applies, it returns the list with the new tree appended to the end. If it doesn't apply it returns the old list unchanged. We might as well do this properly. We will make the list not just a list of trees, but a list of pairs of trees and strings. The string says what rule has been applied. \n\n> igentree :: [String] -> [(Tree,String)]\n> igentree xs = [(gentree xs,\"setting up the tree...\")]\n\nBranching\n\n> idisj xs = icfc (iapplydisj xs)\n> inegconj xs = icfc (iapplynegconj xs)\n> icon xs = icfc (iapplycon xs)\n> ibicon xs = icfc (iapplybicon xs)\n> inegbicon xs = icfc (iapplynegbicon xs)\n\n> iapplydisj :: [(Tree,String)] -> [(Tree,String)]\n> iapplydisj xs = if (fst (last xs)) \/= applydisj (fst (last xs)) \n> then xs ++ [(applydisj (fst (last xs)),\"applying the rule for disjunction...\")]\n> else xs\n\n> iapplynegconj :: [(Tree,String)] -> [(Tree,String)]\n> iapplynegconj xs = if (fst (last xs)) \/= applynegconj (fst (last xs)) \n> then xs ++ [(applynegconj (fst (last xs)),\"applying the rule for negated conjunction...\")]\n> else xs\n\n> iapplycon :: [(Tree,String)] -> [(Tree,String)]\n> iapplycon xs = if (fst (last xs)) \/= applycond (fst (last xs)) \n> then xs ++ [(applycond (fst (last xs)),\"applying the rule for conditional...\")]\n> else xs\n\n> iapplybicon :: [(Tree,String)] -> [(Tree,String)]\n> iapplybicon xs = if (fst (last xs)) \/= applybicond (fst (last xs)) \n> then xs ++ [(applybicond (fst (last xs)),\"applying the rule of biconditional...\")]\n> else xs\n\n> iapplynegbicon :: [(Tree,String)] -> [(Tree,String)]\n> iapplynegbicon xs = if (fst (last xs)) \/= applynegbicond (fst (last xs)) \n> then xs ++ [(applynegbicond (fst (last xs)),\"applying the rule for negative biconditional...\")]\n> else xs\n\nNonBranching\n\n> iconj xs = icfc (iapplyconj xs)\n> inegdisj xs = icfc (iapplynegdisj xs)\n> inegcon xs = 
icfc (iapplynegcon xs)\n> inegsome xs = icfc (iapplynegsome xs)\n> inegall xs = icfc (iapplynegall xs)\n\n> iapplyconj :: [(Tree,String)] -> [(Tree,String)]\n> iapplyconj xs = if (fst (last xs)) \/= applyconj (fst (last xs)) \n> then xs ++ [(applyconj (fst (last xs)),\"applying the rule for conjunction...\")]\n> else xs\n\n> iapplynegdisj :: [(Tree,String)] -> [(Tree,String)]\n> iapplynegdisj xs = if (fst (last xs)) \/= applynegdisj (fst (last xs)) \n> then xs ++ [(applynegdisj (fst (last xs)),\"applying the rule for negated disjunction...\")]\n> else xs\n\n> iapplynegcon :: [(Tree,String)] -> [(Tree,String)]\n> iapplynegcon xs = if (fst (last xs)) \/= applynegcond (fst (last xs)) \n> then xs ++ [(applynegcond (fst (last xs)),\"applying the rule for negated conditional...\")]\n> else xs\n\n> iapplynegsome :: [(Tree,String)] -> [(Tree,String)]\n> iapplynegsome xs = if (fst (last xs)) \/= applynegexe (fst (last xs)) \n> then xs ++ [(applynegexe (fst (last xs)),\"applying the rule for negated existential...\")]\n> else xs\n\n> iapplynegall :: [(Tree,String)] -> [(Tree,String)]\n> iapplynegall xs = if (fst (last xs)) \/= applyneguni (fst (last xs)) \n> then xs ++ [(applyneguni (fst (last xs)),\"applying the rule for negated universal...\")]\n> else xs\n\nDouble Negation\n\n> idneg xs = icfc (iapplydneg xs)\n\n> iapplydneg :: [(Tree,String)] -> [(Tree,String)]\n> iapplydneg xs = if (fst (last xs)) \/= applydneg (fst (last xs)) \n> then xs ++ [(applydneg (fst (last xs)),\"applying the rule for double negation...\")]\n> else xs\n\nQuantifiers\n\n> iexi xs = icfc (iapplyexi xs)\n> iuni xs = icfc (iapplyuni xs)\n\n> iapplyexi :: [(Tree,String)] -> [(Tree,String)]\n> iapplyexi xs = if (fst (last xs)) \/= applyexi (fst (last xs)) \n> then xs ++ [(applyexi (fst (last xs)),\"applying the rule for existential...\")]\n> else xs\n\n> iapplyuni :: [(Tree,String)] -> [(Tree,String)]\n> iapplyuni xs = if (fst (last xs)) \/= applyuni (fst (last xs)) \n> then xs ++ [(applyuni (fst 
(last xs)),\"applying the rule for universal...\")]\n> else xs\n\n\n> icfc :: [(Tree,String)] -> [(Tree,String)]\n> icfc xs = if (fst (last xs)) \/= oldcfc (fst (last xs)) \n> then xs ++ [(oldcfc (fst (last xs)),\"applying closure rule...\")]\n> else xs\n\n\n> iloopdneg x | idneg x == idneg (idneg x) = idneg x\n> | otherwise = iloopdneg (idneg x)\n\n> iallnon xs = iloopdneg $ inegall $ iloopdneg $ inegsome $ iloopdneg $ inegcon $ iloopdneg $ inegdisj $ iloopdneg $ iconj $ iloopdneg xs\n\n> iloopnon x | iallnon x == iallnon (iallnon x) = iallnon x\n> | otherwise = iloopnon (iallnon x)\n\n> ialls' xs = iallnon $ iexi $ iallnon $ inegbicon $ iallnon $ ibicon $ iallnon $ icon $ iallnon $ inegconj $ iallnon $ idisj $ iallnon xs\n\n> iidd x = icfc (iapplyid x)\n\n> iapplyid :: [(Tree,String)] -> [(Tree,String)]\n> iapplyid xs = if (fst (last xs)) \/= applyid (fst (last xs)) \n> then xs ++ [(applyid (fst (last xs)),\"applying the substitution of identicals rule...\")]\n> else xs\n\n> iloopid x | iidd x == iidd (iidd x) = iidd x\n> | otherwise = iloopid (iidd x)\n\n> iallrules xs = iuni $ ialls' xs\n\nMAKETREE\n\n> imaketree' :: [(Tree,String)] -> [(Tree,String)]\n> imaketree' x | iallrules x == iallrules ( iallrules (iallrules x)) = iallrules x\n> | otherwise = imaketree' (iallrules x)\n\n> imaketree :: String -> [(Tree,String)]\n> imaketree x = iloopid $ imaketree' (igentree (lines x))\n\nEND OF INCREMENTAL MAKETREE I\n\nINCREMENTAL MAKETREE II\n\nThe idea for this second incremental maketree is similar, but this time each rule is an IO action on a single tree: if the rule applies, it prints a message saying which rule was applied together with the new tree, pauses briefly so the step can be seen, and returns the new tree; if it doesn't apply, it simply returns the tree unchanged. 
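The append-only-if-changed pattern shared by all of the i-rules above can be sketched generically (the names istep and demo here are illustrative, not part of this file):

```haskell
-- Generic sketch of the incremental-rule pattern: apply a rule to the most
-- recent value in the trace; record a labelled step only if something changed.
istep :: Eq a => (a -> a) -> String -> [(a, String)] -> [(a, String)]
istep rule label xs
  | new /= old = xs ++ [(new, label)]
  | otherwise  = xs
  where
    old = fst (last xs)
    new = rule old

-- A tiny trace over Ints instead of Trees: doubling fires, the no-op doesn't.
demo :: [(Int, String)]
demo = istep (+ 0) "no-op..." (istep (* 2) "doubling..." [(10, "start")])
```

Each i-rule above has exactly this shape, with the rule being one of the applyX functions on Tree and the label a message describing it.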
\n\n\n> iicfc :: Tree -> IO (Tree)\n> iicfc xs = do\n> clearScreen\n> setCursorPosition 0 0\n> if xs \/= oldcfc xs \n> then do \n> putStrLn \"checked for closure...\\n\"\n> putStrLn (printtree $ oldcfc xs)\n> threadDelay mydelay\n> return (oldcfc xs)\n> else do\n> return (xs)\n\n> smarteriicfc :: (Tree -> IO (Tree)) -> Tree -> IO (Tree)\n> smarteriicfc rule tree = do\n> newtree <- rule tree\n> if tree == newtree\n> then return (newtree)\n> else do\n> clearScreen\n> setCursorPosition 0 0\n> putStrLn \"checked for closure...\\n\"\n> putStrLn (printtree (smartcfc newtree))\n> threadDelay mydelay\n> return (smartcfc newtree) \n\n\n\n\n> mydelay = 1000000\n\n> iigentree :: [String] -> IO (Tree)\n> iigentree xs = do\n> clearScreen\n> setCursorPosition 0 0\n> putStrLn \"set up the tree...\\n\"\n> putStrLn (printtree $ gentree xs)\n> threadDelay mydelay\n> return (gentree xs)\n\n\n\nBranching\n\n> iidisj xs = smarteriicfc iiapplydisj xs\n> iinegconj xs = smarteriicfc iiapplynegconj xs\n> iicon xs = smarteriicfc iiapplycon xs\n> iibicon xs = smarteriicfc iiapplybicon xs\n> iinegbicon xs = smarteriicfc iiapplynegbicon xs\n\n\n> iiapplydisj :: Tree -> IO (Tree)\n> iiapplydisj xs = do\n> clearScreen\n> setCursorPosition 0 0\n> if xs \/= applydisj xs \n> then do \n> putStrLn \"applied the rule for disjunction...\\n\"\n> putStrLn (printtree $ applydisj xs)\n> threadDelay mydelay\n> return (applydisj xs)\n> else do\n> return (xs)\n\n> iiapplynegconj :: Tree -> IO (Tree)\n> iiapplynegconj xs = do\n> clearScreen\n> setCursorPosition 0 0\n> if xs \/= applynegconj xs \n> then do \n> putStrLn \"applied the rule for negated conjunction...\\n\"\n> putStrLn (printtree $ applynegconj xs)\n> threadDelay mydelay\n> return (applynegconj xs)\n> else do\n> return (xs)\n\n> iiapplycon :: Tree -> IO (Tree)\n> iiapplycon xs = do\n> clearScreen\n> setCursorPosition 0 0\n> if xs \/= applycond xs \n> then do \n> putStrLn \"applied the rule for conditional...\\n\"\n> putStrLn (printtree $ applycond 
xs)\n> threadDelay mydelay\n> return (applycond xs)\n> else do\n> return (xs)\n\n> iiapplybicon :: Tree -> IO (Tree)\n> iiapplybicon xs = do\n> clearScreen\n> setCursorPosition 0 0\n> if xs \/= applybicond xs \n> then do \n> putStrLn \"applied the rule for biconditional...\\n\"\n> putStrLn (printtree $ applybicond xs)\n> threadDelay mydelay\n> return (applybicond xs)\n> else do\n> return (xs)\n\n> iiapplynegbicon :: Tree -> IO (Tree)\n> iiapplynegbicon xs = do\n> clearScreen\n> setCursorPosition 0 0\n> if xs \/= applynegbicond xs \n> then do \n> putStrLn \"applied the rule for negated biconditional...\\n\"\n> putStrLn (printtree $ applynegbicond xs)\n> threadDelay mydelay\n> return (applynegbicond xs)\n> else do\n> return (xs)\n\nNonBranching\n\n> iiconj xs = smarteriicfc iiapplyconj xs\n> iinegdisj xs = smarteriicfc iiapplynegdisj xs\n> iinegcon xs = smarteriicfc iiapplynegcon xs\n> iinegsome xs = smarteriicfc iiapplynegsome xs\n> iinegall xs = smarteriicfc iiapplynegall xs\n\n\n> iiapplyconj :: Tree -> IO (Tree)\n> iiapplyconj xs = do\n> clearScreen\n> setCursorPosition 0 0\n> if xs \/= applyconj xs \n> then do \n> putStrLn \"applied the rule for conjunction...\\n\"\n> putStrLn (printtree $ applyconj xs)\n> threadDelay mydelay\n> return (applyconj xs)\n> else do\n> return (xs)\n\n> iiapplynegdisj :: Tree -> IO (Tree)\n> iiapplynegdisj xs = do\n> clearScreen\n> setCursorPosition 0 0\n> if xs \/= applynegdisj xs \n> then do \n> putStrLn \"applied the rule for negated disjunction...\\n\"\n> putStrLn (printtree $ applynegdisj xs)\n> threadDelay mydelay\n> return (applynegdisj xs)\n> else do\n> return (xs)\n\n> iiapplynegcon :: Tree -> IO (Tree)\n> iiapplynegcon xs = do\n> clearScreen\n> setCursorPosition 0 0\n> if xs \/= applynegcond xs \n> then do \n> putStrLn \"applied the rule for negated conditional...\\n\"\n> putStrLn (printtree $ applynegcond xs)\n> threadDelay mydelay\n> return (applynegcond xs)\n> else do\n> return (xs)\n\n> iiapplynegsome :: Tree -> IO 
(Tree)\n> iiapplynegsome xs = do\n> clearScreen\n> setCursorPosition 0 0\n> if xs \/= applynegexe xs \n> then do \n> putStrLn \"applied the rule for negated existential...\\n\"\n> putStrLn (printtree $ applynegexe xs)\n> threadDelay mydelay\n> return (applynegexe xs)\n> else do\n> return (xs)\n\n> iiapplynegall :: Tree -> IO (Tree)\n> iiapplynegall xs = do\n> clearScreen\n> setCursorPosition 0 0\n> if xs \/= applyneguni xs \n> then do \n> putStrLn \"applied the rule for negated universal...\\n\"\n> putStrLn (printtree $ applyneguni xs)\n> threadDelay mydelay\n> return (applyneguni xs)\n> else do\n> return (xs)\n\nDouble Negation\n\n> iidneg xs = smarteriicfc iiapplydneg xs\n\n> iiapplydneg :: Tree -> IO (Tree)\n> iiapplydneg xs = do\n> clearScreen\n> setCursorPosition 0 0\n> if xs \/= applydneg xs \n> then do \n> putStrLn \"applied the rule for double negation...\\n\"\n> putStrLn (printtree $ applydneg xs)\n> threadDelay mydelay\n> return (applydneg xs)\n> else do\n> return (xs)\n\nQuantifiers\n\n> iiexi xs = smarteriicfc iiapplyexi xs\n> iiuni xs = smarteriicfc iiapplyuni xs\n\n> iiapplyexi :: Tree -> IO (Tree)\n> iiapplyexi xs = do\n> clearScreen\n> setCursorPosition 0 0\n> if xs \/= applyexi xs \n> then do \n> putStrLn \"applied the rule for existential quantifier...\\n\"\n> putStrLn (printtree $ applyexi xs)\n> threadDelay mydelay\n> return (applyexi xs)\n> else do\n> return (xs)\n\n> iiapplyuni :: Tree -> IO (Tree)\n> iiapplyuni xs = do\n> clearScreen\n> setCursorPosition 0 0\n> if xs \/= applyuni xs \n> then do \n> putStrLn \"applied the rule for universal quantifier...\\n\"\n> putStrLn (printtree $ applyuni xs)\n> threadDelay mydelay\n> return (applyuni xs)\n> else do\n> return (xs)\n\n> iiloopdneg :: Tree -> IO (Tree) \n> iiloopdneg x = do\n> old <- iidneg x\n> new <- iidneg old\n> if old == new\n> then return (new)\n> else iiloopdneg new\n\n\n> iiallnon xs = iiloopdneg xs >>= iiconj >>= iiloopdneg >>= iinegdisj >>= iiloopdneg >>= iinegcon >>= iiloopdneg 
>>= iinegsome >>= iiloopdneg >>= iinegall\n\n> iiloopnon :: Tree -> IO (Tree)\n> iiloopnon x = do\n> old <- iiallnon x\n> new <- iiallnon old\n> if old == new\n> then return (new)\n> else iiloopnon new\n\n\n> iialls' xs = iiallnon xs >>= iidisj >>= iiallnon >>= iinegconj >>= iiallnon >>= iicon >>= iiallnon >>= iibicon >>= iiallnon >>= iinegbicon >>= iiallnon >>= iiexi >>= iiallnon\n\n\n> iiidd x = smarteriicfc iiapplyid x\n\n> iiapplyid :: Tree -> IO (Tree)\n> iiapplyid xs = do\n> clearScreen\n> setCursorPosition 0 0\n> if xs \/= applyid xs \n> then do \n> putStrLn \"applied the substitution of identicals rule...\\n\"\n> putStrLn (printtree $ applyid xs)\n> threadDelay mydelay\n> return (applyid xs)\n> else do\n> return (xs)\n\n> iiloopid x = do\n> old <- iiidd x\n> new <- iiidd old\n> if old == new\n> then return (new)\n> else iiloopid new\n\n\n> iiallrules xs = (iialls' xs) >>= iiuni\n\n> makeloop x = do\n> old <- iiallrules x\n> new <- iiallrules old\n> if old == new\n> then return (new)\n> else makeloop new\n\n> iimaketree :: String -> IO (Tree)\n> iimaketree x = (iigentree (lines x)) >>= makeloop >>= iiloopid\n\n\nEND OF MAKETREE II\n\n\n\n\nTREE STATS\n\n> readtree :: Tree -> String\n> readtree t | null (getpaths t) = \"all paths close. not satisfiable.\"\n> | (length (getpaths t) == 1) = \"not all paths close. there is \" ++ (show (length (getpaths t))) ++ \" open path.\" \n> | otherwise = \"not all paths close. there are \" ++ (show (length (getpaths t))) ++ \" open paths.\" \n\n> readtree' :: Tree -> String\n> readtree' t | null (getpaths t) = \"not satisfiable! (by tree method)\"\n> | otherwise = \"satisfiable! 
(by tree method)\"\n\n\n\n\n> treestats :: Tree -> String\n> treestats t = \"tree stats:\\n\\n number of paths (width): \" ++ (show (length (getallpaths t))) ++ \"\\n number of open paths: \" ++ (show (length (getpaths t))) ++ \"\\n number of closed paths: \" ++ show ( ( (length (getallpaths t))) - ( (length (getpaths t))) ) ++ \"\\n longest path (depth): \" ++ (show (maximum (map length (getallpaths t)))) ++ \"\\n no. of propositions on longest path: \" ++ (show (maximum (map length (map concat (getallpaths t)))))\n\n> getallpaths :: Tree -> [[[Elem]]]\n> getallpaths (Branch [] []) = [[]]\n> getallpaths (Branch es []) = [[es]]\n> getallpaths (Branch es ts) = map (es:) (concatMap getallpaths ts)\n\n\n\n\n\n\nREAD MODEL OFF TREE\n\nThe relevant unit here is a Path. We write everything to operate on a path and then we just map over paths.\n\n> getnamesonpath :: Path -> String\n> getnamesonpath p = sort $ nub $ concatMap getnames $ getpropsonpath p \n\n> temprefs :: Path -> [(Char,Int)]\n> temprefs p = zip (getnamesonpath p) [1..]\n\n> getids :: Path -> [(Char,Char)]\n> getids p = concatMap getid (getpropsonpath p)\n> where getid (Atom (Pred2 'I') (x:y:[])) = [(x,y)]\n> getid _ = []\n\n> keyval :: (Eq a) => a -> [(a,b)] -> b\n> keyval x (y:ys) = if (fst y) == x\n> then snd y\n> else keyval x ys \n\n\n\n> changeval :: (Eq a) => a -> b -> [(a,b)] -> [(a,b)]\n> changeval x z (y:ys) = if (fst y) == x\n> then (fst y,z) : changeval x z ys\n> else y : changeval x z ys \n> changeval x z [] = []\n\n> trim :: [(Char,Int)] -> (Char,Char) -> [(Char,Int)]\n> trim r (fst,snd) = changeval snd (keyval fst r) r \n\n> finalrefs :: Path -> [(Char,Int)]\n> finalrefs p = foldl trim (temprefs p) (getids p)\n\n> makedomain :: [(Char,Int)] -> [Int]\n> makedomain xs = sort $ nub $ map dom xs\n> where dom (fst,snd) = snd \n\n> namestorefs :: String -> [(Char,Int)] -> [Int]\n> namestorefs xs r = map (nametoref r) xs\n> where nametoref r x = keyval x r \n\n> makeonepreds :: Path -> 
[(Char,[Int])]\n> makeonepreds p = nub $ concatMap onepred (getpropsonpath p)\n> where onepred (Atom (Pred1 x) y) = [(x,(namestorefs y (finalrefs p)))] \n> onepred _ = []\n\n\n> maketwopreds :: Path -> [(Char,[(Int,Int)])]\n> maketwopreds p = nub $ concatMap twopred (getpropsonpath p)\n> where twopred (Atom (Pred2 'I') y) = []\n> twopred (Neg (Atom (Pred2 'I') y)) = []\n> twopred (Atom (Pred2 x) y) = [(x,[topairs (namestorefs y (finalrefs p))])] \n> twopred _ = []\n> topairs (x:y:[]) = (x,y)\n\n> makethreepreds :: Path -> [(Char,[(Int,Int,Int)])]\n> makethreepreds p = nub $ concatMap threepred (getpropsonpath p)\n> where threepred (Atom (Pred3 x) y) = [(x,[totriples (namestorefs y (finalrefs p))])] \n> threepred _ = []\n> totriples (x:y:z:[]) = (x,y,z)\n\n> keyval' :: (Eq a) => a -> [(a,b)] -> [b]\n> keyval' x (y:ys) = if (fst y) == x\n> then [snd y] ++ keyval' x ys\n> else keyval' x ys \n> keyval' x [] = []\n\n> getall :: [(Char,[a])] -> Char -> (Char,[a])\n> getall xs x = (x,(concat (keyval' x xs)))\n\n> getall' :: [Char] -> [(Char,[a])] -> [(Char,[a])]\n> getall' ys xs = map (getall xs) ys\n\n> getpredlet1 :: Path -> [Char]\n> getpredlet1 p = nub $ concatMap getpredicates1 (getpropsonpath p)\n\n> getpredlet2 :: Path -> [Char]\n> getpredlet2 p = nub $ concatMap getpredicates2' (getpropsonpath p)\n\n> getpredlet3 :: Path -> [Char]\n> getpredlet3 p = nub $ concatMap getpredicates3 (getpropsonpath p)\n\n> onepreds :: Path -> [(Char,[Int])] \n> onepreds p = nub $ getall' (getpredlet1 p) (makeonepreds p) \n\n> twopreds :: Path -> [(Char,[(Int,Int)])] \n> twopreds p = nub $ getall' (getpredlet2 p) (maketwopreds p) \n\n> threepreds :: Path -> [(Char,[(Int,Int,Int)])] \n> threepreds p = nub $ getall' (getpredlet3 p) (makethreepreds p) \n\n> readmodel :: Path -> Model\n> readmodel p = Model (makedomain (finalrefs p)) (finalrefs p) (onepreds p) (twopreds p) (threepreds p) \n\n> readmodels :: Tree -> [Model]\n> readmodels t = map readmodel (getpaths t) \n \nPREPARE 
INPUT\n\n> gentree :: [String] -> Tree\n> gentree xs = Branch (gentest xs) []\n\n> gentest :: [String] -> [Elem]\n> gentest xs = map f xs\n> where f x = proptoelem ((parser x)!!0) \n\n> proptoelem :: Prop -> Elem\n> proptoelem p = Elem p [] False \n","avg_line_length":38.7075098814,"max_line_length":474,"alphanum_fraction":0.584008986} {"size":16248,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"University of Zagreb\nFaculty of Electrical Engineering and Computing\n\nPROGRAMMING IN HASKELL\n\nAcademic Year 2014\/2015\n\nLECTURE 4: Syntax of functions\n\nv1.0\n\n(c) 2014 Jan \u0160najder\n\n==============================================================================\n\n> import Data.Char\n> import Data.List\n\n=== PATTERN MATCHING =========================================================\n\nRemember guards?\n\n> magicNumber :: Int -> String\n> magicNumber x | x == 42 = \"Yeah!\"\n> | otherwise = \"Nope, try again.\"\n\nInstead of using guards, we can write out two CLAUSES with PATTERN MATCHING:\n\n> magicNumber2 :: Int -> String\n> magicNumber2 42 = \"Yeah!\"\n> magicNumber2 x = \"Nope, try again.\"\n\nIf a variable is not used in the definition, we can anonymize it:\n\n> magicNumber3 :: Int -> String\n> magicNumber3 42 = \"Yeah!\"\n> magicNumber3 _ = \"Nope, try again.\"\n\nWe often do pattern matching on tuples:\n\n> fst' :: (a,b) -> a\n> fst' (x,_) = x\n\n> snd' :: (a,b) -> b\n> snd' (_,y) = y\n\n> addVectors :: (Double, Double) -> (Double, Double) -> (Double, Double)\n> addVectors (x1,y1) (x2,y2) = (x1 + x2, y1 + y2)\n\n> swap :: (a,b) -> (b,a)\n> swap (x,y) = (y,x)\n\n> mallcolmInTheMiddle :: (a,b,c) -> b\n> mallcolmInTheMiddle (x,y,z) = y\n\n> leaves :: ((a, a), (a, a)) -> [a]\n> leaves ((x, y), (z, w)) = [x, y, z, w]\n\nNote that the above pattern will always match. We call such patterns\nIRREFUTABLE. Some patterns, however, are REFUTABLE, i.e., they can fail to\nmatch. 
For example:\n\n> goo (x,1) = x + 1\n\nWe can also pattern match on lists. For example, we can split a list right away\ninto its head and tail:\n\n> head' :: [a] -> a\n> head' [] = error \"No head to behead\"\n> head' (x:_) = x\n\n> tail' :: [a] -> [a]\n> tail' [] = []\n> tail' (_:xs) = xs\n\nCan we use pattern matching to accomplish the opposite: split up the initial\npart of a list from its last element?\n\nNo, we cannot, because of the way a list is defined (as a pair of a head and a\ntail).\n\nYou can have many patterns:\n\n> partnerSwap :: (a,b) -> (c,d) -> ((a,c),(b,d))\n> partnerSwap (x,y) (z,w) = ((x,z),(y,w))\n\n> headSwap :: [a] -> [a] -> ([a],[a])\n> headSwap (x:xs) (y:ys) = (y:xs, x:ys)\n\nAlso, we can have many different patterns:\n\n> foo :: [a] -> [a] -> [a]\n> foo (x:xs) (_:y:_) = x:y:xs\n> foo (x:_) [y] = [x,y]\n> foo xs [] = xs\n\nFurthermore, patterns can be nested:\n\n> headOfHead :: [[a]] -> a\n> headOfHead ((x:_):_) = x\n> headOfHead _ = error \"No head of head\"\n\nYou must be careful how you order the definitions. The most general case should\ncome at the end:\n\n> rhymeMe :: String -> String\n> rhymeMe \"letters\" = \"matters\"\n> rhymeMe \"love\" = \"glove\"\n> rhymeMe \"pain\" = \"rain\"\n> rhymeMe (x:xs) = succ x : xs\n> rhymeMe _ = \"huh?\"\n\nUnless all your patterns are irrefutable, you must take care that all cases are\ncovered, i.e., that the patterns are exhaustive. This is no good:\n\n> fullName :: Char -> String\n> fullName 'a' = \"Ann\"\n> fullName 'b' = \"Barney\"\n> fullName 'c' = \"Clark\"\n\nKeep in mind that the compiler does not check by default whether the patterns are\nexhaustive. As a matter of fact, \"Non-exhaustive patterns\" is one of the most\ncommon runtime errors in Haskell. 
Thus, be watchful for non-exhaustive\npatterns, especially when pattern matching on recursive data types (e.g.,\nlists).\n\nIn particular, you should distinguish '[x,y]' from '(x:y:_)':\n\n> describeList :: Show a => [a] -> String\n> describeList [] = \"This thing is empty\"\n> describeList [x] = \"This list has only one element: \" ++ show x\n> describeList [x,y] = \"This list has elements \" ++ show x ++ \" and \" ++ show y\n> describeList (x:y:_) = \n> \"This list has many elements, of which the first two are \" ++\n> show x ++ \" and \" ++ show y\n\nNow, let's say we'd also like to print out the length of the list in the above\nfunction. For this we need the complete list. So, we need both the complete\nlist and its first two elements. We can do that with an AS-PATTERN:\n\n> describeList' xs@(x:y:_) = \n> \"This list has \" ++ show (length xs) ++ \" elements, \" ++\n> \"of which the first two are \" ++ show x ++ \" and \" ++ show y\n> describeList' _ = \"This list has fewer than two elements\"\n\nAnother example of an AS-PATTERN in action:\n\n> spellFirst :: String -> String\n> spellFirst w@(c:_) = toUpper c : \" as in \" ++ w\n\nLet's write a function that tests whether a matrix (here represented as a list\nof lists) contains a row with identical elements. First shot:\n\n> rowWithEqualElems :: Eq a => [[a]] -> Bool\n> rowWithEqualElems m = \n> or [ and [ e==head row | e <- row] | row <- m ]\n\nPattern matching makes this a bit prettier:\n\n> rowWithEqualElems' :: Eq a => [[a]] -> Bool\n> rowWithEqualElems' m = \n> or [ and [ e==h | e <- row] | row@(h:_) <- m]\n\nImportant: All patterns within a list comprehension are irrefutable in the\nsense that they cannot cause an error. If an element doesn't match a pattern,\nit is simply skipped.\n\n> singletonElems :: [[a]] -> [a]\n> singletonElems xs = [x | [x] <- xs]\n\nAnother thing to remember is that you cannot test for equality by using a\nvariable multiple times. 
This will not work:\n\nsameElems :: (a,a) -> Bool\nsameElems (x,x) = True\nsameElems _ = False\n\nNor will this:\n\nheadsEqual :: [a] -> [a] -> Bool\nheadsEqual (x:_) (x:_) = True\nheadsEqual _ _ = False\n\n(The reason why this doesn't work is because Haskell is doing PATTERN MATCHING,\nand not variable UNIFICATION. Prolog does unification, so there it would work.\nBut luckily we don't program in Prolog here.)\n\nThe correct way of doing this is to use different variables and then explicitly\ncheck for equality:\n\n> sameElems :: Eq a => (a,a) -> Bool\n> sameElems (x,y) = x == y\n\n> headsEqual :: Eq a => [a] -> [a] -> Bool\n> headsEqual (x:_) (y:_) = x == y\n> headsEqual _ _ = False\n\n=== EXERCISE 1 ===============================================================\n\nDefine the following functions using pattern matching.\n\n1.1.\n- Define 'headHunter xss' that takes the head of the first list element. If \n the first element has no head, it takes the head of the second element.\n If the second element has no head, it takes the head of the third element.\n If none of this works, the function returns an error.\n\n1.2.\n- Define 'firstColumn m' that returns the first column of a matrix.\n firstColumn [[1,2],[3,4]] => [1,3]\n- Check what happens if the input is not a valid matrix.\n\n1.3.\n- Define 'shoutOutLoud' that repeats three times the initial letter of each\n word in a string.\n shoutOutLoud :: String -> String\n shoutOutLoud \"Is anybody here?\" => \"IIIs aaanybody hhhere?\"\n\n=== LOCAL DEFINITIONS (WHERE) ================================================\n\nWe should avoid situations in which the same value is computed many times over.\nIn Haskell, we use local definitions for that.\n\n> triangleArea :: Double -> Double -> Double -> Double\n> triangleArea a b c = sqrt $ s * (s-a) * (s-b) * (s-c)\n> where s = (a + b + c) \/ 2\n\n> pairsSumTo100 = [(x,y) | x <- xs, y <- xs, x + y == 100 ]\n> where xs = [1..100]\n\nThe scope of 'where' declarations extends over 
guards:\n\n> tripCost :: Int -> Double -> Double -> String\n> tripCost days hotel travel\n> | cost <= 100 = \"Extremely cheap vacation\"\n> | cost <= 500 = \"Cheap vacation\"\n> | otherwise = \"Expensive vacation\"\n> where cost = hotel * realToFrac days + travel\n\nWe can have many 'where' declarations:\n\n> tripCost' :: Int -> Double -> Double -> String\n> tripCost' days hotel travel\n> | cost <= 100 = \"Extremely cheap vacation\"\n> | cost <= 500 = \"Cheap vacation\"\n> | otherwise = \"Expensive vacation\"\n> where hotelCost = hotel * realToFrac days\n> cost = hotelCost + travel\n\nA median of a list of numbers:\n\n> median :: (Integral a, Fractional b) => [a] -> b\n> median [] = error \"median: Empty list\"\n> median xs \n> | odd l = realToFrac $ ys !! h\n> | otherwise = realToFrac (ys !! h + ys !! (h-1)) \/ 2\n> where l = length xs\n> h = l `div` 2\n> ys = sort xs\n\nThe scope of a 'where' definition is the clause in which it is defined.\nThis is no good:\n\nwhatNumber 5 = good ++ \"5\"\nwhatNumber x = bad ++ show x\n where good = \"I like the number \" \n bad = \"I don't like the number \"\n\nIf a definition needs to be shared among several clauses, you must define it at\nthe top level so that it becomes visible to all clauses (and other functions as\nwell).\n\nWe can of course also pattern match in 'where' blocks:\n\n> tripCost2 :: Int -> Double -> Double -> String\n> tripCost2 days hotel travel\n> | cost <= i = \"Extremely cheap vacation\"\n> | cost <= j = \"Cheap vacation\"\n> | otherwise = \"Expensive vacation\"\n> where cost = hotel * realToFrac days + travel\n> (i,j) = (100,500)\n\nBut this should not be taken too far. 
This is perhaps too much:\n\n> tripCost3 :: Int -> Double -> Double -> String\n> tripCost3 days hotel travel\n> | exCheap = \"Extremely cheap vacation\"\n> | cheap = \"Cheap vacation\"\n> | otherwise = \"Expensive vacation\"\n> where cost = hotel * realToFrac days + travel\n> (i,j) = (100,500)\n> exCheap = cost <= i\n> cheap = cost <= j\n\nAnother example of bad style:\n\n> initials :: String -> String -> String\n> initials first last = f : \". \" ++ l : \".\"\n> where (f:_) = first\n> (l:_) = last\n\nDo this instead:\n\n> initials' (f:_) (l:_) = f : \". \" ++ l : \".\"\n\nHow can the above definition be extended to cover the cases when the first\/last\nname is an empty string?\n\nWithin a 'where' block we can also define functions:\n\n> middleOne :: (a,b,c) -> (d,e,f) -> (b,e)\n> middleOne x y = (middle x,middle y)\n> where middle (_,w,_) = w\n\nAnd, as you would expect, these locally-defined functions can also use\npattern matching:\n\n> middleOne' x y = (middle x,middle y)\n> where middle (0,0,0) = 1\n> middle (_,w,_) = w\n\nThe question that's probably tormenting you now is: when should I use 'where',\nand when should I use global definitions instead?\n\nGood reasons FOR using 'where':\n* Use 'where' if an expression is being repeated locally.\n* Use 'where' if it makes your code more comprehensible.\n* Use 'where' as a sort of documentation (assuming that the variables\/functions\n have meaningful names).\n\nReasons for NOT using 'where':\n* If a value\/function is also used in other functions.\n* If 'where' blocks nest more than two levels deep.\n\n=== EXERCISE 2 ===============================================================\n\nSolve the following exercises using pattern matching and local definitions,\nwherever appropriate.\n\n2.1.\n- Define 'pad' that pads the shorter of the two strings with trailing spaces \n and returns both strings capitalized.\n pad :: String -> String -> (String,String)\n pad \"elephant\" \"cat\" => (\"Elephant\",\"Cat 
\")\n\n2.2.\n- Define 'quartiles xs' that returns the quartiles (q1,q2,q3) of a given list.\n The quartiles are elements at the first, second, and third quarter of a list\n sorted in ascending order. (You can use the built-int 'splitAt' function and\n the previously defined 'median' function.)\n quartiles :: [Int] -> (Double,Double,Double)\n quartiles [3,1,2,4,5,6,8,0,7] => (1.5, 4.0, 6.5)\n\n=== LET ======================================================================\n\nA 'let-in' statement is similar to a 'where' block. The differences are:\n\n* 'where' is declared at the end of a function and is visible only to the last\n clause, including the guards\n* 'let' can be declared everywhere and is by itself an expression\n\n> triangleArea' :: Double -> Double -> Double -> Double\n> triangleArea' a b c = \n> let s = (a + b + c) \/ 2 in sqrt $ s * (s-a) * (s-b) * (s-c)\n\n> foo2 x =\n> let a = x\n> b = x*2\n> c = b+1 \n> in a + b + c\n\nOr shorter:\n\n> foo3 x = let (a,b,c) = (x, x*2, x*2+1) in a + b + c\n\nUse 'let' when you need a \"very local\" definition that is not used at several\nplaces within a function.\n\nNote that 'let' has actually nothing to do with 'let' in a 'do' block, although\nin both cases they serve the same purpose: assigning values to a variable.\n\n'let' (without the 'in' part) can also be used within a list comprehension:\n\n> sumLists :: [Int] -> [Int] -> [Int]\n> sumLists xs ys = [s | (x,y) <- zip xs ys, let s = x+y, s >= 0]\n\nBtw., note that 'where' declarations are not visible within a list comprehension:\n\nsumLists' :: [Int] -> [Int] -> [Int]\nsumLists' xs ys = [s | (x,y) <- zip xs ys, s >= 0]\n where s = x+y\n\nInstead of using 'let', you can use the following trick:\n\n> sumLists' :: [Int] -> [Int] -> [Int]\n> sumLists' xs ys = [s | (x,y) <- zip xs ys, s <- [x+y], s >= 0]\n\nOne more example:\n\n> bigTriangles :: [((Double, Double, Double), Double)]\n> bigTriangles = [ ((x,y,z), a) | \n> x <- ns, y <- ns, z <- ns, \n> let a = 
triangleArea x y z, x + y >= z, x < y, a >= 100]\n> where ns = [1..100]\n\n=== EXERCISE 3 ===============================================================\n\nRedo Exercise 2 using 'let' instead of 'where'.\n\n=== CASE =====================================================================\n\nA 'case' statement is quite powerful as it enables pattern matching anywhere in\nthe code. A 'case' statement is an expression itself, much like 'if-then-else'\nand 'let-in' expressions.\n\n> magicNumber4 :: Int -> String\n> magicNumber4 x = case x of\n> 42 -> \"Yeah!\"\n> _ -> \"Nope, try again.\"\n\nAs a matter of fact, pattern matching used in function definitions is just\nsyntactic sugar for a 'case' statement. In other words, every definition of the\nform:\n\n foo pattern1 = def1\n foo pattern2 = def2\n ...\n\nis equivalent to:\n\n foo x = case x of\n pattern1 -> def1\n pattern2 -> def2\n ...\n\nThe difference is that 'case' can be used anywhere (inside another\nexpression):\n\n> describeList2 xs =\n> \"This is a list that \" ++ case xs of\n> [] -> \"is empty\"\n> [x] -> \"has one element\"\n> xs -> \"has \" ++ show (length xs) ++ \" elements\"\n\n=== EXERCISE 4 ===============================================================\n\n4.1.\n- Write a function that takes in a pair (a,b) and a list [c] and returns the\n following string:\n \"The pair [contains two ones|contains one one|does not contain a single one]\n and the second element of the list is \"\n\n== SUMMARY ===================================================================\n\nLet's wrap it up:\n\n* guards are similar to 'if-then-else'\n* pattern matching in function definitions is similar to 'case'\n* 'where' is similar to 'let'\n\n'if-then-else', 'case', and 'let' are expressions by themselves so you can use\nthem everywhere.\n\nYou can combine anything in any way you wish (the sky is the limit). 
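To make the "combine anything" point concrete, here is a small function of my own (an illustration, not part of the original lecture) that nests a 'case' expression and a 'let' inside guards:

```haskell
-- Illustrative example: since 'case' and 'let' are expressions,
-- they can appear inside guards and inside each other.
describeNumber :: Int -> String
describeNumber n
  | n < 0     = "negative"
  | otherwise =
      let parity = case n `mod` 2 of
            0 -> "even"
            _ -> "odd"
      in parity ++ " and non-negative"
```

Here the guard selects a clause, the 'let' names an intermediate value, and the 'case' pattern matches on the remainder; all three nest freely because each is an expression.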
Apart from\nefficiency, code readability should be your primary objective.\n\nNote: Some tests can be done with pattern matching (e.g., is the list empty?),\nbut some require Boolean expressions (e.g., is the number less than 1?).\n\nRule of thumb: use pattern matching and local definitions (more 'where' than\n'let') as much as you can.\n\n== PRACTICAL HASKELL: SETTING UP A HASKELL PROJECT ============================\n\n* Hackage: Haskell Package Database\n https:\/\/hackage.haskell.org\/\n\n* Cabal: Common Architecture for Building Applications and Libraries\n http:\/\/www.haskell.org\/cabal\/\n\nA package is a bundle of modules and\/or executables. One package can have (and\nusually has) many (hierarchically organized) modules. Once you install a\npackage, you simply import its modules (and you don't really care about\npackages anymore).\n\n* Installing packages with Cabal\n http:\/\/www.haskell.org\/haskellwiki\/Cabal-Install\n\n* \"Cabal hell\" and Cabal sandboxes\n http:\/\/coldwa.st\/e\/blog\/2013-08-20-Cabal-sandbox.html\n\nNow, let's see how we can define our own package with cabal. Let's go back to\nlast week's example. Recall that we made a program that behaves pretty much\nlike:\n\n cat file.txt | tr ' ' '\\n' | sort | uniq\n\nLet's look at:\n\n* Modules and exports\n\n* Defining a cabal project\n\n* Semantic versioning\n http:\/\/semver.org\/\n But in Haskell: \n http:\/\/www.haskell.org\/haskellwiki\/Package_versioning_policy\n\nMore on setting up a Haskell environment:\nhttp:\/\/www.haskell.org\/haskellwiki\/How_to_write_a_Haskell_program\n\n=== NEXT =====================================================================\n\nMany functions cannot be defined without RECURSION, and it is only with\nrecursion that Haskell is a Turing-complete language. 
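As a tiny preview (my own warm-up example, not from the course material), a recursive definition pattern matches on a base case and a recursive case:

```haskell
-- Hypothetical warm-up example of recursion: summing a list by
-- pattern matching on the empty and non-empty cases.
sumList :: [Int] -> Int
sumList []     = 0
sumList (x:xs) = x + sumList xs
```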
So, next week we'll look\ninto recursive functions.\n\n","avg_line_length":31.3667953668,"max_line_length":81,"alphanum_fraction":0.6279542097} {"size":34396,"ext":"lhs","lang":"Literate Haskell","max_stars_count":1.0,"content":"\\documentclass[thesis.tex]{subfiles}\n\n\\begin{document}\n\n\n\\section{Operators}\n\nIn this section we discuss the implementation details of the\nSubjective Logic operators that are provided by SLHS. The following\nnotation is used for the operators:\n\n\\begin{itemize}\n \\item We denote binary operators with a trailing exclamation mark\n $!$ in order to avoid conflicting with Haskell's mathematical\n operators. For example, binomial addition is denoted as $+!$.\n \\item We use tildes as a prefix to denote $co-$ operations. For\n example, the binomial co-multiplication operator is denoted as\n $\\sim *!$.\n \\item All n-ary operators, where $n > 2$ are denoted as simple\n functions, instead of symbolic operators.\n\\end{itemize}\n\nEvery operator is presented in its most general\nform. For example, instead of presenting two operators for\n\\emph{averaging fusion} (one for multinomial opinions, and another for\nhyper opinions) we implement only the version for hyper opinions. In\norder to achieve this level of code reuse, each operator accepts\nas parameters any object that can be converted into the correct\nopinion type by virtue of the \\emph{ToBinomial}, \\emph{ToMultinomial},\nand \\emph{ToHyper} type classes.\n\n\n\\ignore{\n\\begin{code}\n{-# LANGUAGE ScopedTypeVariables #-}\n\nmodule Math.SLHS.Operators where\n\nimport Data.Ratio ((%))\nimport Control.Monad\nimport Control.Applicative\nimport Data.List\n\nimport Math.SLHS.Types\nimport Math.SLHS.Opinions\n\nimport qualified Math.SLHS.Frame as F\nimport qualified Math.SLHS.Vector as V\n\\end{code}\n}\n\n\n\\subsection{Binomial Operators}\n\nWe begin our treatment of the Subjective Logic operators by looking at\nthose operators designed to work with binomial opinions. 
We split this\nsection into two parts: \\emph{logical and set-theoretical} operators, and\n\\emph{trust transitivity} operators. The former contains the operators\nthat are generalizations of those found in logic and set theory, such\nas conjunction, and set union. The latter operators are for modeling\ntrust networks, where agents can formulate opinions based on reputation\nand trust.\n\n\n\\subsubsection{Logical and Set-Theoretical Operators}\n\nThe logical and set-theoretical binomial operators are those that have\nequivalent operators in logic and set theory. We will start with binomial addition.\nAddition of binomial opinions, denoted as $\\omega_{x \\cup y} = \\omega_x + \\omega_y$,\nis defined when $x$ and $y$ are disjoint subsets of the same frame of discernment\n\\cite{mcanally2004addition}. Binomial addition is implemented as follows:\n\n\\begin{code}\n(+!) :: (ToBinomial op1, ToBinomial op2, Eq h, Eq b, Ord b)\n => SLExpr h a (op1 h (F.Frame b))\n -> SLExpr h a (op2 h (F.Frame b))\n -> SLExpr h a (Binomial h (F.Frame b))\nopx +! opy = do\n opx' <- liftM toBinomial opx\n opy' <- liftM toBinomial opy\n require (bHolder opx' == bHolder opy') \"opinions must have same holder\"\n require (getFrame opx' == getFrame opy') \"opinions must have the same frame\"\n return $ add' opx' opy'\n\\end{code}\n\n\\begin{code}\nadd' :: Ord a\n => Binomial h (F.Frame a) -> Binomial h (F.Frame a) -> Binomial h (F.Frame a)\nadd' opx@(Binomial bx dx ux ax hx xt xf) (Binomial by dy uy ay _ yt yf) =\n Binomial b' d' u' a' hx (xt `F.union` yt) (xf `F.union` yf)\n where\n b' = bx + by\n d' = (ax * (dx - by) + ay * (dy - bx)) \/ (ax + ay)\n u' = (ax * ux + ay * uy) \/ (ax + ay)\n a' = ax + ay\n\\end{code}\n\nHere we see a pattern that we will re-use for all operator implementations. We start\nwith a function whose inputs are of type \\emph{SLExpr h a t}, where $t$ is some\ntype. 
Within that function, we unwrap the values from the \\emph{SLExpr} monad, verify\nthat some requirements are met, and then send those values to a worker function that\ndoes the actual computation. We then wrap the result back into the \\emph{SLExpr}\nmonad via the \\emph{return} function.\n\nBinomial subtraction is the inverse operation of addition. In set theory it is\nequivalent to the set difference operator \\cite{mcanally2004addition}. Given\ntwo opinions $\\omega_x$ and $\\omega_y$ where $x \\cap y = y$, the difference,\n$\\omega_{x \\setminus y}$ is calculated as follows:\n\n\\begin{code}\n(-!) :: (ToBinomial op1, ToBinomial op2, Eq h, Eq b, Ord b)\n => SLExpr h a (op1 h (F.Frame b))\n -> SLExpr h a (op2 h (F.Frame b))\n -> SLExpr h a (Binomial h (F.Frame b))\nopx -! opy = do\n opx' <- liftM toBinomial opx\n opy' <- liftM toBinomial opy\n require (bHolder opx' == bHolder opy') \"opinions must have same holder\"\n require (getFrame opx' == getFrame opy') \"opinions must have the same frame\"\n return $ subtract' opx' opy'\n\\end{code}\n\n\\begin{code}\nsubtract' :: Ord a\n => Binomial h (F.Frame a) -> Binomial h (F.Frame a) -> Binomial h (F.Frame a)\nsubtract' (Binomial bx dx ux ax hx xt xf) (Binomial by dy uy ay _ yt yf) =\n Binomial b' d' u' a' hx ft ff\n where\n b' = bx - by\n d' = (ax * (dx + by) - ay * (1 + by - bx - uy)) \/ (ax - ay)\n u' = (ax * ux - ay * uy) \/ (ax - ay)\n a' = ax - ay\n ft = xt `F.difference` yt\n ff = xt `F.union` xf `F.difference` ft\n\\end{code}\n\nNegation is a unary operator that switches the belief and disbelief and\ninverts the atomicity of a binomial opinion \\cite{josang2001logic}.\nGiven a binomial opinion $\\omega_x$ over\na frame $X = \\lbrace x, \\lnot x \\rbrace$, the negated opinion\n$\\omega_{\\overline{x}} = \\omega_{\\lnot x}$.\n\n\\begin{code}\nnegate :: ToBinomial op => SLExpr h a (op h b) -> SLExpr h a (Binomial h b)\nnegate op = do\n op' <- liftM toBinomial op\n return $ negate' 
op'\n\\end{code}\n\n\\begin{code}\nnegate' :: Binomial h a -> Binomial h a\nnegate' (Binomial b d u a h x y) = Binomial d b u (1 - a) h y x\n\\end{code}\n\nMultiplication of two binomial opinions is equivalent to the logical\n\\emph{and} operator \\cite{josang2005multiplication}. Given two opinions\n$\\omega_x$ and $\\omega_y$ over distinct binary frames $x$ and $y$, the\nproduct of the opinions, $\\omega_{x \\land y}$, represents the conjunction\nof the two opinions.\n\n\\begin{code}\n(*!) :: (ToBinomial op1, ToBinomial op2, Eq h, Ord b, Ord c)\n => SLExpr h a (op1 h b)\n -> SLExpr h a (op2 h c)\n -> SLExpr h a (Binomial h (F.Frame (b, c)))\nopx *! opy = do\n opx' <- liftM toBinomial opx\n opy' <- liftM toBinomial opy\n require (bHolder opx' == bHolder opy') \"opinions must have same holder\"\n return $ b_times' opx' opy'\n\nb_times' (Binomial bx dx ux ax hx xt xf) (Binomial by dy uy ay _ yt yf) =\n Binomial b' d' u' a' hx t f\n where\n b' = bx * by + ((1 - ax) * bx * uy + (1 - ay) * ux * by)\n \/ (1 - ax * ay)\n d' = dx + dy - dx * dy\n u' = ux * uy + ((1 - ay) * bx * uy + (1 - ax) * ux * by)\n \/ (1 - ax * ay)\n a' = ax * ay\n\n t = F.singleton (xt, yt)\n f = F.fromList [(xt, yf), (xf, yt), (xf, yf)]\n\\end{code}\n\nThe resulting frame of discernment is a coarsened frame from the\ncartesian product of $\\lbrace x, \\lnot x\\rbrace$ and\n$\\lbrace y, \\lnot y\\rbrace$, where the element whose belief mass is\ndesignated the role of \"belief\" for binomial opinions is\n$\\lbrace (x, y)\\rbrace$, and the element whose belief mass is given\nthe role of \"disbelief\" is $\\lbrace (x, \\lnot y), (\\lnot x, y), (\\lnot x, \\lnot y)\\rbrace$.\n\nBinomial co-multiplication is equivalent to the logical \\emph{or} operator\n\\cite{josang2005multiplication}. 
Given two opinions, again on distinct binary\nframes, $\\omega_x$ and $\\omega_y$, the disjunctive binomial opinion\n$\\omega_{x \\lor y} = \\omega_x \\sqcup \\omega_y$ is computed by the following\nfunction:\n\n\\begin{code}\n(~*!) :: (ToBinomial op1, ToBinomial op2, Eq h, Ord b, Ord c)\n => SLExpr h a (op1 h b)\n -> SLExpr h a (op2 h c)\n -> SLExpr h a (Binomial h (F.Frame (b, c)))\nopx ~*! opy = do\n opx' <- liftM toBinomial opx\n opy' <- liftM toBinomial opy\n require (bHolder opx' == bHolder opy') \"opinions must have same holder\"\n return $ cotimes' opx' opy'\n\ncotimes' (Binomial bx dx ux ax hx xt xf) (Binomial by dy uy ay _ yt yf) =\n Binomial b' d' u' a' hx t f\n where\n b' = bx + by - bx * by\n d' = dx * dy + (ax * (1 - ay) * dx * uy + (1 - ax) * ay * ux * dy)\n \/ (ax + ay - ax * ay)\n u' = ux * uy + (ay * dx * uy + ax * ux * dy)\n \/ (ax + ay - ax * ay)\n a' = ax + ay - ax * ay\n\n t = F.fromList [(xt, yt), (xf, yt), (xt, yf)]\n f = F.singleton (xf, yf)\n\\end{code}\n\nBinomial multiplication and co-multiplication are duals to one another\nand satisfy De-Morgan's law:\n$\\omega_{x \\land y} = \\omega_{\\overline{\\overline{x} \\lor \\overline{y}}}$\nand $\\omega_{x \\lor y} = \\omega_{\\overline{\\overline{x} \\land \\overline{y}}}$,\nbut they do not distribute over one another \\cite{josang2005multiplication}.\nJosang and McAnally claim that binomial multiplication and\nco-multiplication produce good approximations of the analytically correct\nproducts and co-products of Beta probability density functions\n\\cite{josang2005multiplication}. 
Therefore, if one were to construct a\n\\emph{Beta} data type in Haskell representing a beta PDF and create an instance\nof the \\emph{ToBinomial} type class for it, one could use the above operators\nto generate good approximations to the products and co-products of beta PDFs\nwith minimal effort.\n\nWe next discuss binomial division and co-division, which are the inverses\nof binomial multiplication and co-multiplication. The binomial division\nof an opinion $\\omega_x$ by another opinion $\\omega_y$ is denoted as\n$\\omega_{x \\overline{\\land} y} = \\omega_x \/ \\omega_y$\n\\cite{josang2005multiplication}, and is computed as follows:\n\n\\begin{code}\n(\/!) :: (ToBinomial op1, ToBinomial op2, Eq c)\n => SLExpr h a (op1 h (F.Frame (b, c)))\n -> SLExpr h a (op2 h b)\n -> SLExpr h a (Binomial h c)\nopx \/! opy = do\n opx' <- liftM toBinomial opx\n opy' <- liftM toBinomial opy\n require (lessBaseRate opx' opy') \"ax must be less than ay\"\n require (greaterDisbelief opx' opy') \"dx must be greater than or equal to dy\"\n require (bxConstraint opx' opy') \"Division requirement not satisfied\"\n require (uxConstraint opx' opy') \"Division requirement not satisfied\"\n return $ divide' opx' opy'\n where\n lessBaseRate x y = bAtomicity x < bAtomicity y\n greaterDisbelief x y = bDisbelief x >= bDisbelief y\n\n bxConstraint x y = bx >= (ax * (1 - ay) * (1 - dx) * by) \/ ((1 - ax) * ay * (1 - dy))\n where\n (bx, dx, ux, ax) = (bBelief x, bDisbelief x, bUncertainty x, bAtomicity x)\n (by, dy, uy, ay) = (bBelief y, bDisbelief y, bUncertainty y, bAtomicity y)\n\n uxConstraint x y = ux >= ((1 - ay) * (1 - dx) * uy) \/ ((1 - ax) * (1 - dy))\n where\n (bx, dx, ux, ax) = (bBelief x, bDisbelief x, bUncertainty x, bAtomicity x)\n (by, dy, uy, ay) = (bBelief y, bDisbelief y, bUncertainty y, bAtomicity y)\n\ndivide' (Binomial bx dx ux ax hx xt xf) (Binomial by dy uy ay _ yt yf) =\n Binomial b' d' u' a' hx zt zf\n where\n b' = ay * (bx + ax * ux) \/ ((ay - ax) * (by + ay *uy))\n - ax 
* (1 - dx) \/ ((ay - ax) * (1 - dy))\n d' = (dx - dy) \/ (1 - dy)\n u' = ay * (1 - dx) \/ ((ay - ax) * (1 - dy))\n - ay * (bx + ax * ux) \/ ((ay - ax) * (bx + ay * uy))\n a' = ax \/ ay\n\n [(_, zt)] = F.toList xt\n zf = head . filter (\/= zt) . map snd . F.toList $ xf\n\\end{code}\n\nLastly co-division, the inverse operation of co-multiplication \\cite{josang2005multiplication},\nis denoted as $\\omega_{x \\overline{\\lor} y} = \\omega_x \\overline{\\sqcup} \\omega_y$ and is\ncomputed as follows:\n\n\\begin{code}\n(~\/!) :: (ToBinomial op1, ToBinomial op2, Eq c)\n => SLExpr h a (op1 h (F.Frame (b, c)))\n -> SLExpr h a (op2 h b)\n -> SLExpr h a (Binomial h c)\nopx ~\/! opy = do\n opx' <- liftM toBinomial opx\n opy' <- liftM toBinomial opy\n require (greaterBaseRate opx' opy') \"ax must be greater than ay\"\n require (greaterBelief opx' opy') \"bx must be greater than or equal to by\"\n require (dxConstraint opx' opy') \"Division requirement not satisfied\"\n require (uxConstraint opx' opy') \"Division requirement not satisfied\"\n return $ codivide' opx' opy'\n where\n greaterBaseRate x y = bAtomicity x > bAtomicity y\n greaterBelief x y = bBelief x >= bBelief y\n\n dxConstraint x y = dx >= (ay * (1 - ax) * (1 - bx) * dy) \/ ((1 - ay) * ax * (1 - by))\n where\n (bx, dx, ux, ax) = (bBelief x, bDisbelief x, bUncertainty x, bAtomicity x)\n (by, dy, uy, ay) = (bBelief y, bDisbelief y, bUncertainty y, bAtomicity y)\n\n uxConstraint x y = ux >= (ay * (1 - bx) * uy) \/ (ax * (1 - by))\n where\n (bx, dx, ux, ax) = (bBelief x, bDisbelief x, bUncertainty x, bAtomicity x)\n (by, dy, uy, ay) = (bBelief y, bDisbelief y, bUncertainty y, bAtomicity y)\n\ncodivide' (Binomial bx dx ux ax hx xt xf) (Binomial by dy uy ay _ yt yf) =\n Binomial b' d' u' a' hx zt zf\n where\n b' = (bx - by) \/ (1 - by)\n d' = ((1 - ay) * (dx + (1 - ax) * ux)\n \/ ((ax - ay) * (dy + (1 - ay) * uy)))\n - (1 - ax) * (1 - bx) \/ ((ax - ay) * (1 - by))\n u' = ((1 - ay) * (1 - bx) \/ ((ax - ay) * (1 - by)))\n 
- ((1 - ay) * (dx + (1 - ax) * ux)\n \/ ((ax - ay) * (dy + (1 - ay) * uy)))\n a' = (ax - ay) \/ (1 - ay)\n\n zt = head . filter (\/= zf) . map snd . F.toList $ xt\n [(_, zf)] = F.toList xf\n\\end{code}\n\nIn this section we have introduced those binomial operators that have\nanalogs to logic and set theory. In the next section we discuss\nthe binomial operators for modeling \\emph{trust transitivity}.\n\n\n\\subsubsection{Trust Transitivity Operators}\n\nIn this section we present the Subjective Logic operators for trust\ntransitivity. If two agents A and B exist such that agent A has an opinion\nof agent B, and agent B has an opinion about some proposition X, then\nA can form an opinion of X by \\emph{discounting} B's opinion of x based on\nA's opinion of B.\n\nSubjective Logic offers three methods of discounting:\n\\emph{uncertainty favouring discounting}, \\emph{opposite belief\nfavouring discounting}, and \\emph{base rate sensitive discounting} \\cite{josang2012trust}.\nWe begin by constructing a simple data type to represent the three kinds of discounting.\n\n\\begin{code}\ndata Favouring = Uncertainty | Opposite | BaseRateSensitive\n\\end{code}\n\nBy doing so, we are able to expose a single discounting function to\nthe user that selects the kind of discounting based on an input parameter of type\n\\emph{Favouring}:\n\n\\begin{code}\ndiscount :: (ToBinomial op1, ToBinomial op2, Ord h, Ord b)\n => Favouring\n ->SLExpr h a (op1 h h)\n -> SLExpr h a (op2 h b)\n -> SLExpr h a (Binomial h b)\ndiscount f opx opy = do\n opx' <- liftM toBinomial opx\n opy' <- liftM toBinomial opy\n return $ case f of\n Uncertainty -> discount_u opx' opy'\n Opposite -> discount_o opx' opy'\n BaseRateSensitive -> discount_b opx' opy'\n\\end{code}\n\nDepending on the first parameter, the discount function dispatches to one\nof three implementations: \\emph{discount\\_u}, \\emph{discount\\_o}, or \\emph{discount\\_b}.\nTheir definitions follow below.\n\n\\begin{code}\ndiscount_u :: 
Binomial h h -> Binomial h a -> Binomial h a\ndiscount_u (Binomial bb db ub ab hx _ _) (Binomial bx dx ux ax hy fx fy) =\n Binomial b' d' u' a' (Discount hx hy) fx fy\n where\n b' = bb * bx\n d' = bb * dx\n u' = db + ub + bb * ux\n a' = ax\n\\end{code}\n\n\\begin{code}\ndiscount_o :: Binomial h h -> Binomial h a -> Binomial h a\ndiscount_o (Binomial bb db ub ab hx _ _) (Binomial bx dx ux ax hy fx fy) =\n Binomial b' d' u' a' (Discount hx hy) fx fy\n where\n b' = bb * bx + db * dx\n d' = bb * dx + db * bx\n u' = ub + (bb + db) * ux\n a' = ax\n\\end{code}\n\n\\begin{code}\ndiscount_b :: (Ord a, Ord h) => Binomial h h -> Binomial h a -> Binomial h a\ndiscount_b op1@(Binomial bb db ub ab hx _ _) op2@(Binomial bx dx ux ax hy fx fy) =\n Binomial b' d' u' a' (Discount hx hy) fx fy\n where\n b' = expectation op1 * bx\n d' = expectation op1 * dx\n u' = 1 - expectation op1 * (bx + dx)\n a' = ax\n\\end{code}\n\n\nIn this section we have presented the operators of Subjective Logic for working with binomial opinions.\nWe first introduced the operators that have\nanalogs to the classical operators of logic and set theory, and then introduced\noperators for modeling transitive trust networks. These operators are summarized\nin Table \\ref{tbl:binomial-operators}. In the next section we introduce the\noperators of Subjective Logic for working with multinomial and hyper opinions.\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{| l | l | l |}\n \\hline\n Name & SL Notation & SLHS Notation\\\\\n \\hline\n Addition & $\\omega_{X \\cup Y} = \\omega_X + \\omega_Y$ & $opx +! opy$ \\\\\n Subtraction & $\\omega_{X \\setminus Y} = \\omega_X - \\omega_Y$ & $opx -! opy$ \\\\\n Negation & $\\omega_{\\bar{x}} = \\lnot \\omega_x$ & $negate opx$ \\\\\n Multiplication & $\\omega_{X \\land Y} = \\omega_X \\cdot \\omega_Y$ & $opx *! opy$ \\\\\n Co-multiplication & $\\omega_{X \\lor Y} = \\omega_X \\sqcup \\omega_Y$ & $opx ~*! 
opy$ \\\\\n Division & $\omega_{X \bar{\land} Y} = \omega_X \/ \omega_Y$ & $opx \/! opy$ \\\\\n Co-division & $\omega_{X \bar{\lor} Y} = \omega_X \bar{\sqcup} \omega_Y$ & $opx ~\/! opy$ \\\\\n Discounting & $\omega^{A:B}_x = \omega^A_B \otimes \omega^B_x$ & $discount\,t\,opa\,opb$ \\\\\n \hline\n\end{tabular}\n\end{center}\n\n\caption{Summary of binomial operators}\n\label{tbl:binomial-operators}\n\end{table}\n\n\n\n\n\subsection{Multinomial and Hyper Operators}\n\nIn this section we present the multinomial and hyper operators. We start with\nmultinomial multiplication and describe how it differs from binomial multiplication \cite{josang2005multiplication},\nthen we introduce the various operators for belief \emph{fusion} and \emph{unfusion}\n\cite{josang2012interpretation, josang2010cumulative, josang2009fission, josang2009cumulative}.\nWe then introduce the \emph{deduction} and \emph{abduction} operators for reasoning \cite{josanginverting, josang2008conditional, josang2008abductive},\nand lastly we introduce the \emph{belief constraint} operator \cite{josang2012dempster}.\n\n\n\subsubsection{Multinomial Multiplication}\n\nThe multiplication of two multinomial opinions is a separate operator\nfrom the product operator defined over binomial opinions. 
Whereas the\nbinomial product operator is equivalent to the logical \\emph{and}\noperator, multinomial multiplication constructs an opinion over a new\nframe which is the cartesian product of the frames of the input opinions \\cite{josang2005multiplication}.\nIn order to avoid symbolic naming conflicts, we have chosen to name the binomial\noperator with the symbol $*!$, and we have used the name \\emph{times} to\ndenote the multinomial operator.\n\n\\begin{code}\ntimes :: (ToMultinomial op1, ToMultinomial op2, Eq h, Ord b, Ord c)\n => SLExpr h a (op1 h b) -> SLExpr h a (op2 h c)\n -> SLExpr h a (Multinomial h (b, c))\ntimes opx opy = do\n opx' <- liftM toMultinomial opx\n opy' <- liftM toMultinomial opy\n return $ m_times' opx' opy'\n\\end{code}\n\n\\begin{code}\nm_times' :: (Ord a, Ord b) => Multinomial h a -> Multinomial h b -> Multinomial h (a, b)\nm_times' (Multinomial bx ux ax hx fx) (Multinomial by uy ay hy fy) =\n Multinomial b' u' a' (Product hx hy) (fx `F.cross` fy)\n where\n b' = V.fromList bxy\n u' = uxy\n a' = V.fromList axy\n\n bxy = [ ((x, y), f x y) | x <- xKeys, y <- yKeys ]\n where\n f x y = expect x y - (V.value ax x * V.value ay y * uxy)\n\n axy = [ ((x, y), f x y) | x <- xKeys, y <- yKeys ]\n where\n f x y = V.value ax x * V.value ay y\n\n uxy = minimum [ uxy' x y | x <- xKeys, y <- yKeys ]\n\n uxy' x y = (uIxy * expect x y) \/ (bIxy x y + V.value ax x * V.value ay y * uIxy)\n\n uIxy = uRxy + uCxy + uFxy\n where\n uRxy = sum [ ux * V.value by y | y <- yKeys ]\n uCxy = sum [ uy * V.value bx x | x <- xKeys ]\n uFxy = ux * uy\n\n bIxy x y = V.value bx x * V.value by y\n\n expect x y = (V.value bx x + V.value ax x * ux) * (V.value by y + V.value ay y * uy)\n\n xKeys = F.toList fx\n yKeys = F.toList fy\n\\end{code}\n\n\n\\subsubsection{Fusion, Unfusion, and Fission}\n\nHyper opinions can be fused together using two different operators:\n\\emph{cumulative fusion} and \\emph{averaging fusion}. 
Each operator\nshould be used under different circumstances depending on the meaning\nof the fused opinions \\cite{josang2010cumulative, josang2012interpretation}.\n\n\\begin{code}\ncFuse :: (ToHyper op1, ToHyper op2, Ord b)\n => SLExpr h a (op1 h b) -> SLExpr h a (op2 h b) -> SLExpr h a (Hyper h b)\ncFuse opa opb = do\n opa' <- liftM toHyper opa\n opb' <- liftM toHyper opb\n return $ cFuse' opa' opb'\n\\end{code}\n\n\\begin{code}\ncFuse' :: Ord a => Hyper h a -> Hyper h a -> Hyper h a\ncFuse' (Hyper ba ua aa hx fx) (Hyper bb ub ab hy _)\n | ua \/= 0 || ub \/= 0 = Hyper b' u' a' (Fuse Cumulative hx hy) fx\n | otherwise = Hyper b'' u'' a'' (Fuse Cumulative hx hy) fx\n where\n b' = V.fromList . map (\\k -> (k, bFunc k)) $ keys\n u' = ua * ub \/ (ua + ub - ua * ub)\n a' = aa\n\n b'' = V.fromList . map (\\k -> (k, bB k)) $ keys\n u'' = 0\n a'' = aa\n\n bFunc x = (bA x * ub + bB x * ua) \/ (ua + ub - ua * ub)\n\n keys = nub (V.focals ba ++ V.focals bb)\n\n bA = V.value ba\n bB = V.value bb\n\\end{code}\n\n\\begin{code}\naFuse :: (ToHyper op1, ToHyper op2, Ord a)\n => SLExpr h a (op1 h a) -> SLExpr h a (op2 h a) -> SLExpr h a (Hyper h a)\naFuse opa opb = do\n opa' <- liftM toHyper opa\n opb' <- liftM toHyper opb\n return $ aFuse' opa' opb'\n\\end{code}\n\n\\begin{code}\naFuse' :: Ord a => Hyper h a -> Hyper h a -> Hyper h a\naFuse' (Hyper ba ua aa hx fx) (Hyper bb ub ab hy _)\n | ua \/= 0 || ub \/= 0 = Hyper b' u' a' (Fuse Averaging hx hy) fx\n | otherwise = Hyper b'' u'' a'' (Fuse Averaging hx hy) fx\n where\n b' = V.fromList . map (\\k -> (k, bFunc k)) $ keys\n u' = 2 * ua * ub \/ (ua + ub)\n a' = aa\n\n b'' = V.fromList . map (\\k -> (k, bB k)) $ keys\n u'' = 0\n a'' = aa\n\n bFunc x = (bA x * ub + bB x * ua) \/ (ua + ub)\n\n keys = nub (V.focals ba ++ V.focals bb)\n\n bA = V.value ba\n bB = V.value bb\n\\end{code}\n\nCumulative \\emph{unfusion} is defined for multinomial opinions \\cite{josang2009cumulative}.\nIt has yet to be generalized to hyper opinions. 
Given an opinion that represents\nthe result of cumulatively fusing together two opinions, and one of the\ntwo original opinions, it is possible to extract the other original\nopinion.\n\n\begin{code}\ncUnfuse :: (ToMultinomial op1, ToMultinomial op2, Ord a)\n => SLExpr h a (op1 h a) -> SLExpr h a (op2 h a)\n -> SLExpr h a (Multinomial h a)\ncUnfuse opc opb = do\n opc' <- liftM toMultinomial opc\n opb' <- liftM toMultinomial opb\n return $ cUnfuse' opc' opb'\n\end{code}\n\n\begin{code}\ncUnfuse' :: Ord a => Multinomial h a -> Multinomial h a -> Multinomial h a\ncUnfuse' (Multinomial bc uc ac (Fuse Cumulative hx hy) fx) (Multinomial bb ub ab _ _)\n | uc \/= 0 || ub \/= 0 = Multinomial ba ua aa hx fx\n | otherwise = Multinomial ba' ua' aa' hx fx\n where\n ba = V.mapWithKey belief bc\n ua = ub * uc \/ (ub - uc + ub * uc)\n aa = ac\n\n ba' = bb\n ua' = 0\n aa' = ac\n\n belief x b = (b * ub - V.value bb x * uc) \/ (ub - uc + ub * uc)\n\end{code}\n\nLikewise, averaging unfusion is the inverse operation to averaging fusion\n\cite{josang2009cumulative}.\n\n\begin{code}\naUnfuse :: (ToMultinomial op1, ToMultinomial op2, Ord a)\n => SLExpr h a (op1 h a) -> SLExpr h a (op2 h a)\n -> SLExpr h a (Multinomial h a)\naUnfuse opc opb = do\n opc' <- liftM toMultinomial opc\n opb' <- liftM toMultinomial opb\n return $ aUnfuse' opc' opb'\n\end{code}\n\n\begin{code}\naUnfuse' :: Ord a => Multinomial h a -> Multinomial h a -> Multinomial h a\naUnfuse' (Multinomial bc uc ac (Fuse Averaging hx hy) fx) (Multinomial bb ub ab _ _)\n | uc \/= 0 || ub \/= 0 = Multinomial ba ua aa hx fx\n | otherwise = Multinomial ba' ua' aa' hy fx\n where\n ba = V.mapWithKey belief bc\n ua = ub * uc \/ (2 * ub - uc)\n aa = ac\n\n ba' = bb\n ua' = 0\n aa' = ac\n\n belief x b = (2 * b * ub - V.value bb x * uc) \/ (2 * ub - uc)\n\end{code}\n\nFission is the operation of splitting a multinomial opinion into two\nmultinomial opinions based on some ratio $\phi$ \cite{josang2009fission}.\nWe refer to this 
as the \emph{split} operator. Like unfusion, fission\nhas not yet been generalized to hyper opinions.\n\n\begin{code}\ncSplit :: (Ord a, ToMultinomial op) => Rational -> SLExpr h a (op h a)\n -> SLExpr h a (Multinomial h a , Multinomial h a)\ncSplit phi op = do\n op' <- liftM toMultinomial op\n return $ cSplit' phi op'\n\end{code}\n\n\begin{code}\ncSplit' :: Rational -> Multinomial h a -> (Multinomial h a, Multinomial h a)\ncSplit' phi (Multinomial b u a (Fuse Cumulative h1 h2) fx) = (op1, op2)\n where\n op1 = Multinomial b1 u1 a h1 fx\n op2 = Multinomial b2 u2 a h2 fx\n\n b1 = V.map (\x -> phi * x \/ norm phi) b\n u1 = u \/ norm phi\n\n b2 = V.map (\x -> (1 - phi) * x \/ norm (1 - phi)) b\n u2 = u \/ norm (1 - phi)\n\n norm p = u + p * V.fold (+) 0 b\n\end{code}\n\n\subsubsection{Deduction and Abduction}\n\nDeduction and abduction of multinomial opinions allow one to do\nconditional reasoning with Subjective Logic \cite{josanginverting, josang2008conditional, josang2008abductive}. We first introduce the\noperator for performing deduction, which we call \emph{deduce}, and\nthen discuss the operator \emph{abduce} for performing abduction.\n\nBecause of the nature of these operators, the frames of discernment\nover which the opinions are defined must satisfy two properties: they\nmust be \emph{bounded}, and they must be \emph{enumerable}. These\nconstraints on the type of frames allowed are expressed via the type\nclasses \emph{Bounded} and \emph{Enum}. 
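(As a brief aside before deduction itself: the fission rule of cSplit' above can also be illustrated standalone. The Binomial triple and splitB below are invented for this sketch and are not part of SLHS.)

```haskell
-- Standalone sketch only: splitB mirrors cSplit' for a two-element frame,
-- splitting one opinion (belief, disbelief, uncertainty) into two parts
-- according to the ratio phi.
type Binomial = (Double, Double, Double)

splitB :: Double -> Binomial -> (Binomial, Binomial)
splitB phi (b, d, u) = (part phi, part (1 - phi))
  where
    norm p = u + p * (b + d)  -- the same normaliser as norm in cSplit'
    part p = (p * b / norm p, p * d / norm p, u / norm p)

main :: IO ()
main = print (splitB 0.3 (0.6, 0.2, 0.2))
```

Each part is again additive (its components sum to one), and the part receiving the smaller share of the evidence ends up with proportionally more uncertainty.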
Boundedness simply means that\nthere exists a least and greatest element, and enumerability means that\nthe values of the type must be enumerable.\n\nWe begin by introducing deduction.\n\n\\begin{code}\ndeduce :: (ToMultinomial op, Ord a, Bounded a, Enum a, Ord b, Bounded b, Enum b)\n => SLExpr h a (op h a)\n -> [(a, Multinomial h b)]\n -> SLExpr h a (Multinomial h b)\ndeduce opx ops = do\n opx' <- liftM toMultinomial opx\n return $ deduce' opx' ops\n\\end{code}\n\n\\begin{code}\ndeduce' :: forall a. forall b. forall h.\n (Ord a, Bounded a, Enum a, Ord b, Bounded b, Enum b)\n => Multinomial h a\n -> [(a, Multinomial h b)]\n -> Multinomial h b\ndeduce' opx@(Multinomial bx ux ax hx _) ops = Multinomial b' u' a' hx f\n where\n\\end{code}\n\n\\begin{code}\n expt y = sum . map f $ xs\n where\n f x = V.value ax x * V.value (expectation (findOpinion x)) y\n\\end{code}\n\n\\begin{code}\n expt' y = sum . map f $ xs\n where\n f x = V.value (expectation opx) x * V.value (expectation (findOpinion x)) y\n\\end{code}\n\n\\begin{code}\n tExpt y = (1 - V.value ay y) * byxs + (V.value ay y) * (byxr + uyxr)\n where\n (xr', xs') = dims y\n byxr = V.value (mBelief xr') y\n uyxr = mUncertainty xr'\n byxs = V.value (mBelief xs') y\n\\end{code}\n\n\\begin{code}\n xs = [minBound .. maxBound] :: [a]\n ys = [minBound .. maxBound] :: [b]\n\\end{code}\n\n\\begin{code}\n ay = mBaseRate . snd . head $ ops\n\\end{code}\n\n\\begin{code}\n uYx x = maybe 1 mUncertainty . lookup x $ ops\n\n findOpinion x = case lookup x ops of\n Nothing -> Multinomial (V.fromList []) 1 ay hx f\n Just op -> op\n\\end{code}\n\n\\begin{code}\n f = mFrame . snd . 
head $ ops\n\\end{code}\n\n\\begin{code}\n dims :: b -> (Multinomial h b, Multinomial h b)\n dims y = (xr', xs')\n where\n (_, xr', xs') = foldl1' minPair (dims' y)\n minPair a@(u, _, _) b@(u', _, _) | u < u' = a\n | otherwise = b\n\n dims' y = do xr' <- xs\n xs' <- xs\n let xr'' = findOpinion xr'\n xs'' = findOpinion xs'\n byxr = V.value (mBelief xr'') y\n uyxr = mUncertainty xr''\n byxs = V.value (mBelief xs'') y\n val = 1 - byxr - uyxr + byxs\n return (val, xr'', xs'')\n\\end{code}\n\n\\begin{code}\n triangleApexU y\n | expt y <= tExpt y = (expt y - byxs) \/ V.value ay y\n | otherwise = (byxr + uyxr - expt y) \/ (1 - V.value ay y)\n where\n byxr = V.value (mBelief . fst . dims $ y) y\n uyxr = mUncertainty . fst . dims $ y\n byxs = V.value (mBelief . snd . dims $ y) y\n\\end{code}\n\n\\begin{code}\n intApexU = maximum . map triangleApexU $ ys\n\\end{code}\n\n\\begin{code}\n bComp y = expt y - V.value ay y * intApexU\n\\end{code}\n\n\\begin{code}\n adjustedU y | bComp y < 0 = expt y \/ V.value ay y\n | otherwise = intApexU\n\\end{code}\n\n\\begin{code}\n apexU = minimum . map adjustedU $ ys\n\\end{code}\n\n\\begin{code}\n b' = V.fromList [ (y, expt' y - (V.value ay y) * u') | y <- ys ]\n u' = (apexU -) . sum . map (\\x -> (apexU - uYx x) * V.value bx x) $ xs\n a' = ay\n\\end{code}\n\n\nSubjective Logic abduction is a two step procedure. Given an opinion\nover a frame X and a list of conditional opinions over X given Y, we\nfirst must invert the conditionals into a list of conditional opinions\nover Y given X, and then perform Subjective Logic deduction with the\nnew list and the opinion over X.\n\n\\begin{code}\nabduce :: (ToMultinomial op, Ord a, Bounded a, Enum a, Ord b, Bounded b, Enum b)\n => SLExpr h a (op h a)\n -> [(b, Multinomial h a)]\n -> BaseRateVector b\n -> SLExpr h a (Multinomial h b)\nabduce opx ops ay = do\n opx' <- liftM toMultinomial opx\n return $ abduce' opx' ops ay\n\\end{code}\n\n\\begin{code}\nabduce' :: forall a. forall b. 
forall h.\n (Ord a, Bounded a, Enum a, Ord b, Bounded b, Enum b)\n => Multinomial h a\n -> [(b, Multinomial h a)]\n -> BaseRateVector b\n -> Multinomial h b\nabduce' opx@(Multinomial bx ux ax hx fx) ops ay = deduce' opx ops'\n where\n ops' = map multinomial xs\n\n multinomial x = (x, Multinomial b' u' a' hx (F.fromList ys))\n where\n b' = V.fromList bs\n u' = uT x\n a' = ay\n bs = map (\\y -> (y, f y)) ys\n f y = expt y x - V.value ay y * uT x\n\\end{code}\n\n\\begin{code}\n expt y x = numer \/ denom\n where\n numer = V.value ay y * V.value (expectation (findOpinion y)) x\n denom = sum . map f $ ys\n f y = V.value ay y * V.value (expectation (findOpinion y)) x\n\\end{code}\n\n\\begin{code}\n uT x = minimum . map f $ ys\n where\n f y = expt y x \/ V.value ay y\n\\end{code}\n\n\\begin{code}\n ax = mBaseRate . snd . head $ ops\n\\end{code}\n\n\\begin{code}\n xs = [minBound .. maxBound] :: [a]\n ys = [minBound .. maxBound] :: [b]\n\\end{code}\n\n\\begin{code}\n findOpinion y = case lookup y ops of\n Nothing -> Multinomial (V.fromList []) 1 ax hx (F.fromList xs)\n Just op -> op\n\\end{code}\n\n\n\\subsubsection{Belief Constraining}\n\nThe final operator we discuss is the \\emph{belief constraint}\noperator \\cite{josang2012dempster}. This operator takes as input two objects that are convertible\nto hyper opinions and returns a hyper opinion as output. This function\nis equivalent in meaning to Dempster's rule of combination from Dempster-Shafer Theory \\cite{josang2012dempster}.\n\n\\begin{code}\nconstraint :: (ToHyper op1, ToHyper op2, Ord b)\n => SLExpr h a (op1 h b)\n -> SLExpr h a (op2 h b)\n -> SLExpr h a (Hyper h b)\nconstraint op1 op2 = do\n op1' <- liftM toHyper op1\n op2' <- liftM toHyper op2\n return $ constraint' op1' op2'\n\\end{code}\n\n\\begin{code}\nconstraint' :: (Ord a) => Hyper h a -> Hyper h a -> Hyper h a\nconstraint' h1@(Hyper bA uA aA hx fx) h2@(Hyper bB uB aB hy _) =\n Hyper bAB uAB aAB (Constraint hx hy) fx\n where\n bAB = V.fromList . 
map (\\k -> (k, harmony k \/ (1 - conflict))) $ keys\n\n uAB = (uA * uB) \/ (1 - conflict)\n\n aAB = V.fromList $ map (\\k -> (k, f k)) keys'\n where\n f x = (axA * (1 - uA) + axB * (1 - uB)) \/ (2 - uA - uB)\n where\n axA = V.value aA x\n axB = V.value aB x\n\n harmony x = bxA * uB + bxB * uA + rest\n where\n bxA = V.value bA x\n bxB = V.value bB x\n rest = sum . map combine $ matches\n matches = [(y, z) | y <- keys, z <- keys, F.intersection y z == x]\n\n conflict = sum . map combine $ matches\n where\n matches = [(y, z) | y <- keys, z <- keys, F.intersection y z == F.empty]\n\n combine (y, z) = V.value bA y * V.value bB z\n\n keys = F.toList $ F.reducedPowerSet fx\n keys' = nub (V.focals aA ++ V.focals aB)\n\\end{code}\n\nThe operators for multinomial and hyper opinions are summarized in table \\ref{tbl:mh-operators}.\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{| l | l | l |}\n \\hline\n Name & SL Notation & SLHS Notation\\\\\n \\hline\n Multiplication & $\\omega_{X \\cup Y} = \\omega_X + \\omega_Y$ & $opx\\; `times`\\; opy$ \\\\\n Deduction & $\\omega_{Y || X} = \\omega_X \\circledcirc \\omega_{Y | X}$ & $deduce\\; opx\\; ops$ \\\\\n Abduction & $\\omega_{Y \\overline{||} X} = \\omega_X \\overline{\\circledcirc} \\omega_{X | Y}$ & $abduce\\; opx\\; opys\\; a$ \\\\\n Cumulative Fusion & $\\omega^{A \\diamondsuit B}_{X} = \\omega^A_X \\oplus \\omega^B_X$ & $opx\\; `cFuse`\\; opy$ \\\\\n Cumulative Unfusion & $\\omega^{A \\overline{\\diamondsuit} B}_{X} = \\omega^A_X \\ominus \\omega^B_X$ & $opx\\; `cUnfuse`\\; opy$ \\\\\n Averaging Fusion & $\\omega^{A \\underline{\\diamondsuit} B}_{X} = \\omega^A_X \\underline{\\oplus} \\omega^B_X$ & $opx\\; `aFuse`\\; opy$ \\\\\n Averaging Unfusion & $\\omega^{A \\overline{\\underline{\\diamondsuit}} B}_{X} = \\omega^A_X \\underline{\\ominus} \\omega^B_X$ & $opx\\; `aUnfuse`\\; opy$ \\\\\n Fission & $\\omega_{X \\cup Y} = \\omega_X + \\omega_Y$ & $split\\; phi\\; opx$ \\\\\n Belief Constraining & $\\omega^{A \\& B}_{X} 
= \\omega^A_X \\odot \\omega^B_X$ & $opx\\; `constraint`\\; opy$ \\\\\n \\hline\n\\end{tabular}\n\\end{center}\n\n\\caption{Summary of multinomial and hyper operators}\n\\label{tbl:mh-operators}\n\\end{table}\n\n\n\n\n\n\n\n\\end{document}\n","avg_line_length":35.8291666667,"max_line_length":155,"alphanum_fraction":0.6221653681} {"size":45200,"ext":"lhs","lang":"Literate Haskell","max_stars_count":1.0,"content":"%\n% (c) The University of Glasgow 2006\n% (c) The GRASP\/AQUA Project, Glasgow University, 1992-1998\n%\n\nTcPat: Typechecking patterns\n\n\\begin{code}\n{-# LANGUAGE CPP, RankNTypes #-}\n{-# OPTIONS_GHC -fno-warn-tabs #-}\n-- The above warning supression flag is a temporary kludge.\n-- While working on this module you are encouraged to remove it and\n-- detab the module (please do the detabbing in a separate patch). See\n-- http:\/\/ghc.haskell.org\/trac\/ghc\/wiki\/Commentary\/CodingStyle#TabsvsSpaces\n-- for details\n\nmodule TcPat ( tcLetPat, TcSigFun, TcPragFun\n , TcSigInfo(..), findScopedTyVars\n , LetBndrSpec(..), addInlinePrags, warnPrags\n , tcPat, tcPats, newNoSigLetBndr\n\t , addDataConStupidTheta, badFieldCon, polyPatSig ) where\n\n#include \"HsVersions.h\"\n\nimport {-# SOURCE #-}\tTcExpr( tcSyntaxOp, tcInferRho)\n\nimport HsSyn\nimport TcHsSyn\nimport TcRnMonad\nimport Inst\nimport Id\nimport Var\nimport Name\nimport NameSet\nimport TcEnv\n--import TcExpr\nimport TcMType\nimport TcValidity( arityErr )\nimport TcType\nimport TcUnify\nimport TcHsType\nimport TysWiredIn\nimport TcEvidence\nimport TyCon\nimport DataCon\nimport PatSyn\nimport ConLike\nimport PrelNames\nimport BasicTypes hiding (SuccessFlag(..))\nimport DynFlags\nimport SrcLoc\nimport Util\nimport Outputable\nimport FastString\nimport Control.Monad\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\t\tExternal 
interface\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\ntcLetPat :: TcSigFun -> LetBndrSpec\n \t -> LPat Name -> TcSigmaType \n \t -> TcM a\n \t -> TcM (LPat TcId, a)\ntcLetPat sig_fn no_gen pat pat_ty thing_inside\n = tc_lpat pat pat_ty penv thing_inside \n where\n penv = PE { pe_lazy = True\n , pe_ctxt = LetPat sig_fn no_gen }\n\n-----------------\ntcPats :: HsMatchContext Name\n -> [LPat Name]\t\t -- Patterns,\n -> [TcSigmaType]\t -- and their types\n -> TcM a -- and the checker for the body\n -> TcM ([LPat TcId], a)\n\n-- This is the externally-callable wrapper function\n-- Typecheck the patterns, extend the environment to bind the variables,\n-- do the thing inside, use any existentially-bound dictionaries to \n-- discharge parts of the returning LIE, and deal with pattern type\n-- signatures\n\n-- 1. Initialise the PatState\n-- 2. Check the patterns\n-- 3. Check the body\n-- 4. Check that no existentials escape\n\ntcPats ctxt pats pat_tys thing_inside\n = tc_lpats penv pats pat_tys thing_inside\n where\n penv = PE { pe_lazy = False, pe_ctxt = LamPat ctxt }\n\ntcPat :: HsMatchContext Name\n -> LPat Name -> TcSigmaType \n -> TcM a -- Checker for body, given\n -- its result type\n -> TcM (LPat TcId, a)\ntcPat ctxt pat pat_ty thing_inside\n = tc_lpat pat pat_ty penv thing_inside\n where\n penv = PE { pe_lazy = False, pe_ctxt = LamPat ctxt }\n \n\n-----------------\ndata PatEnv\n = PE { pe_lazy :: Bool\t-- True <=> lazy context, so no existentials allowed\n , pe_ctxt :: PatCtxt \t-- Context in which the whole pattern appears\n }\n\ndata PatCtxt\n = LamPat -- Used for lambdas, case etc\n (HsMatchContext Name) \n\n | LetPat -- Used only for let(rec) pattern bindings\n \t -- See Note [Typing patterns in pattern bindings]\n TcSigFun -- Tells type sig if any\n LetBndrSpec -- True <=> no generalisation of this let\n\ndata LetBndrSpec \n = LetLclBndr\t\t -- The binder is just a local one;\n \t\t\t -- 
an AbsBinds will provide the global version\n\n | LetGblBndr TcPragFun -- Generalisation plan is NoGen, so there isn't going \n -- to be an AbsBinds; so we must bind the global version\n -- of the binder right away. \n \t \t\t -- Oh, and here is the inline-pragma information\n\nmakeLazy :: PatEnv -> PatEnv\nmakeLazy penv = penv { pe_lazy = True }\n\npatSigCtxt :: PatEnv -> UserTypeCtxt\npatSigCtxt (PE { pe_ctxt = LetPat {} }) = BindPatSigCtxt\npatSigCtxt (PE { pe_ctxt = LamPat {} }) = LamPatSigCtxt\n\n---------------\ntype TcPragFun = Name -> [LSig Name]\ntype TcSigFun = Name -> Maybe TcSigInfo\n\ndata TcSigInfo\n = TcSigInfo {\n sig_id :: TcId, -- *Polymorphic* binder for this value...\n\n sig_tvs :: [(Maybe Name, TcTyVar)], \n -- Instantiated type and kind variables\n -- Just n <=> this skolem is lexically in scope with name n\n -- See Note [Binding scoped type variables]\n\n sig_theta :: TcThetaType, -- Instantiated theta\n\n sig_tau :: TcSigmaType, -- Instantiated tau\n\t\t \t\t -- See Note [sig_tau may be polymorphic]\n\n sig_loc :: SrcSpan -- The location of the signature\n }\n\nfindScopedTyVars -- See Note [Binding scoped type variables]\n :: LHsType Name -- The HsType\n -> TcType -- The corresponding Type:\n -- uses same Names as the HsType\n -> [TcTyVar] -- The instantiated forall variables of the Type\n -> [(Maybe Name, TcTyVar)] -- In 1-1 correspondence with the instantiated vars\nfindScopedTyVars hs_ty sig_ty inst_tvs\n = zipWith find sig_tvs inst_tvs\n where\n find sig_tv inst_tv\n | tv_name `elemNameSet` scoped_names = (Just tv_name, inst_tv)\n | otherwise = (Nothing, inst_tv)\n where\n tv_name = tyVarName sig_tv\n\n scoped_names = mkNameSet (hsExplicitTvs hs_ty)\n (sig_tvs,_) = tcSplitForAllTys sig_ty\n\ninstance Outputable TcSigInfo where\n ppr (TcSigInfo { sig_id = id, sig_tvs = tyvars, sig_theta = theta, sig_tau = tau})\n = ppr id <+> dcolon <+> vcat [ pprSigmaType (mkSigmaTy (map snd tyvars) theta tau)\n , ppr (map fst tyvars) 
]\n\end{code}\n\nNote [Binding scoped type variables]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe type variables *brought into lexical scope* by a type signature may\nbe a subset of the *quantified type variables* of the signature, for two reasons:\n\n* With kind polymorphism a signature like\n f :: forall f a. f a -> f a\n may actually give rise to\n f :: forall k. forall (f::k -> *) (a:k). f a -> f a\n So the sig_tvs will be [k,f,a], but only f,a are scoped.\n NB: the scoped ones are not necessarily the *initial* ones!\n\n* Even aside from kind polymorphism, there may be more instantiated\n type variables than lexically-scoped ones. For example:\n type T a = forall b. b -> (a,b)\n f :: forall c. T c\n Here, the signature for f will have one scoped type variable, c,\n but two instantiated type variables, c' and b'.\n\nThe function findScopedTyVars takes\n * hs_ty: the original HsForAllTy\n * sig_ty: the corresponding Type (which is guaranteed to use the same Names\n as the HsForAllTy)\n * inst_tvs: the skolems instantiated from the forall's in sig_ty\nIt returns a [(Maybe Name, TcTyVar)], in 1-1 correspondence with inst_tvs\nbut with a (Just n) for the lexically scoped name of each in-scope tyvar.\n\nNote [sig_tau may be polymorphic]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nNote that \"sig_tau\" might actually be a polymorphic type,\nif the original function had a signature like\n forall a. Eq a => forall b. Ord b => ....\nBut that's ok: tcMatchesFun (called by tcRhs) can deal with that.\nIt happens, too! See Note [Polymorphic methods] in TcClassDcl.\n\nNote [Existential check]\n~~~~~~~~~~~~~~~~~~~~~~~~\nLazy patterns can't bind existentials. They arise in two ways:\n * Let bindings let { C a b = e } in b\n * Twiddle patterns f ~(C a b) = e\nThe pe_lazy field of PatEnv says whether we are inside a lazy\npattern (perhaps deeply).\n\nIf we aren't inside a lazy pattern then we can bind existentials,\nbut we need to be careful about \"extra\" tyvars. 
Consider\n (\C x -> d) : pat_ty -> res_ty\nWhen looking for existential escape we must check that the existentials\nbound by C don't unify with the free variables of pat_ty, OR res_ty\n(or of course the environment). Hence we need to keep track of the \nres_ty free vars.\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\t\tBinders\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\begin{code}\ntcPatBndr :: PatEnv -> Name -> TcSigmaType -> TcM (TcCoercion, TcId)\n-- (coi, xp) = tcPatBndr penv x pat_ty\n-- Then coi : pat_ty ~ typeof(xp)\n--\ntcPatBndr (PE { pe_ctxt = LetPat lookup_sig no_gen}) bndr_name pat_ty\n -- See Note [Typing patterns in pattern bindings]\n | LetGblBndr prags <- no_gen\n , Just sig <- lookup_sig bndr_name\n = do { bndr_id <- addInlinePrags (sig_id sig) (prags bndr_name)\n ; traceTc \"tcPatBndr(gbl,sig)\" (ppr bndr_id $$ ppr (idType bndr_id)) \n ; co <- unifyPatType (idType bndr_id) pat_ty\n ; return (co, bndr_id) }\n \n | otherwise \n = do { bndr_id <- newNoSigLetBndr no_gen bndr_name pat_ty\n ; traceTc \"tcPatBndr(no-sig)\" (ppr bndr_id $$ ppr (idType bndr_id))\n ; return (mkTcNomReflCo pat_ty, bndr_id) }\n\ntcPatBndr (PE { pe_ctxt = _lam_or_proc }) bndr_name pat_ty\n = do { bndr <- mkLocalBinder bndr_name pat_ty\n ; return (mkTcNomReflCo pat_ty, bndr) }\n\n------------\nnewNoSigLetBndr :: LetBndrSpec -> Name -> TcType -> TcM TcId\n-- In the polymorphic case (no_gen = LetLclBndr), generate a \"monomorphic version\" \n-- of the Id; the original name will be bound to the polymorphic version\n-- by the AbsBinds\n-- In the monomorphic case (no_gen = LetGblBndr) there is no AbsBinds, and we \n-- use the original name directly\nnewNoSigLetBndr LetLclBndr name ty \n = do { mono_name <- newLocalName name\n ; mkLocalBinder mono_name ty }\nnewNoSigLetBndr (LetGblBndr prags) name ty \n = do { id <- mkLocalBinder name ty\n ; addInlinePrags id (prags 
name) }\n\n----------\naddInlinePrags :: TcId -> [LSig Name] -> TcM TcId\naddInlinePrags poly_id prags\n = do { traceTc \"addInlinePrags\" (ppr poly_id $$ ppr prags) \n ; tc_inl inl_sigs }\n where\n inl_sigs = filter isInlineLSig prags\n tc_inl [] = return poly_id\n tc_inl (L loc (InlineSig _ prag) : other_inls)\n = do { unless (null other_inls) (setSrcSpan loc warn_dup_inline)\n ; traceTc \"addInlinePrag\" (ppr poly_id $$ ppr prag) \n ; return (poly_id `setInlinePragma` prag) }\n tc_inl _ = panic \"tc_inl\"\n\n warn_dup_inline = warnPrags poly_id inl_sigs $\n ptext (sLit \"Duplicate INLINE pragmas for\")\n\nwarnPrags :: Id -> [LSig Name] -> SDoc -> TcM ()\nwarnPrags id bad_sigs herald\n = addWarnTc (hang (herald <+> quotes (ppr id))\n 2 (ppr_sigs bad_sigs))\n where\n ppr_sigs sigs = vcat (map (ppr . getLoc) sigs)\n\n-----------------\nmkLocalBinder :: Name -> TcType -> TcM TcId\nmkLocalBinder name ty\n = return (Id.mkLocalId name ty)\n\end{code}\n\nNote [Typing patterns in pattern bindings]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nSuppose we are typing a pattern binding\n pat = rhs\nThen the PatCtxt will be (LetPat sig_fn let_bndr_spec).\n\nThere can still be signatures for the binders:\n data T = MkT (forall a. a->a) Int\n x :: forall a. a->a\n y :: Int\n MkT x y = \n\nTwo cases, dealt with by the LetPat case of tcPatBndr\n\n * If we are generalising (generalisation plan is InferGen or\n CheckGen), then the let_bndr_spec will be LetLclBndr. In that case\n we want to bind a cloned, local version of the variable, with the\n type given by the pattern context, *not* by the signature (even if\n there is one; see Trac #7268). The mkExport part of the\n generalisation step will do the checking and impedance matching\n against the signature.\n\n * If for some reason we are not generalising (plan = NoGen), the\n LetBndrSpec will be LetGblBndr. In that case we must bind the\n global version of the Id, and do so with precisely the type given\n in the signature. 
(Then we unify with the type from the pattern\n context type.)\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\t\tThe main worker functions\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\nNote [Nesting]\n~~~~~~~~~~~~~~\ntcPat takes a \"thing inside\" over which the pattern scopes. This is partly\nso that tcPat can extend the environment for the thing_inside, but also \nso that constraints arising in the thing_inside can be discharged by the\npattern.\n\nThis does not work so well for the ErrCtxt carried by the monad: we don't\nwant the error-context for the pattern to scope over the RHS. \nHence the getErrCtxt\/setErrCtxt stuff in tcMultiple.\n\n\begin{code}\n--------------------\ntype Checker inp out = forall r.\n\t\t\t inp\n\t\t -> PatEnv\n\t\t -> TcM r\n\t\t -> TcM (out, r)\n\ntcMultiple :: Checker inp out -> Checker [inp] [out]\ntcMultiple tc_pat args penv thing_inside\n = do\t{ err_ctxt <- getErrCtxt\n\t; let loop _ []\n\t\t= do { res <- thing_inside\n\t\t ; return ([], res) }\n\n\t loop penv (arg:args)\n\t\t= do { (p', (ps', res)) \n\t\t\t\t<- tc_pat arg penv $ \n\t\t\t\t setErrCtxt err_ctxt $\n\t\t\t\t loop penv args\n\t\t-- setErrCtxt: restore context before doing the next pattern\n\t\t-- See note [Nesting] above\n\t\t\t\t\n\t\t ; return (p':ps', res) }\n\n\t; loop penv args }\n\n--------------------\ntc_lpat :: LPat Name \n\t-> TcSigmaType\n\t-> PatEnv\n\t-> TcM a\n\t-> TcM (LPat TcId, a)\ntc_lpat (L span pat) pat_ty penv thing_inside\n = setSrcSpan span $\n do\t{ (pat', res) <- maybeWrapPatCtxt pat (tc_pat penv pat pat_ty)\n thing_inside\n\t; return (L span pat', res) }\n\ntc_lpats :: PatEnv\n\t -> [LPat Name] -> [TcSigmaType]\n \t -> TcM a\t\n \t -> TcM ([LPat TcId], a)\ntc_lpats penv pats tys thing_inside \n = ASSERT2( equalLength pats tys, ppr pats $$ ppr tys )\n tcMultiple (\(p,t) -> tc_lpat p t) \n (zipEqual \"tc_lpats\" pats tys)\n penv 
thing_inside \n\n--------------------\ntc_pat\t:: PatEnv\n -> Pat Name \n -> TcSigmaType\t-- Fully refined result type\n -> TcM a\t\t-- Thing inside\n -> TcM (Pat TcId, \t-- Translated pattern\n a)\t\t-- Result of thing inside\n\ntc_pat penv (VarPat name) pat_ty thing_inside\n = do\t{ (co, id) <- tcPatBndr penv name pat_ty\n ; res <- tcExtendIdEnv1 name id thing_inside\n ; return (mkHsWrapPatCo co (VarPat id) pat_ty, res) }\n\ntc_pat penv (ParPat pat) pat_ty thing_inside\n = do\t{ (pat', res) <- tc_lpat pat pat_ty penv thing_inside\n\t; return (ParPat pat', res) }\n\ntc_pat penv (BangPat pat) pat_ty thing_inside\n = do\t{ (pat', res) <- tc_lpat pat pat_ty penv thing_inside\n\t; return (BangPat pat', res) }\n\ntc_pat penv lpat@(LazyPat pat) pat_ty thing_inside\n = do\t{ (pat', (res, pat_ct)) \n\t\t<- tc_lpat pat pat_ty (makeLazy penv) $ \n\t\t captureConstraints thing_inside\n\t\t-- Ignore refined penv', revert to penv\n\n\t; emitConstraints pat_ct\n\t-- captureConstraints\/extendConstraints: \n -- see Note [Hopping the LIE in lazy patterns]\n\n\t-- Check there are no unlifted types under the lazy pattern\n\t; when (any (isUnLiftedType . idType) $ collectPatBinders pat') $\n lazyUnliftedPatErr lpat\n\n\t-- Check that the expected pattern type is itself lifted\n\t; pat_ty' <- newFlexiTyVarTy liftedTypeKind\n\t; _ <- unifyType pat_ty pat_ty'\n\n\t; return (LazyPat pat', res) }\n\ntc_pat _ p@(QuasiQuotePat _) _ _\n = pprPanic \"Should never see QuasiQuotePat in type checker\" (ppr p)\n\ntc_pat _ (WildPat _) pat_ty thing_inside\n = do\t{ res <- thing_inside \n\t; return (WildPat pat_ty, res) }\n\ntc_pat penv (AsPat (L nm_loc name) pat) pat_ty thing_inside\n = do\t{ (co, bndr_id) <- setSrcSpan nm_loc (tcPatBndr penv name pat_ty)\n ; (pat', res) <- tcExtendIdEnv1 name bndr_id $\n\t\t\t tc_lpat pat (idType bndr_id) penv thing_inside\n\t -- NB: if we do inference on:\n\t --\t\t\\ (y@(x::forall a. a->a)) = e\n\t -- we'll fail. 
The as-pattern infers a monotype for 'y', which then\n\t -- fails to unify with the polymorphic type for 'x'. This could \n\t -- perhaps be fixed, but only with a bit more work.\n\t --\n\t -- If you fix it, don't forget the bindInstsOfPatIds!\n\t; return (mkHsWrapPatCo co (AsPat (L nm_loc bndr_id) pat') pat_ty, res) }\n\ntc_pat penv (ViewPat expr pat _) overall_pat_ty thing_inside \n = do\t{\n -- Morally, expr must have type `forall a1...aN. OPT' -> B` \n -- where overall_pat_ty is an instance of OPT'.\n -- Here, we infer a rho type for it,\n -- which replaces the leading foralls and constraints\n -- with fresh unification variables.\n ; (expr',expr'_inferred) <- tcInferRho expr\n\n -- next, we check that expr is coercible to `overall_pat_ty -> pat_ty`\n -- NOTE: this forces pat_ty to be a monotype (because we use a unification \n -- variable to find it). This means that in an example like\n -- (view -> f) where view :: _ -> forall b. b\n -- we will only be able to use view at one instantiation in the\n -- rest of the view\n\t; (expr_co, pat_ty) <- tcInfer $ \ pat_ty -> \n\t\tunifyType expr'_inferred (mkFunTy overall_pat_ty pat_ty)\n \n -- pattern must have pat_ty\n ; (pat', res) <- tc_lpat pat pat_ty penv thing_inside\n\n\t; return (ViewPat (mkLHsWrapCo expr_co expr') pat' overall_pat_ty, res) }\n\n-- Type signatures in patterns\n-- See Note [Pattern coercions] below\ntc_pat penv (SigPatIn pat sig_ty) pat_ty thing_inside\n = do\t{ (inner_ty, tv_binds, wrap) <- tcPatSig (patSigCtxt penv) sig_ty pat_ty\n\t; (pat', res) <- tcExtendTyVarEnv2 tv_binds $\n\t\t\t tc_lpat pat inner_ty penv thing_inside\n\n ; return (mkHsWrapPat wrap (SigPatOut pat' inner_ty) pat_ty, res) }\n\n------------------------\n-- Lists, tuples, arrays\ntc_pat penv (ListPat pats _ Nothing) pat_ty thing_inside\n = do\t{ (coi, elt_ty) <- matchExpectedPatTy matchExpectedListTy pat_ty \n ; (pats', res) <- tcMultiple (\p -> tc_lpat p elt_ty)\n\t\t\t\t pats penv thing_inside\n \t; return (mkHsWrapPat 
coi (ListPat pats' elt_ty Nothing) pat_ty, res) \n }\n\ntc_pat penv (ListPat pats _ (Just (_,e))) pat_ty thing_inside\n = do\t{ list_pat_ty <- newFlexiTyVarTy liftedTypeKind\n ; e' <- tcSyntaxOp ListOrigin e (mkFunTy pat_ty list_pat_ty)\n ; (coi, elt_ty) <- matchExpectedPatTy matchExpectedListTy list_pat_ty\n ; (pats', res) <- tcMultiple (\p -> tc_lpat p elt_ty)\n\t\t\t\t pats penv thing_inside\n \t; return (mkHsWrapPat coi (ListPat pats' elt_ty (Just (pat_ty,e'))) list_pat_ty, res) \n }\n\ntc_pat penv (PArrPat pats _) pat_ty thing_inside\n = do\t{ (coi, elt_ty) <- matchExpectedPatTy matchExpectedPArrTy pat_ty\n\t; (pats', res) <- tcMultiple (\p -> tc_lpat p elt_ty)\n\t\t\t\t pats penv thing_inside \n\t; return (mkHsWrapPat coi (PArrPat pats' elt_ty) pat_ty, res)\n }\n\ntc_pat penv (TuplePat pats boxity _) pat_ty thing_inside\n = do\t{ let tc = tupleTyCon (boxityNormalTupleSort boxity) (length pats)\n ; (coi, arg_tys) <- matchExpectedPatTy (matchExpectedTyConApp tc) pat_ty\n\t; (pats', res) <- tc_lpats penv pats arg_tys thing_inside\n\n\t; dflags <- getDynFlags\n\n\t-- Under flag control turn a pattern (x,y,z) into ~(x,y,z)\n\t-- so that we can experiment with lazy tuple-matching.\n\t-- This is a pretty odd place to make the switch, but\n\t-- it was easy to do.\n\t; let pat_ty' = mkTyConApp tc arg_tys\n -- pat_ty' \/= pat_ty iff coi \/= IdCo\n unmangled_result = TuplePat pats' boxity pat_ty'\n\t possibly_mangled_result\n\t | gopt Opt_IrrefutableTuples dflags &&\n isBoxed boxity = LazyPat (noLoc unmangled_result)\n\t | otherwise\t\t = unmangled_result\n\n \t; ASSERT( length arg_tys == length pats ) -- Syntactically enforced\n\t return (mkHsWrapPat coi possibly_mangled_result pat_ty, res)\n }\n\n------------------------\n-- Data constructors\ntc_pat penv (ConPatIn con arg_pats) pat_ty thing_inside\n = tcConPat penv con pat_ty arg_pats thing_inside\n\n------------------------\n-- Literal patterns\ntc_pat _ (LitPat simple_lit) pat_ty thing_inside\n = do\t{ let lit_ty 
= hsLitType simple_lit\n\t; co <- unifyPatType lit_ty pat_ty\n\t\t-- coi is of kind: pat_ty ~ lit_ty\n\t; res <- thing_inside \n\t; return ( mkHsWrapPatCo co (LitPat simple_lit) pat_ty \n , res) }\n\n------------------------\n-- Overloaded patterns: n, and n+k\ntc_pat _ (NPat over_lit mb_neg eq) pat_ty thing_inside\n = do\t{ let orig = LiteralOrigin over_lit\n\t; lit' <- newOverloadedLit orig over_lit pat_ty\n\t; eq' <- tcSyntaxOp orig eq (mkFunTys [pat_ty, pat_ty] boolTy)\n\t; mb_neg' <- case mb_neg of\n\t\t\tNothing -> return Nothing\t-- Positive literal\n\t\t\tJust neg -> \t-- Negative literal\n\t\t\t\t\t-- The 'negate' is re-mappable syntax\n \t\t\t do { neg' <- tcSyntaxOp orig neg (mkFunTy pat_ty pat_ty)\n\t\t\t ; return (Just neg') }\n\t; res <- thing_inside \n\t; return (NPat lit' mb_neg' eq', res) }\n\ntc_pat penv (NPlusKPat (L nm_loc name) lit ge minus) pat_ty thing_inside\n = do\t{ (co, bndr_id) <- setSrcSpan nm_loc (tcPatBndr penv name pat_ty)\n ; let pat_ty' = idType bndr_id\n\t orig = LiteralOrigin lit\n\t; lit' <- newOverloadedLit orig lit pat_ty'\n\n\t-- The '>=' and '-' parts are re-mappable syntax\n\t; ge' <- tcSyntaxOp orig ge (mkFunTys [pat_ty', pat_ty'] boolTy)\n\t; minus' <- tcSyntaxOp orig minus (mkFunTys [pat_ty', pat_ty'] pat_ty')\n ; let pat' = NPlusKPat (L nm_loc bndr_id) lit' ge' minus'\n\n\t-- The Report says that n+k patterns must be in Integral\n\t-- We may not want this when using re-mappable syntax, though (ToDo?)\n\t; icls <- tcLookupClass integralClassName\n\t; instStupidTheta orig [mkClassPred icls [pat_ty']]\t\n \n\t; res <- tcExtendIdEnv1 name bndr_id thing_inside\n\t; return (mkHsWrapPatCo co pat' pat_ty, res) }\n\ntc_pat _ _other_pat _ _ = panic \"tc_pat\" \t-- ConPatOut, SigPatOut\n\n----------------\nunifyPatType :: TcType -> TcType -> TcM TcCoercion\n-- In patterns we want a coercion from the\n-- context type (expected) to the actual pattern type\n-- But we don't want to reverse the args to unifyType because\n-- that 
controls the actual\/expected stuff in error messages\nunifyPatType actual_ty expected_ty\n = do { coi <- unifyType actual_ty expected_ty\n ; return (mkTcSymCo coi) }\n\end{code}\n\nNote [Hopping the LIE in lazy patterns]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIn a lazy pattern, we must *not* discharge constraints from the RHS\nfrom dictionaries bound in the pattern. E.g.\n\tf ~(C x) = 3\nWe can't discharge the Num constraint from dictionaries bound by\nthe pattern C! \n\nSo we have to make the constraints from thing_inside \"hop around\" \nthe pattern. Hence the captureConstraints and emitConstraints.\n\nThe same thing ensures that equality constraints in a lazy match\nare not made available in the RHS of the match. For example\n\tdata T a where { T1 :: Int -> T Int; ... }\n\tf :: T a -> Int -> a\n\tf ~(T1 i) y = y\nIt's obviously not sound to refine a to Int in the right\nhand side, because the argument might not match T1 at all!\n\nFinally, a lazy pattern should not bind any existential type variables\nbecause they won't be in scope when we do the desugaring.\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\tMost of the work for constructors is here\n\t(the rest is in the ConPatIn case of tc_pat)\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n[Pattern matching indexed data types]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nConsider the following declarations:\n\n data family Map k :: * -> *\n data instance Map (a, b) v = MapPair (Map a (Pair b v))\n\nand a case expression\n\n case x :: Map (Int, c) w of MapPair m -> ...\n\nAs explained by [Wrappers for data instance tycons] in MkIds.lhs, the\nworker\/wrapper types for MapPair are\n\n $WMapPair :: forall a b v. Map a (Map a b v) -> Map (a, b) v\n $wMapPair :: forall a b v. 
Map a (Map a b v) -> :R123Map a b v\n\nSo, the type of the scrutinee is Map (Int, c) w, but the tycon of MapPair is\n:R123Map, which means the straight use of boxySplitTyConApp would give a type\nerror. Hence, the smart wrapper function boxySplitTyConAppWithFamily calls\nboxySplitTyConApp with the family tycon Map instead, which gives us the family\ntype list {(Int, c), w}. To get the correct split for :R123Map, we need to\nunify the family type list {(Int, c), w} with the instance types {(a, b), v}\n(provided by tyConFamInst_maybe together with the family tycon). This\nunification yields the substitution [a -> Int, b -> c, v -> w], which gives us\nthe split arguments for the representation tycon :R123Map as {Int, c, w}.\n\nIn other words, boxySplitTyConAppWithFamily implicitly takes the coercion \n\n Co123Map a b v :: {Map (a, b) v ~ :R123Map a b v}\n\nmoving between representation and family type into account. To produce type\ncorrect Core, this coercion needs to be used to cast the type of the scrutinee\nfrom the family to the representation type. This is achieved by\nunwrapFamInstScrutinee using a CoPat around the result pattern.\n\nNow it might seem as if we could have used the previous GADT type\nrefinement infrastructure of refineAlt and friends instead of the explicit\nunification and CoPat generation. However, that would be wrong. Why? The\nwhole point of GADT refinement is that the refinement is local to the case\nalternative. In contrast, the substitution generated by the unification of\nthe family type list and instance types needs to be propagated to the outside.\nImagine that in the above example, the type of the scrutinee would have been\n(Map x w), then we would have unified {x, w} with {(a, b), v}, yielding the\nsubstitution [x -> (a, b), v -> w]. 
In contrast to GADT matching, the\ninstantiation of x with (a, b) must be global; ie, it must be valid in *all*\nalternatives of the case expression, whereas in the GADT case it might vary\nbetween alternatives.\n\nRIP GADT refinement: refinements have been replaced by the use of explicit\nequality constraints that are used in conjunction with implication constraints\nto express the local scope of GADT refinements.\n\n\\begin{code}\n--\tRunning example:\n-- MkT :: forall a b c. (a~[b]) => b -> c -> T a\n-- \t with scrutinee of type (T ty)\n\ntcConPat :: PatEnv -> Located Name \n\t -> TcRhoType \t \t-- Type of the pattern\n\t -> HsConPatDetails Name -> TcM a\n\t -> TcM (Pat TcId, a)\ntcConPat penv con_lname@(L _ con_name) pat_ty arg_pats thing_inside\n = do { con_like <- tcLookupConLike con_name\n ; case con_like of\n RealDataCon data_con -> tcDataConPat penv con_lname data_con\n pat_ty arg_pats thing_inside\n PatSynCon pat_syn -> tcPatSynPat penv con_lname pat_syn\n pat_ty arg_pats thing_inside\n }\n\ntcDataConPat :: PatEnv -> Located Name -> DataCon\n\t -> TcRhoType \t \t-- Type of the pattern\n\t -> HsConPatDetails Name -> TcM a\n\t -> TcM (Pat TcId, a)\ntcDataConPat penv (L con_span con_name) data_con pat_ty arg_pats thing_inside\n = do\t{ let tycon = dataConTyCon data_con\n \t -- For data families this is the representation tycon\n\t (univ_tvs, ex_tvs, eq_spec, theta, arg_tys, _)\n = dataConFullSig data_con\n header = L con_span (RealDataCon data_con)\n\n\t -- Instantiate the constructor type variables [a->ty]\n\t -- This may involve doing a family-instance coercion, \n\t -- and building a wrapper \n\t; (wrap, ctxt_res_tys) <- matchExpectedPatTy (matchExpectedConTy tycon) pat_ty\n\n\t -- Add the stupid theta\n\t; setSrcSpan con_span $ addDataConStupidTheta data_con ctxt_res_tys\n\n\t; checkExistentials ex_tvs penv \n ; (tenv, ex_tvs') <- tcInstSuperSkolTyVarsX\n (zipTopTvSubst univ_tvs ctxt_res_tys) ex_tvs\n -- Get location from monad, not from ex_tvs\n\n ; let 
pat_ty' = mkTyConApp tycon ctxt_res_tys\n\t -- pat_ty' is type of the actual constructor application\n -- pat_ty' \/= pat_ty iff coi \/= IdCo\n\n\t arg_tys' = substTys tenv arg_tys\n\n ; traceTc \"tcConPat\" (vcat [ ppr con_name, ppr univ_tvs, ppr ex_tvs, ppr eq_spec\n , ppr ex_tvs', ppr pat_ty', ppr arg_tys' ])\n\t; if null ex_tvs && null eq_spec && null theta\n\t then do { -- The common case; no class bindings etc \n -- (see Note [Arrows and patterns])\n\t\t (arg_pats', res) <- tcConArgs (RealDataCon data_con) arg_tys'\n\t\t\t\t\t\t arg_pats penv thing_inside\n\t\t ; let res_pat = ConPatOut { pat_con = header,\n\t\t\t \t pat_tvs = [], pat_dicts = [], \n pat_binds = emptyTcEvBinds,\n\t\t\t\t\t pat_args = arg_pats', \n pat_ty = pat_ty',\n pat_wrap = idHsWrapper }\n\n\t\t ; return (mkHsWrapPat wrap res_pat pat_ty, res) }\n\n\t else do -- The general case, with existential, \n -- and local equality constraints\n\t{ let theta' = substTheta tenv (eqSpecPreds eq_spec ++ theta)\n -- order is *important* as we generate the list of\n -- dictionary binders from theta'\n\t no_equalities = not (any isEqPred theta')\n skol_info = case pe_ctxt penv of\n LamPat mc -> PatSkol (RealDataCon data_con) mc\n LetPat {} -> UnkSkol -- Doesn't matter\n \n ; gadts_on <- xoptM Opt_GADTs\n ; families_on <- xoptM Opt_TypeFamilies\n\t; checkTc (no_equalities || gadts_on || families_on)\n\t\t (ptext (sLit \"A pattern match on a GADT requires GADTs or TypeFamilies\"))\n\t\t -- Trac #2905 decided that a *pattern-match* of a GADT\n\t\t -- should require the GADT language flag. 
\n -- Re TypeFamilies see also #7156 \n\n ; given <- newEvVars theta'\n ; (ev_binds, (arg_pats', res))\n\t <- checkConstraints skol_info ex_tvs' given $\n tcConArgs (RealDataCon data_con) arg_tys' arg_pats penv thing_inside\n\n ; let res_pat = ConPatOut { pat_con = header,\n\t\t\t pat_tvs = ex_tvs',\n\t\t\t pat_dicts = given,\n\t\t\t pat_binds = ev_binds,\n\t\t\t pat_args = arg_pats', \n pat_ty = pat_ty',\n pat_wrap = idHsWrapper }\n\t; return (mkHsWrapPat wrap res_pat pat_ty, res)\n\t} }\n\ntcPatSynPat :: PatEnv -> Located Name -> PatSyn\n\t -> TcRhoType \t \t-- Type of the pattern\n\t -> HsConPatDetails Name -> TcM a\n\t -> TcM (Pat TcId, a)\ntcPatSynPat penv (L con_span _) pat_syn pat_ty arg_pats thing_inside\n = do\t{ let (univ_tvs, ex_tvs, prov_theta, req_theta) = patSynSig pat_syn\n arg_tys = patSynArgs pat_syn\n ty = patSynType pat_syn\n\n ; (_univ_tvs', inst_tys, subst) <- tcInstTyVars univ_tvs\n\n\t; checkExistentials ex_tvs penv\n ; (tenv, ex_tvs') <- tcInstSuperSkolTyVarsX subst ex_tvs\n ; let ty' = substTy tenv ty\n arg_tys' = substTys tenv arg_tys\n prov_theta' = substTheta tenv prov_theta\n req_theta' = substTheta tenv req_theta\n\n ; wrap <- coToHsWrapper <$> unifyType ty' pat_ty\n ; traceTc \"tcPatSynPat\" (ppr pat_syn $$\n ppr pat_ty $$\n ppr ty' $$\n ppr ex_tvs' $$\n ppr prov_theta' $$\n ppr req_theta' $$\n ppr arg_tys')\n\n ; prov_dicts' <- newEvVars prov_theta'\n\n -- Using a pattern synonym requires the PatternSynonyms\n -- language flag to keep consistent with #2905\n ; patsyns_on <- xoptM Opt_PatternSynonyms\n\t; checkTc patsyns_on\n (ptext (sLit \"A pattern match on a pattern synonym requires PatternSynonyms\"))\n\n ; let skol_info = case pe_ctxt penv of\n LamPat mc -> PatSkol (PatSynCon pat_syn) mc\n LetPat {} -> UnkSkol -- Doesn't matter\n\n ; req_wrap <- instCall PatOrigin inst_tys req_theta'\n ; traceTc \"instCall\" (ppr req_wrap)\n\n ; traceTc \"checkConstraints {\" empty\n ; (ev_binds, (arg_pats', res))\n <- checkConstraints skol_info 
ex_tvs' prov_dicts' $\n tcConArgs (PatSynCon pat_syn) arg_tys' arg_pats penv thing_inside\n\n ; traceTc \"checkConstraints }\" (ppr ev_binds)\n ; let res_pat = ConPatOut { pat_con = L con_span $ PatSynCon pat_syn,\n\t\t\t pat_tvs = ex_tvs',\n\t\t\t pat_dicts = prov_dicts',\n\t\t\t pat_binds = ev_binds,\n\t\t\t pat_args = arg_pats',\n pat_ty = ty',\n pat_wrap = req_wrap }\n\t; return (mkHsWrapPat wrap res_pat pat_ty, res) }\n\n----------------------------\nmatchExpectedPatTy :: (TcRhoType -> TcM (TcCoercion, a))\n -> TcRhoType -> TcM (HsWrapper, a) \n-- See Note [Matching polytyped patterns]\n-- Returns a wrapper : pat_ty ~ inner_ty\nmatchExpectedPatTy inner_match pat_ty\n | null tvs && null theta\n = do { (co, res) <- inner_match pat_ty\n ; return (coToHsWrapper (mkTcSymCo co), res) }\n \t -- The Sym is because the inner_match returns a coercion\n\t -- that is the other way round to matchExpectedPatTy\n\n | otherwise\n = do { (_, tys, subst) <- tcInstTyVars tvs\n ; wrap1 <- instCall PatOrigin tys (substTheta subst theta)\n ; (wrap2, arg_tys) <- matchExpectedPatTy inner_match (TcType.substTy subst tau)\n ; return (wrap2 <.> wrap1 , arg_tys) }\n where\n (tvs, theta, tau) = tcSplitSigmaTy pat_ty\n\n----------------------------\nmatchExpectedConTy :: TyCon \t -- The TyCon that this data \n\t\t \t\t -- constructor actually returns\n\t\t -> TcRhoType -- The type of the pattern\n\t\t -> TcM (TcCoercion, [TcSigmaType])\n-- See Note [Matching constructor patterns]\n-- Returns a coercion : T ty1 ... tyn ~ pat_ty\n-- This is the same way round as matchExpectedListTy etc\n-- but the other way round to matchExpectedPatTy\nmatchExpectedConTy data_tc pat_ty\n | Just (fam_tc, fam_args, co_tc) <- tyConFamInstSig_maybe data_tc\n \t -- Comments refer to Note [Matching constructor patterns]\n \t -- co_tc :: forall a. 
T [a] ~ T7 a\n = do { (_, tys, subst) <- tcInstTyVars (tyConTyVars data_tc)\n \t -- tys = [ty1,ty2]\n\n ; traceTc \"matchExpectedConTy\" (vcat [ppr data_tc, \n ppr (tyConTyVars data_tc),\n ppr fam_tc, ppr fam_args])\n ; co1 <- unifyType (mkTyConApp fam_tc (substTys subst fam_args)) pat_ty\n \t -- co1 : T (ty1,ty2) ~ pat_ty\n\n ; let co2 = mkTcUnbranchedAxInstCo Nominal co_tc tys\n \t -- co2 : T (ty1,ty2) ~ T7 ty1 ty2\n\n ; return (mkTcSymCo co2 `mkTcTransCo` co1, tys) }\n\n | otherwise\n = matchExpectedTyConApp data_tc pat_ty\n \t -- coi : T tys ~ pat_ty\n\\end{code}\n\nNote [Matching constructor patterns]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nSuppose (coi, tys) = matchExpectedConType data_tc pat_ty\n\n * In the simple case, pat_ty = tc tys\n\n * If pat_ty is a polytype, we want to instantiate it\n This is like part of a subsumption check. Eg\n f :: (forall a. [a]) -> blah\n f [] = blah\n\n * In a type family case, suppose we have\n data family T a\n data instance T (p,q) = A p | B q\n Then we'll have internally generated\n data T7 p q = A p | B q\n axiom coT7 p q :: T (p,q) ~ T7 p q\n \n So if pat_ty = T (ty1,ty2), we return (coi, [ty1,ty2]) such that\n coi = coi2 . 
coi1 : T7 t ~ pat_ty\n coi1 : T (ty1,ty2) ~ pat_ty\n coi2 : T7 ty1 ty2 ~ T (ty1,ty2)\n\n For families we do all this matching here, not in the unifier,\n because we never want a whisper of the data_tycon to appear in\n error messages; it's a purely internal thing\n\n\\begin{code}\ntcConArgs :: ConLike -> [TcSigmaType]\n\t -> Checker (HsConPatDetails Name) (HsConPatDetails Id)\n\ntcConArgs con_like arg_tys (PrefixCon arg_pats) penv thing_inside\n = do\t{ checkTc (con_arity == no_of_args)\t-- Check correct arity\n\t\t (arityErr \"Constructor\" con_like con_arity no_of_args)\n\t; let pats_w_tys = zipEqual \"tcConArgs\" arg_pats arg_tys\n\t; (arg_pats', res) <- tcMultiple tcConArg pats_w_tys\n\t\t\t\t\t penv thing_inside \n\t; return (PrefixCon arg_pats', res) }\n where\n con_arity = conLikeArity con_like\n no_of_args = length arg_pats\n\ntcConArgs con_like arg_tys (InfixCon p1 p2) penv thing_inside\n = do\t{ checkTc (con_arity == 2)\t-- Check correct arity\n (arityErr \"Constructor\" con_like con_arity 2)\n\t; let [arg_ty1,arg_ty2] = arg_tys\t-- This can't fail after the arity check\n\t; ([p1',p2'], res) <- tcMultiple tcConArg [(p1,arg_ty1),(p2,arg_ty2)]\n\t\t\t\t\t penv thing_inside\n\t; return (InfixCon p1' p2', res) }\n where\n con_arity = conLikeArity con_like\n\ntcConArgs con_like arg_tys (RecCon (HsRecFields rpats dd)) penv thing_inside\n = do\t{ (rpats', res) <- tcMultiple tc_field rpats penv thing_inside\n\t; return (RecCon (HsRecFields rpats' dd), res) }\n where\n tc_field :: Checker (HsRecField FieldLabel (LPat Name)) (HsRecField TcId (LPat TcId))\n tc_field (HsRecField field_lbl pat pun) penv thing_inside\n = do { (sel_id, pat_ty) <- wrapLocFstM find_field_ty field_lbl\n\t ; (pat', res) <- tcConArg (pat, pat_ty) penv thing_inside\n\t ; return (HsRecField sel_id pat' pun, res) }\n\n find_field_ty :: FieldLabel -> TcM (Id, TcType)\n find_field_ty field_lbl\n\t= case [ty | (f,ty) <- field_tys, f == field_lbl] of\n\n\t\t-- No matching field; chances are this 
field label comes from some\n\t\t-- other record type (or maybe none). If this happens, just fail,\n -- otherwise we get crashes later (Trac #8570), and similar:\n\t\t--\tf (R { foo = (a,b) }) = a+b\n\t\t-- If foo isn't one of R's fields, we don't want to crash when\n\t\t-- typechecking the \"a+b\".\n\t [] -> failWith (badFieldCon con_like field_lbl)\n\n\t\t-- The normal case, when the field comes from the right constructor\n\t (pat_ty : extras) ->\n\t\tASSERT( null extras )\n\t\tdo { sel_id <- tcLookupField field_lbl\n\t\t ; return (sel_id, pat_ty) }\n\n field_tys :: [(FieldLabel, TcType)]\n field_tys = case con_like of\n RealDataCon data_con -> zip (dataConFieldLabels data_con) arg_tys\n\t -- Don't use zipEqual! If the constructor isn't really a record, then\n\t -- dataConFieldLabels will be empty (and each field in the pattern\n\t -- will generate an error below).\n PatSynCon{} -> []\n\nconLikeArity :: ConLike -> Arity\nconLikeArity (RealDataCon data_con) = dataConSourceArity data_con\nconLikeArity (PatSynCon pat_syn) = patSynArity pat_syn\n\ntcConArg :: Checker (LPat Name, TcSigmaType) (LPat Id)\ntcConArg (arg_pat, arg_ty) penv thing_inside\n = tc_lpat arg_pat arg_ty penv thing_inside\n\\end{code}\n\n\\begin{code}\naddDataConStupidTheta :: DataCon -> [TcType] -> TcM ()\n-- Instantiate the \"stupid theta\" of the data con, and throw \n-- the constraints into the constraint set\naddDataConStupidTheta data_con inst_tys\n | null stupid_theta = return ()\n | otherwise\t = instStupidTheta origin inst_theta\n where\n origin = OccurrenceOf (dataConName data_con)\n\t-- The origin should always report \"occurrence of C\"\n\t-- even when C occurs in a pattern\n stupid_theta = dataConStupidTheta data_con\n tenv = mkTopTvSubst (dataConUnivTyVars data_con `zip` inst_tys)\n \t -- NB: inst_tys can be longer than the univ tyvars\n\t -- because the constructor might have existentials\n inst_theta = substTheta tenv stupid_theta\n\\end{code}\n\nNote [Arrows and 
patterns]\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n(Oct 07) Arrow notation has the odd property that it involves \n"holes in the scope". For example:\n expr :: Arrow a => a () Int\n expr = proc (y,z) -> do\n x <- term -< y\n expr' -< x\n\nHere the 'proc (y,z)' binding scopes over the arrow tails but not the\narrow body (e.g. 'term'). As things stand (bogusly) all the\nconstraints from the proc body are gathered together, so constraints\nfrom 'term' will be seen by the tcPat for (y,z). But we must *not*\nbind constraints from 'term' here, because the desugarer will not make\nthese bindings scope over 'term'.\n\nThe Right Thing is not to confuse these constraints together. But for\nnow the Easy Thing is to ensure that we do not have existential or\nGADT constraints in a 'proc', and to short-cut the constraint\nsimplification for such vanilla patterns so that it binds no\nconstraints. Hence the 'fast path' in tcConPat; but it's also a good\nplan for ordinary vanilla patterns to bypass the constraint\nsimplification step.\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\t\tNote [Pattern coercions]\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\nIn principle, these programs would be reasonable:\n\t\n\tf :: (forall a. a->a) -> Int\n\tf (x :: Int->Int) = x 3\n\n\tg :: (forall a. [a]) -> Bool\n\tg [] = True\n\nIn both cases, the function type signature restricts what arguments can be passed\nin a call (to polymorphic ones). The pattern type signature then instantiates this\ntype. For example, in the first case, (forall a. a->a) <= Int -> Int, and we\ngenerate the translated term\n\tf = \\x' :: (forall a. a->a). 
let x = x' Int in x 3\n\nFrom a type-system point of view, this is perfectly fine, but it's *very* seldom useful.\nAnd it requires a significant amount of code to implement, because we need to decorate\nthe translated pattern with coercion functions (generated from the subsumption check \nby tcSub). \n\nSo for now I'm just insisting on type *equality* in patterns. No subsumption. \n\nOld notes about desugaring, at a time when pattern coercions were handled:\n\nA SigPat is a type coercion and must be handled one at a time. We can't\ncombine them unless the type of the pattern inside is identical, and we don't\nbother to check for that. For example:\n\n\tdata T = T1 Int | T2 Bool\n\tf :: (forall a. a -> a) -> T -> t\n\tf (g::Int->Int) (T1 i) = T1 (g i)\n\tf (g::Bool->Bool) (T2 b) = T2 (g b)\n\nWe desugar this as follows:\n\n\tf = \\ g::(forall a. a->a) t::T ->\n\t let gi = g Int\n\t in case t of { T1 i -> T1 (gi i)\n\t\t\t other ->\n\t let\tgb = g Bool\n\t in case t of { T2 b -> T2 (gb b)\n\t\t\t other -> fail }}\n\nNote that we do not treat the first column of patterns as a\ncolumn of variables, because the coerced variables (gi, gb)\nwould be of different types. 
So we get rather grotty code.\nBut I don't think this is a common case, and if it was we could\ndoubtless improve it.\n\nMeanwhile, the strategy is:\n\t* treat each SigPat coercion (always non-identity coercions)\n\t\tas a separate block\n\t* deal with the stuff inside, and then wrap a binding round\n\t\tthe result to bind the new variable (gi, gb, etc)\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\\subsection{Errors and contexts}\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\nmaybeWrapPatCtxt :: Pat Name -> (TcM a -> TcM b) -> TcM a -> TcM b\n-- Not all patterns are worth pushing a context\nmaybeWrapPatCtxt pat tcm thing_inside \n | not (worth_wrapping pat) = tcm thing_inside\n | otherwise = addErrCtxt msg $ tcm $ popErrCtxt thing_inside\n \t\t\t -- Remember to pop before doing thing_inside\n where\n worth_wrapping (VarPat {}) = False\n worth_wrapping (ParPat {}) = False\n worth_wrapping (AsPat {}) = False\n worth_wrapping _ \t = True\n msg = hang (ptext (sLit \"In the pattern:\")) 2 (ppr pat)\n\n-----------------------------------------------\ncheckExistentials :: [TyVar] -> PatEnv -> TcM ()\n\t -- See Note [Arrows and patterns]\ncheckExistentials [] _ = return ()\ncheckExistentials _ (PE { pe_ctxt = LetPat {}}) = failWithTc existentialLetPat\ncheckExistentials _ (PE { pe_ctxt = LamPat ProcExpr }) = failWithTc existentialProcPat\ncheckExistentials _ (PE { pe_lazy = True }) = failWithTc existentialLazyPat\ncheckExistentials _ _ = return ()\n\nexistentialLazyPat :: SDoc\nexistentialLazyPat\n = hang (ptext (sLit \"An existential or GADT data constructor cannot be used\"))\n 2 (ptext (sLit \"inside a lazy (~) pattern\"))\n\nexistentialProcPat :: SDoc\nexistentialProcPat \n = ptext (sLit \"Proc patterns cannot use existential or GADT data constructors\")\n\nexistentialLetPat :: SDoc\nexistentialLetPat\n = vcat [text \"My brain just 
exploded\",\n\t text \"I can't handle pattern bindings for existential or GADT data constructors.\",\n\t text \"Instead, use a case-expression, or do-notation, to unpack the constructor.\"]\n\nbadFieldCon :: ConLike -> Name -> SDoc\nbadFieldCon con field\n = hsep [ptext (sLit \"Constructor\") <+> quotes (ppr con),\n\t ptext (sLit \"does not have field\"), quotes (ppr field)]\n\npolyPatSig :: TcType -> SDoc\npolyPatSig sig_ty\n = hang (ptext (sLit \"Illegal polymorphic type signature in pattern:\"))\n 2 (ppr sig_ty)\n\nlazyUnliftedPatErr :: OutputableBndr name => Pat name -> TcM ()\nlazyUnliftedPatErr pat\n = failWithTc $\n hang (ptext (sLit \"A lazy (~) pattern cannot contain unlifted types:\"))\n 2 (ppr pat)\n\\end{code}\n","avg_line_length":39.0328151986,"max_line_length":96,"alphanum_fraction":0.6321681416} {"size":26173,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"---\ntitle: Monad Transformers\n---\n\n> {-# LANGUAGE TypeSynonymInstances, FlexibleContexts, NoMonomorphismRestriction, OverlappingInstances, FlexibleInstances #-}\n\n> import Control.Monad.Error\n> import Control.Monad.State\n> import Control.Monad.Writer\n> import Debug.Trace\n> import Control.Applicative \n\nMonads Can Do Many Things\n=========================\n\nLets recall the simple language of divisions. \n\n> data Expr = Val Int\n> | Div Expr Expr\n> deriving (Show)\n\nToday, we will see how monads can be used to write (and compose) \n*evaluators* for such languages.\n\nRemember the vanilla *unsafe* evaluator \n\n> eval :: Expr -> Int\n> eval (Val n) = n\n> eval (Div x y) = eval x `div` eval y\n\nHere are two terms that we will use as running examples. 
\n\n> ok = Div (Div (Val 1972) (Val 2)) (Val 23)\n> err = Div (Val 2) (Div (Val 1) (Div (Val 2) (Val 3)))\n\nThe first evaluates properly and returns a valid answer, \nand the second fails with a divide-by-zero exception.\n\n~~~~~{.haskell}\nghci> eval ok \n42\n\nghci> eval err\n*** Exception: divide by zero\n~~~~~\n\nWe didn't like this `eval` because it can just blow up \nwith a divide by zero error without telling us how it \nhappened. Worse, the error is a *radioactive* value\nthat, spread unchecked through the entire computation.\n\nWe used the `Maybe` type to capture the failure case: a\n`Nothing` result meant that an error happened somewhere,\nwhile a `Just n` result meant that evaluation succeeded\nyielding `n`. Morever, we saw how the `Maybe` monad could\nbe used to avoid ugly case-split-staircase-hell.\n\n> evalMaybe :: Expr -> Maybe Int\n> evalMaybe (Val n) = return n\n> evalMaybe (Div x y) = do n <- evalMaybe x\n> m <- evalMaybe y\n> if m == 0 \n> then Nothing\n> else return (n `div` m)\n\n> evalExn (Val n) = return n\n> evalExn (Div x y) = do n <- evalExn x\n> m <- evalExn y\n> if m == 0 \n> then throwExn \"EEKES DIVIDED BY ZERO\" \n> else return (n `div` m)\n\n> throwExn = Exn\n\n\n\nwhich behaves thus\n\n~~~~~{.haskell}\nghci> evalMaybe ok \nJust 42\n\nghci> evalMaybe err\nNothing\n~~~~~\n\n\nError Handling Via Exception Monads\n-----------------------------------\n\nThe trouble with the above is that it doesn't let us know\n*where* the divide by zero occurred. 
It would be nice to \nhave an *exception* mechanism where, when the error occurred,\nwe could just saw `throw x` for some value `x` which would, \nlike an exception go rocketing back to the top and tell us\nwhat the problem was.\n\nIf you think for a moment, you'll realize this is but a small\ntweak on the `Maybe` type; all we need is to jazz up the \n`Nothing` constructor so that it carries the exception value.\n\n> data Exc a = Exn String\n> | Result a\n> deriving (Show)\n\n\n\n\n\nHere the `Exn` is like `Nothing` but it carries a string \ndenoting what the exception was. We can make the above a \n`Monad` much like the `Maybe` monad.\n\n> instance Monad Exc where\n> (Exn s ) >>= _ = Exn s\n> (Result x) >>= f = f x\n> return = Result \n\n> instance Functor Exc where\n> fmap f (Exn s) = Exn s\n> fmap f (Result x) = Result (f x) \n\n**Throwing Exceptions**\n\nLet's write a function to `throw` an exception \n\n~~~~~{.haskell}\nthrow = Exn\n~~~~~\n\nand now, we can use our newly minted monad to write \na better exception throwing evaluator\n\n> evalExc :: Expr -> Exc Int\n> evalExc (Val n) = return n\n> evalExc (Div x y) = do n <- evalExc x\n> m <- evalExc y\n> if m == 0 \n> then throw $ errorS y m \n> else return $ n `div` m\n\nwhere the sidekick `errorS` generates the error string. \n\n> errorS y m = \"Error dividing by \" ++ show y ++ \" = \" ++ show m\n\nNote that this is essentially like the first evaluator; \ninstead of bailing with `Nothing` we return some (hopefully)\nhelpful message, but the monad takes care of ensuring that \nthe exception is shot back up.\n\n~~~~~{.haskell}\nghci> evalExc ok \nResult 42\n\nghci> evalExc err\nExn \"Error dividing by Div (Val 2) (Val 3) = 0\"\n~~~~~\n\n\n**Catching Exceptions**\n\nIts all well and good to *throw* an exception, but it would be nice if we\ncould gracefully *catch* them as well. 
For example, wouldn't it be nice if\nwe could write a function like this:\n\n> evalExcc :: Expr -> Exc (Maybe Int)\n> evalExcc e = tryCatch (Just <$> (evalExc e)) $ \\err -> \n> return (trace (\"oops, caught an exn\" ++ err) Nothing)\n\nThus, in `evalExcc` we have just *caught* the exception to return a `Maybe` value\nin the case that something went wrong. Not the most sophisticated form of\nerror handling, but you get the picture.\n\nQUIZ \n----\n\nWhat should the **type** of `tryCatch` be?\n\na. `Exc a -> (a -> Exc b) -> Exc b`\nd. `Exc a -> (String -> a) -> a`\ne. None of the above\n\n~~~~~{.haskell}\n\n\n\n\n\n~~~~~\n\n\n\n> tryCatch :: Exc a -> (String -> Exc a) -> Exc a\n\nAnd next, lets write it!\n\n> tryCatch (Exn err) f = f err\n> tryCatch r@(Result _) _ = r \n\nAnd now, we can run it of course...\n\n~~~~~{.haskell}\nghci> evalExcc ok\nResult (Just 42)\n\nghci> evalExcc err\nCaught Error: Error dividing by Div (Val 2) (Val 3) = 0\nResult Nothing\n~~~~~\n\n\n\n\n\nProfiling Operations Via State Monads\n-------------------------------------\n\nNext, lets stop being so paranoid about errors and instead \ntry to do some **profiling**. 
Lets imagine that the `div` \noperator is very expensive, and that we would like to \n*count* the number of divisions that are performed while\nevaluating a particular expression.\n\nAs you might imagine, our old friend the state-transformer \nmonad is likely to be of service here!\n\n> type StateST = Int\n> data ST a = S (StateST -> (a, StateST))\n\n\n> instance Monad ST where\n> return x = S $ \\s -> (x, s)\n> (S st) >>= f = S $ \\s -> let (x, s') = st s \n> S st' = f x\n> in st' s'\n\n\nNext, lets write the useful `runStateST` which executes the monad from \nan initial state, `getST` and `putST` which allow us to access and\nmodify the state, respectively.\n\n\n~~~~~{.haskell}\ngetST = S (\\s -> (s, s))\nputST = \\s' -> S (\\_ -> ((), s'))\n~~~~~\n\n\nArmed with the above, we can write a function\n\n\n> tickST = do n <- getST\n> putST (n+1)\n\nNow, we can write a profiling evaluator\n\n\n> evalST :: Expr -> ST Int\n> evalST (Val n) = return n\n> evalST (Div x y) = do n <- evalST x\n> m <- evalST y\n> tickST\n> return (n `div` m)\n\nand by judiciously making the above an instance of `Show`\n\n> instance Show a => Show (ST a) where\n> show (S st) = \"value: \" ++ show x ++ \", count: \" ++ show s\n> where (x, s) = st 0\n\t \nwe can get observe our profiling evaluator at work\n\n~~~~~{.haskell}\nghci> evalST ok\nvalue: 42, count: 2\n~~~~~\n\nBut, alas, to get the profiling we threw out the nifty \nerror handling that we had put in earlier\n\n~~~~~{.haskell}\nghci> evalST err \nvalue: *** Exception: divide by zero\n~~~~~\n\n(Hmm. Why does it print `value` this time around?)\n\n\nTransformers: Making Monads Multitask\n=====================================\n\nSo it looks like Monads can do many thigs, but only \n*one thing at a time* -- you can either use a monad \nto do the error management plumbing *OR* to do the \nstate manipulation plumbing, but not at the same time. \nIs it too much ask for both? 
I guess we could write a \n*mega-state-and-exception* monad that supports the \noperations of both, but that doesn't sound like any \nfun at all! Worse, if later we decide to add yet \nanother feature, then we would have to make up yet \nanother mega-monad. \n\n \n \n\n\nWe shall take a different approach, where we will keep\n*wrapping* or decorating monads with extra features, so \nthat we can take a simple monad, and then add the \nException monad's features to it, and then add the \nState monad's features and so on. \n\nThe key to doing this is to not define exception \nhandling, state passing etc as monads, but as\n**functions from monads to monads.** \nThis will require a little more work up-front \n(most of which is done already in well-designed libraries)\nbut after that we can add new features in a modular manner.\nFor example, to get a mega state- and exception- monad,\nwe will start with a dummy `Identity` monad, apply it to \nthe `StateT` monad transformer (which yields state-passing monad)\nand pass the result to the `ExcT` monad transformer which yields\nthe desired mega monad. Incidentally, the above should remind \nsome of you of the [Decorator Design Pattern][2] and others \nof [Python's Decorators][3].\n\nConcretely, we will develop mega-monads in *four* steps:\n\n- **Step 1: Description** First we will define typeclasses that describe the\n *enhanced* monads, i.e. by describing their *extra* operations,\n\n- **Step 2: Use** Second we will see how to write functions that *use* the mega\n monads, simply by using a combination of their features -- here the\n functions' type signatures will list all the constraints on the\n corresponding monad,\n\nNext, we need to **create** monads with the special features. 
We will do\nthis by starting with a basic *powerless* monad, and then\n\n- **Step 3: Add Features** thereby adding extra operations to the simpler\n monad to make it more powerful, and\n\n- **Step 4: Preserve Features** We will make sure that the addition of\n features allows us to *hold onto* the older features, so that at the\n end, we get a mega monad that is just the *accumulation* of all the\n added features.\n\nNext, lets look at each step in turn.\n\nStep 1: Describing Monads With Special Features\n-----------------------------------------------\n\nThe first step to being able to compose monads is to \ndefine typeclasses that describe monads armed with \nthe special features. For example, the notion of an \n*exception monad* is captured by the typeclass\n\n> class Monad m => MonadExc m where\n> throw :: String -> m a \n\nwhich corresponds to monads that are also equipped with \nan appropriate `throw` function (you can add a `catch` \nfunction too, if you like!) Indeed, we can make `Exc` an\ninstance of the above by\n\n> instance MonadExc Exc where \n> throw = Exn\n\nI urge you to directly enter the body of `evalExc` above into \nGHCi and see what type is inferred for it!\n\nSimilarly, we can bottle the notion of a *state(-transforming) \nmonad* in the typeclass\n\n> class Monad m => MonadST m where\n> runStateST :: m a -> StateST -> m (a, StateST)\n> getST :: m StateST \n> putST :: StateST -> m ()\n\nwhich corresponds to monads that are kitted out with the\nappropriate execution, extraction and modification functions.\nNeedless to say, we can make `ST` an instance of the above by\n\n> instance MonadST ST where\n> runStateST (S f) = return . 
f \n> getST = S (\\s -> (s, s))\n> putST = \\s' -> S (\\_ -> ((), s'))\n\nOnce again, if you know what's good for you, enter the body of\n`evalST` into GHCi and see what type is inferred.\n\nStep 2: Using Monads With Special Features\n------------------------------------------\n\nArmed with these two typeclasses, we can write our evaluator\nquite easily\n\n> evalMega (Val n) = return n\n> evalMega (Div x y) = do n <- evalMega x\n> m <- evalMega y\n> tickST\n> if m == 0 \n> then throw $ errorS y m \n> else return $ n `div` m\n\nQUIZ\n----\n\nWhat is the type of `evalMega`?\n\na. `Expr -> ST Int`\nb. `Expr -> Exc Int`\nc. `(MonadST m) => Expr -> m Int`\nd. `(MonadExc m) => Expr -> m Int`\ne. None of the above\n\n~~~~~{.haskell}\n\n\n\n\n\n\n~~~~~\n\n\nNote that it is simply the combination of the two evaluators\nfrom before -- we use the `throw` from `evalExc` and the \n`tickST` from `evalST`. Meditate for a moment on the type of the \nabove evaluator; note that it works with *any monad* that \nis **both** an exception- and a state- monad! \n\nIndeed, if, as I exhorted you to, you had gone back and studied the types of\n`evalST` and `evalExc`, you would find that each of those functions required the\nunderlying monad to be a state-manipulating and exception-handling monad\nrespectively. In contrast, the above evaluator simply demands both features.\n\n**Next:** But, but, but ... how do we create monads with **both** features?\n\n\nStep 3: Injecting Special Features into Monads\n----------------------------------------------\n\nTo *add* special features to existing monads, we will use\n*monad transformers*, which are type operators `t` that \nmap a monad `m` to a monad `t m`. The key ingredient of 
The key ingredient of\na transformer is that it must have a function `promote`\nthat can take an `m` value (i.e., an action) and turn it into a\n`t m` value (i.e., an action):\n\n> class Transformer t where\n>   promote :: Monad m => m a -> (t m) a\n\nNow, that just defines the *type* of a transformer; let's see\nsome real transformers!\n\n**A Transformer For Exceptions**\n\nConsider the following type\n\n> newtype ExcT m a = MkExc (m (Exc a))\n\nIt is simply a type with two parameters -- the first\nis a monad `m` inside which we will put the exception\nmonad `Exc a`. In other words, the `ExcT m a` simply\n*injects* the `Exc a` monad *into* the value slot\nof the `m` monad.\n\nIt is easy to formally state that the above is a\nbona fide transformer\n\n> instance Transformer ExcT where\n>   promote = MkExc . promote_\n\nwhere the generic `promote_` function simply injects\nthe value from the outer monad `m` into the inner\nmonad `m1`:\n\n> promote_ :: (Monad m, Monad m1) => m t -> m (m1 t)\n> promote_ m = do\n>   x <- m\n>   return $ return x\n\nConsequently, any operation on the input monad `m` can be\ndirectly promoted into an action on the transformed monad,\nand so the transformation *preserves* all the operations\non the original monad.\n\nNow, the real trick is twofold: we ensure that if `m`\nis a monad, then the transformed `ExcT m` is an\n*exception monad*, that is, a `MonadExc`.\n\nFirst, we show that the transformer's output is a monad:\n\n> instance Monad m => Monad (ExcT m) where\n>   return x = promote $ return x\n>   p >>= f  = MkExc $ strip p >>= r\n>     where\n>       r (Result x)    = strip $ f x\n>       r (Exn s)       = return $ Exn s\n>       strip (MkExc m) = m\n\nand next we ensure that the transformer is an\nexception monad by equipping it with `throw`\n\n> instance Monad m => MonadExc (ExcT m) where\n>   throw s = MkExc $ return $ Exn s\n\n**A Transformer For State**\n\nNext, we will build a transformer for the state monad,\nfollowing, more or less, the recipe for exceptions. 
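To get a concrete feel for what the generic injection above does, here is a tiny standalone demonstration (the name `promoteUnder` is local to this snippet, so as not to clash with the `promote_` defined earlier): instantiating the outer monad to lists and the inner one to `Maybe`.

```haskell
-- Same shape as promote_: run the outer-monad action, then wrap its
-- result with the inner monad's return.
promoteUnder :: (Monad m, Monad n) => m a -> m (n a)
promoteUnder m = do
  x <- m
  return (return x)

-- With m = [] and n = Maybe, every element gets wrapped in Just while
-- the outer list structure is left untouched.
demoPromote :: [Maybe Int]
demoPromote = promoteUnder [1, 2, 3]
```

So injection really does leave the outer monad's behaviour alone; it only decorates the values inside it.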
Here\nis the type for the transformer\n\n> newtype STT m a = MkSTT (StateST -> m (a, StateST))\n\nThus, in effect, the enhanced monad is a state transformer: it takes the\ninput state, performs the state-update inside the parameter monad `m`,\nand returns the result and the new state wrapped inside that monad.\n\n> instance Transformer STT where\n>   promote f = MkSTT $ \\s -> do\n>     x <- f\n>     return (x, s)\n\nNext, we ensure that the transformer output is a monad:\n\n> instance Monad m => Monad (STT m) where\n>   return  = promote . return\n>   m >>= f = MkSTT $ \\s -> do\n>       (x, s') <- strip m s\n>       strip (f x) s'\n>     where strip (MkSTT f) = f\n\nand next we ensure that the transformer is a state\nmonad by equipping it with the operations from `MonadST`\n\n> instance Monad m => MonadST (STT m) where\n>   -- runStateST :: STT m a -> StateST -> STT m (a, StateST)\n>   runStateST (MkSTT f) s = MkSTT $ \\s0 -> do\n>     (x, s') <- f s\n>     return ((x, s'), s0)\n>\n>   -- getST :: STT m StateST\n>   -- getST :: MkSTT (StateST -> m (StateST, StateST))\n>   getST = MkSTT $ \\s -> return (s, s)\n>\n>   -- putST :: StateST -> STT m ()\n>   -- putST :: StateST -> MkSTT (StateST -> m ((), StateST))\n>   putST s = MkSTT (\\_ -> return ((), s))\n\nStep 4: Preserving Old Features of Monads\n-----------------------------------------\n\nOf course, we must make sure that the original features\nof the monads are not lost in the transformed monads.\nFor this purpose, we will just use the `promote`\noperation to directly transfer operations from\nthe old monad into the transformed monad.\n\nThus, we can ensure that if a monad was already\na state-manipulating monad, then the result of\nthe exception-transformer is *also* a\nstate-manipulating monad.\n\n> instance MonadExc m => MonadExc (STT m) where\n>   throw s = promote (throw s)\n\n> instance MonadST m => MonadST (ExcT m) where\n>   getST = promote getST\n>   putST = promote . 
putST\n>   runStateST (MkExc m) s = MkExc $ do\n>       (ex, s') <- runStateST m s\n>       case ex of\n>         Result x -> return $ Result (x, s')\n>         Exn err  -> return $ Exn err\n\nStep 5: Whew! Put together and Run\n----------------------------------\n\nFinally, we can put all the pieces together and run the transformers.\nWe could *order* the transformations differently (and that can have\ndifferent consequences on the output, as we will see).\n\n> evalExSt :: Expr -> STT Exc Int\n> evalExSt = evalMega\n>\n> evalStEx :: Expr -> ExcT ST Int\n> evalStEx = evalMega\n\nwhich we can run as\n\n~~~~~{.haskell}\nghci> d1\nExn:Error dividing by Div (Val 2) (Val 3) = 0\n\nghci> evalStEx ok\nCount: 2\nResult 42\n\nghci> evalStEx err\nCount: 2\nExn \"Error dividing by Div (Val 2) (Val 3) = 0\"\n\nghci> evalExSt ok\nCount:2\nResult: 42\n\nghci> evalExSt err\nExn:Error dividing by Div (Val 2) (Val 3) = 0\n~~~~~\n\nwhere the rendering functions are\n\n> instance Show a => Show (STT Exc a) where\n>   show (MkSTT f) = case (f 0) of\n>     Exn s           -> \"Exn:\" ++ s ++ \"\\n\"\n>     Result (v, cnt) -> \"Count:\" ++ show cnt ++ \"\\n\" ++\n>                        \"Result: \" ++ show v ++ \"\\n\"\n>\n> instance Show a => Show (ExcT ST a) where\n>   show (MkExc (S f)) = \"Count: \" ++ show cnt ++ \"\\n\" ++ show r ++ \"\\n\"\n>     where (r, cnt) = f 0\n\n\nThe Monad Transformer Library\n=============================\n\nWhile it is often *instructive* to roll your own versions\nof code, as we did above, in practice you should reuse as\nmuch as you can from standard libraries.\n\n\nError Monads and Transformers\n-----------------------------\n\nThe above sauced-up exception-tracking version of `Maybe`\nalready exists in the standard type [Either][1]\n\n~~~~~{.haskell}\nghci> :info Either\ndata Either a b = Left a | Right b \t-- Defined in Data.Either\n~~~~~\n\nThe `Either` type is a generalization of our `Exc` type,\nwhere the exception is polymorphic, rather than just being\na `String`. 
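As a quick standalone illustration (the `safeDiv` helper is invented for this snippet, not taken from the lecture), `Left` short-circuits a computation exactly the way `Exn` does:

```haskell
-- Left plays the role of Exn: once an exception is "thrown", the
-- rest of the do-block is skipped.
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "divide by zero"
safeDiv n d = Right (n `div` d)

demoFail, demoOk :: Either String Int
demoFail = do { x <- safeDiv 10 2; safeDiv x 0 }    -- short-circuits to Left
demoOk   = do { x <- safeDiv 10 2; safeDiv 100 x }  -- succeeds
```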
In other words the hand-rolled `Exc a` corresponds \nto the standard `Either String a` type.\n\nThe standard [MonadError][6] typeclass corresponds directly with\n`MonadExc` developed above.\n\n~~~~~{.haskell}\nghci> :info MonadError\nclass (Monad m) => MonadError e m | m -> e where\n throwError :: e -> m a\n catchError :: m a -> (e -> m a) -> m a\n \t-- Defined in Control.Monad.Error.Class\ninstance (Monad m, Error e) => MonadError e (ErrorT e m)\n -- Defined in Control.Monad.Error\ninstance (Error e) => MonadError e (Either e)\n -- Defined in Control.Monad.Error\ninstance MonadError IOError IO -- Defined in Control.Monad.Error\n~~~~~\n\nNote that `Either String` is an instance of `MonadError` much \nlike `Exc` is an instance of `MonadExc`. Finally, the `ErrorT`\ntransformer corresponds to the `ExcT` transformer developed above\nand its output is guaranteed to be an instance of `MonadError`.\n\nState Monads and Transformers\n-----------------------------\n\nSimilarly, the `ST` monad that we wrote above is but a pale reflection \nof the more general [State][4] monad. \n\n~~~~~{.haskell}\nghci> :info State\nnewtype State s a = State {runState :: s -> (a, s)}\n \t-- Defined in Control.Monad.State.Lazy\n~~~~~\n\nThe `MonadST` typeclass that we developed above corresponds directly \nwith the standard [MonadState][5] typeclass.\n\n~~~~~{.haskell}\nghci> :info MonadState\nclass (Monad m) => MonadState s m | m -> s where\n get :: m s\n put :: s -> m ()\n \t-- Defined in Control.Monad.State.Class\n\ninstance (Monad m) => MonadState s (StateT s m)\n -- Defined in Control.Monad.State.Lazy\n\ninstance MonadState s (State s)\n -- Defined in Control.Monad.State.Lazy\n~~~~~\n\nNote that `State s` is already an instance of `MonadState` much \nlike `ST` is an instance of `MonadST`. 
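As a standalone warm-up with the standard interface (this snippet assumes the `mtl` package, which the examples below use anyway; the names are local), the lecture's division counter can be written directly against `State`:

```haskell
import Control.Monad.State

-- Two ticks against the standard State monad: read the counter with
-- get, write it back incremented with put.
tickTwice :: State Int ()
tickTwice = do
  n <- get
  put (n + 1)
  m <- get
  put (m + 1)

-- runState returns both the result and the final state.
demoState :: ((), Int)
demoState = runState tickTwice 0
```

Starting from `0`, the final state is `2`, just as `runStateST` would report for the hand-rolled `ST`.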
Finally, the `StateT`\ntransformer corresponds to the `STT` transformer developed above\nand its output is guaranteed to be an instance of `MonadState`.\n\nThus, if we stick with the standard libraries, we can simply write\n\n> tick = do {n <- get; put (n+1)}\n\n> eval1 (Val n)   = return n\n> eval1 (Div x y) = do\n>   n <- eval1 x\n>   m <- eval1 y\n>   if m == 0\n>     then throwError $ errorS y m\n>     else do\n>       tick\n>       return $ n `div` m\n\n> evalSE :: Expr -> StateT Int (Either String) Int\n> evalSE = eval1\n\n~~~~~{.haskell}\nghci> runStateT (evalSE ok) 0\nRight (42,2)\n\nghci> runStateT (evalSE err) 0\nLeft \"Error dividing by Div (Val 2) (Val 3) = 0\"\n~~~~~\n\nYou can stack them in the other order if you prefer\n\n> evalES :: Expr -> ErrorT String (State Int) Int\n> evalES = eval1\n\nwhich will yield a different result\n\n~~~~~{.haskell}\nghci> runState (runErrorT (evalES ok)) 0\n(Right 42,2)\n\nghci> runState (runErrorT (evalES err)) 0\n(Left \"Error dividing by Div (Val 2) (Val 3) = 0\",2)\n~~~~~\n\nNote that we actually get the division count (up to the point of\nfailure) even when the computation bails.\n\n\nTracing Operations Via Logger Monads\n------------------------------------\n\nNext, we will spice up our computations to also *log* messages (a *pure*\nvariant of the usual method where we just *print* the messages to the\nscreen.) 
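As a standalone preview of the logging machinery (again assuming the `mtl` package; all names here are local to this snippet), `tell` appends to a monoidal log and `runWriter` hands back the result together with the accumulated log:

```haskell
import Control.Monad.Writer

-- tell appends to the log; the log type only needs to be a Monoid,
-- here a list of messages.
logged :: Writer [String] Int
logged = do
  tell ["starting"]
  let x = 21 * 2
  tell ["computed " ++ show x]
  return x

-- runWriter returns (result, log).
demoWriter :: (Int, [String])
demoWriter = runWriter logged
```

No `IO` is involved anywhere: the log is just an ordinary value threaded through the computation.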
This can be done with the standard [Writer][7] monad, which\nsupports a `tell` action that logs the string you want (and allows you to\nlater view the entire log of the computation).\n\nTo accommodate logging, we juice up our evaluator directly as\n\n> eval2 v =\n>   case v of\n>     Val n   -> do\n>       tell $ msg v n\n>       return n\n>     Div x y -> do\n>       n <- eval2 x\n>       m <- eval2 y\n>       if m == 0\n>         then throwError $ errorS y m\n>         else do\n>           tick\n>           tell $ msg v (n `div` m)\n>           return $ n `div` m\n\nwhere the `msg` function is simply\n\n> msg t r = \"term: \" ++ show t ++ \", yields \" ++ show r ++ \"\\n\"\n\nNote that the only additions to the previous evaluator are the `tell`\noperations! We can run the above using\n\n> evalWSE :: Expr -> WSE Int\n> evalWSE = eval2\n\nwhere `WSE` is a type abbreviation\n\n> type WSE a = WriterT String (StateT Int (Either String)) a\n\nThat is, we simply use the `WriterT` transformer to decorate the underlying\nmonad that carries the state and exception information.\n\n~~~~~{.haskell}\nghci> runStateT (runWriterT (evalWSE ok)) 0\nRight ((42,\"term: Val 1972, yields 1972\\nterm: Val 2, yields 2\\nterm: Div (Val 1972) (Val 2), yields 986\\nterm: Val 23, yields 23\\nterm: Div (Div (Val 1972) (Val 2)) (Val 23), yields 42\\n\"),2)\n\nghci> runStateT (runWriterT (evalWSE err)) 0\nLeft \"Error dividing by Div (Val 2) (Val 3) = 0\"\n~~~~~\n\nThat looks a bit ugly, so we can write our own pretty-printer\n\n> instance Show a => Show (WSE a) where\n>   show m = case runStateT (runWriterT m) 0 of\n>     Left s            -> \"Error: \" ++ s\n>     Right ((v, w), s) -> \"Log:\\n\" ++ w ++ \"\\n\" ++\n>                          \"Count: \" ++ show s ++ \"\\n\" ++\n>                          \"Value: \" ++ show v ++ \"\\n\"\n\nafter which we get\n\n~~~~~{.haskell}\nghci> print $ evalWSE ok\n\nLog:\nterm: Val 1972, yields 1972\nterm: Val 2, yields 2\nterm: Div (Val 1972) (Val 2), yields 986\nterm: Val 23, yields 23\nterm: Div (Div (Val 1972) (Val 2)) (Val 23), yields 42\n\nCount: 2\nValue: 42\n\nghci> print $ evalWSE err\nError: Error 
dividing by Div (Val 2) (Val 3) = 0\n~~~~~\n\n*How come we didn't get any log in the error case?*\n\nThe answer lies in the *order* in which we compose the transformers;\nsince the error wraps the log, if the computation fails, the log gets\nthrown away. Instead, we can just wrap the other way around\n\n> type ESW a = ErrorT String (StateT Int (Writer String)) a\n>\n> evalESW :: Expr -> ESW Int\n> evalESW = eval2\n\nafter which everything works just fine!\n\n~~~~~{.haskell}\nghci> evalESW err\nLog:\nterm: Val 2, yields 2\nterm: Val 1, yields 1\nterm: Val 2, yields 2\nterm: Val 3, yields 3\nterm: Div (Val 2) (Val 3), yields 0\n\nCount: 1\nError: Error dividing by Div (Val 2) (Val 3) = 0\n~~~~~\n\n> instance Show a => Show (ESW a) where\n>   show m = \"Log:\\n\" ++ log ++ \"\\n\" ++\n>            \"Count: \" ++ show cnt ++ \"\\n\" ++\n>            result\n>     where\n>       ((res, cnt), log) = runWriter (runStateT (runErrorT m) 0)\n>       result = case res of\n>         Left s  -> \"Error: \" ++ s\n>         Right v -> \"Value: \" ++ show v\n\nMoral of the story\n------------------\n\nThere are many useful monads, and if you play your cards right, Haskell\nwill let you *stack* them nicely on top of each other, so that you can get\n*mega-monads* that have all the powers of the individual monads. 
See for\nyourself in [Homework 3][8].\n\n\n[1]: http:\/\/hackage.haskell.org\/packages\/archive\/base\/latest\/doc\/html\/Prelude.html#t:Either\n[2]: http:\/\/oreilly.com\/catalog\/hfdesignpat\/chapter\/ch03.pdf\n[3]: http:\/\/en.wikipedia.org\/wiki\/Python_syntax_and_semantics#Decorators\n[4]: http:\/\/hackage.haskell.org\/packages\/archive\/mtl\/latest\/doc\/html\/Control-Monad-State-Lazy.html#v:state\n[5]: http:\/\/hackage.haskell.org\/packages\/archive\/mtl\/latest\/doc\/html\/Control-Monad-State-Class.html#t:MonadState\n[6]: http:\/\/hackage.haskell.org\/packages\/archive\/mtl\/latest\/doc\/html\/Control-Monad-Error-Class.html#t:MonadError\n[7]: http:\/\/hackage.haskell.org\/packages\/archive\/transformers\/latest\/doc\/html\/Control-Monad-Trans-Writer-Lazy.html#t:Writer\n[8]: homeworks\/hw3.html \n","avg_line_length":29.9805269187,"max_line_length":192,"alphanum_fraction":0.6342413938} {"size":88876,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"\\begin{code}\nmodule TcInteract ( \n solveInteract, solveInteractGiven, solveInteractWanted,\n AtomicInert, tyVarsOfInert, \n InertSet, emptyInert, updInertSet, extractUnsolved, solveOne,\n ) where \n\n#include \"HsVersions.h\"\n\n\nimport BasicTypes \nimport TcCanonical\nimport VarSet\nimport Type\n\nimport Id \nimport Var\n\nimport TcType\nimport HsBinds\n\nimport Inst( tyVarsOfEvVar )\nimport Class\nimport TyCon\nimport Name\n\nimport FunDeps\n\nimport Coercion\nimport Outputable\n\nimport TcRnTypes\nimport TcErrors\nimport TcSMonad\nimport Bag\nimport qualified Data.Map as Map\n\nimport Control.Monad( when )\n\nimport FastString ( sLit ) \nimport DynFlags\n\\end{code}\n\nNote [InertSet invariants]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\nAn InertSet is a bag of canonical constraints, with the following invariants:\n\n 1 No two constraints react with each other. 
\n \n A tricky case is when there exists a given (solved) dictionary \n constraint and a wanted identical constraint in the inert set, but do \n not react because reaction would create loopy dictionary evidence for \n the wanted. See note [Recursive dictionaries]\n\n 2 Given equalities form an idempotent substitution [none of the\n given LHS's occur in any of the given RHS's or reactant parts]\n\n 3 Wanted equalities also form an idempotent substitution\n\n 4 The entire set of equalities is acyclic.\n\n 5 Wanted dictionaries are inert with the top-level axiom set \n\n 6 Equalities of the form tv1 ~ tv2 always have a touchable variable\n on the left (if possible).\n\n 7 No wanted constraints tv1 ~ tv2 with tv1 touchable. Such constraints\n will be marked as solved right before being pushed into the inert set. \n See note [Touchables and givens].\n\n 8 No Given constraint mentions a touchable unification variable,\n except if the\n \nNote that 6 and 7 are \/not\/ enforced by canonicalization but rather by \ninsertion in the inert list, ie by TcInteract. \n\nDuring the process of solving, the inert set will contain some\npreviously given constraints, some wanted constraints, and some given\nconstraints which have arisen from solving wanted constraints. 
For\nnow we do not distinguish between given and solved constraints.\n\nNote that we must switch wanted inert items to given when going under an\nimplication constraint (when in top-level inference mode).\n\n\\begin{code}\n\ndata CCanMap a = CCanMap { cts_given :: Map.Map a CanonicalCts\n -- Invariant: all Given\n , cts_derived :: Map.Map a CanonicalCts \n -- Invariant: all Derived\n , cts_wanted :: Map.Map a CanonicalCts } \n -- Invariant: all Wanted\n\ncCanMapToBag :: Ord a => CCanMap a -> CanonicalCts \ncCanMapToBag cmap = Map.fold unionBags rest_wder (cts_given cmap)\n where rest_wder = Map.fold unionBags rest_der (cts_wanted cmap) \n rest_der = Map.fold unionBags emptyCCan (cts_derived cmap)\n\nemptyCCanMap :: CCanMap a \nemptyCCanMap = CCanMap { cts_given = Map.empty\n , cts_derived = Map.empty, cts_wanted = Map.empty } \n\nupdCCanMap:: Ord a => (a,CanonicalCt) -> CCanMap a -> CCanMap a \nupdCCanMap (a,ct) cmap \n = case cc_flavor ct of \n Wanted {} \n -> cmap { cts_wanted = Map.insertWith unionBags a this_ct (cts_wanted cmap) } \n Given {} \n -> cmap { cts_given = Map.insertWith unionBags a this_ct (cts_given cmap) }\n Derived {}\n -> cmap { cts_derived = Map.insertWith unionBags a this_ct (cts_derived cmap) }\n where this_ct = singleCCan ct \n\ngetRelevantCts :: Ord a => a -> CCanMap a -> (CanonicalCts, CCanMap a) \n-- Gets the relevant constraints and returns the rest of the CCanMap\ngetRelevantCts a cmap \n = let relevant = unionManyBags [ Map.findWithDefault emptyCCan a (cts_wanted cmap)\n , Map.findWithDefault emptyCCan a (cts_given cmap)\n , Map.findWithDefault emptyCCan a (cts_derived cmap) ]\n residual_map = cmap { cts_wanted = Map.delete a (cts_wanted cmap) \n , cts_given = Map.delete a (cts_given cmap) \n , cts_derived = Map.delete a (cts_derived cmap) }\n in (relevant, residual_map) \n\nextractUnsolvedCMap :: Ord a => CCanMap a -> (CanonicalCts, CCanMap a)\n-- Gets the wanted or derived constraints and returns a residual\n-- CCanMap with only 
givens.\nextractUnsolvedCMap cmap =\n let wntd = Map.fold unionBags emptyCCan (cts_wanted cmap)\n derd = Map.fold unionBags emptyCCan (cts_derived cmap)\n in (wntd `unionBags` derd, \n cmap { cts_wanted = Map.empty, cts_derived = Map.empty })\n\n\n-- See Note [InertSet invariants]\ndata InertSet \n = IS { inert_eqs :: CanonicalCts -- Equalities only (CTyEqCan)\n , inert_dicts :: CCanMap Class -- Dictionaries only\n , inert_ips :: CCanMap (IPName Name) -- Implicit parameters \n , inert_frozen :: CanonicalCts\n , inert_funeqs :: CCanMap TyCon -- Type family equalities only\n -- This representation allows us to quickly get to the relevant \n -- inert constraints when interacting a work item with the inert set.\n }\n\ntyVarsOfInert :: InertSet -> TcTyVarSet \ntyVarsOfInert (IS { inert_eqs = eqs\n , inert_dicts = dictmap\n , inert_ips = ipmap\n , inert_frozen = frozen\n , inert_funeqs = funeqmap }) = tyVarsOfCanonicals cts\n where\n cts = eqs `andCCan` frozen `andCCan` cCanMapToBag dictmap\n `andCCan` cCanMapToBag ipmap `andCCan` cCanMapToBag funeqmap\n\ninstance Outputable InertSet where\n ppr is = vcat [ vcat (map ppr (Bag.bagToList $ inert_eqs is))\n , vcat (map ppr (Bag.bagToList $ cCanMapToBag (inert_dicts is)))\n , vcat (map ppr (Bag.bagToList $ cCanMapToBag (inert_ips is))) \n , vcat (map ppr (Bag.bagToList $ cCanMapToBag (inert_funeqs is)))\n , vcat (map ppr (Bag.bagToList $ inert_frozen is))\n ]\n \nemptyInert :: InertSet\nemptyInert = IS { inert_eqs = Bag.emptyBag\n , inert_frozen = Bag.emptyBag\n , inert_dicts = emptyCCanMap\n , inert_ips = emptyCCanMap\n , inert_funeqs = emptyCCanMap }\n\nupdInertSet :: InertSet -> AtomicInert -> InertSet \nupdInertSet is item \n | isCTyEqCan item -- Other equality \n = let eqs' = inert_eqs is `Bag.snocBag` item \n in is { inert_eqs = eqs' } \n | Just cls <- isCDictCan_Maybe item -- Dictionary \n = is { inert_dicts = updCCanMap (cls,item) (inert_dicts is) } \n | Just x <- isCIPCan_Maybe item -- IP \n = is { inert_ips = 
updCCanMap (x,item) (inert_ips is) } \n | Just tc <- isCFunEqCan_Maybe item -- Function equality \n = is { inert_funeqs = updCCanMap (tc,item) (inert_funeqs is) }\n | otherwise \n = is { inert_frozen = inert_frozen is `Bag.snocBag` item }\n\nextractUnsolved :: InertSet -> (InertSet, CanonicalCts)\n-- Postcondition: the returned canonical cts are either Derived, or Wanted.\nextractUnsolved is@(IS {inert_eqs = eqs}) \n = let is_solved = is { inert_eqs = solved_eqs\n , inert_dicts = solved_dicts\n , inert_ips = solved_ips\n , inert_frozen = emptyCCan\n , inert_funeqs = solved_funeqs }\n in (is_solved, unsolved)\n\n where (unsolved_eqs, solved_eqs) = Bag.partitionBag (not.isGivenCt) eqs\n (unsolved_ips, solved_ips) = extractUnsolvedCMap (inert_ips is) \n (unsolved_dicts, solved_dicts) = extractUnsolvedCMap (inert_dicts is) \n (unsolved_funeqs, solved_funeqs) = extractUnsolvedCMap (inert_funeqs is) \n\n unsolved = unsolved_eqs `unionBags` inert_frozen is `unionBags`\n unsolved_ips `unionBags` unsolved_dicts `unionBags` unsolved_funeqs\n\\end{code}\n\n%*********************************************************************\n%* * \n* Main Interaction Solver *\n* *\n**********************************************************************\n\nNote [Basic plan] \n~~~~~~~~~~~~~~~~~\n1. Canonicalise (unary)\n2. Pairwise interaction (binary)\n * Take one from work list \n * Try all pair-wise interactions with each constraint in inert\n \n As an optimisation, we prioritize the equalities both in the \n worklist and in the inerts. \n\n3. Try to solve spontaneously for equalities involving touchables \n4. 
Top-level interaction (binary wrt top-level)\n Superclass decomposition belongs in (4), see note [Superclasses]\n\n\\begin{code}\ntype AtomicInert = CanonicalCt -- constraint pulled from InertSet\ntype WorkItem = CanonicalCt -- constraint pulled from WorkList\n\n------------------------\ndata StopOrContinue \n = Stop\t\t\t-- Work item is consumed\n | ContinueWith WorkItem\t-- Not consumed\n\ninstance Outputable StopOrContinue where\n ppr Stop = ptext (sLit \"Stop\")\n ppr (ContinueWith w) = ptext (sLit \"ContinueWith\") <+> ppr w\n\n-- Results after interacting a WorkItem as far as possible with an InertSet\ndata StageResult\n = SR { sr_inerts :: InertSet\n -- The new InertSet to use (REPLACES the old InertSet)\n , sr_new_work :: WorkList\n -- Any new work items generated (should be ADDED to the old WorkList)\n -- Invariant: \n -- sr_stop = Just workitem => workitem is *not* in sr_inerts and\n -- workitem is inert wrt to sr_inerts\n , sr_stop :: StopOrContinue\n }\n\ninstance Outputable StageResult where\n ppr (SR { sr_inerts = inerts, sr_new_work = work, sr_stop = stop })\n = ptext (sLit \"SR\") <+> \n braces (sep [ ptext (sLit \"inerts =\") <+> ppr inerts <> comma\n \t , ptext (sLit \"new work =\") <+> ppr work <> comma\n \t , ptext (sLit \"stop =\") <+> ppr stop])\n\ntype SubGoalDepth = Int\t -- Starts at zero; used to limit infinite\n \t\t \t -- recursion of sub-goals\ntype SimplifierStage = SubGoalDepth -> WorkItem -> InertSet -> TcS StageResult \n\n-- Combine a sequence of simplifier 'stages' to create a pipeline \nrunSolverPipeline :: SubGoalDepth\n -> [(String, SimplifierStage)]\n\t\t -> InertSet -> WorkItem \n -> TcS (InertSet, WorkList)\n-- Precondition: non-empty list of stages \nrunSolverPipeline depth pipeline inerts workItem\n = do { traceTcS \"Start solver pipeline\" $ \n vcat [ ptext (sLit \"work item =\") <+> ppr workItem\n , ptext (sLit \"inerts =\") <+> ppr inerts]\n\n ; let itr_in = SR { sr_inerts = inerts\n , sr_new_work = emptyWorkList\n , 
sr_stop = ContinueWith workItem }\n ; itr_out <- run_pipeline pipeline itr_in\n ; let new_inert \n = case sr_stop itr_out of \n \t Stop -> sr_inerts itr_out\n ContinueWith item -> sr_inerts itr_out `updInertSet` item\n ; return (new_inert, sr_new_work itr_out) }\n where \n run_pipeline :: [(String, SimplifierStage)]\n -> StageResult -> TcS StageResult\n run_pipeline [] itr = return itr\n run_pipeline _ itr@(SR { sr_stop = Stop }) = return itr\n\n run_pipeline ((name,stage):stages) \n (SR { sr_new_work = accum_work\n , sr_inerts = inerts\n , sr_stop = ContinueWith work_item })\n = do { itr <- stage depth work_item inerts \n ; traceTcS (\"Stage result (\" ++ name ++ \")\") (ppr itr)\n ; let itr' = itr { sr_new_work = accum_work `unionWorkList` sr_new_work itr }\n ; run_pipeline stages itr' }\n\\end{code}\n\nExample 1:\n Inert: {c ~ d, F a ~ t, b ~ Int, a ~ ty} (all given)\n Reagent: a ~ [b] (given)\n\nReact with (c~d) ==> IR (ContinueWith (a~[b])) True []\nReact with (F a ~ t) ==> IR (ContinueWith (a~[b])) False [F [b] ~ t]\nReact with (b ~ Int) ==> IR (ContinueWith (a~[Int]) True []\n\nExample 2:\n Inert: {c ~w d, F a ~g t, b ~w Int, a ~w ty}\n Reagent: a ~w [b]\n\nReact with (c ~w d) ==> IR (ContinueWith (a~[b])) True []\nReact with (F a ~g t) ==> IR (ContinueWith (a~[b])) True [] (can't rewrite given with wanted!)\netc.\n\nExample 3:\n Inert: {a ~ Int, F Int ~ b} (given)\n Reagent: F a ~ b (wanted)\n\nReact with (a ~ Int) ==> IR (ContinueWith (F Int ~ b)) True []\nReact with (F Int ~ b) ==> IR Stop True [] -- after substituting we re-canonicalize and get nothing\n\n\\begin{code}\n-- Main interaction solver: we fully solve the worklist 'in one go', \n-- returning an extended inert set.\n--\n-- See Note [Touchables and givens].\nsolveInteractGiven :: InertSet -> GivenLoc -> [EvVar] -> TcS InertSet\nsolveInteractGiven inert gloc evs\n = do { (_, inert_ret) <- solveInteract inert $ listToBag $\n map mk_given evs\n ; return inert_ret }\n where\n flav = Given gloc\n 
mk_given ev = mkEvVarX ev flav\n\nsolveInteractWanted :: InertSet -> [WantedEvVar] -> TcS InertSet\nsolveInteractWanted inert wvs\n = do { (_,inert_ret) <- solveInteract inert $ listToBag $\n map wantedToFlavored wvs\n ; return inert_ret }\n\nsolveInteract :: InertSet -> Bag FlavoredEvVar -> TcS (Bool, InertSet)\n-- Post: (True, inert_set) means we managed to discharge all constraints\n-- without actually doing any interactions!\n-- (False, inert_set) means some interactions occurred\nsolveInteract inert ws \n = do { dyn_flags <- getDynFlags\n ; sctx <- getTcSContext\n\n ; traceTcS \"solveInteract, before clever canonicalization:\" $\n vcat [ text \"ws = \" <+> ppr (mapBag (\\(EvVarX ev ct)\n -> (ct,evVarPred ev)) ws)\n , text \"inert = \" <+> ppr inert ]\n\n ; can_ws <- mkCanonicalFEVs ws\n\n ; (flag, inert_ret)\n <- foldrWorkListM (tryPreSolveAndInteract sctx dyn_flags) (True,inert) can_ws\n\n ; traceTcS \"solveInteract, after clever canonicalization (and interaction):\" $\n vcat [ text \"No interaction happened = \" <+> ppr flag\n , text \"inert_ret = \" <+> ppr inert_ret ]\n\n ; return (flag, inert_ret) }\n\ntryPreSolveAndInteract :: SimplContext\n -> DynFlags\n -> CanonicalCt\n -> (Bool, InertSet)\n -> TcS (Bool, InertSet)\n-- Returns: True if it was able to discharge this constraint AND all previous ones\ntryPreSolveAndInteract sctx dyn_flags ct (all_previous_discharged, inert)\n = do { let inert_cts = get_inert_cts (evVarPred ev_var)\n\n ; this_one_discharged <- \n if isCFrozenErr ct then \n return False\n else\n dischargeFromCCans inert_cts ev_var fl\n\n ; if this_one_discharged\n then return (all_previous_discharged, inert)\n\n else do\n { inert_ret <- solveOneWithDepth (ctxtStkDepth dyn_flags,0,[]) ct inert\n ; return (False, inert_ret) } }\n\n where\n ev_var = cc_id ct\n fl = cc_flavor ct \n\n get_inert_cts (ClassP clas _)\n | simplEqsOnly sctx = emptyCCan\n | otherwise = fst (getRelevantCts clas (inert_dicts inert))\n get_inert_cts (IParam {})\n = 
emptyCCan -- We must not do the same thing for IParams, because (contrary\n -- to dictionaries), work items \/must\/ override inert items.\n -- See Note [Overriding implicit parameters] in TcInteract.\n get_inert_cts (EqPred {})\n = inert_eqs inert `unionBags` cCanMapToBag (inert_funeqs inert)\n\ndischargeFromCCans :: CanonicalCts -> EvVar -> CtFlavor -> TcS Bool\n-- See if this (pre-canonicalised) work-item is identical to a \n-- one already in the inert set. Reasons:\n-- a) Avoid creating superclass constraints for millions of incoming (Num a) constraints\n-- b) Termination for improve_eqs in TcSimplify.simpl_loop\ndischargeFromCCans cans ev fl\n = Bag.foldrBag discharge_ct (return False) cans\n where \n the_pred = evVarPred ev\n\n discharge_ct :: CanonicalCt -> TcS Bool -> TcS Bool\n discharge_ct ct _rest\n | evVarPred (cc_id ct) `tcEqPred` the_pred\n , cc_flavor ct `canSolve` fl\n = do { when (isWanted fl) $ set_ev_bind ev (cc_id ct) \n \t -- Deriveds need no evidence\n \t -- For Givens, we already have evidence, and we don't need it twice \n ; return True }\n where \n set_ev_bind x y\n | EqPred {} <- evVarPred y = setEvBind x (EvCoercion (mkCoVarCoercion y))\n | otherwise = setEvBind x (EvId y)\n\n discharge_ct _ct rest = rest\n\\end{code}\n\nNote [Avoiding the superclass explosion] \n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \nThis note now is not as significant as it used to be because we no\nlonger add the superclasses of Wanted as Derived, except only if they\nhave equality superclasses or superclasses with functional\ndependencies. The fear was that hundreds of identical wanteds would\ngive rise each to the same superclass or equality Derived's which\nwould lead to a blo-up in the number of interactions.\n\nInstead, what we do with tryPreSolveAndCanon, is when we encounter a\nnew constraint, we very quickly see if it can be immediately\ndischarged by a class constraint in our inert set or the previous\ncanonicals. 
If so, we add nothing to the returned canonical\nconstraints.\n\n\\begin{code}\nsolveOne :: WorkItem -> InertSet -> TcS InertSet \nsolveOne workItem inerts \n = do { dyn_flags <- getDynFlags\n ; solveOneWithDepth (ctxtStkDepth dyn_flags,0,[]) workItem inerts\n }\n\n-----------------\nsolveInteractWithDepth :: (Int, Int, [WorkItem])\n -> WorkList -> InertSet -> TcS InertSet\nsolveInteractWithDepth ctxt@(max_depth,n,stack) ws inert\n | isEmptyWorkList ws\n = return inert\n\n | n > max_depth \n = solverDepthErrorTcS n stack\n\n | otherwise \n = do { traceTcS \"solveInteractWithDepth\" $ \n vcat [ text \"Current depth =\" <+> ppr n\n , text \"Max depth =\" <+> ppr max_depth\n , text \"ws =\" <+> ppr ws ]\n\n\n ; foldrWorkListM (solveOneWithDepth ctxt) inert ws }\n -- use foldr to preserve the order\n\n------------------\n-- Fully interact the given work item with an inert set, and return a\n-- new inert set which has assimilated the new information.\nsolveOneWithDepth :: (Int, Int, [WorkItem])\n -> WorkItem -> InertSet -> TcS InertSet\nsolveOneWithDepth (max_depth, depth, stack) work inert\n = do { traceFireTcS depth (text \"Solving {\" <+> ppr work)\n ; (new_inert, new_work) <- runSolverPipeline depth thePipeline inert work\n \n\t -- Recursively solve the new work generated \n -- from workItem, with a greater depth\n ; res_inert <- solveInteractWithDepth (max_depth, depth+1, work:stack) new_work new_inert \n\n ; traceFireTcS depth (text \"Done }\" <+> ppr work) \n\n ; return res_inert }\n\nthePipeline :: [(String,SimplifierStage)]\nthePipeline = [ (\"interact with inert eqs\", interactWithInertEqsStage)\n , (\"interact with inerts\", interactWithInertsStage)\n , (\"spontaneous solve\", spontaneousSolveStage)\n , (\"top-level reactions\", topReactionsStage) ]\n\\end{code}\n\n*********************************************************************************\n* * \n The spontaneous-solve Stage\n* 
*\n*********************************************************************************\n\nNote [Efficient Orientation] \n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThere are two cases where we have to be careful about \norienting equalities to get better efficiency. \n\nCase 1: In Rewriting Equalities (function rewriteEqLHS) \n\n When rewriting two equalities with the same LHS:\n (a) (tv ~ xi1) \n (b) (tv ~ xi2) \n We have a choice of producing work (xi1 ~ xi2) (up-to the\n canonicalization invariants) However, to prevent the inert items\n from getting kicked out of the inerts first, we prefer to\n canonicalize (xi1 ~ xi2) if (b) comes from the inert set, or (xi2\n ~ xi1) if (a) comes from the inert set.\n \n This choice is implemented using the WhichComesFromInert flag. \n\nCase 2: Functional Dependencies \n Again, we should prefer, if possible, the inert variables on the RHS\n\nCase 3: IP improvement work\n We must always rewrite so that the inert type is on the right. \n\n\\begin{code}\nspontaneousSolveStage :: SimplifierStage \nspontaneousSolveStage depth workItem inerts \n = do { mSolve <- trySpontaneousSolve workItem\n\n ; case mSolve of \n SPCantSolve -> -- No spontaneous solution for him, keep going\n return $ SR { sr_new_work = emptyWorkList\n , sr_inerts = inerts\n , sr_stop = ContinueWith workItem }\n\n SPSolved workItem'\n | not (isGivenCt workItem) \n\t \t -- Original was wanted or derived but we have now made him \n -- given so we have to interact him with the inerts due to\n -- its status change. 
This in turn may produce more work.\n\t\t -- We do this *right now* (rather than just putting workItem'\n\t\t -- back into the work-list) because we've solved it. \n -> do { bumpStepCountTcS\n\t \t ; traceFireTcS depth (ptext (sLit \"Spontaneous (w\/d)\") <+> ppr workItem)\n ; (new_inert, new_work) <- runSolverPipeline depth\n [ (\"recursive interact with inert eqs\", interactWithInertEqsStage)\n , (\"recursive interact with inerts\", interactWithInertsStage)\n ] inerts workItem'\n ; return $ SR { sr_new_work = new_work \n , sr_inerts = new_inert -- will include workItem' \n , sr_stop = Stop }\n }\n | otherwise \n -> -- Original was given; he must then be inert all right, and\n -- workList' are all givens from flattening\n do { bumpStepCountTcS\n\t \t ; traceFireTcS depth (ptext (sLit \"Spontaneous (g)\") <+> ppr workItem)\n ; return $ SR { sr_new_work = emptyWorkList\n , sr_inerts = inerts `updInertSet` workItem' \n , sr_stop = Stop } }\n SPError -> -- Return with no new work\n return $ SR { sr_new_work = emptyWorkList\n , sr_inerts = inerts\n , sr_stop = Stop }\n }\n\ndata SPSolveResult = SPCantSolve | SPSolved WorkItem | SPError\n-- SPCantSolve means that we can't do the unification because e.g. 
the variable is untouchable\n-- SPSolved workItem' gives us a new *given* to go on \n-- SPError means that it's completely impossible to solve this equality, eg due to a kind error\n\n\n-- @trySpontaneousSolve wi@ solves equalities where one side is a\n-- touchable unification variable.\n-- \t See Note [Touchables and givens] \ntrySpontaneousSolve :: WorkItem -> TcS SPSolveResult\ntrySpontaneousSolve workItem@(CTyEqCan { cc_id = cv, cc_flavor = gw, cc_tyvar = tv1, cc_rhs = xi })\n | isGiven gw\n = return SPCantSolve\n | Just tv2 <- tcGetTyVar_maybe xi\n = do { tch1 <- isTouchableMetaTyVar tv1\n ; tch2 <- isTouchableMetaTyVar tv2\n ; case (tch1, tch2) of\n (True, True) -> trySpontaneousEqTwoWay cv gw tv1 tv2\n (True, False) -> trySpontaneousEqOneWay cv gw tv1 xi\n (False, True) -> trySpontaneousEqOneWay cv gw tv2 (mkTyVarTy tv1)\n\t _ -> return SPCantSolve }\n | otherwise\n = do { tch1 <- isTouchableMetaTyVar tv1\n ; if tch1 then trySpontaneousEqOneWay cv gw tv1 xi\n else do { traceTcS \"Untouchable LHS, can't spontaneously solve workitem:\" \n (ppr workItem) \n ; return SPCantSolve }\n }\n\n -- No need for \n -- trySpontaneousSolve (CFunEqCan ...) = ...\n -- See Note [No touchables as FunEq RHS] in TcSMonad\ntrySpontaneousSolve _ = return SPCantSolve\n\n----------------\ntrySpontaneousEqOneWay :: CoVar -> CtFlavor -> TcTyVar -> Xi -> TcS SPSolveResult\n-- tv is a MetaTyVar, not untouchable\ntrySpontaneousEqOneWay cv gw tv xi\t\n | not (isSigTyVar tv) || isTyVarTy xi \n = do { let kxi = typeKind xi -- NB: 'xi' is fully rewritten according to the inerts \n -- so we have its more specific kind in our hands\n ; if kxi `isSubKind` tyVarKind tv then\n solveWithIdentity cv gw tv xi\n else return SPCantSolve\n{-\n else if tyVarKind tv `isSubKind` kxi then\n return SPCantSolve -- kinds are compatible but we can't solveWithIdentity this way\n -- This case covers the a_touchable :: * ~ b_untouchable :: ?? 
\n -- which has to be deferred or floated out for someone else to solve \n -- it in a scope where 'b' is no longer untouchable.\n else do { addErrorTcS KindError gw (mkTyVarTy tv) xi -- See Note [Kind errors]\n ; return SPError }\n-}\n }\n | otherwise -- Still can't solve, sig tyvar and non-variable rhs\n = return SPCantSolve\n\n----------------\ntrySpontaneousEqTwoWay :: CoVar -> CtFlavor -> TcTyVar -> TcTyVar -> TcS SPSolveResult\n-- Both tyvars are *touchable* MetaTyvars so there is only a chance for kind error here\ntrySpontaneousEqTwoWay cv gw tv1 tv2\n | k1 `isSubKind` k2\n , nicer_to_update_tv2 = solveWithIdentity cv gw tv2 (mkTyVarTy tv1)\n | k2 `isSubKind` k1 \n = solveWithIdentity cv gw tv1 (mkTyVarTy tv2)\n | otherwise -- None is a subkind of the other, but they are both touchable! \n = return SPCantSolve\n -- do { addErrorTcS KindError gw (mkTyVarTy tv1) (mkTyVarTy tv2)\n -- ; return SPError }\n where\n k1 = tyVarKind tv1\n k2 = tyVarKind tv2\n nicer_to_update_tv2 = isSigTyVar tv1 || isSystemName (Var.varName tv2)\n\end{code}\n\nNote [Kind errors] \n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nConsider the wanted problem: \n alpha ~ (# Int, Int #) \nwhere alpha :: ?? and (# Int, Int #) :: (#). We can't spontaneously solve this constraint, \nbut we should rather reject the program that gives rise to it. If 'trySpontaneousEqTwoWay' \nsimply returns @CantSolve@ then that wanted constraint is going to propagate all the way and \nget quantified over in inference mode. That's bad because we do know at this point that the \nconstraint is insoluble. Instead, we call 'recKindErrorTcS' here, which will fail later on.\n\nThe same applies in canonicalization code in case of kind errors in the givens. \n\nHowever, when we canonicalize givens we only check for compatibility (@compatKind@). 
\nIf there were a kind error in the givens, this means some form of inconsistency or dead code.\n\nYou may think that when we spontaneously solve wanteds we may have to look through the \nbindings to determine the right kind of the RHS type. E.g. one may be worried that xi is \n@alpha@ where alpha :: ? and a previous spontaneous solving has set (alpha := f) with (f :: *).\nBut we orient our constraints so that spontaneously solved ones can rewrite all other constraints,\nso this situation can't happen. \n\nNote [Spontaneous solving and kind compatibility] \n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nNote that our canonical constraints insist that *all* equalities (tv ~\nxi) or (F xis ~ rhs) require the LHS and the RHS to have *compatible*\nkinds. (\"compatible\" means one is a subKind of the other.)\n\n - It can't be *equal* kinds, because\n a) wanted constraints don't necessarily have identical kinds\n eg alpha::? ~ Int\n b) a solved wanted constraint becomes a given\n\n - SPJ thinks that *given* constraints (tv ~ tau) always have that\n tau has a sub-kind of tv; and when solving wanted constraints\n in trySpontaneousEqTwoWay we re-orient to achieve this.\n\n - Note that the kind invariant is maintained by rewriting.\n Eg wanted1 rewrites wanted2; if both were compatible kinds before,\n wanted2 will be afterwards. Similarly givens.\n\nCaveat:\n - Givens from higher-rank, such as: \n type family T b :: * -> * -> * \n type instance T Bool = (->) \n\n f :: forall a. ((T a ~ (->)) => ...) -> a -> ... \n flop = f (...) True \n Although we would be able to apply the type instance, we would not be able to \n use the given (T Bool ~ (->)) in the body of 'flop' \n\n\nNote [Avoid double unifications] \n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe spontaneous solver has to return a given which mentions the unified unification\nvariable *on the left* of the equality. 
Here is what happens if not: \n Original wanted: (a ~ alpha), (alpha ~ Int) \nWe spontaneously solve the first wanted, without changing the order! \n given : a ~ alpha [having unified alpha := a] \nNow the second wanted comes along, but he cannot rewrite the given, so we simply continue.\nAt the end we spontaneously solve that guy, *reunifying* [alpha := Int] \n\nWe avoid this problem by orienting the resulting given so that the unification\nvariable is on the left. [Note that alternatively we could attempt to\nenforce this at canonicalization]\n\nSee also Note [No touchables as FunEq RHS] in TcSMonad; avoiding\ndouble unifications is the main reason we disallow touchable\nunification variables as RHS of type family equations: F xis ~ alpha.\n\n\\begin{code}\n----------------\n\nsolveWithIdentity :: CoVar -> CtFlavor -> TcTyVar -> Xi -> TcS SPSolveResult\n-- Solve with the identity coercion \n-- Precondition: kind(xi) is a sub-kind of kind(tv)\n-- Precondition: CtFlavor is Wanted or Derived\n-- See [New Wanted Superclass Work] to see why solveWithIdentity \n-- must work for Derived as well as Wanted\n-- Returns: workItem where \n-- workItem = the new Given constraint\nsolveWithIdentity cv wd tv xi \n = do { traceTcS \"Sneaky unification:\" $ \n vcat [text \"Coercion variable: \" <+> ppr wd, \n text \"Coercion: \" <+> pprEq (mkTyVarTy tv) xi,\n text \"Left Kind is : \" <+> ppr (typeKind (mkTyVarTy tv)),\n text \"Right Kind is : \" <+> ppr (typeKind xi)\n ]\n\n ; setWantedTyBind tv xi\n ; cv_given <- newGivenCoVar (mkTyVarTy tv) xi xi\n\n ; when (isWanted wd) (setCoBind cv xi)\n -- We don't want to do this for Derived, that's why we use 'when (isWanted wd)'\n\n ; return $ SPSolved (CTyEqCan { cc_id = cv_given\n , cc_flavor = mkGivenFlavor wd UnkSkol\n , cc_tyvar = tv, cc_rhs = xi }) }\n\\end{code}\n\n\n*********************************************************************************\n* * \n The interact-with-inert Stage\n* 
*\n*********************************************************************************\n\nNote [The Solver Invariant]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWe always add Givens first. So you might think that the solver has\nthe invariant\n\n If the work-item is Given, \n then the inert item must be Given\n\nBut this isn't quite true. Suppose we have, \n c1: [W] beta ~ [alpha], c2 : [W] blah, c3 : [W] alpha ~ Int\nAfter processing the first two, we get\n c1: [G] beta ~ [alpha], c2 : [W] blah\nNow, c3 does not interact with the given c1, so when we spontaneously\nsolve c3, we must re-react it with the inert set. So we can attempt a \nreaction between inert c2 [W] and work-item c3 [G].\n\nIt *is* true that [Solver Invariant]\n If the work-item is Given, \n AND there is a reaction\n then the inert item must be Given\nor, equivalently,\n If the work-item is Given, \n and the inert item is Wanted\/Derived\n then there is no reaction\n\n\begin{code}\n-- Interaction result of WorkItem <~> AtomicInert\ndata InteractResult\n = IR { ir_stop :: StopOrContinue\n -- Stop\n -- => Reagent (work item) consumed.\n -- ContinueWith new_reagent\n -- => Reagent transformed but keep gathering interactions. 
\n -- The transformed item remains inert with respect \n -- to any previously encountered inerts.\n\n , ir_inert_action :: InertAction\n -- Whether the inert item should remain in the InertSet.\n\n , ir_new_work :: WorkList\n -- new work items to add to the WorkList\n\n , ir_fire :: Maybe String -- Tells whether a rule fired, and if so what\n }\n\n-- What to do with the inert reactant.\ndata InertAction = KeepInert | DropInert \n\nmkIRContinue :: String -> WorkItem -> InertAction -> WorkList -> TcS InteractResult\nmkIRContinue rule wi keep newWork \n = return $ IR { ir_stop = ContinueWith wi, ir_inert_action = keep\n , ir_new_work = newWork, ir_fire = Just rule }\n\nmkIRStopK :: String -> WorkList -> TcS InteractResult\nmkIRStopK rule newWork\n = return $ IR { ir_stop = Stop, ir_inert_action = KeepInert\n , ir_new_work = newWork, ir_fire = Just rule }\n\nmkIRStopD :: String -> WorkList -> TcS InteractResult\nmkIRStopD rule newWork\n = return $ IR { ir_stop = Stop, ir_inert_action = DropInert\n , ir_new_work = newWork, ir_fire = Just rule }\n\nnoInteraction :: Monad m => WorkItem -> m InteractResult\nnoInteraction wi\n = return $ IR { ir_stop = ContinueWith wi, ir_inert_action = KeepInert\n , ir_new_work = emptyWorkList, ir_fire = Nothing }\n\ndata WhichComesFromInert = LeftComesFromInert | RightComesFromInert \n -- See Note [Efficient Orientation] \n\n\n---------------------------------------------------\n-- Interact a single WorkItem with the equalities of an inert set as\n-- far as possible, i.e. until we get a Stop result from an individual\n-- reaction (i.e. 
when the WorkItem is consumed), or until we've\n-- interacted the WorkItem with all the equalities of the InertSet\n\ninteractWithInertEqsStage :: SimplifierStage \ninteractWithInertEqsStage depth workItem inert\n = Bag.foldrBagM (interactNext depth) initITR (inert_eqs inert)\n -- use foldr to preserve the order \n where\n initITR = SR { sr_inerts = inert { inert_eqs = emptyCCan }\n , sr_new_work = emptyWorkList\n , sr_stop = ContinueWith workItem }\n\n---------------------------------------------------\n-- Interact a single WorkItem with *non-equality* constraints in the inert set. \n-- Precondition: equality interactions must have already happened, hence we have \n-- to pick up some information from the incoming inert, before folding over the \n-- \"Other\" constraints it contains!\n\ninteractWithInertsStage :: SimplifierStage\ninteractWithInertsStage depth workItem inert\n = let (relevant, inert_residual) = getISRelevant workItem inert \n initITR = SR { sr_inerts = inert_residual\n , sr_new_work = emptyWorkList\n , sr_stop = ContinueWith workItem } \n in Bag.foldrBagM (interactNext depth) initITR relevant \n -- use foldr to preserve the order\n where \n getISRelevant :: CanonicalCt -> InertSet -> (CanonicalCts, InertSet) \n getISRelevant (CFrozenErr {}) is = (emptyCCan, is)\n -- Nothing is relevant; we have already interacted\n -- it with the equalities in the inert set\n\n getISRelevant (CDictCan { cc_class = cls } ) is\n = let (relevant, residual_map) = getRelevantCts cls (inert_dicts is)\n in (relevant, is { inert_dicts = residual_map }) \n getISRelevant (CFunEqCan { cc_fun = tc } ) is \n = let (relevant, residual_map) = getRelevantCts tc (inert_funeqs is) \n in (relevant, is { inert_funeqs = residual_map })\n getISRelevant (CIPCan { cc_ip_nm = nm }) is \n = let (relevant, residual_map) = getRelevantCts nm (inert_ips is)\n in (relevant, is { inert_ips = residual_map }) \n -- An equality, finally, may kick everything except equalities out \n -- because we have 
already interacted the equalities in interactWithInertEqsStage\n getISRelevant _eq_ct is -- Equality, everything is relevant for this one \n -- TODO: if we were caching variables, we'd know that only \n -- some are relevant. Experiment with this for now. \n = let cts = cCanMapToBag (inert_ips is) `unionBags` \n cCanMapToBag (inert_dicts is) `unionBags` cCanMapToBag (inert_funeqs is)\n in (cts, is { inert_dicts = emptyCCanMap\n , inert_ips = emptyCCanMap\n , inert_funeqs = emptyCCanMap })\n\ninteractNext :: SubGoalDepth -> AtomicInert -> StageResult -> TcS StageResult \ninteractNext depth inert it\n | ContinueWith work_item <- sr_stop it\n = do { let inerts = sr_inerts it \n\n ; IR { ir_new_work = new_work, ir_inert_action = inert_action\n , ir_fire = fire_info, ir_stop = stop } \n <- interactWithInert inert work_item\n\n ; let mk_msg rule \n \t = text rule <+> keep_doc\n \t <+> vcat [ ptext (sLit \"Inert =\") <+> ppr inert\n \t , ptext (sLit \"Work =\") <+> ppr work_item\n \t , ppUnless (isEmptyWorkList new_work) $\n ptext (sLit \"New =\") <+> ppr new_work ]\n keep_doc = case inert_action of\n \t KeepInert -> ptext (sLit \"[keep]\")\n \t DropInert -> ptext (sLit \"[drop]\")\n ; case fire_info of\n Just rule -> do { bumpStepCountTcS\n ; traceFireTcS depth (mk_msg rule) }\n Nothing -> return ()\n\n -- New inerts depend on whether we KeepInert or not \n ; let inerts_new = case inert_action of\n KeepInert -> inerts `updInertSet` inert\n DropInert -> inerts\n\n ; return $ SR { sr_inerts = inerts_new\n , sr_new_work = sr_new_work it `unionWorkList` new_work\n , sr_stop = stop } }\n | otherwise \n = return $ it { sr_inerts = (sr_inerts it) `updInertSet` inert }\n\n-- Do a single interaction of two constraints.\ninteractWithInert :: AtomicInert -> WorkItem -> TcS InteractResult\ninteractWithInert inert workItem \n = do { ctxt <- getTcSContext\n ; let is_allowed = allowedInteraction (simplEqsOnly ctxt) inert workItem \n\n ; if is_allowed then \n doInteractWithInert inert 
workItem \n else \n noInteraction workItem \n }\n\nallowedInteraction :: Bool -> AtomicInert -> WorkItem -> Bool \n-- Allowed interactions \nallowedInteraction eqs_only (CDictCan {}) (CDictCan {}) = not eqs_only\nallowedInteraction eqs_only (CIPCan {}) (CIPCan {}) = not eqs_only\nallowedInteraction _ _ _ = True \n\n--------------------------------------------\ndoInteractWithInert :: CanonicalCt -> CanonicalCt -> TcS InteractResult\n-- Identical class constraints.\n\ndoInteractWithInert\n inertItem@(CDictCan { cc_id = d1, cc_flavor = fl1, cc_class = cls1, cc_tyargs = tys1 }) \n workItem@(CDictCan { cc_id = d2, cc_flavor = fl2, cc_class = cls2, cc_tyargs = tys2 })\n | cls1 == cls2 && (and $ zipWith tcEqType tys1 tys2)\n = solveOneFromTheOther \"Cls\/Cls\" (EvId d1,fl1) workItem \n\n | cls1 == cls2 && (not (isGiven fl1 && isGiven fl2))\n = \t -- See Note [When improvement happens]\n do { let pty1 = ClassP cls1 tys1\n pty2 = ClassP cls2 tys2\n inert_pred_loc = (pty1, pprFlavorArising fl1)\n work_item_pred_loc = (pty2, pprFlavorArising fl2)\n fd_eqns = improveFromAnother \n inert_pred_loc -- the template\n work_item_pred_loc -- the one we aim to rewrite\n -- See Note [Efficient Orientation]\n\n ; m <- rewriteWithFunDeps fd_eqns tys2 fl2\n ; case m of \n Nothing -> noInteraction workItem\n Just (rewritten_tys2, cos2, fd_work)\n | tcEqTypes tys1 rewritten_tys2\n -> -- Solve him on the spot in this case\n\t \tcase fl2 of\n\t Given {} -> pprPanic \"Unexpected given\" (ppr inertItem $$ ppr workItem)\n Derived {} -> mkIRStopK \"Cls\/Cls fundep (solved)\" fd_work\n\t\t Wanted {} \n\t\t | isDerived fl1 \n -> do { setDictBind d2 (EvCast d1 dict_co)\n\t\t\t ; let inert_w = inertItem { cc_flavor = fl2 }\n\t\t\t -- A bit naughty: we take the inert Derived, \n\t\t\t -- turn it into a Wanted, use it to solve the work-item\n\t\t\t -- and put it back into the work-list\n\t\t\t -- Maybe rather than starting again, we could *replace* the\n\t\t\t -- inert item, but its safe and simple to 
restart\n ; mkIRStopD \"Cls\/Cls fundep (solved)\" $ \n workListFromNonEq inert_w `unionWorkList` fd_work }\n\t\t | otherwise \n -> do { setDictBind d2 (EvCast d1 dict_co)\n ; mkIRStopK \"Cls\/Cls fundep (solved)\" fd_work }\n\n | otherwise\n -> -- We could not quite solve him, but we still rewrite him\n\t -- Example: class C a b c | a -> b\n\t\t-- Given: C Int Bool x, Wanted: C Int beta y\n\t\t-- Then rewrite the wanted to C Int Bool y\n\t\t-- but note that is still not identical to the given\n\t\t-- The important thing is that the rewritten constraint is\n\t\t-- inert wrt the given.\n\t\t-- However it is not necessarily inert wrt previous inert-set items.\n -- class C a b c d | a -> b, b c -> d\n\t\t-- Inert: c1: C b Q R S, c2: C P Q a b\n\t\t-- Work: C P alpha R beta\n\t\t-- Does not react with c1; reacts with c2, with alpha:=Q\n\t\t-- NOW it reacts with c1!\n\t\t-- So we must stop, and put the rewritten constraint back in the work list\n do { d2' <- newDictVar cls1 rewritten_tys2\n ; case fl2 of\n Given {} -> pprPanic \"Unexpected given\" (ppr inertItem $$ ppr workItem)\n Wanted {} -> setDictBind d2 (EvCast d2' dict_co)\n Derived {} -> return ()\n ; let workItem' = workItem { cc_id = d2', cc_tyargs = rewritten_tys2 }\n ; mkIRStopK \"Cls\/Cls fundep (partial)\" $ \n workListFromNonEq workItem' `unionWorkList` fd_work } \n\n where\n dict_co = mkTyConCoercion (classTyCon cls1) cos2\n }\n\n-- Class constraint and given equality: use the equality to rewrite\n-- the class constraint. \ndoInteractWithInert (CTyEqCan { cc_id = cv, cc_flavor = ifl, cc_tyvar = tv, cc_rhs = xi }) \n (CDictCan { cc_id = dv, cc_flavor = wfl, cc_class = cl, cc_tyargs = xis }) \n | ifl `canRewrite` wfl \n , tv `elemVarSet` tyVarsOfTypes xis\n = do { rewritten_dict <- rewriteDict (cv,tv,xi) (dv,wfl,cl,xis)\n -- Continue with rewritten Dictionary because we can only be in the \n -- interactWithEqsStage, so the dictionary is inert. 
\n ; mkIRContinue \"Eq\/Cls\" rewritten_dict KeepInert emptyWorkList }\n \ndoInteractWithInert (CDictCan { cc_id = dv, cc_flavor = ifl, cc_class = cl, cc_tyargs = xis }) \n workItem@(CTyEqCan { cc_id = cv, cc_flavor = wfl, cc_tyvar = tv, cc_rhs = xi })\n | wfl `canRewrite` ifl\n , tv `elemVarSet` tyVarsOfTypes xis\n = do { rewritten_dict <- rewriteDict (cv,tv,xi) (dv,ifl,cl,xis)\n ; mkIRContinue \"Cls\/Eq\" workItem DropInert (workListFromNonEq rewritten_dict) }\n\n-- Class constraint and given equality: use the equality to rewrite\n-- the class constraint.\ndoInteractWithInert (CTyEqCan { cc_id = cv, cc_flavor = ifl, cc_tyvar = tv, cc_rhs = xi }) \n (CIPCan { cc_id = ipid, cc_flavor = wfl, cc_ip_nm = nm, cc_ip_ty = ty }) \n | ifl `canRewrite` wfl\n , tv `elemVarSet` tyVarsOfType ty \n = do { rewritten_ip <- rewriteIP (cv,tv,xi) (ipid,wfl,nm,ty) \n ; mkIRContinue \"Eq\/IP\" rewritten_ip KeepInert emptyWorkList } \n\ndoInteractWithInert (CIPCan { cc_id = ipid, cc_flavor = ifl, cc_ip_nm = nm, cc_ip_ty = ty }) \n workItem@(CTyEqCan { cc_id = cv, cc_flavor = wfl, cc_tyvar = tv, cc_rhs = xi })\n | wfl `canRewrite` ifl\n , tv `elemVarSet` tyVarsOfType ty\n = do { rewritten_ip <- rewriteIP (cv,tv,xi) (ipid,ifl,nm,ty) \n ; mkIRContinue \"IP\/Eq\" workItem DropInert (workListFromNonEq rewritten_ip) }\n\n-- Two implicit parameter constraints. If the names are the same,\n-- but their types are not, we generate a wanted type equality \n-- that equates the type (this is \"improvement\"). 
\n-- However, we don't actually need the coercion evidence,\n-- so we just generate a fresh coercion variable that isn't used anywhere.\ndoInteractWithInert (CIPCan { cc_id = id1, cc_flavor = ifl, cc_ip_nm = nm1, cc_ip_ty = ty1 }) \n workItem@(CIPCan { cc_flavor = wfl, cc_ip_nm = nm2, cc_ip_ty = ty2 })\n | nm1 == nm2 && isGiven wfl && isGiven ifl\n = \t-- See Note [Overriding implicit parameters]\n -- Dump the inert item, override totally with the new one\n\t-- Do not require type equality\n\t-- For example, given let ?x::Int = 3 in let ?x::Bool = True in ...\n\t-- we must *override* the outer one with the inner one\n mkIRContinue \"IP\/IP override\" workItem DropInert emptyWorkList\n\n | nm1 == nm2 && ty1 `tcEqType` ty2 \n = solveOneFromTheOther \"IP\/IP\" (EvId id1,ifl) workItem \n\n | nm1 == nm2\n = \t-- See Note [When improvement happens]\n do { co_var <- newCoVar ty2 ty1 -- See Note [Efficient Orientation]\n ; let flav = Wanted (combineCtLoc ifl wfl) \n ; cans <- mkCanonical flav co_var \n ; mkIRContinue \"IP\/IP fundep\" workItem KeepInert cans }\n\n-- Never rewrite a given with a wanted equality, and a type function\n-- equality can never rewrite an equality. 
We rewrite LHS *and* RHS \n-- of function equalities so that our inert set exposes everything that \n-- we know about equalities.\n\n-- Inert: equality, work item: function equality\ndoInteractWithInert (CTyEqCan { cc_id = cv1, cc_flavor = ifl, cc_tyvar = tv, cc_rhs = xi1 }) \n (CFunEqCan { cc_id = cv2, cc_flavor = wfl, cc_fun = tc\n , cc_tyargs = args, cc_rhs = xi2 })\n | ifl `canRewrite` wfl \n , tv `elemVarSet` tyVarsOfTypes (xi2:args) -- Rewrite RHS as well\n = do { rewritten_funeq <- rewriteFunEq (cv1,tv,xi1) (cv2,wfl,tc,args,xi2) \n ; mkIRStopK \"Eq\/FunEq\" (workListFromEq rewritten_funeq) } \n -- Must Stop here, because we may no longer be inert after the rewriting.\n\n-- Inert: function equality, work item: equality\ndoInteractWithInert (CFunEqCan {cc_id = cv1, cc_flavor = ifl, cc_fun = tc\n , cc_tyargs = args, cc_rhs = xi1 }) \n workItem@(CTyEqCan { cc_id = cv2, cc_flavor = wfl, cc_tyvar = tv, cc_rhs = xi2 })\n | wfl `canRewrite` ifl\n , tv `elemVarSet` tyVarsOfTypes (xi1:args) -- Rewrite RHS as well\n = do { rewritten_funeq <- rewriteFunEq (cv2,tv,xi2) (cv1,ifl,tc,args,xi1) \n ; mkIRContinue \"FunEq\/Eq\" workItem DropInert (workListFromEq rewritten_funeq) } \n -- One may think that we could (KeepTransformedInert rewritten_funeq) \n -- but that is wrong, because it may end up not being inert with respect \n -- to future inerts. Example: \n -- Original inert = { F xis ~ [a], b ~ Maybe Int } \n -- Work item comes along = a ~ [b] \n -- If we keep { F xis ~ [b] } in the inert set we will end up with: \n -- { F xis ~ [b], b ~ Maybe Int, a ~ [Maybe Int] } \n -- At the end, which is *not* inert. 
So we should unfortunately DropInert here.\n\ndoInteractWithInert (CFunEqCan { cc_id = cv1, cc_flavor = fl1, cc_fun = tc1\n , cc_tyargs = args1, cc_rhs = xi1 }) \n workItem@(CFunEqCan { cc_id = cv2, cc_flavor = fl2, cc_fun = tc2\n , cc_tyargs = args2, cc_rhs = xi2 })\n | fl1 `canSolve` fl2 && lhss_match\n = do { cans <- rewriteEqLHS LeftComesFromInert (mkCoVarCoercion cv1,xi1) (cv2,fl2,xi2) \n ; mkIRStopK \"FunEq\/FunEq\" cans } \n | fl2 `canSolve` fl1 && lhss_match\n = do { cans <- rewriteEqLHS RightComesFromInert (mkCoVarCoercion cv2,xi2) (cv1,fl1,xi1) \n ; mkIRContinue \"FunEq\/FunEq\" workItem DropInert cans }\n where\n lhss_match = tc1 == tc2 && and (zipWith tcEqType args1 args2) \n\ndoInteractWithInert (CTyEqCan { cc_id = cv1, cc_flavor = fl1, cc_tyvar = tv1, cc_rhs = xi1 }) \n workItem@(CTyEqCan { cc_id = cv2, cc_flavor = fl2, cc_tyvar = tv2, cc_rhs = xi2 })\n-- Check for matching LHS \n | fl1 `canSolve` fl2 && tv1 == tv2 \n = do { cans <- rewriteEqLHS LeftComesFromInert (mkCoVarCoercion cv1,xi1) (cv2,fl2,xi2) \n ; mkIRStopK \"Eq\/Eq lhs\" cans } \n\n | fl2 `canSolve` fl1 && tv1 == tv2 \n = do { cans <- rewriteEqLHS RightComesFromInert (mkCoVarCoercion cv2,xi2) (cv1,fl1,xi1) \n ; mkIRContinue \"Eq\/Eq lhs\" workItem DropInert cans }\n\n-- Check for rewriting RHS \n | fl1 `canRewrite` fl2 && tv1 `elemVarSet` tyVarsOfType xi2 \n = do { rewritten_eq <- rewriteEqRHS (cv1,tv1,xi1) (cv2,fl2,tv2,xi2) \n ; mkIRStopK \"Eq\/Eq rhs\" rewritten_eq }\n\n | fl2 `canRewrite` fl1 && tv2 `elemVarSet` tyVarsOfType xi1\n = do { rewritten_eq <- rewriteEqRHS (cv2,tv2,xi2) (cv1,fl1,tv1,xi1) \n ; mkIRContinue \"Eq\/Eq rhs\" workItem DropInert rewritten_eq } \n\ndoInteractWithInert (CTyEqCan { cc_id = cv1, cc_flavor = fl1, cc_tyvar = tv1, cc_rhs = xi1 })\n (CFrozenErr { cc_id = cv2, cc_flavor = fl2 })\n | fl1 `canRewrite` fl2 && tv1 `elemVarSet` tyVarsOfEvVar cv2\n = do { rewritten_frozen <- rewriteFrozen (cv1, tv1, xi1) (cv2, fl2)\n ; mkIRStopK \"Frozen\/Eq\" rewritten_frozen 
}\n\ndoInteractWithInert (CFrozenErr { cc_id = cv2, cc_flavor = fl2 })\n workItem@(CTyEqCan { cc_id = cv1, cc_flavor = fl1, cc_tyvar = tv1, cc_rhs = xi1 })\n | fl1 `canRewrite` fl2 && tv1 `elemVarSet` tyVarsOfEvVar cv2\n = do { rewritten_frozen <- rewriteFrozen (cv1, tv1, xi1) (cv2, fl2)\n ; mkIRContinue \"Frozen\/Eq\" workItem DropInert rewritten_frozen }\n\n-- Fall-through case for all other situations\ndoInteractWithInert _ workItem = noInteraction workItem\n\n-------------------------\n-- Equational Rewriting \nrewriteDict :: (CoVar, TcTyVar, Xi) -> (DictId, CtFlavor, Class, [Xi]) -> TcS CanonicalCt\nrewriteDict (cv,tv,xi) (dv,gw,cl,xis) \n = do { let cos = substTysWith [tv] [mkCoVarCoercion cv] xis -- xis[tv] ~ xis[xi]\n args = substTysWith [tv] [xi] xis\n con = classTyCon cl \n dict_co = mkTyConCoercion con cos \n ; dv' <- newDictVar cl args \n ; case gw of \n Wanted {} -> setDictBind dv (EvCast dv' (mkSymCoercion dict_co))\n Given {} -> setDictBind dv' (EvCast dv dict_co) \n Derived {} -> return () -- Derived dicts we don't set any evidence\n\n ; return (CDictCan { cc_id = dv'\n , cc_flavor = gw \n , cc_class = cl \n , cc_tyargs = args }) } \n\nrewriteIP :: (CoVar,TcTyVar,Xi) -> (EvVar,CtFlavor, IPName Name, TcType) -> TcS CanonicalCt \nrewriteIP (cv,tv,xi) (ipid,gw,nm,ty) \n = do { let ip_co = substTyWith [tv] [mkCoVarCoercion cv] ty -- ty[tv] ~ t[xi] \n ty' = substTyWith [tv] [xi] ty\n ; ipid' <- newIPVar nm ty' \n ; case gw of \n Wanted {} -> setIPBind ipid (EvCast ipid' (mkSymCoercion ip_co))\n Given {} -> setIPBind ipid' (EvCast ipid ip_co) \n Derived {} -> return () -- Derived ips: we don't set any evidence\n\n ; return (CIPCan { cc_id = ipid'\n , cc_flavor = gw\n , cc_ip_nm = nm\n , cc_ip_ty = ty' }) }\n \nrewriteFunEq :: (CoVar,TcTyVar,Xi) -> (CoVar,CtFlavor,TyCon, [Xi], Xi) -> TcS CanonicalCt\nrewriteFunEq (cv1,tv,xi1) (cv2,gw, tc,args,xi2) -- cv2 :: F args ~ xi2\n = do { let arg_cos = substTysWith [tv] [mkCoVarCoercion cv1] args \n args' = 
substTysWith [tv] [xi1] args \n fun_co = mkTyConCoercion tc arg_cos -- fun_co :: F args ~ F args'\n\n xi2' = substTyWith [tv] [xi1] xi2\n xi2_co = substTyWith [tv] [mkCoVarCoercion cv1] xi2 -- xi2_co :: xi2 ~ xi2' \n\n ; cv2' <- newCoVar (mkTyConApp tc args') xi2'\n ; case gw of \n Wanted {} -> setCoBind cv2 (fun_co `mkTransCoercion` \n mkCoVarCoercion cv2' `mkTransCoercion` \n mkSymCoercion xi2_co)\n Given {} -> setCoBind cv2' (mkSymCoercion fun_co `mkTransCoercion` \n mkCoVarCoercion cv2 `mkTransCoercion` \n xi2_co)\n Derived {} -> return () \n\n ; return (CFunEqCan { cc_id = cv2'\n , cc_flavor = gw\n , cc_tyargs = args'\n , cc_fun = tc \n , cc_rhs = xi2' }) }\n\n\nrewriteEqRHS :: (CoVar,TcTyVar,Xi) -> (CoVar,CtFlavor,TcTyVar,Xi) -> TcS WorkList\n-- Use the first equality to rewrite the second, flavors already checked. \n-- E.g. c1 : tv1 ~ xi1 c2 : tv2 ~ xi2\n-- rewrites c2 to give\n-- c2' : tv2 ~ xi2[xi1\/tv1]\n-- We must do an occurs check to make sure the new constraint is canonical,\n-- so we might return an empty bag\nrewriteEqRHS (cv1,tv1,xi1) (cv2,gw,tv2,xi2) \n | Just tv2' <- tcGetTyVar_maybe xi2'\n , tv2 == tv2'\t -- In this case xi2[xi1\/tv1] = tv2, so we have tv2~tv2\n = do { when (isWanted gw) (setCoBind cv2 (mkSymCoercion co2')) \n ; return emptyWorkList } \n | otherwise\n = do { cv2' <- newCoVar (mkTyVarTy tv2) xi2'\n ; case gw of\n Wanted {} -> setCoBind cv2 $ mkCoVarCoercion cv2' `mkTransCoercion` \n mkSymCoercion co2'\n Given {} -> setCoBind cv2' $ mkCoVarCoercion cv2 `mkTransCoercion` \n co2'\n Derived {} -> return ()\n ; canEqToWorkList gw cv2' (mkTyVarTy tv2) xi2' }\n where \n xi2' = substTyWith [tv1] [xi1] xi2 \n co2' = substTyWith [tv1] [mkCoVarCoercion cv1] xi2 -- xi2 ~ xi2[xi1\/tv1]\n\nrewriteEqLHS :: WhichComesFromInert -> (Coercion,Xi) -> (CoVar,CtFlavor,Xi) -> TcS WorkList\n-- Used to interact two equalities of the following form: \n-- First Equality: co1: (XXX ~ xi1) \n-- Second Equality: cv2: (XXX ~ xi2) \n-- Where the cv1 `canRewrite` cv2 
equality \n-- We have an option of creating new work (xi1 ~ xi2) OR (xi2 ~ xi1), \n-- See Note [Efficient Orientation] for that \nrewriteEqLHS LeftComesFromInert (co1,xi1) (cv2,gw,xi2) \n = do { cv2' <- newCoVar xi2 xi1 \n ; case gw of \n Wanted {} -> setCoBind cv2 $ \n co1 `mkTransCoercion` mkSymCoercion (mkCoVarCoercion cv2')\n Given {} -> setCoBind cv2' $ \n mkSymCoercion (mkCoVarCoercion cv2) `mkTransCoercion` co1 \n Derived {} -> return ()\n ; mkCanonical gw cv2' }\n\nrewriteEqLHS RightComesFromInert (co1,xi1) (cv2,gw,xi2) \n = do { cv2' <- newCoVar xi1 xi2\n ; case gw of\n Wanted {} -> setCoBind cv2 $\n co1 `mkTransCoercion` mkCoVarCoercion cv2'\n Given {} -> setCoBind cv2' $\n mkSymCoercion co1 `mkTransCoercion` mkCoVarCoercion cv2\n Derived {} -> return ()\n ; mkCanonical gw cv2' }\n\nrewriteFrozen :: (CoVar,TcTyVar,Xi) -> (CoVar,CtFlavor) -> TcS WorkList\nrewriteFrozen (cv1, tv1, xi1) (cv2, fl2)\n = do { cv2' <- newCoVar ty2a' ty2b' -- ty2a[xi1\/tv1] ~ ty2b[xi1\/tv1]\n ; case fl2 of\n Wanted {} -> setCoBind cv2 $ co2a' `mkTransCoercion`\n \t \t mkCoVarCoercion cv2' `mkTransCoercion`\n \t \t mkSymCoercion co2b'\n\n Given {} -> setCoBind cv2' $ mkSymCoercion co2a' `mkTransCoercion`\n \t \t\t mkCoVarCoercion cv2 `mkTransCoercion`\n \t \t\t co2b'\n\n Derived {} -> return ()\n\n ; return (workListFromNonEq $ CFrozenErr { cc_id = cv2', cc_flavor = fl2 }) }\n where\n (ty2a, ty2b) = coVarKind cv2 -- cv2 : ty2a ~ ty2b\n ty2a' = substTyWith [tv1] [xi1] ty2a\n ty2b' = substTyWith [tv1] [xi1] ty2b\n\n co2a' = substTyWith [tv1] [mkCoVarCoercion cv1] ty2a -- ty2a ~ ty2a[xi1\/tv1]\n co2b' = substTyWith [tv1] [mkCoVarCoercion cv1] ty2b -- ty2b ~ ty2b[xi1\/tv1]\n\nsolveOneFromTheOther :: String -> (EvTerm, CtFlavor) -> CanonicalCt -> TcS InteractResult\n-- First argument inert, second argument work-item. They both represent \n-- wanted\/given\/derived evidence for the *same* predicate so \n-- we can discharge one directly from the other. 
\n--\n-- Precondition: value evidence only (implicit parameters, classes) \n-- not coercion\nsolveOneFromTheOther info (ev_term,ifl) workItem\n | isDerived wfl\n = mkIRStopK (\"Solved[DW] \" ++ info) emptyWorkList\n\n | isDerived ifl -- The inert item is Derived, we can just throw it away, \n \t \t -- The workItem is inert wrt earlier inert-set items, \n\t\t -- so it's safe to continue on from this point\n = mkIRContinue (\"Solved[DI] \" ++ info) workItem DropInert emptyWorkList\n \n | otherwise\n = ASSERT( ifl `canSolve` wfl )\n -- Because of Note [The Solver Invariant], plus Derived dealt with\n do { when (isWanted wfl) $ setEvBind wid ev_term\n -- Overwrite the binding, if one exists\n\t -- If both are Given, we already have evidence; no need to duplicate\n ; mkIRStopK (\"Solved \" ++ info) emptyWorkList }\n where \n wfl = cc_flavor workItem\n wid = cc_id workItem\n\\end{code}\n\nNote [Superclasses and recursive dictionaries]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n Overlaps with Note [SUPERCLASS-LOOP 1]\n Note [SUPERCLASS-LOOP 2]\n Note [Recursive instances and superclases]\n ToDo: check overlap and delete redundant stuff\n\nRight before adding a given into the inert set, we must\nproduce some more work, that will bring the superclasses \nof the given into scope. The superclass constraints go into \nour worklist. \n\nWhen we simplify a wanted constraint, if we first see a matching\ninstance, we may produce new wanted work. To (1) avoid doing this work \ntwice in the future and (2) to handle recursive dictionaries we may ``cache'' \nthis item as given into our inert set WITHOUT adding its superclass constraints, \notherwise we'd be in danger of creating a loop [In fact this was the exact reason\nfor doing the isGoodRecEv check in an older version of the type checker]. \n\nBut now we have added partially solved constraints to the worklist which may \ninteract with other wanteds. 
Consider the example: \n\nExample 1: \n\n class Eq b => Foo a b --- 0-th selector\n instance Eq a => Foo [a] a --- fooDFun\n\nand wanted (Foo [t] t). We are first going to see that the instance matches \nand create an inert set that includes the solved (Foo [t] t) but not its superclasses:\n d1 :_g Foo [t] t d1 := EvDFunApp fooDFun d3 \nOur work list is going to contain a new *wanted* goal\n d3 :_w Eq t \n\nOk, so how do we get recursive dictionaries, at all: \n\nExample 2:\n\n data D r = ZeroD | SuccD (r (D r));\n \n instance (Eq (r (D r))) => Eq (D r) where\n ZeroD == ZeroD = True\n (SuccD a) == (SuccD b) = a == b\n _ == _ = False;\n \n equalDC :: D [] -> D [] -> Bool;\n equalDC = (==);\n\nWe need to prove (Eq (D [])). Here's how we go:\n\n\td1 :_w Eq (D [])\n\nby instance decl, holds if\n\td2 :_w Eq [D []]\n\twhere \td1 = dfEqD d2\n\n*BUT* we have an inert set which gives us (no superclasses): \n d1 :_g Eq (D []) \nBy the instance declaration of Eq we can show the 'd2' goal if \n\td3 :_w Eq (D [])\n\twhere\td2 = dfEqList d3\n\t\td1 = dfEqD d2\nNow, however this wanted can interact with our inert d1 to set: \n d3 := d1 \nand solve the goal. Why was this interaction OK? Because, if we chase the \nevidence of d1 ~~> dfEqD d2 ~~-> dfEqList d3, so by setting d3 := d1 we \nare really setting\n d3 := dfEqD2 (dfEqList d3) \nwhich is FINE because the use of d3 is protected by the instance function \napplications. \n\nSo, our strategy is to try to put solved wanted dictionaries into the\ninert set along with their superclasses (when this is meaningful,\ni.e. 
when new wanted goals are generated) but solve a wanted dictionary\nfrom a given only in the case where the evidence variable of the\nwanted is mentioned in the evidence of the given (recursively through\nthe evidence binds) in a protected way: more instance function applications \nthan superclass selectors.\n\nHere are some more examples from GHC's previous type checker\n\n\nExample 3: \nThis code arises in the context of \"Scrap Your Boilerplate with Class\"\n\n class Sat a\n class Data ctx a\n instance Sat (ctx Char) => Data ctx Char -- dfunData1\n instance (Sat (ctx [a]), Data ctx a) => Data ctx [a] -- dfunData2\n\n class Data Maybe a => Foo a \n\n instance Foo t => Sat (Maybe t) -- dfunSat\n\n instance Data Maybe a => Foo a -- dfunFoo1\n instance Foo a => Foo [a] -- dfunFoo2\n instance Foo [Char] -- dfunFoo3\n\nConsider generating the superclasses of the instance declaration\n\t instance Foo a => Foo [a]\n\nSo our problem is this\n d0 :_g Foo t\n d1 :_w Data Maybe [t] \n\nWe may add the given in the inert set, along with its superclasses\n[assuming we don't fail because there is a matching instance, see \n tryTopReact, given case ]\n Inert:\n d0 :_g Foo t \n WorkList \n d01 :_g Data Maybe t -- d2 := EvDictSuperClass d0 0 \n d1 :_w Data Maybe [t] \nThen d2 can readily enter the inert, and we also do solving of the wanted\n Inert: \n d0 :_g Foo t \n d1 :_s Data Maybe [t] d1 := dfunData2 d2 d3 \n WorkList\n d2 :_w Sat (Maybe [t]) \n d3 :_w Data Maybe t\n d01 :_g Data Maybe t \nNow, we may simplify d2 more: \n Inert:\n d0 :_g Foo t \n d1 :_s Data Maybe [t] d1 := dfunData2 d2 d3 \n d1 :_g Data Maybe [t] \n d2 :_g Sat (Maybe [t]) d2 := dfunSat d4 \n WorkList: \n d3 :_w Data Maybe t \n d4 :_w Foo [t] \n d01 :_g Data Maybe t \n\nNow, we can just solve d3.\n Inert\n d0 :_g Foo t \n d1 :_s Data Maybe [t] d1 := dfunData2 d2 d3 \n d2 :_g Sat (Maybe [t]) d2 := dfunSat d4 \n WorkList\n d4 :_w Foo [t] \n d01 :_g Data Maybe t \nAnd now we can simplify d4 again, but since it 
has superclasses we *add* them to the worklist:
 Inert
 d0 :_g Foo t
 d1 :_s Data Maybe [t] d1 := dfunData2 d2 d3
 d2 :_g Sat (Maybe [t]) d2 := dfunSat d4
 d4 :_g Foo [t] d4 := dfunFoo2 d5
 WorkList:
 d5 :_w Foo t
 d6 :_g Data Maybe [t] d6 := EvDictSuperClass d4 0
 d01 :_g Data Maybe t
Now, d5 can be solved! (and its superclasses enter scope)
 Inert
 d0 :_g Foo t
 d1 :_s Data Maybe [t] d1 := dfunData2 d2 d3
 d2 :_g Sat (Maybe [t]) d2 := dfunSat d4
 d4 :_g Foo [t] d4 := dfunFoo2 d5
 d5 :_g Foo t d5 := dfunFoo1 d7
 WorkList:
 d7 :_w Data Maybe t
 d6 :_g Data Maybe [t]
 d8 :_g Data Maybe t d8 := EvDictSuperClass d5 0
 d01 :_g Data Maybe t

Now, two problems:
 [1] Suppose we pick d8 and we react it with d01. Which of the two givens should
 we keep? Well, we *MUST NOT* drop d01 because d8 contains recursive evidence
 that must not be used (look at case interactInert where both inert and workitem
 are givens). So we have several options:
 - Drop the workitem always (this will drop d8)
 This feels very unsafe -- what if the work item was the "good" one
 that should be used later to solve another wanted?
 - Don't drop anyone: the inert set may contain multiple givens!
 [This is currently implemented]

The "don't drop anyone" option seems the safest thing to do, so now we come to problem 2:
 [2] We have added both d6 and d01 in the inert set, and we are interacting our wanted
 d7. Now the [isRecDictEv] function in the interaction solver
 [case inert-given workitem-wanted] will prevent us from interacting d7 := d8
 precisely because chasing the evidence of d8 leads us to an unguarded use of d7.

 So, no interaction happens there. Then we meet d01 and there is no recursion
 problem there; [isRecDictEv] gives us the OK to interact, and we do solve d7 := d01!

Note [SUPERCLASS-LOOP 1]
~~~~~~~~~~~~~~~~~~~~~~~~
We have to be very, very careful when generating superclasses, lest we
accidentally build a loop. 
Here's an example:

 class S a

 class S a => C a where { opc :: a -> a }
 class S b => D b where { opd :: b -> b }

 instance C Int where
 opc = opd

 instance D Int where
 opd = opc

From (instance C Int) we get the constraint set {ds1:S Int, dd:D Int}
Simplifying, we may well get:
	$dfCInt = :C ds1 (opd dd)
	dd = $dfDInt
	ds1 = $p1 dd
Notice that we spot that we can extract ds1 from dd.

Alas! Alack! We can do the same for (instance D Int):

	$dfDInt = :D ds2 (opc dc)
	dc = $dfCInt
	ds2 = $p1 dc

And now we've defined the superclass in terms of itself.
Two more nasty cases are in
	tcrun021
	tcrun033

Solution:
 - Satisfy the superclass context *all by itself*
 (tcSimplifySuperClasses)
 - And do so completely; i.e. no left-over constraints
 to mix with the constraints arising from method declarations


Note [SUPERCLASS-LOOP 2]
~~~~~~~~~~~~~~~~~~~~~~~~
We need to be careful when adding "the constraint we are trying to prove".
Suppose we are *given* d1:Ord a, and want to deduce (d2:C [a]) where

	class Ord a => C a where
	instance Ord [a] => C [a] where ...

Then we'll use the instance decl to deduce C [a] from Ord [a], and then add the
superclasses of C [a] to avails. But we must not overwrite the binding
for Ord [a] (which is obtained from Ord a) with a superclass selection or we'll just
build a loop!

Here's another variant, immortalised in tcrun020
	class Monad m => C1 m
	class C1 m => C2 m x
	instance C2 Maybe Bool
For the instance decl we need to build (C1 Maybe), and it's no good if
we run around and add (C2 Maybe Bool) and its superclasses to the avails
before we search for C1 Maybe.

Here's another example
 	class Eq b => Foo a b
	instance Eq a => Foo [a] a
If we are reducing
	(Foo [t] t)

we'll first deduce that it holds (via the instance decl). 
We must not
then overwrite the Eq t constraint with a superclass selection!

At first I had a gross hack, whereby I simply did not add superclass constraints
in addWanted, though I did for addGiven and addIrred. This was sub-optimal,
because it lost legitimate superclass sharing, and it still didn't do the job:
I found a very obscure program (now tcrun021) in which improvement meant the
simplifier got two bites at the cherry... so something seemed to be a Stop
first time, but reducible next time.

Now we implement the Right Solution, which is to check for loops directly
when adding superclasses. It's a bit like the occurs check in unification.

Note [Recursive instances and superclases]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Consider this code, which arises in the context of "Scrap Your
Boilerplate with Class".

 class Sat a
 class Data ctx a
 instance Sat (ctx Char) => Data ctx Char
 instance (Sat (ctx [a]), Data ctx a) => Data ctx [a]

 class Data Maybe a => Foo a

 instance Foo t => Sat (Maybe t)

 instance Data Maybe a => Foo a
 instance Foo a => Foo [a]
 instance Foo [Char]

In the instance for Foo [a], when generating evidence for the superclasses
(ie in tcSimplifySuperClasses) we need a superclass (Data Maybe [a]).
Using the instance for Data, we therefore need
 (Sat (Maybe [a]), Data Maybe a)
But we are given (Foo a), and hence its superclass (Data Maybe a).
So that leaves (Sat (Maybe [a])). Using the instance for Sat means
we need (Foo [a]). And that is the very dictionary we are building
an instance for! So we must put that in the "givens". So in this
case we have
	Given: Foo a, Foo [a]
	Wanted: Data Maybe [a]

BUT we must *not not not* put the *superclasses* of (Foo [a]) in
the givens, which is what 'addGiven' would normally do. Why? 
Because
(Data Maybe [a]) is the superclass, so we'd "satisfy" the wanted
by selecting a superclass from Foo [a], which simply makes a loop.

On the other hand we *must* put the superclasses of (Foo a) in
the givens, as you can see from the derivation described above.

Conclusion: in the very special case of tcSimplifySuperClasses
we have one 'given' (namely the "this" dictionary) whose superclasses
must not be added to 'givens' by addGiven.

There is a complication though. Suppose there are equalities
 instance (Eq a, a~b) => Num (a,b)
Then we normalise the 'givens' wrt the equalities, so the original
given "this" dictionary is cast to one of a different type. So it's a
bit trickier than before to identify the "special" dictionary whose
superclasses must not be added. See test
 indexed-types/should_run/EqInInstance

We need a persistent property of the dictionary to record this
special-ness. Currently I'm using the InstLocOrigin (a bit of a hack,
but cool), which is maintained by dictionary normalisation.
Specifically, if the InstLocOrigin is
	 NoScOrigin
then the no-superclass thing kicks in. WATCH OUT if you fiddle
with InstLocOrigin!

Note [MATCHING-SYNONYMS]
~~~~~~~~~~~~~~~~~~~~~~~~
When trying to match a dictionary (D tau) to a top-level instance, or a
type family equation (F taus_1 ~ tau_2) to a top-level family instance,
we do *not* need to expand type synonyms because the matcher will do that for us.


Note [RHS-FAMILY-SYNONYMS]
~~~~~~~~~~~~~~~~~~~~~~~~~~
The RHS of a family instance is represented as yet another constructor which is
like a type synonym for the real RHS the programmer declared. Eg:
 type instance F (a,a) = [a]
Becomes:
 :R32 a = [a] -- internal type synonym introduced
 F (a,a) ~ :R32 a -- instance

When we react a family instance with a type family equation in the work list
we keep the synonym-using RHS without expansion. 
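As a user-level illustration of the notes above (this is commentary only, not part of this module; the names are illustrative), the kind of family instance discussed in Note [RHS-FAMILY-SYNONYMS] looks like this in source Haskell:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- A type family whose instance RHS ([a]) is the sort of right-hand
-- side that gets an internal synonym constructor (e.g. :R32 a) in the
-- representation described in Note [RHS-FAMILY-SYNONYMS].
type family F a
type instance F (a, a) = [a]

-- Using the instance: F (Int, Int) reduces to [Int].
dup :: a -> F (a, a)
dup x = [x, x]
```

When the solver matches a work-item equation against this instance (matchFam), no synonym expansion is needed, per Note [MATCHING-SYNONYMS].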
\n\n\n*********************************************************************************\n* * \n The top-reaction Stage\n* *\n*********************************************************************************\n\n\\begin{code}\n-- If a work item has any form of interaction with top-level we get this \ndata TopInteractResult \n = NoTopInt -- No top-level interaction\n -- Equivalent to (SomeTopInt emptyWorkList (ContinueWith work_item))\n | SomeTopInt \n { tir_new_work :: WorkList\t-- Sub-goals or new work (could be given, \n -- for superclasses)\n , tir_new_inert :: StopOrContinue -- The input work item, ready to become *inert* now: \n } \t\t-- NB: in ``given'' (solved) form if the \n \t\t-- original was wanted or given and instance match\n \t\t-- was found, but may also be in wanted form if we \n -- only reacted with functional dependencies \n\t\t\t\t\t-- arising from top-level instances.\n\ntopReactionsStage :: SimplifierStage \ntopReactionsStage depth workItem inerts \n = do { tir <- tryTopReact workItem \n ; case tir of \n NoTopInt -> \n return $ SR { sr_inerts = inerts \n , sr_new_work = emptyWorkList \n , sr_stop = ContinueWith workItem } \n SomeTopInt tir_new_work tir_new_inert -> \n do { bumpStepCountTcS\n ; traceFireTcS depth (ptext (sLit \"Top react\")\n <+> vcat [ ptext (sLit \"Work =\") <+> ppr workItem\n , ptext (sLit \"New =\") <+> ppr tir_new_work ])\n ; return $ SR { sr_inerts = inerts \n \t, sr_new_work = tir_new_work\n \t, sr_stop = tir_new_inert\n \t} }\n }\n\ntryTopReact :: WorkItem -> TcS TopInteractResult \ntryTopReact workitem \n = do { -- A flag controls the amount of interaction allowed\n -- See Note [Simplifying RULE lhs constraints]\n ctxt <- getTcSContext\n ; if allowedTopReaction (simplEqsOnly ctxt) workitem \n then do { traceTcS \"tryTopReact \/ calling doTopReact\" (ppr workitem)\n ; doTopReact workitem }\n else return NoTopInt \n } \n\nallowedTopReaction :: Bool -> WorkItem -> Bool\nallowedTopReaction eqs_only (CDictCan {}) = not 
eqs_only\nallowedTopReaction _ _ = True\n\ndoTopReact :: WorkItem -> TcS TopInteractResult \n-- The work item does not react with the inert set, so try interaction with top-level instances\n-- NB: The place to add superclasses in *not* in doTopReact stage. Instead superclasses are \n-- added in the worklist as part of the canonicalisation process. \n-- See Note [Adding superclasses] in TcCanonical.\n\n-- Given dictionary\n-- See Note [Given constraint that matches an instance declaration]\ndoTopReact (CDictCan { cc_flavor = Given {} })\n = return NoTopInt -- NB: Superclasses already added since it's canonical\n\n-- Derived dictionary: just look for functional dependencies\ndoTopReact workItem@(CDictCan { cc_flavor = fl@(Derived loc)\n , cc_class = cls, cc_tyargs = xis })\n = do { instEnvs <- getInstEnvs\n ; let fd_eqns = improveFromInstEnv instEnvs\n (ClassP cls xis, pprArisingAt loc)\n ; m <- rewriteWithFunDeps fd_eqns xis fl\n ; case m of\n Nothing -> return NoTopInt\n Just (xis',_,fd_work) ->\n let workItem' = workItem { cc_tyargs = xis' }\n -- Deriveds are not supposed to have identity (cc_id is unused!)\n in return $ SomeTopInt { tir_new_work = fd_work \n , tir_new_inert = ContinueWith workItem' } }\n\n-- Wanted dictionary\ndoTopReact workItem@(CDictCan { cc_id = dv, cc_flavor = fl@(Wanted loc)\n , cc_class = cls, cc_tyargs = xis })\n = do { -- See Note [MATCHING-SYNONYMS]\n ; lkp_inst_res <- matchClassInst cls xis loc\n ; case lkp_inst_res of\n NoInstance ->\n do { traceTcS \"doTopReact\/ no class instance for\" (ppr dv)\n\n ; instEnvs <- getInstEnvs\n ; let fd_eqns = improveFromInstEnv instEnvs\n (ClassP cls xis, pprArisingAt loc)\n ; m <- rewriteWithFunDeps fd_eqns xis fl\n ; case m of\n Nothing -> return NoTopInt\n Just (xis',cos,fd_work) ->\n do { let dict_co = mkTyConCoercion (classTyCon cls) cos\n ; dv'<- newDictVar cls xis'\n ; setDictBind dv (EvCast dv' dict_co)\n ; let workItem' = CDictCan { cc_id = dv', cc_flavor = fl, \n cc_class = cls, cc_tyargs = 
xis' }\n ; return $ \n SomeTopInt { tir_new_work = workListFromNonEq workItem' `unionWorkList` fd_work\n , tir_new_inert = Stop } } }\n\n GenInst wtvs ev_term -- Solved \n\t \t -- No need to do fundeps stuff here; the instance \n\t\t -- matches already so we won't get any more info\n\t\t -- from functional dependencies\n | null wtvs\n -> do { traceTcS \"doTopReact\/ found nullary class instance for\" (ppr dv) \n ; setDictBind dv ev_term \n -- Solved in one step and no new wanted work produced. \n -- i.e we directly matched a top-level instance\n -- No point in caching this in 'inert'; hence Stop\n ; return $ SomeTopInt { tir_new_work = emptyWorkList \n , tir_new_inert = Stop } }\n\n | otherwise\n -> do { traceTcS \"doTopReact\/ found nullary class instance for\" (ppr dv) \n ; setDictBind dv ev_term \n -- Solved and new wanted work produced, you may cache the \n -- (tentatively solved) dictionary as Given! (used to be: Derived)\n ; let solved = workItem { cc_flavor = given_fl }\n given_fl = Given (setCtLocOrigin loc UnkSkol) \n ; inst_work <- canWanteds wtvs\n ; return $ SomeTopInt { tir_new_work = inst_work\n , tir_new_inert = ContinueWith solved } }\n } \n\n-- Type functions\ndoTopReact (CFunEqCan { cc_id = cv, cc_flavor = fl\n , cc_fun = tc, cc_tyargs = args, cc_rhs = xi })\n = ASSERT (isSynFamilyTyCon tc) -- No associated data families have reached that far \n do { match_res <- matchFam tc args -- See Note [MATCHING-SYNONYMS]\n ; case match_res of \n MatchInstNo \n -> return NoTopInt \n MatchInstSingle (rep_tc, rep_tys)\n -> do { let Just coe_tc = tyConFamilyCoercion_maybe rep_tc\n Just rhs_ty = tcView (mkTyConApp rep_tc rep_tys)\n\t\t\t -- Eagerly expand away the type synonym on the\n\t\t\t -- RHS of a type function, so that it never\n\t\t\t -- appears in an error message\n -- See Note [Type synonym families] in TyCon\n coe = mkTyConApp coe_tc rep_tys \n ; cv' <- case fl of\n Wanted {} -> do { cv' <- newCoVar rhs_ty xi\n ; setCoBind cv $ \n coe 
`mkTransCoercion`
 mkCoVarCoercion cv'
 ; return cv' }
 Given {} -> newGivenCoVar xi rhs_ty $
 mkSymCoercion (mkCoVarCoercion cv) `mkTransCoercion` coe
 Derived {} -> newDerivedId (EqPred xi rhs_ty)
 ; can_cts <- mkCanonical fl cv'
 ; return $ SomeTopInt can_cts Stop }
 _
 -> panicTcS $ text "TcSMonad.matchFam returned multiple instances!"
 }


-- Any other work item does not react with any top-level equations
doTopReact _workItem = return NoTopInt
\end{code}


Note [FunDep and implicit parameter reactions]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Currently, our story of interacting two dictionaries (or a dictionary
and top-level instances) for functional dependencies, and implicit
parameters, is that we simply produce new wanted equalities. So for example

 class D a b | a -> b where ...
 Inert:
 d1 :g D Int Bool
 WorkItem:
 d2 :w D Int alpha

 We generate the extra work item
 cv :w alpha ~ Bool
 where 'cv' is currently unused. However, this new item reacts with d2,
 discharging it in favour of a new constraint d2' thus:
 d2' :w D Int Bool
	d2 := d2' |> D Int cv
 Now d2' can be discharged from d1

We could be more aggressive and try to *immediately* solve the dictionary
using those extra equalities. With the same inert set and work item we
might discharge d2 directly:

 cv :w alpha ~ Bool
 d2 := d1 |> D Int cv

But in general it's a bit painful to figure out the necessary coercion,
so we just take the first approach. Here is a better example. Consider:
 class C a b c | a -> b
And:
 [Given] d1 : C T Int Char
 [Wanted] d2 : C T beta Int
In this case, it's *not even possible* to solve the wanted immediately.
So we should simply output the functional dependency and add this guy
[but NOT its superclasses] back in the worklist. Even worse:
 [Given] d1 : C T Int beta
 [Wanted] d2: C T beta Int
Then it is solvable, but it's very hard to detect this on the spot. 
\n\nIt's exactly the same with implicit parameters, except that the\n\"aggressive\" approach would be much easier to implement.\n\nNote [When improvement happens]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWe fire an improvement rule when\n\n * Two constraints match (modulo the fundep)\n e.g. C t1 t2, C t1 t3 where C a b | a->b\n The two match because the first arg is identical\n\n * At least one is not Given. If they are both given, we don't fire\n the reaction because we have no way of constructing evidence for a\n new equality nor does it seem right to create a new wanted goal\n (because the goal will most likely contain untouchables, which\n can't be solved anyway)!\n \nNote that we *do* fire the improvement if one is Given and one is Derived.\nThe latter can be a superclass of a wanted goal. Example (tcfail138)\n class L a b | a -> b\n class (G a, L a b) => C a b\n\n instance C a b' => G (Maybe a)\n instance C a b => C (Maybe a) a\n instance L (Maybe a) a\n\nWhen solving the superclasses of the (C (Maybe a) a) instance, we get\n Given: C a b ... and hance by superclasses, (G a, L a b)\n Wanted: G (Maybe a)\nUse the instance decl to get\n Wanted: C a b'\nThe (C a b') is inert, so we generate its Derived superclasses (L a b'),\nand now we need improvement between that derived superclass an the Given (L a b)\n\nNote [Overriding implicit parameters]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nConsider\n f :: (?x::a) -> Bool -> a\n \n g v = let ?x::Int = 3 \n in (f v, let ?x::Bool = True in f v)\n\nThis should probably be well typed, with\n g :: Bool -> (Int, Bool)\n\nSo the inner binding for ?x::Bool *overrides* the outer one.\nHence a work-item Given overrides an inert-item Given.\n\nNote [Given constraint that matches an instance declaration]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWhat should we do when we discover that one (or more) top-level \ninstances match a given (or solved) class constraint? We have \ntwo possibilities:\n\n 1. 
Reject the program. The reason is that there may not be a unique
 best strategy for the solver. Example, from the OutsideIn(X) paper:
 instance P x => Q [x]
 instance (x ~ y) => R [x] y

 wob :: forall a b. (Q [b], R b a) => a -> Int

 g :: forall a. Q [a] => [a] -> Int
 g x = wob x

 will generate the implication constraint:
 Q [a] => (Q [beta], R beta [a])
 If we react (Q [beta]) with its top-level axiom, we end up with a
 (P beta), which we have no way of discharging. On the other hand,
 if we react R beta [a] with the top-level we get (beta ~ a), which
 is solvable and can help us rewrite (Q [beta]) to (Q [a]) which is
 now solvable by the given Q [a].

 However, this option is restrictive, for instance [Example 3] from
 Note [Recursive dictionaries] will fail to work.

 2. Ignore the problem, hoping that situations where such multiple
 strategies indeed exist are rare: Indeed the cause of the previous
 problem is that (R [x] y) yields the new work (x ~ y) which can be
 *spontaneously* solved, not using the givens.

We are choosing option 2 below but we might consider having a flag as well.


Note [New Wanted Superclass Work]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Even in the case of wanted constraints, we may add some superclasses
as new given work. The reason is:

 To allow FD-like improvement for type families. Assume that
 we have a class
 class C a b | a -> b
 and we have to solve the implication constraint:
 C a b => C a beta
 Then, FD improvement can help us to produce a new wanted (beta ~ b)

 We want to have the same effect with the type family encoding of
 functional dependencies. Namely, consider:
 class (F a ~ b) => C a b
 Now suppose that we have:
 given: C a b
 wanted: C a beta
 By interacting the given we will get given (F a ~ b) which is not
 enough by itself to make us discharge (C a beta). 
However, we
 may create a new derived equality from the super-class of the
 wanted constraint (C a beta), namely derived (F a ~ beta).
 Now we may interact this with given (F a ~ b) to get:
 derived : beta ~ b
 But 'beta' is a touchable unification variable, and hence OK to
 unify it with 'b', replacing the derived evidence with the identity.

 This requires trySpontaneousSolve to solve *derived*
 equalities that have a touchable in their RHS, *in addition*
 to solving wanted equalities.

We also need to somehow use the superclasses to quantify over a minimal
constraint; see Note [Minimize by Superclasses] in TcSimplify.


Finally, here is another example where this is useful.

Example 1:
----------
 class (F a ~ b) => C a b
And we are given the wanteds:
 w1 : C a b
 w2 : C a c
 w3 : b ~ c
We surely do *not* want to quantify over (b ~ c), since if someone provides
dictionaries for (C a b) and (C a c), these dictionaries can provide a proof
of (b ~ c), hence no extra evidence is necessary. Here is what will happen:

 Step 1: We will get new *given* superclass work,
 provisionally to our solving of w1 and w2

 g1: F a ~ b, g2 : F a ~ c,
 w1 : C a b, w2 : C a c, w3 : b ~ c

 The evidence for g1 and g2 is a superclass evidence term:

 g1 := sc w1, g2 := sc w2

 Step 2: The givens will solve the wanted w3, so that
 w3 := sym (sc w1) ; sc w2

 Step 3: Now, one may naively assume that then w2 can be solved from w1
 after rewriting with the (now solved equality) (b ~ c).

 But this rewriting is ruled out by the isGoodRecDict check!

Conclusion: we will (correctly) end up with the unsolved goals
 (C a b, C a c)

NB: The desugarer needs to be more clever to deal with equalities
 that participate in recursive dictionary bindings. 
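The type-family encoding of functional dependencies that this note relies on can be sketched as a small user-level program (commentary only; the names are illustrative, not part of this module):

```haskell
{-# LANGUAGE TypeFamilies, MultiParamTypeClasses, FlexibleContexts #-}

-- Sketch of 'class (F a ~ b) => C a b': the superclass equality
-- plays the role of the functional dependency a -> b.
type family F a
class (F a ~ b) => C a b where
  conv :: a -> b

-- One instance; the superclass obligation F Int ~ Bool is
-- discharged by the type instance below.
type instance F Int = Bool
instance C Int Bool where
  conv = (> 0)

-- At a use site, a wanted (C Int beta) yields the superclass
-- equality (F Int ~ beta), improving beta to Bool as described.
positive :: Bool
positive = conv (3 :: Int)
```

This mirrors the note's given/wanted example: interacting the wanted's superclass equality with the family axiom is what lets the solver fix the second class parameter.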
\n\n\\begin{code}\ndata LookupInstResult\n = NoInstance\n | GenInst [WantedEvVar] EvTerm \n\nmatchClassInst :: Class -> [Type] -> WantedLoc -> TcS LookupInstResult\nmatchClassInst clas tys loc\n = do { let pred = mkClassPred clas tys \n ; mb_result <- matchClass clas tys\n ; case mb_result of\n MatchInstNo -> return NoInstance\n MatchInstMany -> return NoInstance -- defer any reactions of a multitude until \n -- we learn more about the reagent \n MatchInstSingle (dfun_id, mb_inst_tys) -> \n do { checkWellStagedDFun pred dfun_id loc\n\n \t-- It's possible that not all the tyvars are in\n\t-- the substitution, tenv. For example:\n\t--\tinstance C X a => D X where ...\n\t-- (presumably there's a functional dependency in class C)\n\t-- Hence mb_inst_tys :: Either TyVar TcType \n\n ; tys <- instDFunTypes mb_inst_tys \n ; let (theta, _) = tcSplitPhiTy (applyTys (idType dfun_id) tys)\n ; if null theta then\n return (GenInst [] (EvDFunApp dfun_id tys []))\n else do\n { ev_vars <- instDFunConstraints theta\n ; let wevs = [EvVarX w loc | w <- ev_vars]\n ; return $ GenInst wevs (EvDFunApp dfun_id tys ev_vars) }\n }\n }\n\\end{code}\n","avg_line_length":43.652259332,"max_line_length":109,"alphanum_fraction":0.6040775913} {"size":18450,"ext":"lhs","lang":"Literate Haskell","max_stars_count":1.0,"content":"% -*- LaTeX -*-\n\\documentclass{tmr}\n\n\\usepackage{mflogo}\n\n%include polycode.fmt\n\n\\title{Guidelines for Authors}\n\\author{Shae Matijs Erisson\\email{shae@@scannedinavian.com}}\n\\author{Andres L\\\"oh\\email{kstmr@@andres-loeh.de}}\n\\author{Wouter Swierstra\\email{wss@@cs.nott.ac.uk}}\n\\author{Brent Yorgey\\email{byorgey@@cis.upenn.edu}}\n\n\\begin{document}\n\n\\begin{introduction}\nThis text, written in the style of a typical \\TMR\\\narticle, gives guidelines and advice for \\TMR\\ authors. 
We explain\nthe \\TeX nical constructs needed to write an article, and give a\nfew hints on how to improve style.\n\\end{introduction}\n\n\\section{Getting started}\n\nIf you want to write an article for \\TMR~\\cite{auth:tmr}, you need a special\n\\LaTeX\\ class file called \\verb+tmr.cls+. Currently, the best way\nto retrieve the latest version is to say\n\\begin{Verbatim}\ndarcs get http:\/\/code.haskell.org\/~byorgey\/TMR\/Guidelines\n\\end{Verbatim}\nassuming you have \\verb+darcs+~\\cite{auth:darcs} installed. If you do not use darcs, you can also download a zipfile containing the same files from\n\\begin{Verbatim}\nhttp:\/\/code.haskell.org\/~byorgey\/TMR\/TMR.zip\n\\end{Verbatim}\n\nPlace the file in a directory where your \\TeX\\ installation\ncan find it. If you do not know how to do this, you can put\nit in the same directory where the sources of your article\nreside -- this should always work.\n\nThen, you have to instruct \\LaTeX\\ to use the class file,\nby using\n\\begin{Verbatim}\n\\documentclass{tmr}\n\\end{Verbatim}\nas the first line of your document. There is no need to specify\nany class options (such as options affecting the default font\nor paper size), because \\verb+tmr.cls+ automatically sets everything\naccording to the defaults for \\TMR.\n\nThe zip file and darcs repository contain several other useful\nfiles. The file \\verb+Author.tex+ contains the source code that generated\nthis file. The other files, \\verb+tmr.bst+ and \\verb+tmr.dbj+, are needed to\nformat the bibliography. Make sure your \\TeX\\ installation can locate\nthese files as well.\n\n\\section{The title page}\n\nEach article starts with meta-information about the title\nand the authors. 
The title is set using the \\verb+\\title+ command.\n\\begin{compactitem}\n\\item Capitalize all relevant words in a title.\n\\end{compactitem}\nThe authors are given by one or more \\verb+\\author+ commands.\n\\begin{compactitem}\n\\item Include e-mail addresses of the authors using the\n\\verb+\\email+ command.\n\\item Put no space between the author and\n the \\verb+\\email+ command.\n\\item Include no further information about the authors.\n\\end{compactitem}\nAll of these commands should appear in\nthe \\emph{preamble} of the document, that is, before the\n\\verb+\\begin{document}+ command. The header of this document is shown\nin Figure~\\ref{header}.\n\n\\begin{figure}\n\\begin{Verbatim}\n\\title{\\TMR\\ Guidelines for Authors}\n\\author{Shae Matijs Erisson\\email{shae@@scannedinavian.com}}\n\\author{Andres L\\\"oh\\email{loeh@@iai.uni-bonn.de}}\n\\author{Wouter Swierstra\\email{wss@@cs.nott.ac.uk}}\n\\author{Brent Yorgey\\email{byorgey@@cis.upenn.edu}}\n\\end{Verbatim}\n\\caption{Example header with title and author information}\n\\label{header}\n\\end{figure}\n\n\n\\section{Introduction \/ abstract}\n\nAbstracts are common practice for academic papers, to give\nan overview of the contents of a paper. \\TMR\\ is not an academic\njournal, so a more casual introduction is in place. For this\npurpose, there is an \\verb+introduction+ environment available.\n\\begin{compactitem}\n\\item Keep the introduction short. The first section of the\n article should always start on the first page.\n\\item Use as little markup in the introduction as possible.\n Avoid itemizations or enumerations.\n\\end{compactitem}\nThe introduction is printed in italics, and belongs between\nthe header and the first section title.\n\nAs an example, we show the introduction of this document\nin Figure~\\ref{intro}.\n\\begin{figure}\n\\begin{Verbatim}\n\\begin{introduction}\nThis text, written in the style of a typical \\TMR\\ article, gives\nguidelines and advice for \\TMR\\ authors. 
We explain the \\TeX nical\nconstructs needed to write an article, and give a few hints on how\nto improve style.\n\\end{introduction}\n\\end{Verbatim}\n\\caption{Example introduction}\n\\label{intro}\n\\end{figure}\n\n\\section{Structuring the article}\n\n\\subsection{Sections}\n\nAn article is structured in sections and subsections,\ncreated by the commands \\verb+\\section+ and \\verb+\\subsection+\nrespectively. Avoid more levels of structuring, and don't\ncreate your own sectioning constructs by starting paragraphs\nwith emphasized words. Avoid math or code in sections.\nIf you must, use the same fonts as elsewhere.\n\nBoth sections and subsections do not have numbers. If\nyou'd like to refer to a section, try to refer to it\nby name and page, using \\verb+\\pageref+.\n\nIf you write a very long article and really need\nnumbers, you can always make them part of the name\nof a section, such as ``Explanation -- Part 1''.\n\n\\subsection{Paragraphs}\n\nYou can start a new paragraph using a blank line or\nby saying \\verb+\\par+. Never use \\verb+\\\\+ to start a new line\nin ordinary paragraphs. Avoid using manually inserted\nvertical spaces.\n\n\\subsection{Enumerations}\n\nAvoid extreme use of enumerations. In particular, try\nto avoid nested enumerations. 
The default of the \\TMR\\\nclass is to display enumerations rather tight, like\nthis\n\\begin{itemize}\n\\item foo\n\\item bar\n\\end{itemize}\nIf you have very long items, possibly even including\nparagraph breaks, use the \\verb+longitem+ or\n\\verb+longenum+ environments.\n\\begin{longenum}\n\\item This is an example of the \\verb+longenum+ environment.\n It has some vertical space to separate it from the\n surrounding text \\dots\n\\item \\dots and the different items are also separated\n by vertical space.\n\\end{longenum}\n\n\\subsection{Blocks (theorems, examples, exercises etc.)}\n\nA good way to structure results is to use labelled blocks.\nTheorems, definitions, examples, exercises, are all examples\nof this. The class file provides a large number of predefined\nenvironments for this purpose.\n\nThe \\verb+theorem+, \\verb+lemma+, and \\verb+corollary+ environments\ncreate blocks in the following style:\n\\begin{theorem}\nThis is an important theorem.\n\\end{theorem}\n\nAny block can have an optional argument, which will be\nused as its name.\n\\begin{corollary}[Adams]\nThe answer is 42.\n\\end{corollary}\n\n\\begin{remark}\n The \\verb+remark+ block is predefined and looks like this. All\n other blocks are numbered, and use the same counter. Other\n predefined blocks with a counter (besides \\verb+theorem+ and\n \\verb+corollary+) are \\verb+definition+, \\verb+example+, and\n \\verb+exercise+. They appear as follows:\n\\end{remark}\n\n\\begin{exercise}\nNew sorts of blocks can be defined using the \\verb+\\theoremstyle+\nand \\verb+\\newtheorem+ commands. The style can be either\n\\verb+plain+ (as for \\verb+theorem+, \\verb+lemma+ etc.) 
or\n\\verb+definition+ (as for \\verb+definition+, \\verb+example+ etc.).\nThe \\verb+exercise+ environment, for example, is defined using the\ncode shown in Listing~\\ref{lst:exercise}.\n\\end{exercise}\n\n\\begin{listing}\n\\begin{Verbatim}\n\\theoremstyle{definition}\n\\newtheorem{exercise}{Exercise}\n\\end{Verbatim}\n\\caption{Definition of the \\texttt{exercise} environment}\n\\label{lst:exercise}\n\\end{listing}\n\n\\begin{proof}\nProofs can be typeset using the \\verb+proof+ environment.\nPlease don't use your own environment for proofs.\nAlso, don't use anything other than the standard\nend-of-proof symbol. If the placement of the end-of-proof\nsymbol is not optimal, it can be forced to a different\nposition using \\verb+\\qedhere+.\n\\end{proof}\n\n\\subsection{Floats and images}\n\nUse floats for everything that is longer than a few\nlines and should not break across pages. Most code\nexamples, tables, diagrams etc.~should use a floating\nenvironment. Don't be afraid that they do as their\nname says and appear in a different place from where they\nwere defined.\n\nAll floating environments need a caption and are\nnumbered, so they can be referred to by number.\n\nThere are three predefined floating environments:\n\\verb+figure+, \\verb+table+, and \\verb+listing+.\n\nImages should be included using \\verb+\\includegraphics+\nor using the commands provided by the \\verb+pgf+ package\n(\\verb+pgfdeclareimage+ and \\verb+pgfuseimage+).\nAvoid using \\verb+\\epsfig+ or the like.\n\n\\subsection{Cross-references}\n\nNever refer to page numbers absolutely; if you wish to refer to a\ncertain page, always use \\verb+\\pageref+. In the official \\TMR, your\narticle will not start on page 1.\n\n\\LaTeX\\ offers powerful facilities to cross-reference correctly to\nauto-numbered things such as figures, tables, code listings, theorems,\nand equations. 
Use \\verb+\\label+ and \\verb+\\ref+\/\\verb+\\pageref+\nwhenever possible.\n\nIf you refer to something by number, always mention what it is, and\ncapitalize the category. For example, say ``Figure 1'' rather than\n``figure 1''. The \\verb+prettyref+ package~\\cite{auth:prettyref} can\nhelp consistently generate such references.\n\n\\subsection{Footnotes}\n\nAvoid them as much as possible. Footnotes\nare disabled by default. If you need one, you have\nto use \\verb+\\musthavefootnote+ rather than \\verb+\\footnote+.\n\n\n\\section{Verbatim and code}\n\nCode examples should, if they exceed a few lines,\nbe placed in floats, preferably of the \\verb+listing+\ncategory. Code can be either plain verbatim or\nformatted.\n\nFor displayed verbatim, use the \\verb+Verbatim+ rather than the\n\\verb+verbatim+ environment. This reimplementation, offered\nby the \\verb+fancyvrb+~\\cite{auth:fancyvrb} package, offers\nmany additional features.\n\nFor formatted Haskell or Agda code, we recommend using\nlhs2\\TeX~\\cite{auth:lhs2tex}. You should use lhs2\\TeX\\ for Haskell\ncode even if you want the code typeset using a typewriter font;\nlhs2\\TeX\\ has various options to control the formatting. If you want\nyour article to be valid Haskell source, surround your code blocks\nwith \\verb+\\begin{code}+ and \\verb+\\end{code}+; such sections will be\nread by \\verb+ghc+ or \\verb+ghci+ when loaded. 
For code blocks which\nshould be typeset but ignored by \\verb+ghc+, use \\verb+\\begin{spec}+\nand \\verb+\\end{spec}+.\n\nFor example, this:\n\\begin{Verbatim}[commandchars=\\\\\\{\\}]\n\\textbackslash{}begin\\{spec\\}\nclass Foo f where\n foo :: f a -> f b -> f (a,b)\n -- use for wibbling\n\nbar :: Int -> Bool\nbar 23 = False\nbar x = not (x `elem` [1,2,3])\n\\textbackslash{}end\\{spec\\}\n\\end{Verbatim}\nproduces this:\n\\begin{spec}\nclass Foo f where\n foo :: f a -> f b -> f (a,b)\n -- use for wibbling\n\nbar :: Int -> Bool\nbar 23 = False\nbar x = not (x `elem` [1,2,3])\n\\end{spec}\n\nDo not use the \\verb+listings+~\\cite{auth:listings} package for\nHaskell code, since it produces output that is quite ugly. You may\nuse the \\verb+listings+ package if you need to typeset code in some\nother language.\n\n\\section{Diagrams and images}\n\nFor diagrams, we recommend \\MP~\\cite{auth:metapost} and\n\\verb+pgf+~\\cite{auth:pgf}, but\nother tools are possible, too.\nTry to use the same font (\\ie\\ Computer Modern) in pictures,\nif possible without too much effort.\n\n\\section{Math}\n\n\\subsection{Displayed math}\n\nAvoid using \\verb+$$+ for display-style math. Use \\verb+\\[+ and \\verb+\\]+ or any of the\n\\verb+amsmath+~\\cite{auth:amsmath} environments, such as \\verb+align+, \\verb+multline+, or\n\\verb+gather+. Similarly, don't use \\verb+eqnarray+, but rather \\verb+align+ or \\verb+alignat+.\n\nDon't be afraid of using display-style math. There is no page limit for\narticles in TMR, so space is not an issue. If a formula is too high to fit on\na line and \\TeX\\ increases the interline spacing in order to cope with it,\nthis is usually a good indication that the formula should be displayed.\n\n\\subsection{Numbering of equations}\n\nDon't use equation numbers if you do\nnot need them for reference. On the other hand, do use equation numbers\nif you refer to them! 
Use \\verb+\\eqref+ to refer to equations.\nDon't use symbols to mark equations; use the\nstandard mechanism. Example:\n\\begin{equation}\n1 + 1\\label{oneplusone}\n\\end{equation}\nThe equation \\eqref{oneplusone} evaluates to $2$.\n\n\n\\subsection{Text in math}\n\nText in math mode should be written using \\verb+\\text+.\nIf you want to print words in italic within math\nmode, use \\verb+\\text{\\textit{foo}}+ or \\verb+\\mathit{foo}+,\nbut never simply \\verb+foo+. Compare the results:\n\\[ foo \\text{ (plain foo)} \\qquad \\mathit{foo} \\text{ (math italics)} . \\]\n\n\n\\section{General advice}\n\n\\subsection{Spelling}\nPlease, please, please use a spell checker. Plenty of free\nspell checkers that know to ignore \\LaTeX\\ commands, such as ispell or\naspell, are readily available. It makes the life of an editor much\neasier.\n\nYou may use either British or American spelling, whichever you\nprefer, as long as you are consistent.\n\n\\subsection{Don't touch the defaults}\n\nMost default settings are chosen consciously, to allow a\nconsistent appearance of \\TMR\\ to the reader. Therefore,\nmost things should just be left alone. For example:\n\\begin{compactitem}\n\\item don't change the page layout;\n\\item don't change the default fonts;\n\\item don't change the font sizes (\\ie\\ avoid\n the use of commands such as \\verb+\\small+ or \\verb+\\large+); and\n\\item don't change the vertical spacing (\\ie\\ avoid the use of\n \\verb+\\vskip+, \\verb+\\vspace+, \\verb+\\\\+, \\verb+\\bigskip+,\n \\verb+\\medskip+, and \\verb+\\smallskip+, and do not change the\n interline space).\n\\end{compactitem}\n\n\\subsection{Line and page breaks}\n\nIt is the job of the editor to get page breaks right. Avoid inserting\ncommands such as \\verb+\\enlargethispage+, \\verb+\\pagebreak+, or \\verb+\\newpage+\nin your article. On the other hand, try to help \\LaTeX\\ to\nbreak lines where it fails on its own. 
Prevent undesired line breaks\nusing non-breakable spaces \\verb+~+. Prevent hyphenation of words using\n\\verb+\\mbox+. Help with the hyphenation of words using \\verb+\\-+. Try not to\nhave any overfull horizontal boxes in your final document.\n\n\\subsection{Usage of dashes}\n\nUse the \\verb+-+ symbol only to connect words as in\n``type-checking algorithm''. Use \\verb+-+\\verb+-+ for ranges\nsuch as ``1--5''. In this case, the `--' is not surrounded\nby spaces. Use `--' also to separate thoughts -- such\nas this one -- from the rest of the sentence. In this\ncase, the `--' is surrounded by spaces. Do not use\nthe \\verb+-+\\verb+-+\\verb+-+ symbol `---' at all.\n\n\\subsection{Quotation marks}\n\nUse two backticks to produce an opening quotation mark, and two single\nquotes to produce a closing one. (Most editors with a specific\nediting mode for \\LaTeX\\ will automatically produce such single quotes\nwhen you type a double quote character.) For example, \\verb+``this''+\nwill produce ``this''. Do not use actual double quote characters in a\n\\LaTeX{} document, since it will look bad, \"like this\".\n\nPunctuation following quoted text should use so-called ``logical\nstyle''. That is, put punctuation before a closing quotation mark\nonly if the punctuation actually belongs to the phrase or sentence\nbeing quoted. Do not do ``this,''; instead, do ``this''.\n\n\\subsection{\\eg, \\ie, and \\emph{etc.}}\n\nTry to avoid using the abbreviations \\eg\\ and\n\\ie\\ However, if you must:\n\\begin{itemize}\n\\item Don't confuse them. \\eg\\ stands for \\textit{exempli gratia} and\n means ``for example''; \\ie\\ stands for \\textit{id est} and means\n ``that is''. ``There were many people at the conference,\n \\eg{} Shae, Andres, Wouter, and Brent. 
We met all of the cool ones,\n \\ie{} everyone.''\n\n\\item Use the provided commands \\verb+\\ie+ and \\verb+\\eg+ so they will\n be properly typeset.\n\\item Do not follow them by a comma.\n\\end{itemize}\n\nUnder no circumstances should you use \\textit{etc.}\n\n\\subsection{Use complete sentences}\n\nUse complete sentences with proper punctuation, even if they involve\nenumerations or math. For example: we can\nnow see that \\[ x = 4, \\] and substituting this into equation (6)\nyields \\[ y = \\sqrt{4 + \\tau}. \\]\n\nTry to avoid starting sentences with mathematical symbols or function\nnames. Good: The function \\verb+map+ rocks. Bad: \\verb+map+ rocks.\n\n\\subsection{Use consistent markup}\n\nUse italics for mathematical symbols, also when they appear embedded\nin text: We have $n$ items. Use consistent markup also for Haskell\nfunction names. If you use verbatim code, then surround function names\nwith\n\\verb+\\verb+. If you use lhs2\\TeX, surround function names with\nvertical bars.\n\nPrefer \\verb+\\emph+ for emphasis over \\verb+\\textbf+ or \\verb+\\textit+.\nMore importantly,\ndon't use \\verb+\\bf+, \\verb+\\it+, \\verb+\\sc+, etc.\\ --\nuse \\verb+\\textbf+, \\verb+\\textit+, \\verb+\\textsc+ etc.\\ instead.\n\nTry to define your own macros and use logical markup, so that changes in layout\nare easy at a later stage.\n\n\\subsection{Footnotes}\n\nTry to avoid using footnotes. Footnotes tend to disrupt the typesetting and are\nusually unnecessary: remember that \\TMR\\ is not an academic publication!\n\n\n\\subsection{Bibliography}\n\nDon't link to web pages directly; make them a reference instead. Try to include\nweb pages for as many references as possible.\n\nCitations are listed in order of appearance. Citations appear as\nnumbers, which should not be used as nouns. Don't say: In\n\\cite{auth:tackling}, we see this. Rather say: Peyton\nJones~\\cite{auth:tackling} shows that. 
Don't be afraid to use the\nnames of authors in your sentences. It can save your readers the\neffort of looking up who wrote the paper.\n\nUse a space before a \\verb+\\cite+, or better, \\verb+~+ (a non-breaking\nspace), like this: \\verb+foo bar~\\cite{jones}+. Generally, try to use\n\\verb+~+ frequently to avoid bad line breaks. For example, writing\n\\verb+a function~$f$+ is better than \\verb+a function $f$+.\n\n\\section{Final remarks}\n\nIf you have any specific instructions for the editor, add an attachment to your\nsubmission. Don't include in-line remarks to editors or reviewers.\n\n\\TMR\\ is meant to be light-hearted and easy to read. Don't write code that is\ngratuitously complex. Don't shun mathematical formulas, but avoid using too many\ntechnical terms without explaining what you mean. Not everyone knows what a left\nKan extension is -- relish the opportunity to explain these things to a wider\naudience.\n\nTry to keep your sentences short and to the point. Keep your writing informal,\nbut precise. Use plenty of examples to drive your point home. Don't try to write\nan academic paper about the unified theory of computing science; just try to\nshow how cool functional programming can be. 
Above all, however, don't forget to\nhave fun!\n\n\\bibliography{Author}\n\n\\end{document}\n","avg_line_length":36.4624505929,"max_line_length":147,"alphanum_fraction":0.7571815718} {"size":10536,"ext":"lhs","lang":"Literate Haskell","max_stars_count":48.0,"content":"\n\n%if False\n\n> {-# OPTIONS_GHC -F -pgmF she #-}\n> {-# LANGUAGE TypeOperators, GADTs, KindSignatures,\n> TypeSynonymInstances, FlexibleInstances, PatternGuards #-}\n\n> module Tests.Tactics where\n\n> import BwdFwd\n> import Tm\n> import Rules\n\n%endif\n\n\\subsection{Some machinery}\n\n> fromRight (Right x) = x\n> fromRight (Left y) = error $ \"fromRight: got a left term: \" ++ show y\n\n> isRight (Right _) = True\n> isRight _ = False\n\n\\subsection{Enum}\n\nbranches is supposed to build the following term:\n\n> branchesOpRun t e' p = TIMES (p $$ A ZE) \n> (branchesOp @@ [e' , L (H (B0 :< p) \n> \"\" (N (V 1 :$ A ((C (Su (N (V 0))))))))])\n\nLet's test it:\n\n> testBranches = equal (typ :>: (fromRight $ withTac, orig)) (B0,3)\n> where t = N (P ([(\"\",0)] := DECL :<: UID))\n> e'= N (P ([(\"\",1)] := DECL :<: ENUMU))\n> p = N (P ([(\"\",2)] := DECL :<: ARR (ENUMT (CONSE t e')) SET))\n> typ = SET\n> withTac = opRun branchesOp [CONSE t e', p]\n> orig = branchesOpRun t e' p\n\nswitch is supposed to build the following term:\n\n> switchOpRun t e' p ps n =\n> switchOp @@ [e' \n> , L (H (B0 :< p) \"\" (N (V 1 :$ A ((C (Su (N (V 0))))))))\n> , ps $$ Snd\n> , n ]\n\nLet's test it:\n\n> testSwitch = equal (typ :>: (fromRight $ withTac, orig)) (B0,5)\n> where t = N (P ([(\"\",0)] := DECL :<: UID))\n> e'= N (P ([(\"\",1)] := DECL :<: ENUMU))\n> p = N (P ([(\"\",2)] := DECL :<: ARR (ENUMT (CONSE t e')) SET))\n> ps = N (P ([(\"\",3)] := DECL :<: branchesOp @@ [CONSE t e', ARR (ENUMT (CONSE t e')) SET]))\n> n = N (P ([(\"\",4)] := DECL :<: ENUMT (CONSE t e')))\n> typ = SET\n> withTac = opRun switchOp [CONSE t e' , p , ps , SU n]\n> orig = switchOpRun t e' p ps n\n\n\n\\subsection{Desc}\n\nDesc on Arg is 
supposed to build this term:\n\n> argDescRun x y z = eval [.x.y.z. \n> SIGMA (NV x) . L $ \"\" :. [.a.\n> (N (descOp :@ [y $# [a],NV z]))\n> ]] $ B0 :< x :< y :< z\n\nLet's test it:\n\n> testDescArg = equal (typ :>: (fromRight $ withTac, orig)) (B0,3)\n> where x = N (P ([(\"\",0)] := DECL :<: SET))\n> y = N (P ([(\"\",1)] := DECL :<: ARR x DESC))\n> z = N (P ([(\"\",2)] := DECL :<: SET))\n> typ = SET\n> withTac = opRun descOp [ARG x y, z]\n> orig = argDescRun x y z\n\nInd is supposed to build this term:\n\n> indDescRun x y z = TIMES (ARR x z) (descOp @@ [y,z])\n\nLet's test it:\n\n> testDescInd = equal (typ :>: (fromRight $ withTac, orig)) (B0,3)\n> where x = N (P ([(\"\",0)] := DECL :<: SET))\n> y = N (P ([(\"\",1)] := DECL :<: DESC))\n> z = N (P ([(\"\",2)] := DECL :<: SET))\n> typ = SET\n> withTac = opRun descOp [IND x y, z]\n> orig = indDescRun x y z\n\nJust check that we can use Ind1:\n\n> testDescInd1 = isRight withTac \n> where x = N (P ([(\"\",1)] := DECL :<: DESC))\n> z = N (P ([(\"\",2)] := DECL :<: SET))\n> typ = SET\n> withTac = opRun descOp [IND1 x, z]\n\n\nBox on an Arg is supposed to build this term:\n\n> boxArgRun a f d p v = boxOp @@ [f $$ A (v $$ Fst),d,p,v $$ Snd] \n\nLet's test it:\n\n> testBoxArg = equal (typ :>: (fromRight $ withTac, orig)) (B0,5)\n> where a = N (P ([(\"\",0)] := DECL :<: SET))\n> f = N (P ([(\"\",1)] := DECL :<: ARR a DESC))\n> d = N (P ([(\"\",2)] := DECL :<: SET))\n> p = N (P ([(\"\",3)] := DECL :<: ARR d SET))\n> v = N (P ([(\"\",4)] := DECL :<: descOp @@ [ARG a f, p]))\n> typ = SET\n> withTac = opRun boxOp [ARG a f, d, p, v]\n> orig = boxArgRun a f d p v\n\nBox on an Ind is supposed to build this term:\n\n> boxIndRun h x d p v =\n> eval [.h.x.d.p.v.\n> TIMES (C (Pi (NV h) . L $ \"\" :. 
[.y.\n> N (V p :$ A (N (V v :$ Fst :$ A (NV y))))]))\n> (N (boxOp :@ [NV x,NV d,NV p,N (V v :$ Snd)]))\n> ] $ B0 :< h :< x :< d :< p :< v\n\nLet's test it:\n\n> testBoxInd = equal (typ :>: (fromRight $ withTac, orig)) (B0,5)\n> where h = N (P ([(\"\",0)] := DECL :<: SET))\n> x = N (P ([(\"\",1)] := DECL :<: DESC))\n> d = N (P ([(\"\",2)] := DECL :<: SET))\n> p = N (P ([(\"\",3)] := DECL :<: ARR d SET))\n> v = N (P ([(\"\",4)] := DECL :<: descOp @@ [IND h x, p]))\n> typ = SET\n> withTac = opRun boxOp [IND h x, d, p, v]\n> orig = boxIndRun h x d p v\n\nJust check that box does something on Ind1:\n\n> testBoxInd1 = isRight withTac\n> where x = N (P ([(\"\",1)] := DECL :<: DESC))\n> d = N (P ([(\"\",2)] := DECL :<: SET))\n> p = N (P ([(\"\",3)] := DECL :<: ARR d SET))\n> v = N (P ([(\"\",4)] := DECL :<: descOp @@ [IND1 x, p]))\n> typ = SET\n> withTac = opRun boxOp [IND1 x, d, p, v]\n\n\nMapbox on an Arg is supposed to build this term:\n\n> mapboxArgRun a f d bp p v =\n> mapBoxOp @@ [f $$ (A (v $$ Fst)),d,bp,p,v $$ Snd]\n\nLet's test it:\n\n> testMapboxArg = equal (typ :>: (fromRight $ withTac, orig)) (B0,6)\n> where a = N (P ([(\"\",0)] := DECL :<: SET))\n> f = N (P ([(\"\",1)] := DECL :<: ARR a DESC))\n> d = N (P ([(\"\",2)] := DECL :<: SET))\n> bpv = N (P ([(\"\",3)] := DECL :<: ARR d SET))\n> p = N (P ([(\"\",4)] := DECL :<: (C (Pi d (eval [.bpv. L $ \"\" :. \n> [.y. N (V bpv :$ A (NV y))]\n> ] $ B0 :< bpv)))))\n> v = N (P ([(\"\",5)] := DECL :<: descOp @@ [ARG a f, d]))\n> typ = boxOp @@ [ARG a f, d, bpv,v]\n> withTac = opRun mapBoxOp [ARG a f, d, bpv, p, v]\n> orig = mapboxArgRun a f d bpv p v\n\nMapbox on an Ind is supposed to build this term:\n\n> mapboxIndRun h x d bp p v =\n> eval [.h.x.d.bp.p.v.\n> PAIR (L $ \"\" :. [.y. 
N (V p :$ A (N (V v :$ Fst :$ A (NV y))))])\n> (N (mapBoxOp :@ [NV x,NV d\n> ,NV bp\n> ,NV p\n> ,N (V v :$ Snd)\n> ]))\n> ] $ B0 :< h :< x :< d :< bp :< p :< v\n\nTest:\n\n> testMapboxInd = equal (typ :>: (fromRight $ withTac, orig)) (B0,6)\n> where h = N (P ([(\"\",0)] := DECL :<: SET))\n> x = N (P ([(\"\",1)] := DECL :<: DESC))\n> d = N (P ([(\"\",2)] := DECL :<: SET))\n> bpv = N (P ([(\"\",3)] := DECL :<: ARR d SET))\n> p = N (P ([(\"\",4)] := DECL :<: (C (Pi d (eval [.bpv. L $ \"\" :. \n> [.y. N (V bpv :$ A (NV y))]\n> ] $ B0 :< bpv)))))\n> v = N (P ([(\"\",5)] := DECL :<: descOp @@ [IND h x, d]))\n> typ = boxOp @@ [IND h x, d, bpv,v]\n> withTac = opRun mapBoxOp [IND h x, d, bpv, p, v]\n> orig = mapboxIndRun h x d bpv p v\n\nJust check that mapBox build something with Ind1:\n\n> testMapboxInd1 = isRight withTac\n> where x = N (P ([(\"\",1)] := DECL :<: DESC))\n> d = N (P ([(\"\",2)] := DECL :<: SET))\n> bpv = N (P ([(\"\",3)] := DECL :<: ARR d SET))\n> p = N (P ([(\"\",4)] := DECL :<: (C (Pi d (eval [.bpv. L $ \"\" :. \n> [.y. N (V bpv :$ A (NV y))]\n> ] $ B0 :< bpv)))))\n> v = N (P ([(\"\",5)] := DECL :<: descOp @@ [IND1 x, d]))\n> typ = boxOp @@ [IND1 x, d, bpv,v]\n> withTac = opRun mapBoxOp [IND1 x, d, bpv, p, v]\n\n\n\nelimOp is supposed to build this term:\n\n> elimRun d bp p v =\n> p $$ A v $$ A (mapBoxOp @@ \n> [d\n> ,MU d\n> ,bp\n> ,eval [.d.bp.p. L $ \"\" :. [.x. \n> N (elimOp :@ [NV d,NV bp,NV p,NV x])]\n> ] $ B0 :< d :< bp :< p\n> ,v])\n\nLet's test now:\n\n\n> testElim = equal (typ :>: (fromRight $ withTac, orig)) (B0,6)\n> where d = N (P ([(\"\",0)] := DECL :<: DESC))\n> bp = (P ([(\"\",1)] := DECL :<: ARR (MU d) SET))\n> bpv = N bp\n> p = N (P ([(\"\",2)] := DECL :<: (C (Pi (descOp @@ [d,MU d])\n> (eval [.d.bp. L $ \"\" :. [.x. 
\n> ARR (N (boxOp :@ [NV d,MU (NV d),NV bp,NV x]))\n> (N (V bp :$ A (CON (NV x))))]\n> ] $ B0 :< d :< bpv)))))\n> v = N (P ([(\"\",3)] := DECL :<: (descOp @@ [d, MU d])))\n> typ = N (bp :$ A (MU v))\n> withTac = opRun elimOp [d, bpv, p, CON v]\n> orig = elimRun d bpv p v\n\n\n\\subsection{Equality}\n\nBe green on a Pi:\n\n> eqGreenPiRun s1 t1 f1 s2 t2 f2 = \n> eval [.s1.t1.f1.s2.t2.f2.\n> ALL (NV s1) . L $ \"\" :. [.x1.\n> ALL (NV s2) . L $ \"\" :. [.x2.\n> IMP (EQBLUE (NV s2 :>: NV x2) (NV s1 :>: NV x1))\n> (eqGreenT (t1 $# [x1] :>: f1 $# [x1]) (t2 $# [x2] :>: f2 $# [x2]))\n> ]]]\n> $ B0 :< s1 :< t1 :< f1 :< s2 :< t2 :< f2\n\nI don't believe it:\n\n> testEqGreenPi = equal (typ :>: (fromRight $ withTac, orig)) (B0,6)\n> where s1 = N (P ([(\"\",0)] := DECL :<: SET))\n> t1 = N (P ([(\"\",1)] := DECL :<: (ARR s1 SET)))\n> f1 = N (P ([(\"\",2)] := DECL :<: (C $ Pi s1 t1)))\n> s2 = N (P ([(\"\",3)] := DECL :<: SET))\n> t2 = N (P ([(\"\",4)] := DECL :<: (ARR s2 SET)))\n> f2 = N (P ([(\"\",5)] := DECL :<: (C $ Pi s2 t2)))\n> typ = PROP\n> withTac = opRun eqGreen [C (Pi s1 t1),f1,C (Pi s2 t2),f2]\n> orig = eqGreenPiRun s1 t1 f1 s2 t2 f2\n\n\n\\subsection{Testing}\n\n> main = do\n> putStrLn $ \"Is branches ok? \" ++ show testBranches\n> putStrLn $ \"Is switch ok? \" ++ show testSwitch\n> putStrLn $ \"Is desc arg ok? \" ++ show testDescArg\n> putStrLn $ \"Is desc ind ok? \" ++ show testDescInd\n> putStrLn $ \"Is desc ind1 ok? \" ++ show testDescInd1\n> putStrLn $ \"Is box arg ok? \" ++ show testBoxArg\n> putStrLn $ \"Is box ind ok? \" ++ show testBoxInd\n> putStrLn $ \"Is box ind1 ok? \" ++ show testBoxInd1\n> putStrLn $ \"Is mapBox arg ok? \" ++ show testMapboxArg\n> putStrLn $ \"Is mapBox ind ok? \" ++ show testMapboxInd\n> putStrLn $ \"Is mapBox ind1 ok? \" ++ show testMapboxInd1\n> putStrLn $ \"Is elim ok ? \" ++ show testElim\n> putStrLn $ \"Is eqGreen Pi ok ? 
\" ++ show testEqGreenPi","avg_line_length":36.5833333333,"max_line_length":102,"alphanum_fraction":0.3970197418} {"size":32070,"ext":"lhs","lang":"Literate Haskell","max_stars_count":1.0,"content":"> module Convertibility (\n>\t\t\t\tconvertible,\t\t-- with NO substitution\n>\t\t\t\tunifiable,\t\t\t-- with substitution\n>\t\t\t\ttest_conv_or_unify,\t-- either, according to option\n\n>\t\t\t\tapplySubstitution,\n>\t\t\t\temptySubstitution,\n>\t\t\t\tisEmptySubstitution,\n>\t\t\t\tshowSubstitution,\n>\t\t\t\tSubstitution(..), OkF\n>\t\t\t) where\n\n\nTests if two terms are convertible.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nINFO\n\nConversion, as per rules. \n\nMaintains strict \"side-ness\" for the terms being converted - ie, an error\nmessage mentioning \"left\" will concern a subterm derived from the left\nargument.\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n\n> import Data.Maybe\n> import Data.List(insertBy)\n\n\n> import Base hiding (M_)\n> import Context(Ctxt, orderVars, addCtxt_BV)\n> import Terms\n> import SharedSyntax\n> import TermOps(applyToDummy, replaceTerminal, namesIn)\n> import CommandTypes(ConvOpt(..))\n\n> import Reduction(FlatTerm(..), flatten, unflatten, Result(..), dwhnf_)\n> import Reduction(reducibleApplication, applyToDummy_FlatTerm)\n\n> import Printing(showTerm)\n> import PrettyAux(shortRender, longRender, stext)\n> import Pretty(Doc, text, (<+>), nest, vcat)\n\n> import Universes(has_name_in, has_solution)\n\n> import DebugOptions(traceOnDebugFlag)\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nNOTES (to self):\n\nQ: using continuations with reducibility tests - any speed improvements? 
\nBUT - need 2 extra args for pass and fail ctus - plus problems of arg permute?\n\n\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nTRACING and DEBUGGING functions.\n\nUse switchable, forced trace.\n\n> conv_trace :: Ctxt -> String -> FlatTerm -> FlatTerm -> a -> a\n> conv_trace c m lt rt\n> = let msg = conv_report c m lt rt\n> in traceOnDebugFlag \"CONV_TRACING\" (msg `fseq` msg)\n\n---\n`conv_report'\n - neat layout of terms, shown in context\n\n> conv_report :: Ctxt -> String -> FlatTerm -> FlatTerm -> String\n> conv_report c m lt rt\n> = m ++ \"\\n\" \n> ++ (shortRender $ nest 4 $ text \"LT\" <+> show_ (unflatten lt))++\"\\n\"\n> ++ (shortRender $ nest 4 $ text \"RT\" <+> show_ (unflatten rt))++\"\\n\"\n> where\n>\t\t-- show_ = text . show\t\t-- for debugging.\n>\t\tshow_ = showTerm c\n\n\n---\n`unify_trace'\n - report the results of unification\n - use forced, switchable traces.\n\n> unify_trace c ok@(Ok [])\n> = unify_trace_fseq (\"UNIFIED, with EMPTY substitution\\n\") ok\n> unify_trace c ok@(Ok ss)\n> = unify_trace_fseq (\"UNIFIED, with substitution:\\n\" ++ showSubstitution c ss) ok\n> unify_trace c ok@(Fail m)\n> = unify_trace_fseq (\"COULDN'T UNIFY: because \" ++ m ++ \"\\n\") ok\n\n> unify_trace_fseq m = traceOnDebugFlag \"UNIFY_TRACING\" (m `fseq` m)\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nMAIN (EXTERNAL) FUNCTIONS.\n\n`convertible' \n - Ok or Fail test on conversion without unification.\n\n> convertible :: Ctxt -> Term -> Term -> OkF ()\n> convertible c t1 t2 \n> = do\n>\t\ttest_conv_or_unify StrictConv c t1 t2\n>\t\treturn ()\n\n\n---\n`unifiable' \n - run conversion with view to producing a substitution.\n\n> unifiable :: Ctxt -> Term -> Term -> OkF Substitution\n> unifiable c t1 t2 = unify_trace c $ conv c t1 t2\n\n\n\n---\n`test_conv_or_unify'\n - intended for implementing a testing command; returns String or Fail.\n - if strict conv. 
chosen, then checks that conversion doesn't produce a \n\tsubstitution.\n - if unify chosen, then shows the result nicely.\n\n> test_conv_or_unify :: ConvOpt -> Ctxt -> Term -> Term -> OkF String\n> test_conv_or_unify StrictConv c t1 t2\n> = do\n>\t\tss <- conv c t1 t2\n>\t\tif isEmptySubstitution ss\n>\t\t then\n>\t\t\treturn \"Converts ok\\n\"\n>\t\t else\n>\t\t\tfail_with_msg $ \"Conversion resulted in unification:\\n\" \n>\t\t\t\t\t\t\t\t++ showSubstitution c ss\n\n> test_conv_or_unify UnifyConv c t1 t2\n> = do\n>\t\tss <- unifiable c t1 t2\n>\t\tcase ss of\n>\t\t\t[] -> return \"Unifies, no substitution required\"\n>\t\t\tss -> return $ \"Unifies, with substitution:\\n\" ++ showSubstitution c ss\n\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n`conv' \n - convertibility for two terms.\n - calls `cvt' after flattening the two terms.\n\n> conv :: Ctxt -> Term -> Term -> M\n> conv ctxt t1@(Co c a) t2@(Ap m@(MetaVar _) x)\n> = conv_trace ctxt \"Conv (CO case) testing:\" flatten_t1 flatten_t2 $ \n> conv_list ctxt [c,a] [m,x]\n> where\n>\t\tflatten_t1 = flatten t1\n>\t\tflatten_t2 = flatten t2\n\nIDEA: allow higher order unif. 
in this highly constrained case.\n there's a requirement that co solutions are ONLY co.\n\n\n> conv c t1 t2\n> = conv_trace c \"Conv testing:\" flatten_t1 flatten_t2 $ \n> cvt c flatten_t1 flatten_t2\n> where\n>\t\tflatten_t1 = flatten t1\n>\t\tflatten_t2 = flatten t2\n\n\n%---------------------------------------\n`cvt' \n - does all the work, and calls `conv' to compare subterms.\n - cases are handled in the series of sections below (after the monad)\n\n> cvt :: Ctxt -> FlatTerm -> FlatTerm -> M\n\n\n\n%-------------------------------------------------------------------------------\nThe monad\n\n * need useful fail\n * need changeable context, for right context and printing, plus metavars.\n * might need to return a substitution for Unify.\n\nBUT: passing context explicitly since this simplifies handling of the\nbacktracking cases.\n\nHence the following:\n\n> type M_ a = OkF a\n> type M = M_ Substitution\n\n---\n`conv_ok' - synonym for success. \n`conv_if' - lifts boolean to monad. \n\t\t - NB, the uninformative error msg is often superseded by caller.\n\n> conv_ok :: M\n> conv_ok = return emptySubstitution\n\n> conv_if :: Bool -> M\n> conv_if True = conv_ok\n> conv_if False = fail_with_msg \"conv_if test failed\"\n\n---\n`conv_FAIL' - synonym for (big) failure.\n\n> conv_FAIL c m l r\n> = fail_with_msg $ conv_report c m l r\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nCVT: SPECIFIC UNIMPLEMENTED\n\neg, do we fail if we see a Let? 
\n\t - Following case does better job.\n\n<> cvt c l@(E (Ext (HardCast le lt)) ls) r@(E (Ext (HardCast re rt)) rs)\n<> = if eqConst le re && isOk args_convert\n<>\t\tthen do\n<>\t\t\t\t-- no substit here, aim for simple cases.\n<>\t\t\t\t-- ss <- conv c lt rt\t\t-- types must convert.\n<>\t\t\t\t-- sss <- conv_list c ls rs\n<>\t\t\t\t-- return $ combineSubstitutions c ss sss\n<>\t\t\t\t-- no - make use of NAME EQUIVALENCE.\n<>\t\t\t\t-- IMPORTANT - better to ensure this via ctxt\n<>\t\t\t\targs_convert {- PASS -}\n<>\t\telse do\n<>\t\t\tlet r_left = reducibleApplication l\n<>\t\t\tlet r_right = reducibleApplication r\n<>\t\t\terror \"not considered CAST cases yet\"\n\n<>\t\t\tcase r_right of\n<>\t\t\t\tNothing -> case r_left of \n<>\t\t\t\t\t\t\t\tNothing -> conv_FAIL c fail_msg l r\n<>\t\t\t\t\t\t\t\tJust l2 -> cvt c l2 r\n<>\t\t\t\tJust r2 -> case r_left of \n<>\t\t\t\t\t\t\t\tNothing -> cvt c l r2\n<>\t\t\t\t\t\t\t\tJust l2 -> cvt c l2 r2\n\n<> where\n<>\t\tfail_msg = \"CAST terms not convertible\"\n<>\t\targs_convert = conv_list c ls rs\n\n\n\nIMPORTANT: this just catches simple cases of equality, eg (T^x ls) (T^x ys)\n\tneed to think more about this. Eg, significance of types converting? \n\nTHIS RAISES ISSUE OF autogen name equality, (ie, from (T^n ?)).\nmight be a problem; we want extensional equality - but conv can recurse inside\nthe T^n. . \n\n = error \"asym CAST cmp\"\n\n%---------------------------------------\neg T^i ? and something.\n\n******************\n** ALL DISABLED **\n******************\n\nIMPL does not implement the full restrictions proper\n\nwant to abstract univ repres. details from here, eg Universes.has_solution...\n\n<> cvt c l@(E (CAST _ _) _) r@(E (El _) _)\t-- avoid getting El in solutions\n<> = conv_FAIL c \"Cast vs (El x)\" l r\n<> cvt c l@(E (El _) _) r@(E (CAST _ _) _)\t-- best to do here? 
\n<> = conv_FAIL c \"(El x) vs CAST\" l r\n\n******************\n** ALL DISABLED **\n******************\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nfirst step to metavars\n\n> cvt c left right\n> | isOk mv_result = mv_result\n> where\n>\tmv_result = cvt_mv c left right\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nCVT: Both are El-applications\n\nEl terms must convert directly - ie, since El(x) is a kind, then no reducible \nterm will yield it.\n\nKinds also can't be applied, so we only need to check the no-args case.\n\nComplain (ie, crash) if we find an El which is applied to arguments.\n\n> cvt c (E (El l) []) (E (El r) [])\n> = conv c l r\n\n> cvt c l@(E (El _) _) r\n> = conv_FAIL c \"Comparing El to non-El\" l r\n> cvt c l r@(E (El _) _)\n> = conv_FAIL c \"Comparing non-El to El\" l r\n\n> cvt c left@(E (El _) _) right@(E (El _) _) \n> = error $ \n> conv_report c \"Comparing El terms that have arguments\" left right\n> ++ \"\\n\\n *** POSSIBLE INTERNAL TYPE-SYNTHESIS ERROR? ***\\n\"\n\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nCVT: BOTH are Application Expressions.\n\nNB - ignore arities of specific things like Type \n - TypeInference should catch them.\n - MetaVar cases are handled above.\n\nMETHOD:\n - do the terms have the same (identical) head constant?\n\t - do they have the same number of arguments? \n\t\t- do the arguments pairwise convert? \n\t\t - THEN return what the substitution is.\n - OTHERWISE\n\t - call try_right_then_left, passing sufficient arguments to allow that\n\t function to produce an interesting error message.\n\nNOTE:\n - there is no substitution arising from comparison of head constants.\n - all treatment of metavariables-with-arguments is handled above.\n\n - does forcing the length and using it as discriminator make a difference? \n arguably it is cheaper than (min m n) conversions which may fail. 
\n (Is there a case where collecting arguments is expensive? unlikely...)\n\n - Having the same head does NOT mean that both terms are reducible \n (eg E_ (f x) 0 and E_ (g 0) x, where x is a var.)\n\n - some error functions accumulate error messages from below, thus showing\n how the conversion failed and the path(s) reduction took.\n\n%---------------------------------------\nneed refine here.\n\nto dwhnf, -> either elim-var or constr head \nat this point, can compare for same term in low level, eg both elims.\n\n%---------------------------------------\nThis case needed for T^n a = T^n b equations\n\n> cvt c left@(E (Ext (HardCast le lt)) ls) \n> right@(E (Ext (HardCast re rt)) rs)\n> | eqConst le re \n> = case conv_list c ls rs of\n>\t Fail _ -> fail_ \"CAST terms not convertible\"\n>\t ok@Ok{} -> ok\n> where\n>\t\tfail_ m = conv_FAIL c m left right\n\n\n%---------------------------------------\nNext 2 cases for T^n ? = T equations\n\nWARNING\n has_solution test doesn't implement internal (non-parameter) restrictions \n from the constructor schemata - so is unsound. \n\n May look at it again some time (the code exists via AddToUniverse etc).\n\nWEAKNESS\n This looks for universe decoders purely by finding the cast\n This isn't satisfactory (although has_solution will catch)\n\nDISABLED 6aug03 - too many problems. 
better to internalise the indices.\n\n<> cvt c l@(E lc@(Ext (HardCast{})) [MetaVar la]) r@(E _ _)\n<> = do\n<>\t\tconv_trace c (\"CAST: \" ++ show lc ++ \"\\n\") l r $ return ()\n<>\t\tt <- embed $ has_solution lc (unflatten r)\n<>\t\tmk_subst la t\n\n<> cvt c l@(E _ _) r@(E rc@(Ext (HardCast{})) [MetaVar ra])\n<> = do\n<>\t\tconv_trace c (\"CAST: \" ++ show rc ++ \"\\n\") l r $ return ()\n<>\t\tt <- embed $ has_solution rc (unflatten l)\n<>\t\tmk_subst ra t\n\n\n%---------------------------------------\nChecking prf terms\n\n> cvt c left@(E (ContextVar (Plain \"Prf\")) [l]) right@(E (ContextVar (Plain \"Prf\")) [r]) \n> = conv c l r \n\n%---------------------------------------\n\n * if equal set of symbols, go no further\n * if unifiable in parts, then loop back on updated terms\n * else try expand until block.\n\n> cvt c left@(E l ls) right@(E r rs) \n> | same_shape && and (zipWith eqConst ls rs)\n> = conv_ok\n\n> | same_shape \n> = conv_trace c \"Pre-easy unifs\" left right\n> $ case conv_mv_list c ls rs of\n>\tRight ss -> -- perfect match (on simple consts)\n>\t conv_trace c \"Easy unifs matches to:\" \n>\t\t (applySubstitution ss left) (applySubstitution ss right)\n>\t\t $ return ss\n\n>\tLeft ss -> -- imperfect match, needs more work\n>\t let new_left = applySubstitution ss left\n>\t new_right = applySubstitution ss right in\n>\t conv_trace c \"Easy unifs PARTIAL match:\" new_left new_right\n>\t\t $ do\n>\t\t sss <- cvt_blocked c (dwhnf_ new_left) (dwhnf_ new_right)\n>\t\t -- IMPORTANT: have to go to blocked stage now\n>\t return $ combineSubstitutions c ss sss\n\n> | otherwise\n> = conv_trace c \"Trying cvt on blocked terms\" blocked_left blocked_right\n> $ cvt_blocked c blocked_left blocked_right\n> where\n>\tsame_shape = eqConst l r && sameLength ls rs\n>\tblocked_left = dwhnf_ left\n>\tblocked_right = dwhnf_ right\n\n\n%-------------------------------------------------------------------------------\nCVT: catch-all case for comparing E's - shouldn't be 
reached.\n\n> cvt c left@(E _ _) right@(E _ _) \n> = error $ \n> conv_report c \"hit E~E default case. shouldn't!\" left right\n\nWAS:\nconv_FAIL c \"Application cases don't convert\" left right\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nCVT: Comparing Expr + Abstr\n\nMETHOD\n - If abstr is not a FO, then try expanding the Expr (if possible) and\n try conversion with that.\n\n - If Abstr is a FO then use the Coquand trick for eta conversion. \n Instead of (expensively) testing the body of the abstr to see if it \n has form (B x), where x is the bound var and B is x-free, and then testing\n convertibility with B, we apply the other Expr term to a dummy var so that\n it ITSELF looks like an abstr body in (B x) form, and test this against the \n body of the abstraction. Ie, it has effect:\n\t naive: ([x:T]B x) ~ E => B ~ E\n\t trick: ([x:T]C) ~ E => C ~ applyToDummy E\n\n If the new conversion fails, then we fail without reducing the Expr side.\n This is because the eta trick will cause the Expr side to be reduced\n in the course of comparison. \n\n NB we add a binder to the context when comparing the bodies.\n\nIMPL\n - We keep the cases separate (instead of eg implementing A ~ E by swapping\n order of arguments and recursing) in order to help debugging - ie the left\n arg is ALWAYS the left arg...\n - OBVIOUSLY - need to make sure the two clauses are symmetric! 
\n\n - saving a bit of time by applyToDummy_FlatTerm, rather than applyToDummy\n with unflattening etc.\n\n%-------------------\nFO cases - can do eta trick.\n\n> cvt c left@(E _ _) right@(A ra@(FO n ty body))\n> = conv_trace c \"Conv E A: TRY ETA\" left (flatten ra) $\n> case try_eta of\t\t\t\t\t\t\t-- can only try eta on a FO.\n> Ok ss -> {-PASS-} try_eta\n> Fail m -> case reducibleApplication left of\n> Blocked _ -> {-FAIL-} couldn't_convert_expr_and_abs\n> Reduced _ -> {-FAIL-} couldn't_convert_REDUCIBLE_expr_and_abs\n> where\n>\t\tflat_body = flatten body\n>\t\ttry_eta = cvt (addCtxt_BV n ty c) (applyToDummy_FlatTerm left) flat_body\n>\n>\t\tcouldn't_convert_expr_and_abs\n>\t\t = fail_couldn't_convert \"Expr\" \"FO Abs\" c left right\n>\t\tcouldn't_convert_REDUCIBLE_expr_and_abs\n>\t\t = fail_couldn't_convert \"REDUCIBLE Expr\" \"FO Abs\" c left right\n\n\n> cvt c left@(A la@(FO n ty body)) right@(E _ _)\n> = conv_trace c \"Conv A E: TRY ETA\" (flatten la) right $\n> case try_eta of\t\t\t\t\t\t\t-- can only try eta on a FO.\n> Ok ss -> {-PASS-} try_eta\n> Fail m -> case reducibleApplication right of\n> Blocked _ -> {-FAIL-} couldn't_convert_abs_and_expr\n> Reduced _ -> {-FAIL-} couldn't_convert_abs_and_REDUCIBLE_expr\n> where\n>\t\tflat_body = flatten body\n>\t\ttry_eta = cvt (addCtxt_BV n ty c) flat_body (applyToDummy_FlatTerm right) \n>\n>\t\tcouldn't_convert_abs_and_expr\n>\t\t = fail_couldn't_convert \"FO Abs\" \"Expr\" c left right\n>\t\tcouldn't_convert_abs_and_REDUCIBLE_expr\n>\t\t = fail_couldn't_convert \"FO Abs\" \"REDUCIBLE Expr\" c left right\n\n%-------------------\nDP cases - nothing can produce a DP, so fail here.\n\n> cvt c left@(E _ _) right@(A (DP _ _ _))\n> = fail_couldn't_convert \"Expr\" \"DP Abs\" c left right\n\n> cvt c left@(A (DP _ _ _)) right@(E _ _)\n> = fail_couldn't_convert \"DP Abs\" \"Expr\" c left right\n\n\n\n%-------------------\nCatch remaining cases.\n\n> cvt c left@(E _ _) right@(A _)\n> = error $ \n> conv_report c \"hit 
E~A default case. shouldn't!\" left right\n\n> cvt c left@(A _) right@(E _ _)\n> = error $ \n> conv_report c \"hit A~E default case. shouldn't!\" left right\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nCVT: Comparing Abstractions.\n\n - the types and the bodies must be convertible.\n - when processing the bodies, do so in a context that contains the bound var. \n - the type of this new var is (arbitrarily) from the left term (NB, this type \n is convertible with the right, if the conv of bodies is reached, of course)\n - the new var name is a mixture of the two names. (MIGHT REMOVE.)\n\n> cvt c (A (DP v1 ty1 t1)) (A (DP v2 ty2 t2))\n> = do\n>\t\ts1 <- conv c ty1 ty2 \n>\t\ts2 <- conv\t(addCtxt_BV new_var ty1 c) \n>\t\t\t\t\t(applySubstitution s1 t1)\n>\t\t\t\t\t(applySubstitution s1 t2)\n> \t\treturn $ combineSubstitutions c s1 s2\n> where\n>\t\tnew_var | v1 == v2 = v1\n>\t\t | otherwise = Bound $ Plain $ show v1 ++\"\/\"++ show v2\n\n> cvt c (A (FO v1 ty1 t1)) (A (FO v2 ty2 t2))\n> = do\n>\t\ts1 <- conv c ty1 ty2 \n>\t\ts2 <- conv\t(addCtxt_BV new_var ty1 c) \n>\t\t\t\t\t(applySubstitution s1 t1)\n>\t\t\t\t\t(applySubstitution s1 t2)\n> \t\treturn $ combineSubstitutions c s1 s2\n> where\n>\t\tnew_var | v1 == v2 = v1\n>\t\t | otherwise = Bound $ Plain $ show v1 ++\"\/\"++ show v2\n\n> cvt c l@(A _) r@(A _)\n> = conv_FAIL c \"Mismatched binders\" l r\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nNothing else works.\n\nBUT - we should have tested and caught the possibility above. 
\n\t- therefore, we crash here.\n\n> cvt c l r\n> = error $ conv_report c \"Reached Catch-all case, should not occur.\" l r\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n> cvt_blocked c left@(E l ls) right@(E r rs) \n> | eqConst l r\n> = case have_same_length of\n> True -> -- arg count is the same.\n> case args_convert of\n> Ok _ -> {-PASS-} args_convert\n> Fail _ -> -- args do not convert.\n> fail_ \"args differ\"\n> -- failure here, eg eg Cons a b \/~ Cons c d\n\n> False -> -- arg counts differ.\n> -- NOTE: failure here should be caught by type_checking ?\n> -- eg Cons a b c \/~ Cons a b\n> error \"Arg mismatch under same heads\"\n> where\n>\thave_same_length = sameLength ls rs\n>\targs_convert = conv_list c ls rs\n>\tfail_ m = conv_FAIL c m left right\n\n> cvt_blocked c left@(E (Ext (HardCast le lt)) ls) \n> right@(E (Ext (HardCast re rt)) rs)\n> | eqConst le re \n> = case conv_list c ls rs of\n>\t Fail _ -> fail_ \"CAST terms not convertible\"\n>\t ok@Ok{} -> ok\n> where\n>\t\tfail_ m = conv_FAIL c m left right\n\n> cvt_blocked c left@(E l ls) right@(E r rs) \n> = -- non-equal heads \n> -- failure here, eg (Cons x y) vs (Nil).\n> case cvt_mv c left right of\n>\t ok@Ok{} -> ok\n>\t Fail _ -> fail_ \"head symbols differ or not unifiable\"\n> where\n>\tfail_ m = conv_FAIL c m left right\n\n> cvt_blocked c l@(A{}) r\t-- pass back to main cvt\n> = cvt c l r\n> cvt_blocked c l r@(A{})\t-- pass back to main cvt\n> = cvt c l r\n\n> cvt_blocked c l r\n> = conv_FAIL c \"cvt_blocked for:\" l r\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nCVT: METAVAR CASES.\n\nNB here, we can do tests about solvability (eg (B a) vs (? ? 
a))\n\nNB same metavar test - should be here.\n<>\t\teq (MetaVar i1) (MetaVar i2) = conv_if $ i1 == i2\nANS: it is handled via orderVars returning Eq.\n\t (if speed important >> error checks, can weaken orderVars; see src)\n\n%-------------------------------------------------------------------------------\nSIMPLE METAVAR CASES. \n\n - Just for unapplied terminals.\n - No equational unification (eg (B a) and (? b a))\n - Substitutions are ordered so that older vars get substituted with the younger\n\n> cvt_mv c (E left_var@(MetaVar left) []) (E right_var@(MetaVar right) [])\n> | left == right\t-- avoid dbl lookup if avoidable\n> = conv_ok\n> | otherwise \n> = do\n>\torder <- orderVars c left right\t\t-- get order wrt context\n>\tcase order of\n>\t\tLT -> mk_subst right left_var -- r < l, replace n2\n>\t\tGT -> mk_subst left right_var -- l > r, replace n1\n>\t\tEQ -> error \"identical MVs should be picked up earlier\"\n \n> cvt_mv c l@(E (MetaVar left) ls) r@(E (MetaVar right) rs)\n> | left == right \n> = if length ls == length rs && and (zipWith eqConst ls rs)\n>\tthen conv_ok\n>\telse conv_FAIL c fail_lists l r\n> where\n>\tfail_lists = \"Arguments lists under identical MV heads don't match\"\n\n<>\t-- eg (?1 ?2) = (?1 ?2)\n<>\t-- BUT (?1 x) (?1 ?2) = unsafe; so mvs only!\n<>\t-- might reconsider this later.\n\nPREVIOUS VERSIONS.\n<> -- expensive !!! EQ -> conv_list c ls rs-- EXPERIMENT.\n<> -- = conv_mv_list c ls rs\t\t-- EXPERIMENT.\nHERE - to decide - keep or no? 
\n - this postpones full unif check until later\n - might be cheapr in the long run.\n\n \n> cvt_mv c l r@(E (MetaVar _) (_:_))\n> = conv_FAIL c \"Unification on Metavar-headed terms NYI\" l r\n \n> cvt_mv c l@(E (MetaVar _) (_:_)) r\n> = conv_FAIL c \"Unification on Metavar-headed terms NYI\" l r\n\n\n> cvt_mv c l (E (MetaVar r) [])\n> = mk_subst r $ unflatten l\n \n> cvt_mv c (E (MetaVar l) []) r\n> = mk_subst l $ unflatten r\n\n> cvt_mv c _ _ \n> = Fail \"not a metavar problem\"\n\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n`try_right_then_left'\n - when comparing E's, try reducing first the right then the left term, to \n\tsee if a conversion is possible with the reduced form.\n\n - if the right reduces but no conversion, then try reducing left. if ok\n\tthen try conv with\n\t a) BOTH reduced\n b) just the LEFT reduced? \n not sure - no difference, just a question of efficiency vs preservation? \n\n<> try_right_then_left :: E_Cvt_Error -> (String -> M) -> Ctxt -> FlatTerm -> FlatTerm -> M\n<> try_right_then_left error_case error_fn c left right\n<> = case reducible_right of\n<> Nothing -> case reducible_left of\n<> Nothing -> {-FAIL-} error_fn \"Couldn't reduce either side\"\n<> Just lEFT -> let cvt_left = cvt c lEFT right in\n<> case cvt_left of\n<> Ok _ -> {-PASS-} cvt_left\n<> Fail _ -> {-FAIL-} reducible_won't_convert \"left\" error_case $\n<> cvt_left\n<\n<> Just rIGHT -> let cvt_right = cvt c left rIGHT in\n<> case cvt_right of\n<> Ok _ -> {-PASS-} cvt_right \n<> Fail _ -> case reducible_left of \n<> Nothing -> {-FAIL-} reducible_won't_convert \"right\" error_case $\n<> cvt_right\n<> Just lEFT -> let cvt_left = cvt c lEFT right in\n<> case cvt_left of\n<> Ok _ -> {-PASS-} cvt_left\n<> Fail _ -> {-FAIL-} reducible_won't_convert \"left after right fail\" error_case $\n<> cvt_left\n<> where\n<>\t\treducible_left = reducibleApplication left\n<>\t\treducible_right = reducibleApplication 
right\n<\n<>\t\treducible_won't_convert :: String -> E_Cvt_Error -> M -> M\n<>\t\treducible_won't_convert \"both\" what\n<>\t\t = prepend_msg $ conv_report c msg left right\n<>\t\t where msg = \"Reducible terms (both) aren't convertible \" ++ decode_ec_error what\n<>\t\treducible_won't_convert which what\n<>\t\t = prepend_msg $ conv_report c msg left right\n<>\t\t where msg = \"Reducible term (\"++which++\") isn't convertible \"++decode_ec_error what\n\n\n---\n`E_Cvt_Error'\n - represent case of error in conversion of E terms.\n - meaning represented by `decode_ec_error'\n\n> data E_Cvt_Error = Parity | Lengths_differ | Neq_heads\n\n> decode_ec_error Parity = \"(identical head, arg counts same)\"\n> decode_ec_error Lengths_differ = \"(identical head, arg counts differ)\"\n> decode_ec_error Neq_heads = \"(different heads)\"\n\n\n---\n`fail_couldn't_convert'\n - generic fail function.\n\n> fail_couldn't_convert \n> :: String -> String -> Ctxt -> FlatTerm -> FlatTerm -> M\n> fail_couldn't_convert lm rm c left right\n> = conv_FAIL c msg left right\n> where\n>\t\tmsg = unwords [\"Could not convert\", lm, \"with\", rm]\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n`conv_list'\n - checks that terms are pairwise convertible\n - substitutions are propagated through the list as they are generated.\n - the caller should check list length first\n - but for safety, we assert that recursion ends on double empty lists.\n\t(ie, no zipWith!)\n\n> conv_list :: Ctxt -> [Term] -> [Term] -> M\n> conv_list c [] [] \n> = conv_ok\n> conv_list c (l:ls) (r:rs)\n> = do\n>\t\tss <- conv c l r\n>\t\tsss <- conv_list c \n>\t\t\t\t\t\t (map (applySubstitution ss) ls) \n>\t\t\t\t\t\t (map (applySubstitution ss) rs)\n>\t\treturn $ combineSubstitutions c ss sss\n\n> -- catch and fail here.\n> conv_list c ls@[] rs = conv_list_error c ls rs\n> conv_list c ls rs@[] = conv_list_error c ls rs\n\n\n\n\n%---------------------------------------\nunifying a list of 
terms (under identical heads)\n\nrebuilding\n - how to get unified term out ? subst applied to EITHER side\n - there's an efficiency question too? \n - easy - unify then subst. \n - NB note this ignores mismatches, so have to subst BOTH terms.\n - LEAVE THIS FOR LATER. (not impl yet)\n\nRight = unifying ok\nLeft = some disagreement found.\n\n> conv_mv_list :: Ctxt -> [Term] -> [Term] -> Either Substitution Substitution\n\n> conv_mv_list c [] [] \n> = Right emptySubstitution\n\n> conv_mv_list c (l:ls) (r:rs)\n> | eqConst l r\n> = conv_mv_list c ls rs\n\n> | otherwise\n> = case cvt_mv c (flatten l) (flatten r) of\n>\tFail _ -> toLeft $ conv_mv_list c ls rs\n>\tOk ss -> addIn (combineSubstitutions c ss) \n>\t $ conv_mv_list c (map (applySubstitution ss) ls) \n>\t\t\t (map (applySubstitution ss) rs)\n> where\n>\ttoLeft l@(Left _) = l\t\t-- propagate\n>\ttoLeft (Right s) = Left s\t-- signal incomplete result\n>\taddIn f (Left s) = Left (f s)\n>\taddIn f (Right s) = Right (f s)\n\n> -- catch and fail here.\n> conv_mv_list c ls@[] rs = conv_list_error c ls rs\n> conv_mv_list c ls rs@[] = conv_list_error c ls rs\n\n\n%---------------------------------------\nShared error msg\n\n> conv_list_error c ls rs \n> = do\n>\tlet left = text \"Left \" <+> vcat (map (showTerm c) ls)\n>\t right = text \"Right\" <+> vcat (map (showTerm c) rs)\n>\t msg = text \"Mismatch of arg. list lengths in applications\"\n>\terror $ longRender $ vcat [msg, left, right]\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n`eqConst'\n - basic syntactic test on constants\n\n> eqConst :: Term -> Term -> Bool\n> eqConst Type Type = True\n> eqConst (IVar i1) (IVar i2) = i1 == i2\n> eqConst (ContextVar i1) (ContextVar i2) = i1 == i2\n> eqConst (MetaVar i1) (MetaVar i2) = i1 == i2\t-- special case. 
\n> eqConst (GlobalDef i1 _) (GlobalDef i2 _) = i1 == i2\n\n-- ELIM \n\n> eqConst (Elim i1 _) (Elim i2 _) = i1 == i2\n> eqConst (F_Elim i1) (F_Elim i2) = i1 == i2\n> eqConst (Elim i1 _) (F_Elim i2) = i1 == i2\n> eqConst (F_Elim i1) (Elim i2 _) = i1 == i2\n\n-- CONSTRUCTORS \n\n> eqConst (Const i1 _) (ContextVar i2) = i1 == i2\n> eqConst (F_Const i1 _) (ContextVar i2) = i1 == i2\t-- needed for compile\n\n>\t\t-- TEST - for adding comp rules. CHECK THIS AGAIN!\n>\t\t-- DISABLED - eq (ContextVar i1) (Const i2 _ _) = i1 == i2\n\n> eqConst (Const i1 _) (Const i2 _) = i1 == i2\n> eqConst (F_Const i1 _) (F_Const i2 _) = i1 == i2\n\n> eqConst (F_Const i1 _) (Const i2 _) = i1 == i2\t\t-- different forms\n> eqConst (Const i1 _) (F_Const i2 _) = i1 == i2\n\nCLOSED TERMS - probably not reliable - they should be reduced! \nNB - should come out when being flattened.\n\n> eqConst l@Closed{} _ = error $ \"Closed left term: \" ++ show l\n> eqConst _ r@Closed{} = error $ \"Closed right term: \" ++ show r\n\n-- ALLOW KINDS, for new metavar code.\n\n<> eqConst Kind _ = error $ \"Kind in Conv (left)\"\n<> eqConst _ Kind = error $ \"Kind in Conv (right)\"\n\n> eqConst Kind Kind = True\n\n\n-- ASSERTIONS - should not happen.\n\n> eqConst (Var i1) _ = error $ \"Var in Conv (left) \" ++ show i1\n> eqConst _ (Var i2) = error $ \"Var in Conv (right) \" ++ show i2\n\n> -- failure case\n> eqConst _ _ = False\n\n\n\n\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nSUBSTITUTIONS\n\n> type Substitution = [(Name,Term)]\n\n> emptySubstitution = [] :: Substitution\n> isEmptySubstitution = null :: Substitution -> Bool\n\n> showSubstitution :: Ctxt -> Substitution -> String\n> showSubstitution c ss\n> = longRender (vcat $ map show_one ss) ++ \"\\n\"\n> where\n>\t\tshow_one (n,t) = nest 4 $ stext n <+> text \"->\" <+> showTerm c t\n\n---\n\n> empty_subst :: Monad m => m Substitution\n> empty_subst = return emptySubstitution\n\n---\noccurs check here.\n\n> mk_subst 
:: (Monad m, Fallible m) => Name -> Term -> m Substitution\n> mk_subst n t \n> | n `elem` namesIn t \n> = fail_with_msg $ \"occurs check, \" ++ show n ++ \" \" ++ show t\n> | otherwise \n> = return [(n,t)]\n\n\n\n%---------------------------------------\n`applySubstitution'\n - replace metavars with fixed terms.\n - ASSUME that subst is in age-order for vars. \n - NB this rewrites terms O(num of substs) - clearly bad! \n - should rewrite to be O(1), but be careful of [] case.\n\n> class Substitutable t where\n> applySubstitution :: Substitution -> t -> t\n\n> instance Substitutable Term where\n>\tapplySubstitution ss t = foldl replace_ t ss\n\n> instance Substitutable FlatTerm where\n>\tapplySubstitution ss (A t) = A (applySubstitution ss t)\n>\tapplySubstitution ss (E t ts) = E (applySubstitution ss t)\n>\t (map (applySubstitution ss) ts) \n\n> replace_ :: Term -> (Name,Term) -> Term\n> replace_ t n_t = replaceTerminal (replace_metavar n_t) t\n\n> replace_metavar (m,t) (MetaVar n) \n> | m == n = Just t\n> replace_metavar (m,t) (Var (Plain ('?':n))) \n> | m == Plain (\"m_\" ++ n) = Just t\t\t\t-- HACK\n> replace_metavar (m,t) _ = Nothing\n\n\n\n%---------------------------------------\n`combineSubstitutions'\n - merge substitutions from 2 trees.\n - first, apply left substs to right substs\n - and then merge in var order, according to predicate.\n\nBUT - shouldn't these already be done by caller of this function, through\n applying left substs to right terms BEFORE trying cvt? 
\nTRY ASSERTION.\n\nNOTE:\n - special case for empty right subst - as will result from `conv_list'\n\n> combineSubstitutions :: Ctxt -> Substitution -> Substitution -> Substitution\n> combineSubstitutions c left []\n> = left\n> combineSubstitutions c left right\n> | cmp_show = error $ \"combineSubstitutions assert failed.\\n\" \n>\t\t\t++ \"Left\\n\" ++ showSubstitution c left\n>\t\t\t++ \"Right\\n\" ++ showSubstitution c right\n>\t\t\t++ \"\\n---\\n\" ++ show right \n>\t\t\t++ \"\\n---\\n\" ++ show right_2 ++ \"\\n\\n\"\n> | otherwise = foldl (flip $ insertBy order) right_2 left\n> where\n>\tright_2 = [ (n, applySubstitution left t) | (n,t) <- right ]\n>\torder (v1,_) (v2,_) = elimOk (error) (id) $ orderVars c v1 v2\n\n>\tcmp_show = show right \/= show right_2\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n","avg_line_length":31.5649606299,"max_line_length":122,"alphanum_fraction":0.5896788276} {"size":1939,"ext":"lhs","lang":"Literate Haskell","max_stars_count":8.0,"content":"\n> {-# LANGUAGE OverloadedStrings #-}\n> module Database.HsSqlPpp.Tests.TypeChecking.CaseExpressions\n> (caseExpressions) where\n>\n> import Database.HsSqlPpp.Tests.TestTypes\n\n\n> import Database.HsSqlPpp.Types\n>\n>\n> caseExpressions :: Item\n> caseExpressions =\n> Group \"case expressions\" [\n> e \"case\\n\\\n> \\ when true then 1\\n\\\n> \\end\" $ Right typeInt\n> ,e \"case\\n\\\n> \\ when 1=2 then 'stuff'\\n\\\n> \\ when 2=3 then 'blah'\\n\\\n> \\ else 'test'\\n\\\n> \\end\" $ Right UnknownType\n> ,e \"case\\n\\\n> \\ when 1=2 then 'stuff'\\n\\\n> \\ when 2=3 then 'blah'\\n\\\n> \\ else 'test'::text\\n\\\n> \\end\" $ Right $ ScalarType \"text\"\n> ,e \"case\\n\\\n> \\ when 1=2 then 'stuff'\\n\\\n> \\ when true=3 then 'blah'\\n\\\n> \\ else 'test'\\n\\\n> \\end\" $ Left [NoMatchingOperator \"=\" [typeBool,typeInt]]\n> ,e \"case\\n\\\n> \\ when 1=2 then true\\n\\\n> \\ when 2=3 then false\\n\\\n> \\ else 1\\n\\\n> \\end\" $ Left 
[IncompatibleTypeSet [typeBool\n> ,typeBool\n> ,typeInt]]\n> ,e \"case\\n\\\n> \\ when 1=2 then false\\n\\\n> \\ when 2=3 then 1\\n\\\n> \\ else true\\n\\\n> \\end\" $ Left [IncompatibleTypeSet [typeBool\n> ,typeInt\n> ,typeBool]]\n> ,e \"case 1 when 2 then 3 else 4 end\" $ Right typeInt\n> ,e \"case 1 when true then 3 else 4 end\"\n> $ Left [IncompatibleTypeSet [ScalarType \"int4\"\n> ,ScalarType \"bool\"]]\n> ,e \"case 1 when 2 then true else false end\" $ Right typeBool\n> ,e \"case 1 when 2 then 3 else false end\"\n> $ Left [IncompatibleTypeSet [ScalarType \"int4\"\n> ,ScalarType \"bool\"]]\n> ]\n> where\n> e = ScalExpr\n\n","avg_line_length":32.8644067797,"max_line_length":67,"alphanum_fraction":0.4703455389} {"size":3560,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"> module Rationals \n\n>\t\t(Rationals(..),rndNR)\n\n> where\n \n> import Data.Ratio\n> infix 7 :%%\n\n\n\tData declaration for Rationals. \n\tRationals are defined in terms of Ints.\n \n> data Rationals = Int :%% Int deriving (Show{-was:Text-},Eq)\n\n\tLazyRationals instance of Ord declared with\n\tsimple degeneration to lazyInteger where\n\tnumerators are multiplied by opposite denumerators.\n\n> instance Ord Rationals where\n>\t(p :%% q) <= (r :%% s) = p*s <= q*r\n\n> instance Num Rationals where\n\t\n\t(+): rational addition is performed by converting\n\tthe arguments to a form where they share a common\n\tdenominator. \n\tThe function simplify ensures that the answer\n\tis in normal form. \n\tUnit denominators are treated as special cases.\n\n>\t(+) (p :%% 1) (r :%% 1) = (p+r) :%% 1\n>\t(+) (p :%% 1) (r :%% s) = simplify (p*s +r) s\n>\t(+) (p :%% q) (r :%% 1) = simplify (p+ q*r) q\n>\t(+) (p :%% q) (r :%% s) = simplify (p*s+q*r) (q*s)\n \n\n\tMultiplication of rationals provided by degeneration\n\tto Ints. 
Unit denominators are treated as special\n\tcases.\n\n>\t(*) (p :%% 1) (r :%% 1) = (p*r) :%% 1\n>\t(*) (p :%% 1) (r :%% s) = simplify (p*r) s\n>\t(*) (p :%% q) (r :%% 1) = simplify (p*r) q\n>\t(*) (p :%% q) (r :%% s) = simplify (p*r) (q*s)\n\n\n\tnegate: Simply change the sign of the numerator to negate\n\n>\tnegate (x :%% y) = (negate x :%% y)\n\n\tabs: Take the abs value of the numerator and place over the\n\t\tdenominator \n\n>\tabs (x :%% y) = (abs x :%% y)\n\n\tsignum: sign simply determined by sign of numerator\n\n>\tsignum (x :%% _) = (signum x:%%1)\n\n\tfromInteger: Change to an Integer and stick it over one.\n\n>\tfromInteger x = (fromInteger x) :%% 1\n\n\tdefines LazyRational as a instance of the class Fractional \n\n> instance Fractional Rationals where\n\n\tdivideLazyNum: performed by turning over second Rational\n\t\tand multiplying.\n\tZero cases handled appropriately.\n\n> \t(\/) x y@(r :%% s)\t| r==0 = error \"Attempt to divide by Zero\" \n> \t\t\t\t| otherwise = x * (s :%% r) \n\n\n\tfromRational : defines conversion of a Rational\n\t(Ratio Integer) to Rationals. NB Rational\n\tis supposedly in normal form.\n \n>\tfromRational r = (fromInteger (numerator r)) :%% (fromInteger (denominator r))\n\n\n\tsimplify: produces a Rational in normal\n\tform from the Ints given. Note normal\n\tform means a positive denominator. Sign\n\tof numerator therefore determines sign of number.\n \n> simplify :: Int -> Int -> Rationals\n> simplify x y = (signum bottom*top) :%% abs bottom\n>\t\t\twhere \n>\t\t\ttop = x `div` d\n>\t\t\tbottom = y `div` d\n>\t\t\td = gcd x y\n\n\n\tDefines Rationals as a member of the class Real. 
\n\n> instance Real Rationals where\n> \ttoRational (x:%%y) = toInteger x % toInteger y\n\n> instance Enum Rationals where\t-- partain\n> enumFrom\t\t= error \"Enum.Rationals.enumFrom\"\n> enumFromThen\t= error \"Enum.Rationals.enumFromThen\"\n> enumFromTo\t\t= error \"Enum.Rationals.enumFromTo\"\n> enumFromThenTo\t= error \"Enum.Rationals.enumFromThenTo\"\n\n\n\ttop : extracts the numerator part of a rational\n\n> top :: Rationals -> Int\n> top (a :%% b) = a\n\n\n\tbottom : extract the denominator part of a rational \n\n> bottom :: Rationals -> Int\n> bottom (a :%% b) = b\n \n\n\trndNR : Converts a Rational to an Int. Rounding to the\n\t\tnearest. NB: Safe to use Int. Only results\n\t\tthat lie on screen are arguments to rndNR. \n\t\tNB: Also no need to worry about rounding \n\t\tnegative strategy for same reason.\n \n> rndNR :: Rationals -> Int\n> rndNR (a :%% 1) = fromInteger (toInteger a)\n> rndNR a = fromInteger (toInteger (div n d))\n> where r = (1\/2) + a\n> n = top r\n> d = bottom r\n \n\n","avg_line_length":26.3703703704,"max_line_length":80,"alphanum_fraction":0.6345505618} {"size":3362,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"\\begin{code}\nmodule ICFP15 where\n\nimport Prelude hiding ((.), (++), filter)\n\n{-@ LIQUID \"--no-termination\" @-}\n{-@ LIQUID \"--short-names\" @-}\n\n\\end{code}\n\nFunction Composition: Bringing Everything into Scope!\n-----------------------------------------------------\n\n- Definition\n\n\\begin{code}\n{-@ \n(.) :: forall
<p :: b -> c -> Prop, q :: a -> b -> Prop, r :: a -> c -> Prop>. \n {x::a, w::b<q x> |- c
<p w>
<: c<r x>}\n (y:b -> c
<p y>
)\n -> (z:a -> b<q z>)\n -> x:a -> c<r x>\n@-} \n(.) f g x = f (g x) \n\\end{code}\n\n- Usage \n\n\\begin{code}\n{-@ plusminus :: n:Nat -> m:Nat -> x:{Nat | x <= m} -> {v:Nat | v = (m - x) + n} @-}\nplusminus :: Int -> Int -> Int -> Int\nplusminus n m = (n+) . (m-)\n\\end{code}\n\n- Qualifiers\n\n\\begin{code}\n{-@ qualif PLUSMINUS(v:int, x:int, y:int, z:int): (v = (x - y) + z) @-}\n{-@ qualif PLUS (v:int, x:int, y:int) : (v = x + y) @-}\n{-@ qualif MINUS (v:int, x:int, y:int) : (v = x - y) @-}\n\\end{code}\n\n\nFolding \n-------\nsee `FoldAbs.hs`\n\nAppending Sorted Lists\n-----------------------\n\\begin{code}\n{-@ type OList a = [a]<{\\x v -> v >= x}> @-}\n\n{-@ (++) :: forall
<p :: a -> Prop, q :: a -> Prop, w :: a -> a -> Prop>.\n {x::a
<p>
|- a<q> <: a<w x>}\n [a
<p>
]<w> -> [a<q>]<w> -> [a]<w> @-}\n(++) [] ys = ys\n(++) (x:xs) ys = x:(xs ++ ys)\n\n{-@ qsort :: xs:[a] -> OList a @-}\nqsort [] = []\nqsort (x:xs) = (qsort [y | y <- xs, y < x]) ++ (x:(qsort [z | z <- xs, z >= x])) \n\\end{code}\n\nRelative Complete\n-----------------\n\n\n\\begin{code}\nmain i = app (check i) i\n-- Here p of `app` will be instantiated to \n-- p := \\v -> i <= v\n\n{-@ check :: x:Int -> {y:Int | x <= y} -> () @-}\ncheck :: Int -> Int -> ()\ncheck x y | x < y = () \n | otherwise = error \"oups!\"\n\\end{code}\n\n\n\\begin{code}\n{-@ app :: forall
<p :: Int -> Prop>. \n {x::Int
<p>
|- {v:Int| v = x + 1} <: Int
<p>
}\n (Int
<p>
-> ()) -> x:Int
<p>
-> () @-}\napp :: (Int -> ()) -> Int -> ()\napp f x = if cond x then app f (x + 1) else f x\n\ncond :: Int -> Bool\n{-@ cond :: Int -> Bool @-}\ncond = undefined\n\\end{code}\n\n- TODO: compare with related paper\n\nFilter\n------\n\n\\begin{code}\n{-@ filter :: forall
<p :: a -> Prop, q :: a -> Bool -> Prop>.\n {y::a, flag::{v:Bool<q y> | Prop v} |- {v:a | v = y} <: a
<p>
}\n (x:a -> Bool<q x>) -> [a] -> [a
<p>
]\n @-}\n\nfilter :: (a -> Bool) -> [a] -> [a]\nfilter f (x:xs)\n | f x = x : filter f xs\n | otherwise = filter f xs\nfilter _ [] = []\n\n\n{-@ measure isPrime :: Int -> Prop @-}\nisPrime :: Int -> Bool \n{-@ isPrime :: n:Int -> {v:Bool | Prop v <=> isPrime n} @-}\nisPrime = undefined\n\n-- | `positives` works by instantiating:\n-- p := \\v -> isPrime v\n-- q := \\n v -> Prop v <=> isPrime n\n\n\t\n{-@ primes :: [Int] -> [{v:Int | isPrime v}] @-}\nprimes = filter isPrime\n\\end{code}\n\n\n- filter in Katalyst:\n\n('R filter) : \n l -> f: (x -> {v | v = false => 'R(x) = emp \n \/\\ v = true => 'R(x) = Rid(x)})\n-> {v | Rmem (v) = (Rmem 'R)(l)}\n\n\nSimilar in that the result refinement depends on the 'R.\nIn our types `p` depends on the `q`.\n\nPrecondition constraints the relation 'R and then post condition \n\nDifferences\nKatalyst talks about ordering and not other theories, like linear arithmetic\n\nSimilarities \nBounds can be seen as Abstract Refinement transformers, i.e., higher order Abstract Refinements. ","avg_line_length":23.676056338,"max_line_length":97,"alphanum_fraction":0.4643069601} {"size":23288,"ext":"lhs","lang":"Literate Haskell","max_stars_count":32.0,"content":"---\ntitle: Part One - Infrastructure with GLFW\nhas-toc: yes\ndate: 2016-02-12\ndescription: Creating the infrastructure for Odin using GLFW\n---\n\n*tl;dr* This is part of a series where we'll be writing a roguelike using FRP \nand Haskell. This first article is about setting up the main loop.\n\nIntro\n================================================================================\nI'd like to learn how to write a push based FRP implementation and so I decided\nI would first take a stab at flipping my pull based FRP into a pushy one. My \nFRP is called [varying][1]. It's very simple and inspired by [netwire][4]. It \nuses automatons to generate a stream of varying values - hence the name. For \ninfo on the core concepts of varying check out the [hackage docs][varying core]. 
\n[varying][1] is a rather squishy FRP (and I use the term FRP liberally) but it's \nfun and simple. In this article we'll be building a quick demo 'game' to \ndemonstrate how to set up the infrastructure needed for a bigger application. \n\nGet the Code\n--------------------------------------------------------------------------------\nThis is a Literate Haskell file which can be downloaded from the \n[github repo][odin]. To build, run `stack build` from the project\ndirectory. [Go here](http:\/\/docs.haskellstack.org\/en\/stable\/README.html) \nfor help with `stack`.\n\nMain\n================================================================================\n\n> -- |\n> -- Module: Main\n> -- Copyright: (c) 2015 Schell Scivally\n> -- License: MIT\n> --\n> -- The entrypoint to part-one of the odin series. \n> module Main where\n\nOur first import is [varying][1], which allows us to describe values \nthat change over time and user input.\n\n> import Control.Varying\n\nNext up is the graphics library [gelatin-picture][2], which we use to \ndescribe two dimensional pictures. Since we'll be rendering for desktop with \nglfw we will use [gelatin-glfw][3]. It's currently the only backend for gelatin. \n[gelatin-glfw][3] re-exports [GLFW-b][glfw-b] and [gelatin-picture][2] so we \ndon't have to clutter our workspace with those imports.\n\n> import Gelatin.GLFW\n\nNext we'll need some infrastructure in the form of `TVar` and `WriterT`. 
We'll\nuse `TVar`s to synchronize updates across threads and `WriterT` to allow our\nnetwork entities to reach out to the world and each other.\n\n> import Control.Concurrent\n> import Control.Concurrent.Async\n> import Control.Concurrent.STM.TVar\n> import Control.Monad.STM\n> import Control.Monad.Trans.Writer.Strict\n> import Control.Monad\n\nLastly we'll need some miscellaneous bits and pieces.\n\n> import Data.Bits ((.|.))\n> import Data.Time.Clock\n> import System.Exit\n\nTypes\n================================================================================\nWe need to be able to describe our game and since this is Haskell we'll use\nlots of types.\n\n`UserInput` will represent everything we want to push into our FRP network. If\nour game or display logic needs to know about it, it should be covered by \n`UserInput`.\n\n> data UserInput = InputUnknown String\n> | InputTime Float\n> | InputCursor Float Float\n> | InputWindowSize Int Int\n> deriving (Show)\n\nWe use `OutputEvent` with `WriterT` to allow entities within\nour network to have a very managed effect on the network as a whole.\n\n> data OutputEvent = OutputEventUnknown String\n> | OutputNeedsUpdate\n> deriving (Ord, Eq)\n\n> type Effect = Writer [OutputEvent]\n\nWe'll be rendering `Picture`s from [gelatin-picture][2]. We'll talk more about \nrendering in the [rendering][#rendering] section. Here we just make a little type\nsynonym to keep us from having to type \"()\" all over the place.\n\n> type Pic = Picture ()\n\nThe `Network` is a varying value. This means that it represents a value that\nchanges over some domain. When you see the type of a varying value as \n`VarT m a b` it means that an output value `b` varies over input `a` within an \neffect `m`. [varying][1] is an arrowized FRP implementation with a twist, so if \nyou've ever used [netwire][4] you'll be a bit familiar. 
Some differences \nbetween [varying][1] and [netwire][4] are\n\n* [varying][1]'s inhibition is explicit using the type `VarT m a (Event b)`. \n* [varying][1]'s time is not encoded in its type.\n\nHere our `Network` is defined as a varying value that depends on `UserInput`s\nand produces a `Pic` inside the `Effect` monad. The `Effect` monad allows\nour network streams to call out to the rest of the world, writing attempted\nside-effects to the network as a whole through our `WriterT` monad stack.\n\n> type Network = VarT Effect UserInput Pic\n\nA Bit More Infrastructure\n================================================================================\nFinally we can declare our main infrastructure type, `AppData`. An `AppData`\ncontains the `Network` in its current state, a cache of renderers that will\nbe managed by [renderable][5], and a list of user input events.\n\n> data AppData = AppData { appNetwork :: Network\n> , appCache :: Cache IO Transform\n> , appEvents :: [UserInput]\n> , appUTC :: UTCTime\n> }\n\nThe Network\n================================================================================\nWe need a network to test our infrastructure, so for now we'll use the simplest \nnetwork I can think of that demonstrates change over time and user input - a \ncircle that follows the mouse, changing shape and color over time. \n\nCursor Move Events\n--------------------------------------------------------------------------------\nIn order to do this we'll first need a stream of cursor move events. We'll be \nusing `V2 Float` from [linear][linear] to represent our points.\n \n> cursorMoved :: (Applicative m, Monad m) => VarT m UserInput (Event (V2 Float))\n\nThis type signature shows that our stream will 'consume' user input and 'emit'\nposition events. 
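

To make the 'consume input, emit events' picture concrete, here is a tiny, self-contained model of such a stream: a plain Mealy machine. This is only an illustration; the real `VarT` from [varying][1] is monadic and far more general, and the names below (`Sim`, `sim`, `~>>`) are invented for this sketch.

```haskell
-- A toy, pure stand-in for a varying-style stream: feeding it one input
-- yields one output together with the stream to use for the next input.
newtype Sim a b = Sim { runSim :: a -> (b, Sim a b) }

-- Lift a pure function into a stream, in the spirit of varying's `var`.
sim :: (a -> b) -> Sim a b
sim f = Sim $ \a -> (f a, sim f)

-- Plug one stream's output into another's input, like varying's `~>`.
(~>>) :: Sim a b -> Sim b c -> Sim a c
Sim f ~>> Sim g = Sim $ \a ->
  let (b, f') = f a
      (c, g') = g b
  in (c, f' ~>> g')

-- A miniature version of our input type.
data In = InCursor Float Float | InTime Float

-- The cursor-move 'event stream' in this model: it emits Just a position
-- only for cursor inputs, and Nothing for everything else.
cursorMovedSim :: Sim In (Maybe (Float, Float))
cursorMovedSim = sim f
  where f (InCursor x y) = Just (x, y)
        f _              = Nothing
```

Feeding this stream a cursor input produces a `Just` position and a continuation stream; feeding the continuation a time input produces `Nothing`, which is exactly the event semantics described above.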
\n\nFor the implementation we use the `var` constructor to turn a pure function into \na stream \n\n> cursorMoved = var f -- :: VarT m InputEvent (Maybe (V2 Float))\n\nand then combine that with the event generator `onJust` which takes a `Maybe a` \nas input and produces a stream of `Event a`. We also use the plug right `~>` \ncombinator to plug the output of `var f` into the input of `onJust`.\n\n> ~> onJust\n\nAnd now we write our function that maps input values to `Maybe (V2 Float)`.\n\n> where f (InputCursor x y) = Just $ V2 x y\n> f _ = Nothing\n\nCursor Position\n--------------------------------------------------------------------------------\nNext we need a stream that produces the current cursor position each frame. \nUntil the cursor moves we need a default cursor position. `V2 -1 -1` seems \npretty good.\n\n> cursorPosition :: (Applicative m, Monad m) => VarT m UserInput (V2 Float)\n> cursorPosition = cursorMoved ~> foldStream (\\_ v -> v) (-1)\n\n`foldStream` works just like `foldl`, but it only operates on streams of Events,\nfolding event values into an accumulator. In this case the accumulator is simply\nthe latest cursor position.\n\nTime Deltas\n--------------------------------------------------------------------------------\nTo demonstrate that our pull network can be run on demand we'll describe a \ncircle that follows the cursor *and* changes shape and color over time. \nEventually the circle will stop changing shape and color so unless there's an \ninput event we should see the network *go to sleep*. \n\nSo far we have a stream for the cursor position but now we'll need a \nstream of time deltas. The reason we need deltas is that we'll use some \ntweening streams from [varying][1] that require deltas as input. This breaks \nsome of the rules of hard FRP - we're not supposed to deal in deltas, but I \nthink you'll find that once it's done we don't have to use deltas for \nanything more than a plug. 
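

The Cursor Position stream above also has a simple batch-mode intuition: over a whole list of inputs it behaves like keeping just the cursor events and remembering the last one. Here is that intuition as plain list code, a hypothetical sketch that uses tuples in place of `V2` and is not varying's API:

```haskell
import Data.Maybe (mapMaybe)

-- A miniature version of our input type.
data In = InCursor Float Float | InTime Float

-- The event stream, seen over a whole batch of inputs: keep cursor moves.
cursorEvents :: [In] -> [(Float, Float)]
cursorEvents = mapMaybe f
  where f (InCursor x y) = Just (x, y)
        f _              = Nothing

-- foldStream (\_ v -> v) (-1) folds each event into an accumulator; with
-- this particular fold the accumulator is simply the most recent cursor
-- position, and (-1, -1) stands in for the default `V2 (-1) (-1)`.
lastCursor :: [In] -> (Float, Float)
lastCursor = foldl (\_ v -> v) (-1, -1) . cursorEvents
```

With no cursor events in the batch, the default position falls out unchanged; otherwise the latest event wins.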
\n\n> timeUpdated :: (Applicative m, Monad m) => VarT m UserInput (Event Float)\n> timeUpdated = var f ~> onJust\n> where f (InputTime t) = Just t\n> f _ = Nothing\n\nWe need to fill in the gaps in `timeUpdated` when it doesn't produce an event.\n\n> deltas :: (Applicative m, Monad m) => VarT m UserInput Float\n> deltas = 0 `orE` timeUpdated\n\nThe implementation of `deltas` says \"produce `0` unless `timeUpdated` produces\nan event - if so, produce the value of that event.\"\n\nRequesting Updates\n--------------------------------------------------------------------------------\nIn order to have smooth animation over time we need to know that the network \nrequires frequent updates. In our main loop we'll block until user input \nhappens or until the network requests an update itself. \n\nUsing the monadic constructor `varM` we write a stream that can reach out to the \nWriter monad, make the request and pass whatever input it received through as \noutput. It's a bit hacky as far as FRP goes (we're only using its side-effect), \nbut this is what's required to use our system in a pushy fashion.\n\n> requestUpdate :: VarT Effect a a\n> requestUpdate = varM $ \\input -> do\n> tell [OutputNeedsUpdate] \n> return input\n\nYou can see that both `var` and `varM` use regular functions to create a stream.\nThe input to the stream is the only parameter to the function and the output\nof the function becomes the output of the stream. Check out the other \nconstructors in the [varying docs][varying constructors].\n\nTime\n--------------------------------------------------------------------------------\nCombining our `deltas` and `requestUpdate` streams gives us our main `time` \nstream. Whenever a part of our network depends on `time` the underlying \ninfrastructure should receive a request that the network be updated. 
If no part \nof the network depends on time our main loop will sleep and only wake to \nrender on user input.\n\n> time :: VarT Effect UserInput Float\n> time = deltas ~> requestUpdate\n\nSince we now have time flowing through our network we can use the tweening\ncapabilities that ship with [varying][1] in **Control.Varying.Tween**. Tweens \nrun one step higher in abstraction in order to play with varying values that are \nonly defined over a select domain. These are called splines. \n\nTweening With Splines\n--------------------------------------------------------------------------------\nA spline is a temporary changing value that has an end result. Splines are \nessentially chains of event streams. When an event stream stops producing the \ncurrent spline terminates and the next event stream takes over. The input, \noutput and result value of a spline are encoded in the spline's type signature. \nA 'Spline a b m c' describes a spline that takes 'a's as input,\nproduces 'b's as output - runs in the 'm' monad and results in 'c'. Our tweening\nspline will take time (`Float`) as input, give `Float` as output and result\nin the last tweened output value, also `Float`. We'll want to be able to change \nthe duration of the spline so we'll write a function that takes the duration and \nreturns our spline.\n\n> easeInOutSpline :: (Applicative m, Monad m) \n> => Float -> SplineT Float Float m Float\n> easeInOutSpline t = do\n> halfway <- tween easeInExpo 1 0 $ t\/2\n\nAbove we tween from `1` to `0` over `t\/2` seconds using an easing function.\nThe spline produces the interpolated values until `t\/2` seconds, and then \nresults in the last interpolated value, which is either `0` or very close to \n`0`. \n\nNow we complete the tween.\n\n> tween linear halfway 1 $ t\/2\n\nAs you can see, with splines we can use monadic notation. 
Since a spline is \nonly defined over a certain domain it can be considered to terminate, giving an \nend result and allowing you to chain another spline. This chaining or sequencing \nis what makes splines so useful for defining a signal's behavior over events. \n\nActually Using Splines\n--------------------------------------------------------------------------------\nOur tweening spline represents a number over time but splines are only \ncontinuous over a finite domain, which is in this case `t` seconds. We'd like to \nuse a completely continuous signal (a `Var m InputEvent Pic`) to tween our \n`Pic`. `Spline` runs at one level of abstraction above `VarT` and as such, we \ncan create an output stream of the spline, turning it back into a `VarT`. All \nwe need is to use `outputStream` along with an initial value. The resulting \nstream will produce the values of the spline until it concludes. \nOnce the spline terminates the stream will repeat the last known value forever. \nIf the spline *never* produces, the initial value will be produced forever. \n\n> easeInOutExpo :: (Applicative m, Monad m) => Float -> VarT m Float Float \n> easeInOutExpo = outputStream 1 . easeInOutSpline\n\nWe'd like to demonstrate that the network sleeps when time is no longer\na dependency, which means we'll have to set up a network that depends on time\nfor a while and then moves on. Since this sounds like sequencing event streams\nwe'll use splines again.\n\n> multSequence :: Float -> SplineT UserInput Float Effect Float\n\nNote how the type signature now contains `Effect` since we'll need to use time,\nwhich requires the ability to access the `Writer` monad to write out update \nrequests.\n\n> multSequence t = do\n> (val,_) <- (time ~> easeInOutExpo t) `untilEvent` (time ~> after t)\n> return val\n\nAbove we plug time into two streams - the first is our tweening stream and the\nsecond is an event stream we use as a timer. 
`time ~> after t` will produce \n`Event ()` forever after `t` seconds. We combine the two streams using the \ncombinator `untilEvent`. This combinator produces output values using the first\nstream until an event occurs in the second stream, then returns the last values\nof each in a tuple. `after` will produce `Event ()`, and `untilEvent` unwraps\nthe event value for you, tupling it up - making the right value of our spline's \nresult `()`, which we can ignore.\n\nAltogether, `multSequence` is a spline that depends on time for `t` seconds\nand then returns `1` forever. This should help us demonstrate that the network\nno longer depends on time and can render sporadically whenever a user event\ncomes down the pipe.\n\nLastly we'll turn that spline into a continuous stream.\n\n> multOverTime :: Float -> VarT Effect UserInput Float\n> multOverTime = outputStream 0 . multSequence \n\nThe Big Picture\n--------------------------------------------------------------------------------\nNow we can combine our network. We start by writing a pure function that \ntakes a position, scale, red, green and blue parameters and returns a `Pic`. \nThis `Pic` is what we'll render each frame.\n\nHere we use functions from [gelatin-picture][2] to translate, scale and fill\na circle of radius 100. \n\n> picture :: V2 Float -> Float -> Float -> Float -> Float -> Pic\n> picture cursor s r g b = \n> move cursor $ scale (V2 s s) $ withFill (solid $ V4 r g b 1)\n> $ circle 100 \n\nWe put it all together with [varying][1]'s Applicative instance to construct our \n`Pic` stream.\n\n> network :: VarT Effect UserInput Pic\n> network = picture <$> cursorPosition \n> <*> multOverTime 3\n> <*> multOverTime 1 <*> multOverTime 2 <*> multOverTime 3\n\nWhat this bit of code says is that our network is a picture that follows the\ncursor position, changing scale over three seconds, changing its red color \nchannel over one second, its green over two seconds and its blue over three \nseconds. 
This will all happen in parallel so all time-based animation should \nconclude after three seconds. At that point the circle will be white \n(multOverTime ends on `1.0`), with a radius of `100` and following the cursor.\n\nOur Game Loop\n================================================================================\nIn our main loop we need to make a window and write functions for pushing input\ninto our network. We'll also sample our network and render it.\n\n> main :: IO ()\n> main = do\n\nStart up glfw, receiving a `Rez`. A `Rez` is a type of resource that\n[gelatin-glfw][3] uses to render. It's a composite type containing a glfw window\nand some shaders.\n\n> (rez,window) <- startupGLFWBackend 800 600 \"Odin Part One - GLFW\" Nothing Nothing \n> setWindowPos window 400 400\n\nNext we'll need a `TVar` to contain our app data - that way we can access and \nmodify it from any thread. This is going to be a big part of turning our pull\nbased FRP network into a push-esque system, where we only render what and when\nwe need. \n\n> t0 <- getCurrentTime\n> tvar <- atomically $ newTVar AppData{ appNetwork = network \n> , appCache = mempty\n> , appEvents = []\n> , appUTC = t0\n> }\n\n[varying][1] is a pull based FRP implementation. This roughly means that as the \nprogrammer you describe a network using various streams, combining them together \nuntil you have one final stream that you can \"pull\" or sample from **each \nframe**. Your per-frame game data will essentially just fall out of the network \nevery frame. The downside to a pull network is that you have to sample it often, \ntypically every frame - regardless of whether or not anything has changed. This\nis what we are trying to avoid. \n\nIn an attempt to remedy that situation we'll only run the network when we \nreceive an event from glfw - or if our network requests that we run it. Let's \ndefine the function we'll use to \"push\" a user event into our app. 
\n\n> let push input = atomically $ modifyTVar' tvar $ \\app -> \n> app{ appEvents = appEvents app ++ [input] }\n\nThat's it! We simply use `atomically $ modifyTVar'` to update the app's \n`appEvents` slot, adding the new input at the end of the list. Haskell makes \nthreading suuuuper easy.\n\nSeparately we need a function to sample and render. We'll run this any time glfw\nreceives an event of any kind. Actually we'll run this any time glfw wakes the\nmain thread but that's almost the same thing.\n\n> step = do \n> t <- getCurrentTime\n> putStrLn $ \"Stepping \" ++ show t\n\nAbove we print the current time just so we know the app is stepping in the \nconsole. We can use this to verify that we're only running the step function\nwhen an event occurs.\n\nThe Network Step\n----------------\nNow we'll run the network. We keep track of the last time we ran the network\nand create a new time `InputEvent` to step our time based streams with, then\nwe add that on to the end of our stored events and sample the network using\n`stepMany`. \n\n`stepMany` takes a list of input to iterate your network over, along with one \nfinal input, returning the last sample. It runs over each item of the list until \nit gets to the empty list and then runs one more time using that final input. \nUltimately, if you fed your network an empty list of events, `stepMany` would \nstill step your network once over the final input. If that sounds funky don't \nworry, there are other strategies to use for sampling, but this makes the most \nsense here. 
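

Here is a sketch of that sampling behaviour, using a throwaway Mealy-style stream type so the example stands alone. This is hypothetical illustration code, not varying's actual implementation of `stepMany`:

```haskell
-- A minimal pure stream: one input in, one output and a next stream out.
newtype Sim a b = Sim { runSim :: a -> (b, Sim a b) }

-- A stepMany-alike: run the stream over every queued input, then once
-- more with a final input, returning the last sample and the next stream.
stepManySim :: [a] -> a -> Sim a b -> (b, Sim a b)
stepManySim []     final s = runSim s final
stepManySim (e:es) final s = let (_, s') = runSim s e
                             in stepManySim es final s'

-- A stream that sums everything it is fed, so we can watch it step.
running :: Float -> Sim Float Float
running n = Sim $ \d -> let n' = n + d in (n', running n')
```

`stepManySim [1,2,3] 4 (running 0)` threads the stream through all three queued inputs and then the final one, so the returned sample is the running total `10`.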
\n\n> AppData net cache events lastUTC <- readTVarIO tvar\n> let dt = max oneFrame $ realToFrac $ diffUTCTime t lastUTC \n> ev = InputTime dt \n> ((pic, nextNet), outs) = runWriter $ stepMany events ev net \n\nNow we can render our `Pic`.\n\n> newCache <- renderWithGLFW window rez cache pic\n\nAnd write our new app state, making sure to clear out our events.\n\n> atomically $ writeTVar tvar $ AppData nextNet newCache [] t\n\nThen apply our network's requests. We filter our output so that we only apply \na single update request. We don't want time going super fast just because \nmore network nodes request it. Time...woah.\n\n> let needsUpdate = OutputNeedsUpdate `elem` outs\n> requests = filter (\/= OutputNeedsUpdate) outs \n> mapM_ applyOutput requests \n> when needsUpdate $ applyOutput OutputNeedsUpdate\n\nHere's where the update request magic happens. We spawn a new thread to wait a \nduration, whatever we see fit. In this case it's `oneFrame`, or one thirtieth of \na second. Then we call glfw's\n`postEmptyEvent`. `postEmptyEvent` will wake up the main thread from \n`waitEvents`, which you'll see later. Note that `threadDelay` takes microseconds.\n\n> oneFrame = 1\/30 \n> applyOutput OutputNeedsUpdate = void $ async $ do \n> threadDelay $ round (oneFrame * 1000000)\n> postEmptyEvent\n> applyOutput _ = return ()\n\nBut what about our user input events? For this we can wire up glfw's nifty \ncallbacks. They'll simply push some events in our event queue. \n\n> setCursorPosCallback window $ Just $ \\_ x y ->\n> push $ InputCursor (realToFrac x) (realToFrac y)\n\n> setWindowSizeCallback window $ Just $ \\_ w h -> do\n> print (\"window size\",w,h)\n> push $ InputWindowSize w h\n\nNext is a nifty hack. In this callback we won't push an event, we'll just call\n`step` to update our app while the window is refreshing. 
This gives the app the\nability to run even when the window is actively being resized.\n\n> setWindowRefreshCallback window $ Just $ \\_ -> do\n> putStrLn \"window refresh\"\n> step\n\nAnd now we can loop forever! We process all the stored events and render our \napp. Then we call `waitEvents` which will put the main thread to sleep until\nsome events have been stored - rinse and repeat. \n\nOutput requests for updates will be scheduled in a separate thread and will post \nan empty event, waking up the main thread from `waitEvents` - causing our loop\nto recurse.\n\n> let loop = step >> waitEvents >> loop\n> loop\n\nConclusion\n--------------------------------------------------------------------------------\nFRP is a pretty cool thing. It's got some great ideas and it's a nice way to \norganize your code. It encourages very granular functions and by providing a \nsmall feedback mechanism you can remedy some of the bitter taste of constant \nrendering. \n\nHopefully this tutorial has been helpful. Please comment at [HN](https:\/\/news.ycombinator.com\/item?id=11090457) or [Reddit](https:\/\/www.reddit.com\/r\/haskell\/comments\/45gsbw\/learning_me_a_haskell_frp_game_infrastructure\/),\nconstructive or not! 
You can say things like \"I hate me an FRP\" or \"I learn me\nsome streams, hot dang.\" Thanks for reading :)\n\n[Oh - also - don't forget that this is a literate haskell file and can be built \nand run in standard cabal or stack fashion][odin].\n\n[1]: http:\/\/hackage.haskell.org\/package\/varying\n[2]: http:\/\/github.com\/schell\/gelatin\/tree\/master\/gelatin-picture\n[3]: http:\/\/github.com\/schell\/gelatin\/tree\/master\/gelatin-glfw\n[4]: http:\/\/hackage.haskell.org\/package\/netwire\n[5]: http:\/\/hackage.haskell.org\/package\/renderable\n\n[odin]: https:\/\/github.com\/schell\/odin\n[fonty]: http:\/\/hackage.haskell.org\/package\/FontyFruity\n[linear]: http:\/\/hackage.haskell.org\/package\/linear\n[glfw-b]: http:\/\/hackage.haskell.org\/package\/GLFW-b\n[varying core]: http:\/\/hackage.haskell.org\/package\/varying\/docs\/Control-Varying-Core.html\n[varying constructors]: http:\/\/hackage.haskell.org\/package\/varying\/docs\/Control-Varying-Core.html#g:1\n","avg_line_length":45.5733855186,"max_line_length":221,"alphanum_fraction":0.6823686019} {"size":560,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"Exercises: The Quad\n\n> data Quad = One | Two | Three | Four deriving (Eq, Show)\n\n1. eQuad :: Either Quad Quad\n\neQuad can be 4 + 4 = 8 different possibilities\n\n\n2. prodQuad :: (Quad, Quad)\n\nprodQuad can be 4*4 = 16 different possibilities\n\n3. funcQuad :: Quad -> Quad\n\nfuncQuad can be 4^4 = 256 different possibilities\n\n4. prodTBool :: (Bool, Bool, Bool)\n\nprodTBool can be 2^3 = 8 different possibilities\n\n5. gTwo :: Bool -> Bool -> Bool\n\ngTwo can have 2^(2*2) = 16 different possibilities\n\n6. 
fTwo :: Bool -> Quad -> Quad\n\nfTwo can have (4^4)^2 = 65536 different possibilities\n","avg_line_length":19.3103448276,"max_line_length":58,"alphanum_fraction":0.6875} {"size":2719,"ext":"lhs","lang":"Literate Haskell","max_stars_count":8.0,"content":"\nCode for the sql server type conversion rules (and implicit\/explicit\ncasting)\n\nThe plan is to follow the same functions as postgresql support for now:\n\nfindCallMatch\nresolveResultSetType\ncheckAssignmentValid\n\nThe rules in sql server for implicit casting and function resolution\nare quite different to postgresql. The biggest one is that e.g. select\ncast(1 as int) + cast('2' as varchar(20)) works in sql server but not\nin postgresql.\n\n\njust hack for operators for now: if one argument is a number type, and\nthe other is a text type, then cast the text to number.\n\n> {-# LANGUAGE PatternGuards,OverloadedStrings #-}\n> module Database.HsSqlPpp.Internals.TypeChecking.SqlTypeConversion (\n> findCallMatch\n> ) where\n>\n> --import Data.Maybe\n> --import Data.List\n> --import Data.Either\n> --import Debug.Trace\n> --import Data.Char\n> --import Control.Monad\n>\n> import Database.HsSqlPpp.Internals.TypesInternal\n> import Database.HsSqlPpp.Internals.Catalog.CatalogInternal\n> --import Database.HsSqlPpp.Utils.Utils\n> --import Database.HsSqlPpp.Internals.TypeChecking.OldTediousTypeUtils\n> import qualified Database.HsSqlPpp.Internals.TypeChecking.OldTypeConversion as T\n> import Data.Text (Text)\n\n> findCallMatch :: Catalog -> Text -> [Type] -> Either [TypeError] OperatorPrototype\n> findCallMatch cat fnName argsType =\n> case argsType of\n> [_a,_b] | Just x <- checkOperator cat fnName argsType -> Right x\n> _ -> T.findCallMatch cat fnName argsType\n\nhack to allow implicit casting one of the args to a numeric operator\nfrom a text type to the correct numeric type:\n\ncheck is an operator\ncheck one arg is numeric, the other text\nmatch an exact operator itself with two args the same numeric type\n- in this 
case, cast the text arg to the numeric type and return a match\n\n\n> checkOperator :: Catalog -> Text -> [Type] -> Maybe OperatorPrototype\n> checkOperator cat fnName [a,b] | Just t <- ty a b =\n> -- have the argument type in t\n> -- find all the matching fns by name\n> let nm = catLookupFns cat fnName\n> -- keep the ones which have exactly two args with the\n> -- type t, only proceed if there is one match\n> cands = filter (\\(_,as,_,_) -> as == [t,t]) nm\n> in case cands of\n> [c] -> return c\n> _ -> Nothing\n> where\n> ty a' b' | isNumber a' && isText b' = Just a'\n> ty a' b' | isText a' && isNumber b' = Just b'\n> ty _ _ = Nothing\n> isNumber x =\n> x `elem` [typeSmallInt,typeBigInt,typeInt\n> ,typeNumeric,typeFloat4,typeFloat8]\n> --isNumber _ = False\n> isText x =\n> x `elem` [typeVarChar,typeNVarChar,typeChar, ScalarType \"text\"]\n> --isText _ = False\n> checkOperator _ _ _ = Nothing\n","avg_line_length":35.3116883117,"max_line_length":85,"alphanum_fraction":0.6976829717} {"size":603,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"Primes\r\n13th October, 2013\r\nIn Chapter 09\r\n\r\n> module Primes where\r\n\r\n> primes :: [Integer]\r\n> primes = 2:([3..] \\\\ composites)\r\n> where composites = mergeAll [map (p*) [p..] 
| p <- primes]\r\n\r\n> (x:xs) \\\\ (y:ys) | x<y = x:(xs \\\\ (y:ys))\r\n> | x==y = xs \\\\ ys\r\n> | x>y = (x:xs) \\\\ ys\r\n\r\n> mergeAll (xs:xss) = xmerge xs (mergeAll xss)\r\n\r\n> xmerge (x:xs) ys = x:merge xs ys\r\n\r\n> merge :: Ord a => [a] -> [a] -> [a]\r\n> merge (x:xs) (y:ys) | x<y = x:merge xs (y:ys)\r\n> | x==y = x:merge xs ys\r\n> | x>y = y:merge (x:xs) ys\r\n\r\n\r\n","avg_line_length":24.12,"max_line_length":62,"alphanum_fraction":0.4245439469} {"size":50892,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"%\n% (c) The University of Glasgow 2006\n%\n\n\\begin{code}\n{-# OPTIONS -fno-warn-tabs #-}\n-- The above warning suppression flag is a temporary kludge.\n-- While working on this module you are encouraged to remove it and\n-- detab the module (please do the detabbing in a separate patch). See\n-- http:\/\/hackage.haskell.org\/trac\/ghc\/wiki\/Commentary\/CodingStyle#TabsvsSpaces\n-- for details\n\n-- | Module for (a) type kinds and (b) type coercions, \n-- as used in System FC. See 'CoreSyn.Expr' for\n-- more on System FC and how coercions fit into it.\n--\nmodule Coercion (\n -- * CoAxioms\n mkCoAxBranch, mkBranchedCoAxiom, mkSingleCoAxiom,\n\n -- * Main data type\n Coercion(..), Var, CoVar,\n LeftOrRight(..), pickLR,\n\n -- ** Functions over coercions\n coVarKind,\n coercionType, coercionKind, coercionKinds, isReflCo,\n isReflCo_maybe,\n mkCoercionType,\n\n\t-- ** Constructing coercions\n mkReflCo, mkCoVarCo, \n mkAxInstCo, mkUnbranchedAxInstCo, mkAxInstLHS, mkAxInstRHS,\n mkUnbranchedAxInstRHS,\n mkPiCo, mkPiCos, mkCoCast,\n mkSymCo, mkTransCo, mkNthCo, mkLRCo,\n\tmkInstCo, mkAppCo, mkTyConAppCo, mkFunCo,\n mkForAllCo, mkUnsafeCo,\n mkNewTypeCo, \n\n -- ** Decomposition\n splitNewTypeRepCo_maybe, instNewTyCon_maybe, \n topNormaliseNewType, topNormaliseNewTypeX,\n\n decomposeCo, getCoVar_maybe,\n splitTyConAppCo_maybe,\n splitAppCo_maybe,\n splitForAllCo_maybe,\n\n\t-- ** Coercion variables\n\tmkCoVar, isCoVar, isCoVarType, coVarName, setCoVarName, setCoVarUnique,\n\n -- ** 
Free variables\n tyCoVarsOfCo, tyCoVarsOfCos, coVarsOfCo, coercionSize,\n\t\n -- ** Substitution\n CvSubstEnv, emptyCvSubstEnv, \n \tCvSubst(..), emptyCvSubst, Coercion.lookupTyVar, lookupCoVar,\n\tisEmptyCvSubst, zapCvSubstEnv, getCvInScope,\n substCo, substCos, substCoVar, substCoVars,\n substCoWithTy, substCoWithTys, \n\tcvTvSubst, tvCvSubst, mkCvSubst, zipOpenCvSubst,\n substTy, extendTvSubst, extendCvSubstAndInScope,\n\tsubstTyVarBndr, substCoVarBndr,\n\n\t-- ** Lifting\n\tliftCoMatch, liftCoSubstTyVar, liftCoSubstWith, \n \n -- ** Comparison\n coreEqCoercion, coreEqCoercion2,\n\n -- ** Forcing evaluation of coercions\n seqCo,\n \n -- * Pretty-printing\n pprCo, pprParendCo, \n pprCoAxiom, pprCoAxBranch, pprCoAxBranchHdr, \n\n -- * Tidying\n tidyCo, tidyCos,\n\n -- * Other\n applyCo\n ) where \n\n#include \"HsVersions.h\"\n\nimport Unify\t( MatchEnv(..), matchList )\nimport TypeRep\nimport qualified Type\nimport Type hiding( substTy, substTyVarBndr, extendTvSubst )\nimport TyCon\nimport CoAxiom\nimport Var\nimport VarEnv\nimport VarSet\nimport Maybes ( orElse )\nimport Name\t( Name, NamedThing(..), nameUnique, nameModule, getSrcSpan )\nimport NameSet\nimport OccName \t( parenSymOcc )\nimport Util\nimport BasicTypes\nimport Outputable\nimport Unique\nimport Pair\nimport SrcLoc\nimport PrelNames\t( funTyConKey, eqPrimTyConKey )\nimport Control.Applicative\nimport Data.Traversable (traverse, sequenceA)\nimport Control.Arrow (second)\nimport FastString\n\nimport qualified Data.Data as Data hiding ( TyCon )\n\\end{code}\n\n\n%************************************************************************\n%* *\n Constructing axioms\n These functions are here because tidyType etc \n are not available in CoAxiom\n%* *\n%************************************************************************\n\nNote [Tidy axioms when we build them]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWe print out axioms and don't want to print stuff like\n F k k a b = ...\nInstead we must tidy those kind 
variables. See Trac #7524.\n\n\n\\begin{code}\nmkCoAxBranch :: [TyVar] -- original, possibly stale, tyvars\n -> [Type] -- LHS patterns\n -> Type -- RHS\n -> SrcSpan\n -> CoAxBranch\nmkCoAxBranch tvs lhs rhs loc\n = CoAxBranch { cab_tvs = tvs1\n , cab_lhs = tidyTypes env lhs\n , cab_rhs = tidyType env rhs\n , cab_loc = loc }\n where\n (env, tvs1) = tidyTyVarBndrs emptyTidyEnv tvs\n -- See Note [Tidy axioms when we build them]\n \n\nmkBranchedCoAxiom :: Name -> TyCon -> [CoAxBranch] -> CoAxiom Branched\nmkBranchedCoAxiom ax_name fam_tc branches\n = CoAxiom { co_ax_unique = nameUnique ax_name\n , co_ax_name = ax_name\n , co_ax_tc = fam_tc\n , co_ax_implicit = False\n , co_ax_branches = toBranchList branches }\n\nmkSingleCoAxiom :: Name -> [TyVar] -> TyCon -> [Type] -> Type -> CoAxiom Unbranched\nmkSingleCoAxiom ax_name tvs fam_tc lhs_tys rhs_ty\n = CoAxiom { co_ax_unique = nameUnique ax_name\n , co_ax_name = ax_name\n , co_ax_tc = fam_tc\n , co_ax_implicit = False\n , co_ax_branches = FirstBranch branch }\n where\n branch = mkCoAxBranch tvs lhs_tys rhs_ty (getSrcSpan ax_name)\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Coercions\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\n-- | A 'Coercion' is concrete evidence of the equality\/convertibility\n-- of two types.\n\n-- If you edit this type, you may need to update the GHC formalism\n-- See Note [GHC Formalism] in coreSyn\/CoreLint.lhs\ndata Coercion \n -- These ones mirror the shape of types\n = Refl Type -- See Note [Refl invariant]\n -- Invariant: applications of (Refl T) to a bunch of identity coercions\n -- always show up as Refl.\n -- For example (Refl T) (Refl a) (Refl b) shows up as (Refl (T a b)).\n\n -- Applications of (Refl T) to some coercions, at least one of\n -- which is NOT the identity, show up as TyConAppCo.\n -- (They may not be fully saturated however.)\n -- 
ConAppCo coercions (like all coercions other than Refl)\n -- are NEVER the identity.\n\n -- These ones simply lift the correspondingly-named \n -- Type constructors into Coercions\n | TyConAppCo TyCon [Coercion] -- lift TyConApp \n \t -- The TyCon is never a synonym; \n\t -- we expand synonyms eagerly\n\t -- But it can be a type function\n\n | AppCo Coercion Coercion -- lift AppTy\n\n -- See Note [Forall coercions]\n | ForAllCo TyVar Coercion -- forall a. g\n\n -- These are special\n | CoVarCo CoVar\n | AxiomInstCo (CoAxiom Branched) BranchIndex [Coercion]\n -- See also [CoAxiom index]\n -- The coercion arguments always *precisely* saturate \n -- arity of (that branch of) the CoAxiom. If there are\n -- any left over, we use AppCo. See \n -- See [Coercion axioms applied to coercions]\n\n | UnsafeCo Type Type\n | SymCo Coercion\n | TransCo Coercion Coercion\n\n -- These are destructors\n | NthCo Int Coercion -- Zero-indexed; decomposes (T t0 ... tn)\n | LRCo LeftOrRight Coercion -- Decomposes (t_left t_right)\n | InstCo Coercion Type\n deriving (Data.Data, Data.Typeable)\n\n-- If you edit this type, you may need to update the GHC formalism\n-- See Note [GHC Formalism] in coreSyn\/CoreLint.lhs\ndata LeftOrRight = CLeft | CRight \n deriving( Eq, Data.Data, Data.Typeable )\n\npickLR :: LeftOrRight -> (a,a) -> a\npickLR CLeft (l,_) = l\npickLR CRight (_,r) = r\n\\end{code}\n\n\nNote [Refl invariant]\n~~~~~~~~~~~~~~~~~~~~~\nCoercions have the following invariant \n Refl is always lifted as far as possible. \n\nYou might think that a consequencs is:\n Every identity coercions has Refl at the root\n\nBut that's not quite true because of coercion variables. Consider\n g where g :: Int~Int\n Left h where h :: Maybe Int ~ Maybe Int\netc. 
So the consequence is only true of coercions that\nhave no coercion variables.\n\nNote [Coercion axioms applied to coercions]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe reason coercion axioms can be applied to coercions and not just\ntypes is to allow for better optimization. There are some cases where\nwe need to be able to \"push transitivity inside\" an axiom in order to\nexpose further opportunities for optimization. \n\nFor example, suppose we have\n\n C a : t[a] ~ F a\n g : b ~ c\n\nand we want to optimize\n\n sym (C b) ; t[g] ; C c\n\nwhich has the kind\n\n F b ~ F c\n\n(stopping through t[b] and t[c] along the way).\n\nWe'd like to optimize this to just F g -- but how? The key is\nthat we need to allow axioms to be instantiated by *coercions*,\nnot just by types. Then we can (in certain cases) push\ntransitivity inside the axiom instantiations, and then react\nopposite-polarity instantiations of the same axiom. In this\ncase, e.g., we match t[g] against the LHS of (C c)'s kind, to\nobtain the substitution a |-> g (note this operation is sort\nof the dual of lifting!) and hence end up with\n\n C g : t[b] ~ F c\n\nwhich indeed has the same kind as t[g] ; C c.\n\nNow we have\n\n sym (C b) ; C g\n\nwhich can be optimized to F g.\n\nNote [CoAxiom index]\n~~~~~~~~~~~~~~~~~~~~\nA CoAxiom has 1 or more branches. Each branch has contains a list\nof the free type variables in that branch, the LHS type patterns,\nand the RHS type for that branch. When we apply an axiom to a list\nof coercions, we must choose which branch of the axiom we wish to\nuse, as the different branches may have different numbers of free\ntype variables. 
(The number of type patterns is always the same
among branches, but that doesn't quite concern us here.)

The Int in the AxiomInstCo constructor is the 0-indexed number
of the chosen branch.

Note [Forall coercions]
~~~~~~~~~~~~~~~~~~~~~~~
Constructing coercions between forall-types can be a bit tricky.
Currently, the situation is as follows:

  ForAllCo TyVar Coercion

represents a coercion between polymorphic types, with the rule

           v : k       g : t1 ~ t2
  ----------------------------------------------
  ForAllCo v g : (all v:k . t1) ~ (all v:k . t2)

Note that it's only necessary to coerce between polymorphic types
where the type variables have identical kinds, because equality on
kinds is trivial.

Note [Predicate coercions]
~~~~~~~~~~~~~~~~~~~~~~~~~~
Suppose we have
   g :: a~b
How can we coerce between types
   ([c]~a) => [a] -> c
and
   ([c]~b) => [b] -> c
where the equality predicate *itself* differs?

Answer: we simply treat (~) as an ordinary type constructor, so these
types really look like

   ((~) [c] a) -> [a] -> c
   ((~) [c] b) -> [b] -> c

So the coercion between the two is obviously

   ((~) [c] g) -> [g] -> c

Another way to see this is to say that we simply collapse predicates to
their representation type (see Type.coreView and Type.predTypeRep).

This collapse is done by mkPredCo; there is no PredCo constructor
in Coercion.  This is important because we need Nth to work on
predicates too:
    Nth 1 ((~) [c] g) = g
See Simplify.simplCoercionF, which generates such selections.

Note [Kind coercions]
~~~~~~~~~~~~~~~~~~~~~
Suppose T :: * -> *, and g :: A ~ B
Then the coercion
   TyConAppCo T [g] : T A ~ T B

Now suppose S :: forall k. k -> *, and g :: A ~ B
Then the coercion
   TyConAppCo S [Refl *, g] : S * A ~ S * B

Notice that the arguments to TyConAppCo are coercions, but the first
represents a *kind* coercion.
Now, we don't allow any non-trivial kind\ncoercions, so it's an invariant that any such kind coercions are Refl.\nLint checks this. \n\nHowever it's inconvenient to insist that these kind coercions are always\n*structurally* (Refl k), because the key function exprIsConApp_maybe\npushes coercions into constructor arguments, so \n C k ty e |> g\nmay turn into\n C (Nth 0 g) ....\nNow (Nth 0 g) will optimise to Refl, but perhaps not instantly.\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\\subsection{Coercion variables}\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\ncoVarName :: CoVar -> Name\ncoVarName = varName\n\nsetCoVarUnique :: CoVar -> Unique -> CoVar\nsetCoVarUnique = setVarUnique\n\nsetCoVarName :: CoVar -> Name -> CoVar\nsetCoVarName = setVarName\n\nisCoVar :: Var -> Bool\nisCoVar v = isCoVarType (varType v)\n\nisCoVarType :: Type -> Bool\nisCoVarType ty \t -- Tests for t1 ~# t2, the unboxed equality\n = case splitTyConApp_maybe ty of\n Just (tc,tys) -> tc `hasKey` eqPrimTyConKey && tys `lengthAtLeast` 2\n Nothing -> False\n\\end{code}\n\n\n\\begin{code}\ntyCoVarsOfCo :: Coercion -> VarSet\n-- Extracts type and coercion variables from a coercion\ntyCoVarsOfCo (Refl ty) = tyVarsOfType ty\ntyCoVarsOfCo (TyConAppCo _ cos) = tyCoVarsOfCos cos\ntyCoVarsOfCo (AppCo co1 co2) = tyCoVarsOfCo co1 `unionVarSet` tyCoVarsOfCo co2\ntyCoVarsOfCo (ForAllCo tv co) = tyCoVarsOfCo co `delVarSet` tv\ntyCoVarsOfCo (CoVarCo v) = unitVarSet v\ntyCoVarsOfCo (AxiomInstCo _ _ cos) = tyCoVarsOfCos cos\ntyCoVarsOfCo (UnsafeCo ty1 ty2) = tyVarsOfType ty1 `unionVarSet` tyVarsOfType ty2\ntyCoVarsOfCo (SymCo co) = tyCoVarsOfCo co\ntyCoVarsOfCo (TransCo co1 co2) = tyCoVarsOfCo co1 `unionVarSet` tyCoVarsOfCo co2\ntyCoVarsOfCo (NthCo _ co) = tyCoVarsOfCo co\ntyCoVarsOfCo (LRCo _ co) = tyCoVarsOfCo co\ntyCoVarsOfCo (InstCo co ty) = tyCoVarsOfCo co `unionVarSet` 
tyVarsOfType ty

tyCoVarsOfCos :: [Coercion] -> VarSet
tyCoVarsOfCos cos = foldr (unionVarSet . tyCoVarsOfCo) emptyVarSet cos

coVarsOfCo :: Coercion -> VarSet
-- Extract *coercion* variables only.  Tiresome to repeat the code, but easy.
coVarsOfCo (Refl _)              = emptyVarSet
coVarsOfCo (TyConAppCo _ cos)    = coVarsOfCos cos
coVarsOfCo (AppCo co1 co2)       = coVarsOfCo co1 `unionVarSet` coVarsOfCo co2
coVarsOfCo (ForAllCo _ co)       = coVarsOfCo co
coVarsOfCo (CoVarCo v)           = unitVarSet v
coVarsOfCo (AxiomInstCo _ _ cos) = coVarsOfCos cos
coVarsOfCo (UnsafeCo _ _)        = emptyVarSet
coVarsOfCo (SymCo co)            = coVarsOfCo co
coVarsOfCo (TransCo co1 co2)     = coVarsOfCo co1 `unionVarSet` coVarsOfCo co2
coVarsOfCo (NthCo _ co)          = coVarsOfCo co
coVarsOfCo (LRCo _ co)           = coVarsOfCo co
coVarsOfCo (InstCo co _)         = coVarsOfCo co

coVarsOfCos :: [Coercion] -> VarSet
coVarsOfCos cos = foldr (unionVarSet . coVarsOfCo) emptyVarSet cos

coercionSize :: Coercion -> Int
coercionSize (Refl ty)             = typeSize ty
coercionSize (TyConAppCo _ cos)    = 1 + sum (map coercionSize cos)
coercionSize (AppCo co1 co2)       = coercionSize co1 + coercionSize co2
coercionSize (ForAllCo _ co)       = 1 + coercionSize co
coercionSize (CoVarCo _)           = 1
coercionSize (AxiomInstCo _ _ cos) = 1 + sum (map coercionSize cos)
coercionSize (UnsafeCo ty1 ty2)    = typeSize ty1 + typeSize ty2
coercionSize (SymCo co)            = 1 + coercionSize co
coercionSize (TransCo co1 co2)     = 1 + coercionSize co1 + coercionSize co2
coercionSize (NthCo _ co)          = 1 + coercionSize co
coercionSize (LRCo _ co)           = 1 + coercionSize co
coercionSize (InstCo co ty)        = 1 + coercionSize co + typeSize ty
\end{code}

%************************************************************************
%*                                                                      *
                            Tidying coercions
%*                                                                      *
%************************************************************************

\begin{code}
tidyCo :: TidyEnv -> Coercion -> Coercion
tidyCo env@(_, subst) co
  = go co
  where
    go (Refl ty)             = Refl (tidyType env
ty)\n go (TyConAppCo tc cos) = let args = map go cos\n in args `seqList` TyConAppCo tc args\n go (AppCo co1 co2) = (AppCo $! go co1) $! go co2\n go (ForAllCo tv co) = ForAllCo tvp $! (tidyCo envp co)\n where\n (envp, tvp) = tidyTyVarBndr env tv\n go (CoVarCo cv) = case lookupVarEnv subst cv of\n Nothing -> CoVarCo cv\n Just cv' -> CoVarCo cv'\n go (AxiomInstCo con ind cos) = let args = tidyCos env cos\n in args `seqList` AxiomInstCo con ind args\n go (UnsafeCo ty1 ty2) = (UnsafeCo $! tidyType env ty1) $! tidyType env ty2\n go (SymCo co) = SymCo $! go co\n go (TransCo co1 co2) = (TransCo $! go co1) $! go co2\n go (NthCo d co) = NthCo d $! go co\n go (LRCo lr co) = LRCo lr $! go co\n go (InstCo co ty) = (InstCo $! go co) $! tidyType env ty\n\ntidyCos :: TidyEnv -> [Coercion] -> [Coercion]\ntidyCos env = map (tidyCo env)\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Pretty-printing coercions\n%* *\n%************************************************************************\n\n@pprCo@ is the standard @Coercion@ printer; the overloaded @ppr@\nfunction is defined to use this. 
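The precedence scheme behind these printers can be sketched standalone (a toy string-based printer over a toy expression type, not GHC's `SDoc` machinery): parenthesize whenever the context precedence is at least as high as the construct's own, and implement the "parenthesized" entry point simply by starting at a high context precedence.

```haskell
-- Toy sketch of precedence-driven printing; GHC's real code uses SDoc.
data Prec = TopPrec | FunPrec | TyConPrec deriving (Eq, Ord)

data Expr = Var String | App Expr Expr

-- Add parens when the context binds at least as tightly as the construct
maybeParen :: Prec -> Prec -> String -> String
maybeParen ctxt inner s
  | ctxt >= inner = "(" ++ s ++ ")"
  | otherwise     = s

pprExpr, pprParendExpr :: Expr -> String
pprExpr       = ppr_expr TopPrec
pprParendExpr = ppr_expr TyConPrec   -- "initial context precedence very high"

ppr_expr :: Prec -> Expr -> String
ppr_expr _ (Var v)   = v                       -- atomic: never parenthesized
ppr_expr p (App f x) = maybeParen p TyConPrec $
                       ppr_expr FunPrec f ++ " " ++ ppr_expr TyConPrec x

main :: IO ()
main = do
  putStrLn (pprExpr (App (Var "f") (App (Var "g") (Var "x"))))  -- f (g x)
  putStrLn (pprParendExpr (App (Var "f") (Var "x")))            -- (f x)
```

Atomic forms stay bare even under `pprParendExpr`, which is the same behaviour described for `pprParendCo` below.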
@pprParendCo@ is the same, except that it
parenthesizes the coercion, apart from the atomic cases.
@pprParendCo@ works just by setting the initial context precedence
very high.

\begin{code}
instance Outputable Coercion where
  ppr = pprCo

pprCo, pprParendCo :: Coercion -> SDoc
pprCo       co = ppr_co TopPrec   co
pprParendCo co = ppr_co TyConPrec co

ppr_co :: Prec -> Coercion -> SDoc
ppr_co _ (Refl ty) = angleBrackets (ppr ty)

ppr_co p co@(TyConAppCo tc [_,_])
  | tc `hasKey` funTyConKey = ppr_fun_co p co

ppr_co p (TyConAppCo tc cos) = pprTcApp p ppr_co tc cos
ppr_co p (AppCo co1 co2)     = maybeParen p TyConPrec $
                               pprCo co1 <+> ppr_co TyConPrec co2
ppr_co p co@(ForAllCo {})    = ppr_forall_co p co
ppr_co _ (CoVarCo cv)        = parenSymOcc (getOccName cv) (ppr cv)
ppr_co p (AxiomInstCo con index cos)
  = pprPrefixApp p (ppr (getName con) <> brackets (ppr index))
                   (map (ppr_co TyConPrec) cos)

ppr_co p co@(TransCo {}) = maybeParen p FunPrec $
                           case trans_co_list co [] of
                             []       -> panic "ppr_co"
                             (co:cos) -> sep ( ppr_co FunPrec co
                                             : [ char ';' <+> ppr_co FunPrec co | co <- cos])
ppr_co p (InstCo co ty) = maybeParen p TyConPrec $
                          pprParendCo co <> ptext (sLit "@") <> pprType ty

ppr_co p (UnsafeCo ty1 ty2) = pprPrefixApp p (ptext (sLit "UnsafeCo"))
                                            [pprParendType ty1, pprParendType ty2]
ppr_co p (SymCo co)         = pprPrefixApp p (ptext (sLit "Sym")) [pprParendCo co]
ppr_co p (NthCo n co)       = pprPrefixApp p (ptext (sLit "Nth:") <> int n) [pprParendCo co]
ppr_co p (LRCo sel co)      = pprPrefixApp p (ppr sel) [pprParendCo co]

trans_co_list :: Coercion -> [Coercion] -> [Coercion]
trans_co_list (TransCo co1 co2) cos = trans_co_list co1 (trans_co_list co2 cos)
trans_co_list co                cos = co : cos

instance Outputable LeftOrRight where
  ppr CLeft  = ptext (sLit "Left")
  ppr CRight = ptext (sLit "Right")

ppr_fun_co :: Prec -> Coercion -> SDoc
ppr_fun_co p co = pprArrowChain p (split co)
  where
    split :: Coercion -> [SDoc]
    split (TyConAppCo f
[arg,res])
      | f `hasKey` funTyConKey
      = ppr_co FunPrec arg : split res
    split co = [ppr_co TopPrec co]

ppr_forall_co :: Prec -> Coercion -> SDoc
ppr_forall_co p ty
  = maybeParen p FunPrec $
    sep [pprForAll tvs, ppr_co TopPrec rho]
  where
    (tvs, rho) = split1 [] ty
    split1 tvs (ForAllCo tv ty) = split1 (tv:tvs) ty
    split1 tvs ty               = (reverse tvs, ty)
\end{code}

\begin{code}
pprCoAxiom :: CoAxiom br -> SDoc
pprCoAxiom ax@(CoAxiom { co_ax_tc = tc, co_ax_branches = branches })
  = hang (ptext (sLit "axiom") <+> ppr ax <+> dcolon)
       2 (vcat (map (pprCoAxBranch tc) $ fromBranchList branches))

pprCoAxBranch :: TyCon -> CoAxBranch -> SDoc
pprCoAxBranch fam_tc (CoAxBranch { cab_tvs = tvs
                                 , cab_lhs = lhs
                                 , cab_rhs = rhs })
  = hang (ifPprDebug (pprForAll tvs))
       2 (hang (pprTypeApp fam_tc lhs) 2 (equals <+> (ppr rhs)))

pprCoAxBranchHdr :: CoAxiom br -> BranchIndex -> SDoc
pprCoAxBranchHdr ax@(CoAxiom { co_ax_tc = fam_tc, co_ax_name = name }) index
  | CoAxBranch { cab_lhs = tys, cab_loc = loc } <- coAxiomNthBranch ax index
  = hang (pprTypeApp fam_tc tys)
       2 (ptext (sLit "-- Defined") <+> ppr_loc loc)
  where
    ppr_loc loc
      | isGoodSrcSpan loc
      = ptext (sLit "at") <+> ppr (srcSpanStart loc)
      | otherwise
      = ptext (sLit "in") <+>
        quotes (ppr (nameModule name))
\end{code}

%************************************************************************
%*                                                                      *
        Functions over Kinds
%*                                                                      *
%************************************************************************

\begin{code}
-- | This breaks a 'Coercion' with type @T A B C ~ T D E F@ into
-- a list of 'Coercion's of kinds @A ~ D@, @B ~ E@ and @C ~ F@.
Hence:
--
-- > decomposeCo 3 c = [nth 0 c, nth 1 c, nth 2 c]
decomposeCo :: Arity -> Coercion -> [Coercion]
decomposeCo arity co
  = [mkNthCo n co | n <- [0..(arity-1)] ]
           -- Remember, Nth is zero-indexed

-- | Attempts to obtain the coercion variable underlying a 'Coercion'
getCoVar_maybe :: Coercion -> Maybe CoVar
getCoVar_maybe (CoVarCo cv) = Just cv
getCoVar_maybe _            = Nothing

-- | Attempts to tease a coercion apart into a type constructor and the application
-- of a number of coercion arguments to that constructor
splitTyConAppCo_maybe :: Coercion -> Maybe (TyCon, [Coercion])
splitTyConAppCo_maybe (Refl ty)           = (fmap . second . map) Refl (splitTyConApp_maybe ty)
splitTyConAppCo_maybe (TyConAppCo tc cos) = Just (tc, cos)
splitTyConAppCo_maybe _                   = Nothing

splitAppCo_maybe :: Coercion -> Maybe (Coercion, Coercion)
-- ^ Attempt to take a coercion application apart.
splitAppCo_maybe (AppCo co1 co2) = Just (co1, co2)
splitAppCo_maybe (TyConAppCo tc cos)
  | isDecomposableTyCon tc || cos `lengthExceeds` tyConArity tc
  , Just (cos', co') <- snocView cos
  = Just (mkTyConAppCo tc cos', co')    -- Never create unsaturated type family apps!
       -- Use mkTyConAppCo to preserve the invariant
       -- that identity coercions are always represented by Refl
splitAppCo_maybe (Refl ty)
  | Just (ty1, ty2) <- splitAppTy_maybe ty
  = Just (Refl ty1, Refl ty2)
splitAppCo_maybe _ = Nothing

splitForAllCo_maybe :: Coercion -> Maybe (TyVar, Coercion)
splitForAllCo_maybe (ForAllCo tv co) = Just (tv, co)
splitForAllCo_maybe _                = Nothing

-------------------------------------------------------
-- and some coercion kind stuff

coVarKind :: CoVar -> (Type,Type)
coVarKind cv
 | Just (tc, [_kind,ty1,ty2]) <- splitTyConApp_maybe (varType cv)
 = ASSERT (tc `hasKey` eqPrimTyConKey)
   (ty1,ty2)
 | otherwise = panic "coVarKind, non coercion variable"

-- | Makes a coercion type from two types: the types whose equality
-- is proven by the relevant
'Coercion'
mkCoercionType :: Type -> Type -> Type
mkCoercionType = mkPrimEqPred

isReflCo :: Coercion -> Bool
isReflCo (Refl {}) = True
isReflCo _         = False

isReflCo_maybe :: Coercion -> Maybe Type
isReflCo_maybe (Refl ty) = Just ty
isReflCo_maybe _         = Nothing
\end{code}

%************************************************************************
%*                                                                      *
            Building coercions
%*                                                                      *
%************************************************************************

\begin{code}
mkCoVarCo :: CoVar -> Coercion
-- cv :: s ~# t
mkCoVarCo cv
  | ty1 `eqType` ty2 = Refl ty1
  | otherwise        = CoVarCo cv
  where
    (ty1, ty2) = ASSERT( isCoVar cv ) coVarKind cv

mkReflCo :: Type -> Coercion
mkReflCo = Refl

mkAxInstCo :: CoAxiom br -> Int -> [Type] -> Coercion
-- mkAxInstCo can legitimately be called over-saturated;
-- i.e. with more type arguments than the coercion requires
mkAxInstCo ax index tys
  | arity == n_tys = AxiomInstCo ax_br index rtys
  | otherwise      = ASSERT( arity < n_tys )
                     foldl AppCo (AxiomInstCo ax_br index (take arity rtys))
                                 (drop arity rtys)
  where
    n_tys = length tys
    arity = coAxiomArity ax index
    rtys  = map Refl tys
    ax_br = toBranchedAxiom ax

-- to be used only with unbranched axioms
mkUnbranchedAxInstCo :: CoAxiom Unbranched -> [Type] -> Coercion
mkUnbranchedAxInstCo ax tys
  = mkAxInstCo ax 0 tys

mkAxInstLHS, mkAxInstRHS :: CoAxiom br -> BranchIndex -> [Type] -> Type
-- Instantiate the axiom with specified types,
-- returning the instantiated RHS
-- A companion to mkAxInstCo:
--    mkAxInstRHS ax index tys = snd (coercionKind (mkAxInstCo ax index tys))
mkAxInstLHS ax index tys
  | CoAxBranch { cab_tvs = tvs, cab_lhs = lhs } <- coAxiomNthBranch ax index
  , (tys1, tys2) <- splitAtList tvs tys
  = ASSERT( tvs `equalLength` tys1 )
    mkTyConApp (coAxiomTyCon ax) (substTysWith tvs tys1 lhs ++ tys2)

mkAxInstRHS ax index tys
  | CoAxBranch { cab_tvs = tvs, cab_rhs = rhs } <-
coAxiomNthBranch ax index\n , (tys1, tys2) <- splitAtList tvs tys\n = ASSERT( tvs `equalLength` tys1 ) \n mkAppTys (substTyWith tvs tys1 rhs) tys2\n\nmkUnbranchedAxInstRHS :: CoAxiom Unbranched -> [Type] -> Type\nmkUnbranchedAxInstRHS ax = mkAxInstRHS ax 0\n\n-- | Apply a 'Coercion' to another 'Coercion'.\nmkAppCo :: Coercion -> Coercion -> Coercion\nmkAppCo (Refl ty1) (Refl ty2) = Refl (mkAppTy ty1 ty2)\nmkAppCo (Refl (TyConApp tc tys)) co = TyConAppCo tc (map Refl tys ++ [co])\nmkAppCo (TyConAppCo tc cos) co = TyConAppCo tc (cos ++ [co])\nmkAppCo co1 co2 = AppCo co1 co2\n-- Note, mkAppCo is careful to maintain invariants regarding\n-- where Refl constructors appear; see the comments in the definition\n-- of Coercion and the Note [Refl invariant] in types\/TypeRep.lhs.\n\n-- | Applies multiple 'Coercion's to another 'Coercion', from left to right.\n-- See also 'mkAppCo'\nmkAppCos :: Coercion -> [Coercion] -> Coercion\nmkAppCos co1 tys = foldl mkAppCo co1 tys\n\n-- | Apply a type constructor to a list of coercions.\nmkTyConAppCo :: TyCon -> [Coercion] -> Coercion\nmkTyConAppCo tc cos\n\t -- Expand type synonyms\n | Just (tv_co_prs, rhs_ty, leftover_cos) <- tcExpandTyCon_maybe tc cos\n = mkAppCos (liftCoSubst tv_co_prs rhs_ty) leftover_cos\n\n | Just tys <- traverse isReflCo_maybe cos \n = Refl (mkTyConApp tc tys)\t-- See Note [Refl invariant]\n\n | otherwise = TyConAppCo tc cos\n\n-- | Make a function 'Coercion' between two other 'Coercion's\nmkFunCo :: Coercion -> Coercion -> Coercion\nmkFunCo co1 co2 = mkTyConAppCo funTyCon [co1, co2]\n\n-- | Make a 'Coercion' which binds a variable within an inner 'Coercion'\nmkForAllCo :: Var -> Coercion -> Coercion\n-- note that a TyVar should be used here, not a CoVar (nor a TcTyVar)\nmkForAllCo tv (Refl ty) = ASSERT( isTyVar tv ) Refl (mkForAllTy tv ty)\nmkForAllCo tv co = ASSERT ( isTyVar tv ) ForAllCo tv co\n\n-------------------------------\n\n-- | Create a symmetric version of the given 'Coercion' that asserts\n-- 
equality between the same types but in the other "direction", so
-- a coercion of kind @t1 ~ t2@ becomes one of kind @t2 ~ t1@.
mkSymCo :: Coercion -> Coercion

-- Do a few simple optimizations, but don't bother pushing occurrences
-- of symmetry to the leaves; the optimizer will take care of that.
mkSymCo co@(Refl {})       = co
mkSymCo (UnsafeCo ty1 ty2) = UnsafeCo ty2 ty1
mkSymCo (SymCo co)         = co
mkSymCo co                 = SymCo co

-- | Create a new 'Coercion' by composing the two given 'Coercion's transitively.
mkTransCo :: Coercion -> Coercion -> Coercion
mkTransCo (Refl _) co = co
mkTransCo co (Refl _) = co
mkTransCo co1 co2     = TransCo co1 co2

mkNthCo :: Int -> Coercion -> Coercion
mkNthCo n (Refl ty) = ASSERT( ok_tc_app ty n )
                      Refl (tyConAppArgN n ty)
mkNthCo n co        = ASSERT( ok_tc_app _ty1 n && ok_tc_app _ty2 n )
                      NthCo n co
                    where
                      Pair _ty1 _ty2 = coercionKind co

mkLRCo :: LeftOrRight -> Coercion -> Coercion
mkLRCo lr (Refl ty) = Refl (pickLR lr (splitAppTy ty))
mkLRCo lr co        = LRCo lr co

ok_tc_app :: Type -> Int -> Bool
ok_tc_app ty n = case splitTyConApp_maybe ty of
                   Just (_, tys) -> tys `lengthExceeds` n
                   Nothing       -> False

-- | Instantiates a 'Coercion' with a 'Type' argument.
mkInstCo :: Coercion -> Type -> Coercion
mkInstCo co ty = InstCo co ty

-- | Manufacture a coercion from thin air.  Needless to say, this is
--   not usually safe, but it is used when we know we are dealing with
--   bottom, which is one case in which it is safe.  This is also used
--   to implement the @unsafeCoerce#@ primitive.
Optimise by pushing
-- down through type constructors.
mkUnsafeCo :: Type -> Type -> Coercion
mkUnsafeCo ty1 ty2 | ty1 `eqType` ty2 = Refl ty1
mkUnsafeCo (TyConApp tc1 tys1) (TyConApp tc2 tys2)
  | tc1 == tc2
  = mkTyConAppCo tc1 (zipWith mkUnsafeCo tys1 tys2)

mkUnsafeCo (FunTy a1 r1) (FunTy a2 r2)
  = mkFunCo (mkUnsafeCo a1 a2) (mkUnsafeCo r1 r2)

mkUnsafeCo ty1 ty2 = UnsafeCo ty1 ty2

-- See note [Newtype coercions] in TyCon

-- | Create a coercion constructor (axiom) suitable for the given
--   newtype 'TyCon'. The 'Name' should be that of a new coercion
--   'CoAxiom', the 'TyVar's the arguments expected by the @newtype@ and
--   the type the appropriate right hand side of the @newtype@, with
--   the free variables a subset of those 'TyVar's.
mkNewTypeCo :: Name -> TyCon -> [TyVar] -> Type -> CoAxiom Unbranched
mkNewTypeCo name tycon tvs rhs_ty
  = CoAxiom { co_ax_unique   = nameUnique name
            , co_ax_name     = name
            , co_ax_implicit = True  -- See Note [Implicit axioms] in TyCon
            , co_ax_tc       = tycon
            , co_ax_branches = FirstBranch branch }
  where branch = CoAxBranch { cab_loc = getSrcSpan name
                            , cab_tvs = tvs
                            , cab_lhs = mkTyVarTys tvs
                            , cab_rhs = rhs_ty }

mkPiCos :: [Var] -> Coercion -> Coercion
mkPiCos vs co = foldr mkPiCo co vs

mkPiCo :: Var -> Coercion -> Coercion
mkPiCo v co | isTyVar v = mkForAllCo v co
            | otherwise = mkFunCo (mkReflCo (varType v)) co

mkCoCast :: Coercion -> Coercion -> Coercion
-- (mkCoCast (c :: s1 ~# t1) (g :: (s1 ~# t1) ~# (s2 ~# t2))) :: s2 ~# t2
mkCoCast c g
  = mkSymCo g1 `mkTransCo` c `mkTransCo` g2
  where
       -- g  :: (s1 ~# t1) ~# (s2 ~# t2)
       -- g1 :: s1 ~# s2
       -- g2 :: t1 ~# t2
    [_reflk, g1, g2] = decomposeCo 3 g
       -- Remember, (~#) :: forall k.
k -> k -> *\n -- so it takes *three* arguments, not two\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Newtypes\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\ninstNewTyCon_maybe :: TyCon -> [Type] -> Maybe (Type, Coercion)\n-- ^ If @co :: T ts ~ rep_ty@ then:\n--\n-- > instNewTyCon_maybe T ts = Just (rep_ty, co)\n-- Checks for a newtype, and for being saturated\ninstNewTyCon_maybe tc tys\n | Just (tvs, ty, co_tc) <- unwrapNewTyCon_maybe tc -- Check for newtype\n , tys `lengthIs` tyConArity tc -- Check saturated\n = Just (substTyWith tvs tys ty, mkUnbranchedAxInstCo co_tc tys)\n | otherwise\n = Nothing\n\nsplitNewTypeRepCo_maybe :: Type -> Maybe (Type, Coercion) \n-- ^ Sometimes we want to look through a @newtype@ and get its associated coercion.\n-- This function only strips *one layer* of @newtype@ off, so the caller will usually call\n-- itself recursively. If\n--\n-- > splitNewTypeRepCo_maybe ty = Just (ty', co)\n--\n-- then @co : ty ~ ty'@. 
The function returns @Nothing@ for non-@newtypes@, \n-- or unsaturated applications\nsplitNewTypeRepCo_maybe ty \n | Just ty' <- coreView ty \n = splitNewTypeRepCo_maybe ty'\nsplitNewTypeRepCo_maybe (TyConApp tc tys)\n = instNewTyCon_maybe tc tys\nsplitNewTypeRepCo_maybe _\n = Nothing\n\ntopNormaliseNewType :: Type -> Maybe (Type, Coercion)\ntopNormaliseNewType ty\n = case topNormaliseNewTypeX emptyNameSet ty of\n Just (_, co, ty) -> Just (ty, co)\n Nothing -> Nothing\n\ntopNormaliseNewTypeX :: NameSet -> Type -> Maybe (NameSet, Coercion, Type)\ntopNormaliseNewTypeX rec_nts ty\n | Just ty' <- coreView ty -- Expand predicates and synonyms\n = topNormaliseNewTypeX rec_nts ty'\n\ntopNormaliseNewTypeX rec_nts (TyConApp tc tys)\n | Just (rep_ty, co) <- instNewTyCon_maybe tc tys\n , not (tc_name `elemNameSet` rec_nts) -- See Note [Expanding newtypes] in Type\n = case topNormaliseNewTypeX rec_nts' rep_ty of\n Nothing -> Just (rec_nts', co, rep_ty)\n Just (rec_nts', co', rep_ty') -> Just (rec_nts', co `mkTransCo` co', rep_ty')\n where\n tc_name = tyConName tc\n rec_nts' | isRecursiveTyCon tc = addOneToNameSet rec_nts tc_name\n | otherwise\t = rec_nts\n\ntopNormaliseNewTypeX _ _ = Nothing\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Equality of coercions\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | Determines syntactic equality of coercions\ncoreEqCoercion :: Coercion -> Coercion -> Bool\ncoreEqCoercion co1 co2 = coreEqCoercion2 rn_env co1 co2\n where rn_env = mkRnEnv2 (mkInScopeSet (tyCoVarsOfCo co1 `unionVarSet` tyCoVarsOfCo co2))\n\ncoreEqCoercion2 :: RnEnv2 -> Coercion -> Coercion -> Bool\ncoreEqCoercion2 env (Refl ty1) (Refl ty2) = eqTypeX env ty1 ty2\ncoreEqCoercion2 env (TyConAppCo tc1 cos1) (TyConAppCo tc2 cos2)\n = tc1 == tc2 && all2 (coreEqCoercion2 env) cos1 cos2\n\ncoreEqCoercion2 env (AppCo co11 co12) (AppCo co21 co22)\n = 
coreEqCoercion2 env co11 co21 && coreEqCoercion2 env co12 co22\n\ncoreEqCoercion2 env (ForAllCo v1 co1) (ForAllCo v2 co2)\n = coreEqCoercion2 (rnBndr2 env v1 v2) co1 co2\n\ncoreEqCoercion2 env (CoVarCo cv1) (CoVarCo cv2)\n = rnOccL env cv1 == rnOccR env cv2\n\ncoreEqCoercion2 env (AxiomInstCo con1 ind1 cos1) (AxiomInstCo con2 ind2 cos2)\n = con1 == con2\n && ind1 == ind2\n && all2 (coreEqCoercion2 env) cos1 cos2\n\ncoreEqCoercion2 env (UnsafeCo ty11 ty12) (UnsafeCo ty21 ty22)\n = eqTypeX env ty11 ty21 && eqTypeX env ty12 ty22\n\ncoreEqCoercion2 env (SymCo co1) (SymCo co2)\n = coreEqCoercion2 env co1 co2\n\ncoreEqCoercion2 env (TransCo co11 co12) (TransCo co21 co22)\n = coreEqCoercion2 env co11 co21 && coreEqCoercion2 env co12 co22\n\ncoreEqCoercion2 env (NthCo d1 co1) (NthCo d2 co2)\n = d1 == d2 && coreEqCoercion2 env co1 co2\ncoreEqCoercion2 env (LRCo d1 co1) (LRCo d2 co2)\n = d1 == d2 && coreEqCoercion2 env co1 co2\n\ncoreEqCoercion2 env (InstCo co1 ty1) (InstCo co2 ty2)\n = coreEqCoercion2 env co1 co2 && eqTypeX env ty1 ty2\n\ncoreEqCoercion2 _ _ _ = False\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Substitution of coercions\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | A substitution of 'Coercion's for 'CoVar's (OR 'TyVar's, when\n-- doing a \\\"lifting\\\" substitution)\ntype CvSubstEnv = VarEnv Coercion\n\nemptyCvSubstEnv :: CvSubstEnv\nemptyCvSubstEnv = emptyVarEnv\n\ndata CvSubst \t\t\n = CvSubst InScopeSet \t-- The in-scope type variables\n\t TvSubstEnv\t-- Substitution of types\n CvSubstEnv -- Substitution of coercions\n\ninstance Outputable CvSubst where\n ppr (CvSubst ins tenv cenv)\n = brackets $ sep[ ptext (sLit \"CvSubst\"),\n\t\t nest 2 (ptext (sLit \"In scope:\") <+> ppr ins), \n\t\t nest 2 (ptext (sLit \"Type env:\") <+> ppr tenv),\n\t\t nest 2 (ptext (sLit \"Coercion env:\") <+> ppr cenv) ]\n\nemptyCvSubst :: 
CvSubst\nemptyCvSubst = CvSubst emptyInScopeSet emptyVarEnv emptyVarEnv\n\nisEmptyCvSubst :: CvSubst -> Bool\nisEmptyCvSubst (CvSubst _ tenv cenv) = isEmptyVarEnv tenv && isEmptyVarEnv cenv\n\ngetCvInScope :: CvSubst -> InScopeSet\ngetCvInScope (CvSubst in_scope _ _) = in_scope\n\nzapCvSubstEnv :: CvSubst -> CvSubst\nzapCvSubstEnv (CvSubst in_scope _ _) = CvSubst in_scope emptyVarEnv emptyVarEnv\n\ncvTvSubst :: CvSubst -> TvSubst\ncvTvSubst (CvSubst in_scope tvs _) = TvSubst in_scope tvs\n\ntvCvSubst :: TvSubst -> CvSubst\ntvCvSubst (TvSubst in_scope tenv) = CvSubst in_scope tenv emptyCvSubstEnv\n\nextendTvSubst :: CvSubst -> TyVar -> Type -> CvSubst\nextendTvSubst (CvSubst in_scope tenv cenv) tv ty\n = CvSubst in_scope (extendVarEnv tenv tv ty) cenv\n\nextendCvSubstAndInScope :: CvSubst -> CoVar -> Coercion -> CvSubst\n-- Also extends the in-scope set\nextendCvSubstAndInScope (CvSubst in_scope tenv cenv) cv co\n = CvSubst (in_scope `extendInScopeSetSet` tyCoVarsOfCo co)\n tenv\n (extendVarEnv cenv cv co)\n\nsubstCoVarBndr :: CvSubst -> CoVar -> (CvSubst, CoVar)\nsubstCoVarBndr subst@(CvSubst in_scope tenv cenv) old_var\n = ASSERT( isCoVar old_var )\n (CvSubst (in_scope `extendInScopeSet` new_var) tenv new_cenv, new_var)\n where\n -- When we substitute (co :: t1 ~ t2) we may get the identity (co :: t ~ t)\n -- In that case, mkCoVarCo will return a ReflCoercion, and\n -- we want to substitute that (not new_var) for old_var\n new_co = mkCoVarCo new_var\n no_change = new_var == old_var && not (isReflCo new_co)\n\n new_cenv | no_change = delVarEnv cenv old_var\n | otherwise = extendVarEnv cenv old_var new_co\n\n new_var = uniqAway in_scope subst_old_var\n subst_old_var = mkCoVar (varName old_var) (substTy subst (varType old_var))\n\t\t -- It's important to do the substitution for coercions,\n\t\t -- because they can have free type variables\n\nsubstTyVarBndr :: CvSubst -> TyVar -> (CvSubst, TyVar)\nsubstTyVarBndr (CvSubst in_scope tenv cenv) old_var\n = case 
Type.substTyVarBndr (TvSubst in_scope tenv) old_var of\n (TvSubst in_scope' tenv', new_var) -> (CvSubst in_scope' tenv' cenv, new_var)\n\nmkCvSubst :: InScopeSet -> [(Var,Coercion)] -> CvSubst\nmkCvSubst in_scope prs = CvSubst in_scope Type.emptyTvSubstEnv (mkVarEnv prs)\n\nzipOpenCvSubst :: [Var] -> [Coercion] -> CvSubst\nzipOpenCvSubst vs cos\n | debugIsOn && (length vs \/= length cos)\n = pprTrace \"zipOpenCvSubst\" (ppr vs $$ ppr cos) emptyCvSubst\n | otherwise \n = CvSubst (mkInScopeSet (tyCoVarsOfCos cos)) emptyTvSubstEnv (zipVarEnv vs cos)\n\nsubstCoWithTy :: InScopeSet -> TyVar -> Type -> Coercion -> Coercion\nsubstCoWithTy in_scope tv ty = substCoWithTys in_scope [tv] [ty]\n\nsubstCoWithTys :: InScopeSet -> [TyVar] -> [Type] -> Coercion -> Coercion\nsubstCoWithTys in_scope tvs tys co\n | debugIsOn && (length tvs \/= length tys)\n = pprTrace \"substCoWithTys\" (ppr tvs $$ ppr tys) co\n | otherwise \n = ASSERT( length tvs == length tys )\n substCo (CvSubst in_scope (zipVarEnv tvs tys) emptyVarEnv) co\n\n-- | Substitute within a 'Coercion'\nsubstCo :: CvSubst -> Coercion -> Coercion\nsubstCo subst co | isEmptyCvSubst subst = co\n | otherwise = subst_co subst co\n\n-- | Substitute within several 'Coercion's\nsubstCos :: CvSubst -> [Coercion] -> [Coercion]\nsubstCos subst cos | isEmptyCvSubst subst = cos\n | otherwise = map (substCo subst) cos\n\nsubstTy :: CvSubst -> Type -> Type\nsubstTy subst = Type.substTy (cvTvSubst subst)\n\nsubst_co :: CvSubst -> Coercion -> Coercion\nsubst_co subst co\n = go co\n where\n go_ty :: Type -> Type\n go_ty = Coercion.substTy subst\n\n go :: Coercion -> Coercion\n go (Refl ty) = Refl $! go_ty ty\n go (TyConAppCo tc cos) = let args = map go cos\n in args `seqList` TyConAppCo tc args\n go (AppCo co1 co2) = mkAppCo (go co1) $! go co2\n go (ForAllCo tv co) = case substTyVarBndr subst tv of\n (subst', tv') ->\n ForAllCo tv' $! 
subst_co subst' co\n go (CoVarCo cv) = substCoVar subst cv\n go (AxiomInstCo con ind cos) = AxiomInstCo con ind $! map go cos\n go (UnsafeCo ty1 ty2) = (UnsafeCo $! go_ty ty1) $! go_ty ty2\n go (SymCo co) = mkSymCo (go co)\n go (TransCo co1 co2) = mkTransCo (go co1) (go co2)\n go (NthCo d co) = mkNthCo d (go co)\n go (LRCo lr co) = mkLRCo lr (go co)\n go (InstCo co ty) = mkInstCo (go co) $! go_ty ty\n\nsubstCoVar :: CvSubst -> CoVar -> Coercion\nsubstCoVar (CvSubst in_scope _ cenv) cv\n | Just co <- lookupVarEnv cenv cv = co\n | Just cv1 <- lookupInScope in_scope cv = ASSERT( isCoVar cv1 ) CoVarCo cv1\n | otherwise = WARN( True, ptext (sLit \"substCoVar not in scope\") <+> ppr cv $$ ppr in_scope)\n ASSERT( isCoVar cv ) CoVarCo cv\n\nsubstCoVars :: CvSubst -> [CoVar] -> [Coercion]\nsubstCoVars subst cvs = map (substCoVar subst) cvs\n\nlookupTyVar :: CvSubst -> TyVar -> Maybe Type\nlookupTyVar (CvSubst _ tenv _) tv = lookupVarEnv tenv tv\n\nlookupCoVar :: CvSubst -> Var -> Maybe Coercion\nlookupCoVar (CvSubst _ _ cenv) v = lookupVarEnv cenv v\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n \"Lifting\" substitution\n\t [(TyVar,Coercion)] -> Type -> Coercion\n%* *\n%************************************************************************\n\n\\begin{code}\ndata LiftCoSubst = LCS InScopeSet LiftCoEnv\n\ntype LiftCoEnv = VarEnv Coercion\n -- Maps *type variables* to *coercions*\n -- That's the whole point of this function!\n\nliftCoSubstWith :: [TyVar] -> [Coercion] -> Type -> Coercion\nliftCoSubstWith tvs cos ty\n = liftCoSubst (zipEqual \"liftCoSubstWith\" tvs cos) ty\n\nliftCoSubst :: [(TyVar,Coercion)] -> Type -> Coercion\nliftCoSubst prs ty\n | null prs = Refl ty\n | otherwise = ty_co_subst (LCS (mkInScopeSet (tyCoVarsOfCos (map snd prs)))\n (mkVarEnv prs)) ty\n\n-- | The \\\"lifting\\\" operation which substitutes coercions for type\n-- variables in a type to produce a coercion.\n--\n-- For the 
inverse operation, see 'liftCoMatch' \nty_co_subst :: LiftCoSubst -> Type -> Coercion\nty_co_subst subst ty\n = go ty\n where\n go (TyVarTy tv) = liftCoSubstTyVar subst tv `orElse` Refl (TyVarTy tv)\n \t\t\t -- A type variable from a non-cloned forall\n\t\t\t -- won't be in the substitution\n go (AppTy ty1 ty2) = mkAppCo (go ty1) (go ty2)\n go (TyConApp tc tys) = mkTyConAppCo tc (map go tys)\n -- IA0_NOTE: Do we need to do anything\n -- about kind instantiations? I don't think\n -- so. see Note [Kind coercions]\n go (FunTy ty1 ty2) = mkFunCo (go ty1) (go ty2)\n go (ForAllTy v ty) = mkForAllCo v' $! (ty_co_subst subst' ty)\n where\n (subst', v') = liftCoSubstTyVarBndr subst v\n go ty@(LitTy {}) = mkReflCo ty\n\nliftCoSubstTyVar :: LiftCoSubst -> TyVar -> Maybe Coercion\nliftCoSubstTyVar (LCS _ cenv) tv = lookupVarEnv cenv tv \n\nliftCoSubstTyVarBndr :: LiftCoSubst -> TyVar -> (LiftCoSubst, TyVar)\nliftCoSubstTyVarBndr (LCS in_scope cenv) old_var\n = (LCS (in_scope `extendInScopeSet` new_var) new_cenv, new_var)\t\t\n where\n new_cenv | no_change = delVarEnv cenv old_var\n\t | otherwise = extendVarEnv cenv old_var (Refl (TyVarTy new_var))\n\n no_change = new_var == old_var\n new_var = uniqAway in_scope old_var\n\\end{code}\n\n\\begin{code}\n-- | 'liftCoMatch' is sort of inverse to 'liftCoSubst'. 
In particular, if\n-- @liftCoMatch vars ty co == Just s@, then @tyCoSubst s ty == co@.\n-- That is, it matches a type against a coercion of the same\n-- \"shape\", and returns a lifting substitution which could have been\n-- used to produce the given coercion from the given type.\nliftCoMatch :: TyVarSet -> Type -> Coercion -> Maybe LiftCoSubst\nliftCoMatch tmpls ty co \n = case ty_co_match menv emptyVarEnv ty co of\n Just cenv -> Just (LCS in_scope cenv)\n Nothing -> Nothing\n where\n menv = ME { me_tmpls = tmpls, me_env = mkRnEnv2 in_scope }\n in_scope = mkInScopeSet (tmpls `unionVarSet` tyCoVarsOfCo co)\n -- Like tcMatchTy, assume all the interesting variables \n -- in ty are in tmpls\n\n-- | 'ty_co_match' does all the actual work for 'liftCoMatch'.\nty_co_match :: MatchEnv -> LiftCoEnv -> Type -> Coercion -> Maybe LiftCoEnv\nty_co_match menv subst ty co \n | Just ty' <- coreView ty = ty_co_match menv subst ty' co\n\n -- Match a type variable against a non-refl coercion\nty_co_match menv cenv (TyVarTy tv1) co\n | Just co1' <- lookupVarEnv cenv tv1' -- tv1' is already bound to co1\n = if coreEqCoercion2 (nukeRnEnvL rn_env) co1' co\n then Just cenv\n else Nothing -- no match since tv1 matches two different coercions\n\n | tv1' `elemVarSet` me_tmpls menv -- tv1' is a template var\n = if any (inRnEnvR rn_env) (varSetElems (tyCoVarsOfCo co))\n then Nothing -- occurs check failed\n else return (extendVarEnv cenv tv1' co)\n -- BAY: I don't think we need to do any kind matching here yet\n -- (compare 'match'), but we probably will when moving to SHE.\n\n | otherwise -- tv1 is not a template ty var, so the only thing it\n -- can match is a reflexivity coercion for itself.\n\t\t -- But that case is dealt with already\n = Nothing\n\n where\n rn_env = me_env menv\n tv1' = rnOccL rn_env tv1\n\nty_co_match menv subst (AppTy ty1 ty2) co\n | Just (co1, co2) <- splitAppCo_maybe co\t-- c.f. 
Unify.match on AppTy\n = do { subst' <- ty_co_match menv subst ty1 co1 \n ; ty_co_match menv subst' ty2 co2 }\n\nty_co_match menv subst (TyConApp tc1 tys) (TyConAppCo tc2 cos)\n | tc1 == tc2 = ty_co_matches menv subst tys cos\n\nty_co_match menv subst (FunTy ty1 ty2) (TyConAppCo tc cos)\n | tc == funTyCon = ty_co_matches menv subst [ty1,ty2] cos\n\nty_co_match menv subst (ForAllTy tv1 ty) (ForAllCo tv2 co) \n = ty_co_match menv' subst ty co\n where\n menv' = menv { me_env = rnBndr2 (me_env menv) tv1 tv2 }\n\nty_co_match menv subst ty co\n | Just co' <- pushRefl co = ty_co_match menv subst ty co'\n | otherwise = Nothing\n\nty_co_matches :: MatchEnv -> LiftCoEnv -> [Type] -> [Coercion] -> Maybe LiftCoEnv\nty_co_matches menv = matchList (ty_co_match menv)\n\npushRefl :: Coercion -> Maybe Coercion\npushRefl (Refl (AppTy ty1 ty2)) = Just (AppCo (Refl ty1) (Refl ty2))\npushRefl (Refl (FunTy ty1 ty2)) = Just (TyConAppCo funTyCon [Refl ty1, Refl ty2])\npushRefl (Refl (TyConApp tc tys)) = Just (TyConAppCo tc (map Refl tys))\npushRefl (Refl (ForAllTy tv ty)) = Just (ForAllCo tv (Refl ty))\npushRefl _ = Nothing\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Sequencing on coercions\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\nseqCo :: Coercion -> ()\nseqCo (Refl ty) = seqType ty\nseqCo (TyConAppCo tc cos) = tc `seq` seqCos cos\nseqCo (AppCo co1 co2) = seqCo co1 `seq` seqCo co2\nseqCo (ForAllCo tv co) = tv `seq` seqCo co\nseqCo (CoVarCo cv) = cv `seq` ()\nseqCo (AxiomInstCo con ind cos) = con `seq` ind `seq` seqCos cos\nseqCo (UnsafeCo ty1 ty2) = seqType ty1 `seq` seqType ty2\nseqCo (SymCo co) = seqCo co\nseqCo (TransCo co1 co2) = seqCo co1 `seq` seqCo co2\nseqCo (NthCo _ co) = seqCo co\nseqCo (LRCo _ co) = seqCo co\nseqCo (InstCo co ty) = seqCo co `seq` seqType ty\n\nseqCos :: [Coercion] -> ()\nseqCos [] = ()\nseqCos (co:cos) = seqCo co 
`seq` seqCos cos\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\t The kind of a type, and of a coercion\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\ncoercionType :: Coercion -> Type\ncoercionType co = case coercionKind co of\n Pair ty1 ty2 -> mkCoercionType ty1 ty2\n\n------------------\n-- | If it is the case that\n--\n-- > c :: (t1 ~ t2)\n--\n-- i.e. the kind of @c@ relates @t1@ and @t2@, then @coercionKind c = Pair t1 t2@.\n\ncoercionKind :: Coercion -> Pair Type \ncoercionKind co = go co\n where \n go (Refl ty) = Pair ty ty\n go (TyConAppCo tc cos) = mkTyConApp tc <$> (sequenceA $ map go cos)\n go (AppCo co1 co2) = mkAppTy <$> go co1 <*> go co2\n go (ForAllCo tv co) = mkForAllTy tv <$> go co\n go (CoVarCo cv) = toPair $ coVarKind cv\n go (AxiomInstCo ax ind cos)\n | CoAxBranch { cab_tvs = tvs, cab_lhs = lhs, cab_rhs = rhs } <- coAxiomNthBranch ax ind\n , Pair tys1 tys2 <- sequenceA (map go cos)\n = ASSERT( cos `equalLength` tvs ) -- Invariant of AxiomInstCo: cos should \n -- exactly saturate the axiom branch\n Pair (substTyWith tvs tys1 (mkTyConApp (coAxiomTyCon ax) lhs))\n (substTyWith tvs tys2 rhs)\n go (UnsafeCo ty1 ty2) = Pair ty1 ty2\n go (SymCo co) = swap $ go co\n go (TransCo co1 co2) = Pair (pFst $ go co1) (pSnd $ go co2)\n go (NthCo d co) = tyConAppArgN d <$> go co\n go (LRCo lr co) = (pickLR lr . 
splitAppTy) <$> go co\n    go (InstCo aco ty)    = go_app aco [ty]\n\n    go_app :: Coercion -> [Type] -> Pair Type\n      -- Collect up all the arguments and apply all at once\n      -- See Note [Nested InstCos]\n    go_app (InstCo co ty) tys = go_app co (ty:tys)\n    go_app co              tys = (`applyTys` tys) <$> go co\n\n-- | Apply 'coercionKind' to multiple 'Coercion's\ncoercionKinds :: [Coercion] -> Pair [Type]\ncoercionKinds tys = sequenceA $ map coercionKind tys\n\\end{code}\n\nNote [Nested InstCos]\n~~~~~~~~~~~~~~~~~~~~~\nIn Trac #5631 we found that 70% of the entire compilation time was\nbeing spent in coercionKind! The reason was that we had\n   (g @ ty1 @ ty2 .. @ ty100)    -- The \"@s\" are InstCos\nwhere \n   g :: forall a1 a2 .. a100. phi\nIf we deal with the InstCos one at a time, we'll do this:\n   1.  Find the kind of (g @ ty1 .. @ ty99) : forall a100. phi'\n   2.  Substitute phi'[ ty100\/a100 ], a single tyvar->type subst\nBut this is a *quadratic* algorithm, and that blew up Trac #5631.\nSo it's very important to do the substitution simultaneously.\n\ncf Type.applyTys (which in fact we call here)\n\n\n\\begin{code}\napplyCo :: Type -> Coercion -> Type\n-- Gives the type of (e co) where e :: (a~b) => ty\napplyCo ty co | Just ty' <- coreView ty = applyCo ty' co\napplyCo (FunTy _ ty) _ = ty\napplyCo _            _ = panic \"applyCo\"\n\\end{code}\n\nNote [Kind coercions]\n~~~~~~~~~~~~~~~~~~~~~\nKind coercions are only of the form: Refl kind. They are only used to\ninstantiate kind polymorphic type constructors in TyConAppCo. 
Remember\nthat kind instantiation only happens with TyConApp, not AppTy.\n","avg_line_length":37.4481236203,"max_line_length":95,"alphanum_fraction":0.620215358} {"size":1415,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"> module Chapters.Ch07 where\n> import Euterpea\n\nChapter 7 - Qualified Types, Type Classes\n\nExercise 7.1 Prove that the instance of Music in class Eq satisfies the laws of\nits class.\n\nExercise 7.2\nWrite out appropriate instance declarations for the Color type in the classes \n\nExercise 7.3 Define a type class called Temporal whose members are types that can be\ninterpreted as having a temporal duration.\n\n\n> trill :: Int -> Dur -> Music Pitch -> Music Pitch\n> trill i sDur (Prim (Note tDur p)) =\n> if sDur >= tDur then note tDur p\n> else note sDur p :+: \n> trill (negate i) sDur \n> (note (tDur-sDur) (trans i p))\n> trill i d (Modify (Tempo r) m) = tempo r (trill i (d*r) m)\n> trill i d (Modify c m) = Modify c (trill i d m)\n> trill _ _ _ = \n> error \"trill: input must be a single note.\"\n> trill' :: Int -> Dur -> Music Pitch -> Music Pitch\n> trill' i sDur m = trill (negate i) sDur (transpose i m)\n> trilln :: Int -> Int -> Music Pitch -> Music Pitch\n> trilln i nTimes m = trill i (dur m \/ fromIntegral nTimes) m\n> trilln' :: Int -> Int -> Music Pitch -> Music Pitch\n> trilln' i nTimes m = trilln (negate i) nTimes (transpose i m)\n> roll :: Dur -> Music Pitch -> Music Pitch\n> rolln :: Int -> Music Pitch -> Music Pitch\n> \n> roll dur m = trill 0 dur m\n> rolln nTimes m = trilln 0 nTimes m\n","avg_line_length":38.2432432432,"max_line_length":83,"alphanum_fraction":0.6155477032} {"size":21525,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"%\n% (c) The University of Glasgow 2002-2006\n%\n\nByteCodeLink: Bytecode assembler and linker\n\n\\begin{code}\n{-# OPTIONS -optc-DNON_POSIX_SOURCE #-}\n{-# LANGUAGE BangPatterns #-}\n\nmodule ByteCodeAsm (\n        assembleBCOs, assembleBCO,\n\n        
CompiledByteCode(..),\n UnlinkedBCO(..), BCOPtr(..), BCONPtr(..), bcoFreeNames,\n SizedSeq, sizeSS, ssElts,\n iNTERP_STACK_CHECK_THRESH\n ) where\n\n#include \"HsVersions.h\"\n\nimport ByteCodeInstr\nimport ByteCodeItbls\n\nimport Name\nimport NameSet\nimport Literal\nimport TyCon\nimport PrimOp\nimport Constants\nimport FastString\nimport SMRep\nimport ClosureInfo -- CgRep stuff\nimport DynFlags\nimport Outputable\nimport Platform\n\nimport Control.Monad ( foldM )\nimport Control.Monad.ST ( runST )\n\nimport Data.Array.MArray\nimport Data.Array.Unboxed ( listArray )\nimport Data.Array.Base ( UArray(..) )\nimport Data.Array.Unsafe( castSTUArray )\n\nimport Foreign\nimport Data.Char ( ord )\nimport Data.List\nimport Data.Map (Map)\nimport qualified Data.Map as Map\n\nimport GHC.Base ( ByteArray#, MutableByteArray#, RealWorld )\n\n-- -----------------------------------------------------------------------------\n-- Unlinked BCOs\n\n-- CompiledByteCode represents the result of byte-code\n-- compiling a bunch of functions and data types\n\ndata CompiledByteCode\n = ByteCode [UnlinkedBCO] -- Bunch of interpretable bindings\n ItblEnv -- A mapping from DataCons to their itbls\n\ninstance Outputable CompiledByteCode where\n ppr (ByteCode bcos _) = ppr bcos\n\n\ndata UnlinkedBCO\n = UnlinkedBCO {\n unlinkedBCOName :: Name,\n unlinkedBCOArity :: Int,\n unlinkedBCOInstrs :: ByteArray#, -- insns\n unlinkedBCOBitmap :: ByteArray#, -- bitmap\n unlinkedBCOLits :: (SizedSeq BCONPtr), -- non-ptrs\n unlinkedBCOPtrs :: (SizedSeq BCOPtr) -- ptrs\n }\n\ndata BCOPtr\n = BCOPtrName Name\n | BCOPtrPrimOp PrimOp\n | BCOPtrBCO UnlinkedBCO\n | BCOPtrBreakInfo BreakInfo\n | BCOPtrArray (MutableByteArray# RealWorld)\n\ndata BCONPtr\n = BCONPtrWord Word\n | BCONPtrLbl FastString\n | BCONPtrItbl Name\n\n-- | Finds external references. 
Remember to remove the names\n-- defined by this group of BCOs themselves\nbcoFreeNames :: UnlinkedBCO -> NameSet\nbcoFreeNames bco\n = bco_refs bco `minusNameSet` mkNameSet [unlinkedBCOName bco]\n where\n bco_refs (UnlinkedBCO _ _ _ _ nonptrs ptrs)\n = unionManyNameSets (\n mkNameSet [ n | BCOPtrName n <- ssElts ptrs ] :\n mkNameSet [ n | BCONPtrItbl n <- ssElts nonptrs ] :\n map bco_refs [ bco | BCOPtrBCO bco <- ssElts ptrs ]\n )\n\ninstance Outputable UnlinkedBCO where\n ppr (UnlinkedBCO nm _arity _insns _bitmap lits ptrs)\n = sep [text \"BCO\", ppr nm, text \"with\",\n ppr (sizeSS lits), text \"lits\",\n ppr (sizeSS ptrs), text \"ptrs\" ]\n\n-- -----------------------------------------------------------------------------\n-- The bytecode assembler\n\n-- The object format for bytecodes is: 16 bits for the opcode, and 16\n-- for each field -- so the code can be considered a sequence of\n-- 16-bit ints. Each field denotes either a stack offset or number of\n-- items on the stack (eg SLIDE), and index into the pointer table (eg\n-- PUSH_G), an index into the literal table (eg PUSH_I\/D\/L), or a\n-- bytecode address in this BCO.\n\n-- Top level assembler fn.\nassembleBCOs :: DynFlags -> [ProtoBCO Name] -> [TyCon] -> IO CompiledByteCode\nassembleBCOs dflags proto_bcos tycons\n = do itblenv <- mkITbls tycons\n bcos <- mapM (assembleBCO dflags) proto_bcos\n return (ByteCode bcos itblenv)\n\nassembleBCO :: DynFlags -> ProtoBCO Name -> IO UnlinkedBCO\nassembleBCO dflags (ProtoBCO nm instrs bitmap bsize arity _origin _malloced)\n = let\n -- pass 1: collect up the offsets of the local labels.\n -- Remember that the first insn starts at offset\n -- sizeOf Word \/ sizeOf Word16\n -- since offset 0 (eventually) will hold the total # of insns.\n lableInitialOffset\n | wORD_SIZE_IN_BITS == 64 = 4\n | wORD_SIZE_IN_BITS == 32 = 2\n | otherwise = error \"wORD_SIZE_IN_BITS not 32 or 64?\"\n label_env = mkLabelEnv Map.empty lableInitialOffset instrs\n\n mkLabelEnv :: Map Word16 
Word -> Word -> [BCInstr]\n -> Map Word16 Word\n mkLabelEnv env _ [] = env\n mkLabelEnv env i_offset (i:is)\n = let new_env\n = case i of LABEL n -> Map.insert n i_offset env ; _ -> env\n in mkLabelEnv new_env (i_offset + instrSize16s i) is\n\n findLabel :: Word16 -> Word\n findLabel lab\n = case Map.lookup lab label_env of\n Just bco_offset -> bco_offset\n Nothing -> pprPanic \"assembleBCO.findLabel\" (ppr lab)\n in\n do -- pass 2: generate the instruction, ptr and nonptr bits\n insns <- return emptySS :: IO (SizedSeq Word16)\n lits <- return emptySS :: IO (SizedSeq BCONPtr)\n ptrs <- return emptySS :: IO (SizedSeq BCOPtr)\n let init_asm_state = (insns,lits,ptrs)\n (final_insns, final_lits, final_ptrs)\n <- mkBits dflags findLabel init_asm_state instrs\n\n let asm_insns = ssElts final_insns\n n_insns = sizeSS final_insns\n\n insns_arr = mkInstrArray lableInitialOffset n_insns asm_insns\n !insns_barr = case insns_arr of UArray _lo _hi _n barr -> barr\n\n bitmap_arr = mkBitmapArray bsize bitmap\n !bitmap_barr = case bitmap_arr of UArray _lo _hi _n barr -> barr\n\n let ul_bco = UnlinkedBCO nm arity insns_barr bitmap_barr final_lits final_ptrs\n\n -- 8 Aug 01: Finalisers aren't safe when attached to non-primitive\n -- objects, since they might get run too early. 
Disable this until\n -- we figure out what to do.\n -- when (notNull malloced) (addFinalizer ul_bco (mapM_ zonk malloced))\n\n return ul_bco\n -- where\n -- zonk ptr = do -- putStrLn (\"freeing malloc'd block at \" ++ show (A# a#))\n -- free ptr\n\nmkBitmapArray :: Word16 -> [StgWord] -> UArray Int StgWord\nmkBitmapArray bsize bitmap\n = listArray (0, length bitmap) (fromIntegral bsize : bitmap)\n\nmkInstrArray :: Word -> Word -> [Word16] -> UArray Word Word16\nmkInstrArray lableInitialOffset n_insns asm_insns\n = let size = lableInitialOffset + n_insns\n in listArray (0, size - 1) (largeArg size ++ asm_insns)\n\n-- instrs nonptrs ptrs\ntype AsmState = (SizedSeq Word16,\n SizedSeq BCONPtr,\n SizedSeq BCOPtr)\n\ndata SizedSeq a = SizedSeq !Word [a]\nemptySS :: SizedSeq a\nemptySS = SizedSeq 0 []\n\n-- Why are these two monadic???\naddToSS :: SizedSeq a -> a -> IO (SizedSeq a)\naddToSS (SizedSeq n r_xs) x = return (SizedSeq (n+1) (x:r_xs))\naddListToSS :: SizedSeq a -> [a] -> IO (SizedSeq a)\naddListToSS (SizedSeq n r_xs) xs\n = return (SizedSeq (n + genericLength xs) (reverse xs ++ r_xs))\n\nssElts :: SizedSeq a -> [a]\nssElts (SizedSeq _ r_xs) = reverse r_xs\n\nsizeSS :: SizedSeq a -> Word\nsizeSS (SizedSeq n _) = n\n\nsizeSS16 :: SizedSeq a -> Word16\nsizeSS16 (SizedSeq n _) = fromIntegral n\n\n-- Bring in all the bci_ bytecode constants.\n#include \"rts\/Bytecodes.h\"\n\nlargeArgInstr :: Word16 -> Word16\nlargeArgInstr bci = bci_FLAG_LARGE_ARGS .|. 
bci\n\nlargeArg :: Word -> [Word16]\nlargeArg w\n | wORD_SIZE_IN_BITS == 64\n = [fromIntegral (w `shiftR` 48),\n fromIntegral (w `shiftR` 32),\n fromIntegral (w `shiftR` 16),\n fromIntegral w]\n | wORD_SIZE_IN_BITS == 32\n = [fromIntegral (w `shiftR` 16),\n fromIntegral w]\n | otherwise = error \"wORD_SIZE_IN_BITS not 32 or 64?\"\n\n-- This is where all the action is (pass 2 of the assembler)\nmkBits :: DynFlags\n -> (Word16 -> Word) -- label finder\n -> AsmState\n -> [BCInstr] -- instructions (in)\n -> IO AsmState\n\nmkBits dflags findLabel st proto_insns\n = foldM doInstr st proto_insns\n where\n doInstr :: AsmState -> BCInstr -> IO AsmState\n doInstr st i\n = case i of\n STKCHECK n -> instr1Large st bci_STKCHECK n\n PUSH_L o1 -> instr2 st bci_PUSH_L o1\n PUSH_LL o1 o2 -> instr3 st bci_PUSH_LL o1 o2\n PUSH_LLL o1 o2 o3 -> instr4 st bci_PUSH_LLL o1 o2 o3\n PUSH_G nm -> do (p, st2) <- ptr st (BCOPtrName nm)\n instr2 st2 bci_PUSH_G p\n PUSH_PRIMOP op -> do (p, st2) <- ptr st (BCOPtrPrimOp op)\n instr2 st2 bci_PUSH_G p\n PUSH_BCO proto -> do ul_bco <- assembleBCO dflags proto\n (p, st2) <- ptr st (BCOPtrBCO ul_bco)\n instr2 st2 bci_PUSH_G p\n PUSH_ALTS proto -> do ul_bco <- assembleBCO dflags proto\n (p, st2) <- ptr st (BCOPtrBCO ul_bco)\n instr2 st2 bci_PUSH_ALTS p\n PUSH_ALTS_UNLIFTED proto pk -> do\n ul_bco <- assembleBCO dflags proto\n (p, st2) <- ptr st (BCOPtrBCO ul_bco)\n instr2 st2 (push_alts pk) p\n PUSH_UBX (Left lit) nws\n -> do (np, st2) <- literal st lit\n instr3 st2 bci_PUSH_UBX np nws\n PUSH_UBX (Right aa) nws\n -> do (np, st2) <- addr st aa\n instr3 st2 bci_PUSH_UBX np nws\n\n PUSH_APPLY_N -> do instr1 st bci_PUSH_APPLY_N\n PUSH_APPLY_V -> do instr1 st bci_PUSH_APPLY_V\n PUSH_APPLY_F -> do instr1 st bci_PUSH_APPLY_F\n PUSH_APPLY_D -> do instr1 st bci_PUSH_APPLY_D\n PUSH_APPLY_L -> do instr1 st bci_PUSH_APPLY_L\n PUSH_APPLY_P -> do instr1 st bci_PUSH_APPLY_P\n PUSH_APPLY_PP -> do instr1 st bci_PUSH_APPLY_PP\n PUSH_APPLY_PPP -> do instr1 st 
bci_PUSH_APPLY_PPP\n PUSH_APPLY_PPPP -> do instr1 st bci_PUSH_APPLY_PPPP\n PUSH_APPLY_PPPPP -> do instr1 st bci_PUSH_APPLY_PPPPP\n PUSH_APPLY_PPPPPP -> do instr1 st bci_PUSH_APPLY_PPPPPP\n\n SLIDE n by -> instr3 st bci_SLIDE n by\n ALLOC_AP n -> instr2 st bci_ALLOC_AP n\n ALLOC_AP_NOUPD n -> instr2 st bci_ALLOC_AP_NOUPD n\n ALLOC_PAP arity n -> instr3 st bci_ALLOC_PAP arity n\n MKAP off sz -> instr3 st bci_MKAP off sz\n MKPAP off sz -> instr3 st bci_MKPAP off sz\n UNPACK n -> instr2 st bci_UNPACK n\n PACK dcon sz -> do (itbl_no,st2) <- itbl st dcon\n instr3 st2 bci_PACK itbl_no sz\n LABEL _ -> return st\n TESTLT_I i l -> do (np, st2) <- int st i\n instr2Large st2 bci_TESTLT_I np (findLabel l)\n TESTEQ_I i l -> do (np, st2) <- int st i\n instr2Large st2 bci_TESTEQ_I np (findLabel l)\n TESTLT_W w l -> do (np, st2) <- word st w\n instr2Large st2 bci_TESTLT_W np (findLabel l)\n TESTEQ_W w l -> do (np, st2) <- word st w\n instr2Large st2 bci_TESTEQ_W np (findLabel l)\n TESTLT_F f l -> do (np, st2) <- float st f\n instr2Large st2 bci_TESTLT_F np (findLabel l)\n TESTEQ_F f l -> do (np, st2) <- float st f\n instr2Large st2 bci_TESTEQ_F np (findLabel l)\n TESTLT_D d l -> do (np, st2) <- double st d\n instr2Large st2 bci_TESTLT_D np (findLabel l)\n TESTEQ_D d l -> do (np, st2) <- double st d\n instr2Large st2 bci_TESTEQ_D np (findLabel l)\n TESTLT_P i l -> instr2Large st bci_TESTLT_P i (findLabel l)\n TESTEQ_P i l -> instr2Large st bci_TESTEQ_P i (findLabel l)\n CASEFAIL -> instr1 st bci_CASEFAIL\n SWIZZLE stkoff n -> instr3 st bci_SWIZZLE stkoff n\n JMP l -> instr1Large st bci_JMP (findLabel l)\n ENTER -> instr1 st bci_ENTER\n RETURN -> instr1 st bci_RETURN\n RETURN_UBX rep -> instr1 st (return_ubx rep)\n CCALL off m_addr int -> do (np, st2) <- addr st m_addr\n instr4 st2 bci_CCALL off np int\n BRK_FUN array index info -> do\n (p1, st2) <- ptr st (BCOPtrArray array)\n (p2, st3) <- ptr st2 (BCOPtrBreakInfo info)\n instr4 st3 bci_BRK_FUN p1 index p2\n\n instrn :: AsmState -> 
[Word16] -> IO AsmState\n instrn st [] = return st\n instrn (st_i, st_l, st_p) (i:is)\n = do st_i' <- addToSS st_i i\n instrn (st_i', st_l, st_p) is\n\n instr1Large st i1 large\n | large > 65535 = instrn st (largeArgInstr i1 : largeArg large)\n | otherwise = instr2 st i1 (fromIntegral large)\n\n instr2Large st i1 i2 large\n | large > 65535 = instrn st (largeArgInstr i1 : i2 : largeArg large)\n | otherwise = instr3 st i1 i2 (fromIntegral large)\n\n instr1 (st_i0,st_l0,st_p0) i1\n = do st_i1 <- addToSS st_i0 i1\n return (st_i1,st_l0,st_p0)\n\n instr2 (st_i0,st_l0,st_p0) w1 w2\n = do st_i1 <- addToSS st_i0 w1\n st_i2 <- addToSS st_i1 w2\n return (st_i2,st_l0,st_p0)\n\n instr3 (st_i0,st_l0,st_p0) w1 w2 w3\n = do st_i1 <- addToSS st_i0 w1\n st_i2 <- addToSS st_i1 w2\n st_i3 <- addToSS st_i2 w3\n return (st_i3,st_l0,st_p0)\n\n instr4 (st_i0,st_l0,st_p0) w1 w2 w3 w4\n = do st_i1 <- addToSS st_i0 w1\n st_i2 <- addToSS st_i1 w2\n st_i3 <- addToSS st_i2 w3\n st_i4 <- addToSS st_i3 w4\n return (st_i4,st_l0,st_p0)\n\n float (st_i0,st_l0,st_p0) f\n = do let ws = mkLitF f\n st_l1 <- addListToSS st_l0 (map BCONPtrWord ws)\n return (sizeSS16 st_l0, (st_i0,st_l1,st_p0))\n\n double (st_i0,st_l0,st_p0) d\n = do let ws = mkLitD d\n st_l1 <- addListToSS st_l0 (map BCONPtrWord ws)\n return (sizeSS16 st_l0, (st_i0,st_l1,st_p0))\n\n int (st_i0,st_l0,st_p0) i\n = do let ws = mkLitI i\n st_l1 <- addListToSS st_l0 (map BCONPtrWord ws)\n return (sizeSS16 st_l0, (st_i0,st_l1,st_p0))\n\n word (st_i0,st_l0,st_p0) w\n = do let ws = [w]\n st_l1 <- addListToSS st_l0 (map BCONPtrWord ws)\n return (sizeSS16 st_l0, (st_i0,st_l1,st_p0))\n\n int64 (st_i0,st_l0,st_p0) i\n = do let ws = mkLitI64 i\n st_l1 <- addListToSS st_l0 (map BCONPtrWord ws)\n return (sizeSS16 st_l0, (st_i0,st_l1,st_p0))\n\n addr (st_i0,st_l0,st_p0) a\n = do let ws = mkLitPtr a\n st_l1 <- addListToSS st_l0 (map BCONPtrWord ws)\n return (sizeSS16 st_l0, (st_i0,st_l1,st_p0))\n\n litlabel (st_i0,st_l0,st_p0) fs\n = do st_l1 <- 
addListToSS st_l0 [BCONPtrLbl fs]\n return (sizeSS16 st_l0, (st_i0,st_l1,st_p0))\n\n ptr (st_i0,st_l0,st_p0) p\n = do st_p1 <- addToSS st_p0 p\n return (sizeSS16 st_p0, (st_i0,st_l0,st_p1))\n\n itbl (st_i0,st_l0,st_p0) dcon\n = do st_l1 <- addToSS st_l0 (BCONPtrItbl (getName dcon))\n return (sizeSS16 st_l0, (st_i0,st_l1,st_p0))\n\n literal st (MachLabel fs (Just sz) _)\n | platformOS (targetPlatform dflags) == OSMinGW32\n = litlabel st (appendFS fs (mkFastString ('@':show sz)))\n -- On Windows, stdcall labels have a suffix indicating the no. of\n -- arg words, e.g. foo@8. testcase: ffi012(ghci)\n literal st (MachLabel fs _ _) = litlabel st fs\n literal st (MachWord w) = int st (fromIntegral w)\n literal st (MachInt j) = int st (fromIntegral j)\n literal st MachNullAddr = int st 0\n literal st (MachFloat r) = float st (fromRational r)\n literal st (MachDouble r) = double st (fromRational r)\n literal st (MachChar c) = int st (ord c)\n literal st (MachInt64 ii) = int64 st (fromIntegral ii)\n literal st (MachWord64 ii) = int64 st (fromIntegral ii)\n literal _ other = pprPanic \"ByteCodeAsm.literal\" (ppr other)\n\n\npush_alts :: CgRep -> Word16\npush_alts NonPtrArg = bci_PUSH_ALTS_N\npush_alts FloatArg = bci_PUSH_ALTS_F\npush_alts DoubleArg = bci_PUSH_ALTS_D\npush_alts VoidArg = bci_PUSH_ALTS_V\npush_alts LongArg = bci_PUSH_ALTS_L\npush_alts PtrArg = bci_PUSH_ALTS_P\n\nreturn_ubx :: CgRep -> Word16\nreturn_ubx NonPtrArg = bci_RETURN_N\nreturn_ubx FloatArg = bci_RETURN_F\nreturn_ubx DoubleArg = bci_RETURN_D\nreturn_ubx VoidArg = bci_RETURN_V\nreturn_ubx LongArg = bci_RETURN_L\nreturn_ubx PtrArg = bci_RETURN_P\n\n\n-- The size in 16-bit entities of an instruction.\ninstrSize16s :: BCInstr -> Word\ninstrSize16s instr\n = case instr of\n STKCHECK{} -> 2\n PUSH_L{} -> 2\n PUSH_LL{} -> 3\n PUSH_LLL{} -> 4\n PUSH_G{} -> 2\n PUSH_PRIMOP{} -> 2\n PUSH_BCO{} -> 2\n PUSH_ALTS{} -> 2\n PUSH_ALTS_UNLIFTED{} -> 2\n PUSH_UBX{} -> 3\n PUSH_APPLY_N{} -> 1\n PUSH_APPLY_V{} -> 1\n 
PUSH_APPLY_F{} -> 1\n PUSH_APPLY_D{} -> 1\n PUSH_APPLY_L{} -> 1\n PUSH_APPLY_P{} -> 1\n PUSH_APPLY_PP{} -> 1\n PUSH_APPLY_PPP{} -> 1\n PUSH_APPLY_PPPP{} -> 1\n PUSH_APPLY_PPPPP{} -> 1\n PUSH_APPLY_PPPPPP{} -> 1\n SLIDE{} -> 3\n ALLOC_AP{} -> 2\n ALLOC_AP_NOUPD{} -> 2\n ALLOC_PAP{} -> 3\n MKAP{} -> 3\n MKPAP{} -> 3\n UNPACK{} -> 2\n PACK{} -> 3\n LABEL{} -> 0 -- !!\n TESTLT_I{} -> 3\n TESTEQ_I{} -> 3\n TESTLT_W{} -> 3\n TESTEQ_W{} -> 3\n TESTLT_F{} -> 3\n TESTEQ_F{} -> 3\n TESTLT_D{} -> 3\n TESTEQ_D{} -> 3\n TESTLT_P{} -> 3\n TESTEQ_P{} -> 3\n JMP{} -> 2\n CASEFAIL{} -> 1\n ENTER{} -> 1\n RETURN{} -> 1\n RETURN_UBX{} -> 1\n CCALL{} -> 4\n SWIZZLE{} -> 3\n BRK_FUN{} -> 4\n\n-- Make lists of host-sized words for literals, so that when the\n-- words are placed in memory at increasing addresses, the\n-- bit pattern is correct for the host's word size and endianness.\nmkLitI :: Int -> [Word]\nmkLitF :: Float -> [Word]\nmkLitD :: Double -> [Word]\nmkLitPtr :: Ptr () -> [Word]\nmkLitI64 :: Int64 -> [Word]\n\nmkLitF f\n = runST (do\n arr <- newArray_ ((0::Int),0)\n writeArray arr 0 f\n f_arr <- castSTUArray arr\n w0 <- readArray f_arr 0\n return [w0 :: Word]\n )\n\nmkLitD d\n | wORD_SIZE == 4\n = runST (do\n arr <- newArray_ ((0::Int),1)\n writeArray arr 0 d\n d_arr <- castSTUArray arr\n w0 <- readArray d_arr 0\n w1 <- readArray d_arr 1\n return [w0 :: Word, w1]\n )\n | wORD_SIZE == 8\n = runST (do\n arr <- newArray_ ((0::Int),0)\n writeArray arr 0 d\n d_arr <- castSTUArray arr\n w0 <- readArray d_arr 0\n return [w0 :: Word]\n )\n | otherwise\n = panic \"mkLitD: Bad wORD_SIZE\"\n\nmkLitI64 ii\n | wORD_SIZE == 4\n = runST (do\n arr <- newArray_ ((0::Int),1)\n writeArray arr 0 ii\n d_arr <- castSTUArray arr\n w0 <- readArray d_arr 0\n w1 <- readArray d_arr 1\n return [w0 :: Word,w1]\n )\n | wORD_SIZE == 8\n = runST (do\n arr <- newArray_ ((0::Int),0)\n writeArray arr 0 ii\n d_arr <- castSTUArray arr\n w0 <- readArray d_arr 0\n return [w0 :: Word]\n )\n | otherwise\n = panic 
\"mkLitI64: Bad wORD_SIZE\"\n\nmkLitI i\n = runST (do\n arr <- newArray_ ((0::Int),0)\n writeArray arr 0 i\n i_arr <- castSTUArray arr\n w0 <- readArray i_arr 0\n return [w0 :: Word]\n )\n\nmkLitPtr a\n = runST (do\n arr <- newArray_ ((0::Int),0)\n writeArray arr 0 a\n a_arr <- castSTUArray arr\n w0 <- readArray a_arr 0\n return [w0 :: Word]\n )\n\niNTERP_STACK_CHECK_THRESH :: Int\niNTERP_STACK_CHECK_THRESH = INTERP_STACK_CHECK_THRESH\n\\end{code}\n","avg_line_length":37.6970227671,"max_line_length":87,"alphanum_fraction":0.5462485482} {"size":20161,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"%\n% (c) The University of Glasgow 2006\n% (c) The GRASP\/AQUA Project, Glasgow University, 1992-1998\n%\n\n@DsMonad@: monadery used in desugaring\n\n\\begin{code}\n{-# LANGUAGE FlexibleInstances #-}\n\nmodule DsMonad (\n DsM, mapM, mapAndUnzipM,\n initDs, initDsTc, fixDs,\n foldlM, foldrM, whenGOptM, unsetGOptM, unsetWOptM,\n Applicative(..),(<$>),\n\n newLocalName,\n duplicateLocalDs, newSysLocalDs, newSysLocalsDs, newUniqueId,\n newFailLocalDs, newPredVarDs,\n getSrcSpanDs, putSrcSpanDs,\n mkPrintUnqualifiedDs,\n newUnique, \n UniqSupply, newUniqueSupply,\n getGhcModeDs, dsGetFamInstEnvs,\n dsLookupGlobal, dsLookupGlobalId, dsDPHBuiltin, dsLookupTyCon, dsLookupDataCon,\n \n PArrBuiltin(..), \n dsLookupDPHRdrEnv, dsLookupDPHRdrEnv_maybe,\n dsInitPArrBuiltin,\n\n DsMetaEnv, DsMetaVal(..), dsGetMetaEnv, dsLookupMetaEnv, dsExtendMetaEnv,\n\n -- Warnings\n DsWarning, warnDs, failWithDs, discardWarningsDs,\n\n -- Data types\n DsMatchContext(..),\n EquationInfo(..), MatchResult(..), DsWrapper, idDsWrapper,\n CanItFail(..), orFail\n ) where\n\nimport TcRnMonad\nimport FamInstEnv\nimport CoreSyn\nimport HsSyn\nimport TcIface\nimport LoadIface\nimport Finder\nimport PrelNames\nimport RdrName\nimport HscTypes\nimport Bag\nimport DataCon\nimport TyCon\nimport Id\nimport Module\nimport Outputable\nimport SrcLoc\nimport Type\nimport UniqSupply\nimport 
Name\nimport NameEnv\nimport DynFlags\nimport ErrUtils\nimport FastString\nimport Maybes\n\nimport Data.IORef\nimport Control.Monad\n\\end{code}\n\n%************************************************************************\n%* *\n Data types for the desugarer\n%* *\n%************************************************************************\n\n\\begin{code}\ndata DsMatchContext\n = DsMatchContext (HsMatchContext Name) SrcSpan\n deriving ()\n\ndata EquationInfo\n = EqnInfo { eqn_pats :: [Pat Id], -- The patterns for an eqn\n eqn_rhs :: MatchResult } -- What to do after match\n\ninstance Outputable EquationInfo where\n ppr (EqnInfo pats _) = ppr pats\n\ntype DsWrapper = CoreExpr -> CoreExpr\nidDsWrapper :: DsWrapper\nidDsWrapper e = e\n\n-- The semantics of (match vs (EqnInfo wrap pats rhs)) is the MatchResult\n-- \\fail. wrap (case vs of { pats -> rhs fail })\n-- where vs are not bound by wrap\n\n\n-- A MatchResult is an expression with a hole in it\ndata MatchResult\n = MatchResult\n CanItFail -- Tells whether the failure expression is used\n (CoreExpr -> DsM CoreExpr)\n -- Takes an expression to plug in at the\n -- failure point(s). 
The expression should\n -- be duplicatable!\n\ndata CanItFail = CanFail | CantFail\n\norFail :: CanItFail -> CanItFail -> CanItFail\norFail CantFail CantFail = CantFail\norFail _ _ = CanFail\n\\end{code}\n\n\n%************************************************************************\n%* *\n Monad stuff\n%* *\n%************************************************************************\n\nNow the mondo monad magic (yes, @DsM@ is a silly name)---carry around\na @UniqueSupply@ and some annotations, which\npresumably include source-file location information:\n\n\\begin{code}\ntype DsM result = TcRnIf DsGblEnv DsLclEnv result\n\n-- Compatibility functions\nfixDs :: (a -> DsM a) -> DsM a\nfixDs = fixM\n\ntype DsWarning = (SrcSpan, SDoc)\n -- Not quite the same as a WarnMsg, we have an SDoc here \n -- and we'll do the print_unqual stuff later on to turn it\n -- into a Doc.\n\n-- If '-XParallelArrays' is given, the desugarer populates this table with the corresponding\n-- variables found in 'Data.Array.Parallel'.\n--\ndata PArrBuiltin\n = PArrBuiltin\n { lengthPVar :: Var -- ^ lengthP\n , replicatePVar :: Var -- ^ replicateP\n , singletonPVar :: Var -- ^ singletonP\n , mapPVar :: Var -- ^ mapP\n , filterPVar :: Var -- ^ filterP\n , zipPVar :: Var -- ^ zipP\n , crossMapPVar :: Var -- ^ crossMapP\n , indexPVar :: Var -- ^ (!:)\n , emptyPVar :: Var -- ^ emptyP\n , appPVar :: Var -- ^ (+:+)\n , enumFromToPVar :: Var -- ^ enumFromToP\n , enumFromThenToPVar :: Var -- ^ enumFromThenToP\n }\n\ndata DsGblEnv \n = DsGblEnv\n { ds_mod :: Module -- For SCC profiling\n , ds_fam_inst_env :: FamInstEnv -- Like tcg_fam_inst_env\n , ds_unqual :: PrintUnqualified\n , ds_msgs :: IORef Messages -- Warning messages\n , ds_if_env :: (IfGblEnv, IfLclEnv) -- Used for looking up global, \n -- possibly-imported things\n , ds_dph_env :: GlobalRdrEnv -- exported entities of 'Data.Array.Parallel.Prim'\n -- iff '-fvectorise' flag was given as well as\n -- exported entities of 'Data.Array.Parallel' iff\n -- 
'-XParallelArrays' was given; otherwise, empty\n , ds_parr_bi :: PArrBuiltin -- desugarar names for '-XParallelArrays'\n }\n\ninstance ContainsModule DsGblEnv where\n extractModule = ds_mod\n\ndata DsLclEnv = DsLclEnv {\n ds_meta :: DsMetaEnv, -- Template Haskell bindings\n ds_loc :: SrcSpan -- to put in pattern-matching error msgs\n }\n\n-- Inside [| |] brackets, the desugarer looks \n-- up variables in the DsMetaEnv\ntype DsMetaEnv = NameEnv DsMetaVal\n\ndata DsMetaVal\n = Bound Id -- Bound by a pattern inside the [| |]. \n -- Will be dynamically alpha renamed.\n -- The Id has type THSyntax.Var\n\n | Splice (HsExpr Id) -- These bindings are introduced by\n -- the PendingSplices on a HsBracketOut\n\ninitDs :: HscEnv\n -> Module -> GlobalRdrEnv -> TypeEnv -> FamInstEnv\n -> DsM a\n -> IO (Messages, Maybe a)\n-- Print errors and warnings, if any arise\n\ninitDs hsc_env mod rdr_env type_env fam_inst_env thing_inside\n = do { msg_var <- newIORef (emptyBag, emptyBag)\n ; let dflags = hsc_dflags hsc_env\n (ds_gbl_env, ds_lcl_env) = mkDsEnvs dflags mod rdr_env type_env fam_inst_env msg_var\n\n ; either_res <- initTcRnIf 'd' hsc_env ds_gbl_env ds_lcl_env $\n loadDAP $\n initDPHBuiltins $\n tryM thing_inside -- Catch exceptions (= errors during desugaring)\n\n -- Display any errors and warnings \n -- Note: if -Werror is used, we don't signal an error here.\n ; msgs <- readIORef msg_var\n\n ; let final_res | errorsFound dflags msgs = Nothing\n | otherwise = case either_res of\n Right res -> Just res\n Left exn -> pprPanic \"initDs\" (text (show exn))\n -- The (Left exn) case happens when the thing_inside throws\n -- a UserError exception. 
Then it should have put an error\n -- message in msg_var, so we just discard the exception\n\n ; return (msgs, final_res) \n }\n where\n -- Extend the global environment with a 'GlobalRdrEnv' containing the exported entities of\n -- * 'Data.Array.Parallel' iff '-XParallelArrays' specified (see also 'checkLoadDAP').\n -- * 'Data.Array.Parallel.Prim' iff '-fvectorise' specified.\n loadDAP thing_inside\n = do { dapEnv <- loadOneModule dATA_ARRAY_PARALLEL_NAME checkLoadDAP paErr\n ; dappEnv <- loadOneModule dATA_ARRAY_PARALLEL_PRIM_NAME (goptM Opt_Vectorise) veErr\n ; updGblEnv (\\env -> env {ds_dph_env = dapEnv `plusOccEnv` dappEnv }) thing_inside\n }\n where\n loadOneModule :: ModuleName -- the module to load\n -> DsM Bool -- under which condition\n -> MsgDoc -- error message if module not found\n -> DsM GlobalRdrEnv -- empty if condition 'False'\n loadOneModule modname check err\n = do { doLoad <- check\n ; if not doLoad \n then return emptyGlobalRdrEnv\n else do {\n ; result <- liftIO $ findImportedModule hsc_env modname Nothing\n ; case result of\n Found _ mod -> loadModule err mod\n _ -> pprPgmError \"Unable to use Data Parallel Haskell (DPH):\" err\n } }\n\n paErr = ptext (sLit \"To use ParallelArrays,\") <+> specBackend $$ hint1 $$ hint2\n veErr = ptext (sLit \"To use -fvectorise,\") <+> specBackend $$ hint1 $$ hint2\n specBackend = ptext (sLit \"you must specify a DPH backend package\")\n hint1 = ptext (sLit \"Look for packages named 'dph-lifted-*' with 'ghc-pkg'\")\n hint2 = ptext (sLit \"You may need to install them with 'cabal install dph-examples'\")\n\n initDPHBuiltins thing_inside\n = do { -- If '-XParallelArrays' given, we populate the builtin table for desugaring those\n ; doInitBuiltins <- checkLoadDAP\n ; if doInitBuiltins\n then dsInitPArrBuiltin thing_inside\n else thing_inside\n }\n\n checkLoadDAP = do { paEnabled <- xoptM Opt_ParallelArrays\n ; return $ paEnabled &&\n mod \/= gHC_PARR' && \n moduleName mod \/= dATA_ARRAY_PARALLEL_NAME\n }\n -- do 
not load 'Data.Array.Parallel' iff compiling 'base:GHC.PArr' or a\n -- module called 'dATA_ARRAY_PARALLEL_NAME'; see also the comments at the top\n -- of 'base:GHC.PArr' and 'Data.Array.Parallel' in the DPH libraries\n\ninitDsTc :: DsM a -> TcM a\ninitDsTc thing_inside\n = do { this_mod <- getModule\n ; tcg_env <- getGblEnv\n ; msg_var <- getErrsVar\n ; dflags <- getDynFlags\n ; let type_env = tcg_type_env tcg_env\n rdr_env = tcg_rdr_env tcg_env\n fam_inst_env = tcg_fam_inst_env tcg_env\n ds_envs = mkDsEnvs dflags this_mod rdr_env type_env fam_inst_env msg_var\n ; setEnvs ds_envs thing_inside\n }\n\nmkDsEnvs :: DynFlags -> Module -> GlobalRdrEnv -> TypeEnv -> FamInstEnv -> IORef Messages -> (DsGblEnv, DsLclEnv)\nmkDsEnvs dflags mod rdr_env type_env fam_inst_env msg_var\n = let if_genv = IfGblEnv { if_rec_types = Just (mod, return type_env) }\n if_lenv = mkIfLclEnv mod (ptext (sLit \"GHC error in desugarer lookup in\") <+> ppr mod)\n gbl_env = DsGblEnv { ds_mod = mod\n , ds_fam_inst_env = fam_inst_env\n , ds_if_env = (if_genv, if_lenv)\n , ds_unqual = mkPrintUnqualified dflags rdr_env\n , ds_msgs = msg_var\n , ds_dph_env = emptyGlobalRdrEnv\n , ds_parr_bi = panic \"DsMonad: uninitialised ds_parr_bi\"\n }\n lcl_env = DsLclEnv { ds_meta = emptyNameEnv\n , ds_loc = noSrcSpan\n }\n in (gbl_env, lcl_env)\n\n-- Attempt to load the given module and return its exported entities if successful.\n--\nloadModule :: SDoc -> Module -> DsM GlobalRdrEnv\nloadModule doc mod\n = do { env <- getGblEnv\n ; setEnvs (ds_if_env env) $ do\n { iface <- loadInterface doc mod ImportBySystem\n ; case iface of\n Failed err -> pprPanic \"DsMonad.loadModule: failed to load\" (err $$ doc)\n Succeeded iface -> return $ mkGlobalRdrEnv . gresFromAvails prov . 
mi_exports $ iface\n } }\n where\n prov = Imported [ImpSpec { is_decl = imp_spec, is_item = ImpAll }]\n imp_spec = ImpDeclSpec { is_mod = name, is_qual = True,\n is_dloc = wiredInSrcSpan, is_as = name }\n name = moduleName mod\n\\end{code}\n\n\n%************************************************************************\n%* *\n Operations in the monad\n%* *\n%************************************************************************\n\nAnd all this mysterious stuff is so we can occasionally reach out and\ngrab one or more names. @newLocalDs@ isn't exported---exported\nfunctions are defined with it. The difference in name-strings makes\nit easier to read debugging output.\n\n\\begin{code}\n-- Make a new Id with the same print name, but different type, and new unique\nnewUniqueId :: Id -> Type -> DsM Id\nnewUniqueId id = mkSysLocalM (occNameFS (nameOccName (idName id)))\n\nduplicateLocalDs :: Id -> DsM Id\nduplicateLocalDs old_local \n = do { uniq <- newUnique\n ; return (setIdUnique old_local uniq) }\n\nnewPredVarDs :: PredType -> DsM Var\nnewPredVarDs pred\n = newSysLocalDs pred\n \nnewSysLocalDs, newFailLocalDs :: Type -> DsM Id\nnewSysLocalDs = mkSysLocalM (fsLit \"ds\")\nnewFailLocalDs = mkSysLocalM (fsLit \"fail\")\n\nnewSysLocalsDs :: [Type] -> DsM [Id]\nnewSysLocalsDs tys = mapM newSysLocalDs tys\n\\end{code}\n\nWe can also reach out and either set\/grab location information from\nthe @SrcSpan@ being carried around.\n\n\\begin{code}\ngetGhcModeDs :: DsM GhcMode\ngetGhcModeDs = getDynFlags >>= return . 
ghcMode\n\ngetSrcSpanDs :: DsM SrcSpan\ngetSrcSpanDs = do { env <- getLclEnv; return (ds_loc env) }\n\nputSrcSpanDs :: SrcSpan -> DsM a -> DsM a\nputSrcSpanDs new_loc thing_inside = updLclEnv (\\ env -> env {ds_loc = new_loc}) thing_inside\n\nwarnDs :: SDoc -> DsM ()\nwarnDs warn = do { env <- getGblEnv \n ; loc <- getSrcSpanDs\n ; dflags <- getDynFlags\n ; let msg = mkWarnMsg dflags loc (ds_unqual env) warn\n ; updMutVar (ds_msgs env) (\\ (w,e) -> (w `snocBag` msg, e)) }\n\nfailWithDs :: SDoc -> DsM a\nfailWithDs err \n = do { env <- getGblEnv \n ; loc <- getSrcSpanDs\n ; dflags <- getDynFlags\n ; let msg = mkErrMsg dflags loc (ds_unqual env) err\n ; updMutVar (ds_msgs env) (\\ (w,e) -> (w, e `snocBag` msg))\n ; failM }\n\nmkPrintUnqualifiedDs :: DsM PrintUnqualified\nmkPrintUnqualifiedDs = ds_unqual <$> getGblEnv\n\\end{code}\n\n\\begin{code}\ninstance MonadThings (IOEnv (Env DsGblEnv DsLclEnv)) where\n lookupThing = dsLookupGlobal\n\ndsLookupGlobal :: Name -> DsM TyThing\n-- Very like TcEnv.tcLookupGlobal\ndsLookupGlobal name \n = do { env <- getGblEnv\n ; setEnvs (ds_if_env env)\n (tcIfaceGlobal name) }\n\ndsLookupGlobalId :: Name -> DsM Id\ndsLookupGlobalId name \n = tyThingId <$> dsLookupGlobal name\n\n-- |Get a name from \"Data.Array.Parallel\" for the desugarer, from the 'ds_parr_bi' component of the\n-- global desugarer environment.\n--\ndsDPHBuiltin :: (PArrBuiltin -> a) -> DsM a\ndsDPHBuiltin sel = (sel . 
ds_parr_bi) <$> getGblEnv\n\ndsLookupTyCon :: Name -> DsM TyCon\ndsLookupTyCon name\n = tyThingTyCon <$> dsLookupGlobal name\n\ndsLookupDataCon :: Name -> DsM DataCon\ndsLookupDataCon name\n = tyThingDataCon <$> dsLookupGlobal name\n\\end{code}\n\n\\begin{code}\n\n\n-- |Lookup a name exported by 'Data.Array.Parallel' or 'Data.Array.Parallel.Prim'.\n-- Panic if there isn't one, or if it is defined multiple times.\ndsLookupDPHRdrEnv :: OccName -> DsM Name\ndsLookupDPHRdrEnv occ\n = liftM (fromMaybe (pprPanic nameNotFound (ppr occ)))\n $ dsLookupDPHRdrEnv_maybe occ\n where nameNotFound = \"Name not found in 'Data.Array.Parallel' or 'Data.Array.Parallel.Prim':\"\n\n-- |Lookup a name exported by 'Data.Array.Parallel' or 'Data.Array.Parallel.Prim',\n-- returning `Nothing` if it's not defined. Panic if it's defined multiple times.\ndsLookupDPHRdrEnv_maybe :: OccName -> DsM (Maybe Name)\ndsLookupDPHRdrEnv_maybe occ\n = do { env <- ds_dph_env <$> getGblEnv\n ; let gres = lookupGlobalRdrEnv env occ\n ; case gres of\n [] -> return $ Nothing\n [gre] -> return $ Just $ gre_name gre\n _ -> pprPanic multipleNames (ppr occ)\n }\n where multipleNames = \"Multiple definitions in 'Data.Array.Parallel' and 'Data.Array.Parallel.Prim':\"\n\n\n-- Populate 'ds_parr_bi' from 'ds_dph_env'.\n--\ndsInitPArrBuiltin :: DsM a -> DsM a\ndsInitPArrBuiltin thing_inside\n = do { lengthPVar <- externalVar (fsLit \"lengthP\")\n ; replicatePVar <- externalVar (fsLit \"replicateP\")\n ; singletonPVar <- externalVar (fsLit \"singletonP\")\n ; mapPVar <- externalVar (fsLit \"mapP\")\n ; filterPVar <- externalVar (fsLit \"filterP\")\n ; zipPVar <- externalVar (fsLit \"zipP\")\n ; crossMapPVar <- externalVar (fsLit \"crossMapP\")\n ; indexPVar <- externalVar (fsLit \"!:\")\n ; emptyPVar <- externalVar (fsLit \"emptyP\")\n ; appPVar <- externalVar (fsLit \"+:+\")\n -- ; enumFromToPVar <- externalVar (fsLit \"enumFromToP\")\n -- ; enumFromThenToPVar <- externalVar (fsLit \"enumFromThenToP\")\n ; 
enumFromToPVar <- return arithErr\n ; enumFromThenToPVar <- return arithErr\n\n ; updGblEnv (\\env -> env {ds_parr_bi = PArrBuiltin\n { lengthPVar = lengthPVar\n , replicatePVar = replicatePVar\n , singletonPVar = singletonPVar\n , mapPVar = mapPVar\n , filterPVar = filterPVar\n , zipPVar = zipPVar\n , crossMapPVar = crossMapPVar\n , indexPVar = indexPVar\n , emptyPVar = emptyPVar\n , appPVar = appPVar\n , enumFromToPVar = enumFromToPVar\n , enumFromThenToPVar = enumFromThenToPVar\n } })\n thing_inside\n }\n where\n externalVar :: FastString -> DsM Var\n externalVar fs = dsLookupDPHRdrEnv (mkVarOccFS fs) >>= dsLookupGlobalId\n\n arithErr = panic \"Arithmetic sequences have to wait until we support type classes\"\n\\end{code}\n\n\\begin{code}\ndsGetFamInstEnvs :: DsM FamInstEnvs\n-- Gets both the external-package inst-env\n-- and the home-pkg inst env (includes module being compiled)\ndsGetFamInstEnvs\n = do { eps <- getEps; env <- getGblEnv\n ; return (eps_fam_inst_env eps, ds_fam_inst_env env) }\n\ndsGetMetaEnv :: DsM (NameEnv DsMetaVal)\ndsGetMetaEnv = do { env <- getLclEnv; return (ds_meta env) }\n\ndsLookupMetaEnv :: Name -> DsM (Maybe DsMetaVal)\ndsLookupMetaEnv name = do { env <- getLclEnv; return (lookupNameEnv (ds_meta env) name) }\n\ndsExtendMetaEnv :: DsMetaEnv -> DsM a -> DsM a\ndsExtendMetaEnv menv thing_inside\n = updLclEnv (\\env -> env { ds_meta = ds_meta env `plusNameEnv` menv }) thing_inside\n\\end{code}\n\n\\begin{code}\ndiscardWarningsDs :: DsM a -> DsM a\n-- Ignore warnings inside the thing inside;\n-- used to ignore inaccessible cases etc. 
inside generated code\ndiscardWarningsDs thing_inside\n = do { env <- getGblEnv\n ; old_msgs <- readTcRef (ds_msgs env)\n\n ; result <- thing_inside\n\n -- Revert messages to old_msgs\n ; writeTcRef (ds_msgs env) old_msgs\n\n ; return result }\n\\end{code}\n","avg_line_length":39.0717054264,"max_line_length":113,"alphanum_fraction":0.5678785775} {"size":81152,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"ToDo [Oct 2013]\n~~~~~~~~~~~~~~~\n1. Nuke ForceSpecConstr for good (it is subsumed by GHC.Types.SPEC in ghc-prim)\n2. Nuke NoSpecConstr\n\n%\n% (c) The GRASP\/AQUA Project, Glasgow University, 1992-1998\n%\n\\section[SpecConstr]{Specialise over constructors}\n\n\\begin{code}\nmodule SpecConstr(\n specConstrProgram\n#ifdef GHCI\n , SpecConstrAnnotation(..)\n#endif\n ) where\n\n#include \"HsVersions.h\"\n\nimport CoreSyn\nimport CoreSubst\nimport CoreUtils\nimport CoreUnfold ( couldBeSmallEnoughToInline )\nimport CoreFVs ( exprsFreeVars )\nimport CoreMonad\nimport Literal ( litIsLifted )\nimport HscTypes ( ModGuts(..) )\nimport WwLib ( mkWorkerArgs )\nimport DataCon\nimport Coercion hiding( substTy, substCo )\nimport Rules\nimport Type hiding ( substTy )\nimport TyCon ( isRecursiveTyCon, tyConName )\nimport Id\nimport PprCore ( pprParendExpr )\nimport MkCore ( mkImpossibleExpr )\nimport Var\nimport VarEnv\nimport VarSet\nimport Name\nimport BasicTypes\nimport DynFlags ( DynFlags(..) )\nimport StaticFlags ( opt_PprStyle_Debug )\nimport Maybes ( orElse, catMaybes, isJust, isNothing )\nimport Demand\nimport Serialized ( deserializeWithData )\nimport Util\nimport Pair\nimport UniqSupply\nimport Outputable\nimport FastString\nimport UniqFM\nimport MonadUtils\nimport Control.Monad ( zipWithM )\nimport Data.List\nimport PrelNames ( specTyConName )\n\n-- See Note [Forcing specialisation]\n#ifndef GHCI\ntype SpecConstrAnnotation = ()\n#else\nimport TyCon ( TyCon )\nimport GHC.Exts( SpecConstrAnnotation(..) 
)\n#endif\n\\end{code}\n\n-----------------------------------------------------\n Game plan\n-----------------------------------------------------\n\nConsider\n drop n [] = []\n drop 0 xs = []\n drop n (x:xs) = drop (n-1) xs\n\nAfter the first time round, we could pass n unboxed. This happens in\nnumerical code too. Here's what it looks like in Core:\n\n drop n xs = case xs of\n [] -> []\n (y:ys) -> case n of\n I# n# -> case n# of\n 0 -> []\n _ -> drop (I# (n# -# 1#)) xs\n\nNotice that the recursive call has an explicit constructor as argument.\nNoticing this, we can make a specialised version of drop\n\n RULE: drop (I# n#) xs ==> drop' n# xs\n\n drop' n# xs = let n = I# n# in ...orig RHS...\n\nNow the simplifier will apply the specialisation in the rhs of drop', giving\n\n drop' n# xs = case xs of\n [] -> []\n (y:ys) -> case n# of\n 0 -> []\n _ -> drop' (n# -# 1#) xs\n\nMuch better!\n\nWe'd also like to catch cases where a parameter is carried along unchanged,\nbut evaluated each time round the loop:\n\n f i n = if i>0 || i>n then i else f (i*2) n\n\nHere f isn't strict in n, but we'd like to avoid evaluating it each iteration.\nIn Core, by the time we've w\/wd (f is strict in i) we get\n\n f i# n = case i# ># 0 of\n False -> I# i#\n True -> case n of { I# n# ->\n case i# ># n# of\n False -> I# i#\n True -> f (i# *# 2#) n\n\nAt the call to f, we see that the argument, n is known to be (I# n#),\nand n is evaluated elsewhere in the body of f, so we can play the same\ntrick as above.\n\n\nNote [Reboxing]\n~~~~~~~~~~~~~~~\nWe must be careful not to allocate the same constructor twice. Consider\n f p = (...(case p of (a,b) -> e)...p...,\n ...let t = (r,s) in ...t...(f t)...)\nAt the recursive call to f, we can see that t is a pair. 
But we do NOT want\nto make a specialised copy:\n f' a b = let p = (a,b) in (..., ...)\nbecause now t is allocated by the caller, then r and s are passed to the\nrecursive call, which allocates the (r,s) pair again.\n\nThis happens if\n (a) the argument p is used in other than a case-scrutinisation way.\n (b) the argument to the call is not a 'fresh' tuple; you have to\n look into its unfolding to see that it's a tuple\n\nHence the \"OR\" part of Note [Good arguments] below.\n\nALTERNATIVE 2: pass both boxed and unboxed versions. This no longer saves\nallocation, but does perhaps save evals. In the RULE we'd have\nsomething like\n\n f (I# x#) = f' (I# x#) x#\n\nIf at the call site the (I# x) was an unfolding, then we'd have to\nrely on CSE to eliminate the duplicate allocation.... This alternative\ndoesn't look attractive enough to pursue.\n\nALTERNATIVE 3: ignore the reboxing problem. The trouble is that\nthe conservative reboxing story prevents many useful functions from being\nspecialised. Example:\n foo :: Maybe Int -> Int -> Int\n foo (Just m) 0 = 0\n foo x@(Just m) n = foo x (n-m)\nHere the use of 'x' will clearly not require boxing in the specialised function.\n\nThe strictness analyser has the same problem, in fact. Example:\n f p@(a,b) = ...\nIf we pass just 'a' and 'b' to the worker, it might need to rebox the\npair to create (a,b). A more sophisticated analysis might figure out\nprecisely the cases in which this could happen, but the strictness\nanalyser does no such analysis; it just passes 'a' and 'b', and hopes\nfor the best.\n\nSo my current choice is to make SpecConstr similarly aggressive, and\nignore the bad potential of reboxing.\n\n\nNote [Good arguments]\n~~~~~~~~~~~~~~~~~~~~~\nSo we look for\n\n* A self-recursive function. 
Ignore mutual recursion for now,\n because it's less common, and the code is simpler for self-recursion.\n\n* EITHER\n\n a) At a recursive call, one or more parameters is an explicit\n constructor application\n AND\n That same parameter is scrutinised by a case somewhere in\n the RHS of the function\n\n OR\n\n b) At a recursive call, one or more parameters has an unfolding\n that is an explicit constructor application\n AND\n That same parameter is scrutinised by a case somewhere in\n the RHS of the function\n AND\n Those are the only uses of the parameter (see Note [Reboxing])\n\n\nWhat to abstract over\n~~~~~~~~~~~~~~~~~~~~~\nThere's a bit of a complication with type arguments. If the call\nsite looks like\n\n f p = ...f ((:) [a] x xs)...\n\nthen our specialised function looks like\n\n f_spec x xs = let p = (:) [a] x xs in ....as before....\n\nThis only makes sense if either\n a) the type variable 'a' is in scope at the top of f, or\n b) the type variable 'a' is an argument to f (and hence fs)\n\nActually, (a) may hold for value arguments too, in which case\nwe may not want to pass them. Suppose 'x' is in scope at f's\ndefn, but xs is not. Then we'd like\n\n f_spec xs = let p = (:) [a] x xs in ....as before....\n\nSimilarly (b) may hold too. If x is already an argument at the\ncall, no need to pass it again.\n\nFinally, if 'a' is not in scope at the call site, we could abstract\nit as we do the term variables:\n\n f_spec a x xs = let p = (:) [a] x xs in ...as before...\n\nSo the grand plan is:\n\n * abstract the call site to a constructor-only pattern\n e.g. C x (D (f p) (g q)) ==> C s1 (D s2 s3)\n\n * Find the free variables of the abstracted pattern\n\n * Pass these variables, less any that are in scope at\n the fn defn. 
But see Note [Shadowing] below.\n\n\nNOTICE that we only abstract over variables that are not in scope,\nso we're in no danger of shadowing variables used in \"higher up\"\nin f_spec's RHS.\n\n\nNote [Shadowing]\n~~~~~~~~~~~~~~~~\nIn this pass we gather up usage information that may mention variables\nthat are bound between the usage site and the definition site; or (more\nseriously) may be bound to something different at the definition site.\nFor example:\n\n f x = letrec g y v = let x = ...\n in ...(g (a,b) x)...\n\nSince 'x' is in scope at the call site, we may make a rewrite rule that\nlooks like\n RULE forall a,b. g (a,b) x = ...\nBut this rule will never match, because it's really a different 'x' at\nthe call site -- and that difference will be manifest by the time the\nsimplifier gets to it. [A worry: the simplifier doesn't *guarantee*\nno-shadowing, so perhaps it may not be distinct?]\n\nAnyway, the rule isn't actually wrong, it's just not useful. One possibility\nis to run deShadowBinds before running SpecConstr, but instead we run the\nsimplifier. 
That gives the simplest possible program for SpecConstr to\nchew on; and it virtually guarantees no shadowing.\n\nNote [Specialising for constant parameters]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThis one is about specialising on a *constant* (but not necessarily\nconstructor) argument\n\n foo :: Int -> (Int -> Int) -> Int\n foo 0 f = 0\n foo m f = foo (f m) (+1)\n\nIt produces\n\n lvl_rmV :: GHC.Base.Int -> GHC.Base.Int\n lvl_rmV =\n \\ (ds_dlk :: GHC.Base.Int) ->\n case ds_dlk of wild_alH { GHC.Base.I# x_alG ->\n GHC.Base.I# (GHC.Prim.+# x_alG 1)\n\n T.$wfoo :: GHC.Prim.Int# -> (GHC.Base.Int -> GHC.Base.Int) ->\n GHC.Prim.Int#\n T.$wfoo =\n \\ (ww_sme :: GHC.Prim.Int#) (w_smg :: GHC.Base.Int -> GHC.Base.Int) ->\n case ww_sme of ds_Xlw {\n __DEFAULT ->\n case w_smg (GHC.Base.I# ds_Xlw) of w1_Xmo { GHC.Base.I# ww1_Xmz ->\n T.$wfoo ww1_Xmz lvl_rmV\n };\n 0 -> 0\n }\n\nThe recursive call has lvl_rmV as its argument, so we could create a specialised copy\nwith that argument baked in; that is, not passed at all. Now it can perhaps be inlined.\n\nWhen is this worth it? 
Call the constant 'lvl'\n- If 'lvl' has an unfolding that is a constructor, see if the corresponding\n parameter is scrutinised anywhere in the body.\n\n- If 'lvl' has an unfolding that is an inlinable function, see if the corresponding\n parameter is applied (...to enough arguments...?)\n\n Also do this if the function has RULES?\n\nAlso\n\nNote [Specialising for lambda parameters]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n foo :: Int -> (Int -> Int) -> Int\n foo 0 f = 0\n foo m f = foo (f m) (\\n -> n-m)\n\nThis is subtly different from the previous one in that we get an\nexplicit lambda as the argument:\n\n T.$wfoo :: GHC.Prim.Int# -> (GHC.Base.Int -> GHC.Base.Int) ->\n GHC.Prim.Int#\n T.$wfoo =\n \\ (ww_sm8 :: GHC.Prim.Int#) (w_sma :: GHC.Base.Int -> GHC.Base.Int) ->\n case ww_sm8 of ds_Xlr {\n __DEFAULT ->\n case w_sma (GHC.Base.I# ds_Xlr) of w1_Xmf { GHC.Base.I# ww1_Xmq ->\n T.$wfoo\n ww1_Xmq\n (\\ (n_ad3 :: GHC.Base.Int) ->\n case n_ad3 of wild_alB { GHC.Base.I# x_alA ->\n GHC.Base.I# (GHC.Prim.-# x_alA ds_Xlr)\n })\n };\n 0 -> 0\n }\n\nI wonder if SpecConstr couldn't be extended to handle this? After all,\nlambda is a sort of constructor for functions and perhaps it already\nhas most of the necessary machinery?\n\nFurthermore, there's an immediate win, because you don't need to allocate the lambda\nat the call site; and if perchance it's called in the recursive call, then you\nmay avoid allocating it altogether. Just like for constructors.\n\nLooks cool, but probably rare...but it might be easy to implement.\n\n\nNote [SpecConstr for casts]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\nConsider\n data family T a :: *\n data instance T Int = T Int\n\n foo n = ...\n where\n go (T 0) = 0\n go (T n) = go (T (n-1))\n\nThe recursive call ends up looking like\n go (T (I# ...) 
`cast` g)\nSo we want to spot the constructor application inside the cast.\nThat's why we have the Cast case in argToPat\n\nNote [Local recursive groups]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nFor a *local* recursive group, we can see all the calls to the\nfunction, so we seed the specialisation loop from the calls in the\nbody, not from the calls in the RHS. Consider:\n\n bar m n = foo n (n,n) (n,n) (n,n) (n,n)\n where\n foo n p q r s\n | n == 0 = m\n | n > 3000 = case p of { (p1,p2) -> foo (n-1) (p2,p1) q r s }\n | n > 2000 = case q of { (q1,q2) -> foo (n-1) p (q2,q1) r s }\n | n > 1000 = case r of { (r1,r2) -> foo (n-1) p q (r2,r1) s }\n | otherwise = case s of { (s1,s2) -> foo (n-1) p q r (s2,s1) }\n\nIf we start with the RHSs of 'foo', we get lots and lots of specialisations,\nmost of which are not needed. But if we start with the (single) call\nin the rhs of 'bar' we get exactly one fully-specialised copy, and all\nthe recursive calls go to this fully-specialised copy. Indeed, the original\nfunction is later collected as dead code. This is very important in\nspecialising the loops arising from stream fusion, for example in NDP where\nwe were getting literally hundreds of (mostly unused) specialisations of\na local function.\n\nIn a case like the above we end up never calling the original un-specialised\nfunction. (Although we still leave its code around just in case.)\n\nHowever, if we find any boring calls in the body, including *unsaturated*\nones, such as\n letrec foo x y = ....foo...\n in map foo xs\nthen we will end up calling the un-specialised function, so then we *should*\nuse the calls in the un-specialised RHS as seeds. We call these\n\"boring call patterns\", and callsToPats reports if it finds any of these.\n\n\nNote [Top-level recursive groups]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIf all the bindings in a top-level recursive group are local (not\nexported), then all the calls are in the rest of the top-level\nbindings. 
This means we can specialise with those call patterns\ninstead of with the RHSs of the recursive group.\n\n(Question: maybe we should *also* use calls in the rest of the\ntop-level bindings as seeds?)\n\nTo get the call usage information, we work backwards through the\ntop-level bindings so we see the usage before we get to the binding of\nthe function. Before we can collect the usage though, we go through\nall the bindings and add them to the environment. This is necessary\nbecause usage is only tracked for functions in the environment.\n\nThe actual seeding of the specialisation is very similar to Note [Local recursive groups].\n\n\nNote [Do not specialise diverging functions]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nSpecialising a function that just diverges is a waste of code.\nFurthermore, it broke GHC (simpl014) thus:\n {-# STR Sb #-}\n f = \\x. case x of (a,b) -> f x\nIf we specialise f we get\n f = \\x. case x of (a,b) -> fspec a b\nBut fspec doesn't have decent strictness info. As it happened,\n(f x) :: IO t, so the state hack applied and we eta expanded fspec,\nand hence f. But now f's strictness is less than its arity, which\nbreaks an invariant.\n\n\nNote [Forcing specialisation]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWith stream fusion and in other similar cases, we want to fully\nspecialise some (but not necessarily all!) loops regardless of their\nsize and the number of specialisations.\n\nWe allow a library to do this, in one of two ways (one of which is\ndeprecated):\n\n 1) Add a parameter of type GHC.Types.SPEC (from ghc-prim) to the loop body.\n\n 2) (Deprecated) Annotate a type with ForceSpecConstr from GHC.Exts,\n and then add *that* type as a parameter to the loop body\n\nThe reason #2 is deprecated is because it requires GHCi, which isn't\navailable for things like a cross compiler using stage1.\n\nHere's a (simplified) example from the `vector` package. 
You may bring\nthe special 'force specialization' type into scope by saying:\n\n import GHC.Types (SPEC(..))\n\nor by defining your own type (again, deprecated):\n\n data SPEC = SPEC | SPEC2\n {-# ANN type SPEC ForceSpecConstr #-}\n\n(Note this is the exact same definition of GHC.Types.SPEC, just\nwithout the annotation.)\n\nAfter that, you say:\n\n foldl :: (a -> b -> a) -> a -> Stream b -> a\n {-# INLINE foldl #-}\n foldl f z (Stream step s _) = foldl_loop SPEC z s\n where\n foldl_loop !sPEC z s = case step s of\n Yield x s' -> foldl_loop sPEC (f z x) s'\n Skip -> foldl_loop sPEC z s'\n Done -> z\n\nSpecConstr will spot the SPEC parameter and always fully specialise\nfoldl_loop. Note that\n\n * We have to prevent the SPEC argument from being removed by\n w\/w which is why (a) SPEC is a sum type, and (b) we have to seq on\n the SPEC argument.\n\n * And lastly, the SPEC argument is ultimately eliminated by\n SpecConstr itself so there is no runtime overhead.\n\nThis is all quite ugly; we ought to come up with a better design.\n\nForceSpecConstr arguments are spotted in scExpr' and scTopBinds which then set\nsc_force to True when calling specLoop. This flag does four things:\n * Ignore specConstrThreshold, to specialise functions of arbitrary size\n (see scTopBind)\n * Ignore specConstrCount, to make arbitrary numbers of specialisations\n (see specialise)\n * Specialise even for arguments that are not scrutinised in the loop\n (see argToPat; Trac #4488)\n * Only specialise on recursive types a finite number of times\n (see is_too_recursive; Trac #5550; Note [Limit recursive specialisation])\n\nThis flag is inherited for nested non-recursive bindings (which are likely to\nbe join points and hence should be fully specialised) but reset for nested\nrecursive bindings.\n\nWhat alternatives did I consider? 
Annotating the loop itself doesn't\nwork because (a) it is local and (b) it will be w\/w'ed and having\nw\/w propagating annotations somehow doesn't seem like a good idea. The\ntypes of the loop arguments really seem to be the most persistent\nthing.\n\nAnnotating the types that make up the loop state doesn't work,\neither, because (a) it would prevent us from using types like Either\nor tuples here, (b) we don't want to restrict the set of types that\ncan be used in Stream states and (c) some types are fixed by the user\n(e.g., the accumulator here) but we still want to specialise as much\nas possible.\n\nAlternatives to ForceSpecConstr\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nInstead of giving the loop an extra argument of type SPEC, we\nalso considered *wrapping* arguments in SPEC, thus\n data SPEC a = SPEC a | SPEC2\n\n loop = \\arg -> case arg of\n SPEC state ->\n case state of (x,y) -> ... loop (SPEC (x',y')) ...\n SPEC2 -> error ...\nThe idea is that a SPEC argument says \"specialise this argument\nregardless of whether the function case-analyses it\". But this\ndoesn't work well:\n * SPEC must still be a sum type, else the strictness analyser\n eliminates it\n * But that means that 'loop' won't be strict in its real payload\nThis loss of strictness in turn screws up specialisation, because\nwe may end up with calls like\n loop (SPEC (case z of (p,q) -> (q,p)))\nWithout the SPEC, if 'loop' were strict, the case would move out\nand we'd see loop applied to a pair. But if 'loop' isn't strict\nthis doesn't look like a specialisable call.\n\nNote [Limit recursive specialisation]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIt is possible for ForceSpecConstr to cause an infinite loop of specialisation.\nBecause there is no limit on the number of specialisations, a recursive call with\na recursive constructor as an argument (for example, list cons) will generate\na specialisation for that constructor. 
If the resulting specialisation also\ncontains a recursive call with the constructor, this could proceed indefinitely.\n\nFor example, if ForceSpecConstr is on:\n loop :: [Int] -> [Int] -> [Int]\n loop z [] = z\n loop z (x:xs) = loop (x:z) xs\nthis example will create a specialisation for the pattern\n loop (a:b) c = loop' a b c\n\n loop' a b [] = (a:b)\n loop' a b (x:xs) = loop (x:(a:b)) xs\nand a new pattern is found:\n loop (a:(b:c)) d = loop'' a b c d\nwhich can continue indefinitely.\n\nRoman's suggestion to fix this was to stop after a couple of times on recursive types,\nbut still specialising on non-recursive types as much as possible.\n\nTo implement this, we count the number of recursive constructors in each\nfunction argument. If the maximum is greater than the specConstrRecursive limit,\ndo not specialise on that pattern.\n\nThis is only necessary when ForceSpecConstr is on: otherwise the specConstrCount\nwill force termination anyway.\n\nSee Trac #5550.\n\nNote [NoSpecConstr]\n~~~~~~~~~~~~~~~~~~~\nThe ignoreDataCon stuff allows you to say\n {-# ANN type T NoSpecConstr #-}\nto mean \"don't specialise on arguments of this type\". It was added\nbefore we had ForceSpecConstr. Lacking ForceSpecConstr we specialised\nregardless of size; and then we needed a way to turn that *off*. Now\nthat we have ForceSpecConstr, this NoSpecConstr is probably redundant.\n(Used only for PArray.)\n\n-----------------------------------------------------\n Stuff not yet handled\n-----------------------------------------------------\n\nHere are notes arising from Roman's work that I don't want to lose.\n\nExample 1\n~~~~~~~~~\n data T a = T !a\n\n foo :: Int -> T Int -> Int\n foo 0 t = 0\n foo x t | even x = case t of { T n -> foo (x-n) t }\n | otherwise = foo (x-1) t\n\nSpecConstr does no specialisation, because the second recursive call\nlooks like a boxed use of the argument. 
A pity.\n\n $wfoo_sFw :: GHC.Prim.Int# -> T.T GHC.Base.Int -> GHC.Prim.Int#\n $wfoo_sFw =\n \\ (ww_sFo [Just L] :: GHC.Prim.Int#) (w_sFq [Just L] :: T.T GHC.Base.Int) ->\n case ww_sFo of ds_Xw6 [Just L] {\n __DEFAULT ->\n case GHC.Prim.remInt# ds_Xw6 2 of wild1_aEF [Dead Just A] {\n __DEFAULT -> $wfoo_sFw (GHC.Prim.-# ds_Xw6 1) w_sFq;\n 0 ->\n case w_sFq of wild_Xy [Just L] { T.T n_ad5 [Just U(L)] ->\n case n_ad5 of wild1_aET [Just A] { GHC.Base.I# y_aES [Just L] ->\n $wfoo_sFw (GHC.Prim.-# ds_Xw6 y_aES) wild_Xy\n } } };\n 0 -> 0\n\nExample 2\n~~~~~~~~~\n data a :*: b = !a :*: !b\n data T a = T !a\n\n foo :: (Int :*: T Int) -> Int\n foo (0 :*: t) = 0\n foo (x :*: t) | even x = case t of { T n -> foo ((x-n) :*: t) }\n | otherwise = foo ((x-1) :*: t)\n\nVery similar to the previous one, except that the parameters are now in\na strict tuple. Before SpecConstr, we have\n\n $wfoo_sG3 :: GHC.Prim.Int# -> T.T GHC.Base.Int -> GHC.Prim.Int#\n $wfoo_sG3 =\n \\ (ww_sFU [Just L] :: GHC.Prim.Int#) (ww_sFW [Just L] :: T.T\n GHC.Base.Int) ->\n case ww_sFU of ds_Xws [Just L] {\n __DEFAULT ->\n case GHC.Prim.remInt# ds_Xws 2 of wild1_aEZ [Dead Just A] {\n __DEFAULT ->\n case ww_sFW of tpl_B2 [Just L] { T.T a_sFo [Just A] ->\n $wfoo_sG3 (GHC.Prim.-# ds_Xws 1) tpl_B2 -- $wfoo1\n };\n 0 ->\n case ww_sFW of wild_XB [Just A] { T.T n_ad7 [Just S(L)] ->\n case n_ad7 of wild1_aFd [Just L] { GHC.Base.I# y_aFc [Just L] ->\n $wfoo_sG3 (GHC.Prim.-# ds_Xws y_aFc) wild_XB -- $wfoo2\n } } };\n 0 -> 0 }\n\nWe get two specialisations:\n\"SC:$wfoo1\" [0] __forall {a_sFB :: GHC.Base.Int sc_sGC :: GHC.Prim.Int#}\n Foo.$wfoo sc_sGC (Foo.T @ GHC.Base.Int a_sFB)\n = Foo.$s$wfoo1 a_sFB sc_sGC ;\n\"SC:$wfoo2\" [0] __forall {y_aFp :: GHC.Prim.Int# sc_sGC :: GHC.Prim.Int#}\n Foo.$wfoo sc_sGC (Foo.T @ GHC.Base.Int (GHC.Base.I# y_aFp))\n = Foo.$s$wfoo y_aFp sc_sGC ;\n\nBut perhaps the first one isn't good. After all, we know that tpl_B2 is\na T (I# x) really, because T is strict and Int has one constructor. 
(We can't\nunbox the strict fields, because T is polymorphic!)\n\n%************************************************************************\n%* *\n\\subsection{Top level wrapper stuff}\n%* *\n%************************************************************************\n\n\\begin{code}\nspecConstrProgram :: ModGuts -> CoreM ModGuts\nspecConstrProgram guts\n = do\n dflags <- getDynFlags\n us <- getUniqueSupplyM\n annos <- getFirstAnnotations deserializeWithData guts\n let binds' = reverse $ fst $ initUs us $ do\n -- Note [Top-level recursive groups]\n (env, binds) <- goEnv (initScEnv dflags annos) (mg_binds guts)\n go env nullUsage (reverse binds)\n\n return (guts { mg_binds = binds' })\n where\n goEnv env [] = return (env, [])\n goEnv env (bind:binds) = do (env', bind') <- scTopBindEnv env bind\n (env'', binds') <- goEnv env' binds\n return (env'', bind' : binds')\n\n go _ _ [] = return []\n go env usg (bind:binds) = do (usg', bind') <- scTopBind env usg bind\n binds' <- go env usg' binds\n return (bind' : binds')\n\\end{code}\n\n\n%************************************************************************\n%* *\n\\subsection{Environment: goes downwards}\n%* *\n%************************************************************************\n\nNote [Work-free values only in environment]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe sc_vals field keeps track of in-scope value bindings, so \nthat if we come across (case x of Just y ->...) we can reduce the\ncase from knowing that x is bound to a pair.\n\nBut only *work-free* values are ok here. For example if the envt had\n x -> Just (expensive v)\nthen we do NOT want to expand to\n let y = expensive v in ...\nbecause the x-binding still exists and we've now duplicated (expensive v).\n\nThis seldom happens because let-bound constructor applications are \nANF-ised, but it can happen as a result of on-the-fly transformations in\nSpecConstr itself. 
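The shape of the work-free test can be sketched over a toy expression type (an illustrative sketch only, not GHC code; the real check is the `valueIsWorkFree` test over CoreExprs, used in extendValEnv below):

```haskell
-- Toy stand-in for CoreExpr, just to illustrate the work-free test.
data Expr = Var String            -- variables are work-free
          | Lit Int               -- literals are work-free
          | Lam String Expr       -- lambdas are values, hence work-free
          | Con String [Expr]     -- work-free iff all arguments are
          | App Expr Expr         -- a general application may do work

-- A binding can safely be expanded at its occurrences only if
-- duplicating its RHS duplicates no computation:
workFree :: Expr -> Bool
workFree (Var _)      = True
workFree (Lit _)      = True
workFree (Lam _ _)    = True
workFree (Con _ args) = all workFree args
workFree (App _ _)    = False

-- Just v            : work-free, safe to record in sc_vals
-- Just (expensive v): not work-free, must NOT be recorded
ok, bad :: Bool
ok  = workFree (Con "Just" [Var "v"])                          -- True
bad = workFree (Con "Just" [App (Var "expensive") (Var "v")])  -- False
```
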
Here is Trac #7865:

        let {
          a'_shr =
            case xs_af8 of _ {
              [] -> acc_af6;
              : ds_dgt [Dmd=<L,A>] ds_dgu [Dmd=<L,A>] ->
                (expensive x_af7, x_af7)
            } } in
        let {
          ds_sht =
            case a'_shr of _ { (p'_afd, q'_afe) ->
            TSpecConstr_DoubleInline.recursive
              (GHC.Types.: @ GHC.Types.Int x_af7 wild_X6) (q'_afe, p'_afd)
            } } in

When processed knowing that xs_af8 was bound to a cons, we simplify to
   a'_shr = (expensive x_af7, x_af7)
and we do NOT want to inline that at the occurrence of a'_shr in ds_sht.
(There are other occurrences of a'_shr.)  No no no.

It would be possible to do some on-the-fly ANF-ising, so that a'_shr turned
into a work-free value again, thus
   a1 = expensive x_af7
   a'_shr = (a1, x_af7)
but that's more work, so until it's shown to be important I'm going to
leave it for now.

\begin{code}
data ScEnv = SCE { sc_dflags :: DynFlags,
                   sc_size   :: Maybe Int,   -- Size threshold
                   sc_count  :: Maybe Int,   -- Max # of specialisations for any one fn
                                             -- See Note [Avoiding exponential blowup]

                   sc_recursive :: Int,      -- Max # of specialisations over recursive type.
                                             -- Stops ForceSpecConstr from diverging.

                   sc_force :: Bool,         -- Force specialisation?
                                             -- See Note [Forcing specialisation]

                   sc_subst :: Subst,        -- Current substitution
                                             -- Maps InIds to OutExprs

                   sc_how_bound :: HowBoundEnv,
                        -- Binds interesting non-top-level variables
                        -- Domain is OutVars (*after* applying the substitution)

                   sc_vals :: ValueEnv,
                        -- Domain is OutIds (*after* applying the substitution)
                        -- Used even for top-level bindings (but not imported ones)
                        -- The range of the ValueEnv is *work-free* values
                        -- such as (\x. 
blah), or (Just v)
                        -- but NOT (Just (expensive v))
                        -- See Note [Work-free values only in environment]

                   sc_annotations :: UniqFM SpecConstrAnnotation
             }

---------------------
-- As we go, we apply a substitution (sc_subst) to the current term
type InExpr = CoreExpr          -- _Before_ applying the subst
type InVar  = Var

type OutExpr = CoreExpr         -- _After_ applying the subst
type OutId   = Id
type OutVar  = Var

---------------------
type HowBoundEnv = VarEnv HowBound      -- Domain is OutVars

---------------------
type ValueEnv = IdEnv Value             -- Domain is OutIds
data Value    = ConVal AltCon [CoreArg] -- _Saturated_ constructors
                                        -- The AltCon is never DEFAULT
              | LambdaVal               -- Inlinable lambdas or PAPs

instance Outputable Value where
   ppr (ConVal con args) = ppr con <+> interpp'SP args
   ppr LambdaVal         = ptext (sLit "<Lambda>")

---------------------
initScEnv :: DynFlags -> UniqFM SpecConstrAnnotation -> ScEnv
initScEnv dflags anns
  = SCE { sc_dflags      = dflags,
          sc_size        = specConstrThreshold dflags,
          sc_count       = specConstrCount dflags,
          sc_recursive   = specConstrRecursive dflags,
          sc_force       = False,
          sc_subst       = emptySubst,
          sc_how_bound   = emptyVarEnv,
          sc_vals        = emptyVarEnv,
          sc_annotations = anns }

data HowBound = RecFun  -- These are the recursive functions for which
                        -- we seek interesting call patterns

              | RecArg  -- These are those functions' arguments, or their sub-components;
                        -- we gather occurrence information for these

instance Outputable HowBound where
  ppr RecFun = text "RecFun"
  ppr RecArg = text "RecArg"

scForce :: ScEnv -> Bool -> ScEnv
scForce env b = env { sc_force = b }

lookupHowBound :: ScEnv -> Id -> Maybe HowBound
lookupHowBound env id = lookupVarEnv (sc_how_bound env) id

scSubstId :: ScEnv -> Id -> CoreExpr
scSubstId env v = lookupIdSubst (text "scSubstId") (sc_subst env) v

scSubstTy :: ScEnv -> Type -> Type
scSubstTy env ty = substTy (sc_subst env) ty

scSubstCo :: ScEnv -> Coercion -> 
Coercion\nscSubstCo env co = substCo (sc_subst env) co\n\nzapScSubst :: ScEnv -> ScEnv\nzapScSubst env = env { sc_subst = zapSubstEnv (sc_subst env) }\n\nextendScInScope :: ScEnv -> [Var] -> ScEnv\n -- Bring the quantified variables into scope\nextendScInScope env qvars = env { sc_subst = extendInScopeList (sc_subst env) qvars }\n\n -- Extend the substitution\nextendScSubst :: ScEnv -> Var -> OutExpr -> ScEnv\nextendScSubst env var expr = env { sc_subst = extendSubst (sc_subst env) var expr }\n\nextendScSubstList :: ScEnv -> [(Var,OutExpr)] -> ScEnv\nextendScSubstList env prs = env { sc_subst = extendSubstList (sc_subst env) prs }\n\nextendHowBound :: ScEnv -> [Var] -> HowBound -> ScEnv\nextendHowBound env bndrs how_bound\n = env { sc_how_bound = extendVarEnvList (sc_how_bound env)\n [(bndr,how_bound) | bndr <- bndrs] }\n\nextendBndrsWith :: HowBound -> ScEnv -> [Var] -> (ScEnv, [Var])\nextendBndrsWith how_bound env bndrs\n = (env { sc_subst = subst', sc_how_bound = hb_env' }, bndrs')\n where\n (subst', bndrs') = substBndrs (sc_subst env) bndrs\n hb_env' = sc_how_bound env `extendVarEnvList`\n [(bndr,how_bound) | bndr <- bndrs']\n\nextendBndrWith :: HowBound -> ScEnv -> Var -> (ScEnv, Var)\nextendBndrWith how_bound env bndr\n = (env { sc_subst = subst', sc_how_bound = hb_env' }, bndr')\n where\n (subst', bndr') = substBndr (sc_subst env) bndr\n hb_env' = extendVarEnv (sc_how_bound env) bndr' how_bound\n\nextendRecBndrs :: ScEnv -> [Var] -> (ScEnv, [Var])\nextendRecBndrs env bndrs = (env { sc_subst = subst' }, bndrs')\n where\n (subst', bndrs') = substRecBndrs (sc_subst env) bndrs\n\nextendBndr :: ScEnv -> Var -> (ScEnv, Var)\nextendBndr env bndr = (env { sc_subst = subst' }, bndr')\n where\n (subst', bndr') = substBndr (sc_subst env) bndr\n\nextendValEnv :: ScEnv -> Id -> Maybe Value -> ScEnv\nextendValEnv env _ Nothing = env\nextendValEnv env id (Just cv) \n | valueIsWorkFree cv -- Don't duplicate work!! 
Trac #7865\n = env { sc_vals = extendVarEnv (sc_vals env) id cv }\nextendValEnv env _ _ = env\n\nextendCaseBndrs :: ScEnv -> OutExpr -> OutId -> AltCon -> [Var] -> (ScEnv, [Var])\n-- When we encounter\n-- case scrut of b\n-- C x y -> ...\n-- we want to bind b, to (C x y)\n-- NB1: Extends only the sc_vals part of the envt\n-- NB2: Kill the dead-ness info on the pattern binders x,y, since\n-- they are potentially made alive by the [b -> C x y] binding\nextendCaseBndrs env scrut case_bndr con alt_bndrs\n = (env2, alt_bndrs')\n where\n live_case_bndr = not (isDeadBinder case_bndr)\n env1 | Var v <- scrut = extendValEnv env v cval\n | otherwise = env -- See Note [Add scrutinee to ValueEnv too]\n env2 | live_case_bndr = extendValEnv env1 case_bndr cval\n | otherwise = env1\n\n alt_bndrs' | case scrut of { Var {} -> True; _ -> live_case_bndr }\n = map zap alt_bndrs\n | otherwise\n = alt_bndrs\n\n cval = case con of\n DEFAULT -> Nothing\n LitAlt {} -> Just (ConVal con [])\n DataAlt {} -> Just (ConVal con vanilla_args)\n where\n vanilla_args = map Type (tyConAppArgs (idType case_bndr)) ++\n varsToCoreExprs alt_bndrs\n\n zap v | isTyVar v = v -- See NB2 above\n | otherwise = zapIdOccInfo v\n\n\ndecreaseSpecCount :: ScEnv -> Int -> ScEnv\n-- See Note [Avoiding exponential blowup]\ndecreaseSpecCount env n_specs\n = env { sc_count = case sc_count env of\n Nothing -> Nothing\n Just n -> Just (n `div` (n_specs + 1)) }\n -- The \"+1\" takes account of the original function;\n -- See Note [Avoiding exponential blowup]\n\n---------------------------------------------------\n-- See Note [Forcing specialisation]\nignoreType :: ScEnv -> Type -> Bool\nignoreDataCon :: ScEnv -> DataCon -> Bool\nforceSpecBndr :: ScEnv -> Var -> Bool\n\n#ifndef GHCI\nignoreType _ _ = False\nignoreDataCon _ _ = False\n#else \/* GHCI *\/\n\nignoreDataCon env dc = ignoreTyCon env (dataConTyCon dc)\n\nignoreType env ty\n = case tyConAppTyCon_maybe ty of\n Just tycon -> ignoreTyCon env tycon\n _ -> 
False\n\nignoreTyCon :: ScEnv -> TyCon -> Bool\nignoreTyCon env tycon\n = lookupUFM (sc_annotations env) tycon == Just NoSpecConstr\n#endif \/* GHCI *\/\n\nforceSpecBndr env var = forceSpecFunTy env . snd . splitForAllTys . varType $ var\n\nforceSpecFunTy :: ScEnv -> Type -> Bool\nforceSpecFunTy env = any (forceSpecArgTy env) . fst . splitFunTys\n\nforceSpecArgTy :: ScEnv -> Type -> Bool\nforceSpecArgTy env ty\n | Just ty' <- coreView ty = forceSpecArgTy env ty'\n\nforceSpecArgTy env ty\n | Just (tycon, tys) <- splitTyConApp_maybe ty\n , tycon \/= funTyCon\n = tyConName tycon == specTyConName\n#ifdef GHCI\n || lookupUFM (sc_annotations env) tycon == Just ForceSpecConstr\n#endif\n || any (forceSpecArgTy env) tys\n\nforceSpecArgTy _ _ = False\n\\end{code}\n\nNote [Add scrutinee to ValueEnv too]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nConsider this:\n case x of y\n (a,b) -> case b of c\n I# v -> ...(f y)...\nBy the time we get to the call (f y), the ValueEnv\nwill have a binding for y, and for c\n y -> (a,b)\n c -> I# v\nBUT that's not enough! Looking at the call (f y) we\nsee that y is pair (a,b), but we also need to know what 'b' is.\nSo in extendCaseBndrs we must *also* add the binding\n b -> I# v\nelse we lose a useful specialisation for f. This is necessary even\nthough the simplifier has systematically replaced uses of 'x' with 'y'\nand 'b' with 'c' in the code. The use of 'b' in the ValueEnv came\nfrom outside the case. See Trac #4908 for the live example.\n\nNote [Avoiding exponential blowup]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe sc_count field of the ScEnv says how many times we are prepared to\nduplicate a single function. But we must take care with recursive\nspecialisations. Consider\n\n let $j1 = let $j2 = let $j3 = ...\n in\n ...$j3...\n in\n ...$j2...\n in\n ...$j1...\n\nIf we specialise $j1 then in each specialisation (as well as the original)\nwe can specialise $j2, and similarly $j3. 
Even if we make just *one*\nspecialisation of each, because we also have the original we'll get 2^n\ncopies of $j3, which is not good.\n\nSo when recursively specialising we divide the sc_count by the number of\ncopies we are making at this level, including the original.\n\n\n%************************************************************************\n%* *\n\\subsection{Usage information: flows upwards}\n%* *\n%************************************************************************\n\n\\begin{code}\ndata ScUsage\n = SCU {\n scu_calls :: CallEnv, -- Calls\n -- The functions are a subset of the\n -- RecFuns in the ScEnv\n\n scu_occs :: !(IdEnv ArgOcc) -- Information on argument occurrences\n } -- The domain is OutIds\n\ntype CallEnv = IdEnv [Call]\ndata Call = Call Id [CoreArg] ValueEnv\n -- The arguments of the call, together with the\n -- env giving the constructor bindings at the call site\n -- We keep the function mainly for debug output\n\ninstance Outputable Call where\n ppr (Call fn args _) = ppr fn <+> fsep (map pprParendExpr args)\n\nnullUsage :: ScUsage\nnullUsage = SCU { scu_calls = emptyVarEnv, scu_occs = emptyVarEnv }\n\ncombineCalls :: CallEnv -> CallEnv -> CallEnv\ncombineCalls = plusVarEnv_C (++)\n where\n-- plus cs ds | length res > 1\n-- = pprTrace \"combineCalls\" (vcat [ ptext (sLit \"cs:\") <+> ppr cs\n-- , ptext (sLit \"ds:\") <+> ppr ds])\n-- res\n-- | otherwise = res\n-- where\n-- res = cs ++ ds\n\ncombineUsage :: ScUsage -> ScUsage -> ScUsage\ncombineUsage u1 u2 = SCU { scu_calls = combineCalls (scu_calls u1) (scu_calls u2),\n scu_occs = plusVarEnv_C combineOcc (scu_occs u1) (scu_occs u2) }\n\ncombineUsages :: [ScUsage] -> ScUsage\ncombineUsages [] = nullUsage\ncombineUsages us = foldr1 combineUsage us\n\nlookupOccs :: ScUsage -> [OutVar] -> (ScUsage, [ArgOcc])\nlookupOccs (SCU { scu_calls = sc_calls, scu_occs = sc_occs }) bndrs\n = (SCU {scu_calls = sc_calls, scu_occs = delVarEnvList sc_occs bndrs},\n [lookupVarEnv sc_occs b `orElse` NoOcc | 
b <- bndrs])

data ArgOcc = NoOcc     -- Doesn't occur at all; or a type argument
            | UnkOcc    -- Used in some unknown way

            | ScrutOcc  -- See Note [ScrutOcc]
                 (DataConEnv [ArgOcc])  -- How the sub-components are used

type DataConEnv a = UniqFM a    -- Keyed by DataCon

{- Note [ScrutOcc]
~~~~~~~~~~~~~~~~~~~
An occurrence of ScrutOcc indicates that the thing, or a `cast` version of the thing,
is *only* taken apart or applied.

  Functions, literal: ScrutOcc emptyUFM
  Data constructors:  ScrutOcc subs,

where (subs :: UniqFM [ArgOcc]) gives usage of the *pattern-bound* components.
The domain of the UniqFM is the Unique of the data constructor

The [ArgOcc] is the occurrences of the *pattern-bound* components
of the data structure.  E.g.
        data T a = forall b. MkT a b (b->a)
A pattern binds b, x::a, y::b, z::b->a, but not 'a'!

-}

instance Outputable ArgOcc where
  ppr (ScrutOcc xs) = ptext (sLit "scrut-occ") <> ppr xs
  ppr UnkOcc        = ptext (sLit "unk-occ")
  ppr NoOcc         = ptext (sLit "no-occ")

evalScrutOcc :: ArgOcc
evalScrutOcc = ScrutOcc emptyUFM

-- Experimentally, this version of combineOcc makes ScrutOcc "win", so
-- that if the thing is scrutinised anywhere then we get to see that
-- in the overall result, even if it's also used in a boxed way
-- This might be too aggressive; see Note [Reboxing] Alternative 3
combineOcc :: ArgOcc -> ArgOcc -> ArgOcc
combineOcc NoOcc         occ           = occ
combineOcc occ           NoOcc         = occ
combineOcc (ScrutOcc xs) (ScrutOcc ys) = ScrutOcc (plusUFM_C combineOccs xs ys)
combineOcc UnkOcc        (ScrutOcc ys) = ScrutOcc ys
combineOcc (ScrutOcc xs) UnkOcc        = ScrutOcc xs
combineOcc UnkOcc        UnkOcc        = UnkOcc

combineOccs :: [ArgOcc] -> [ArgOcc] -> [ArgOcc]
combineOccs xs ys = zipWithEqual "combineOccs" combineOcc xs ys

setScrutOcc :: ScEnv -> ScUsage -> OutExpr -> ArgOcc -> ScUsage
-- _Overwrite_ the occurrence info for the scrutinee, if the scrutinee
-- is a variable, and an interesting variable
setScrutOcc env usg 
(Cast e _) occ = setScrutOcc env usg e occ\nsetScrutOcc env usg (Tick _ e) occ = setScrutOcc env usg e occ\nsetScrutOcc env usg (Var v) occ\n | Just RecArg <- lookupHowBound env v = usg { scu_occs = extendVarEnv (scu_occs usg) v occ }\n | otherwise = usg\nsetScrutOcc _env usg _other _occ -- Catch-all\n = usg\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{The main recursive function}\n%* *\n%************************************************************************\n\nThe main recursive function gathers up usage information, and\ncreates specialised versions of functions.\n\n\\begin{code}\nscExpr, scExpr' :: ScEnv -> CoreExpr -> UniqSM (ScUsage, CoreExpr)\n -- The unique supply is needed when we invent\n -- a new name for the specialised function and its args\n\nscExpr env e = scExpr' env e\n\n\nscExpr' env (Var v) = case scSubstId env v of\n Var v' -> return (mkVarUsage env v' [], Var v')\n e' -> scExpr (zapScSubst env) e'\n\nscExpr' env (Type t) = return (nullUsage, Type (scSubstTy env t))\nscExpr' env (Coercion c) = return (nullUsage, Coercion (scSubstCo env c))\nscExpr' _ e@(Lit {}) = return (nullUsage, e)\nscExpr' env (Tick t e) = do (usg, e') <- scExpr env e\n return (usg, Tick t e')\nscExpr' env (Cast e co) = do (usg, e') <- scExpr env e\n return (usg, Cast e' (scSubstCo env co))\nscExpr' env e@(App _ _) = scApp env (collectArgs e)\nscExpr' env (Lam b e) = do let (env', b') = extendBndr env b\n (usg, e') <- scExpr env' e\n return (usg, Lam b' e')\n\nscExpr' env (Case scrut b ty alts)\n = do { (scrut_usg, scrut') <- scExpr env scrut\n ; case isValue (sc_vals env) scrut' of\n Just (ConVal con args) -> sc_con_app con args scrut'\n _other -> sc_vanilla scrut_usg scrut'\n }\n where\n sc_con_app con args scrut' -- Known constructor; simplify\n = do { let (_, bs, rhs) = findAlt con alts\n `orElse` (DEFAULT, [], mkImpossibleExpr ty)\n alt_env' = extendScSubstList env ((b,scrut') : bs `zip` trimConArgs con 
args)\n ; scExpr alt_env' rhs }\n\n sc_vanilla scrut_usg scrut' -- Normal case\n = do { let (alt_env,b') = extendBndrWith RecArg env b\n -- Record RecArg for the components\n\n ; (alt_usgs, alt_occs, alts')\n <- mapAndUnzip3M (sc_alt alt_env scrut' b') alts\n\n ; let scrut_occ = foldr combineOcc NoOcc alt_occs\n scrut_usg' = setScrutOcc env scrut_usg scrut' scrut_occ\n -- The combined usage of the scrutinee is given\n -- by scrut_occ, which is passed to scScrut, which\n -- in turn treats a bare-variable scrutinee specially\n\n ; return (foldr combineUsage scrut_usg' alt_usgs,\n Case scrut' b' (scSubstTy env ty) alts') }\n\n sc_alt env scrut' b' (con,bs,rhs)\n = do { let (env1, bs1) = extendBndrsWith RecArg env bs\n (env2, bs2) = extendCaseBndrs env1 scrut' b' con bs1\n ; (usg, rhs') <- scExpr env2 rhs\n ; let (usg', b_occ:arg_occs) = lookupOccs usg (b':bs2)\n scrut_occ = case con of\n DataAlt dc -> ScrutOcc (unitUFM dc arg_occs)\n _ -> ScrutOcc emptyUFM\n ; return (usg', b_occ `combineOcc` scrut_occ, (con, bs2, rhs')) }\n\nscExpr' env (Let (NonRec bndr rhs) body)\n | isTyVar bndr -- Type-lets may be created by doBeta\n = scExpr' (extendScSubst env bndr rhs) body\n\n | otherwise\n = do { let (body_env, bndr') = extendBndr env bndr\n ; (rhs_usg, rhs_info) <- scRecRhs env (bndr',rhs)\n\n ; let body_env2 = extendHowBound body_env [bndr'] RecFun\n -- Note [Local let bindings]\n RI _ rhs' _ _ _ = rhs_info\n body_env3 = extendValEnv body_env2 bndr' (isValue (sc_vals env) rhs')\n\n ; (body_usg, body') <- scExpr body_env3 body\n\n -- NB: For non-recursive bindings we inherit sc_force flag from\n -- the parent function (see Note [Forcing specialisation])\n ; (spec_usg, specs) <- specialise env\n (scu_calls body_usg)\n rhs_info\n (SI [] 0 (Just rhs_usg))\n\n ; return (body_usg { scu_calls = scu_calls body_usg `delVarEnv` bndr' }\n `combineUsage` spec_usg, -- Note [spec_usg includes rhs_usg]\n mkLets [NonRec b r | (b,r) <- specInfoBinds rhs_info specs] body')\n }\n\n\n-- A 
*local* recursive group: see Note [Local recursive groups]
scExpr' env (Let (Rec prs) body)
  = do  { let (bndrs,rhss)      = unzip prs
              (rhs_env1,bndrs') = extendRecBndrs env bndrs
              rhs_env2          = extendHowBound rhs_env1 bndrs' RecFun
              force_spec        = any (forceSpecBndr env) bndrs'
                -- Note [Forcing specialisation]

        ; (rhs_usgs, rhs_infos) <- mapAndUnzipM (scRecRhs rhs_env2) (bndrs' `zip` rhss)
        ; (body_usg, body')     <- scExpr rhs_env2 body

        -- NB: start specLoop from body_usg
        ; (spec_usg, specs) <- specLoop (scForce rhs_env2 force_spec)
                                        (scu_calls body_usg) rhs_infos nullUsage
                                        [SI [] 0 (Just usg) | usg <- rhs_usgs]
                -- Do not unconditionally generate specialisations from rhs_usgs
                -- Instead use them only if we find an unspecialised call
                -- See Note [Local recursive groups]

        ; let all_usg = spec_usg `combineUsage` body_usg  -- Note [spec_usg includes rhs_usg]
              bind'   = Rec (concat (zipWith specInfoBinds rhs_infos specs))

        ; return (all_usg { scu_calls = scu_calls all_usg `delVarEnvList` bndrs' },
                  Let bind' body') }
\end{code}

Note [Local let bindings]
~~~~~~~~~~~~~~~~~~~~~~~~~
It is not uncommon to find this

   let $j = \x. <blah> in ...$j True...$j True...

Here $j is an arbitrary let-bound function, but it often comes up for
join points.  We might like to specialise $j for its call patterns.
Notice the difference from a letrec, where we look for call patterns
in the *RHS* of the function.  Here we look for call patterns in the
*body* of the let.

At one point I predicated this on the RHS mentioning the outer
recursive function, but that's not essential and might even be
harmful.  
I'm not sure.\n\n\n\\begin{code}\nscApp :: ScEnv -> (InExpr, [InExpr]) -> UniqSM (ScUsage, CoreExpr)\n\nscApp env (Var fn, args) -- Function is a variable\n = ASSERT( not (null args) )\n do { args_w_usgs <- mapM (scExpr env) args\n ; let (arg_usgs, args') = unzip args_w_usgs\n arg_usg = combineUsages arg_usgs\n ; case scSubstId env fn of\n fn'@(Lam {}) -> scExpr (zapScSubst env) (doBeta fn' args')\n -- Do beta-reduction and try again\n\n Var fn' -> return (arg_usg `combineUsage` mkVarUsage env fn' args',\n mkApps (Var fn') args')\n\n other_fn' -> return (arg_usg, mkApps other_fn' args') }\n -- NB: doing this ignores any usage info from the substituted\n -- function, but I don't think that matters. If it does\n -- we can fix it.\n where\n doBeta :: OutExpr -> [OutExpr] -> OutExpr\n -- ToDo: adjust for System IF\n doBeta (Lam bndr body) (arg : args) = Let (NonRec bndr arg) (doBeta body args)\n doBeta fn args = mkApps fn args\n\n-- The function is almost always a variable, but not always.\n-- In particular, if this pass follows float-in,\n-- which it may, we can get\n-- (let f = ...f... 
in f) arg1 arg2\nscApp env (other_fn, args)\n = do { (fn_usg, fn') <- scExpr env other_fn\n ; (arg_usgs, args') <- mapAndUnzipM (scExpr env) args\n ; return (combineUsages arg_usgs `combineUsage` fn_usg, mkApps fn' args') }\n\n----------------------\nmkVarUsage :: ScEnv -> Id -> [CoreExpr] -> ScUsage\nmkVarUsage env fn args\n = case lookupHowBound env fn of\n Just RecFun -> SCU { scu_calls = unitVarEnv fn [Call fn args (sc_vals env)]\n , scu_occs = emptyVarEnv }\n Just RecArg -> SCU { scu_calls = emptyVarEnv\n , scu_occs = unitVarEnv fn arg_occ }\n Nothing -> nullUsage\n where\n -- I rather think we could use UnkOcc all the time\n arg_occ | null args = UnkOcc\n | otherwise = evalScrutOcc\n\n----------------------\nscTopBindEnv :: ScEnv -> CoreBind -> UniqSM (ScEnv, CoreBind)\nscTopBindEnv env (Rec prs)\n = do { let (rhs_env1,bndrs') = extendRecBndrs env bndrs\n rhs_env2 = extendHowBound rhs_env1 bndrs RecFun\n\n prs' = zip bndrs' rhss\n ; return (rhs_env2, Rec prs') }\n where\n (bndrs,rhss) = unzip prs\n\nscTopBindEnv env (NonRec bndr rhs)\n = do { let (env1, bndr') = extendBndr env bndr\n env2 = extendValEnv env1 bndr' (isValue (sc_vals env) rhs)\n ; return (env2, NonRec bndr' rhs) }\n\n----------------------\nscTopBind :: ScEnv -> ScUsage -> CoreBind -> UniqSM (ScUsage, CoreBind)\n\n{-\nscTopBind _ usage _\n | pprTrace \"scTopBind_usage\" (ppr (scu_calls usage)) False\n = error \"false\"\n-}\n\nscTopBind env body_usage (Rec prs)\n | Just threshold <- sc_size env\n , not force_spec\n , not (all (couldBeSmallEnoughToInline (sc_dflags env) threshold) rhss)\n -- No specialisation\n = do { (rhs_usgs, rhss') <- mapAndUnzipM (scExpr env) rhss\n ; return (body_usage `combineUsage` combineUsages rhs_usgs, Rec (bndrs `zip` rhss')) }\n\n | otherwise -- Do specialisation\n = do { (rhs_usgs, rhs_infos) <- mapAndUnzipM (scRecRhs env) prs\n -- ; pprTrace \"scTopBind\" (ppr bndrs $$ ppr (map (lookupVarEnv (scu_calls body_usage)) bndrs)) (return ())\n\n -- Note [Top-level 
recursive groups]\n ; let (usg,rest) | any isExportedId bndrs -- Seed from RHSs\n = ( combineUsages rhs_usgs, [SI [] 0 Nothing | _ <- rhs_usgs] )\n | otherwise -- Seed from body only\n = ( body_usage, [SI [] 0 (Just us) | us <- rhs_usgs] )\n\n ; (spec_usage, specs) <- specLoop (scForce env force_spec)\n (scu_calls usg) rhs_infos nullUsage rest\n\n ; return (body_usage `combineUsage` spec_usage,\n Rec (concat (zipWith specInfoBinds rhs_infos specs))) }\n where\n (bndrs,rhss) = unzip prs\n force_spec = any (forceSpecBndr env) bndrs\n -- Note [Forcing specialisation]\n\nscTopBind env usage (NonRec bndr rhs) -- Oddly, we don't seem to specialise top-level non-rec functions\n = do { (rhs_usg', rhs') <- scExpr env rhs\n ; return (usage `combineUsage` rhs_usg', NonRec bndr rhs') }\n\n----------------------\nscRecRhs :: ScEnv -> (OutId, InExpr) -> UniqSM (ScUsage, RhsInfo)\nscRecRhs env (bndr,rhs)\n = do { let (arg_bndrs,body) = collectBinders rhs\n (body_env, arg_bndrs') = extendBndrsWith RecArg env arg_bndrs\n ; (body_usg, body') <- scExpr body_env body\n ; let (rhs_usg, arg_occs) = lookupOccs body_usg arg_bndrs'\n ; return (rhs_usg, RI bndr (mkLams arg_bndrs' body')\n arg_bndrs body arg_occs) }\n -- The arg_occs says how the visible,\n -- lambda-bound binders of the RHS are used\n -- (including the TyVar binders)\n -- Two pats are the same if they match both ways\n\n----------------------\nspecInfoBinds :: RhsInfo -> SpecInfo -> [(Id,CoreExpr)]\nspecInfoBinds (RI fn new_rhs _ _ _) (SI specs _ _)\n = [(id,rhs) | OS _ _ id rhs <- specs] ++\n -- First the specialised bindings\n\n [(fn `addIdSpecialisations` rules, new_rhs)]\n -- And now the original binding\n where\n rules = [r | OS _ r _ _ <- specs]\n\n\\end{code}\n\n\n%************************************************************************\n%* *\n The specialiser itself\n%* *\n%************************************************************************\n\n\\begin{code}\ndata RhsInfo = RI OutId -- The binder\n OutExpr -- 
The new RHS
                  [InVar] InExpr       -- The *original* RHS (\xs.body)
                                       -- Note [Specialise original body]
                  [ArgOcc]             -- Info on how the xs occur in body

data SpecInfo = SI [OneSpec]           -- The specialisations we have generated

                   Int                 -- Length of specs; used for numbering them

                   (Maybe ScUsage)     -- Just cs => we have not yet used calls from
                                       --            the *original* RHS as seeds for
                                       --            new specialisations; if you decide
                                       --            to do so, here is the RHS usage
                                       --            (which has not yet been unleashed)
                                       -- Nothing => we have
                                       -- See Note [Local recursive groups]
                                       -- See Note [spec_usg includes rhs_usg]

        -- One specialisation: Rule plus definition
data OneSpec = OS CallPat              -- Call pattern that generated this specialisation
                  CoreRule             -- Rule connecting original id with the specialisation
                  OutId OutExpr        -- Spec id + its rhs


specLoop :: ScEnv
         -> CallEnv
         -> [RhsInfo]
         -> ScUsage -> [SpecInfo]               -- One per binder; accumulating parameter
         -> UniqSM (ScUsage, [SpecInfo])        -- ...ditto...

specLoop env all_calls rhs_infos usg_so_far specs_so_far
  = do  { specs_w_usg <- zipWithM (specialise env all_calls) rhs_infos specs_so_far
        ; let (new_usg_s, all_specs) = unzip specs_w_usg
              new_usg   = combineUsages new_usg_s
              new_calls = scu_calls new_usg
              all_usg   = usg_so_far `combineUsage` new_usg
        ; if isEmptyVarEnv new_calls then
                return (all_usg, all_specs)
          else
                specLoop env new_calls rhs_infos all_usg all_specs }

specialise
   :: ScEnv
   -> CallEnv                     -- Info on newly-discovered calls to this function
   -> RhsInfo
   -> SpecInfo                    -- Original RHS plus patterns dealt with
   -> UniqSM (ScUsage, SpecInfo)  -- New specialised versions and their usage

-- See Note [spec_usg includes rhs_usg]

-- Note: this only generates *specialised* bindings
-- The original binding is added by specInfoBinds
--
-- Note: the rhs here is the optimised version of the original rhs
-- So when we make a specialised copy of the RHS, we're starting
-- from an RHS whose 
nested functions have been optimised already.\n\nspecialise env bind_calls (RI fn _ arg_bndrs body arg_occs)\n spec_info@(SI specs spec_count mb_unspec)\n | isBottomingId fn -- Note [Do not specialise diverging functions]\n -- and do not generate specialisation seeds from its RHS\n = return (nullUsage, spec_info)\n\n | isNeverActive (idInlineActivation fn) -- See Note [Transfer activation]\n || null arg_bndrs -- Only specialise functions\n = case mb_unspec of -- Behave as if there was a single, boring call\n Just rhs_usg -> return (rhs_usg, SI specs spec_count Nothing)\n -- See Note [spec_usg includes rhs_usg]\n Nothing -> return (nullUsage, spec_info)\n\n | Just all_calls <- lookupVarEnv bind_calls fn\n = -- pprTrace \"specialise entry {\" (ppr fn <+> ppr (length all_calls)) $\n do { (boring_call, pats) <- callsToPats env specs arg_occs all_calls\n\n -- Bale out if too many specialisations\n ; let n_pats = length pats\n spec_count' = n_pats + spec_count\n ; case sc_count env of\n Just max | not (sc_force env) && spec_count' > max\n -> if (debugIsOn || opt_PprStyle_Debug) -- Suppress this scary message for\n then pprTrace \"SpecConstr\" msg $ -- ordinary users! 
Trac #5125\n return (nullUsage, spec_info)\n else return (nullUsage, spec_info)\n where\n msg = vcat [ sep [ ptext (sLit \"Function\") <+> quotes (ppr fn)\n , nest 2 (ptext (sLit \"has\") <+>\n speakNOf spec_count' (ptext (sLit \"call pattern\")) <> comma <+>\n ptext (sLit \"but the limit is\") <+> int max) ]\n , ptext (sLit \"Use -fspec-constr-count=n to set the bound\")\n , extra ]\n extra | not opt_PprStyle_Debug = ptext (sLit \"Use -dppr-debug to see specialisations\")\n | otherwise = ptext (sLit \"Specialisations:\") <+> ppr (pats ++ [p | OS p _ _ _ <- specs])\n\n _normal_case -> do {\n\n-- ; if (not (null pats) || isJust mb_unspec) then\n-- pprTrace \"specialise\" (vcat [ ppr fn <+> text \"with\" <+> int (length pats) <+> text \"good patterns\"\n-- , text \"mb_unspec\" <+> ppr (isJust mb_unspec)\n-- , text \"arg_occs\" <+> ppr arg_occs\n-- , text \"good pats\" <+> ppr pats]) $\n-- return ()\n-- else return ()\n\n ; let spec_env = decreaseSpecCount env n_pats\n ; (spec_usgs, new_specs) <- mapAndUnzipM (spec_one spec_env fn arg_bndrs body)\n (pats `zip` [spec_count..])\n -- See Note [Specialise original body]\n\n ; let spec_usg = combineUsages spec_usgs\n\n -- If there were any boring calls among the seeds (= all_calls), then those\n -- calls will call the un-specialised function. 
So we should use the seeds
             -- from the _unspecialised_ function's RHS, which are in mb_unspec, by returning
             -- them in new_usg.
             (new_usg, mb_unspec')
                  = case mb_unspec of
                      Just rhs_usg | boring_call -> (spec_usg `combineUsage` rhs_usg, Nothing)
                      _                          -> (spec_usg,                       mb_unspec)

--      ; pprTrace "specialise return }" (ppr fn
--                                        <+> ppr (scu_calls new_usg))
        ; return (new_usg, SI (new_specs ++ specs) spec_count' mb_unspec') } }


  | otherwise  -- No new seeds, so return nullUsage
  = return (nullUsage, spec_info)


---------------------
spec_one :: ScEnv
         -> OutId       -- Function
         -> [InVar]     -- Lambda-binders of RHS; should match patterns
         -> InExpr      -- Body of the original function
         -> (CallPat, Int)
         -> UniqSM (ScUsage, OneSpec)   -- Rule and binding

-- spec_one creates a specialised copy of the function, together
-- with a rule for using it.  I'm very proud of how short this
-- function is, considering what it does :-).

{-
  Example

     In-scope: a, x::a
     f = /\b \y::[(a,b)] -> ....f (b,c) ((:) (a,(b,c)) (x,v) (h w))...
          [c::*, v::(b,c) are presumably bound by the (...) part]
  ==>
     f_spec = /\ b c \ v::(b,c) hw::[(a,(b,c))] ->
                  (...entire body of f...) 
[b -> (b,c),\n y -> ((:) (a,(b,c)) (x,v) hw)]\n\n RULE: forall b::* c::*, -- Note, *not* forall a, x\n v::(b,c),\n hw::[(a,(b,c))] .\n\n f (b,c) ((:) (a,(b,c)) (x,v) hw) = f_spec b c v hw\n-}\n\nspec_one env fn arg_bndrs body (call_pat@(qvars, pats), rule_number)\n = do { spec_uniq <- getUniqueUs\n ; let spec_env = extendScSubstList (extendScInScope env qvars)\n (arg_bndrs `zip` pats)\n fn_name = idName fn\n fn_loc = nameSrcSpan fn_name\n fn_occ = nameOccName fn_name\n spec_occ = mkSpecOcc fn_occ\n -- We use fn_occ rather than fn in the rule_name string\n -- as we don't want the uniq to end up in the rule, and\n -- hence in the ABI, as that can cause spurious ABI\n -- changes (#4012).\n rule_name = mkFastString (\"SC:\" ++ occNameString fn_occ ++ show rule_number)\n spec_name = mkInternalName spec_uniq spec_occ fn_loc\n-- ; pprTrace \"{spec_one\" (ppr (sc_count env) <+> ppr fn <+> ppr pats <+> text \"-->\" <+> ppr spec_name) $ \n-- return ()\n\n -- Specialise the body\n ; (spec_usg, spec_body) <- scExpr spec_env body\n\n-- ; pprTrace \"done spec_one}\" (ppr fn) $ \n-- return ()\n\n -- And build the results\n ; let spec_id = mkLocalId spec_name (mkPiTypes spec_lam_args body_ty) \n -- See Note [Transfer strictness]\n `setIdStrictness` spec_str\n `setIdArity` count isId spec_lam_args\n spec_str = calcSpecStrictness fn spec_lam_args pats\n -- Conditionally use result of new worker-wrapper transform\n (spec_lam_args, spec_call_args) = mkWorkerArgs (sc_dflags env) qvars NoOneShotInfo body_ty\n -- Usual w\/w hack to avoid generating \n -- a spec_rhs of unlifted type and no args\n\n spec_rhs = mkLams spec_lam_args spec_body\n body_ty = exprType spec_body\n rule_rhs = mkVarApps (Var spec_id) spec_call_args\n inline_act = idInlineActivation fn\n rule = mkRule True {- Auto -} True {- Local -}\n rule_name inline_act fn_name qvars pats rule_rhs\n -- See Note [Transfer activation]\n ; return (spec_usg, OS call_pat rule spec_id spec_rhs) }\n\ncalcSpecStrictness :: Id -- The 
original function\n -> [Var] -> [CoreExpr] -- Call pattern\n -> StrictSig -- Strictness of specialised thing\n-- See Note [Transfer strictness]\ncalcSpecStrictness fn qvars pats\n = mkClosedStrictSig spec_dmds topRes\n where\n spec_dmds = [ lookupVarEnv dmd_env qv `orElse` topDmd | qv <- qvars, isId qv ]\n StrictSig (DmdType _ dmds _) = idStrictness fn\n\n dmd_env = go emptyVarEnv dmds pats\n\n go :: DmdEnv -> [Demand] -> [CoreExpr] -> DmdEnv\n go env ds (Type {} : pats) = go env ds pats\n go env ds (Coercion {} : pats) = go env ds pats\n go env (d:ds) (pat : pats) = go (go_one env d pat) ds pats\n go env _ _ = env\n\n go_one :: DmdEnv -> Demand -> CoreExpr -> DmdEnv\n go_one env d (Var v) = extendVarEnv_C bothDmd env v d\n go_one env d e \n | Just ds <- splitProdDmd_maybe d -- NB: d does not have to be strict\n , (Var _, args) <- collectArgs e = go env ds args\n go_one env _ _ = env\n\\end{code}\n\nNote [spec_usg includes rhs_usg]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIn calls to 'specialise', the returned ScUsage must include the rhs_usg in\nthe passed-in SpecInfo, unless there are no calls at all to the function.\n\nThe caller can, indeed must, assume this. He should not combine in rhs_usg\nhimself, or he'll get rhs_usg twice -- and that can lead to an exponential\nblowup of duplicates in the CallEnv. This is what gave rise to the massive\nperformance loss in Trac #8852.\n\nNote [Specialise original body]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe RhsInfo for a binding keeps the *original* body of the binding. We\nmust specialise that, *not* the result of applying specExpr to the RHS\n(which is also kept in RhsInfo). 
Otherwise we end up specialising a\nspecialised RHS, and that can lead directly to exponential behaviour.\n\nNote [Transfer activation]\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n This note is for SpecConstr, but exactly the same thing\n happens in the overloading specialiser; see\n Note [Auto-specialisation and RULES] in Specialise.\n\nIn which phase should the specialise-constructor rules be active?\nOriginally I made them always-active, but Manuel found that this\ndefeated some clever user-written rules. Then I made them active only\nin Phase 0; after all, currently, the specConstr transformation is\nonly run after the simplifier has reached Phase 0, but that meant\nthat specialisations didn't fire inside wrappers; see test\nsimplCore\/should_compile\/spec-inline.\n\nSo now I just use the inline-activation of the parent Id, as the\nactivation for the specialisation RULE, just like the main specialiser;\n\nThis in turn means there is no point in specialising NOINLINE things,\nso we test for that.\n\nNote [Transfer strictness]\n~~~~~~~~~~~~~~~~~~~~~~~~~~\nWe must transfer strictness information from the original function to\nthe specialised one. Suppose, for example\n\n f has strictness SS\n and a RULE f (a:as) b = f_spec a as b\n\nNow we want f_spec to have strictness LLS, otherwise we'll use call-by-need\nwhen calling f_spec instead of call-by-value. 
And that can result in\nunbounded worsening in space (cf the classic foldl vs foldl')\n\nSee Trac #3437 for a good example.\n\nThe function calcSpecStrictness performs the calculation.\n\n\n%************************************************************************\n%* *\n\\subsection{Argument analysis}\n%* *\n%************************************************************************\n\nThis code deals with analysing call-site arguments to see whether\nthey are constructor applications.\n\nNote [Free type variables of the qvar types]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIn a call (f @a x True), that we want to specialise, what variables should\nwe quantify over. Clearly over 'a' and 'x', but what about any type variables\nfree in x's type? In fact we don't need to worry about them because (f @a)\ncan only be a well-typed application if its type is compatible with x, so any\nvariables free in x's type must be free in (f @a), and hence either be gathered\nvia 'a' itself, or be in scope at f's defn. Hence we just take\n (exprsFreeVars pats).\n\nBUT phantom type synonyms can mess this reasoning up,\n eg x::T b with type T b = Int\nSo we apply expandTypeSynonyms to the bound Ids.\nSee Trac # 5458. 
Yuk.\n\n\\begin{code}\ntype CallPat = ([Var], [CoreExpr]) -- Quantified variables and arguments\n\ncallsToPats :: ScEnv -> [OneSpec] -> [ArgOcc] -> [Call] -> UniqSM (Bool, [CallPat])\n -- Result has no duplicate patterns,\n -- nor ones mentioned in done_pats\n -- Bool indicates that there was at least one boring pattern\ncallsToPats env done_specs bndr_occs calls\n = do { mb_pats <- mapM (callToPats env bndr_occs) calls\n\n ; let good_pats :: [(CallPat, ValueEnv)]\n good_pats = catMaybes mb_pats\n done_pats = [p | OS p _ _ _ <- done_specs]\n is_done p = any (samePat p) done_pats\n no_recursive = map fst (filterOut (is_too_recursive env) good_pats)\n\n ; return (any isNothing mb_pats,\n filterOut is_done (nubBy samePat no_recursive)) }\n\nis_too_recursive :: ScEnv -> (CallPat, ValueEnv) -> Bool\n -- Count the number of recursive constructors in a call pattern,\n -- filter out if there are more than the maximum.\n -- This is only necessary if ForceSpecConstr is in effect:\n -- otherwise specConstrCount will cause specialisation to terminate.\n -- See Note [Limit recursive specialisation]\nis_too_recursive env ((_,exprs), val_env)\n = sc_force env && maximum (map go exprs) > sc_recursive env\n where\n go e\n | Just (ConVal (DataAlt dc) args) <- isValue val_env e\n , isRecursiveTyCon (dataConTyCon dc)\n = 1 + sum (map go args)\n\n |App f a <- e\n = go f + go a\n\n | otherwise\n = 0\n\ncallToPats :: ScEnv -> [ArgOcc] -> Call -> UniqSM (Maybe (CallPat, ValueEnv))\n -- The [Var] is the variables to quantify over in the rule\n -- Type variables come first, since they may scope\n -- over the following term variables\n -- The [CoreExpr] are the argument patterns for the rule\ncallToPats env bndr_occs (Call _ args con_env)\n | length args < length bndr_occs -- Check saturated\n = return Nothing\n | otherwise\n = do { let in_scope = substInScope (sc_subst env)\n ; (interesting, pats) <- argsToPats env in_scope con_env args bndr_occs\n ; let pat_fvs = varSetElems (exprsFreeVars 
pats)\n in_scope_vars = getInScopeVars in_scope\n qvars = filterOut (`elemVarSet` in_scope_vars) pat_fvs\n -- Quantify over variables that are not in scope\n -- at the call site\n -- See Note [Free type variables of the qvar types]\n -- See Note [Shadowing] at the top\n\n (tvs, ids) = partition isTyVar qvars\n qvars' = tvs ++ map sanitise ids\n -- Put the type variables first; the type of a term\n -- variable may mention a type variable\n\n sanitise id = id `setIdType` expandTypeSynonyms (idType id)\n -- See Note [Free type variables of the qvar types]\n\n ; -- pprTrace \"callToPats\" (ppr args $$ ppr bndr_occs) $\n if interesting\n then return (Just ((qvars', pats), con_env))\n else return Nothing }\n\n -- argToPat takes an actual argument, and returns an abstracted\n -- version, consisting of just the \"constructor skeleton\" of the\n -- argument, with non-constructor sub-expression replaced by new\n -- placeholder variables. For example:\n -- C a (D (f x) (g y)) ==> C p1 (D p2 p3)\n\nargToPat :: ScEnv\n -> InScopeSet -- What's in scope at the fn defn site\n -> ValueEnv -- ValueEnv at the call site\n -> CoreArg -- A call arg (or component thereof)\n -> ArgOcc\n -> UniqSM (Bool, CoreArg)\n\n-- Returns (interesting, pat),\n-- where pat is the pattern derived from the argument\n-- interesting=True if the pattern is non-trivial (not a variable or type)\n-- E.g. x:xs --> (True, x:xs)\n-- f xs --> (False, w) where w is a fresh wildcard\n-- (f xs, 'c') --> (True, (w, 'c')) where w is a fresh wildcard\n-- \\x. x+y --> (True, \\x. x+y)\n-- lvl7 --> (True, lvl7) if lvl7 is bound\n-- somewhere further out\n\nargToPat _env _in_scope _val_env arg@(Type {}) _arg_occ\n = return (False, arg)\n\nargToPat env in_scope val_env (Tick _ arg) arg_occ\n = argToPat env in_scope val_env arg arg_occ\n -- Note [Notes in call patterns]\n -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n -- Ignore Notes. 
In particular, we want to ignore any InlineMe notes\n -- Perhaps we should not ignore profiling notes, but I'm going to\n -- ride roughshod over them all for now.\n -- See Note [Notes in RULE matching] in Rules\n\nargToPat env in_scope val_env (Let _ arg) arg_occ\n = argToPat env in_scope val_env arg arg_occ\n -- See Note [Matching lets] in Rules.lhs\n -- Look through let expressions\n -- e.g. f (let v = rhs in (v,w))\n -- Here we can specialise for f (v,w)\n -- because the rule-matcher will look through the let.\n\n{- Disabled; see Note [Matching cases] in Rules.lhs\nargToPat env in_scope val_env (Case scrut _ _ [(_, _, rhs)]) arg_occ\n | exprOkForSpeculation scrut -- See Note [Matching cases] in Rules.lhs\n = argToPat env in_scope val_env rhs arg_occ\n-}\n\nargToPat env in_scope val_env (Cast arg co) arg_occ\n | isReflCo co -- Substitution in the SpecConstr itself\n -- can lead to identity coercions\n = argToPat env in_scope val_env arg arg_occ\n | not (ignoreType env ty2)\n = do { (interesting, arg') <- argToPat env in_scope val_env arg arg_occ\n ; if not interesting then\n wildCardPat ty2\n else do\n { -- Make a wild-card pattern for the coercion\n uniq <- getUniqueUs\n ; let co_name = mkSysTvName uniq (fsLit \"sg\")\n co_var = mkCoVar co_name (mkCoercionType Representational ty1 ty2)\n ; return (interesting, Cast arg' (mkCoVarCo co_var)) } }\n where\n Pair ty1 ty2 = coercionKind co\n\n\n\n{- Disabling lambda specialisation for now\n It's fragile, and the spec_loop can be infinite\nargToPat in_scope val_env arg arg_occ\n | is_value_lam arg\n = return (True, arg)\n where\n is_value_lam (Lam v e) -- Spot a value lambda, even if\n | isId v = True -- it is inside a type lambda\n | otherwise = is_value_lam e\n is_value_lam other = False\n-}\n\n -- Check for a constructor application\n -- NB: this *precedes* the Var case, so that we catch nullary constrs\nargToPat env in_scope val_env arg arg_occ\n | Just (ConVal (DataAlt dc) args) <- isValue val_env arg\n , not 
(ignoreDataCon env dc) -- See Note [NoSpecConstr]\n , Just arg_occs <- mb_scrut dc\n = do { let (ty_args, rest_args) = splitAtList (dataConUnivTyVars dc) args\n ; (_, args') <- argsToPats env in_scope val_env rest_args arg_occs\n ; return (True,\n mkConApp dc (ty_args ++ args')) }\n where\n mb_scrut dc = case arg_occ of\n ScrutOcc bs\n | Just occs <- lookupUFM bs dc\n -> Just (occs) -- See Note [Reboxing]\n _other | sc_force env -> Just (repeat UnkOcc)\n | otherwise -> Nothing\n\n -- Check if the argument is a variable that\n -- (a) is used in an interesting way in the body\n -- (b) we know what its value is\n -- In that case it counts as \"interesting\"\nargToPat env in_scope val_env (Var v) arg_occ\n | sc_force env || case arg_occ of { UnkOcc -> False; _other -> True }, -- (a)\n is_value, -- (b)\n not (ignoreType env (varType v))\n = return (True, Var v)\n where\n is_value\n | isLocalId v = v `elemInScopeSet` in_scope\n && isJust (lookupVarEnv val_env v)\n -- Local variables have values in val_env\n | otherwise = isValueUnfolding (idUnfolding v)\n -- Imports have unfoldings\n\n-- I'm really not sure what this comment means\n-- And by not wild-carding we tend to get forall'd\n-- variables that are in scope, which in turn can\n-- expose the weakness in let-matching\n-- See Note [Matching lets] in Rules\n\n -- Check for a variable bound inside the function.\n -- Don't make a wild-card, because we may usefully share\n -- e.g. f a = let x = ... in f (x,x)\n -- NB: this case follows the lambda and con-app cases!!\n-- argToPat _in_scope _val_env (Var v) _arg_occ\n-- = return (False, Var v)\n -- SLPJ : disabling this to avoid proliferation of versions\n -- also works badly when thinking about seeding the loop\n -- from the body of the let\n -- f x y = letrec g z = ... 
in g (x,y)\n -- We don't want to specialise for that *particular* x,y\n\n -- The default case: make a wild-card\n -- We use this for coercions too\nargToPat _env _in_scope _val_env arg _arg_occ\n = wildCardPat (exprType arg)\n\nwildCardPat :: Type -> UniqSM (Bool, CoreArg)\nwildCardPat ty\n = do { uniq <- getUniqueUs\n ; let id = mkSysLocal (fsLit \"sc\") uniq ty\n ; return (False, varToCoreExpr id) }\n\nargsToPats :: ScEnv -> InScopeSet -> ValueEnv\n -> [CoreArg] -> [ArgOcc] -- Should be same length\n -> UniqSM (Bool, [CoreArg])\nargsToPats env in_scope val_env args occs\n = do { stuff <- zipWithM (argToPat env in_scope val_env) args occs\n ; let (interesting_s, args') = unzip stuff\n ; return (or interesting_s, args') }\n\\end{code}\n\n\n\\begin{code}\nisValue :: ValueEnv -> CoreExpr -> Maybe Value\nisValue _env (Lit lit)\n | litIsLifted lit = Nothing\n | otherwise = Just (ConVal (LitAlt lit) [])\n\nisValue env (Var v)\n | Just cval <- lookupVarEnv env v\n = Just cval -- You might think we could look in the idUnfolding here\n -- but that doesn't take account of which branch of a\n -- case we are in, which is the whole point\n\n | not (isLocalId v) && isCheapUnfolding unf\n = isValue env (unfoldingTemplate unf)\n where\n unf = idUnfolding v\n -- However we do want to consult the unfolding\n -- as well, for let-bound constructors!\n\nisValue env (Lam b e)\n | isTyVar b = case isValue env e of\n Just _ -> Just LambdaVal\n Nothing -> Nothing\n | otherwise = Just LambdaVal\n\nisValue _env expr -- Maybe it's a constructor application\n | (Var fun, args) <- collectArgs expr\n = case isDataConWorkId_maybe fun of\n\n Just con | args `lengthAtLeast` dataConRepArity con\n -- Check saturated; might be > because the\n -- arity excludes type args\n -> Just (ConVal (DataAlt con) args)\n\n _other | valArgCount args < idArity fun\n -- Under-applied function\n -> Just LambdaVal -- Partial application\n\n _other -> Nothing\n\nisValue _env _expr = Nothing\n\nvalueIsWorkFree :: Value 
-> Bool\nvalueIsWorkFree LambdaVal = True\nvalueIsWorkFree (ConVal _ args) = all exprIsWorkFree args\n\nsamePat :: CallPat -> CallPat -> Bool\nsamePat (vs1, as1) (vs2, as2)\n = all2 same as1 as2\n where\n same (Var v1) (Var v2)\n | v1 `elem` vs1 = v2 `elem` vs2\n | v2 `elem` vs2 = False\n | otherwise = v1 == v2\n\n same (Lit l1) (Lit l2) = l1==l2\n same (App f1 a1) (App f2 a2) = same f1 f2 && same a1 a2\n\n same (Type {}) (Type {}) = True -- Note [Ignore type differences]\n same (Coercion {}) (Coercion {}) = True\n same (Tick _ e1) e2 = same e1 e2 -- Ignore casts and notes\n same (Cast e1 _) e2 = same e1 e2\n same e1 (Tick _ e2) = same e1 e2\n same e1 (Cast e2 _) = same e1 e2\n\n same e1 e2 = WARN( bad e1 || bad e2, ppr e1 $$ ppr e2)\n False -- Let, lambda, case should not occur\n bad (Case {}) = True\n bad (Let {}) = True\n bad (Lam {}) = True\n bad _other = False\n\\end{code}\n\nNote [Ignore type differences]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWe do not want to generate specialisations where the call patterns\ndiffer only in their type arguments! Not only is it utterly useless,\nbut it also means that (with polymorphic recursion) we can generate\nan infinite number of specialisations. Example is Data.Sequence.adjustTree,\nI think.\n\n","avg_line_length":40.3139592648,"max_line_length":114,"alphanum_fraction":0.5859867902} {"size":457,"ext":"lhs","lang":"Literate Haskell","max_stars_count":1789.0,"content":"-- Copyright (c) 2019 The DAML Authors. 
All rights reserved.\n-- SPDX-License-Identifier: Apache-2.0\n\n\n\\subsection{Testing LHS}\n\n\\begin{code}\n{-# LANGUAGE CPP #-}\n\nmodule Test\n (\n main\n ) where\n\n\nimport Bird\n\n\\end{code}\n\nfor this file, \\emph{hlint} should be turned off.\n\\begin{code}\n{-# ANN module (\"HLint: ignore\" :: String) #-}\n\\end{code}\n\nour main procedure\n\n\\begin{code}\n\nmain :: IO ()\nmain = do\n putStrLn \"hello world.\"\n fly\n\n\\end{code}\n\n\n","avg_line_length":12.3513513514,"max_line_length":60,"alphanum_fraction":0.6433260394} {"size":4195,"ext":"lhs","lang":"Literate Haskell","max_stars_count":80.0,"content":"> {-# LANGUAGE Arrows #-}\n\nThis file demonstrates how to turn a Music value into an audio signal \nusing the Render module.\n\n> module HSoM.Examples.MusicToSignal where\n> import HSoM\n> import Euterpea\n\nFirst, define some instruments.\n\n> reedyWav = tableSinesN 1024 [0.4, 0.3, 0.35, 0.5, 0.1, 0.2, 0.15, \n> 0.0, 0.02, 0.05, 0.03]\n\n> reed :: Instr (Stereo AudRate)\n> reed dur pch vol params = \n> let reedy = osc reedyWav 0\n> freq = apToHz pch\n> vel = fromIntegral vol \/ 127 \/ 3\n> env = envLineSeg [0, 1, 0.8, 0.6, 0.7, 0.6, 0] \n> (replicate 6 (fromRational dur\/6))\n> in proc _ -> do\n> amp <- env -< ()\n> r1 <- reedy -< freq\n> r2 <- reedy -< freq + (0.023 * freq)\n> r3 <- reedy -< freq + (0.019 * freq)\n> let [a1, a2, a3] = map (* (amp * vel)) [r1, r2, r3]\n> let rleft = a1 * 0.5 + a2 * 0.44 * 0.35 + a3 * 0.26 * 0.65\n> rright = a1 * 0.5 + a2 * 0.44 * 0.65 + a3 * 0.26 * 0.35\n> outA -< (rleft, rright)\n\n> saw = tableSinesN 4096 [1, 0.5, 0.333, 0.25, 0.2, 0.166, 0.142, 0.125, \n> 0.111, 0.1, 0.09, 0.083, 0.076, 0.071, 0.066, 0.062]\n\n> plk :: Instr (Stereo AudRate)\n> plk dur pch vol params = \n> let vel = fromIntegral vol \/ 127 \/ 3\n> freq = apToHz pch\n> sf = pluck saw freq SimpleAveraging\n> in proc _ -> do\n> a <- sf -< freq\n> outA -< (a * vel * 0.4, a * vel * 0.6)\n\nDefine some instruments:\n\n> myBass, myReed :: 
InstrumentName\n> myBass = CustomInstrument \"pluck-like\"\n> myReed = CustomInstrument \"reed-like\"\n\nConstruct a custom instrument map. An instrument map is just \nan association list containing mappings from InstrumentName to Instr.\n\n> myMap :: InstrMap (Stereo AudRate)\n> myMap = [(myBass, plk), (myReed, reed)]\n\n> bass = mMap (\\p-> (p, 40 :: Volume)) $ instrument myBass bassLine\n> melody = mMap (\\p-> (p,100 :: Volume)) $ instrument myReed mainVoice\n\n> childSong6 :: Music (Pitch, Volume)\n> childSong6 = tempo 1.5 (bass :=: melody)\n\nAll instruments used in the same performance must output the same number \nof channels, but renderSF supports both mono or stereo instruments \n(and any instrument that produces samples in the AudioSample type class).\nThe outFile function will produce a monaural or stereo file accordingly.\n\n> recordSong = uncurry (outFile \"song.wav\") (renderSF childSong6 myMap)\n\n> main = recordSong\n\nThis stuff is taken from Euterpea.Examples.Interlude:\n\n> bassLine = times 3 b1 :+: times 2 b2 :+: \n> times 4 b3 :+: times 5 b1\n\n> mainVoice = times 3 v1 :+: v2\n\n> v1 = v1a :+: graceNote (-1) (d 5 qn) :+: v1b -- bars 1-2\n> v1a = addDur en [a 5, e 5, d 5, fs 5, cs 5, b 4, e 5, b 4]\n> v1b = addDur en [cs 5, b 4]\n\n> v2 = v2a :+: v2b :+: v2c :+: v2d :+: v2e :+: v2f :+: v2g\n> v2a = line [ cs 5 (dhn+dhn), d 5 dhn, \n> f 5 hn, gs 5 qn, fs 5 (hn+en), g 5 en]\n> v2b = addDur en [ fs 5, e 5, cs 5, as 4] :+: a 4 dqn :+:\n> addDur en [ as 4, cs 5, fs 5, e 5, fs 5]\n> v2c = line [ g 5 en, as 5 en, cs 6 (hn+en), d 6 en, cs 6 en] :+:\n> e 5 en :+: enr :+: \n> line [ as 5 en, a 5 en, g 5 en, d 5 qn, c 5 en, cs 5 en] \n> v2d = addDur en [ fs 5, cs 5, e 5, cs 5, \n> a 4, as 4, d 5, e 5, fs 5]\n> v2e = line [ graceNote 2 (e 5 qn), d 5 en, graceNote 2 (d 5 qn), cs 5 en,\n> graceNote 1 (cs 5 qn), b 4 (en+hn), cs 5 en, b 4 en ] \n> v2f = line [ fs 5 en, a 5 en, b 5 (hn+qn), a 5 en, fs 5 en, e 5 qn,\n> d 5 en, fs 5 en, e 5 hn, d 5 hn, fs 5 qn]\n> 
v2g = tempo (3\/2) (line [cs 5 en, d 5 en, cs 5 en]) :+: \n> b 4 (3*dhn+hn)\n\n> b1 = addDur dqn [b 3, fs 4, g 4, fs 4]\n> b2 = addDur dqn [b 3, es 4, fs 4, es 4]\n> b3 = addDur dqn [as 3, fs 4, g 4, fs 4]\n\n> addDur :: Dur -> [Dur -> Music a] -> Music a\n> addDur d ns = let f n = n d\n> in line (map f ns)\n\n> graceNote :: Int -> Music Pitch -> Music Pitch\n> graceNote n (Prim (Note d p)) =\n> note (d\/8) (trans n p) :+: note (7*d\/8) p\n> graceNote n _ = \n> error \"Can only add a grace note to a note.\"","avg_line_length":37.7927927928,"max_line_length":78,"alphanum_fraction":0.5356376639} {"size":59671,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"-----------------------------------------------------------------------------\n--\n-- (c) The University of Glasgow 2001-2003\n--\n-- Access to system tools: gcc, cp, rm etc\n--\n-----------------------------------------------------------------------------\n\n\\begin{code}\nmodule SysTools (\n -- Initialisation\n initSysTools,\n\n -- Interface to system tools\n runUnlit, runCpp, runCc, -- [Option] -> IO ()\n runPp, -- [Option] -> IO ()\n runSplit, -- [Option] -> IO ()\n runAs, runLink, runLibtool, -- [Option] -> IO ()\n runMkDLL,\n runWindres,\n runLlvmOpt,\n runLlvmLlc,\n runClang,\n figureLlvmVersion,\n readElfSection,\n\n getLinkerInfo,\n getCompilerInfo,\n\n linkDynLib,\n\n askCc,\n\n touch, -- String -> String -> IO ()\n copy,\n copyWithHeader,\n\n -- Temporary-file management\n setTmpDir,\n newTempName,\n cleanTempDirs, cleanTempFiles, cleanTempFilesExcept,\n addFilesToClean,\n\n Option(..)\n\n ) where\n\n#include \"HsVersions.h\"\n\nimport DriverPhases\nimport Module\nimport Packages\nimport Config\nimport Outputable\nimport ErrUtils\nimport Panic\nimport Platform\nimport Util\nimport DynFlags\nimport Exception\n\nimport Data.IORef\nimport Control.Monad\nimport System.Exit\nimport System.Environment\nimport System.FilePath\nimport System.IO\nimport System.IO.Error as IO\nimport 
System.Directory\nimport Data.Char\nimport Data.List\nimport qualified Data.Map as Map\nimport Text.ParserCombinators.ReadP hiding (char)\nimport qualified Text.ParserCombinators.ReadP as R\n\n#ifndef mingw32_HOST_OS\nimport qualified System.Posix.Internals\n#else \/* Must be Win32 *\/\nimport Foreign\nimport Foreign.C.String\n#endif\n\nimport System.Process\nimport Control.Concurrent\nimport FastString\nimport SrcLoc ( SrcLoc, mkSrcLoc, noSrcSpan, mkSrcSpan )\n\n#ifdef mingw32_HOST_OS\n# if defined(i386_HOST_ARCH)\n# define WINDOWS_CCONV stdcall\n# elif defined(x86_64_HOST_ARCH)\n# define WINDOWS_CCONV ccall\n# else\n# error Unknown mingw32 arch\n# endif\n#endif\n\\end{code}\n\nHow GHC finds its files\n~~~~~~~~~~~~~~~~~~~~~~~\n\n[Note topdir]\n\nGHC needs various support files (library packages, RTS etc), plus\nvarious auxiliary programs (cp, gcc, etc). It starts by finding topdir,\nthe root of GHC's support files\n\nOn Unix:\n - ghc always has a shell wrapper that passes a -B

option\n\nOn Windows:\n - ghc never has a shell wrapper.\n - we can find the location of the ghc binary, which is\n $topdir\/bin\/.exe\n where may be \"ghc\", \"ghc-stage2\", or similar\n - we strip off the \"bin\/.exe\" to leave $topdir.\n\nfrom topdir we can find package.conf, ghc-asm, etc.\n\n\nSysTools.initSysProgs figures out exactly where all the auxiliary programs\nare, and initialises mutable variables to make it easy to call them.\nTo do this, it makes use of definitions in Config.hs, which is a Haskell\nfile containing variables whose value is figured out by the build system.\n\nConfig.hs contains two sorts of things\n\n cGCC, The *names* of the programs\n cCPP e.g. cGCC = gcc\n cUNLIT cCPP = gcc -E\n etc They do *not* include paths\n\n\n cUNLIT_DIR The *path* to the directory containing unlit, split etc\n cSPLIT_DIR *relative* to the root of the build tree,\n for use when running *in-place* in a build tree (only)\n\n\n\n---------------------------------------------\nNOTES for an ALTERNATIVE scheme (i.e *not* what is currently implemented):\n\nAnother hair-brained scheme for simplifying the current tool location\nnightmare in GHC: Simon originally suggested using another\nconfiguration file along the lines of GCC's specs file - which is fine\nexcept that it means adding code to read yet another configuration\nfile. What I didn't notice is that the current package.conf is\ngeneral enough to do this:\n\nPackage\n {name = \"tools\", import_dirs = [], source_dirs = [],\n library_dirs = [], hs_libraries = [], extra_libraries = [],\n include_dirs = [], c_includes = [], package_deps = [],\n extra_ghc_opts = [\"-pgmc\/usr\/bin\/gcc\",\"-pgml${topdir}\/bin\/unlit\", ... 
etc.],\n extra_cc_opts = [], extra_ld_opts = []}\n\nWhich would have the advantage that we get to collect together in one\nplace the path-specific package stuff with the path-specific tool\nstuff.\n End of NOTES\n---------------------------------------------\n\n%************************************************************************\n%* *\n\\subsection{Initialisation}\n%* *\n%************************************************************************\n\n\\begin{code}\ninitSysTools :: Maybe String -- Maybe TopDir path (without the '-B' prefix)\n -> IO Settings -- Set all the mutable variables above, holding\n -- (a) the system programs\n -- (b) the package-config file\n -- (c) the GHC usage message\ninitSysTools mbMinusB\n = do top_dir <- findTopDir mbMinusB\n -- see [Note topdir]\n -- NB: top_dir is assumed to be in standard Unix\n -- format, '\/' separated\n\n let settingsFile = top_dir <\/> \"settings\"\n platformConstantsFile = top_dir <\/> \"platformConstants\"\n installed :: FilePath -> FilePath\n installed file = top_dir <\/> file\n\n settingsStr <- readFile settingsFile\n platformConstantsStr <- readFile platformConstantsFile\n mySettings <- case maybeReadFuzzy settingsStr of\n Just s ->\n return s\n Nothing ->\n pgmError (\"Can't parse \" ++ show settingsFile)\n platformConstants <- case maybeReadFuzzy platformConstantsStr of\n Just s ->\n return s\n Nothing ->\n pgmError (\"Can't parse \" ++\n show platformConstantsFile)\n let getSetting key = case lookup key mySettings of\n Just xs ->\n return $ case stripPrefix \"$topdir\" xs of\n Just [] ->\n top_dir\n Just xs'@(c:_)\n | isPathSeparator c ->\n top_dir ++ xs'\n _ ->\n xs\n Nothing -> pgmError (\"No entry for \" ++ show key ++ \" in \" ++ show settingsFile)\n getBooleanSetting key = case lookup key mySettings of\n Just \"YES\" -> return True\n Just \"NO\" -> return False\n Just xs -> pgmError (\"Bad value for \" ++ show key ++ \": \" ++ show xs)\n Nothing -> pgmError (\"No entry for \" ++ show key ++ \" in 
\" ++ show settingsFile)\n readSetting key = case lookup key mySettings of\n Just xs ->\n case maybeRead xs of\n Just v -> return v\n Nothing -> pgmError (\"Failed to read \" ++ show key ++ \" value \" ++ show xs)\n Nothing -> pgmError (\"No entry for \" ++ show key ++ \" in \" ++ show settingsFile)\n targetArch <- readSetting \"target arch\"\n targetOS <- readSetting \"target os\"\n targetWordSize <- readSetting \"target word size\"\n targetUnregisterised <- getBooleanSetting \"Unregisterised\"\n targetHasGnuNonexecStack <- readSetting \"target has GNU nonexec stack\"\n targetHasIdentDirective <- readSetting \"target has .ident directive\"\n targetHasSubsectionsViaSymbols <- readSetting \"target has subsections via symbols\"\n myExtraGccViaCFlags <- getSetting \"GCC extra via C opts\"\n -- On Windows, mingw is distributed with GHC,\n -- so we look in TopDir\/..\/mingw\/bin\n -- It would perhaps be nice to be able to override this\n -- with the settings file, but it would be a little fiddly\n -- to make that possible, so for now you can't.\n gcc_prog <- getSetting \"C compiler command\"\n gcc_args_str <- getSetting \"C compiler flags\"\n cpp_prog <- getSetting \"Haskell CPP command\"\n cpp_args_str <- getSetting \"Haskell CPP flags\"\n let unreg_gcc_args = if targetUnregisterised\n then [\"-DNO_REGS\", \"-DUSE_MINIINTERPRETER\"]\n else []\n -- TABLES_NEXT_TO_CODE affects the info table layout.\n tntc_gcc_args\n | mkTablesNextToCode targetUnregisterised\n = [\"-DTABLES_NEXT_TO_CODE\"]\n | otherwise = []\n cpp_args= map Option (words cpp_args_str)\n gcc_args = map Option (words gcc_args_str\n ++ unreg_gcc_args\n ++ tntc_gcc_args)\n ldSupportsCompactUnwind <- getBooleanSetting \"ld supports compact unwind\"\n ldSupportsBuildId <- getBooleanSetting \"ld supports build-id\"\n ldSupportsFilelist <- getBooleanSetting \"ld supports filelist\"\n ldIsGnuLd <- getBooleanSetting \"ld is GNU ld\"\n perl_path <- getSetting \"perl command\"\n\n let pkgconfig_path = installed 
\"package.conf.d\"\n ghc_usage_msg_path = installed \"ghc-usage.txt\"\n ghci_usage_msg_path = installed \"ghci-usage.txt\"\n\n -- For all systems, unlit, split, mangle are GHC utilities\n -- architecture-specific stuff is done when building Config.hs\n unlit_path = installed cGHC_UNLIT_PGM\n\n -- split is a Perl script\n split_script = installed cGHC_SPLIT_PGM\n\n windres_path <- getSetting \"windres command\"\n libtool_path <- getSetting \"libtool command\"\n\n tmpdir <- getTemporaryDirectory\n\n touch_path <- getSetting \"touch command\"\n\n let -- On Win32 we don't want to rely on #!\/bin\/perl, so we prepend\n -- a call to Perl to get the invocation of split.\n -- On Unix, scripts are invoked using the '#!' method. Binary\n -- installations of GHC on Unix place the correct line on the\n -- front of the script at installation time, so we don't want\n -- to wire-in our knowledge of $(PERL) on the host system here.\n (split_prog, split_args)\n | isWindowsHost = (perl_path, [Option split_script])\n | otherwise = (split_script, [])\n mkdll_prog <- getSetting \"dllwrap command\"\n let mkdll_args = []\n\n -- cpp is derived from gcc on all platforms\n -- HACK, see setPgmP below. 
We keep 'words' here to remember to fix\n -- Config.hs one day.\n\n\n -- Other things being equal, as and ld are simply gcc\n gcc_link_args_str <- getSetting \"C compiler link flags\"\n let as_prog = gcc_prog\n as_args = gcc_args\n ld_prog = gcc_prog\n ld_args = gcc_args ++ map Option (words gcc_link_args_str)\n\n -- We just assume on command line\n lc_prog <- getSetting \"LLVM llc command\"\n lo_prog <- getSetting \"LLVM opt command\"\n\n let platform = Platform {\n platformArch = targetArch,\n platformOS = targetOS,\n platformWordSize = targetWordSize,\n platformUnregisterised = targetUnregisterised,\n platformHasGnuNonexecStack = targetHasGnuNonexecStack,\n platformHasIdentDirective = targetHasIdentDirective,\n platformHasSubsectionsViaSymbols = targetHasSubsectionsViaSymbols\n }\n\n return $ Settings {\n sTargetPlatform = platform,\n sTmpDir = normalise tmpdir,\n sGhcUsagePath = ghc_usage_msg_path,\n sGhciUsagePath = ghci_usage_msg_path,\n sTopDir = top_dir,\n sRawSettings = mySettings,\n sExtraGccViaCFlags = words myExtraGccViaCFlags,\n sSystemPackageConfig = pkgconfig_path,\n sLdSupportsCompactUnwind = ldSupportsCompactUnwind,\n sLdSupportsBuildId = ldSupportsBuildId,\n sLdSupportsFilelist = ldSupportsFilelist,\n sLdIsGnuLd = ldIsGnuLd,\n sPgm_L = unlit_path,\n sPgm_P = (cpp_prog, cpp_args),\n sPgm_F = \"\",\n sPgm_c = (gcc_prog, gcc_args),\n sPgm_s = (split_prog,split_args),\n sPgm_a = (as_prog, as_args),\n sPgm_l = (ld_prog, ld_args),\n sPgm_dll = (mkdll_prog,mkdll_args),\n sPgm_T = touch_path,\n sPgm_sysman = top_dir ++ \"\/ghc\/rts\/parallel\/SysMan\",\n sPgm_windres = windres_path,\n sPgm_libtool = libtool_path,\n sPgm_lo = (lo_prog,[]),\n sPgm_lc = (lc_prog,[]),\n -- Hans: this isn't right in general, but you can\n -- elaborate it in the same way as the others\n sOpt_L = [],\n sOpt_P = [],\n sOpt_F = [],\n sOpt_c = [],\n sOpt_a = [],\n sOpt_l = [],\n sOpt_windres = [],\n sOpt_lo = [],\n sOpt_lc = [],\n sPlatformConstants = platformConstants\n 
}\n\\end{code}\n\n\\begin{code}\n-- returns a Unix-format path (relying on getBaseDir to do so too)\nfindTopDir :: Maybe String -- Maybe TopDir path (without the '-B' prefix).\n -> IO String -- TopDir (in Unix format '\/' separated)\nfindTopDir (Just minusb) = return (normalise minusb)\nfindTopDir Nothing\n = do -- Get directory of executable\n maybe_exec_dir <- getBaseDir\n case maybe_exec_dir of\n -- \"Just\" on Windows, \"Nothing\" on unix\n Nothing -> throwGhcExceptionIO (InstallationError \"missing -B option\")\n Just dir -> return dir\n\\end{code}\n\n\n%************************************************************************\n%* *\n\\subsection{Running an external program}\n%* *\n%************************************************************************\n\n\n\\begin{code}\nrunUnlit :: DynFlags -> [Option] -> IO ()\nrunUnlit dflags args = do\n let prog = pgm_L dflags\n opts = getOpts dflags opt_L\n runSomething dflags \"Literate pre-processor\" prog\n (map Option opts ++ args)\n\nrunCpp :: DynFlags -> [Option] -> IO ()\nrunCpp dflags args = do\n let (p,args0) = pgm_P dflags\n args1 = map Option (getOpts dflags opt_P)\n args2 = if gopt Opt_WarnIsError dflags\n then [Option \"-Werror\"]\n else []\n mb_env <- getGccEnv args2\n runSomethingFiltered dflags id \"C pre-processor\" p\n (args0 ++ args1 ++ args2 ++ args) mb_env\n\nrunPp :: DynFlags -> [Option] -> IO ()\nrunPp dflags args = do\n let prog = pgm_F dflags\n opts = map Option (getOpts dflags opt_F)\n runSomething dflags \"Haskell pre-processor\" prog (args ++ opts)\n\nrunCc :: DynFlags -> [Option] -> IO ()\nrunCc dflags args = do\n let (p,args0) = pgm_c dflags\n args1 = map Option (getOpts dflags opt_c)\n args2 = args0 ++ args1 ++ args\n mb_env <- getGccEnv args2\n runSomethingFiltered dflags cc_filter \"C Compiler\" p args2 mb_env\n where\n -- discard some harmless warnings from gcc that we can't turn off\n cc_filter = unlines . doFilter . 
lines\n\n {-\n gcc gives warnings in chunks like so:\n In file included from \/foo\/bar\/baz.h:11,\n from \/foo\/bar\/baz2.h:22,\n from wibble.c:33:\n \/foo\/flibble:14: global register variable ...\n \/foo\/flibble:15: warning: call-clobbered r...\n We break it up into its chunks, remove any call-clobbered register\n warnings from each chunk, and then delete any chunks that we have\n emptied of warnings.\n -}\n doFilter = unChunkWarnings . filterWarnings . chunkWarnings []\n -- We can't assume that the output will start with an \"In file inc...\"\n -- line, so we start off expecting a list of warnings rather than a\n -- location stack.\n chunkWarnings :: [String] -- The location stack to use for the next\n -- list of warnings\n -> [String] -- The remaining lines to look at\n -> [([String], [String])]\n chunkWarnings loc_stack [] = [(loc_stack, [])]\n chunkWarnings loc_stack xs\n = case break loc_stack_start xs of\n (warnings, lss:xs') ->\n case span loc_start_continuation xs' of\n (lsc, xs'') ->\n (loc_stack, warnings) : chunkWarnings (lss : lsc) xs''\n _ -> [(loc_stack, xs)]\n\n filterWarnings :: [([String], [String])] -> [([String], [String])]\n filterWarnings [] = []\n -- If the warnings are already empty then we are probably doing\n -- something wrong, so don't delete anything\n filterWarnings ((xs, []) : zs) = (xs, []) : filterWarnings zs\n filterWarnings ((xs, ys) : zs) = case filter wantedWarning ys of\n [] -> filterWarnings zs\n ys' -> (xs, ys') : filterWarnings zs\n\n unChunkWarnings :: [([String], [String])] -> [String]\n unChunkWarnings [] = []\n unChunkWarnings ((xs, ys) : zs) = xs ++ ys ++ unChunkWarnings zs\n\n loc_stack_start s = \"In file included from \" `isPrefixOf` s\n loc_start_continuation s = \" from \" `isPrefixOf` s\n wantedWarning w\n | \"warning: call-clobbered register used\" `isContainedIn` w = False\n | otherwise = True\n\nisContainedIn :: String -> String -> Bool\nxs `isContainedIn` ys = any (xs `isPrefixOf`) (tails ys)\n\naskCc :: 
DynFlags -> [Option] -> IO String\naskCc dflags args = do\n let (p,args0) = pgm_c dflags\n args1 = map Option (getOpts dflags opt_c)\n args2 = args0 ++ args1 ++ args\n mb_env <- getGccEnv args2\n runSomethingWith dflags \"gcc\" p args2 $ \\real_args ->\n readCreateProcess (proc p real_args){ env = mb_env }\n\n-- Version of System.Process.readProcessWithExitCode that takes an environment\nreadCreateProcess\n :: CreateProcess\n -> IO (ExitCode, String) -- ^ stdout\nreadCreateProcess proc = do\n (_, Just outh, _, pid) <-\n createProcess proc{ std_out = CreatePipe }\n\n -- fork off a thread to start consuming the output\n output <- hGetContents outh\n outMVar <- newEmptyMVar\n _ <- forkIO $ evaluate (length output) >> putMVar outMVar ()\n\n -- wait on the output\n takeMVar outMVar\n hClose outh\n\n -- wait on the process\n ex <- waitForProcess pid\n\n return (ex, output)\n\n\n-- If the -B option is set, add to PATH. This works around\n-- a bug in gcc on Windows Vista where it can't find its auxiliary\n-- binaries (see bug #1110).\ngetGccEnv :: [Option] -> IO (Maybe [(String,String)])\ngetGccEnv opts =\n if null b_dirs\n then return Nothing\n else do env <- getEnvironment\n return (Just (map mangle_path env))\n where\n (b_dirs, _) = partitionWith get_b_opt opts\n\n get_b_opt (Option ('-':'B':dir)) = Left dir\n get_b_opt other = Right other\n\n mangle_path (path,paths) | map toUpper path == \"PATH\"\n = (path, '\\\"' : head b_dirs ++ \"\\\";\" ++ paths)\n mangle_path other = other\n\nrunSplit :: DynFlags -> [Option] -> IO ()\nrunSplit dflags args = do\n let (p,args0) = pgm_s dflags\n runSomething dflags \"Splitter\" p (args0++args)\n\nrunAs :: DynFlags -> [Option] -> IO ()\nrunAs dflags args = do\n let (p,args0) = pgm_a dflags\n args1 = map Option (getOpts dflags opt_a)\n args2 = args0 ++ args1 ++ args\n mb_env <- getGccEnv args2\n runSomethingFiltered dflags id \"Assembler\" p args2 mb_env\n\n-- | Run the LLVM Optimiser\nrunLlvmOpt :: DynFlags -> [Option] -> IO 
()\nrunLlvmOpt dflags args = do\n  let (p,args0) = pgm_lo dflags\n      args1 = map Option (getOpts dflags opt_lo)\n  runSomething dflags \"LLVM Optimiser\" p (args0 ++ args1 ++ args)\n\n-- | Run the LLVM Compiler\nrunLlvmLlc :: DynFlags -> [Option] -> IO ()\nrunLlvmLlc dflags args = do\n  let (p,args0) = pgm_lc dflags\n      args1 = map Option (getOpts dflags opt_lc)\n  runSomething dflags \"LLVM Compiler\" p (args0 ++ args1 ++ args)\n\n-- | Run the clang compiler (used as an assembler for the LLVM\n-- backend on OS X as LLVM doesn't support the OS X system\n-- assembler)\nrunClang :: DynFlags -> [Option] -> IO ()\nrunClang dflags args = do\n  -- we simply assume it's available on the PATH\n  let clang = \"clang\"\n      -- be careful what options we call clang with\n      -- see #5903 and #7617 for bugs caused by this.\n      (_,args0) = pgm_a dflags\n      args1 = map Option (getOpts dflags opt_a)\n      args2 = args0 ++ args1 ++ args\n  mb_env <- getGccEnv args2\n  Exception.catch (do\n        runSomethingFiltered dflags id \"Clang (Assembler)\" clang args2 mb_env\n    )\n    (\\(err :: SomeException) -> do\n        errorMsg dflags $\n            text (\"Error running clang! You need clang installed to use the \" ++\n                  \"LLVM backend\") $+$\n            text \"(or GHC tried to execute clang incorrectly)\"\n        throwIO err\n    )\n\n-- | Figure out which version of LLVM we are running this session\nfigureLlvmVersion :: DynFlags -> IO (Maybe Int)\nfigureLlvmVersion dflags = do\n  let (pgm,opts) = pgm_lc dflags\n      args = filter notNull (map showOpt opts)\n      -- we grab the args even though they should be useless just in\n      -- case the user is using a customised 'llc' that requires some\n      -- of the options they've specified. 
llc doesn't care what other\n -- options are specified when '-version' is used.\n args' = args ++ [\"-version\"]\n ver <- catchIO (do\n (pin, pout, perr, _) <- runInteractiveProcess pgm args'\n Nothing Nothing\n {- > llc -version\n Low Level Virtual Machine (http:\/\/llvm.org\/):\n llvm version 2.8 (Ubuntu 2.8-0Ubuntu1)\n ...\n -}\n hSetBinaryMode pout False\n _ <- hGetLine pout\n vline <- hGetLine pout\n v <- case filter isDigit vline of\n [] -> fail \"no digits!\"\n [x] -> fail $ \"only 1 digit! (\" ++ show x ++ \")\"\n (x:y:_) -> return ((read [x,y]) :: Int)\n hClose pin\n hClose pout\n hClose perr\n return $ Just v\n )\n (\\err -> do\n debugTraceMsg dflags 2\n (text \"Error (figuring out LLVM version):\" <+>\n text (show err))\n errorMsg dflags $ vcat\n [ text \"Warning:\", nest 9 $\n text \"Couldn't figure out LLVM version!\" $$\n text \"Make sure you have installed LLVM\"]\n return Nothing)\n return ver\n\n{- Note [Windows stack usage]\n\nSee: Trac #8870 (and #8834 for related info)\n\nOn Windows, occasionally we need to grow the stack. In order to do\nthis, we would normally just bump the stack pointer - but there's a\ncatch on Windows.\n\nIf the stack pointer is bumped by more than a single page, then the\npages between the initial pointer and the resulting location must be\nproperly committed by the Windows virtual memory subsystem. This is\nonly needed in the event we bump by more than one page (i.e 4097 bytes\nor more).\n\nWindows compilers solve this by emitting a call to a special function\ncalled _chkstk, which does this committing of the pages for you.\n\nThe reason this was causing a segfault was because due to the fact the\nnew code generator tends to generate larger functions, we needed more\nstack space in GHC itself. 
In the x86 codegen, we needed approximately\n~12kb of stack space in one go, which caused the process to segfault,\nas the intervening pages were not committed.\n\nIn the future, we should do the same thing, to make the problem\ncompletely go away. In the meantime, we're using a workaround: we\ninstruct the linker to specify the generated PE as having an initial\nreserved stack size of 8mb, as well as an initial *committed* stack\nsize of 8mb. The default committed size was previously only 4k.\n\nTheoretically it's possible to still hit this problem if you request a\nstack bump of more than 8mb in one go. But the amount of code\nnecessary is quite large, and 8mb \"should be more than enough for\nanyone\" right now (he said, before millions of lines of code cried out\nin terror).\n\n-}\n\n{- Note [Run-time linker info]\n\nSee also: Trac #5240, Trac #6063\n\nBefore 'runLink', we need to be sure to get the relevant information\nabout the linker we're using at runtime to see if we need any extra\noptions. For example, GNU ld requires '--reduce-memory-overheads' and\n'--hash-size=31' in order to use reasonable amounts of memory (see\nTrac #5240.) But this isn't supported in GNU gold.\n\nGenerally, the linker changing from what was detected at .\/configure\ntime has always been possible using -pgml, but on Linux it can happen\n'transparently' by installing packages like binutils-gold, which\nchange what \/usr\/bin\/ld actually points to.\n\nClang vs GCC notes:\n\nFor gcc, 'gcc -Wl,--version' gives a bunch of output about how to\ninvoke the linker before the version information string. For 'clang',\nthe version information for 'ld' is all that's output. For this\nreason, we typically need to slurp up all of the standard error output\nand look through it.\n\nOther notes:\n\nWe cache the LinkerInfo inside DynFlags, since clients may link\nmultiple times. 
The definition of LinkerInfo is there to avoid a\ncircular dependency.\n\n-}\n\n\nneededLinkArgs :: LinkerInfo -> [Option]\nneededLinkArgs (GnuLD o) = o\nneededLinkArgs (GnuGold o) = o\nneededLinkArgs (DarwinLD o) = o\nneededLinkArgs (SolarisLD o) = o\nneededLinkArgs UnknownLD = []\n\n-- Grab linker info and cache it in DynFlags.\ngetLinkerInfo :: DynFlags -> IO LinkerInfo\ngetLinkerInfo dflags = do\n  info <- readIORef (rtldInfo dflags)\n  case info of\n    Just v -> return v\n    Nothing -> do\n      v <- getLinkerInfo' dflags\n      writeIORef (rtldInfo dflags) (Just v)\n      return v\n\n-- See Note [Run-time linker info].\ngetLinkerInfo' :: DynFlags -> IO LinkerInfo\ngetLinkerInfo' dflags = do\n  let platform = targetPlatform dflags\n      os = platformOS platform\n      (pgm,_) = pgm_l dflags\n\n      -- Try to grab the info from the process output.\n      parseLinkerInfo stdo _stde _exitc\n        | any (\"GNU ld\" `isPrefixOf`) stdo =\n          -- GNU ld specifically needs to use less memory. This especially\n          -- hurts on small object files. Trac #5240.\n          return (GnuLD $ map Option [\"-Wl,--hash-size=31\",\n                                      \"-Wl,--reduce-memory-overheads\"])\n\n        | any (\"GNU gold\" `isPrefixOf`) stdo =\n          -- GNU gold does not require any special arguments.\n          return (GnuGold [])\n\n        -- Unknown linker.\n        | otherwise = fail \"invalid --version output, or linker is unsupported\"\n\n  -- Process the executable call\n  info <- catchIO (do\n             case os of\n               OSSolaris2 ->\n                 -- Solaris uses its own native linker. Even GCC\n                 -- distributions on Solaris are recommended to be\n                 -- configured with the Solaris linker rather than\n                 -- the GNU binutils linker. All GCC builds shipped\n                 -- with Solaris follow this rule, so we assume the\n                 -- Solaris linker is in use.\n                 return $ SolarisLD []\n               OSDarwin ->\n                 -- Darwin has neither GNU Gold nor GNU LD, but a strange linker\n                 -- that doesn't support --version. 
We can just assume that's\n -- what we're using.\n return $ DarwinLD []\n OSiOS ->\n -- Ditto for iOS\n return $ DarwinLD []\n OSMinGW32 ->\n -- GHC doesn't support anything but GNU ld on Windows anyway.\n -- Process creation is also fairly expensive on win32, so\n -- we short-circuit here.\n return $ GnuLD $ map Option\n [ -- Reduce ld memory usage\n \"-Wl,--hash-size=31\"\n , \"-Wl,--reduce-memory-overheads\"\n -- Increase default stack, see\n -- Note [Windows stack usage]\n , \"-Xlinker\", \"--stack=0x800000,0x800000\" ]\n _ -> do\n -- In practice, we use the compiler as the linker here. Pass\n -- -Wl,--version to get linker version info.\n (exitc, stdo, stde) <- readProcessWithExitCode pgm\n [\"-Wl,--version\"] \"\"\n -- Split the output by lines to make certain kinds\n -- of processing easier. In particular, 'clang' and 'gcc'\n -- have slightly different outputs for '-Wl,--version', but\n -- it's still easy to figure out.\n parseLinkerInfo (lines stdo) (lines stde) exitc\n )\n (\\err -> do\n debugTraceMsg dflags 2\n (text \"Error (figuring out linker information):\" <+>\n text (show err))\n errorMsg dflags $ hang (text \"Warning:\") 9 $\n text \"Couldn't figure out linker information!\" $$\n text \"Make sure you're using GNU ld, GNU gold\" <+>\n text \"or the built in OS X linker, etc.\"\n return UnknownLD)\n return info\n\n-- Grab compiler info and cache it in DynFlags.\ngetCompilerInfo :: DynFlags -> IO CompilerInfo\ngetCompilerInfo dflags = do\n info <- readIORef (rtccInfo dflags)\n case info of\n Just v -> return v\n Nothing -> do\n v <- getCompilerInfo' dflags\n writeIORef (rtccInfo dflags) (Just v)\n return v\n\n-- See Note [Run-time linker info].\ngetCompilerInfo' :: DynFlags -> IO CompilerInfo\ngetCompilerInfo' dflags = do\n let (pgm,_) = pgm_c dflags\n -- Try to grab the info from the process output.\n parseCompilerInfo _stdo stde _exitc\n -- Regular GCC\n | any (\"gcc version\" `isPrefixOf`) stde =\n return GCC\n -- Regular clang\n | any (\"clang 
version\" `isPrefixOf`) stde =\n          return Clang\n        -- XCode 5.1 clang\n        | any (\"Apple LLVM version 5.1\" `isPrefixOf`) stde =\n          return AppleClang51\n        -- XCode 5 clang\n        | any (\"Apple LLVM version\" `isPrefixOf`) stde =\n          return AppleClang\n        -- XCode 4.1 clang\n        | any (\"Apple clang version\" `isPrefixOf`) stde =\n          return AppleClang\n        -- Unknown compiler.\n        | otherwise = fail \"invalid -v output, or compiler is unsupported\"\n\n  -- Process the executable call\n  info <- catchIO (do\n             (exitc, stdo, stde) <- readProcessWithExitCode pgm [\"-v\"] \"\"\n             -- Split the output by lines to make certain kinds\n             -- of processing easier.\n             parseCompilerInfo (lines stdo) (lines stde) exitc\n             )\n           (\\err -> do\n               debugTraceMsg dflags 2\n                 (text \"Error (figuring out compiler information):\" <+>\n                  text (show err))\n               errorMsg dflags $ hang (text \"Warning:\") 9 $\n                 text \"Couldn't figure out compiler information!\" $$\n                 text \"Make sure you're using GNU gcc or clang\"\n               return UnknownCC)\n  return info\n\nrunLink :: DynFlags -> [Option] -> IO ()\nrunLink dflags args = do\n  -- See Note [Run-time linker info]\n  linkargs <- neededLinkArgs `fmap` getLinkerInfo dflags\n  let (p,args0) = pgm_l dflags\n      args1 = map Option (getOpts dflags opt_l)\n      args2 = args0 ++ args1 ++ args ++ linkargs\n  mb_env <- getGccEnv args2\n  runSomethingFiltered dflags id \"Linker\" p args2 mb_env\n\nrunLibtool :: DynFlags -> [Option] -> IO ()\nrunLibtool dflags args = do\n  linkargs <- neededLinkArgs `fmap` getLinkerInfo dflags\n  let args1 = map Option (getOpts dflags opt_l)\n      args2 = [Option \"-static\"] ++ args1 ++ args ++ linkargs\n      libtool = pgm_libtool dflags\n  mb_env <- getGccEnv args2\n  runSomethingFiltered dflags id \"Linker\" libtool args2 mb_env\n\nrunMkDLL :: DynFlags -> [Option] -> IO ()\nrunMkDLL dflags args = do\n  let (p,args0) = pgm_dll dflags\n      args1 = args0 ++ args\n  mb_env <- getGccEnv (args0++args)\n  runSomethingFiltered dflags id \"Make DLL\" p args1 mb_env\n\nrunWindres :: DynFlags -> [Option] -> IO 
()\nrunWindres dflags args = do\n let (gcc, gcc_args) = pgm_c dflags\n windres = pgm_windres dflags\n opts = map Option (getOpts dflags opt_windres)\n quote x = \"\\\"\" ++ x ++ \"\\\"\"\n args' = -- If windres.exe and gcc.exe are in a directory containing\n -- spaces then windres fails to run gcc. We therefore need\n -- to tell it what command to use...\n Option (\"--preprocessor=\" ++\n unwords (map quote (gcc :\n map showOpt gcc_args ++\n map showOpt opts ++\n [\"-E\", \"-xc\", \"-DRC_INVOKED\"])))\n -- ...but if we do that then if windres calls popen then\n -- it can't understand the quoting, so we have to use\n -- --use-temp-file so that it interprets it correctly.\n -- See #1828.\n : Option \"--use-temp-file\"\n : args\n mb_env <- getGccEnv gcc_args\n runSomethingFiltered dflags id \"Windres\" windres args' mb_env\n\ntouch :: DynFlags -> String -> String -> IO ()\ntouch dflags purpose arg =\n runSomething dflags purpose (pgm_T dflags) [FileOption \"\" arg]\n\ncopy :: DynFlags -> String -> FilePath -> FilePath -> IO ()\ncopy dflags purpose from to = copyWithHeader dflags purpose Nothing from to\n\ncopyWithHeader :: DynFlags -> String -> Maybe String -> FilePath -> FilePath\n -> IO ()\ncopyWithHeader dflags purpose maybe_header from to = do\n showPass dflags purpose\n\n hout <- openBinaryFile to WriteMode\n hin <- openBinaryFile from ReadMode\n ls <- hGetContents hin -- inefficient, but it'll do for now. ToDo: speed up\n maybe (return ()) (header hout) maybe_header\n hPutStr hout ls\n hClose hout\n hClose hin\n where\n -- write the header string in UTF-8. 
The header is something like\n  --   {-# LINE \"foo.hs\" #-}\n  -- and we want to make sure a Unicode filename isn't mangled.\n  header h str = do\n           hSetEncoding h utf8\n           hPutStr h str\n           hSetBinaryMode h True\n\n-- | Read the contents of the named section in an ELF object as a\n-- String.\nreadElfSection :: DynFlags -> String -> FilePath -> IO (Maybe String)\nreadElfSection _dflags section exe = do\n  let\n     prog = \"readelf\"\n     args = [Option \"-p\", Option section, FileOption \"\" exe]\n  --\n  r <- readProcessWithExitCode prog (filter notNull (map showOpt args)) \"\"\n  case r of\n    (ExitSuccess, out, _err) -> return (doFilter (lines out))\n    _ -> return Nothing\n где\n where\n  doFilter [] = Nothing\n  doFilter (s:r) = case readP_to_S parse s of\n                    [(p,\"\")] -> Just p\n                    _r -> doFilter r\n   where parse = do\n           skipSpaces\n           _ <- R.char '['\n           skipSpaces\n           _ <- string \"0]\"\n           skipSpaces\n           munch (const True)\n\\end{code}\n\n%************************************************************************\n%*                                                                      *\n\\subsection{Managing temporary files}\n%*                                                                      *\n%************************************************************************\n\n\\begin{code}\ncleanTempDirs :: DynFlags -> IO ()\ncleanTempDirs dflags\n   = unless (gopt Opt_KeepTmpFiles dflags)\n   $ mask_\n   $ do let ref = dirsToClean dflags\n        ds <- atomicModifyIORef ref $ \\ds -> (Map.empty, ds)\n        removeTmpDirs dflags (Map.elems ds)\n\ncleanTempFiles :: DynFlags -> IO ()\ncleanTempFiles dflags\n   = unless (gopt Opt_KeepTmpFiles dflags)\n   $ mask_\n   $ do let ref = filesToClean dflags\n        fs <- atomicModifyIORef ref $ \\fs -> ([],fs)\n        removeTmpFiles dflags fs\n\ncleanTempFilesExcept :: DynFlags -> [FilePath] -> IO ()\ncleanTempFilesExcept dflags dont_delete\n   = unless (gopt Opt_KeepTmpFiles dflags)\n   $ mask_\n   $ do let ref = filesToClean dflags\n        to_delete <- atomicModifyIORef ref $ \\files ->\n            let (to_keep,to_delete) = partition (`elem` dont_delete) files\n            in  (to_keep,to_delete)\n        removeTmpFiles dflags to_delete\n\n\n-- Return a unique numeric temp 
file suffix\nnewTempSuffix :: DynFlags -> IO Int\nnewTempSuffix dflags = atomicModifyIORef (nextTempSuffix dflags) $ \\n -> (n+1,n)\n\n-- Find a temporary name that doesn't already exist.\nnewTempName :: DynFlags -> Suffix -> IO FilePath\nnewTempName dflags extn\n = do d <- getTempDir dflags\n x <- getProcessID\n findTempName (d <\/> \"ghc\" ++ show x ++ \"_\")\n where\n findTempName :: FilePath -> IO FilePath\n findTempName prefix\n = do n <- newTempSuffix dflags\n let filename = prefix ++ show n <.> extn\n b <- doesFileExist filename\n if b then findTempName prefix\n else do -- clean it up later\n consIORef (filesToClean dflags) filename\n return filename\n\n-- Return our temporary directory within tmp_dir, creating one if we\n-- don't have one yet.\ngetTempDir :: DynFlags -> IO FilePath\ngetTempDir dflags = do\n mapping <- readIORef dir_ref\n case Map.lookup tmp_dir mapping of\n Nothing -> do\n pid <- getProcessID\n let prefix = tmp_dir <\/> \"ghc\" ++ show pid ++ \"_\"\n mask_ $ mkTempDir prefix\n Just dir -> return dir\n where\n tmp_dir = tmpDir dflags\n dir_ref = dirsToClean dflags\n\n mkTempDir :: FilePath -> IO FilePath\n mkTempDir prefix = do\n n <- newTempSuffix dflags\n let our_dir = prefix ++ show n\n\n -- 1. Speculatively create our new directory.\n createDirectory our_dir\n\n -- 2. Update the dirsToClean mapping unless an entry already exists\n -- (i.e. unless another thread beat us to it).\n their_dir <- atomicModifyIORef dir_ref $ \\mapping ->\n case Map.lookup tmp_dir mapping of\n Just dir -> (mapping, Just dir)\n Nothing -> (Map.insert tmp_dir our_dir mapping, Nothing)\n\n -- 3. If there was an existing entry, return it and delete the\n -- directory we created. 
Otherwise return the directory we created.\n case their_dir of\n Nothing -> do\n debugTraceMsg dflags 2 $\n text \"Created temporary directory:\" <+> text our_dir\n return our_dir\n Just dir -> do\n removeDirectory our_dir\n return dir\n `catchIO` \\e -> if isAlreadyExistsError e\n then mkTempDir prefix else ioError e\n\naddFilesToClean :: DynFlags -> [FilePath] -> IO ()\n-- May include wildcards [used by DriverPipeline.run_phase SplitMangle]\naddFilesToClean dflags new_files\n = atomicModifyIORef (filesToClean dflags) $ \\files -> (new_files++files, ())\n\nremoveTmpDirs :: DynFlags -> [FilePath] -> IO ()\nremoveTmpDirs dflags ds\n = traceCmd dflags \"Deleting temp dirs\"\n (\"Deleting: \" ++ unwords ds)\n (mapM_ (removeWith dflags removeDirectory) ds)\n\nremoveTmpFiles :: DynFlags -> [FilePath] -> IO ()\nremoveTmpFiles dflags fs\n = warnNon $\n traceCmd dflags \"Deleting temp files\"\n (\"Deleting: \" ++ unwords deletees)\n (mapM_ (removeWith dflags removeFile) deletees)\n where\n -- Flat out refuse to delete files that are likely to be source input\n -- files (is there a worse bug than having a compiler delete your source\n -- files?)\n --\n -- Deleting source files is a sign of a bug elsewhere, so prominently flag\n -- the condition.\n warnNon act\n | null non_deletees = act\n | otherwise = do\n putMsg dflags (text \"WARNING - NOT deleting source files:\" <+> hsep (map text non_deletees))\n act\n\n (non_deletees, deletees) = partition isHaskellUserSrcFilename fs\n\nremoveWith :: DynFlags -> (FilePath -> IO ()) -> FilePath -> IO ()\nremoveWith dflags remover f = remover f `catchIO`\n (\\e ->\n let msg = if isDoesNotExistError e\n then ptext (sLit \"Warning: deleting non-existent\") <+> text f\n else ptext (sLit \"Warning: exception raised when deleting\")\n <+> text f <> colon\n $$ text (show e)\n in debugTraceMsg dflags 2 msg\n )\n\n-----------------------------------------------------------------------------\n-- Running an external program\n\nrunSomething :: 
DynFlags\n -> String -- For -v message\n -> String -- Command name (possibly a full path)\n -- assumed already dos-ified\n -> [Option] -- Arguments\n -- runSomething will dos-ify them\n -> IO ()\n\nrunSomething dflags phase_name pgm args =\n runSomethingFiltered dflags id phase_name pgm args Nothing\n\nrunSomethingFiltered\n :: DynFlags -> (String->String) -> String -> String -> [Option]\n -> Maybe [(String,String)] -> IO ()\n\nrunSomethingFiltered dflags filter_fn phase_name pgm args mb_env = do\n runSomethingWith dflags phase_name pgm args $ \\real_args -> do\n r <- builderMainLoop dflags filter_fn pgm real_args mb_env\n return (r,())\n\nrunSomethingWith\n :: DynFlags -> String -> String -> [Option]\n -> ([String] -> IO (ExitCode, a))\n -> IO a\n\nrunSomethingWith dflags phase_name pgm args io = do\n let real_args = filter notNull (map showOpt args)\n cmdLine = showCommandForUser pgm real_args\n traceCmd dflags phase_name cmdLine $ handleProc pgm phase_name $ io real_args\n\nhandleProc :: String -> String -> IO (ExitCode, r) -> IO r\nhandleProc pgm phase_name proc = do\n (rc, r) <- proc `catchIO` handler\n case rc of\n ExitSuccess{} -> return r\n ExitFailure n\n -- rawSystem returns (ExitFailure 127) if the exec failed for any\n -- reason (eg. the program doesn't exist). 
This is the only clue\n -- we have, but we need to report something to the user because in\n -- the case of a missing program there will otherwise be no output\n -- at all.\n | n == 127 -> does_not_exist\n | otherwise -> throwGhcExceptionIO (PhaseFailed phase_name rc)\n where\n handler err =\n if IO.isDoesNotExistError err\n then does_not_exist\n else IO.ioError err\n\n does_not_exist = throwGhcExceptionIO (InstallationError (\"could not execute: \" ++ pgm))\n\n\nbuilderMainLoop :: DynFlags -> (String -> String) -> FilePath\n -> [String] -> Maybe [(String, String)]\n -> IO ExitCode\nbuilderMainLoop dflags filter_fn pgm real_args mb_env = do\n chan <- newChan\n (hStdIn, hStdOut, hStdErr, hProcess) <- runInteractiveProcess pgm real_args Nothing mb_env\n\n -- and run a loop piping the output from the compiler to the log_action in DynFlags\n hSetBuffering hStdOut LineBuffering\n hSetBuffering hStdErr LineBuffering\n _ <- forkIO (readerProc chan hStdOut filter_fn)\n _ <- forkIO (readerProc chan hStdErr filter_fn)\n -- we don't want to finish until 2 streams have been completed\n -- (stdout and stderr)\n -- nor until 1 exit code has been retrieved.\n rc <- loop chan hProcess (2::Integer) (1::Integer) ExitSuccess\n -- after that, we're done here.\n hClose hStdIn\n hClose hStdOut\n hClose hStdErr\n return rc\n where\n -- status starts at zero, and increments each time either\n -- a reader process gets EOF, or the build proc exits. 
We wait\n -- for all of these to happen (status==3).\n -- ToDo: we should really have a contingency plan in case any of\n -- the threads dies, such as a timeout.\n loop _ _ 0 0 exitcode = return exitcode\n loop chan hProcess t p exitcode = do\n mb_code <- if p > 0\n then getProcessExitCode hProcess\n else return Nothing\n case mb_code of\n Just code -> loop chan hProcess t (p-1) code\n Nothing\n | t > 0 -> do\n msg <- readChan chan\n case msg of\n BuildMsg msg -> do\n log_action dflags dflags SevInfo noSrcSpan defaultUserStyle msg\n loop chan hProcess t p exitcode\n BuildError loc msg -> do\n log_action dflags dflags SevError (mkSrcSpan loc loc) defaultUserStyle msg\n loop chan hProcess t p exitcode\n EOF ->\n loop chan hProcess (t-1) p exitcode\n | otherwise -> loop chan hProcess t p exitcode\n\nreaderProc :: Chan BuildMessage -> Handle -> (String -> String) -> IO ()\nreaderProc chan hdl filter_fn =\n (do str <- hGetContents hdl\n loop (linesPlatform (filter_fn str)) Nothing)\n `finally`\n writeChan chan EOF\n -- ToDo: check errors more carefully\n -- ToDo: in the future, the filter should be implemented as\n -- a stream transformer.\n where\n loop [] Nothing = return ()\n loop [] (Just err) = writeChan chan err\n loop (l:ls) in_err =\n case in_err of\n Just err@(BuildError srcLoc msg)\n | leading_whitespace l -> do\n loop ls (Just (BuildError srcLoc (msg $$ text l)))\n | otherwise -> do\n writeChan chan err\n checkError l ls\n Nothing -> do\n checkError l ls\n _ -> panic \"readerProc\/loop\"\n\n checkError l ls\n = case parseError l of\n Nothing -> do\n writeChan chan (BuildMsg (text l))\n loop ls Nothing\n Just (file, lineNum, colNum, msg) -> do\n let srcLoc = mkSrcLoc (mkFastString file) lineNum colNum\n loop ls (Just (BuildError srcLoc (text msg)))\n\n leading_whitespace [] = False\n leading_whitespace (x:_) = isSpace x\n\nparseError :: String -> Maybe (String, Int, Int, String)\nparseError s0 = case breakColon s0 of\n Just (filename, s1) ->\n case 
breakIntColon s1 of\n Just (lineNum, s2) ->\n case breakIntColon s2 of\n Just (columnNum, s3) ->\n Just (filename, lineNum, columnNum, s3)\n Nothing ->\n Just (filename, lineNum, 0, s2)\n Nothing -> Nothing\n Nothing -> Nothing\n\nbreakColon :: String -> Maybe (String, String)\nbreakColon xs = case break (':' ==) xs of\n (ys, _:zs) -> Just (ys, zs)\n _ -> Nothing\n\nbreakIntColon :: String -> Maybe (Int, String)\nbreakIntColon xs = case break (':' ==) xs of\n (ys, _:zs)\n | not (null ys) && all isAscii ys && all isDigit ys ->\n Just (read ys, zs)\n _ -> Nothing\n\ndata BuildMessage\n = BuildMsg !SDoc\n | BuildError !SrcLoc !SDoc\n | EOF\n\ntraceCmd :: DynFlags -> String -> String -> IO a -> IO a\n-- trace the command (at two levels of verbosity)\ntraceCmd dflags phase_name cmd_line action\n = do { let verb = verbosity dflags\n ; showPass dflags phase_name\n ; debugTraceMsg dflags 3 (text cmd_line)\n ; case flushErr dflags of\n FlushErr io -> io\n\n -- And run it!\n ; action `catchIO` handle_exn verb\n }\n where\n handle_exn _verb exn = do { debugTraceMsg dflags 2 (char '\\n')\n ; debugTraceMsg dflags 2 (ptext (sLit \"Failed:\") <+> text cmd_line <+> text (show exn))\n ; throwGhcExceptionIO (PhaseFailed phase_name (ExitFailure 1)) }\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Support code}\n%* *\n%************************************************************************\n\n\\begin{code}\n-----------------------------------------------------------------------------\n-- Define getBaseDir :: IO (Maybe String)\n\ngetBaseDir :: IO (Maybe String)\n#if defined(mingw32_HOST_OS)\n-- Assuming we are running ghc, accessed by path $(stuff)\/bin\/ghc.exe,\n-- return the path $(stuff)\/lib.\ngetBaseDir = try_size 2048 -- plenty, PATH_MAX is 512 under Win32.\n where\n try_size size = allocaArray (fromIntegral size) $ \\buf -> do\n ret <- c_GetModuleFileName nullPtr buf size\n case ret of\n 0 -> return Nothing\n _ | 
ret < size -> fmap (Just . rootDir) $ peekCWString buf\n | otherwise -> try_size (size * 2)\n \n rootDir s = case splitFileName $ normalise s of\n (d, ghc_exe)\n | lower ghc_exe `elem` [\"ghc.exe\",\n \"ghc-stage1.exe\",\n \"ghc-stage2.exe\",\n \"ghc-stage3.exe\"] ->\n case splitFileName $ takeDirectory d of\n -- ghc is in $topdir\/bin\/ghc.exe\n (d', bin) | lower bin == \"bin\" -> takeDirectory d' <\/> \"lib\"\n _ -> fail\n _ -> fail\n where fail = panic (\"can't decompose ghc.exe path: \" ++ show s)\n lower = map toLower\n\nforeign import WINDOWS_CCONV unsafe \"windows.h GetModuleFileNameW\"\n c_GetModuleFileName :: Ptr () -> CWString -> Word32 -> IO Word32\n#else\ngetBaseDir = return Nothing\n#endif\n\n#ifdef mingw32_HOST_OS\nforeign import ccall unsafe \"_getpid\" getProcessID :: IO Int -- relies on Int == Int32 on Windows\n#else\ngetProcessID :: IO Int\ngetProcessID = System.Posix.Internals.c_getpid >>= return . fromIntegral\n#endif\n\n-- Divvy up text stream into lines, taking platform dependent\n-- line termination into account.\nlinesPlatform :: String -> [String]\n#if !defined(mingw32_HOST_OS)\nlinesPlatform ls = lines ls\n#else\nlinesPlatform \"\" = []\nlinesPlatform xs =\n case lineBreak xs of\n (as,xs1) -> as : linesPlatform xs1\n where\n lineBreak \"\" = (\"\",\"\")\n lineBreak ('\\r':'\\n':xs) = ([],xs)\n lineBreak ('\\n':xs) = ([],xs)\n lineBreak (x:xs) = let (as,bs) = lineBreak xs in (x:as,bs)\n\n#endif\n\nlinkDynLib :: DynFlags -> [String] -> [PackageId] -> IO ()\nlinkDynLib dflags0 o_files dep_packages\n = do\n let -- This is a rather ugly hack to fix dynamically linked\n -- GHC on Windows. If GHC is linked with -threaded, then\n -- it links against libHSrts_thr. But if base is linked\n -- against libHSrts, then both end up getting loaded,\n -- and things go wrong. 
We therefore link the libraries\n -- with the same RTS flags that we link GHC with.\n dflags1 = if cGhcThreaded then addWay' WayThreaded dflags0\n else dflags0\n dflags2 = if cGhcDebugged then addWay' WayDebug dflags1\n else dflags1\n dflags = updateWays dflags2\n\n verbFlags = getVerbFlags dflags\n o_file = outputFile dflags\n\n pkgs <- getPreloadPackagesAnd dflags dep_packages\n\n let pkg_lib_paths = collectLibraryPaths pkgs\n let pkg_lib_path_opts = concatMap get_pkg_lib_path_opts pkg_lib_paths\n get_pkg_lib_path_opts l\n | ( osElfTarget (platformOS (targetPlatform dflags)) ||\n osMachOTarget (platformOS (targetPlatform dflags)) ) &&\n dynLibLoader dflags == SystemDependent &&\n not (gopt Opt_Static dflags)\n = [\"-L\" ++ l, \"-Wl,-rpath\", \"-Wl,\" ++ l]\n | otherwise = [\"-L\" ++ l]\n\n let lib_paths = libraryPaths dflags\n let lib_path_opts = map (\"-L\"++) lib_paths\n\n -- We don't want to link our dynamic libs against the RTS package,\n -- because the RTS lib comes in several flavours and we want to be\n -- able to pick the flavour when a binary is linked.\n -- On Windows we need to link the RTS import lib as Windows does\n -- not allow undefined symbols.\n -- The RTS library path is still added to the library search path\n -- above in case the RTS is being explicitly linked in (see #3807).\n let platform = targetPlatform dflags\n os = platformOS platform\n pkgs_no_rts = case os of\n OSMinGW32 ->\n pkgs\n _ ->\n filter ((\/= rtsPackageId) . 
packageConfigId) pkgs\n let pkg_link_opts = let (package_hs_libs, extra_libs, other_flags) = collectLinkOpts dflags pkgs_no_rts\n in package_hs_libs ++ extra_libs ++ other_flags\n\n -- probably _stub.o files\n let extra_ld_inputs = ldInputs dflags\n\n case os of\n OSMinGW32 -> do\n -------------------------------------------------------------\n -- Making a DLL\n -------------------------------------------------------------\n let output_fn = case o_file of\n Just s -> s\n Nothing -> \"HSdll.dll\"\n\n runLink dflags (\n map Option verbFlags\n ++ [ Option \"-o\"\n , FileOption \"\" output_fn\n , Option \"-shared\"\n ] ++\n [ FileOption \"-Wl,--out-implib=\" (output_fn ++ \".a\")\n | gopt Opt_SharedImplib dflags\n ]\n ++ map (FileOption \"\") o_files\n\n -- Permit the linker to auto link _symbol to _imp_symbol\n -- This lets us link against DLLs without needing an \"import library\"\n ++ [Option \"-Wl,--enable-auto-import\"]\n\n ++ extra_ld_inputs\n ++ map Option (\n lib_path_opts\n ++ pkg_lib_path_opts\n ++ pkg_link_opts\n ))\n OSDarwin -> do\n -------------------------------------------------------------------\n -- Making a darwin dylib\n -------------------------------------------------------------------\n -- About the options used for Darwin:\n -- -dynamiclib\n -- Apple's way of saying -shared\n -- -undefined dynamic_lookup:\n -- Without these options, we'd have to specify the correct\n -- dependencies for each of the dylibs. Note that we could\n -- (and should) do without this for all libraries except\n -- the RTS; all we need to do is to pass the correct\n -- HSfoo_dyn.dylib files to the link command.\n -- This feature requires Mac OS X 10.3 or later; there is\n -- a similar feature, -flat_namespace -undefined suppress,\n -- which works on earlier versions, but it has other\n -- disadvantages.\n -- -single_module\n -- Build the dynamic library as a single \"module\", i.e. no\n -- dynamic binding nonsense when referring to symbols from\n -- within the library. 
The NCG assumes that this option is\n -- specified (on i386, at least).\n -- -install_name\n -- Mac OS\/X stores the path where a dynamic library is (to\n -- be) installed in the library itself. It's called the\n -- \"install name\" of the library. Then any library or\n -- executable that links against it before it's installed\n -- will search for it in its ultimate install location.\n -- By default we set the install name to the absolute path\n -- at build time, but it can be overridden by the\n -- -dylib-install-name option passed to ghc. Cabal does\n -- this.\n -------------------------------------------------------------------\n\n let output_fn = case o_file of { Just s -> s; Nothing -> \"a.out\"; }\n\n instName <- case dylibInstallName dflags of\n Just n -> return n\n Nothing -> return $ \"@rpath\" `combine` (takeFileName output_fn)\n runLink dflags (\n map Option verbFlags\n ++ [ Option \"-dynamiclib\"\n , Option \"-o\"\n , FileOption \"\" output_fn\n ]\n ++ map Option o_files\n ++ [ Option \"-undefined\",\n Option \"dynamic_lookup\",\n Option \"-single_module\" ]\n ++ (if platformArch platform == ArchX86_64\n then [ ]\n else [ Option \"-Wl,-read_only_relocs,suppress\" ])\n ++ [ Option \"-install_name\", Option instName ]\n ++ map Option lib_path_opts\n ++ extra_ld_inputs\n ++ map Option pkg_lib_path_opts\n ++ map Option pkg_link_opts\n )\n OSiOS -> throwGhcExceptionIO (ProgramError \"dynamic libraries are not supported on iOS target\")\n _ -> do\n -------------------------------------------------------------------\n -- Making a DSO\n -------------------------------------------------------------------\n\n let output_fn = case o_file of { Just s -> s; Nothing -> \"a.out\"; }\n let buildingRts = thisPackage dflags == rtsPackageId\n let bsymbolicFlag = if buildingRts\n then -- -Bsymbolic breaks the way we implement\n -- hooks in the RTS\n []\n else -- we need symbolic linking to resolve\n -- non-PIC intra-package-relocations\n [\"-Wl,-Bsymbolic\"]\n\n runLink 
dflags (\n map Option verbFlags\n ++ [ Option \"-o\"\n , FileOption \"\" output_fn\n ]\n ++ map Option o_files\n ++ [ Option \"-shared\" ]\n ++ map Option bsymbolicFlag\n -- Set the library soname. We use -h rather than -soname as\n -- Solaris 10 doesn't support the latter:\n ++ [ Option (\"-Wl,-h,\" ++ takeFileName output_fn) ]\n ++ map Option lib_path_opts\n ++ extra_ld_inputs\n ++ map Option pkg_lib_path_opts\n ++ map Option pkg_link_opts\n )\n\\end{code}\n","avg_line_length":40.0207914152,"max_line_length":117,"alphanum_fraction":0.5611268455} {"size":9545,"ext":"lhs","lang":"Literate Haskell","max_stars_count":2.0,"content":"\r\n|\r\nModule : Database.Util\r\nCopyright : (c) 2004 Oleg Kiselyov, Alistair Bayley\r\nLicense : BSD-style\r\nMaintainer : oleg@pobox.com, alistair@abayley.org\r\nStability : experimental\r\nPortability : non-portable\r\n\r\nUtility functions. Mostly used in database back-ends, and tests.\r\n\r\n\r\n> {-# LANGUAGE TypeSynonymInstances #-}\r\n> {-# LANGUAGE FlexibleInstances #-}\r\n> {-# LANGUAGE OverlappingInstances #-}\r\n> {-# LANGUAGE UndecidableInstances #-}\r\n\r\n> module Database.Util where\r\n\r\n> import System.Time\r\n> import Control.Monad.Trans (liftIO)\r\n> import Control.Monad.Reader\r\n> import Data.Int\r\n> import Data.List\r\n> import Data.Char\r\n> import Data.Time\r\n> import Data.Word (Word8)\r\n> import Foreign.Ptr (Ptr, castPtr)\r\n> import Foreign.Marshal.Array (peekArray)\r\n> import Numeric (showHex)\r\n> import Text.Printf\r\n\r\n\r\nMyShow requires overlapping AND undecidable instances.\r\n\r\n> class Show a => MyShow a where show_ :: a -> String\r\n> instance MyShow String where show_ s = s\r\n> instance (Show a) => MyShow a where show_ s = show s\r\n\r\n| Like 'System.IO.print', except that Strings are not escaped or quoted.\r\n\r\n> print_ :: (MonadIO m, MyShow a) => a -> m ()\r\n> print_ s = liftIO (putStrLn (show_ s))\r\n\r\n| Convenience for making UTCTimes. 
Assumes the time given is already UTC time\r\ni.e. there's no timezone adjustment.\r\n\r\n> mkUTCTime :: (Integral a, Real b) => a -> a -> a -> a -> a -> b -> UTCTime\r\n> mkUTCTime year month day hour minute second =\r\n> localTimeToUTC (hoursToTimeZone 0)\r\n> (LocalTime\r\n> (fromGregorian (fromIntegral year) (fromIntegral month) (fromIntegral day))\r\n> (TimeOfDay (fromIntegral hour) (fromIntegral minute) (realToFrac second)))\r\n\r\n> mkCalTime :: Integral a => a -> a -> a -> a -> a -> a -> CalendarTime\r\n> mkCalTime year month day hour minute second =\r\n> CalendarTime\r\n> { ctYear = fromIntegral year\r\n> , ctMonth = toEnum (fromIntegral month - 1)\r\n> , ctDay = fromIntegral day\r\n> , ctHour = fromIntegral hour\r\n> , ctMin = fromIntegral minute\r\n> , ctSec = fromIntegral second\r\n> , ctPicosec = 0\r\n> , ctWDay = Sunday\r\n> , ctYDay = -1\r\n> , ctTZName = \"UTC\"\r\n> , ctTZ = 0\r\n> , ctIsDST = False\r\n> }\r\n\r\n\r\n20040822073512\r\n 10000000000 (10 ^ 10) * year\r\n 100000000 (10 ^ 8) * month\r\n 1000000 (10 ^ 6) * day\r\n 10000 (10^4) * hour\r\n\r\nUse quot and rem, \/not\/ div and mod,\r\nso that we get sensible behaviour for -ve numbers.\r\n\r\n> int64ToDateParts :: Int64 -> (Int64, Int64, Int64, Int64, Int64, Int64)\r\n> int64ToDateParts i =\r\n> let\r\n> year1 = (i `quot` 10000000000)\r\n> month = ((abs i) `rem` 10000000000) `quot` 100000000\r\n> day = ((abs i) `rem` 100000000) `quot` 1000000\r\n> hour = ((abs i) `rem` 1000000) `quot` 10000\r\n> minute = ((abs i) `rem` 10000) `quot` 100\r\n> second = ((abs i) `rem` 100)\r\n> in (year1, month, day, hour, minute, second)\r\n\r\n> datePartsToInt64 ::\r\n> (Integral a1, Integral a2, Integral a3, Integral a4, Integral a5, Integral a6)\r\n> => (a1, a2, a3, a4, a5, a6) -> Int64 \r\n> datePartsToInt64 (year, month, day, hour, minute, second) =\r\n> let\r\n> yearm :: Int64\r\n> yearm = 10000000000\r\n> sign :: Int64\r\n> sign = if year < 0 then -1 else 1\r\n> in yearm * fromIntegral year\r\n> + sign 
* 100000000 * fromIntegral month\r\n> + sign * 1000000 * fromIntegral day\r\n> + sign * 10000 * fromIntegral hour\r\n> + sign * 100 * fromIntegral minute\r\n> + sign * fromIntegral second\r\n\r\n\r\n> calTimeToInt64 :: CalendarTime -> Int64\r\n> calTimeToInt64 ct =\r\n> datePartsToInt64\r\n> ( ctYear ct, fromEnum (ctMonth ct) + 1, ctDay ct\r\n> , ctHour ct, ctMin ct, ctSec ct)\r\n\r\n> utcTimeToInt64 utc =\r\n> let\r\n> (LocalTime ltday time) = utcToLocalTime (hoursToTimeZone 0) utc\r\n> (TimeOfDay hour minute second) = time\r\n> (year, month, day) = toGregorian ltday\r\n> in datePartsToInt64 (year, month, day, hour, minute, round second)\r\n\r\n\r\n> int64ToCalTime :: Int64 -> CalendarTime\r\n> int64ToCalTime i =\r\n> let (year, month, day, hour, minute, second) = int64ToDateParts i\r\n> in mkCalTime year month day hour minute second\r\n\r\n> int64ToUTCTime :: Int64 -> UTCTime\r\n> int64ToUTCTime i =\r\n> let (year, month, day, hour, minute, second) = int64ToDateParts i\r\n> in mkUTCTime year month day hour minute second\r\n\r\n\r\n> zeroPad n i =\r\n> if i < 0\r\n> then \"-\" ++ (zeroPad n (abs i))\r\n> else take (n - length (show i)) (repeat '0') ++ show i\r\n\r\n> substr i n s = take n (drop (i-1) s)\r\n\r\n> wordsBy :: (Char -> Bool) -> String -> [String] \r\n> wordsBy pred s = skipNonMatch pred s\r\n\r\n2 states:\r\n - skipNonMatch is for when we are looking for the start of our next word\r\n - scanWord is for when we are currently scanning a word\r\n\r\n> skipNonMatch :: (Char -> Bool) -> String -> [String] \r\n> skipNonMatch pred \"\" = []\r\n> skipNonMatch pred (c:cs)\r\n> | pred c = scanWord pred cs [c]\r\n> | otherwise = skipNonMatch pred cs\r\n\r\n> scanWord pred \"\" acc = [reverse acc]\r\n> scanWord pred (c:cs) acc\r\n> | pred c = scanWord pred cs (c:acc)\r\n> | otherwise = [reverse acc] ++ skipNonMatch pred cs\r\n\r\n\r\n> positions :: Eq a => [a] -> [a] -> [Int]\r\n> positions [] _ = []\r\n> positions s ins = map fst (filter (isPrefixOf s . 
snd) (zip [1..] (tails ins)))\r\n\r\n\r\n 1234567890123456789012345\r\n\"2006-11-24 07:51:49.228+00\"\r\n\"2006-11-24 07:51:49.228\"\r\n\"2006-11-24 07:51:49.228 BC\"\r\n\"2006-11-24 07:51:49+00 BC\"\r\n\r\nFIXME use TZ to specify timezone?\r\nNot necessary, PostgreSQL always seems to output\r\n+00 for timezone. It's already adjusted the time,\r\nI think. Need to test this with different server timezones, though.\r\n\r\n> pgDatetimetoUTCTime :: String -> UTCTime\r\n> pgDatetimetoUTCTime s =\r\n> let (year, month, day, hour, minute, second, tz) = pgDatetimeToParts s\r\n> in mkUTCTime year month day hour minute second\r\n\r\n> isoDatetimeToUTCTime s = pgDatetimetoUTCTime s\r\n\r\n> pgDatetimetoCalTime :: String -> CalendarTime\r\n> pgDatetimetoCalTime s =\r\n> let (year, month, day, hour, minute, second, tz) = pgDatetimeToParts s\r\n> in mkCalTime year month day hour minute (round second)\r\n\r\nisInfixOf is defined in the Data.List that comes with ghc-6.6,\r\nbut it is not in the libs that come with ghc-6.4.1.\r\n\r\n> myIsInfixOf srch list = or (map (isPrefixOf srch) (tails list))\r\n\r\nParses ISO format datetimes, and also the variation that PostgreSQL uses.\r\n\r\n> pgDatetimeToParts :: String -> (Int, Int, Int, Int, Int, Double, Int)\r\n> pgDatetimeToParts s =\r\n> let\r\n> pred c = isAlphaNum c || c == '.'\r\n> ws = wordsBy pred s\r\n> parts :: [Int]; parts = map read (take 5 ws)\r\n> secs :: Double; secs = read (ws!!5)\r\n> hasTZ = myIsInfixOf \"+\" s\r\n> tz :: Int; tz = if hasTZ then read (ws !! 6) else 0\r\n> isBC = myIsInfixOf \"BC\" s\r\n> -- It seems only PostgreSQL uses the AD\/BC suffix.\r\n> -- If BC is present then we need to do something odd with the year.\r\n> year :: Int; year = if isBC then (- ((parts !! 0) - 1)) else parts !! 0\r\n> in (year, (parts !! 1), (parts !! 2)\r\n> , (parts !! 3), (parts !! 
4), secs, tz)\r\n\r\n\r\n> utcTimeToIsoString :: (Integral a, Integral b, Show a, Show b) =>\r\n> UTCTime -> String -> (a -> a) -> (b -> String) -> String\r\n> utcTimeToIsoString utc dtSep adjYear mkSuffix =\r\n> let\r\n> (LocalTime ltday time) = utcToLocalTime (hoursToTimeZone 0) utc\r\n> (TimeOfDay hour minute second) = time\r\n> (year1, month, day) = toGregorian ltday\r\n> suffix = mkSuffix (fromIntegral year1)\r\n> year = adjYear (fromIntegral year1)\r\n> s1 :: Double; s1 = realToFrac second\r\n> secs :: String; secs = printf \"%09.6f\" s1\r\n> in zeroPad 4 year\r\n> ++ \"-\" ++ zeroPad 2 month\r\n> ++ \"-\" ++ zeroPad 2 day\r\n> ++ dtSep ++ zeroPad 2 hour\r\n> ++ \":\" ++ zeroPad 2 minute\r\n> ++ \":\" ++ secs\r\n> ++ \"+00\" ++ suffix\r\n\r\n> utcTimeToPGDatetime :: UTCTime -> String\r\n> utcTimeToPGDatetime utc = utcTimeToIsoString utc \"T\" adjYear mkSuffix\r\n> where \r\n> mkSuffix year1 = if year1 < 1 then \" BC\" else \" AD\"\r\n> adjYear year1 = if year1 < 1 then abs(year1 - 1) else year1\r\n\r\n> utcTimeToIsoDatetime :: UTCTime -> String\r\n> utcTimeToIsoDatetime utc = utcTimeToIsoString utc \"T\" id (const \"Z\") \r\n\r\n> utcTimeToOdbcDatetime :: UTCTime -> String\r\n> utcTimeToOdbcDatetime utc = utcTimeToIsoString utc \" \" id (const \"\")\r\n\r\n\r\n| Assumes CalendarTime is also UTC i.e. 
ignores ctTZ component.\r\n\r\n> calTimeToPGDatetime :: CalendarTime -> String\r\n> calTimeToPGDatetime ct =\r\n> let\r\n> (year1, month, day, hour, minute, second, pico, tzsecs) =\r\n> ( ctYear ct, fromEnum (ctMonth ct) + 1, ctDay ct\r\n> , ctHour ct, ctMin ct, ctSec ct, ctPicosec ct, ctTZ ct)\r\n> suffix = if year1 < 1 then \" BC\" else \" AD\"\r\n> year = if year1 < 1 then abs(year1 - 1) else year1\r\n> s1 :: Double; s1 = realToFrac second + ((fromIntegral pico) \/ (10.0 ^ 12) )\r\n> secs :: String; secs = printf \"%09.6f\" s1\r\n> in zeroPad 4 year\r\n> ++ \"-\" ++ zeroPad 2 month\r\n> ++ \"-\" ++ zeroPad 2 day\r\n> ++ \" \" ++ zeroPad 2 hour\r\n> ++ \":\" ++ zeroPad 2 minute\r\n> ++ \":\" ++ secs\r\n> ++ \"+00\" ++ suffix\r\n\r\n\r\n> printArrayContents :: Int -> Ptr Word8 -> IO ()\r\n> printArrayContents sz ptr = do\r\n> putStrLn (\"printArrayContents: sz = \" ++ show sz)\r\n> l <- peekArray sz ptr\r\n> let\r\n> toHex :: Word8 -> String;\r\n> toHex i = (if i < 16 then \"0\" else \"\") ++ showHex i \"\"\r\n> putStrLn (concat (intersperse \" \" (map toHex l)))\r\n> let\r\n> toChar :: Word8 -> String\r\n> toChar i = if 31 < i && i < 127 then [chr (fromIntegral i)] else \" \"\r\n> putStrLn (concat (intersperse \" \" (map toChar l)))\r\n","avg_line_length":34.3345323741,"max_line_length":84,"alphanum_fraction":0.6200104767} {"size":1308,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"Gentle Introduction to Haskell 98, Online Supplement \nPart 17\nCovers Section 8.4\n\nSection: 8.4 Derived Instances\n\nWe have actually been using the derived Show instances all along for\nprinting out trees and other structures we have defined. 
The code\nin the tutorial for the Eq and Ord instance of Tree is created\nimplicitly by the deriving clause so there is no need to write it\nhere.\n\n> data Tree a = Leaf a | Branch (Tree a) (Tree a) deriving (Eq,Ord,Show)\n\nNow we can fire up both Eq and Ord functions for trees:\n\n> tree1, tree2, tree3, tree4 :: Tree Int\n> tree1 = Branch (Leaf 1) (Leaf 3)\n> tree2 = Branch (Leaf 1) (Leaf 5)\n> tree3 = Leaf 4\n> tree4 = Branch (Branch (Leaf 4) (Leaf 3)) (Leaf 5)\n\n> e1 = tree1 == tree1\n> e2 = tree1 == tree2\n> e3 = tree1 < tree2\n\n> quicksort :: Ord a => [a] -> [a]\n> quicksort [] = []\n> quicksort (x:xs) = quicksort [y | y <- xs, y <= x] ++\n> [x] ++\n> quicksort [y | y <- xs, y > x]\n\n> e4 = quicksort [tree1,tree2,tree3,tree4]\n\nNow for Enum: \n\n> data Day = Sunday | Monday | Tuesday | Wednesday | Thursday |\n> Friday | Saturday deriving (Show,Eq,Ord,Enum)\n\n> e5 = quicksort [Monday,Saturday,Friday,Sunday]\n> e6 = [Wednesday .. Friday]\n> e7 = [Monday, Wednesday ..]\n> e8 = [Saturday, Friday ..]\n\nContinued in part18.lhs\n","avg_line_length":28.4347826087,"max_line_length":72,"alphanum_fraction":0.6414373089} {"size":1974,"ext":"lhs","lang":"Literate Haskell","max_stars_count":111.0,"content":"%if false\n Copyright (c) 2009 ETH Zurich.\n All rights reserved.\n\n This file is distributed under the terms in the attached LICENSE file.\n If you do not find this file, copies can be found by writing to:\n ETH Zurich D-INFK, Universitaetstrasse 6, CH-8092 Zurich. 
Attn: Systems Group.\n%endif\n\n%include polycode.fmt\n\n%if false\n\n> module Libc.Printf where\n\n> import Semantics\n> import Constructs\n> import PureExpressions\n> import {-# SOURCE #-} Expressions\n\n> import IL.FoF.FoF\n\n\n%endif\n\n\\section{Printf}\n\nThe |Printf| constructs is a simple foreign function wrapper around\nthe C library @printf@.\n\n\\subsection{Smart Constructors}\n\nProvided with a format string and a list of parameters, the |printf|\nPcombinator emulates @printf@.\n\n> printf :: String -> [PureExpr] -> FoFCode PureExpr\n> printf format params = inject (Printf format params (return Void))\n\n\\subsection{Compile Instantiation}\n\nCompilation is a natural foreign function call. Note the quoting of\n|format|: we sacrify the semantics of the format string. We could\npossibly apply some tricks to recover it, or to get it in a \"nice\"\nformat thanks to the |printf| combinator. However, for simplicity, we\ndrop its semantics for now.\n\n> compilePrintf (Printf format params r) binding = \n> let (cont, binding1) = r binding in\n> (FStatement (FFFICall \"printf\" ((quote format) : params)) cont, \n> binding1)\n\n\\subsection{Run Instantiation}\n\nFor the reason mentioned above, it is a pain to recover the semantics\nof the @printf@. Hence, we drop its side-effect when interpreting it. \n\n> runPrintf (Printf a b r) heap = r heap\n\nAn esthetically satisfying solution would be to store this (and\nothers) side-effecting operations in a stream, along with its\narguments. Hence, we could compare side-effecting programs by their\nso-called \\emph{trace}. By ignoring the effect of |printf| here, we\nconsider that side-effects have no semantic significance. 
This is kind\nof lie when interpreting an imperative language.\n","avg_line_length":30.3692307692,"max_line_length":80,"alphanum_fraction":0.7558257345} {"size":50621,"ext":"lhs","lang":"Literate Haskell","max_stars_count":1.0,"content":"%\n% (c) The University of Glasgow 2005-2006\n%\n\\begin{code}\n-- | The dynamic linker for GHCi.\n--\n-- This module deals with the top-level issues of dynamic linking,\n-- calling the object-code linker and the byte-code linker where\n-- necessary.\n\n{-# OPTIONS -fno-cse #-}\n-- -fno-cse is needed for GLOBAL_VAR's to behave properly\n\nmodule Linker ( HValue, getHValue, showLinkerState,\n linkExpr, linkDecls, unload, withExtendedLinkEnv,\n extendLinkEnv, deleteFromLinkEnv,\n extendLoadedPkgs,\n linkPackages,initDynLinker,linkModule,\n\n -- Saving\/restoring globals\n PersistentLinkerState, saveLinkerGlobals, restoreLinkerGlobals\n ) where\n\n#include \"HsVersions.h\"\n\nimport LoadIface\nimport ObjLink\nimport ByteCodeLink\nimport ByteCodeItbls\nimport ByteCodeAsm\nimport TcRnMonad\nimport Packages\nimport DriverPhases\nimport Finder\nimport HscTypes\nimport Name\nimport NameEnv\nimport NameSet\nimport UniqFM\nimport Module\nimport ListSetOps\nimport DynFlags\nimport BasicTypes\nimport Outputable\nimport Panic\nimport Util\nimport StaticFlags\nimport ErrUtils\nimport SrcLoc\nimport qualified Maybes\nimport UniqSet\nimport FastString\nimport Config\nimport SysTools\nimport PrelNames\n\n-- Standard libraries\nimport Control.Monad\n\nimport Data.IORef\nimport Data.List\nimport qualified Data.Map as Map\nimport Control.Concurrent.MVar\n\nimport System.FilePath\nimport System.IO\n#if __GLASGOW_HASKELL__ > 704\nimport System.Directory hiding (findFile)\n#else\nimport System.Directory\n#endif\n\nimport Distribution.Package hiding (depends, PackageId)\n\nimport Exception\n\\end{code}\n\n\n%************************************************************************\n%* *\n The Linker's state\n%* 
*\n%************************************************************************\n\nThe persistent linker state *must* match the actual state of the\nC dynamic linker at all times, so we keep it in a private global variable.\n\nThe global IORef used for PersistentLinkerState actually contains another MVar.\nThe reason for this is that we want to allow another loaded copy of the GHC\nlibrary to side-effect the PLS and for those changes to be reflected here.\n\nThe PersistentLinkerState maps Names to actual closures (for\ninterpreted code only), for use during linking.\n\n\\begin{code}\nGLOBAL_VAR_M(v_PersistentLinkerState, newMVar (panic \"Dynamic linker not initialised\"), MVar PersistentLinkerState)\nGLOBAL_VAR(v_InitLinkerDone, False, Bool) -- Set True when dynamic linker is initialised\n\nmodifyPLS_ :: (PersistentLinkerState -> IO PersistentLinkerState) -> IO ()\nmodifyPLS_ f = readIORef v_PersistentLinkerState >>= flip modifyMVar_ f\n\nmodifyPLS :: (PersistentLinkerState -> IO (PersistentLinkerState, a)) -> IO a\nmodifyPLS f = readIORef v_PersistentLinkerState >>= flip modifyMVar f\n\ndata PersistentLinkerState\n = PersistentLinkerState {\n\n -- Current global mapping from Names to their true values\n closure_env :: ClosureEnv,\n\n -- The current global mapping from RdrNames of DataCons to\n -- info table addresses.\n -- When a new Unlinked is linked into the running image, or an existing\n -- module in the image is replaced, the itbl_env must be updated\n -- appropriately.\n itbl_env :: !ItblEnv,\n\n -- The currently loaded interpreted modules (home package)\n bcos_loaded :: ![Linkable],\n\n -- And the currently-loaded compiled modules (home package)\n objs_loaded :: ![Linkable],\n\n -- The currently-loaded packages; always object code\n -- Held, as usual, in dependency order; though I am not sure if\n -- that is really important\n pkgs_loaded :: ![PackageId]\n }\n\nemptyPLS :: DynFlags -> PersistentLinkerState\nemptyPLS _ = PersistentLinkerState {\n closure_env = 
emptyNameEnv,\n itbl_env = emptyNameEnv,\n pkgs_loaded = init_pkgs,\n bcos_loaded = [],\n objs_loaded = [] }\n\n -- Packages that don't need loading, because the compiler\n -- shares them with the interpreted program.\n --\n -- The linker's symbol table is populated with RTS symbols using an\n -- explicit list. See rts\/Linker.c for details.\n where init_pkgs = [rtsPackageId]\n\n\nextendLoadedPkgs :: [PackageId] -> IO ()\nextendLoadedPkgs pkgs =\n modifyPLS_ $ \\s ->\n return s{ pkgs_loaded = pkgs ++ pkgs_loaded s }\n\nextendLinkEnv :: [(Name,HValue)] -> IO ()\n-- Automatically discards shadowed bindings\nextendLinkEnv new_bindings =\n modifyPLS_ $ \\pls ->\n let new_closure_env = extendClosureEnv (closure_env pls) new_bindings\n in return pls{ closure_env = new_closure_env }\n\ndeleteFromLinkEnv :: [Name] -> IO ()\ndeleteFromLinkEnv to_remove =\n modifyPLS_ $ \\pls ->\n let new_closure_env = delListFromNameEnv (closure_env pls) to_remove\n in return pls{ closure_env = new_closure_env }\n\n-- | Get the 'HValue' associated with the given name.\n--\n-- May cause loading the module that contains the name.\n--\n-- Throws a 'ProgramError' if loading fails or the name cannot be found.\ngetHValue :: HscEnv -> Name -> IO HValue\ngetHValue hsc_env name = do\n initDynLinker (hsc_dflags hsc_env)\n pls <- modifyPLS $ \\pls -> do\n if (isExternalName name) then do\n (pls', ok) <- linkDependencies hsc_env pls noSrcSpan [nameModule name]\n if (failed ok) then ghcError (ProgramError \"\")\n else return (pls', pls')\n else\n return (pls, pls)\n lookupName (closure_env pls) name\n\nlinkDependencies :: HscEnv -> PersistentLinkerState\n -> SrcSpan -> [Module]\n -> IO (PersistentLinkerState, SuccessFlag)\nlinkDependencies hsc_env pls span needed_mods = do\n-- initDynLinker (hsc_dflags hsc_env)\n let hpt = hsc_HPT hsc_env\n dflags = hsc_dflags hsc_env\n -- The interpreter and dynamic linker can only handle object code built\n -- the \"normal\" way, i.e. 
no non-std ways like profiling or ticky-ticky.\n -- So here we check the build tag: if we're building a non-standard way\n -- then we need to find & link object files built the \"normal\" way.\n maybe_normal_osuf <- checkNonStdWay dflags span\n\n -- Find what packages and linkables are required\n (lnks, pkgs) <- getLinkDeps hsc_env hpt pls\n maybe_normal_osuf span needed_mods\n\n -- Link the packages and modules required\n pls1 <- linkPackages' dflags pkgs pls\n linkModules dflags pls1 lnks\n\n\n-- | Temporarily extend the linker state.\n\nwithExtendedLinkEnv :: (MonadIO m, ExceptionMonad m) =>\n [(Name,HValue)] -> m a -> m a\nwithExtendedLinkEnv new_env action\n = gbracket (liftIO $ extendLinkEnv new_env)\n (\\_ -> reset_old_env)\n (\\_ -> action)\n where\n -- Remember that the linker state might be side-effected\n -- during the execution of the IO action, and we don't want to\n -- lose those changes (we might have linked a new module or\n -- package), so the reset action only removes the names we\n -- added earlier.\n reset_old_env = liftIO $ do\n modifyPLS_ $ \\pls ->\n let cur = closure_env pls\n new = delListFromNameEnv cur (map fst new_env)\n in return pls{ closure_env = new }\n\n-- filterNameMap removes from the environment all entries except\n-- those for a given set of modules;\n-- Note that this removes all *local* (i.e. 
non-isExternal) names too\n-- (these are the temporary bindings from the command line).\n-- Used to filter both the ClosureEnv and ItblEnv\n\nfilterNameMap :: [Module] -> NameEnv (Name, a) -> NameEnv (Name, a)\nfilterNameMap mods env\n = filterNameEnv keep_elt env\n where\n keep_elt (n,_) = isExternalName n\n && (nameModule n `elem` mods)\n\n\n-- | Display the persistent linker state.\nshowLinkerState :: DynFlags -> IO ()\nshowLinkerState dflags\n = do pls <- readIORef v_PersistentLinkerState >>= readMVar\n log_action dflags dflags SevDump noSrcSpan defaultDumpStyle\n (vcat [text \"----- Linker state -----\",\n text \"Pkgs:\" <+> ppr (pkgs_loaded pls),\n text \"Objs:\" <+> ppr (objs_loaded pls),\n text \"BCOs:\" <+> ppr (bcos_loaded pls)])\n\\end{code}\n\n\n%************************************************************************\n%* *\n\\subsection{Initialisation}\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | Initialise the dynamic linker. This entails\n--\n-- a) Calling the C initialisation procedure,\n--\n-- b) Loading any packages specified on the command line,\n--\n-- c) Loading any packages specified on the command line, now held in the\n-- @-l@ options in @v_Opt_l@,\n--\n-- d) Loading any @.o\\\/.dll@ files specified on the command line, now held\n-- in @v_Ld_inputs@,\n--\n-- e) Loading any MacOS frameworks.\n--\n-- NOTE: This function is idempotent; if called more than once, it does\n-- nothing. 
This is useful in Template Haskell, where we call it before\n-- trying to link.\n--\ninitDynLinker :: DynFlags -> IO ()\ninitDynLinker dflags =\n modifyPLS_ $ \\pls0 -> do\n done <- readIORef v_InitLinkerDone\n if done then return pls0\n else do writeIORef v_InitLinkerDone True\n reallyInitDynLinker dflags\n\nreallyInitDynLinker :: DynFlags -> IO PersistentLinkerState\nreallyInitDynLinker dflags =\n do { -- Initialise the linker state\n let pls0 = emptyPLS dflags\n\n -- (a) initialise the C dynamic linker\n ; initObjLinker\n\n -- (b) Load packages from the command-line (Note [preload packages])\n ; pls <- linkPackages' dflags (preloadPackages (pkgState dflags)) pls0\n\n -- (c) Link libraries from the command-line\n ; let optl = getOpts dflags opt_l\n ; let minus_ls = [ lib | '-':'l':lib <- optl ]\n ; let lib_paths = libraryPaths dflags\n ; libspecs <- mapM (locateLib dflags False lib_paths) minus_ls\n\n -- (d) Link .o files from the command-line\n ; cmdline_ld_inputs <- readIORef v_Ld_inputs\n\n ; classified_ld_inputs <- mapM (classifyLdInput dflags) cmdline_ld_inputs\n\n -- (e) Link any MacOS frameworks\n ; let framework_paths\n | isDarwinTarget = frameworkPaths dflags\n | otherwise = []\n ; let frameworks\n | isDarwinTarget = cmdlineFrameworks dflags\n | otherwise = []\n -- Finally do (c),(d),(e)\n ; let cmdline_lib_specs = [ l | Just l <- classified_ld_inputs ]\n ++ libspecs\n ++ map Framework frameworks\n ; if null cmdline_lib_specs then return pls\n else do\n\n { mapM_ (preloadLib dflags lib_paths framework_paths) cmdline_lib_specs\n ; maybePutStr dflags \"final link ... \"\n ; ok <- resolveObjs\n\n ; if succeeded ok then maybePutStrLn dflags \"done\"\n else ghcError (ProgramError \"linking extra libraries\/objects failed\")\n\n ; return pls\n }}\n\n\n{- Note [preload packages]\n\nWhy do we need to preload packages from the command line? 
This is an\nexplanation copied from #2437:\n\nI tried to implement the suggestion from #3560, thinking it would be\neasy, but there are two reasons we link in packages eagerly when they\nare mentioned on the command line:\n\n * So that you can link in extra object files or libraries that\n depend on the packages. e.g. ghc -package foo -lbar where bar is a\n C library that depends on something in foo. So we could link in\n foo eagerly if and only if there are extra C libs or objects to\n link in, but....\n\n * Haskell code can depend on a C function exported by a package, and\n the normal dependency tracking that TH uses can't know about these\n dependencies. The test ghcilink004 relies on this, for example.\n\nI conclude that we need two -package flags: one that says \"this is a\npackage I want to make available\", and one that says \"this is a\npackage I want to link in eagerly\". Would that be too complicated for\nusers?\n-}\n\nclassifyLdInput :: DynFlags -> FilePath -> IO (Maybe LibrarySpec)\nclassifyLdInput dflags f\n | isObjectFilename f = return (Just (Object f))\n | isDynLibFilename f = return (Just (DLLPath f))\n | otherwise = do\n log_action dflags dflags SevInfo noSrcSpan defaultUserStyle\n (text (\"Warning: ignoring unrecognised input `\" ++ f ++ \"'\"))\n return Nothing\n\npreloadLib :: DynFlags -> [String] -> [String] -> LibrarySpec -> IO ()\npreloadLib dflags lib_paths framework_paths lib_spec\n = do maybePutStr dflags (\"Loading object \" ++ showLS lib_spec ++ \" ... 
\")\n case lib_spec of\n Object static_ish\n -> do b <- preload_static lib_paths static_ish\n maybePutStrLn dflags (if b then \"done\"\n else \"not found\")\n\n Archive static_ish\n -> do b <- preload_static_archive lib_paths static_ish\n maybePutStrLn dflags (if b then \"done\"\n else \"not found\")\n\n DLL dll_unadorned\n -> do maybe_errstr <- loadDLL (mkSOName dll_unadorned)\n case maybe_errstr of\n Nothing -> maybePutStrLn dflags \"done\"\n Just mm -> preloadFailed mm lib_paths lib_spec\n\n DLLPath dll_path\n -> do maybe_errstr <- loadDLL dll_path\n case maybe_errstr of\n Nothing -> maybePutStrLn dflags \"done\"\n Just mm -> preloadFailed mm lib_paths lib_spec\n\n Framework framework\n | isDarwinTarget\n -> do maybe_errstr <- loadFramework framework_paths framework\n case maybe_errstr of\n Nothing -> maybePutStrLn dflags \"done\"\n Just mm -> preloadFailed mm framework_paths lib_spec\n | otherwise -> panic \"preloadLib Framework\"\n\n where\n preloadFailed :: String -> [String] -> LibrarySpec -> IO ()\n preloadFailed sys_errmsg paths spec\n = do maybePutStr dflags \"failed.\\n\"\n ghcError $\n CmdLineError (\n \"user specified .o\/.so\/.DLL could not be loaded (\"\n ++ sys_errmsg ++ \")\\nWhilst trying to load: \"\n ++ showLS spec ++ \"\\nAdditional directories searched:\"\n ++ (if null paths then \" (none)\" else\n (concat (intersperse \"\\n\" (map (\" \"++) paths)))))\n\n -- Not interested in the paths in the static case.\n preload_static _paths name\n = do b <- doesFileExist name\n if not b then return False\n else loadObj name >> return True\n preload_static_archive _paths name\n = do b <- doesFileExist name\n if not b then return False\n else loadArchive name >> return True\n\\end{code}\n\n\n%************************************************************************\n%* *\n Link a byte-code expression\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | Link a single expression, \/including\/ first linking 
packages and\n-- modules that this expression depends on.\n--\n-- Raises an IO exception ('ProgramError') if it can't find a compiled\n-- version of the dependents to link.\n--\nlinkExpr :: HscEnv -> SrcSpan -> UnlinkedBCO -> IO HValue\nlinkExpr hsc_env span root_ul_bco\n = do {\n -- Initialise the linker (if it's not been done already)\n let dflags = hsc_dflags hsc_env\n ; initDynLinker dflags\n\n -- Take lock for the actual work.\n ; modifyPLS $ \\pls0 -> do {\n\n -- Link the packages and modules required\n ; (pls, ok) <- linkDependencies hsc_env pls0 span needed_mods\n ; if failed ok then\n ghcError (ProgramError \"\")\n else do {\n\n -- Link the expression itself\n let ie = itbl_env pls\n ce = closure_env pls\n\n -- Link the necessary packages and linkables\n ; (_, (root_hval:_)) <- linkSomeBCOs False ie ce [root_ul_bco]\n ; return (pls, root_hval)\n }}}\n where\n free_names = nameSetToList (bcoFreeNames root_ul_bco)\n\n needed_mods :: [Module]\n needed_mods = [ nameModule n | n <- free_names,\n isExternalName n, -- Names from other modules\n not (isWiredInName n) -- Exclude wired-in names\n ] -- (see note below)\n -- Exclude wired-in names because we may not have read\n -- their interface files, so getLinkDeps will fail\n -- All wired-in names are in the base package, which we link\n -- by default, so we can safely ignore them here.\n\ndieWith :: DynFlags -> SrcSpan -> MsgDoc -> IO a\ndieWith dflags span msg = ghcError (ProgramError (showSDoc dflags (mkLocMessage SevFatal span msg)))\n\n\ncheckNonStdWay :: DynFlags -> SrcSpan -> IO Bool\ncheckNonStdWay dflags srcspan = do\n let tag = buildTag dflags\n if null tag {- || tag == \"dyn\" -} then return False else do\n -- see #3604: object files compiled for way \"dyn\" need to link to the\n -- dynamic packages, so we can't load them into a statically-linked GHCi.\n -- we have to treat \"dyn\" in the same way as \"prof\".\n --\n -- In the future when GHCi is dynamically linked we should be able to relax\n -- this, 
but then we may have to make it possible to load either ordinary\n -- .o files or -dynamic .o files into GHCi (currently that's not possible\n -- because the dynamic objects contain refs to e.g. __stginit_base_Prelude_dyn\n -- whereas we have __stginit_base_Prelude_.\n if (objectSuf dflags == normalObjectSuffix)\n then failNonStd dflags srcspan\n else return True\n\nnormalObjectSuffix :: String\nnormalObjectSuffix = phaseInputExt StopLn\n\nfailNonStd :: DynFlags -> SrcSpan -> IO Bool\nfailNonStd dflags srcspan = dieWith dflags srcspan $\n ptext (sLit \"Dynamic linking required, but this is a non-standard build (eg. prof).\") $$\n ptext (sLit \"You need to build the program twice: once the normal way, and then\") $$\n ptext (sLit \"in the desired way using -osuf to set the object file suffix.\")\n\n\ngetLinkDeps :: HscEnv -> HomePackageTable\n -> PersistentLinkerState\n -> Bool -- replace object suffixes?\n -> SrcSpan -- for error messages\n -> [Module] -- If you need these\n -> IO ([Linkable], [PackageId]) -- ... then link these first\n-- Fails with an IO exception if it can't find enough files\n\ngetLinkDeps hsc_env hpt pls replace_osuf span mods\n-- Find all the packages and linkables that a set of modules depends on\n = do {\n -- 1. Find the dependent home-pkg-modules\/packages from each iface\n -- (omitting iINTERACTIVE, which is already linked)\n (mods_s, pkgs_s) <- follow_deps (filter ((\/=) iNTERACTIVE) mods)\n emptyUniqSet emptyUniqSet;\n\n let {\n -- 2. Exclude ones already linked\n -- Main reason: avoid findModule calls in get_linkable\n mods_needed = mods_s `minusList` linked_mods ;\n pkgs_needed = pkgs_s `minusList` pkgs_loaded pls ;\n\n linked_mods = map (moduleName.linkableModule)\n (objs_loaded pls ++ bcos_loaded pls)\n } ;\n\n -- 3. 
For each dependent module, find its linkable\n -- This will either be in the HPT or (in the case of one-shot\n -- compilation) we may need to use maybe_getFileLinkable\n let { osuf = objectSuf dflags } ;\n lnks_needed <- mapM (get_linkable osuf replace_osuf) mods_needed ;\n\n return (lnks_needed, pkgs_needed) }\n where\n dflags = hsc_dflags hsc_env\n this_pkg = thisPackage dflags\n\n -- The ModIface contains the transitive closure of the module dependencies\n -- within the current package, *except* for boot modules: if we encounter\n -- a boot module, we have to find its real interface and discover the\n -- dependencies of that. Hence we need to traverse the dependency\n -- tree recursively. See bug #936, testcase ghci\/prog007.\n follow_deps :: [Module] -- modules to follow\n -> UniqSet ModuleName -- accum. module dependencies\n -> UniqSet PackageId -- accum. package dependencies\n -> IO ([ModuleName], [PackageId]) -- result\n follow_deps [] acc_mods acc_pkgs\n = return (uniqSetToList acc_mods, uniqSetToList acc_pkgs)\n follow_deps (mod:mods) acc_mods acc_pkgs\n = do\n mb_iface <- initIfaceCheck hsc_env $\n loadInterface msg mod (ImportByUser False)\n iface <- case mb_iface of\n Maybes.Failed err -> ghcError (ProgramError (showSDoc dflags err))\n Maybes.Succeeded iface -> return iface\n\n when (mi_boot iface) $ link_boot_mod_error mod\n\n let\n pkg = modulePackageId mod\n deps = mi_deps iface\n\n pkg_deps = dep_pkgs deps\n (boot_deps, mod_deps) = partitionWith is_boot (dep_mods deps)\n where is_boot (m,True) = Left m\n is_boot (m,False) = Right m\n\n boot_deps' = filter (not . 
(`elementOfUniqSet` acc_mods)) boot_deps\n acc_mods' = addListToUniqSet acc_mods (moduleName mod : mod_deps)\n acc_pkgs' = addListToUniqSet acc_pkgs $ map fst pkg_deps\n --\n if pkg \/= this_pkg\n then follow_deps mods acc_mods (addOneToUniqSet acc_pkgs' pkg)\n else follow_deps (map (mkModule this_pkg) boot_deps' ++ mods)\n acc_mods' acc_pkgs'\n where\n msg = text \"need to link module\" <+> ppr mod <+>\n text \"due to use of Template Haskell\"\n\n\n link_boot_mod_error mod =\n ghcError (ProgramError (showSDoc dflags (\n text \"module\" <+> ppr mod <+>\n text \"cannot be linked; it is only available as a boot module\")))\n\n no_obj :: Outputable a => a -> IO b\n no_obj mod = dieWith dflags span $\n ptext (sLit \"cannot find object file for module \") <>\n quotes (ppr mod) $$\n while_linking_expr\n\n while_linking_expr = ptext (sLit \"while linking an interpreted expression\")\n\n -- This one is a build-system bug\n\n get_linkable osuf replace_osuf mod_name -- A home-package module\n | Just mod_info <- lookupUFM hpt mod_name\n = adjust_linkable (Maybes.expectJust \"getLinkDeps\" (hm_linkable mod_info))\n | otherwise\n = do -- It's not in the HPT because we are in one shot mode,\n -- so use the Finder to get a ModLocation...\n mb_stuff <- findHomeModule hsc_env mod_name\n case mb_stuff of\n Found loc mod -> found loc mod\n _ -> no_obj mod_name\n where\n found loc mod = do {\n -- ...and then find the linkable for it\n mb_lnk <- findObjectLinkableMaybe mod loc ;\n case mb_lnk of {\n Nothing -> no_obj mod ;\n Just lnk -> adjust_linkable lnk\n }}\n\n adjust_linkable lnk\n | replace_osuf = do\n new_uls <- mapM adjust_ul (linkableUnlinked lnk)\n return lnk{ linkableUnlinked=new_uls }\n | otherwise =\n return lnk\n\n adjust_ul (DotO file) = do\n MASSERT (osuf `isSuffixOf` file)\n let new_file = reverse (drop (length osuf + 1) (reverse file))\n <.> normalObjectSuffix\n ok <- doesFileExist new_file\n if (not ok)\n then dieWith dflags span $\n ptext (sLit \"cannot find normal 
object file \")\n <> quotes (text new_file) $$ while_linking_expr\n else return (DotO new_file)\n adjust_ul _ = panic \"adjust_ul\"\n\\end{code}\n\n\n%************************************************************************\n%* *\n Loading a Decls statement\n%* *\n%************************************************************************\n\\begin{code}\nlinkDecls :: HscEnv -> SrcSpan -> CompiledByteCode -> IO () --[HValue]\nlinkDecls hsc_env span (ByteCode unlinkedBCOs itblEnv) = do\n -- Initialise the linker (if it's not been done already)\n let dflags = hsc_dflags hsc_env\n initDynLinker dflags\n\n -- Take lock for the actual work.\n modifyPLS $ \\pls0 -> do\n\n -- Link the packages and modules required\n (pls, ok) <- linkDependencies hsc_env pls0 span needed_mods\n if failed ok\n then ghcError (ProgramError \"\")\n else do\n\n -- Link the expression itself\n let ie = plusNameEnv (itbl_env pls) itblEnv\n ce = closure_env pls\n\n -- Link the necessary packages and linkables\n (final_gce, _) <- linkSomeBCOs False ie ce unlinkedBCOs\n let pls2 = pls { closure_env = final_gce,\n itbl_env = ie }\n return (pls2, ()) --hvals)\n where\n free_names = concatMap (nameSetToList . 
bcoFreeNames) unlinkedBCOs\n\n needed_mods :: [Module]\n needed_mods = [ nameModule n | n <- free_names,\n isExternalName n, -- Names from other modules\n not (isWiredInName n) -- Exclude wired-in names\n ] -- (see note below)\n -- Exclude wired-in names because we may not have read\n -- their interface files, so getLinkDeps will fail\n -- All wired-in names are in the base package, which we link\n -- by default, so we can safely ignore them here.\n\\end{code}\n\n\n\n%************************************************************************\n%* *\n Loading a single module\n%* *\n%************************************************************************\n\n\\begin{code}\nlinkModule :: HscEnv -> Module -> IO ()\nlinkModule hsc_env mod = do\n initDynLinker (hsc_dflags hsc_env)\n modifyPLS_ $ \\pls -> do\n (pls', ok) <- linkDependencies hsc_env pls noSrcSpan [mod]\n if (failed ok) then ghcError (ProgramError \"could not link module\")\n else return pls'\n\\end{code}\n\n%************************************************************************\n%* *\n Link some linkables\n The linkables may consist of a mixture of\n byte-code modules and object modules\n%* *\n%************************************************************************\n\n\\begin{code}\nlinkModules :: DynFlags -> PersistentLinkerState -> [Linkable]\n -> IO (PersistentLinkerState, SuccessFlag)\nlinkModules dflags pls linkables\n = mask_ $ do -- don't want to be interrupted by ^C in here\n\n let (objs, bcos) = partition isObjectLinkable\n (concatMap partitionLinkable linkables)\n\n -- Load objects first; they can't depend on BCOs\n (pls1, ok_flag) <- dynLinkObjs dflags pls objs\n\n if failed ok_flag then\n return (pls1, Failed)\n else do\n pls2 <- dynLinkBCOs pls1 bcos\n return (pls2, Succeeded)\n\n\n-- HACK to support f-x-dynamic in the interpreter; no other purpose\npartitionLinkable :: Linkable -> [Linkable]\npartitionLinkable li\n = let li_uls = linkableUnlinked li\n li_uls_obj = filter isObject li_uls\n 
li_uls_bco = filter isInterpretable li_uls\n in\n case (li_uls_obj, li_uls_bco) of\n (_:_, _:_) -> [li {linkableUnlinked=li_uls_obj},\n li {linkableUnlinked=li_uls_bco}]\n _ -> [li]\n\nfindModuleLinkable_maybe :: [Linkable] -> Module -> Maybe Linkable\nfindModuleLinkable_maybe lis mod\n = case [LM time nm us | LM time nm us <- lis, nm == mod] of\n [] -> Nothing\n [li] -> Just li\n _ -> pprPanic \"findModuleLinkable\" (ppr mod)\n\nlinkableInSet :: Linkable -> [Linkable] -> Bool\nlinkableInSet l objs_loaded =\n case findModuleLinkable_maybe objs_loaded (linkableModule l) of\n Nothing -> False\n Just m -> linkableTime l == linkableTime m\n\\end{code}\n\n\n%************************************************************************\n%* *\n\\subsection{The object-code linker}\n%* *\n%************************************************************************\n\n\\begin{code}\ndynLinkObjs :: DynFlags -> PersistentLinkerState -> [Linkable]\n -> IO (PersistentLinkerState, SuccessFlag)\ndynLinkObjs dflags pls objs = do\n -- Load the object files and link them\n let (objs_loaded', new_objs) = rmDupLinkables (objs_loaded pls) objs\n pls1 = pls { objs_loaded = objs_loaded' }\n unlinkeds = concatMap linkableUnlinked new_objs\n\n mapM_ loadObj (map nameOfObject unlinkeds)\n\n -- Link them all together\n ok <- resolveObjs\n\n -- If resolving failed, unload all our\n -- object modules and carry on\n if succeeded ok then do\n return (pls1, Succeeded)\n else do\n pls2 <- unload_wkr dflags [] pls1\n return (pls2, Failed)\n\n\nrmDupLinkables :: [Linkable] -- Already loaded\n -> [Linkable] -- New linkables\n -> ([Linkable], -- New loaded set (including new ones)\n [Linkable]) -- New linkables (excluding dups)\nrmDupLinkables already ls\n = go already [] ls\n where\n go already extras [] = (already, extras)\n go already extras (l:ls)\n | linkableInSet l already = go already extras ls\n | otherwise = go (l:already) (l:extras) 
ls\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{The byte-code linker}\n%* *\n%************************************************************************\n\n\\begin{code}\ndynLinkBCOs :: PersistentLinkerState -> [Linkable] -> IO PersistentLinkerState\ndynLinkBCOs pls bcos = do\n\n let (bcos_loaded', new_bcos) = rmDupLinkables (bcos_loaded pls) bcos\n pls1 = pls { bcos_loaded = bcos_loaded' }\n unlinkeds :: [Unlinked]\n unlinkeds = concatMap linkableUnlinked new_bcos\n\n cbcs :: [CompiledByteCode]\n cbcs = map byteCodeOfObject unlinkeds\n\n\n ul_bcos = [b | ByteCode bs _ <- cbcs, b <- bs]\n ies = [ie | ByteCode _ ie <- cbcs]\n gce = closure_env pls\n final_ie = foldr plusNameEnv (itbl_env pls) ies\n\n (final_gce, _linked_bcos) <- linkSomeBCOs True final_ie gce ul_bcos\n -- XXX What happens to these linked_bcos?\n\n let pls2 = pls1 { closure_env = final_gce,\n itbl_env = final_ie }\n\n return pls2\n\n-- Link a bunch of BCOs and return them + updated closure env.\nlinkSomeBCOs :: Bool -- False <=> add _all_ BCOs to returned closure env\n -- True <=> add only toplevel BCOs to closure env\n -> ItblEnv\n -> ClosureEnv\n -> [UnlinkedBCO]\n -> IO (ClosureEnv, [HValue])\n -- The returned HValues are associated 1-1 with\n -- the incoming unlinked BCOs. Each gives the\n -- value of the corresponding unlinked BCO\n\nlinkSomeBCOs toplevs_only ie ce_in ul_bcos\n = do let nms = map unlinkedBCOName ul_bcos\n hvals <- fixIO\n ( \\ hvs -> let ce_out = extendClosureEnv ce_in (zipLazy nms hvs)\n in mapM (linkBCO ie ce_out) ul_bcos )\n let ce_all_additions = zip nms hvals\n ce_top_additions = filter (isExternalName.fst) ce_all_additions\n ce_additions = if toplevs_only then ce_top_additions\n else ce_all_additions\n ce_out = -- make sure we're not inserting duplicate names into the\n -- closure environment, which leads to trouble.\n ASSERT (all (not . 
(`elemNameEnv` ce_in)) (map fst ce_additions))\n extendClosureEnv ce_in ce_additions\n return (ce_out, hvals)\n\n\\end{code}\n\n\n%************************************************************************\n%* *\n Unload some object modules\n%* *\n%************************************************************************\n\n\\begin{code}\n-- ---------------------------------------------------------------------------\n-- | Unloading old objects ready for a new compilation sweep.\n--\n-- The compilation manager provides us with a list of linkables that it\n-- considers \\\"stable\\\", i.e. won't be recompiled this time around. For\n-- each of the modules currently linked in memory,\n--\n-- * if the linkable is stable (and it's the same one -- the user may have\n-- recompiled the module on the side), we keep it,\n--\n-- * otherwise, we unload it.\n--\n-- * we also implicitly unload all temporary bindings at this point.\n--\nunload :: DynFlags\n -> [Linkable] -- ^ The linkables to *keep*.\n -> IO ()\nunload dflags linkables\n = mask_ $ do -- mask, so we're safe from Ctrl-C in here\n\n -- Initialise the linker (if it's not been done already)\n initDynLinker dflags\n\n new_pls\n <- modifyPLS $ \\pls -> do\n pls1 <- unload_wkr dflags linkables pls\n return (pls1, pls1)\n\n debugTraceMsg dflags 3 (text \"unload: retaining objs\" <+> ppr (objs_loaded new_pls))\n debugTraceMsg dflags 3 (text \"unload: retaining bcos\" <+> ppr (bcos_loaded new_pls))\n return ()\n\nunload_wkr :: DynFlags\n -> [Linkable] -- stable linkables\n -> PersistentLinkerState\n -> IO PersistentLinkerState\n-- Does the core unload business\n-- (the wrapper blocks exceptions and deals with the PLS get and put)\n\nunload_wkr _ linkables pls\n = do let (objs_to_keep, bcos_to_keep) = partition isObjectLinkable linkables\n\n objs_loaded' <- filterM (maybeUnload objs_to_keep) (objs_loaded pls)\n bcos_loaded' <- filterM (maybeUnload bcos_to_keep) (bcos_loaded pls)\n\n let bcos_retained = map linkableModule 
bcos_loaded'\n itbl_env' = filterNameMap bcos_retained (itbl_env pls)\n closure_env' = filterNameMap bcos_retained (closure_env pls)\n new_pls = pls { itbl_env = itbl_env',\n closure_env = closure_env',\n bcos_loaded = bcos_loaded',\n objs_loaded = objs_loaded' }\n\n return new_pls\n where\n maybeUnload :: [Linkable] -> Linkable -> IO Bool\n maybeUnload keep_linkables lnk\n | linkableInSet lnk keep_linkables = return True\n | otherwise\n = do mapM_ unloadObj [f | DotO f <- linkableUnlinked lnk]\n -- The components of a BCO linkable may contain\n -- dot-o files. Which is very confusing.\n --\n -- But the BCO parts can be unlinked just by\n -- letting go of them (plus of course depopulating\n -- the symbol table which is done in the main body)\n return False\n\\end{code}\n\n\n%************************************************************************\n%* *\n Loading packages\n%* *\n%************************************************************************\n\n\n\\begin{code}\ndata LibrarySpec\n = Object FilePath -- Full path name of a .o file, including trailing .o\n -- For dynamic objects only, try to find the object\n -- file in all the directories specified in\n -- v_Library_paths before giving up.\n\n | Archive FilePath -- Full path name of a .a file, including trailing .a\n\n | DLL String -- \"Unadorned\" name of a .DLL\/.so\n -- e.g. 
On unix \"qt\" denotes \"libqt.so\"\n -- On WinDoze \"burble\" denotes \"burble.DLL\"\n -- loadDLL is platform-specific and adds the lib\/.so\/.DLL\n -- suffixes platform-dependently\n\n | DLLPath FilePath -- Absolute or relative pathname to a dynamic library\n -- (ends with .dll or .so).\n\n | Framework String -- Only used for darwin, but does no harm\n\n-- If this package is already part of the GHCi binary, we'll already\n-- have the right DLLs for this package loaded, so don't try to\n-- load them again.\n--\n-- But on Win32 we must load them 'again'; doing so is a harmless no-op\n-- as far as the loader is concerned, but it does initialise the list\n-- of DLL handles that rts\/Linker.c maintains, and that in turn is\n-- used by lookupSymbol. So we must call addDLL for each library\n-- just to get the DLL handle into the list.\npartOfGHCi :: [PackageName]\npartOfGHCi\n | isWindowsTarget || isDarwinTarget = []\n | otherwise = map PackageName\n [\"base\", \"template-haskell\", \"editline\"]\n\nshowLS :: LibrarySpec -> String\nshowLS (Object nm) = \"(static) \" ++ nm\nshowLS (Archive nm) = \"(static archive) \" ++ nm\nshowLS (DLL nm) = \"(dynamic) \" ++ nm\nshowLS (DLLPath nm) = \"(dynamic) \" ++ nm\nshowLS (Framework nm) = \"(framework) \" ++ nm\n\n-- | Link exactly the specified packages, and their dependents (unless of\n-- course they are already linked). The dependents are linked\n-- automatically, and it doesn't matter what order you specify the input\n-- packages.\n--\nlinkPackages :: DynFlags -> [PackageId] -> IO ()\n-- NOTE: in fact, since each module tracks all the packages it depends on,\n-- we don't really need to use the package-config dependencies.\n--\n-- However we do need the package-config stuff (to find aux libs etc),\n-- and following them lets us load libraries in the right order, which\n-- perhaps makes the error message a bit more localised if we get a link\n-- failure. 
So the dependency walking code is still here.\n\nlinkPackages dflags new_pkgs = do\n -- It's probably not safe to try to load packages concurrently, so we take\n -- a lock.\n initDynLinker dflags\n modifyPLS_ $ \\pls -> do\n linkPackages' dflags new_pkgs pls\n\nlinkPackages' :: DynFlags -> [PackageId] -> PersistentLinkerState\n -> IO PersistentLinkerState\nlinkPackages' dflags new_pks pls = do\n pkgs' <- link (pkgs_loaded pls) new_pks\n return $! pls { pkgs_loaded = pkgs' }\n where\n pkg_map = pkgIdMap (pkgState dflags)\n ipid_map = installedPackageIdMap (pkgState dflags)\n\n link :: [PackageId] -> [PackageId] -> IO [PackageId]\n link pkgs new_pkgs =\n foldM link_one pkgs new_pkgs\n\n link_one pkgs new_pkg\n | new_pkg `elem` pkgs -- Already linked\n = return pkgs\n\n | Just pkg_cfg <- lookupPackage pkg_map new_pkg\n = do { -- Link dependents first\n pkgs' <- link pkgs [ Maybes.expectJust \"link_one\" $\n Map.lookup ipid ipid_map\n | ipid <- depends pkg_cfg ]\n -- Now link the package itself\n ; linkPackage dflags pkg_cfg\n ; return (new_pkg : pkgs') }\n\n | otherwise\n = ghcError (CmdLineError (\"unknown package: \" ++ packageIdString new_pkg))\n\n\nlinkPackage :: DynFlags -> PackageConfig -> IO ()\nlinkPackage dflags pkg\n = do\n let dirs = Packages.libraryDirs pkg\n\n let hs_libs = Packages.hsLibraries pkg\n -- The FFI GHCi import lib isn't needed as\n -- compiler\/ghci\/Linker.lhs + rts\/Linker.c link the\n -- interpreted references to FFI to the compiled FFI.\n -- We therefore filter it out so that we don't get\n -- duplicate symbol errors.\n hs_libs' = filter (\"HSffi\" \/=) hs_libs\n\n -- Because of slight differences between the GHC dynamic linker and\n -- the native system linker some packages have to link with a\n -- different list of libraries when using GHCi. Examples include: libs\n -- that are actually gnu ld scripts, and the possibility that the .a\n -- libs do not exactly match the .so\/.dll equivalents. 
So if the\n -- package file provides an \"extra-ghci-libraries\" field then we use\n -- that instead of the \"extra-libraries\" field.\n extra_libs =\n (if null (Packages.extraGHCiLibraries pkg)\n then Packages.extraLibraries pkg\n else Packages.extraGHCiLibraries pkg)\n ++ [ lib | '-':'l':lib <- Packages.ldOptions pkg ]\n\n hs_classifieds <- mapM (locateLib dflags True dirs) hs_libs'\n extra_classifieds <- mapM (locateLib dflags False dirs) extra_libs\n let classifieds = hs_classifieds ++ extra_classifieds\n\n -- Complication: all the .so's must be loaded before any of the .o's.\n let known_dlls = [ dll | DLLPath dll <- classifieds ]\n dlls = [ dll | DLL dll <- classifieds ]\n objs = [ obj | Object obj <- classifieds ]\n archs = [ arch | Archive arch <- classifieds ]\n\n maybePutStr dflags (\"Loading package \" ++ display (sourcePackageId pkg) ++ \" ... \")\n\n -- See comments with partOfGHCi\n when (packageName pkg `notElem` partOfGHCi) $ do\n loadFrameworks pkg\n mapM_ load_dyn (known_dlls ++ map mkSOName dlls)\n\n -- After loading all the DLLs, we can load the static objects.\n -- Ordering isn't important here, because we do one final link\n -- step to resolve everything.\n mapM_ loadObj objs\n mapM_ loadArchive archs\n\n maybePutStr dflags \"linking ... \"\n ok <- resolveObjs\n if succeeded ok then maybePutStrLn dflags \"done.\"\n else ghcError (InstallationError (\"unable to load package `\" ++ display (sourcePackageId pkg) ++ \"'\"))\n\n-- we have already searched the filesystem; the strings passed to load_dyn\n-- can be passed directly to loadDLL. They are either fully-qualified\n-- (\"\/usr\/lib\/libfoo.so\"), or unqualified (\"libfoo.so\"). 
In the latter case,\n-- loadDLL is going to search the system paths to find the library.\n--\nload_dyn :: FilePath -> IO ()\nload_dyn dll = do r <- loadDLL dll\n case r of\n Nothing -> return ()\n Just err -> ghcError (CmdLineError (\"can't load .so\/.DLL for: \"\n ++ dll ++ \" (\" ++ err ++ \")\" ))\n\nloadFrameworks :: InstalledPackageInfo_ ModuleName -> IO ()\nloadFrameworks pkg\n | isDarwinTarget = mapM_ load frameworks\n | otherwise = return ()\n where\n fw_dirs = Packages.frameworkDirs pkg\n frameworks = Packages.frameworks pkg\n\n load fw = do r <- loadFramework fw_dirs fw\n case r of\n Nothing -> return ()\n Just err -> ghcError (CmdLineError (\"can't load framework: \"\n ++ fw ++ \" (\" ++ err ++ \")\" ))\n\n-- Try to find an object file for a given library in the given paths.\n-- If it isn't present, we assume that addDLL in the RTS can find it,\n-- which generally means that it should be a dynamic library in the\n-- standard system search path.\n\nlocateLib :: DynFlags -> Bool -> [FilePath] -> String -> IO LibrarySpec\nlocateLib dflags is_hs dirs lib\n | not is_hs\n -- For non-Haskell libraries (e.g. 
gmp, iconv):\n -- first look in library-dirs for a dynamic library (libfoo.so)\n -- then look in library-dirs for a static library (libfoo.a)\n -- then try \"gcc --print-file-name\" to search gcc's search path\n -- for a dynamic library (#5289)\n -- otherwise, assume loadDLL can find it\n --\n = findDll `orElse` findArchive `orElse` tryGcc `orElse` assumeDll\n\n | not isDynamicGhcLib\n -- When the GHC package was not compiled as dynamic library\n -- (=DYNAMIC not set), we search for .o libraries or, if they\n -- don't exist, .a libraries.\n = findObject `orElse` findArchive `orElse` assumeDll\n\n | otherwise\n -- When the GHC package was compiled as dynamic library (=DYNAMIC set),\n -- we search for .so libraries first.\n = findHSDll `orElse` findObject `orElse` findArchive `orElse` assumeDll\n where\n mk_obj_path dir = dir <\/> (lib <.> \"o\")\n mk_arch_path dir = dir <\/> (\"lib\" ++ lib <.> \"a\")\n\n hs_dyn_lib_name = lib ++ \"-ghc\" ++ cProjectVersion\n mk_hs_dyn_lib_path dir = dir <\/> mkSOName hs_dyn_lib_name\n\n so_name = mkSOName lib\n mk_dyn_lib_path dir = dir <\/> so_name\n\n findObject = liftM (fmap Object) $ findFile mk_obj_path dirs\n findArchive = liftM (fmap Archive) $ findFile mk_arch_path dirs\n findHSDll = liftM (fmap DLLPath) $ findFile mk_hs_dyn_lib_path dirs\n findDll = liftM (fmap DLLPath) $ findFile mk_dyn_lib_path dirs\n tryGcc = liftM (fmap DLLPath) $ searchForLibUsingGcc dflags so_name dirs\n\n assumeDll = return (DLL lib)\n infixr `orElse`\n f `orElse` g = do m <- f\n case m of\n Just x -> return x\n Nothing -> g\n\nsearchForLibUsingGcc :: DynFlags -> String -> [FilePath] -> IO (Maybe FilePath)\nsearchForLibUsingGcc dflags so dirs = do\n str <- askCc dflags (map (FileOption \"-L\") dirs\n ++ [Option \"--print-file-name\", Option so])\n let file = case lines str of\n [] -> \"\"\n l:_ -> l\n if (file == so)\n then return Nothing\n else return (Just file)\n\n-- ----------------------------------------------------------------------------\n-- 
Loading a dynamic library (dlopen()-ish on Unix, LoadLibrary-ish on Win32)\n\nmkSOName :: FilePath -> FilePath\nmkSOName root\n | isDarwinTarget = (\"lib\" ++ root) <.> \"dylib\"\n | isWindowsTarget = root <.> \"dll\"\n | otherwise = (\"lib\" ++ root) <.> \"so\"\n\n-- Darwin \/ MacOS X only: load a framework\n-- a framework is a dynamic library packaged inside a directory of the same\n-- name. They are searched for in different paths than normal libraries.\nloadFramework :: [FilePath] -> FilePath -> IO (Maybe String)\nloadFramework extraPaths rootname\n = do { either_dir <- tryIO getHomeDirectory\n ; let homeFrameworkPath = case either_dir of\n Left _ -> []\n Right dir -> [dir ++ \"\/Library\/Frameworks\"]\n ps = extraPaths ++ homeFrameworkPath ++ defaultFrameworkPaths\n ; mb_fwk <- findFile mk_fwk ps\n ; case mb_fwk of\n Just fwk_path -> loadDLL fwk_path\n Nothing -> return (Just \"not found\") }\n -- Tried all our known library paths, but dlopen()\n -- has no built-in paths for frameworks: give up\n where\n mk_fwk dir = dir <\/> (rootname ++ \".framework\/\" ++ rootname)\n -- sorry for the hardcoded paths, I hope they won't change anytime soon:\n defaultFrameworkPaths = [\"\/Library\/Frameworks\", \"\/System\/Library\/Frameworks\"]\n\\end{code}\n\n%************************************************************************\n%* *\n Helper functions\n%* *\n%************************************************************************\n\n\\begin{code}\nfindFile :: (FilePath -> FilePath) -- Maps a directory path to a file path\n -> [FilePath] -- Directories to look in\n -> IO (Maybe FilePath) -- The first file path to match\nfindFile _ []\n = return Nothing\nfindFile mk_file_path (dir:dirs)\n = do { let file_path = mk_file_path dir\n ; b <- doesFileExist file_path\n ; if b then\n return (Just file_path)\n else\n findFile mk_file_path dirs }\n\\end{code}\n\n\\begin{code}\nmaybePutStr :: DynFlags -> String -> IO ()\nmaybePutStr dflags s | verbosity dflags > 0 = putStr s\n | 
otherwise = return ()\n\nmaybePutStrLn :: DynFlags -> String -> IO ()\nmaybePutStrLn dflags s | verbosity dflags > 0 = putStrLn s\n | otherwise = return ()\n\\end{code}\n\n%************************************************************************\n%* *\n Tunneling global variables into new instance of GHC library\n%* *\n%************************************************************************\n\n\\begin{code}\nsaveLinkerGlobals :: IO (MVar PersistentLinkerState, Bool)\nsaveLinkerGlobals = liftM2 (,) (readIORef v_PersistentLinkerState) (readIORef v_InitLinkerDone)\n\nrestoreLinkerGlobals :: (MVar PersistentLinkerState, Bool) -> IO ()\nrestoreLinkerGlobals (pls, ild) = do\n writeIORef v_PersistentLinkerState pls\n writeIORef v_InitLinkerDone ild\n\\end{code}\n\nIntermission: Check your understanding\n\n1. forever, when\n2. Data.Bits and Database.Blacktip.Types\n3. Database.Blacktip.Types probably brings in database types for the blacktip application\n4. a) Control.Concurrent.MVar, Filesystem.Path.CurrentOS and Control.Concurrent\n b) Filesystem\n c) Control.Monad\n\n\\section{Lexer}\n\\label{sec:DisplayLang.Lexer}\n\nThe lexical structure is extremely simple. The reason is that, Cochon\nbeing an interactive theorem prover, its inputs will be\nstraightforward, 1-dimensional terms. Being interactive, our user is\nalso more interested in knowing where she made a mistake, rather than\nhaving the ability to write terms in 3D.\n\nWe want to recognize ``valid'' identifiers, and keywords, with all of\nthis structured by brackets. 
Interestingly, we only consider correctly\npaired brackets: we never use left-over brackets, and it is much\nsimpler to work with well-parenthesized expressions when parsing\nterms. Brackets are round, square, or curly, and you can make fancy\nbrackets by wedging an identifier between open-and-bar, or\nbar-and-close without whitespace. Sequences of non-whitespace are\nidentifiers unless they're keywords.\n\n\n%if False\n\n> {-# OPTIONS_GHC -F -pgmF she #-}\n> {-# LANGUAGE GADTs, TypeSynonymInstances #-}\n\n> module DisplayLang.Lexer where\n\n> import Control.Applicative\n> import Data.List\n> import Data.Char\n\n> import Features.Features\n\n> import Kit.Parsley\n\n%endif\n\n%------------------------------------------------------------------------\n\\subsection{What are tokens?}\n%------------------------------------------------------------------------\n\nWe lex into tokens, classified as follows.\n\n> data Token\n> = Identifier String -- identifiers\n> | Keyword Keyword -- keywords\n> | Brackets Bracket [Token] -- bracketted tokens\n> deriving (Eq, Show)\n\nBrackets are the structuring tokens. 
We have:\n\n> data Bracket\n> = Round | RoundB String -- |(| or |(foo|||\n> | Square | SquareB String -- |[| or |[foo|||\n> | Curly | CurlyB String -- |{| or |{foo|||\n> deriving (Eq, Show)\n\nAs we are very likely to look at our tokens all too often, let us\nimplement a function to crush tokens down to strings.\n\n> crushToken :: Token -> String\n> crushToken (Identifier s) = s\n> crushToken (Keyword s) = key s\n> crushToken (Brackets bra toks) = showOpenB bra ++\n> (intercalate \" \" (map crushToken toks)) ++ showCloseB bra \n> where\n> showOpenB Round = \"(\"\n> showOpenB Square = \"[\"\n> showOpenB Curly = \"{\"\n> showOpenB (RoundB s) = \"(\" ++ s ++ \"|\"\n> showOpenB (SquareB s) = \"[\" ++ s ++ \"|\"\n> showOpenB (CurlyB s) = \"{\" ++ s ++ \"|\"\n> showCloseB Round = \")\"\n> showCloseB Square = \"]\"\n> showCloseB Curly = \"}\"\n> showCloseB (RoundB s) = \"|\" ++ s ++ \")\"\n> showCloseB (SquareB s) = \"|\" ++ s ++ \"]\"\n> showCloseB (CurlyB s) = \"|\" ++ s ++ \"}\"\n\n\n\\subsection{Lexer}\n\nWe implement the tokenizer as a |Parsley| on |Char|s. That's a cheap\nsolution for what we have to do. The previous implementation was\nrunning over the string of characters, wrapped in a |StateT| monad\ntransformer. \n\n\\question{What was the benefit of a |StateT| lexer, as we have a\n parser combinator library at hand?}\n\nA token is either a bracketted expression, a keyword, or, failing all\nthat, an identifier. Hence, we can recognize a token with the\nfollowing parser:\n\n> parseToken :: Parsley Char Token\n> parseToken = (|id parseBrackets\n> |id parseKeyword \n> |id parseIdent \n> |)\n\nTokenizing an input string then simply consists of matching a bunch of\ntokens separated by spaces. 
For convenience, we also swallow any\nspaces the user may have put before and after the tokens.\n\n> tokenize :: Parsley Char [Token]\n> tokenize = spaces *> pSep spaces parseToken <* spaces\n\nIn the following, we implement these combinators in turn: |spaces|,\n|parseIdent|, |parseKeyword|, and |parseBrackets|.\n\n\\subsubsection{Lexing spaces}\n\nA space is one of the following characters:\n\n> space :: String\n> space = \" \\n\\r\\t\"\n\nSo, we look for |many| of them:\n\n> spaces :: Parsley Char ()\n> spaces = (many $ tokenFilter (flip elem space)) *> pure ()\n\n\\subsubsection{Parsing words}\n\nAs an intermediate step before keywords, identifiers, and brackets, let\nus introduce a parser for words. A word is any non-empty string of\ncharacters that doesn't include a space, a bracketting symbol, or one\nof the protected symbols. A protected symbol is, simply, a\none-character symbol which can prefix or suffix a word, but will\nnot be merged into the parsed word. For example, \"foo,\" lexes into\nfirst |Identifier foo| then |Keyword ,|. In |Parsley|, this translates\nto:\n\n> parseWord :: Parsley Char String\n> parseWord = (|id (some $ tokenFilter (\\t -> not $ elem t $ space ++ bracketChars ++ protected)) \n> |(: []) (tokenFilter (flip elem protected))|)\n> where protected = \",`';\"\n\nWhile we are at it, we can test for word equality, that is, build a parser\nmatching a given word:\n\n> wordEq :: String -> Parsley Char ()\n> wordEq \"\" = pure ()\n> wordEq word = pFilter filter parseWord\n> where filter s | s == word = Just () \n> | otherwise = Nothing\n\nEquipped with |parseWord| and |wordEq|, the following lexers gain a\nlevel of abstraction, working on words instead of characters.\n\n\n\\subsubsection{Lexing keywords}\n\nKeywords are slightly more involved. 
A keyword is one of the following\nthings...\n\n> data Keyword where\n\n> import <- KeywordConstructors\n\n> KwAsc :: Keyword\n> KwComma :: Keyword\n> KwSemi :: Keyword\n> KwDefn :: Keyword\n> KwUnderscore :: Keyword\n> KwEq :: Keyword\n> KwBy :: Keyword\n\n> KwSet :: Keyword\n> KwPi :: Keyword\n> KwLambda :: Keyword\n\n> KwCon :: Keyword\n> KwOut :: Keyword\n\n> deriving (Bounded, Enum, Eq, Show)\n\n...and they look like this:\n\n> key :: Keyword -> String\n\n> import <- KeywordTable\n\n> key KwAsc = \":\"\n> key KwComma = \",\"\n> key KwSemi = \";\"\n> key KwDefn = \":=\"\n> key KwUnderscore = \"_\"\n> key KwEq = \"=\"\n> key KwBy = \"<=\"\n\n\n> key KwSet = \"Set\"\n> key KwPi = \"Pi\"\n> key KwLambda = \"\\\\\"\n\n> key KwCon = \"con\"\n> key KwOut = \"%\"\n\n> key k = error (\"key: missing keyword \" ++ show k)\n\nIt is straightforward to make a translation table, |keywords|:\n\n> keywords :: [(String, Keyword)]\n> keywords = map (\\k -> (key k, k)) (enumFromTo minBound maxBound)\n\nTo implement |parseKeyword|, we can simply filter for words that\nappear in the |keywords| list.\n\n> parseKeyword :: Parsley Char Token\n> parseKeyword = pFilter (\\t -> fmap (Keyword . snd) $ find ((t ==) . fst) keywords) parseWord\n\n\\subsubsection{Lexing identifiers}\n\nHence, parsing an identifier simply consists in successfully parsing a\nword -- which is not a keyword -- and saying ``oh! it's an\n|Identifier|''.\n\n> parseIdent = (|id (%parseKeyword%) (|)\n> |Identifier parseWord |)\n\n\n\\subsubsection{Lexing brackets}\n\nBrackets, open and closed, are one of the following.\n\n> openBracket, closeBracket, bracketChars :: String\n> openBracket = \"([{\"\n> closeBracket = \"}])\"\n> bracketChars = \"|\" ++ openBracket ++ closeBracket\n\nParsing brackets, as you would expect, requires a monad: we're not\ncontext-free, my friend. This is a slight variation on the |pLoop|\ncombinator. \n\nFirst, we use |parseOpenBracket| to match an opening bracket, and get\nits code. 
Thanks to this code, we can already say that we hold a\n|Brackets|. We are left with tokenizing the content of the bracket, up\nto parsing the corresponding closing bracket. \n\nParsing the closing bracket is made slightly more complex by the\npresence of fancy brackets: we have to match the fancy name of the\nopening bracket with the one of the closing bracket.\n\n> parseBrackets :: Parsley Char Token\n> parseBrackets = do\n> bra <- parseOpenBracket\n> (|(Brackets bra)\n> (|id tokenize (%parseCloseBracket bra %) |) |)\n> where parseOpenBracket :: Parsley Char Bracket\n> parseOpenBracket = (|id (% tokenEq '(' %)\n> (|RoundB possibleWord (% tokenEq '|' %)\n> |Round (% spaces %)|)\n> |id (% tokenEq '[' %)\n> (|SquareB possibleWord (% tokenEq '|' %)\n> |Square (% spaces %)|)\n> |id (% tokenEq '{' %)\n> (|CurlyB possibleWord (% tokenEq '|' %)\n> |Curly (% spaces %)|)\n> |) \n> parseCloseBracket :: Bracket -> Parsley Char ()\n> parseCloseBracket Round = tokenEq ')' \n> parseCloseBracket Square = tokenEq ']'\n> parseCloseBracket Curly = tokenEq '}'\n> parseCloseBracket (RoundB s) = matchBracketB s ')'\n> parseCloseBracket (SquareB s) = matchBracketB s ']'\n> parseCloseBracket (CurlyB s) = matchBracketB s '}'\n> parseBracket x = tokenFilter (flip elem x)\n> matchBracketB s bra = (|id ~ () (% tokenEq '|' %) \n> (% wordEq s %) \n> (% tokenEq bra %) |)\n>\n> possibleWord = parseWord <|> pure \"\"\n\n\\subsection{Abstracting tokens}\n\nAs we are very likely to use these tokens in a parser, let us readily\ndefine parser combinators for them. 
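Before moving on, note that the fancy-bracket discipline just implemented is easy to state on its own: a bracket opened as "(foo|" is closed only by "|foo)" carrying the same tag, while a plain "(" is closed by ")". Here is a standalone sketch of that rule; the names |Bra| and |closes| are invented for illustration, and the real work above is done by |matchBracketB|:

```haskell
-- Standalone model of the closing-bracket discipline: a plain round
-- bracket is closed by ")", while a fancy bracket "(tag|" is closed
-- only by "|tag)" carrying the same tag.  Illustrative names only.
data Bra = Plain | Fancy String
  deriving (Eq, Show)

closes :: Bra -> String -> Bool
closes Plain     s = s == ")"
closes (Fancy t) s = s == "|" ++ t ++ ")"

main :: IO ()
main = print [ closes (Fancy "foo") "|foo)"  -- tags agree
             , closes (Fancy "foo") "|bar)"  -- tags differ
             , closes Plain ")"
             ]
-- prints [True,False,True]
```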
Hence, looking for a given keyword\nis not more difficult than that:\n\n> keyword :: Keyword -> Parsley Token ()\n> keyword s = tokenEq (Keyword s)\n\nAnd we can match any keyword (though we rarely want to) using:\n\n> anyKeyword :: Parsley Token Keyword\n> anyKeyword = pFilter filterKeyword nextToken\n> where filterKeyword (Keyword k) = Just k\n> filterKeyword _ = Nothing\n\nParsing an identifier or a number is as simple as:\n\n> ident :: Parsley Token String\n> ident = pFilter filterIdent nextToken\n> where filterIdent (Identifier s) | not (isDigit $ head s) = Just s\n> filterIdent _ = Nothing\n>\n> digits :: Parsley Token String\n> digits = pFilter filterInt nextToken\n> where filterInt (Identifier s) | all isDigit s = Just s\n> filterInt _ = Nothing\n\nOccasionally we may want to match a specific identifier:\n\n> identEq :: String -> Parsley Token ()\n> identEq s = ident >>= pGuard . (== s)\n\nFinally, we can match a bracketted expression and use a specific\nparser for the bracketted tokens:\n\n> bracket :: Bracket -> Parsley Token x -> Parsley Token x\n> bracket bra p = pFilter filterBra nextToken\n> where filterBra (Brackets bra' toks) | bra == bra' = \n> either (\\_ ->Nothing) Just $ parse p toks\n> filterBra _ = Nothing\n\n","avg_line_length":33.8069620253,"max_line_length":98,"alphanum_fraction":0.6129364411} {"size":3021,"ext":"lhs","lang":"Literate Haskell","max_stars_count":6.0,"content":"%\n% (c) The GRASP\/AQUA Project, Glasgow University, 1993-1998\n%\n\\section[SimplStg]{Driver for simplifying @STG@ programs}\n\n\\begin{code}\n{-# LANGUAGE CPP #-}\n\nmodule SimplStg ( stg2stg ) where\n\n#include \"HsVersions.h\"\n\nimport StgSyn\n\nimport CostCentre ( CollectedCCs )\nimport SCCfinal ( stgMassageForProfiling )\nimport StgLint ( lintStgBindings )\nimport StgStats ( showStgStats )\nimport UnariseStg ( unarise )\n\nimport DynFlags\nimport Module ( Module )\nimport ErrUtils\nimport SrcLoc\nimport UniqSupply ( mkSplitUniqSupply, splitUniqSupply )\nimport 
Outputable\nimport Control.Monad\n\\end{code}\n\n\\begin{code}\nstg2stg :: DynFlags -- includes spec of what stg-to-stg passes to do\n -> Module -- module name (profiling only)\n -> [StgBinding] -- input...\n -> IO ( [StgBinding] -- output program...\n , CollectedCCs) -- cost centre information (declared and used)\n\nstg2stg dflags module_name binds\n = do { showPass dflags \"Stg2Stg\"\n ; us <- mkSplitUniqSupply 'g'\n\n ; when (dopt Opt_D_verbose_stg2stg dflags)\n (log_action dflags dflags SevDump noSrcSpan defaultDumpStyle (text \"VERBOSE STG-TO-STG:\"))\n\n ; (binds', us', ccs) <- end_pass us \"Stg2Stg\" ([],[],[]) binds\n\n -- Do the main business!\n ; let (us0, us1) = splitUniqSupply us'\n ; (processed_binds, _, cost_centres)\n <- foldM do_stg_pass (binds', us0, ccs) (getStgToDo dflags)\n\n ; let un_binds = unarise us1 processed_binds\n\n ; dumpIfSet_dyn dflags Opt_D_dump_stg \"STG syntax:\"\n (pprStgBindings un_binds)\n\n ; return (un_binds, cost_centres)\n }\n\n where\n stg_linter = if gopt Opt_DoStgLinting dflags\n then lintStgBindings\n else ( \\ _whodunnit binds -> binds )\n\n -------------------------------------------\n do_stg_pass (binds, us, ccs) to_do\n = let\n (us1, us2) = splitUniqSupply us\n in\n case to_do of\n D_stg_stats ->\n trace (showStgStats binds)\n end_pass us2 \"StgStats\" ccs binds\n\n StgDoMassageForProfiling ->\n {-# SCC \"ProfMassage\" #-}\n let\n (collected_CCs, binds3)\n = stgMassageForProfiling dflags module_name us1 binds\n in\n end_pass us2 \"ProfMassage\" collected_CCs binds3\n\n end_pass us2 what ccs binds2\n = do -- report verbosely, if required\n dumpIfSet_dyn dflags Opt_D_verbose_stg2stg what\n (vcat (map ppr binds2))\n let linted_binds = stg_linter what binds2\n return (linted_binds, us2, ccs)\n -- return: processed binds\n -- UniqueSupply for the next guy to use\n -- cost-centres to be declared\/registered (specialised)\n -- add to description of what's happened (reverse 
order)\n\\end{code}\n","avg_line_length":32.4838709677,"max_line_length":105,"alphanum_fraction":0.5822575306} {"size":10253,"ext":"lhs","lang":"Literate Haskell","max_stars_count":2.0,"content":"\r\n|\r\nModule : Database.InternalEnumerator\r\nCopyright : (c) 2004 Oleg Kiselyov, Alistair Bayley\r\nLicense : BSD-style\r\nMaintainer : oleg@pobox.com, alistair@abayley.org\r\nStability : experimental\r\nPortability : non-portable\r\n\r\nThis is the interface between the middle Enumerator layer and the\r\nlow-level, Database-specific layer. This file is not exported to the end user.\r\n\r\nOnly the programmer for a new back-end needs to consult this file.\r\n\r\n\r\n> {-# LANGUAGE MultiParamTypeClasses #-}\r\n> {-# LANGUAGE FunctionalDependencies #-}\r\n> {-# LANGUAGE DeriveDataTypeable #-}\r\n\r\n> module Database.InternalEnumerator\r\n> (\r\n> -- * Session object.\r\n> ISession(..), ConnectA(..)\r\n> , Statement(..), Command(..), EnvInquiry(..)\r\n> , PreparationA(..), IPrepared(..)\r\n> , PreparedStmt(..)\r\n> , BindA(..), DBBind(..)\r\n> , IsolationLevel(..)\r\n> , Position\r\n> , IQuery(..)\r\n> , DBType(..)\r\n> , throwIfDBNull\r\n> -- * Exceptions and handlers\r\n> , DBException(..)\r\n> , throwDB\r\n> , ColNum, RowNum\r\n> , SqlState, SqlStateClass, SqlStateSubClass\r\n> ) where\r\n\r\n> import Data.Typeable\r\n> import Control.Exception.Extensible (throw, Exception)\r\n> import qualified Control.Exception (catch)\r\n\r\n> data IsolationLevel =\r\n> ReadUncommitted\r\n> | ReadCommitted\r\n> | RepeatableRead\r\n> | Serialisable\r\n> | Serializable -- ^ for alternative spellers\r\n> deriving (Show, Eq, Ord, Enum)\r\n\r\n\r\n| A wrapper around the action to open the database. That wrapper is not\r\nexported to the end user. 
The only reason for the wrapper is to\r\nguarantee that the only thing to do with the result of\r\n'Database.Enumerator.Sqlite.connect' function is to pass it out\r\ndirectly to 'Database.Enumerator.withSession'.\r\n\r\n> newtype ConnectA sess = ConnectA (IO sess) deriving Typeable\r\n\r\n\r\n\r\nPosition within the result set. Not for the end user.\r\n\r\n> type Position = Int\r\n\r\nNeeded for exceptions\r\n\r\n> type RowNum = Int\r\n> type ColNum = Int \r\n\r\n--------------------------------------------------------------------\r\n-- ** Exceptions and handlers\r\n--------------------------------------------------------------------\r\n\r\n> type SqlStateClass = String\r\n> type SqlStateSubClass = String\r\n> type SqlState = (SqlStateClass, SqlStateSubClass)\r\n\r\n> data DBException\r\n> -- | DBMS error message.\r\n> = DBError SqlState Int String\r\n> | DBFatal SqlState Int String\r\n> -- | the iteratee function used for queries accepts both nullable (Maybe) and\r\n> -- non-nullable types. If the query itself returns a null in a column where a\r\n> -- non-nullable type was specified, we can't handle it, so DBUnexpectedNull is thrown.\r\n> | DBUnexpectedNull RowNum ColNum\r\n> -- | Thrown by cursor functions if you try to fetch after the end.\r\n> | DBNoData\r\n> deriving (Typeable, Show)\r\n\r\n> instance Exception DBException\r\n\r\n| Throw a DBException. It's just a type-specific 'Control.Exception.throwDyn'.\r\n\r\n> throwDB :: DBException -> a\r\n> throwDB = throw\r\n\r\n\r\n--------------------------------------------------------------------\r\n-- ** Session interface\r\n--------------------------------------------------------------------\r\n\r\n| The 'ISession' class describes a database session to a particular\r\nDBMS. Oracle has its own Session object, SQLite has its own\r\nsession object (which maintains the connection handle to the database\r\nengine and other related stuff). 
Session objects for different databases\r\nnormally have different types -- yet they all belong to the class ISession\r\nso we can do generic operations like @commit@, @execDDL@, etc. \r\nin a database-independent manner.\r\n\r\nSession objects per se are created by database connection\\\/login functions.\r\n\r\nThe class 'ISession' is thus an interface between low-level (and\r\ndatabase-specific) code and the Enumerator, database-independent\r\ncode.\r\nThe 'ISession' class is NOT visible to the end user -- neither the class,\r\nnor any of its methods.\r\n\r\nThe 'ISession' class describes the mapping from connection object to\r\nthe session object. The connection object is created by the end user\r\n(and this is how the end user tells which particular back end he wants).\r\nThe session object is not accessible by the end user in any way.\r\nEven the type of the session object should be hidden!\r\n\r\n> class ISession sess where\r\n> disconnect :: sess -> IO ()\r\n> beginTransaction :: sess -> IsolationLevel -> IO ()\r\n> commit :: sess -> IO ()\r\n> rollback :: sess -> IO ()\r\n\r\n\r\nWe can have several types of statements: just plain strings,\r\nstrings bundled with tuning parameters, prepared statements.\r\nBTW, statement with unbound variables should have a different type\r\nfrom that of the statement without bound variables or the statement\r\nwith all bound variables.\r\n\r\n| 'Command' is not a query: command deletes or updates rows, creates\\\/drops\r\ntables, or changes database state.\r\n'executeCommand' returns the number of affected rows (or 0 if DDL i.e. 
not DML).\r\n\r\n> class ISession sess => Command stmt sess where\r\n> -- insert\/update\/delete; returns number of rows affected\r\n> executeCommand :: sess -> stmt -> IO Int\r\n\r\n> class ISession sess =>\r\n> EnvInquiry inquirykey sess result | inquirykey sess -> result where\r\n> inquire :: inquirykey -> sess -> IO result\r\n\r\n\r\n| 'Statement' defines the API for query objects i.e.\r\nwhich types can be queries.\r\n\r\n> class ISession sess => Statement stmt sess q | stmt sess -> q where\r\n> makeQuery :: sess -> stmt -> IO q\r\n\r\n\r\n|The class IQuery describes the class of query objects. Each\r\ndatabase (that is, each Session object) has its own Query object. \r\nWe may assume that a Query object includes (at least, conceptually)\r\na (pointer to) a Session object, so a Query object determines the\r\nSession object.\r\nA back-end provides an instance (or instances) of IQuery.\r\nThe end user never sees the IQuery class (let alone its methods).\r\n\r\nCan a session have several types of query objects?\r\nLet's assume that it can: but a statement plus the session uniquely\r\ndetermine the query.\r\n\r\nNote that we explicitly use the IO monad because we will have to explicitly\r\ndo FFI.\r\n\r\n> class ISession sess => IQuery q sess b | q -> sess, q -> b\r\n> where\r\n> fetchOneRow :: q -> IO Bool\r\n> currentRowNum :: q -> IO Int\r\n> freeBuffer :: q -> b -> IO ()\r\n> destroyQuery :: q -> IO ()\r\n\r\n|A \\'buffer\\' means a column buffer: a data structure that points to a\r\nblock of memory allocated for the values of one particular\r\ncolumn. Since a query normally fetches a row of several columns, we\r\ntypically deal with a list of column buffers. Although the column data\r\nare typed (e.g., Integer, CalendarDate, etc), column buffers hide that\r\ntype. Think of the column buffer as Dynamics. 
The class DBType below\r\ndescribes marshalling functions, to fetch a typed value out of the\r\n\\'untyped\\' columnBuffer.\r\n\r\nDifferent DBMS's (that is, different session objects) have, in\r\ngeneral, columnBuffers of different types: the type of Column Buffer\r\nis specific to a database.\r\nSo, ISession (m) uniquely determines the buffer type (b)??\r\nOr, actually, a query uniquely determines the buffer.\r\n\r\n\r\n| The class DBType is not used by the end-user.\r\nIt is used to tie up low-level database access and the enumerator.\r\nA database-specific library must provide a set of instances for DBType.\r\n\r\n> class DBType a q b | q -> b where\r\n> allocBufferFor :: a -> q -> Position -> IO b\r\n> fetchCol :: q -> b -> IO a\r\n\r\n| Used by instances of DBType to throw an exception\r\nwhen a null (Nothing) is returned.\r\nWill work for any type, as you pass the fetch action in the fetcher arg.\r\n\r\n> throwIfDBNull :: (Monad m) => m (RowNum, ColNum) -> m (Maybe a) -> m a\r\n> throwIfDBNull pos fetcher = do\r\n> v <- fetcher \r\n> case v of\r\n> Nothing -> do\r\n> (row,col) <- pos\r\n> throwDB (DBUnexpectedNull row col)\r\n> Just m -> return m\r\n\r\n\r\n\r\n------------------------------------------------------------------------\r\nPrepared commands and statements\r\n\r\n> newtype PreparedStmt mark stmt = PreparedStmt stmt\r\n\r\n| This type is not visible to the end user (cf. ConnectA). It forms a private\r\n`communication channel' between Database.Enumerator and a back end.\r\n\r\nWhy don't we make a user-visible class with a @prepare@ method?\r\nBecause it means to standardize the preparation method signature\r\nacross all databases. Some databases need more parameters, some\r\nfewer. There may be several statement preparation functions within one\r\ndatabase. So, instead of standardizing the signature of the\r\npreparation function, we standardize on the _result_ of that\r\nfunction. 
To be more precise, we standardize on the properties of the\r\nresult: whatever it is, the eventual prepared statement should be\r\nsuitable to be passed to 'bindRun'.\r\n\r\n> newtype PreparationA sess stmt = PreparationA (sess -> IO stmt)\r\n\r\n\r\n> class ISession sess => IPrepared stmt sess bound_stmt bo\r\n> | stmt -> bound_stmt, stmt -> bo where\r\n> bindRun :: sess -> stmt -> [BindA sess stmt bo] -> (bound_stmt -> IO a) -> IO a\r\n> -- Should this be here? we have a need to free statements\r\n> -- separately from result-sets (which are handled by IQuery.destroyQuery).\r\n> -- It might be useful to prepare a statement, use it a number of times\r\n> -- (so many result-sets are created+destroyed), and then destroy it,\r\n> -- so it has a lifecycle independent of Queries.\r\n> destroyStmt :: sess -> stmt -> IO ()\r\n\r\n\r\n| The binding object (bo) below is very abstract, on purpose.\r\nIt may be |IO a|, it may be String, it may be a function, etc.\r\nThe binding object can hold the result of marshalling,\r\nor bo can hold the current counter, etc.\r\nDifferent databases do things very differently:\r\ncompare PostgreSQL and the Stub (which models Oracle).\r\n\r\n> newtype BindA sess stmt bo = BindA (sess -> stmt -> bo)\r\n\r\n| The class DBBind is not used by the end-user.\r\nIt is used to tie up low-level database access and the enumerator.\r\nA database-specific library must provide a set of instances for DBBind.\r\nThe latter are the dual of DBType.\r\n\r\n> class ISession sess => DBBind a sess stmt bo | stmt -> bo where\r\n> -- | This is really just a wrapper that lets us write lists of\r\n> -- heterogenous bind values e.g. 
@[bindP \\\"string\\\", bindP (0::Int), ...]@\r\n> bindP :: a -> BindA sess stmt bo\r\n","avg_line_length":37.9740740741,"max_line_length":91,"alphanum_fraction":0.6805812933} {"size":17659,"ext":"lhs","lang":"Literate Haskell","max_stars_count":8.0,"content":"\n> {-# LANGUAGE TupleSections,OverloadedStrings #-}\n> module Database.HsSqlPpp.LexicalSyntax\n> (Token(..)\n> ,prettyToken\n> ,sqlToken\n> ,sqlTokens\n> ,module Database.HsSqlPpp.SqlDialect\n> ) where\n\n> import qualified Data.Text as T\n> import qualified Data.Text.Lazy as LT\n> import Text.Parsec\n> --Cimport Text.Parsec.String hdi\n> import Text.Parsec.Text\n> import Control.Applicative hiding ((<|>), many)\n> import Data.Char\n> import Database.HsSqlPpp.SqlDialect\n> import Control.Monad\n> import Prelude hiding (takeWhile)\n> import Data.Maybe\n\n> -- | Represents a lexed token\n> data Token\n> -- | a symbol in postgresql dialect is one of the following:\n> --\n> -- * one of the characters (),;[]{} (the {} is for odbc)\n> --\n> -- * \\'..\\' or \\':=\\' or \\'.\\' or \\':\\'\n> --\n> -- * a compound symbol, which starts with one of \\'*\\\/\\<>=~!\\@#%^&|\\`?+-' and follows with 0 or more of '*\\\/<>=~!\\@#%^&|\\`?'\n> --\n> -- things that are not lexed as symbols:\n> --\n> -- * [] used in quoted identifiers, prefix \\@,#,: used in identifiers\n> --\n> -- * $n positional arg\n> --\n> = Symbol T.Text\n>\n> -- | This is an identifier or keyword.\n> --\n> -- The 'Maybe (Char,Char)' selects the quoted style - 'Nothing' means the\n> -- identifier was unquoted\n> -- otherwise the two characters are the start and end quote.\n> --\n> -- \\'\\\"\\' is used to quote identifiers in standard sql, sql server also uses [brackets]\n> -- to quote identifiers.\n> --\n> -- The identifier also includes the \\'variable marker prefix\\'\n> -- used in sql server (e.g. \\@identifier, #identifier), and oracle\n> -- (e.g. 
:identifier)\n> | Identifier (Maybe (Char,Char)) T.Text\n>\n> -- | This is a string literal.\n> --\n> -- The first field is the quotes used: single quote (\\')\n> -- for normal strings, E' for escape supporting strings,\n> -- and $$ delimiter for postgresql dollar quoted strings.\n> --\n> -- The lexer doesn't process the escapes in strings, but passes\n> -- on the literal source e.g. E\\'\\\\n\\' parses to SqlString \\\"E\\'\\\" \\\"\\\\n\\\"\n> -- with the literal characters \\'\\\\\\' and \\'n\\' in the string, not a newline character.\n> -- quotes within a string (\\'\\') or escaped string (\\'\\' or \\\\\\') are passed through unchanged\n> | SqlString T.Text T.Text\n>\n> -- | a number literal (integral or otherwise), stored in original format\n> -- unchanged\n> | SqlNumber T.Text\n>\n> -- | non-significant whitespace (space, tab, newline) (strictly speaking,\n> -- it is up to the client to decide whether the whitespace is significant\n> -- or not)\n> | Whitespace T.Text\n>\n> -- | a postgresql positional arg, e.g. $1\n> | PositionalArg Int\n>\n> -- | a commented line using --, contains every character starting with the\n> -- \\'--\\' and including the terminating newline character if there is one\n> -- - this will be missing if the last line in the source is a line comment\n> -- with no trailing newline\n> | LineComment T.Text\n>\n> -- | a block comment, \\\/* stuff *\\\/, includes the comment delimiters\n> | BlockComment T.Text\n>\n> -- | an antiquotation splice, e.g. 
$x(stuff)\n> | Splice Char T.Text\n>\n> -- | the copy data in a copy from stdin\n> | CopyPayload T.Text\n> deriving (Eq,Show)\n\n> -- | Accurate pretty printing, if you lex a bunch of tokens,\n> -- then pretty print them, should should get back exactly the\n> -- same string\n> prettyToken :: SQLSyntaxDialect -> Token -> LT.Text\n> prettyToken _ (Symbol s) = LT.fromChunks [s]\n> prettyToken _ (Identifier Nothing t) = LT.fromChunks [t]\n> prettyToken _ (Identifier (Just (a,b)) t) =\n> LT.fromChunks [T.singleton a, t, T.singleton b]\n> prettyToken _ (SqlString \"E'\" t) = LT.fromChunks [\"E'\",t,\"'\"]\n> prettyToken _ (SqlString q t) = LT.fromChunks [q,t,q]\n> prettyToken _ (SqlNumber r) = LT.fromChunks [r]\n> prettyToken _ (Whitespace t) = LT.fromChunks [t]\n> prettyToken _ (PositionalArg n) = LT.fromChunks [T.singleton '$', T.pack $ show n]\n> prettyToken _ (LineComment l) = LT.fromChunks [l]\n> prettyToken _ (BlockComment c) = LT.fromChunks [c]\n> prettyToken _ (Splice c t) =\n> LT.fromChunks [T.singleton '$'\n> ,T.singleton c\n> ,T.singleton '('\n> ,t\n> ,T.singleton ')']\n> prettyToken _ (CopyPayload s) = LT.fromChunks [s,\"\\\\.\\n\"]\n\n\nnot sure how to get the position information in the parse errors\n\nTODO: try to make all parsers applicative only\ninvestigate what is missing for postgresql\ninvestigate differences for sql server, oracle, maybe db2 and mysql\n also\n\n> sqlTokens :: SQLSyntaxDialect -> FilePath -> Maybe (Int,Int) -> T.Text -> Either ParseError [((FilePath,Int,Int),Token)]\n> sqlTokens dialect fn' mp txt =\n> let (l',c') = fromMaybe (1,1) mp\n> in runParser (setPos (fn',l',c') *> many_p <* eof) () \"\" txt\n> where\n\npretty hacky, want to switch to a different lexer for copy from stdin\nstatements\n\nif we see 'from stdin;' then try to lex a copy payload\n\n> many_p = some_p `mplus` return []\n> some_p = do\n> tok <- sqlToken dialect\n> case tok of\n> (_, Identifier Nothing t) | T.map toLower t == \"from\" -> (tok:) <$> seeStdin\n> _ -> 
(tok:) <$> many_p\n> seeStdin = do\n> tok <- sqlToken dialect\n> case tok of\n> (_,Identifier Nothing t) | T.map toLower t == \"stdin\" -> (tok:) <$> seeColon\n> (_,x) | isWs x -> (tok:) <$> seeStdin\n> _ -> (tok:) <$> many_p\n> seeColon = do\n> tok <- sqlToken dialect\n> case tok of\n> (_,Symbol \";\") -> (tok:) <$> copyPayload\n> _ -> (tok:) <$> many_p\n> copyPayload = do\n> p' <- getPosition\n> let pos = (sourceName p',sourceLine p', sourceColumn p')\n> tok <- char '\\n' *>\n> ((\\x -> (pos, CopyPayload $ T.pack $ x ++ \"\\n\"))\n> <$> manyTill anyChar (try $ string \"\\n\\\\.\\n\"))\n> --let (_,CopyPayload t) = tok\n> --trace (\"payload is '\" ++ T.unpack t ++ \"'\") $ return ()\n> (tok:) <$> many_p\n> setPos (fn,l,c) = do\n> fmap (flip setSourceName fn\n> . flip setSourceLine l\n> . flip setSourceColumn c) getPosition\n> >>= setPosition\n> isWs :: Token -> Bool\n> isWs (Whitespace {}) = True\n> isWs (BlockComment {}) = True\n> isWs (LineComment {}) = True\n> isWs _ = False\n\n> -- | parser for a sql token\n> sqlToken :: SQLSyntaxDialect -> Parser ((FilePath,Int,Int),Token)\n> sqlToken d = do\n> p' <- getPosition\n> let p = (sourceName p',sourceLine p', sourceColumn p')\n> (p,) <$> choice [sqlString d\n> ,identifier d\n> ,lineComment d\n> ,blockComment d\n> ,sqlNumber d\n> ,symbol d\n> ,sqlWhitespace d\n> ,positionalArg d\n> ,splice d]\n\n> identifier :: SQLSyntaxDialect -> Parser Token\n\nsql server: identifiers can start with @ or #\nquoting uses [] or \"\"\n\nTODO: fix all the \"qiden\" parsers to allow \"qid\"\"en\"\n\n> identifier SQLServerDialect =\n> choice\n> [Identifier (Just ('[',']'))\n> <$> (char '[' *> takeWhile1 (\/=']') <* char ']')\n> ,Identifier (Just ('\"','\"'))\n> <$> (char '\"' *> takeWhile1 (\/='\"') <* char '\"')\n> ,Identifier Nothing <$> identifierStringPrefix '@'\n> ,Identifier Nothing <$> identifierStringPrefix '#'\n> ,Identifier Nothing <$> identifierString\n> ]\n\noracle: identifiers can start with :\nquoting uses \"\"\n(todo: check 
other possibilities)\n\n> identifier OracleDialect =\n> choice\n> [Identifier (Just ('\"','\"'))\n> <$> (char '\"' *> takeWhile1 (\/='\"') <* char '\"')\n> ,Identifier Nothing <$> identifierStringPrefix ':'\n> ,Identifier Nothing <$> identifierString\n> ]\n\n> identifier PostgreSQLDialect =\n> choice\n> [Identifier (Just ('\"','\"'))\n> <$> (char '\"' *> takeWhile1 (\/='\"') <* char '\"')\n> ,Identifier Nothing <$> identifierString\n> ]\n\n\n> identifierStringPrefix :: Char -> Parser T.Text\n> identifierStringPrefix p = do\n> void $ char p\n> i <- identifierString\n> return $ T.cons p i\n\n> identifierString :: Parser T.Text\n> identifierString =\n> startsWith (\\c -> c == '_' || isAlpha c)\n> (\\c -> c == '_' || isAlphaNum c)\n\nStrings in sql:\npostgresql dialect:\nstrings delimited with single quotes\na literal quote is written ''\nthe lexer leaves the double quote in the string in the ast\nstrings can also be written like this:\nE'string with quotes in \\n \\t'\nthe \\n and \\t are escape sequences. 
The lexer passes these through unchanged.\nan 'E' escaped string can also contain \\' for a literal single quote.\nthis are also passed into the ast unchanged\nstrings can be dollar quoted:\n$$string$$\nthe dollar quote can contain an optional tag:\n$tag$string$tag$\nwhich allows nesting of dollar quoted strings with different tags\n\nNot sure what behaviour in sql server and oracle, pretty sure they\ndon't have dollar quoting, but I think they have the other two\nvariants.\n\n> sqlString :: SQLSyntaxDialect -> Parser Token\n> sqlString _ =\n> choice [normalString\n> ,eString\n> ,dollarString]\n> where\n> normalString = SqlString \"'\" <$> (char '\\'' *> normalStringSuffix \"\")\n> normalStringSuffix t = do\n> s <- takeTill (=='\\'')\n> void $ char '\\''\n> -- deal with '' as literal quote character\n> choice [do\n> void $ char '\\''\n> normalStringSuffix $ T.concat [t,s,\"''\"]\n> ,return $ T.concat [t,s]]\n> eString = SqlString \"E'\" <$> (try (string \"E'\") *> eStringSuffix \"\")\n> eStringSuffix :: T.Text -> Parser T.Text\n> eStringSuffix t = do\n> s <- takeTill (`elem` (\"\\\\'\"::String))\n> choice [do\n> try $ void $ string \"\\\\'\"\n> eStringSuffix $ T.concat [t,s,\"\\\\'\"]\n> ,do\n> void $ try $ string \"''\"\n> eStringSuffix $ T.concat [t,s,\"''\"]\n> ,do\n> void $ char '\\''\n> return $ T.concat [t,s]\n> ,do\n> c <- anyChar\n> eStringSuffix $ T.concat [t,s,T.singleton c]]\n> dollarString = do\n> delim <- dollarDelim\n> y <- manyTill anyChar (try $ string $ T.unpack delim)\n> return $ SqlString delim $ T.pack y\n> dollarDelim :: Parser T.Text\n> dollarDelim = try $ do\n> void $ char '$'\n> tag <- option \"\" identifierString\n> void $ char '$'\n> return $ T.concat [\"$\", tag, \"$\"]\n\npostgresql number parsing\n\ndigits\ndigits.[digits][e[+-]digits]\n[digits].digits[e[+-]digits]\ndigitse[+-]digits\nwhere digits is one or more decimal digits (0 through 9). At least one digit must be before or after the decimal point, if one is used. 
At least one digit must follow the exponent marker (e), if one is present. There cannot be any spaces or other characters embedded in the constant. Note that any leading plus or minus sign is not actually considered part of the constant; it is an operator applied to the constant.\n\n> sqlNumber :: SQLSyntaxDialect -> Parser Token\n> sqlNumber _ = (SqlNumber . T.pack) <$>\n> (int (pp dot pp int)\n> -- try is used in case we read a dot\n> -- and it isn't part of a number\n> -- if there are any following digits, then we commit\n> -- to it being a number and not something else\n> <|> try ((++) <$> dot <*> int))\n> pp expon\n> where\n> int = many1 digit\n> dot = do\n> -- make sure we don't parse '..' as part of a number\n> -- this is so we can parser e.g. 1..2 correctly\n> -- as '1', '..', '2', and not as '1.' '.2' or\n> -- '1.' '.' '2'\n> notFollowedBy (string \"..\")\n> string \".\"\n> expon = (:) <$> oneOf \"eE\" <*> sInt\n> sInt = (++) <$> option \"\" (string \"+\" <|> string \"-\") <*> int\n> pp = (<$$> (++))\n\n> () :: Parser a -> Parser (a -> a) -> Parser a\n> p q = p <**> option id q\n\n> () :: Parser (a -> a) -> Parser (a -> a) -> Parser (a -> a)\n> () pa pb = (.) `c` pa <*> option id pb\n> -- todo: fix this mess\n> where c = (<$>) . flip\n\n> (<$$>) :: Applicative f =>\n> f b -> (a -> b -> c) -> f (a -> c)\n> (<$$>) pa c = pa <**> pure (flip c)\n\nSymbols:\n\nCopied from the postgresql manual:\n\nAn operator name is a sequence of up to NAMEDATALEN-1 (63 by default) characters from the following list:\n\n+ - * \/ < > = ~ ! @ # % ^ & | ` ?\n\nThere are a few restrictions on operator names, however:\n-- and \/* cannot appear anywhere in an operator name, since they will be taken as the start of a comment.\n\nA multiple-character operator name cannot end in + or -, unless the name also contains at least one of these characters:\n\n~ ! @ # % ^ & | ` ?\n\nFor example, @- is an allowed operator name, but *- is not. 
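That ending restriction can be captured as a small predicate. The sketch below uses a hypothetical helper name, |validOpName|, and only illustrates the quoted rule rather than the lexer's actual code path; it accepts "@-" and rejects "*-" as in the manual's example:

```haskell
-- Sketch of the quoted PostgreSQL rule: a multi-character operator
-- name may end in '+' or '-' only if it also contains at least one of
-- the characters ~ ! @ # % ^ & | ` ?  (illustrative helper only).
validOpName :: String -> Bool
validOpName s
  | null s                             = False
  | length s > 1 && last s `elem` "+-" = any (`elem` "~!@#%^&|`?") s
  | otherwise                          = True

main :: IO ()
main = print (validOpName "@-", validOpName "*-", validOpName "+")
-- prints (True,False,True)
```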
This restriction allows PostgreSQL to parse SQL-compliant queries without requiring spaces between tokens.\nWhen working with non-SQL-standard operator names, you will usually need to separate adjacent operators with spaces to avoid ambiguity. For example, if you have defined a left unary operator named @, you cannot write X*@Y; you must write X* @Y to ensure that PostgreSQL reads it as two operator names not one.\n\nTODO: try to match this behaviour\n\ninClass :: String -> Char -> Bool\n\n> symbol :: SQLSyntaxDialect -> Parser Token\n> symbol dialect = Symbol <$> T.pack <$>\n> choice\n> [(:[]) <$> satisfy (`elem` simpleSymbols)\n> ,try $ string \"..\"\n> ,string \".\"\n> ,try $ string \"::\"\n> ,try $ string \":=\"\n> ,string \":\"\n> ,anotherOp\n> ]\n> where\n> anotherOp :: Parser String\n> anotherOp = do\n> -- first char can be any, this is always a valid operator name\n> c0 <- satisfy (`elem` compoundFirst)\n> --recurse:\n> let r = choice\n> [do\n> c1 <- satisfy (`elem` compoundTail)\n> choice [do\n> x <- r\n> return $ c1 : x\n> ,return [c1]]\n> ,try $ do\n> a <- satisfy (`elem` (\"+-\"::String))\n> b <- r\n> return $ a : b]\n> choice [do\n> tl <- r\n> return $ c0 : tl\n> ,return [c0]]\n> {-biggerSymbol =\n> startsWith (inClass compoundFirst)\n> (inClass compoundTail) -}\n> simpleSymbols :: String\n> simpleSymbols | dialect == PostgreSQLDialect = \"(),;[]{}\"\n> | otherwise = \"(),;{}\"\n> compoundFirst :: String\n> compoundFirst | dialect == PostgreSQLDialect = \"*\/<>=~!@#%^&|`?+-\"\n> | otherwise = \"*\/<>=~!%^&|`?+-\"\n> compoundTail :: String\n> compoundTail | dialect == PostgreSQLDialect = \"*\/<>=~!@#%^&|`?\"\n> | otherwise = \"*\/<>=~!%^&|`?\"\n\n\n> sqlWhitespace :: SQLSyntaxDialect -> Parser Token\n> sqlWhitespace _ = (Whitespace . 
T.pack) <$> many1 (satisfy isSpace)\n\n> positionalArg :: SQLSyntaxDialect -> Parser Token\n> -- uses try so we don't get confused with $splices\n> positionalArg PostgreSQLDialect = try (\n> PositionalArg <$> (char '$' *> (read <$> many1 digit)))\n\n> positionalArg _ = satisfy (const False) >> fail \"positional arg unsupported\"\n\n> lineComment :: SQLSyntaxDialect -> Parser Token\n> lineComment _ =\n> (\\s -> (LineComment . T.pack) $ concat [\"--\",s]) <$>\n> -- try is used here in case we see a - symbol\n> -- once we read two -- then we commit to the comment token\n> (try (string \"--\") *> (\n> conc <$> manyTill anyChar (lookAhead lineCommentEnd) <*> lineCommentEnd))\n> where\n> conc a Nothing = a\n> conc a (Just b) = a ++ b\n> lineCommentEnd = Just \"\\n\" <$ char '\\n' <|> Nothing <$ eof\n\n> blockComment :: SQLSyntaxDialect -> Parser Token\n> blockComment _ =\n> (\\s -> BlockComment $ T.concat [\"\/*\",s]) <$>\n> (try (string \"\/*\") *> commentSuffix 0)\n> where\n> commentSuffix :: Int -> Parser T.Text\n> commentSuffix n = do\n> -- read until a possible end comment or nested comment\n> x <- takeWhile (\\e -> e \/= '\/' && e \/= '*')\n> choice [-- close comment: if the nesting is 0, done\n> -- otherwise recurse on commentSuffix\n> try (string \"*\/\") *> let t = T.concat [x,\"*\/\"]\n> in if n == 0\n> then return t\n> else (\\s -> T.concat [t,s]) <$> commentSuffix (n - 1)\n> -- nested comment, recurse\n> ,try (string \"\/*\") *> ((\\s -> T.concat [x,\"\/*\",s]) <$> commentSuffix (n + 1))\n> -- not an end comment or nested comment, continue\n> ,(\\c s -> T.concat [x,T.pack [c], s]) <$> anyChar <*> commentSuffix n]\n\n> splice :: SQLSyntaxDialect -> Parser Token\n> splice _ = do\n> Splice\n> <$> (char '$' *> letter)\n> <*> (char '(' *> identifierString <* char ')')\n\n> startsWith :: (Char -> Bool) -> (Char -> Bool) -> Parser T.Text\n> startsWith p ps = do\n> c <- satisfy p\n> choice [T.cons c <$> (takeWhile1 ps)\n> ,return $ T.singleton c]\n\n> takeWhile1 :: 
(Char -> Bool) -> Parser T.Text\n> takeWhile1 p = T.pack <$> many1 (satisfy p)\n\n> takeWhile :: (Char -> Bool) -> Parser T.Text\n> takeWhile p = T.pack <$> many (satisfy p)\n\n> takeTill :: (Char -> Bool) -> Parser T.Text\n> takeTill p =\n> T.pack <$> manyTill anyChar (peekSatisfy p)\n\n> peekSatisfy :: (Char -> Bool) -> Parser ()\n> peekSatisfy p = do\n> void $ lookAhead (satisfy p)\n","avg_line_length":37.0209643606,"max_line_length":416,"alphanum_fraction":0.5604507617} {"size":17422,"ext":"lhs","lang":"Literate Haskell","max_stars_count":1.0,"content":"> module Commands where\n\n> import Screens -- Screens.lhs\n> import Rooms -- Rooms.lhs\n> import Save -- Save.lhs\n> import Helpers -- Helpers.lhs\n> import Screens -- Screens.lhs\n> import GTypes -- GTypes.lhs\n> import Output -- Output.lhs\n> import Data.Char (toLower,isSpace)\n> import Data.Maybe\n> import Data.List (nub)\n> import System.IO\n> import System.Console.Readline -- Rich terminal IO (binding of GNU library)\n> import System.Console.ANSI -- Useful terminal manipulation functionality\n\nThe main game loop, prompts the user for input and passes it to the exeCmd function.\nAlso maintains history of user input that the player can recall with the arrow keys.\n\n> play :: GState -> Prompt -> IO ()\n> play (is,r) p = do\n> loadScreen r is\n> l <- readline (p++\"\\n> \")\n> case l of\n> Nothing -> play (is,r) noCommand \n> Just \"\" -> play (is,r) noCommand\n> Just input -> do\n> addHistory input -- Enables player to use up\/down arrows to look at past input lines\n> exeCmd (is,r) input\n\nThe readCmd function isn't named very intuitively, because all it really does is take in a\ncommand that was entered by the player, see if it's valid, and return it in all lowercase if\nit is. 
If it isn't a valid command, we just return \"BAD_CMD\" to inform the calling function.\n\n> readCmd :: Maybe String -> String\n> readCmd s | (isJust s) && lowCmd `elem` cmds = lowCmd\n> | otherwise = \"BAD_CMD\"\n> where lowCmd = map toLower (fromJust s)\n\nTakes user input, grabs and passes the command to readCmd, then calls the function that\ncorresponds to the determined command, if appropriate.\n\n> exeCmd :: GState -> Argument -> IO ()\n> exeCmd (is,r) s = case readCmd (getElem 0 (words s)) of\n> c|c `elem` takeSyns -> addItem (is,r) (parseArgs s)\n> c|c `elem` dropSyns -> remItem (is,r) (parseArgs s)\n> c|c `elem` moveSyns -> moveTo (is,r) (parseArgs s)\n> c|c `elem` examSyns -> exam (is,r) (parseArgs s)\n> c|c `elem` openSyns -> openSsm (is,r) (parseArgs s)\n> c|c `elem` saveSyns -> do saveGame (is,r)\n> play (is,r) gameSaved\n> c|c `elem` loadSyns ->\n> case loadGame save of\n> Just saveState -> play saveState \"Game loaded! :)\\n\"\n> Nothing -> play (is,r) loadFail\n> c|c `elem` whoaSyns -> play (is,r) noViolence\n> c|c `elem` helpSyns -> helpDisp (is,r)\n> c|c `elem` quitSyns -> do setSGR []; loadScreen \"Quit\" []\n> _ -> play (is,r) (fromMaybe \"Nothing\" (getElem 0 (words s))\n> ++\" is not a valid command!\\n\")\n\nIn order to separate out the command validation process from the argument parsing, and\nto declutter those functions a little, we can just create a big function that takes a\ncommand and its validated arguments and figures out what to do with them:\n\n> doCmd :: Command -> Argument -> GState -> IO ()\n> doCmd c a (is,r) = case c of\n> \"add\" -> if a `elem` staticObjs\n> then play (is,r) addFail3\n> else play (a:is,r) (addSucc a)\n> \"drop\" -> case a of\n> \"mat\" -> play (is,r) dropMat\n> \"key\" -> play (is,r) dropKey\n> z|z `elem` boxes -> play (is,r) dropBox\n> _ -> play (filter (\\x -> x \/= a) is,r) (dropSucc a)\n> \"examObj\" -> case a of\n> \"mat\" -> play (is,r) eMat\n> \"key\" -> if isForbidden (is,r) \"exam\" \"key\"\n> then 
play (is,r) examFail\n> else play (is,r) eKey\n> \"window\" -> if \"painting\" `elem` is\n> then if \"vase\" `elem` is\n> then displayWait (is,r) windowS4 eWindow\n> else displayWait (is,r) windowS2 eWindow\n> else if \"vase\" `elem` is\n> then displayWait (is,r) windowS3 eWindow\n> else displayWait (is,r) windowS1 eWindow\n> \"lighter\" -> play (is,r) eLighter\n> \"painting\" -> displayWait (is,r) paintingS ePainting1\n> \"vase\" -> play (is,r) eVase1\n> \"door\" -> play (is,r) eDoor\n> \"doors\" -> play (is,r) eDoors\n> \"toilet\" -> play (is,r) eToilet\n> \"sink\" -> play (is,r) eSink\n> \"mirror\" -> play (is,r) eMirror\n> \"shower\" -> play (is,r) eShower\n> \"paper\" -> play (is,r) ePaper\n> \"box\" -> if r == \"Closet\"\n> then play (is,r) eShoeBox\n> else play (is,r) eQBox\n> \"clothes\" -> play (is,r) eClothes\n> \"fence\" -> play (is,r) eFence\n> z|z `elem` dirtSyns -> if \"shovel\" `elem` is\n> then digUp (is,r)\n> else displayWait (is,r) dirtS1 eDirt\n> \"cup\" -> play (is,r) eCup\n> \"boots\" -> play (is,r) eBoots\n> \"shovel\" -> if isForbidden (is,r) \"exam\" \"shovel\"\n> then play (is,r) examFail\n> else play (is,r) eShovel\n> _ -> play (is,r) (notExamable a)\n> \"examItem\" -> case a of\n> \"mat\" -> play (is,r) eMat\n> \"key\" -> play (is,r) eKey\n> \"lighter\" -> play (is,r) eLighter\n> \"painting\" -> play (is,r) ePainting2\n> \"vase\" -> play (is,r) eVase2\n> \"paper\" -> play (is,r) ePaper\n> \"shoebox\" -> play (is,r) eShoeBox\n> \"bigbox\" -> play (is,r) eQBox\n> \"cup\" -> play (is,r) eCup\n> \"boots\" -> play (is,r) eBoots\n> \"shovel\" -> play (is,r) eShovel\n> _ -> play (is,r) (notExamable a)\n> \"open\" -> case a of\n> \"door\" -> case r of\n> \"Porch\" -> if isForbidden (is,r) \"open\" \"door\"\n> then play (is,r) doorLocked\n> else moveTo (is,r) (\"living\")\n> \"Living\" -> moveTo (is,r) (\"closet\")\n> \"Bathroom\" -> moveTo (is,r) (\"hallway\")\n> \"Hallway\" -> play (is,r) hwAmbiguity\n> \"Closet\" -> moveTo (is,r) 
(\"hallway\")\n> \"Basement\" -> moveTo (is,r) (\"hallway\")\n> \"Backyard\" -> moveTo (is,r) (\"hallway\")\n> _ -> play (is,r) (notOpenable a)\n> where staticObjs = [\"door\",\"doors\",\"window\",\"toilet\",\"sink\",\"mirror\",\"shower\",\"clothes\",\"fence\",\"dirt\"]\n> boxes = [\"box\",\"bigbox\",\"shoebox\"]\n> dirtSyns = [\"dirt\",\"soil\",\"mound\"]\n> digUp (is,r) = do displayWait (is,r) dirtS2 digText\n\nIn order to add an item to the list of the player's items in the game state, we must check if\nthe player doesn't have it already, if the item is even in the room with the player, and if\nthe player is trying to \"take all\", and then react accordingly.\n\n> addItem :: GState -> Item -> IO ()\n> addItem (is,r) i = if lowi `elem` is\n> then play (is,r) (addFail1 lowi) -- Already in inventory: Fail\n> else if lowi `isInRoom` r == False\n> then if (dropWhile isSpace lowi) == \"\"\n> then play (is,r) addFail3 -- No text after cmd: Fail\n> else if lowi == \"all\"\n> then takeAll (is,r)\n> else if lowi == \"box\"\n> then case r of\n> \"Closet\" -> doCmd \"add\" \"shoebox\" (is,r)\n> \"Basement\" -> doCmd \"add\" \"bigbox\" (is,r)\n> _ -> play (is,r) (addFail2 lowi)\n> else play (is,r) (addFail2 lowi) -- Item not in room: Fail\n> else case isForbidden (is,r) \"add\" lowi of\n> True -> play (is,r) addFail3 -- Contextually forbidden item: Fail\n> False -> doCmd \"add\" lowi (is,r) -- Item is valid: Success\n> where lowi = lowerStr i\n> takeAll (is,r) = play ((nub $ is++(findAllItems r)),r) \"Took all\\n\"\n\nChecks to make sure the player isn't trying to take items or do things that are not allowed\nin the current context. 
Applies to multiple commands.\n\n> isForbidden :: GState -> Command -> Argument -> Bool\n> isForbidden (is,r) c a = case c of\n> x|x `elem` [\"add\",\"exam\"] -> case a of\n> \"key\" -> \"mat\" `notElem` is\n> \"shovel\" -> \"bigbox\" `notElem` is\n> _ -> False\n> \"open\" -> case a of\n> \"door\" -> \"key\" `notElem` is\n> _ -> False\n> \"move\" -> case a of\n> y|y `elem` [\"living\",\"forward\"] -> \"key\" `notElem` is\n> _ -> False\n> _ -> False\n\nIf we can add an item, we need a way to remove an item, but some items are too crucial\nto allow the player to drop because it would slightly break the game, so we check for\nthat in doCmd. Only fails directly if the item isn't in the player's inventory.\n\n> remItem :: GState -> Item -> IO ()\n> remItem (is,r) i = if lowi `elem` is || lowi == \"box\"\n> then doCmd \"drop\" lowi (is,r)\n> else play (is,r) dropFail -- Item not in inventory: Fail\n> where lowi = lowerStr i\n\nCommand to move from one room to another. Allows the player to move to adjacent rooms\nby mentioning them by name or to move around via directional keywords.\n\n> moveTo :: GState -> Room -> IO ()\n> moveTo (is,r) r' = case r `isAdjTo` r' of\n> Just x -> if isForbidden (is,r) \"move\" r'\n> then play (is,r) doorLocked -- Door is locked; No key\n> else play (is,(rooms !! x)) (roomPrompt !! x)\n> Nothing -> if r' `elem` dirKeywords\n> then moveDir (is,r) r'\n> else play (is,r) moveFail -- In wrong room: Fail\n> where moveDir (is,r) d = case d of\n> \"forward\" -> moveTo (is,r) (findFwd r d)\n> \"back\" -> moveTo (is,r) (findBwd r d)\n> \"right\" -> moveTo (is,r) (findRgt r d)\n> \"left\" -> moveTo (is,r) (findLft r d)\n\nExamining something is either looking at an item, looking at an object, or looking at the\nplayer's general surroundings. 
Respond differently for items in vs out of inventory.\n\n> exam :: GState -> Object -> IO ()\n> exam (is,r) i = if lowi `elem` is\n> then doCmd \"examItem\" lowi (is,r)\n> else if lowi `isInRoom` r || lowi == \"box\"\n> then doCmd \"examObj\" lowi (is,r) -- Success\n> else moveTo (is,r) (lowerStr r) -- Examine current room by default\n> where lowi = lowerStr i\n\nOpening is supposed to be applied to items and objects in general, but is currently only\napplicable to doors, which essentially just results in moving to the room on the other side.\n\n> openSsm :: GState -> Object -> IO ()\n> openSsm (is,r) i = doCmd \"open\" lowi (is,r)\n> where lowi = lowerStr i\n\nThis is a workaround to not have to deal with IO from readFile directly. Writes to and loads\nfrom a module Save.lhs, meaning the game must be recompiled before a new save will be\nrecognized and usable. Not exactly ideal, but it works for now...\n\n> saveGame :: GState -> IO ()\n> saveGame (is,r) = writeFile \".\/Save.lhs\"\n> (\"> module Save where\\n\\n> save = (\"++(show is)++\",\"++(show r)++\")\")\n\nBefore loading a game, we determine whether or not there even is a saved game in Save.lhs.\nThe saved state is compared with the default game state to see whether loading would result\nin a change from the default.\n\n> loadGame :: GState -> Maybe GState\n> loadGame (is,r) = if is \/= [] || r \/= \"Porch\"\n> then Just save\n> else Nothing\n\nDisplay a temporary informational screen and then wait for user input to continue.\n\n> displayWait :: GState -> String -> String -> IO ()\n> displayWait gs screen text =\n> do clearScreen; putStr screen; putStr text; hideCursor;\n> readline genericPrompt; showCursor\n> if screen == dirtS2 \n> then exeCmd gs \"quit\" -- GAME OVER\n> else play gs \"Well, that was exciting, eh?\\n\"\n\nDisplays the help menu. 
Displays command info and current inventory and location for the player.\n\n> helpDisp :: GState -> IO ()\n> helpDisp (is,r) =\n> do clearScreen; putStr helpText; displayCmds; displayState (is,r);\n> hideCursor; readline helpPrompt; showCursor; play (is,r) \"Hope that helped ;)\\n\\n\"\n\nDisplaying the information contained within the game state may be exposed as a user command\nat some point. Currently it shows the list of items and then prints the room below that.\n\n> displayState :: GState -> IO ()\n> displayState ([],r) = putStrLn (\"\\nItems: None\\n\\n\"++\"Current Location: \"++r)\n> displayState (is,r) = putStrLn (\"\\nItems:\\n\"++(unlines is)++\"Current Location: \"++r)\n\nUsing displayCmds with the play function, we can display all the possible commands in a nice format\nto the player when they invoke the \"help\" (or \"?\") command. We do this using the cmds and hCmds lists.\n\n> displayCmds :: IO ()\n> displayCmds = putStrLn (helpCmdTitle++(unlines hCmds))\n\nBasic commands and their synonyms, if any:\n\n> takeSyns = [\"take\",\"t\",\"grab\",\"pick\",\"steal\"]\n> dropSyns = [\"drop\",\"d\"]\n> moveSyns = [\"move\",\"m\",\"walk\",\"go\",\"enter\",\"cd\"]\n> examSyns = [\"examine\",\"e\",\"look\",\"view\",\"check\",\"read\",\"inspect\"]\n> openSyns = [\"open\",\"o\"]\n> saveSyns = [\"save\",\"s\"]\n> loadSyns = [\"load\",\"l\"]\n> whoaSyns = [\"punch\",\"kick\",\"bite\",\"kill\",\"stab\",\"beat\",\"die\",\"murder\"]\n> helpSyns = [\"help\",\"h\",\"?\",\"how\",\"what\"]\n> quitSyns = [\"quit\",\"q\",\"exit\",\"shutdown\"]\n\ncmds is a list of strings that represents the possible valid commands that a player can enter.\nhCmds is a helper list for zipping with cmds when displaying command usage to the player.\n\n> cmds = takeSyns++dropSyns++moveSyns++examSyns++openSyns++saveSyns\n> ++loadSyns++whoaSyns++helpSyns++quitSyns\n\n> hCmds = [\"take [item] ------------ Take item, put in your inventory\"\n> ,\"drop [item] ------------ Remove item from inventory 
forever\"\n> ,\"go [direction\/room] ---- Move from current room to an adjacent room\"\n> ,\"look at [object\/item] -- Take a closer look at something\"\n> ,\"open [door\/item] ------- Open and enter door \/ Open an item or object\"\n> ,\"save (s) --------------- Save your game (overwrites last save)\"\n> ,\"load (l) --------------- Load your previously saved game [Must restart game to load a new save!]\"\n> ,\"help (?) --------------- Display this list of commands...\"\n> ,\"quit (q) --------------- Quit the game (Save first!)\"]\n\n","avg_line_length":57.498349835,"max_line_length":109,"alphanum_fraction":0.4643554127} {"size":5573,"ext":"lhs","lang":"Literate Haskell","max_stars_count":219.0,"content":"#!\/usr\/bin\/env runhaskell\n\nI have two laptops connected to monitors of different sizes in two\ndifferent rooms. I always start out with a terminal, Emacs, and\nFirefox when I boot up. I got tired of putting the windows in the\nright place every time, and wanted to write a little script that takes\ncare of that for me. Using the shell for this turned out to be a\nlittle more complicated than I wanted, since I'm not that familiar\nwith bash\/zsh programming, but quite comfortable with Haskell. I had\nheard of the shelly package, and not because Percy Bysshe is one of my\nfavorite poets, I decided to give it a try. 
I found the examples a\nlittle lacking, so I cobbled this together and hope that it can help\nothers who want to use Haskell to do the things a shell should do.\n\n\nShelly tells you to start with this, so here it is:\n\n> {-# LANGUAGE OverloadedStrings #-}\n> {-# LANGUAGE ExtendedDefaultRules #-}\n> {-# OPTIONS_GHC -fno-warn-type-defaults #-}\n> import Shelly\n> import qualified Data.Text as T\n> import Data.Text (Text)\n> default (T.Text)\n\nSometimes I can't remember exactly what a function does (especially the\norder of the arguments) so I just paste the type signature in the\nsource.\n\n> -- runFoldLines :: a -> FoldCallback a -> FilePath -> [Text] -> Sh a\n> -- type FoldCallback a = a -> Text -> a \n\n> wanted :: [Text]\n> wanted = [\"terminology\", \"emacs\", \"firefox\"] -- The apps I want to run\n>\n> xps13WindowLocations, xpsWindowLocations :: [(Int,Text)]\n> xps13WindowLocations = zip [0..] [\n> \"0, 12,38,1894,1012\"\n> , \"0,1967,26,2512,1390\"\n> , \"0,1921,16,2541,1349\"\n> ]\n> \n> xpsWindowLocations = zip [0..] [\n> \"0, 12,38,1894,1012\"\n> , \"0,1967,30,3404,1390\"\n> , \"0,1921,16,3400,1360\"\n> ]\n\n> main = do\n> -- shelly $ verbosely $ do\n> shelly $ silently $ do\n> c <- concat <$> mapM runIfNotRunning wanted -- if they aren't already running, fire them up\n> echo (T.pack . show $ c)\n> when ((not . null) c) (sleep 5) -- it takes emacs a while to start up\n\nDepending upon which machine I'm running on, I need different monitor\nand window parameters. Getting the window id of a running app is not\ncompletely trivial. First I collect the window Ids of all the windows\nwith \"wmctrl -l\" and then get the WM_CLASS of that window with \"xprop\n-id <window-id> | grep WM_CLASS\". The text returned by that command\nshould contain the app name, which is searched with the function\nhasApp. 
The end result is that windowIdList contains a list of window\nIds in the same order as the app names in the wanted list above.\n\n> host <- run \"hostname\" []\n> wids <- wmctl collectWindowIds\n> appNames <- mapM getClass wids\n> let zipped = zip wids appNames\n> -- liftIO $ mapM_ print zipped\n> let windowIdList = windowIds zipped\n> -- echo . T.unwords $ windowIdList\n> case host of -- Note the \"\\n\" after the host names. That's what you get back from above\n> \"xps13\\n\" -> do\n> escaping False $ cmd \"\/usr\/bin\/xrandr\"\n> \"--output eDP1 --mode 1920x1080 --left-of DP1 --output DP1 --mode 2560x1440\"\n> mapM_ (moveWindowsAround windowIdList) xps13WindowLocations\n> \"xps\\n\" -> do\n> escaping False $ cmd \"\/usr\/bin\/xrandr\"\n> \"--output eDP1 --mode 1920x1080 --left-of DP1 --output DP1 --mode 3440x1440\"\n> mapM_ (moveWindowsAround windowIdList) xpsWindowLocations\n> otherwise -> echo $ T.unwords [\"bad host\", host]\n> where\n> collectWindowIds a b = (head . T.words $ b):a -- break into words and get the first one\n> hasApp :: Text -> (a, Text) -> Bool\n> hasApp x w = x `T.isInfixOf` (snd w) -- does the x passed in match?\n> windowIdOfApp :: [(c, Text)] -> Text -> c -- \n> windowIdOfApp ws x = fst . head . (filter (hasApp x)) $ ws\n> windowIds :: [(b, Text)] -> [b]\n> windowIds ws = map (windowIdOfApp ws) wanted\n\nThe output of wmctrl looks like this:\n0x00400002 0 xps henry@xps: wmctrl -l\n0x00c0013e 0 xps emacs@xps\n0x00e0002b 0 xps (1) Why Liquid Haskell matters - Tweag : haskell \u2014 Mozilla Firefox\n\nI only want the first column, namely the window Ids. The wmctl\nfunction I define uses a fold with collectWindowIds over the lines\noutput by the \"wmctrl -l\" program to grab the window Ids.\n\n> \n> wmctl :: FoldCallback [a] -> Sh [a]\n> wmctl f = runFoldLines [] f \"wmctrl\" [\"-l\"]\n>\n\nLook for the WM_CLASS line of the output of the \"xprop -id <window-id>\"\nprogram. 
In my case they look like this:\n\nWM_CLASS(STRING) = \"Navigator\", \"firefox\"\nWM_CLASS(STRING) = \"main\", \"terminology\"\nWM_CLASS(STRING) = \"emacs\", \"Emacs\"\n\n> getClass :: Text -> Sh Text\n> getClass x = (run \"xprop\" [\"-id\", x]) -|- run \"grep\" [\"WM_CLASS\"]\n> \n> ps :: Sh Text\n> ps = run \"ps\" [\"a\"]\n\nGiven a program name, see if it is already running. Uses the \"ps a\"\ncommand and checks if the given string exists in the \"ps a\"\noutput.\n\n> isRunning :: Text -> Sh Bool\n> isRunning x = ps >>= return . T.isInfixOf x\n\nIf the program is already running, return an empty list, otherwise\nreturn a non-empty list. This gives an easy way to check if ALL of the programs\nare already running, so we won't have to sleep waiting for them.\n\n> runIfNotRunning :: Text -> Sh [Int]\n> runIfNotRunning name = do\n> running <- isRunning name\n> if running\n> then return []\n> else ((asyncSh $ run_ (fromText name) []) >> return [1])\n\n> moveWindowsAround :: [Text] -> (Int, Text) -> Sh ()\n> moveWindowsAround windowIdList (i,location) =\n> run_ \"wmctrl\" [\"-i\", \"-r\", windowIdList!!i, \"-e\", location]\n\n","avg_line_length":39.8071428571,"max_line_length":116,"alphanum_fraction":0.6628386865} {"size":25045,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"%\n% (c) The University of Glasgow 2006\n% (c) The GRASP\/AQUA Project, Glasgow University, 1992-1998\n%\n\nThe @Inst@ type: dictionaries or method instances\n\n\\begin{code}\n{-# LANGUAGE CPP #-}\n{-# OPTIONS_GHC -fno-warn-tabs #-}\n-- The above warning suppression flag is a temporary kludge.\n-- While working on this module you are encouraged to remove it and\n-- detab the module (please do the detabbing in a separate patch). 
See\n-- http:\/\/ghc.haskell.org\/trac\/ghc\/wiki\/Commentary\/CodingStyle#TabsvsSpaces\n-- for details\n\nmodule Inst ( \n deeplySkolemise, \n deeplyInstantiate, instCall, instStupidTheta,\n emitWanted, emitWanteds,\n\n newOverloadedLit, mkOverLit, \n \n tcGetInsts, tcGetInstEnvs, getOverlapFlag,\n tcExtendLocalInstEnv, instCallConstraints, newMethodFromName,\n tcExtendDefInstEnv,\n tcSyntaxName,\n\n -- Simple functions over evidence variables\n tyVarsOfWC, tyVarsOfBag, \n tyVarsOfCt, tyVarsOfCts, \n\n tidyEvVar, tidyCt, tidySkolemInfo\n ) where\n\n#include \"HsVersions.h\"\n\nimport {-# SOURCE #-}\tTcExpr( tcPolyExpr, tcSyntaxOp )\nimport {-# SOURCE #-}\tTcUnify( unifyType )\n\nimport FastString\nimport HsSyn\nimport TcHsSyn\nimport TcRnMonad\nimport TcEnv\nimport TcEvidence\nimport InstEnv\nimport FunDeps\nimport TcMType\nimport Type\nimport Coercion ( Role(..) )\nimport TcType\nimport Unify\nimport HscTypes\nimport Id\nimport Name\nimport Var ( EvVar, varType, setVarType )\nimport VarEnv\nimport VarSet\nimport PrelNames\nimport SrcLoc\nimport DynFlags\nimport Bag\nimport Maybes\nimport Util\nimport Outputable\nimport Data.List( mapAccumL )\n\\end{code}\n\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\t\tEmitting constraints\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\nemitWanteds :: CtOrigin -> TcThetaType -> TcM [EvVar]\nemitWanteds origin theta = mapM (emitWanted origin) theta\n\nemitWanted :: CtOrigin -> TcPredType -> TcM EvVar\nemitWanted origin pred \n = do { loc <- getCtLoc origin\n ; ev <- newWantedEvVar pred\n ; emitFlat $ mkNonCanonical $\n CtWanted { ctev_pred = pred, ctev_evar = ev, ctev_loc = loc }\n ; return ev }\n\nnewMethodFromName :: CtOrigin -> Name -> TcRhoType -> TcM (HsExpr TcId)\n-- Used when Name is the wired-in name for a wired-in class method,\n-- so the caller knows its type for sure, which should be of 
form\n-- forall a. C a => \n-- newMethodFromName is supposed to instantiate just the outer \n-- type variable and constraint\n\nnewMethodFromName origin name inst_ty\n = do { id <- tcLookupId name\n \t -- Use tcLookupId not tcLookupGlobalId; the method is almost\n\t -- always a class op, but with -XRebindableSyntax GHC is\n\t -- meant to find whatever thing is in scope, and that may\n\t -- be an ordinary function. \n\n ; let (tvs, theta, _caller_knows_this) = tcSplitSigmaTy (idType id)\n (the_tv:rest) = tvs\n subst = zipOpenTvSubst [the_tv] [inst_ty]\n\n ; wrap <- ASSERT( null rest && isSingleton theta )\n instCall origin [inst_ty] (substTheta subst theta)\n ; return (mkHsWrap wrap (HsVar id)) }\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\tDeep instantiation and skolemisation\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\nNote [Deep skolemisation]\n~~~~~~~~~~~~~~~~~~~~~~~~~\ndeeplySkolemise decomposes and skolemises a type, returning a type\nwith all its arrows visible (ie not buried under foralls)\n\nExamples:\n\n deeplySkolemise (Int -> forall a. Ord a => blah) \n = ( wp, [a], [d:Ord a], Int -> blah )\n where wp = \\x:Int. \/\\a. \\(d:Ord a). x\n\n deeplySkolemise (forall a. Ord a => Maybe a -> forall b. Eq b => blah) \n = ( wp, [a,b], [d1:Ord a,d2:Eq b], Maybe a -> blah )\n where wp = \/\\a.\\(d1:Ord a).\\(x:Maybe a).\/\\b.\\(d2:Ord b). x\n\nIn general,\n if deeplySkolemise ty = (wrap, tvs, evs, rho)\n and e :: rho\n then wrap e :: ty\n and 'wrap' binds tvs, evs\n\nToDo: this eta-abstraction plays fast and loose with termination,\n because it can introduce extra lambdas. 
Maybe add a `seq` to\n fix this\n\n\n\\begin{code}\ndeeplySkolemise\n :: TcSigmaType\n -> TcM (HsWrapper, [TyVar], [EvVar], TcRhoType)\n\ndeeplySkolemise ty\n | Just (arg_tys, tvs, theta, ty') <- tcDeepSplitSigmaTy_maybe ty\n = do { ids1 <- newSysLocalIds (fsLit \"dk\") arg_tys\n ; (subst, tvs1) <- tcInstSkolTyVars tvs\n ; ev_vars1 <- newEvVars (substTheta subst theta)\n ; (wrap, tvs2, ev_vars2, rho) <- deeplySkolemise (substTy subst ty')\n ; return ( mkWpLams ids1\n <.> mkWpTyLams tvs1\n <.> mkWpLams ev_vars1\n <.> wrap\n <.> mkWpEvVarApps ids1\n , tvs1 ++ tvs2\n , ev_vars1 ++ ev_vars2\n , mkFunTys arg_tys rho ) }\n\n | otherwise\n = return (idHsWrapper, [], [], ty)\n\ndeeplyInstantiate :: CtOrigin -> TcSigmaType -> TcM (HsWrapper, TcRhoType)\n-- Int -> forall a. a -> a ==> (\\x:Int. [] x alpha) :: Int -> alpha\n-- In general if\n-- if deeplyInstantiate ty = (wrap, rho)\n-- and e :: ty\n-- then wrap e :: rho\n\ndeeplyInstantiate orig ty\n | Just (arg_tys, tvs, theta, rho) <- tcDeepSplitSigmaTy_maybe ty\n = do { (_, tys, subst) <- tcInstTyVars tvs\n ; ids1 <- newSysLocalIds (fsLit \"di\") (substTys subst arg_tys)\n ; wrap1 <- instCall orig tys (substTheta subst theta)\n ; (wrap2, rho2) <- deeplyInstantiate orig (substTy subst rho)\n ; return (mkWpLams ids1 \n <.> wrap2\n <.> wrap1 \n <.> mkWpEvVarApps ids1,\n mkFunTys arg_tys rho2) }\n\n | otherwise = return (idHsWrapper, ty)\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Instantiating a call\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\n----------------\ninstCall :: CtOrigin -> [TcType] -> TcThetaType -> TcM HsWrapper\n-- Instantiate the constraints of a call\n--\t(instCall o tys theta)\n-- (a) Makes fresh dictionaries as necessary for the constraints (theta)\n-- (b) Throws these dictionaries into the LIE\n-- (c) Returns an HsWrapper ([.] 
tys dicts)\n\ninstCall orig tys theta \n = do\t{ dict_app <- instCallConstraints orig theta\n\t; return (dict_app <.> mkWpTyApps tys) }\n\n----------------\ninstCallConstraints :: CtOrigin -> TcThetaType -> TcM HsWrapper\n-- Instantiates the TcTheta, puts all constraints thereby generated\n-- into the LIE, and returns a HsWrapper to enclose the call site.\n\ninstCallConstraints orig preds\n | null preds \n = return idHsWrapper\n | otherwise\n = do { evs <- mapM go preds\n ; traceTc \"instCallConstraints\" (ppr evs)\n ; return (mkWpEvApps evs) }\n where\n go pred \n | Just (Nominal, ty1, ty2) <- getEqPredTys_maybe pred -- Try short-cut\n = do { co <- unifyType ty1 ty2\n ; return (EvCoercion co) }\n | otherwise\n = do { ev_var <- emitWanted orig pred\n \t ; return (EvId ev_var) }\n\n----------------\ninstStupidTheta :: CtOrigin -> TcThetaType -> TcM ()\n-- Similar to instCall, but only emit the constraints in the LIE\n-- Used exclusively for the 'stupid theta' of a data constructor\ninstStupidTheta orig theta\n = do\t{ _co <- instCallConstraints orig theta -- Discard the coercion\n\t; return () }\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\t\tLiterals\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\nIn newOverloadedLit we convert directly to an Int or Integer if we\nknow that's what we want. 
This may save some time, by not\ntemporarily generating overloaded literals, but it won't catch all\ncases (the rest are caught in lookupInst).\n\n\\begin{code}\nnewOverloadedLit :: CtOrigin\n -> HsOverLit Name\n -> TcRhoType\n -> TcM (HsOverLit TcId)\nnewOverloadedLit orig lit res_ty\n = do dflags <- getDynFlags\n newOverloadedLit' dflags orig lit res_ty\n\nnewOverloadedLit' :: DynFlags\n -> CtOrigin\n -> HsOverLit Name\n -> TcRhoType\n -> TcM (HsOverLit TcId)\nnewOverloadedLit' dflags orig\n lit@(OverLit { ol_val = val, ol_rebindable = rebindable\n\t , ol_witness = meth_name }) res_ty\n\n | not rebindable\n , Just expr <- shortCutLit dflags val res_ty \n\t-- Do not generate a LitInst for rebindable syntax. \n\t-- Reason: If we do, tcSimplify will call lookupInst, which\n\t--\t will call tcSyntaxName, which does unification, \n\t--\t which tcSimplify doesn't like\n = return (lit { ol_witness = expr, ol_type = res_ty })\n\n | otherwise\n = do\t{ hs_lit <- mkOverLit val\n\t; let lit_ty = hsLitType hs_lit\n\t; fi' <- tcSyntaxOp orig meth_name (mkFunTy lit_ty res_ty)\n\t \t-- Overloaded literals must have liftedTypeKind, because\n\t \t-- we're instantiating an overloaded function here,\n\t \t-- whereas res_ty might be openTypeKind. 
This was a bug in 6.2.2\n\t\t-- However this'll be picked up by tcSyntaxOp if necessary\n\t; let witness = HsApp (noLoc fi') (noLoc (HsLit hs_lit))\n\t; return (lit { ol_witness = witness, ol_type = res_ty }) }\n\n------------\nmkOverLit :: OverLitVal -> TcM HsLit\nmkOverLit (HsIntegral i) \n = do\t{ integer_ty <- tcMetaTy integerTyConName\n\t; return (HsInteger i integer_ty) }\n\nmkOverLit (HsFractional r)\n = do\t{ rat_ty <- tcMetaTy rationalTyConName\n\t; return (HsRat r rat_ty) }\n\nmkOverLit (HsIsString s) = return (HsString s)\n\\end{code}\n\n\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\t\tRe-mappable syntax\n \n Used only for arrow syntax -- find a way to nuke this\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\nSuppose we are doing the -XRebindableSyntax thing, and we encounter\na do-expression. We have to find (>>) in the current environment, which is\ndone by the rename. Then we have to check that it has the same type as\nControl.Monad.(>>). Or, more precisely, a compatible type. One 'customer' had\nthis:\n\n (>>) :: HB m n mn => m a -> n b -> mn b\n\nSo the idea is to generate a local binding for (>>), thus:\n\n\tlet then72 :: forall a b. m a -> m b -> m b\n\t then72 = ...something involving the user's (>>)...\n\tin\n\t...the do-expression...\n\nNow the do-expression can proceed using then72, which has exactly\nthe expected type.\n\nIn fact tcSyntaxName just generates the RHS for then72, because we only\nwant an actual binding in the do-expression case. 
For literals, we can \njust use the expression inline.\n\n\\begin{code}\ntcSyntaxName :: CtOrigin\n\t -> TcType\t\t\t-- Type to instantiate it at\n\t -> (Name, HsExpr Name)\t-- (Standard name, user name)\n\t -> TcM (Name, HsExpr TcId)\t-- (Standard name, suitable expression)\n-- USED ONLY FOR CmdTop (sigh) ***\n-- See Note [CmdSyntaxTable] in HsExpr\n\ntcSyntaxName orig ty (std_nm, HsVar user_nm)\n | std_nm == user_nm\n = do rhs <- newMethodFromName orig std_nm ty\n return (std_nm, rhs)\n\ntcSyntaxName orig ty (std_nm, user_nm_expr) = do\n std_id <- tcLookupId std_nm\n let\t\n\t-- C.f. newMethodAtLoc\n\t([tv], _, tau) = tcSplitSigmaTy (idType std_id)\n \tsigma1\t\t= substTyWith [tv] [ty] tau\n\t-- Actually, the \"tau-type\" might be a sigma-type in the\n\t-- case of locally-polymorphic methods.\n\n addErrCtxtM (syntaxNameCtxt user_nm_expr orig sigma1) $ do\n\n\t-- Check that the user-supplied thing has the\n\t-- same type as the standard one. \n\t-- Tiresome jiggling because tcCheckSigma takes a located expression\n span <- getSrcSpanM\n expr <- tcPolyExpr (L span user_nm_expr) sigma1\n return (std_nm, unLoc expr)\n\nsyntaxNameCtxt :: HsExpr Name -> CtOrigin -> Type -> TidyEnv\n -> TcRn (TidyEnv, SDoc)\nsyntaxNameCtxt name orig ty tidy_env\n = do { inst_loc <- getCtLoc orig\n ; let msg = vcat [ ptext (sLit \"When checking that\") <+> quotes (ppr name)\n\t\t\t <+> ptext (sLit \"(needed by a syntactic construct)\")\n\t\t , nest 2 (ptext (sLit \"has the required type:\")\n <+> ppr (tidyType tidy_env ty))\n\t\t , nest 2 (pprArisingAt inst_loc) ]\n ; return (tidy_env, msg) }\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\t\tInstances\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\ngetOverlapFlag :: TcM OverlapFlag\ngetOverlapFlag\n = do { dflags <- getDynFlags\n ; let overlap_ok = xopt Opt_OverlappingInstances dflags\n incoherent_ok = 
xopt Opt_IncoherentInstances dflags\n use x = OverlapFlag { isSafeOverlap = safeLanguageOn dflags\n , overlapMode = x }\n overlap_flag | incoherent_ok = use Incoherent\n | overlap_ok = use OverlapOk\n | otherwise = use NoOverlap\n\n ; return overlap_flag }\n\ntcGetInstEnvs :: TcM (InstEnv, InstEnv)\n-- Gets both the external-package inst-env\n-- and the home-pkg inst env (includes module being compiled)\ntcGetInstEnvs = do { eps <- getEps; env <- getGblEnv;\n\t\t return (eps_inst_env eps, tcg_inst_env env) }\n\ntcGetInsts :: TcM [ClsInst]\n-- Gets the local class instances.\ntcGetInsts = fmap tcg_insts getGblEnv\n\ntcExtendLocalInstEnv :: [ClsInst] -> TcM a -> TcM a\n -- Add new locally-defined instances\ntcExtendLocalInstEnv dfuns thing_inside\n = do { traceDFuns dfuns\n ; env <- getGblEnv\n ; inst_env' <- foldlM addLocalInst (tcg_inst_env env) dfuns\n ; let env' = env { tcg_insts = dfuns ++ tcg_insts env,\n\t\t\t tcg_inst_env = inst_env' }\n ; setGblEnv env' thing_inside }\n\naddLocalInst :: InstEnv -> ClsInst -> TcM InstEnv\n-- Check that the proposed new instance is OK, \n-- and then add it to the home inst env\n-- If overwrite_inst, then we can overwrite a direct match\naddLocalInst home_ie ispec\n = do {\n -- Instantiate the dfun type so that we extend the instance\n -- envt with completely fresh template variables\n -- This is important because the template variables must\n -- not overlap with anything in the things being looked up\n -- (since we do unification). \n --\n -- We use tcInstSkolType because we don't want to allocate fresh\n -- *meta* type variables.\n --\n -- We use UnkSkol --- and *not* InstSkol or PatSkol --- because\n -- these variables must be bindable by tcUnifyTys. 
See\n -- the call to tcUnifyTys in InstEnv, and the special\n -- treatment that instanceBindFun gives to isOverlappableTyVar\n -- This is absurdly delicate.\n\n -- Load imported instances, so that we report\n -- duplicates correctly\n eps <- getEps\n ; let inst_envs = (eps_inst_env eps, home_ie)\n (tvs, cls, tys) = instanceHead ispec\n\n -- Check functional dependencies\n ; case checkFunDeps inst_envs ispec of\n Just specs -> funDepErr ispec specs\n Nothing -> return ()\n\n -- Check for duplicate instance decls\n ; let (matches, unifs, _) = lookupInstEnv inst_envs cls tys\n dup_ispecs = [ dup_ispec \n | (dup_ispec, _) <- matches\n , let dup_tys = is_tys dup_ispec\n , isJust (tcMatchTys (mkVarSet tvs) tys dup_tys)]\n \n -- Find members of the match list which ispec itself matches.\n -- If the match is 2-way, it's a duplicate\n -- If it's a duplicate, but we can overwrite home package dups, then overwrite\n ; isGHCi <- getIsGHCi\n ; overlapFlag <- getOverlapFlag\n ; case isGHCi of\n False -> case dup_ispecs of\n dup : _ -> dupInstErr ispec dup >> return (extendInstEnv home_ie ispec)\n [] -> return (extendInstEnv home_ie ispec)\n True -> case (dup_ispecs, home_ie_matches, unifs, overlapMode overlapFlag) of\n (_, _:_, _, _) -> return (overwriteInstEnv home_ie ispec)\n (dup:_, [], _, _) -> dupInstErr ispec dup >> return (extendInstEnv home_ie ispec)\n ([], _, u:_, NoOverlap) -> overlappingInstErr ispec u >> return (extendInstEnv home_ie ispec)\n _ -> return (extendInstEnv home_ie ispec)\n where (homematches, _) = lookupInstEnv' home_ie cls tys\n home_ie_matches = [ dup_ispec \n | (dup_ispec, _) <- homematches\n , let dup_tys = is_tys dup_ispec\n , isJust (tcMatchTys (mkVarSet tvs) tys dup_tys)] }\n\ntcExtendDefInstEnv :: [ClsInst] -> TcM a -> TcM a\n -- Add new locally-defined default instances\ntcExtendDefInstEnv dfuns thing_inside\n = do { traceDFuns dfuns\n ; env <- getGblEnv\n ; dinst_env' <- foldlM addDefInst (tcg_dinst_env env) dfuns\n ; let env' = env { 
tcg_dinsts = dfuns ++ tcg_dinsts env,\n\t\t\t tcg_dinst_env = dinst_env' }\n ; setGblEnv env' thing_inside }\n\naddDefInst :: InstEnv -> ClsInst -> TcM InstEnv\n-- Check that the proposed new default instance is OK, \n-- and then add it to the home def inst env\n-- If overwrite_inst, then we can overwrite a direct match\n-- This is mostly the same as addLocalInst; the only difference\n-- should occur in context checking\naddDefInst home_ie ispec\n = do {\n -- Instantiate the dfun type so that we extend the instance\n -- envt with completely fresh template variables\n -- This is important because the template variables must\n -- not overlap with anything in the things being looked up\n -- (since we do unification). \n --\n -- We use tcInstSkolType because we don't want to allocate fresh\n -- *meta* type variables.\n --\n -- We use UnkSkol --- and *not* InstSkol or PatSkol --- because\n -- these variables must be bindable by tcUnifyTys. See\n -- the call to tcUnifyTys in InstEnv, and the special\n -- treatment that instanceBindFun gives to isOverlappableTyVar\n -- This is absurdly delicate.\n\n -- Load imported instances, so that we report\n -- duplicates correctly\n eps <- getEps\n ; let inst_envs = (eps_inst_env eps, home_ie)\n (tvs, cls, tys) = instanceHead ispec\n\n -- Check functional dependencies\n ; case checkFunDeps inst_envs ispec of\n Just specs -> funDepErr ispec specs\n Nothing -> return ()\n\n -- Check for duplicate instance decls\n ; let (matches, unifs, _) = lookupInstEnv inst_envs cls tys\n dup_ispecs = [ dup_ispec \n | (dup_ispec, _) <- matches\n , let dup_tys = is_tys dup_ispec\n , isJust (tcMatchTys (mkVarSet tvs) tys dup_tys)]\n \n -- Find members of the match list which ispec itself matches.\n -- If the match is 2-way, it's a duplicate\n -- If it's a duplicate, but we can overwrite home package dups, then overwrite\n ; isGHCi <- getIsGHCi\n ; overlapFlag <- getOverlapFlag\n ; case isGHCi of\n False -> case dup_ispecs of\n dup : _ -> dupInstErr 
ispec dup >> return (extendInstEnv home_ie ispec)\n [] -> return (extendInstEnv home_ie ispec)\n True -> case (dup_ispecs, home_ie_matches, unifs, overlapMode overlapFlag) of\n (_, _:_, _, _) -> return (overwriteInstEnv home_ie ispec)\n (dup:_, [], _, _) -> dupInstErr ispec dup >> return (extendInstEnv home_ie ispec)\n ([], _, u:_, NoOverlap) -> overlappingInstErr ispec u >> return (extendInstEnv home_ie ispec)\n _ -> return (extendInstEnv home_ie ispec)\n where (homematches, _) = lookupInstEnv' home_ie cls tys\n home_ie_matches = [ dup_ispec \n | (dup_ispec, _) <- homematches\n , let dup_tys = is_tys dup_ispec\n , isJust (tcMatchTys (mkVarSet tvs) tys dup_tys)] }\n\n\n\n\ntraceDFuns :: [ClsInst] -> TcRn ()\ntraceDFuns ispecs\n = traceTc \"Adding instances:\" (vcat (map pp ispecs))\n where\n pp ispec = hang (ppr (instanceDFunId ispec) <+> colon)\n 2 (ppr ispec)\n\t-- Print the dfun name itself too\n\nfunDepErr :: ClsInst -> [ClsInst] -> TcRn ()\nfunDepErr ispec ispecs\n = addClsInstsErr (ptext (sLit \"Functional dependencies conflict between instance declarations:\"))\n (ispec : ispecs)\n\ndupInstErr :: ClsInst -> ClsInst -> TcRn ()\ndupInstErr ispec dup_ispec\n = addClsInstsErr (ptext (sLit \"Duplicate instance declarations:\"))\n\t [ispec, dup_ispec]\n\noverlappingInstErr :: ClsInst -> ClsInst -> TcRn ()\noverlappingInstErr ispec dup_ispec\n = addClsInstsErr (ptext (sLit \"Overlapping instance declarations:\")) \n [ispec, dup_ispec]\n\naddClsInstsErr :: SDoc -> [ClsInst] -> TcRn ()\naddClsInstsErr herald ispecs\n = setSrcSpan (getSrcSpan (head sorted)) $\n addErr (hang herald 2 (pprInstances sorted))\n where\n sorted = sortWith getSrcLoc ispecs\n -- The sortWith just arranges that instances are displayed in order\n -- of source location, which reduces wobbling in error messages,\n -- and is better for users\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\tSimple functions over evidence 
variables\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\n---------------- Getting free tyvars -------------------------\ntyVarsOfCt :: Ct -> TcTyVarSet\n-- NB: the \ntyVarsOfCt (CTyEqCan { cc_tyvar = tv, cc_rhs = xi }) = extendVarSet (tyVarsOfType xi) tv\ntyVarsOfCt (CFunEqCan { cc_tyargs = tys, cc_rhs = xi }) = tyVarsOfTypes (xi:tys)\ntyVarsOfCt (CDictCan { cc_tyargs = tys }) \t = tyVarsOfTypes tys\ntyVarsOfCt (CIrredEvCan { cc_ev = ev }) = tyVarsOfType (ctEvPred ev)\ntyVarsOfCt (CHoleCan { cc_ev = ev }) = tyVarsOfType (ctEvPred ev)\ntyVarsOfCt (CNonCanonical { cc_ev = ev }) = tyVarsOfType (ctEvPred ev)\n\ntyVarsOfCts :: Cts -> TcTyVarSet\ntyVarsOfCts = foldrBag (unionVarSet . tyVarsOfCt) emptyVarSet\n\ntyVarsOfWC :: WantedConstraints -> TyVarSet\n-- Only called on *zonked* things, hence no need to worry about flatten-skolems\ntyVarsOfWC (WC { wc_flat = flat, wc_impl = implic, wc_insol = insol })\n = tyVarsOfCts flat `unionVarSet`\n tyVarsOfBag tyVarsOfImplic implic `unionVarSet`\n tyVarsOfCts insol\n\ntyVarsOfImplic :: Implication -> TyVarSet\n-- Only called on *zonked* things, hence no need to worry about flatten-skolems\ntyVarsOfImplic (Implic { ic_skols = skols, ic_fsks = fsks\n , ic_given = givens, ic_wanted = wanted })\n = (tyVarsOfWC wanted `unionVarSet` tyVarsOfTypes (map evVarPred givens))\n `delVarSetList` skols `delVarSetList` fsks\n\ntyVarsOfBag :: (a -> TyVarSet) -> Bag a -> TyVarSet\ntyVarsOfBag tvs_of = foldrBag (unionVarSet . 
tvs_of) emptyVarSet\n\n---------------- Tidying -------------------------\n\ntidyCt :: TidyEnv -> Ct -> Ct\n-- Used only in error reporting\n-- Also converts it to non-canonical\ntidyCt env ct \n = case ct of\n CHoleCan { cc_ev = ev }\n -> ct { cc_ev = tidy_ev env ev }\n _ -> mkNonCanonical (tidy_ev env (ctEvidence ct))\n where \n tidy_ev :: TidyEnv -> CtEvidence -> CtEvidence\n -- NB: we do not tidy the ctev_evtm\/var field because we don't \n -- show it in error messages\n tidy_ev env ctev@(CtGiven { ctev_pred = pred })\n = ctev { ctev_pred = tidyType env pred }\n tidy_ev env ctev@(CtWanted { ctev_pred = pred })\n = ctev { ctev_pred = tidyType env pred }\n tidy_ev env ctev@(CtDerived { ctev_pred = pred })\n = ctev { ctev_pred = tidyType env pred }\n\ntidyEvVar :: TidyEnv -> EvVar -> EvVar\ntidyEvVar env var = setVarType var (tidyType env (varType var))\n\ntidySkolemInfo :: TidyEnv -> SkolemInfo -> (TidyEnv, SkolemInfo)\ntidySkolemInfo env (SigSkol cx ty) \n = (env', SigSkol cx ty')\n where\n (env', ty') = tidyOpenType env ty\n\ntidySkolemInfo env (InferSkol ids) \n = (env', InferSkol ids')\n where\n (env', ids') = mapAccumL do_one env ids\n do_one env (name, ty) = (env', (name, ty'))\n where\n (env', ty') = tidyOpenType env ty\n\ntidySkolemInfo env (UnifyForAllSkol skol_tvs ty) \n = (env1, UnifyForAllSkol skol_tvs' ty')\n where\n env1 = tidyFreeTyVars env (tyVarsOfType ty `delVarSetList` skol_tvs)\n (env2, skol_tvs') = tidyTyVarBndrs env1 skol_tvs\n ty' = tidyType env2 ty\n\ntidySkolemInfo env info = (env, info)\n\\end{code}\n\n%\n% (c) The GRASP\/AQUA Project, Glasgow University, 1992-1998\n%\n\\section[RnSource]{Main pass of renamer}\n\n\\begin{code}\nmodule RnTypes (\n -- Type related stuff\n rnHsType, rnLHsType, rnLHsTypes, rnContext,\n rnHsKind, rnLHsKind, rnLHsMaybeKind,\n rnHsSigType, 
rnLHsInstType, rnConDeclFields,\n newTyVarNameRn,\n\n -- Precedence related stuff\n mkOpAppRn, mkNegAppRn, mkOpFormRn, mkConOpPatRn,\n checkPrecMatch, checkSectionPrec, warnUnusedForAlls,\n\n -- Binding related stuff\n bindSigTyVarsFV, bindHsTyVars, rnHsBndrSig,\n extractHsTyRdrTyVars, extractHsTysRdrTyVars,\n extractRdrKindSigVars, extractDataDefnKindVars, filterInScope\n ) where\n\nimport {-# SOURCE #-} TcSplice( runQuasiQuoteType )\nimport {-# SOURCE #-} RnSplice( rnSpliceType )\n\nimport DynFlags\nimport HsSyn\nimport RnHsDoc ( rnLHsDoc, rnMbLHsDoc )\nimport RnEnv\nimport TcRnMonad\nimport RdrName\nimport PrelNames\nimport TysPrim ( funTyConName )\nimport Name\nimport SrcLoc\nimport NameSet\n\nimport Util\nimport BasicTypes ( compareFixity, funTyFixity, negateFixity,\n Fixity(..), FixityDirection(..) )\nimport Outputable\nimport FastString\nimport Maybes\nimport Data.List ( nub )\nimport Control.Monad ( unless, when )\n\n#include \"HsVersions.h\"\n\\end{code}\n\nThese type renamers are in a separate module, rather than in (say) RnSource,\nto break several loops.\n\n%*********************************************************\n%* *\n\\subsection{Renaming types}\n%* *\n%*********************************************************\n\n\\begin{code}\nrnHsSigType :: SDoc -> LHsType RdrName -> RnM (LHsType Name, FreeVars)\n -- rnHsSigType is used for source-language type signatures,\n -- which use *implicit* universal quantification.\nrnHsSigType doc_str ty = rnLHsType (TypeSigCtx doc_str) ty\n\nrnLHsInstType :: SDoc -> LHsType RdrName -> RnM (LHsType Name, FreeVars)\n-- Rename the type in an instance or standalone deriving decl\nrnLHsInstType doc_str ty\n = do { (ty', fvs) <- rnLHsType (GenericCtx doc_str) ty\n ; unless good_inst_ty (addErrAt (getLoc ty) (badInstTy ty))\n ; return (ty', fvs) }\n where\n good_inst_ty\n | Just (_, _, L _ cls, _) <- splitLHsInstDeclTy_maybe ty\n , isTcOcc (rdrNameOcc cls) = True\n | otherwise = False\n\nbadInstTy :: LHsType RdrName -> 
SDoc\nbadInstTy ty = ptext (sLit \"Malformed instance:\") <+> ppr ty\n\\end{code}\n\nrnHsType is here because we call it from loadInstDecl, and I didn't\nwant a gratuitous knot.\n\n\\begin{code}\nrnLHsTyKi :: Bool -- True <=> renaming a type, False <=> a kind\n -> HsDocContext -> LHsType RdrName -> RnM (LHsType Name, FreeVars)\nrnLHsTyKi isType doc (L loc ty)\n = setSrcSpan loc $\n do { (ty', fvs) <- rnHsTyKi isType doc ty\n ; return (L loc ty', fvs) }\n\nrnLHsType :: HsDocContext -> LHsType RdrName -> RnM (LHsType Name, FreeVars)\nrnLHsType = rnLHsTyKi True\n\nrnLHsKind :: HsDocContext -> LHsKind RdrName -> RnM (LHsKind Name, FreeVars)\nrnLHsKind = rnLHsTyKi False\n\nrnLHsMaybeKind :: HsDocContext -> Maybe (LHsKind RdrName)\n -> RnM (Maybe (LHsKind Name), FreeVars)\nrnLHsMaybeKind _ Nothing\n = return (Nothing, emptyFVs)\nrnLHsMaybeKind doc (Just kind)\n = do { (kind', fvs) <- rnLHsKind doc kind\n ; return (Just kind', fvs) }\n\nrnHsType :: HsDocContext -> HsType RdrName -> RnM (HsType Name, FreeVars)\nrnHsType = rnHsTyKi True\nrnHsKind :: HsDocContext -> HsKind RdrName -> RnM (HsKind Name, FreeVars)\nrnHsKind = rnHsTyKi False\n\nrnHsTyKi :: Bool -> HsDocContext -> HsType RdrName -> RnM (HsType Name, FreeVars)\n\nrnHsTyKi isType doc (HsForAllTy Implicit _ lctxt@(L _ ctxt) ty)\n = ASSERT( isType ) do\n -- Implicit quantification in source code (no kinds on tyvars)\n -- Given the signature C => T we universally quantify\n -- over FV(T) \\ {in-scope-tyvars}\n rdr_env <- getLocalRdrEnv\n loc <- getSrcSpanM\n let\n (forall_kvs, forall_tvs) = filterInScope rdr_env $\n extractHsTysRdrTyVars (ty:ctxt)\n -- In for-all types we don't bring in scope\n -- kind variables mentioned in kind signatures\n -- (Well, not yet anyway....)\n -- f :: Int -> T (a::k) -- Not allowed\n\n -- The filterInScope is to ensure that we don't quantify over\n -- type variables that are in scope; when GlasgowExts is off,\n -- there usually won't be any, except for class signatures:\n -- class C a 
where { op :: a -> a }\n tyvar_bndrs = userHsTyVarBndrs loc forall_tvs\n\n rnForAll doc Implicit forall_kvs (mkHsQTvs tyvar_bndrs) lctxt ty\n\nrnHsTyKi isType doc ty@(HsForAllTy Explicit forall_tyvars lctxt@(L _ ctxt) tau)\n = ASSERT( isType ) do { -- Explicit quantification.\n -- Check that the forall'd tyvars are actually\n -- mentioned in the type, and produce a warning if not\n let (kvs, mentioned) = extractHsTysRdrTyVars (tau:ctxt)\n in_type_doc = ptext (sLit \"In the type\") <+> quotes (ppr ty)\n ; warnUnusedForAlls (in_type_doc $$ docOfHsDocContext doc) forall_tyvars mentioned\n\n ; rnForAll doc Explicit kvs forall_tyvars lctxt tau }\n\nrnHsTyKi isType _ (HsTyVar rdr_name)\n = do { name <- rnTyVar isType rdr_name\n ; return (HsTyVar name, unitFV name) }\n\n-- If we see (forall a . ty), without foralls on, the forall will give\n-- a sensible error message, but we don't want to complain about the dot too\n-- Hence the jiggery pokery with ty1\nrnHsTyKi isType doc ty@(HsOpTy ty1 (wrapper, L loc op) ty2)\n = ASSERT( isType ) setSrcSpan loc $\n do { ops_ok <- xoptM Opt_TypeOperators\n ; op' <- if ops_ok\n then rnTyVar isType op\n else do { addErr (opTyErr op ty)\n ; return (mkUnboundName op) } -- Avoid double complaint\n ; let l_op' = L loc op'\n ; fix <- lookupTyFixityRn l_op'\n ; (ty1', fvs1) <- rnLHsType doc ty1\n ; (ty2', fvs2) <- rnLHsType doc ty2\n ; res_ty <- mkHsOpTyRn (\\t1 t2 -> HsOpTy t1 (wrapper, l_op') t2)\n op' fix ty1' ty2'\n ; return (res_ty, (fvs1 `plusFV` fvs2) `addOneFV` op') }\n\nrnHsTyKi isType doc (HsParTy ty)\n = do { (ty', fvs) <- rnLHsTyKi isType doc ty\n ; return (HsParTy ty', fvs) }\n\nrnHsTyKi isType doc (HsBangTy b ty)\n = ASSERT( isType )\n do { (ty', fvs) <- rnLHsType doc ty\n ; return (HsBangTy b ty', fvs) }\n\nrnHsTyKi _ doc ty@(HsRecTy flds)\n = do { addErr (hang (ptext (sLit \"Record syntax is illegal here:\"))\n 2 (ppr ty))\n ; (flds', fvs) <- rnConDeclFields doc flds\n ; return (HsRecTy flds', fvs) }\n\nrnHsTyKi isType doc 
(HsFunTy ty1 ty2)\n = do { (ty1', fvs1) <- rnLHsTyKi isType doc ty1\n -- Might find a for-all as the arg of a function type\n ; (ty2', fvs2) <- rnLHsTyKi isType doc ty2\n -- Or as the result. This happens when reading Prelude.hi\n -- when we find return :: forall m. Monad m -> forall a. a -> m a\n\n -- Check for fixity rearrangements\n ; res_ty <- if isType\n then mkHsOpTyRn HsFunTy funTyConName funTyFixity ty1' ty2'\n else return (HsFunTy ty1' ty2')\n ; return (res_ty, fvs1 `plusFV` fvs2) }\n\nrnHsTyKi isType doc listTy@(HsListTy ty)\n = do { data_kinds <- xoptM Opt_DataKinds\n ; unless (data_kinds || isType) (addErr (dataKindsErr isType listTy))\n ; (ty', fvs) <- rnLHsTyKi isType doc ty\n ; return (HsListTy ty', fvs) }\n\nrnHsTyKi isType doc (HsKindSig ty k)\n = ASSERT( isType )\n do { kind_sigs_ok <- xoptM Opt_KindSignatures\n ; unless kind_sigs_ok (badSigErr False doc ty)\n ; (ty', fvs1) <- rnLHsType doc ty\n ; (k', fvs2) <- rnLHsKind doc k\n ; return (HsKindSig ty' k', fvs1 `plusFV` fvs2) }\n\nrnHsTyKi isType doc (HsPArrTy ty)\n = ASSERT( isType )\n do { (ty', fvs) <- rnLHsType doc ty\n ; return (HsPArrTy ty', fvs) }\n\n-- Unboxed tuples are allowed to have poly-typed arguments. 
These\n-- sometimes crop up as a result of CPR worker-wrappering dictionaries.\nrnHsTyKi isType doc tupleTy@(HsTupleTy tup_con tys)\n = do { data_kinds <- xoptM Opt_DataKinds\n ; unless (data_kinds || isType) (addErr (dataKindsErr isType tupleTy))\n ; (tys', fvs) <- mapFvRn (rnLHsTyKi isType doc) tys\n ; return (HsTupleTy tup_con tys', fvs) }\n\n-- Perhaps we should use a separate extension here?\n-- Ensure that a type-level integer is nonnegative (#8306, #8412)\nrnHsTyKi isType _ tyLit@(HsTyLit t)\n = do { data_kinds <- xoptM Opt_DataKinds\n ; unless (data_kinds || isType) (addErr (dataKindsErr isType tyLit))\n ; when (negLit t) (addErr negLitErr)\n ; return (HsTyLit t, emptyFVs) }\n where\n negLit (HsStrTy _) = False\n negLit (HsNumTy i) = i < 0\n negLitErr = ptext (sLit \"Illegal literal in type (type literals must not be negative):\") <+> ppr tyLit\n\nrnHsTyKi isType doc (HsAppTy ty1 ty2)\n = do { (ty1', fvs1) <- rnLHsTyKi isType doc ty1\n ; (ty2', fvs2) <- rnLHsTyKi isType doc ty2\n ; return (HsAppTy ty1' ty2', fvs1 `plusFV` fvs2) }\n\nrnHsTyKi isType doc (HsIParamTy n ty)\n = ASSERT( isType )\n do { (ty', fvs) <- rnLHsType doc ty\n ; return (HsIParamTy n ty', fvs) }\n\nrnHsTyKi isType doc (HsEqTy ty1 ty2)\n = ASSERT( isType )\n do { (ty1', fvs1) <- rnLHsType doc ty1\n ; (ty2', fvs2) <- rnLHsType doc ty2\n ; return (HsEqTy ty1' ty2', fvs1 `plusFV` fvs2) }\n\nrnHsTyKi isType _ (HsSpliceTy sp k)\n = ASSERT( isType )\n rnSpliceType sp k\n\nrnHsTyKi isType doc (HsDocTy ty haddock_doc)\n = ASSERT( isType )\n do { (ty', fvs) <- rnLHsType doc ty\n ; haddock_doc' <- rnLHsDoc haddock_doc\n ; return (HsDocTy ty' haddock_doc', fvs) }\n\nrnHsTyKi isType doc (HsQuasiQuoteTy qq)\n = ASSERT( isType )\n do { ty <- runQuasiQuoteType qq\n -- Wrap the result of the quasi-quoter in parens so that we don't\n -- lose the outermost location set by runQuasiQuote (#7918) \n ; rnHsType doc (HsParTy ty) }\n\nrnHsTyKi isType _ (HsCoreTy ty)\n = ASSERT( isType )\n return (HsCoreTy ty, 
emptyFVs)\n -- The emptyFVs probably isn't quite right\n -- but I don't think it matters\n\nrnHsTyKi _ _ (HsWrapTy {})\n = panic \"rnHsTyKi\"\n\nrnHsTyKi isType doc ty@(HsExplicitListTy k tys)\n = ASSERT( isType )\n do { data_kinds <- xoptM Opt_DataKinds\n ; unless data_kinds (addErr (dataKindsErr isType ty))\n ; (tys', fvs) <- rnLHsTypes doc tys\n ; return (HsExplicitListTy k tys', fvs) }\n\nrnHsTyKi isType doc ty@(HsExplicitTupleTy kis tys)\n = ASSERT( isType )\n do { data_kinds <- xoptM Opt_DataKinds\n ; unless data_kinds (addErr (dataKindsErr isType ty))\n ; (tys', fvs) <- rnLHsTypes doc tys\n ; return (HsExplicitTupleTy kis tys', fvs) }\n\n--------------\nrnTyVar :: Bool -> RdrName -> RnM Name\nrnTyVar is_type rdr_name\n | is_type = lookupTypeOccRn rdr_name\n | otherwise = lookupKindOccRn rdr_name\n\n\n--------------\nrnLHsTypes :: HsDocContext -> [LHsType RdrName]\n -> RnM ([LHsType Name], FreeVars)\nrnLHsTypes doc tys = mapFvRn (rnLHsType doc) tys\n\\end{code}\n\n\n\\begin{code}\nrnForAll :: HsDocContext -> HsExplicitFlag\n -> [RdrName] -- Kind variables\n -> LHsTyVarBndrs RdrName -- Type variables\n -> LHsContext RdrName -> LHsType RdrName\n -> RnM (HsType Name, FreeVars)\n\nrnForAll doc exp kvs forall_tyvars ctxt ty\n | null kvs, null (hsQTvBndrs forall_tyvars), null (unLoc ctxt)\n = rnHsType doc (unLoc ty)\n -- One reason for this case is that a type like Int#\n -- starts off as (HsForAllTy Nothing [] Int), in case\n -- there is some quantification. Now that we have quantified\n -- and discovered there are no type variables, it's nicer to turn\n -- it into plain Int. 
If it were Int# instead of Int, we'd actually\n -- get an error, because the body of a genuine for-all is\n -- of kind *.\n\n | otherwise\n = bindHsTyVars doc Nothing kvs forall_tyvars $ \\ new_tyvars ->\n do { (new_ctxt, fvs1) <- rnContext doc ctxt\n ; (new_ty, fvs2) <- rnLHsType doc ty\n ; return (HsForAllTy exp new_tyvars new_ctxt new_ty, fvs1 `plusFV` fvs2) }\n -- Retain the same implicit\/explicit flag as before\n -- so that we can later print it correctly\n\n---------------\nbindSigTyVarsFV :: [Name]\n -> RnM (a, FreeVars)\n -> RnM (a, FreeVars)\n-- Used just before renaming the defn of a function\n-- with a separate type signature, to bring its tyvars into scope\n-- With no -XScopedTypeVariables, this is a no-op\nbindSigTyVarsFV tvs thing_inside\n = do { scoped_tyvars <- xoptM Opt_ScopedTypeVariables\n ; if not scoped_tyvars then\n thing_inside\n else\n bindLocalNamesFV tvs thing_inside }\n\n---------------\nbindHsTyVars :: HsDocContext\n -> Maybe a -- Just _ => an associated type decl\n -> [RdrName] -- Kind variables from scope\n -> LHsTyVarBndrs RdrName -- Type variables\n -> (LHsTyVarBndrs Name -> RnM (b, FreeVars))\n -> RnM (b, FreeVars)\n-- (a) Bring kind variables into scope\n-- both (i) passed in (kv_bndrs)\n-- and (ii) mentioned in the kinds of tv_bndrs\n-- (b) Bring type variables into scope\nbindHsTyVars doc mb_assoc kv_bndrs tv_bndrs thing_inside\n = do { rdr_env <- getLocalRdrEnv\n ; let tvs = hsQTvBndrs tv_bndrs\n kvs_from_tv_bndrs = [ kv | L _ (KindedTyVar _ kind) <- tvs\n , let (_, kvs) = extractHsTyRdrTyVars kind\n , kv <- kvs ]\n all_kvs = filterOut (`elemLocalRdrEnv` rdr_env) $\n nub (kv_bndrs ++ kvs_from_tv_bndrs)\n overlap_kvs = [ kv | kv <- all_kvs, any ((==) kv . 
hsLTyVarName) tvs ]\n -- These variables appear both as kind and type variables\n -- in the same declaration; eg type family T (x :: *) (y :: x)\n -- We disallow this: too confusing!\n\n ; poly_kind <- xoptM Opt_PolyKinds\n ; unless (poly_kind || null all_kvs)\n (addErr (badKindBndrs doc all_kvs))\n ; unless (null overlap_kvs)\n (addErr (overlappingKindVars doc overlap_kvs))\n\n ; loc <- getSrcSpanM\n ; kv_names <- mapM (newLocalBndrRn . L loc) all_kvs\n ; bindLocalNamesFV kv_names $\n do { let tv_names_w_loc = hsLTyVarLocNames tv_bndrs\n\n rn_tv_bndr :: LHsTyVarBndr RdrName -> RnM (LHsTyVarBndr Name, FreeVars)\n rn_tv_bndr (L loc (UserTyVar rdr))\n = do { nm <- newTyVarNameRn mb_assoc rdr_env loc rdr\n ; return (L loc (UserTyVar nm), emptyFVs) }\n rn_tv_bndr (L loc (KindedTyVar rdr kind))\n = do { sig_ok <- xoptM Opt_KindSignatures\n ; unless sig_ok (badSigErr False doc kind)\n ; nm <- newTyVarNameRn mb_assoc rdr_env loc rdr\n ; (kind', fvs) <- rnLHsKind doc kind\n ; return (L loc (KindedTyVar nm kind'), fvs) }\n\n -- Check for duplicate or shadowed tyvar bindrs\n ; checkDupRdrNames tv_names_w_loc\n ; when (isNothing mb_assoc) (checkShadowedRdrNames tv_names_w_loc)\n\n ; (tv_bndrs', fvs1) <- mapFvRn rn_tv_bndr tvs\n ; (res, fvs2) <- bindLocalNamesFV (map hsLTyVarName tv_bndrs') $\n do { env <- getLocalRdrEnv\n ; traceRn (text \"bhtv\" <+> (ppr tvs $$ ppr all_kvs $$ ppr env))\n ; thing_inside (HsQTvs { hsq_tvs = tv_bndrs', hsq_kvs = kv_names }) }\n ; return (res, fvs1 `plusFV` fvs2) } }\n\nnewTyVarNameRn :: Maybe a -> LocalRdrEnv -> SrcSpan -> RdrName -> RnM Name\nnewTyVarNameRn mb_assoc rdr_env loc rdr\n | Just _ <- mb_assoc -- Use the same Name as the parent class decl\n , Just n <- lookupLocalRdrEnv rdr_env rdr\n = return n\n | otherwise\n = newLocalBndrRn (L loc rdr)\n\n--------------------------------\nrnHsBndrSig :: HsDocContext\n -> HsWithBndrs (LHsType RdrName)\n -> (HsWithBndrs (LHsType Name) -> RnM (a, FreeVars))\n -> RnM (a, FreeVars)\nrnHsBndrSig doc 
(HsWB { hswb_cts = ty@(L loc _) }) thing_inside\n = do { sig_ok <- xoptM Opt_ScopedTypeVariables\n ; unless sig_ok (badSigErr True doc ty)\n ; let (kv_bndrs, tv_bndrs) = extractHsTyRdrTyVars ty\n ; name_env <- getLocalRdrEnv\n ; tv_names <- newLocalBndrsRn [L loc tv | tv <- tv_bndrs\n , not (tv `elemLocalRdrEnv` name_env) ]\n ; kv_names <- newLocalBndrsRn [L loc kv | kv <- kv_bndrs\n , not (kv `elemLocalRdrEnv` name_env) ]\n ; bindLocalNamesFV kv_names $\n bindLocalNamesFV tv_names $\n do { (ty', fvs1) <- rnLHsType doc ty\n ; (res, fvs2) <- thing_inside (HsWB { hswb_cts = ty', hswb_kvs = kv_names, hswb_tvs = tv_names })\n ; return (res, fvs1 `plusFV` fvs2) } }\n\noverlappingKindVars :: HsDocContext -> [RdrName] -> SDoc\noverlappingKindVars doc kvs\n = vcat [ ptext (sLit \"Kind variable\") <> plural kvs <+>\n ptext (sLit \"also used as type variable\") <> plural kvs\n <> colon <+> pprQuotedList kvs\n , docOfHsDocContext doc ]\n\nbadKindBndrs :: HsDocContext -> [RdrName] -> SDoc\nbadKindBndrs doc kvs\n = vcat [ hang (ptext (sLit \"Unexpected kind variable\") <> plural kvs\n <+> pprQuotedList kvs)\n 2 (ptext (sLit \"Perhaps you intended to use PolyKinds\"))\n , docOfHsDocContext doc ]\n\nbadSigErr :: Bool -> HsDocContext -> LHsType RdrName -> TcM ()\nbadSigErr is_type doc (L loc ty)\n = setSrcSpan loc $ addErr $\n vcat [ hang (ptext (sLit \"Illegal\") <+> what\n <+> ptext (sLit \"signature:\") <+> quotes (ppr ty))\n 2 (ptext (sLit \"Perhaps you intended to use\") <+> flag)\n , docOfHsDocContext doc ]\n where\n what | is_type = ptext (sLit \"type\")\n | otherwise = ptext (sLit \"kind\")\n flag | is_type = ptext (sLit \"ScopedTypeVariables\")\n | otherwise = ptext (sLit \"KindSignatures\")\n\ndataKindsErr :: Bool -> HsType RdrName -> SDoc\ndataKindsErr is_type thing\n = hang (ptext (sLit \"Illegal\") <+> what <> colon <+> quotes (ppr thing))\n 2 (ptext (sLit \"Perhaps you intended to use DataKinds\"))\n where\n what | is_type = ptext (sLit \"type\")\n | otherwise = 
ptext (sLit \"kind\")\n\\end{code}\n\nNote [Renaming associated types]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nCheck that the RHS of the decl mentions only type variables\nbound on the LHS. For example, this is not ok\n class C a b where\n type F a x :: *\n instance C (p,q) r where\n type F (p,q) x = (x, r) -- BAD: mentions 'r'\nc.f. Trac #5515\n\nWhat makes it tricky is that the *kind* variable from the class *are*\nin scope (Trac #5862):\n class Category (x :: k -> k -> *) where\n type Ob x :: k -> Constraint\n id :: Ob x a => x a a\n (.) :: (Ob x a, Ob x b, Ob x c) => x b c -> x a b -> x a c\nHere 'k' is in scope in the kind signature even though it's not\nexplicitly mentioned on the LHS of the type Ob declaration.\n\nWe could force you to mention k explicitly, thus\n class Category (x :: k -> k -> *) where\n type Ob (x :: k -> k -> *) :: k -> Constraint\nbut it seems tiresome to do so.\n\n\n%*********************************************************\n%* *\n\\subsection{Contexts and predicates}\n%* *\n%*********************************************************\n\n\\begin{code}\nrnConDeclFields :: HsDocContext -> [ConDeclField RdrName]\n -> RnM ([ConDeclField Name], FreeVars)\nrnConDeclFields doc fields = mapFvRn (rnField doc) fields\n\nrnField :: HsDocContext -> ConDeclField RdrName -> RnM (ConDeclField Name, FreeVars)\nrnField doc (ConDeclField name ty haddock_doc)\n = do { new_name <- lookupLocatedTopBndrRn name\n ; (new_ty, fvs) <- rnLHsType doc ty\n ; new_haddock_doc <- rnMbLHsDoc haddock_doc\n ; return (ConDeclField new_name new_ty new_haddock_doc, fvs) }\n\nrnContext :: HsDocContext -> LHsContext RdrName -> RnM (LHsContext Name, FreeVars)\nrnContext doc (L loc cxt)\n = do { (cxt', fvs) <- rnLHsTypes doc cxt\n ; return (L loc cxt', fvs) }\n\\end{code}\n\n\n%************************************************************************\n%* *\n Fixities and precedence parsing\n%* *\n%************************************************************************\n\n@mkOpAppRn@ 
deals with operator fixities. The argument expressions\nare assumed to be already correctly arranged. It needs the fixities\nrecorded in the OpApp nodes, because fixity info applies to the things\nthe programmer actually wrote, so you can't find it out from the Name.\n\nFurthermore, the second argument is guaranteed not to be another\noperator application. Why? Because the parser parses all\noperator applications left-associatively, EXCEPT negation, which\nwe need to handle specially.\nInfix types are read in a *right-associative* way, so that\n a `op` b `op` c\nis always read in as\n a `op` (b `op` c)\n\nmkHsOpTyRn rearranges where necessary. The two arguments\nhave already been renamed and rearranged. It's made rather tiresome\nby the presence of ->, which is a separate syntactic construct.\n\n\\begin{code}\n---------------\n-- Building (ty1 `op1` (ty21 `op2` ty22))\nmkHsOpTyRn :: (LHsType Name -> LHsType Name -> HsType Name)\n -> Name -> Fixity -> LHsType Name -> LHsType Name\n -> RnM (HsType Name)\n\nmkHsOpTyRn mk1 pp_op1 fix1 ty1 (L loc2 (HsOpTy ty21 (w2, op2) ty22))\n = do { fix2 <- lookupTyFixityRn op2\n ; mk_hs_op_ty mk1 pp_op1 fix1 ty1\n (\\t1 t2 -> HsOpTy t1 (w2, op2) t2)\n (unLoc op2) fix2 ty21 ty22 loc2 }\n\nmkHsOpTyRn mk1 pp_op1 fix1 ty1 (L loc2 (HsFunTy ty21 ty22))\n = mk_hs_op_ty mk1 pp_op1 fix1 ty1\n HsFunTy funTyConName funTyFixity ty21 ty22 loc2\n\nmkHsOpTyRn mk1 _ _ ty1 ty2 -- Default case, no rearrangement\n = return (mk1 ty1 ty2)\n\n---------------\nmk_hs_op_ty :: (LHsType Name -> LHsType Name -> HsType Name)\n -> Name -> Fixity -> LHsType Name\n -> (LHsType Name -> LHsType Name -> HsType Name)\n -> Name -> Fixity -> LHsType Name -> LHsType Name -> SrcSpan\n -> RnM (HsType Name)\nmk_hs_op_ty mk1 op1 fix1 ty1\n mk2 op2 fix2 ty21 ty22 loc2\n | nofix_error = do { precParseErr (op1,fix1) (op2,fix2)\n ; return (mk1 ty1 (L loc2 (mk2 ty21 ty22))) }\n | associate_right = return (mk1 ty1 (L loc2 (mk2 ty21 ty22)))\n | otherwise = do { -- Rearrange to ((ty1 
`op1` ty21) `op2` ty22)\n new_ty <- mkHsOpTyRn mk1 op1 fix1 ty1 ty21\n ; return (mk2 (noLoc new_ty) ty22) }\n where\n (nofix_error, associate_right) = compareFixity fix1 fix2\n\n\n---------------------------\nmkOpAppRn :: LHsExpr Name -- Left operand; already rearranged\n -> LHsExpr Name -> Fixity -- Operator and fixity\n -> LHsExpr Name -- Right operand (not an OpApp, but might\n -- be a NegApp)\n -> RnM (HsExpr Name)\n\n-- (e11 `op1` e12) `op2` e2\nmkOpAppRn e1@(L _ (OpApp e11 op1 fix1 e12)) op2 fix2 e2\n | nofix_error\n = do precParseErr (get_op op1,fix1) (get_op op2,fix2)\n return (OpApp e1 op2 fix2 e2)\n\n | associate_right = do\n new_e <- mkOpAppRn e12 op2 fix2 e2\n return (OpApp e11 op1 fix1 (L loc' new_e))\n where\n loc'= combineLocs e12 e2\n (nofix_error, associate_right) = compareFixity fix1 fix2\n\n---------------------------\n-- (- neg_arg) `op` e2\nmkOpAppRn e1@(L _ (NegApp neg_arg neg_name)) op2 fix2 e2\n | nofix_error\n = do precParseErr (negateName,negateFixity) (get_op op2,fix2)\n return (OpApp e1 op2 fix2 e2)\n\n | associate_right\n = do new_e <- mkOpAppRn neg_arg op2 fix2 e2\n return (NegApp (L loc' new_e) neg_name)\n where\n loc' = combineLocs neg_arg e2\n (nofix_error, associate_right) = compareFixity negateFixity fix2\n\n---------------------------\n-- e1 `op` - neg_arg\nmkOpAppRn e1 op1 fix1 e2@(L _ (NegApp _ _)) -- NegApp can occur on the right\n | not associate_right -- We *want* right association\n = do precParseErr (get_op op1, fix1) (negateName, negateFixity)\n return (OpApp e1 op1 fix1 e2)\n where\n (_, associate_right) = compareFixity fix1 negateFixity\n\n---------------------------\n-- Default case\nmkOpAppRn e1 op fix e2 -- Default case, no rearrangement\n = ASSERT2( right_op_ok fix (unLoc e2),\n ppr e1 $$ text \"---\" $$ ppr op $$ text \"---\" $$ ppr fix $$ text \"---\" $$ ppr e2\n )\n return (OpApp e1 op fix e2)\n\n----------------------------\nget_op :: LHsExpr Name -> Name\nget_op (L _ (HsVar n)) = n\nget_op other = pprPanic 
\"get_op\" (ppr other)\n\n-- Parser left-associates everything, but\n-- derived instances may have correctly-associated things to\n-- in the right operarand. So we just check that the right operand is OK\nright_op_ok :: Fixity -> HsExpr Name -> Bool\nright_op_ok fix1 (OpApp _ _ fix2 _)\n = not error_please && associate_right\n where\n (error_please, associate_right) = compareFixity fix1 fix2\nright_op_ok _ _\n = True\n\n-- Parser initially makes negation bind more tightly than any other operator\n-- And \"deriving\" code should respect this (use HsPar if not)\nmkNegAppRn :: LHsExpr id -> SyntaxExpr id -> RnM (HsExpr id)\nmkNegAppRn neg_arg neg_name\n = ASSERT( not_op_app (unLoc neg_arg) )\n return (NegApp neg_arg neg_name)\n\nnot_op_app :: HsExpr id -> Bool\nnot_op_app (OpApp _ _ _ _) = False\nnot_op_app _ = True\n\n---------------------------\nmkOpFormRn :: LHsCmdTop Name -- Left operand; already rearranged\n -> LHsExpr Name -> Fixity -- Operator and fixity\n -> LHsCmdTop Name -- Right operand (not an infix)\n -> RnM (HsCmd Name)\n\n-- (e11 `op1` e12) `op2` e2\nmkOpFormRn a1@(L loc (HsCmdTop (L _ (HsCmdArrForm op1 (Just fix1) [a11,a12])) _ _ _))\n op2 fix2 a2\n | nofix_error\n = do precParseErr (get_op op1,fix1) (get_op op2,fix2)\n return (HsCmdArrForm op2 (Just fix2) [a1, a2])\n\n | associate_right\n = do new_c <- mkOpFormRn a12 op2 fix2 a2\n return (HsCmdArrForm op1 (Just fix1)\n [a11, L loc (HsCmdTop (L loc new_c) placeHolderType placeHolderType [])])\n -- TODO: locs are wrong\n where\n (nofix_error, associate_right) = compareFixity fix1 fix2\n\n-- Default case\nmkOpFormRn arg1 op fix arg2 -- Default case, no rearrangment\n = return (HsCmdArrForm op (Just fix) [arg1, arg2])\n\n\n--------------------------------------\nmkConOpPatRn :: Located Name -> Fixity -> LPat Name -> LPat Name\n -> RnM (Pat Name)\n\nmkConOpPatRn op2 fix2 p1@(L loc (ConPatIn op1 (InfixCon p11 p12))) p2\n = do { fix1 <- lookupFixityRn (unLoc op1)\n ; let (nofix_error, associate_right) = 
compareFixity fix1 fix2\n\n ; if nofix_error then do\n { precParseErr (unLoc op1,fix1) (unLoc op2,fix2)\n ; return (ConPatIn op2 (InfixCon p1 p2)) }\n\n else if associate_right then do\n { new_p <- mkConOpPatRn op2 fix2 p12 p2\n ; return (ConPatIn op1 (InfixCon p11 (L loc new_p))) } -- XXX loc right?\n else return (ConPatIn op2 (InfixCon p1 p2)) }\n\nmkConOpPatRn op _ p1 p2 -- Default case, no rearrangement\n = ASSERT( not_op_pat (unLoc p2) )\n return (ConPatIn op (InfixCon p1 p2))\n\nnot_op_pat :: Pat Name -> Bool\nnot_op_pat (ConPatIn _ (InfixCon _ _)) = False\nnot_op_pat _ = True\n\n--------------------------------------\ncheckPrecMatch :: Name -> MatchGroup Name body -> RnM ()\n -- Check precedence of a function binding written infix\n -- eg a `op` b `C` c = ...\n -- See comments with rnExpr (OpApp ...) about \"deriving\"\n\ncheckPrecMatch op (MG { mg_alts = ms })\n = mapM_ check ms\n where\n check (L _ (Match (L l1 p1 : L l2 p2 :_) _ _))\n = setSrcSpan (combineSrcSpans l1 l2) $\n do checkPrec op p1 False\n checkPrec op p2 True\n\n check _ = return ()\n -- This can happen. Consider\n -- a `op` True = ...\n -- op = ...\n -- The infix flag comes from the first binding of the group\n -- but the second eqn has no args (an error, but not discovered\n -- until the type checker). 
So we don't want to crash on the\n -- second eqn.\n\ncheckPrec :: Name -> Pat Name -> Bool -> IOEnv (Env TcGblEnv TcLclEnv) ()\ncheckPrec op (ConPatIn op1 (InfixCon _ _)) right = do\n op_fix@(Fixity op_prec op_dir) <- lookupFixityRn op\n op1_fix@(Fixity op1_prec op1_dir) <- lookupFixityRn (unLoc op1)\n let\n inf_ok = op1_prec > op_prec ||\n (op1_prec == op_prec &&\n (op1_dir == InfixR && op_dir == InfixR && right ||\n op1_dir == InfixL && op_dir == InfixL && not right))\n\n info = (op, op_fix)\n info1 = (unLoc op1, op1_fix)\n (infol, infor) = if right then (info, info1) else (info1, info)\n unless inf_ok (precParseErr infol infor)\n\ncheckPrec _ _ _\n = return ()\n\n-- Check precedence of (arg op) or (op arg) respectively\n-- If arg is itself an operator application, then either\n-- (a) its precedence must be higher than that of op\n-- (b) its precedence & associativity must be the same as that of op\ncheckSectionPrec :: FixityDirection -> HsExpr RdrName\n -> LHsExpr Name -> LHsExpr Name -> RnM ()\ncheckSectionPrec direction section op arg\n = case unLoc arg of\n OpApp _ op fix _ -> go_for_it (get_op op) fix\n NegApp _ _ -> go_for_it negateName negateFixity\n _ -> return ()\n where\n op_name = get_op op\n go_for_it arg_op arg_fix@(Fixity arg_prec assoc) = do\n op_fix@(Fixity op_prec _) <- lookupFixityRn op_name\n unless (op_prec < arg_prec\n || (op_prec == arg_prec && direction == assoc))\n (sectionPrecErr (op_name, op_fix)\n (arg_op, arg_fix) section)\n\\end{code}\n\nPrecedence-related error messages\n\n\\begin{code}\nprecParseErr :: (Name, Fixity) -> (Name, Fixity) -> RnM ()\nprecParseErr op1@(n1,_) op2@(n2,_)\n | isUnboundName n1 || isUnboundName n2\n = return () -- Avoid error cascade\n | otherwise\n = addErr $ hang (ptext (sLit \"Precedence parsing error\"))\n 4 (hsep [ptext (sLit \"cannot mix\"), ppr_opfix op1, ptext (sLit \"and\"),\n ppr_opfix op2,\n ptext (sLit \"in the same infix expression\")])\n\nsectionPrecErr :: (Name, Fixity) -> (Name, Fixity) -> 
HsExpr RdrName -> RnM ()\nsectionPrecErr op@(n1,_) arg_op@(n2,_) section\n | isUnboundName n1 || isUnboundName n2\n = return () -- Avoid error cascade\n | otherwise\n = addErr $ vcat [ptext (sLit \"The operator\") <+> ppr_opfix op <+> ptext (sLit \"of a section\"),\n nest 4 (sep [ptext (sLit \"must have lower precedence than that of the operand,\"),\n nest 2 (ptext (sLit \"namely\") <+> ppr_opfix arg_op)]),\n nest 4 (ptext (sLit \"in the section:\") <+> quotes (ppr section))]\n\nppr_opfix :: (Name, Fixity) -> SDoc\nppr_opfix (op, fixity) = pp_op <+> brackets (ppr fixity)\n where\n pp_op | op == negateName = ptext (sLit \"prefix `-'\")\n | otherwise = quotes (ppr op)\n\\end{code}\n\n%*********************************************************\n%* *\n\\subsection{Errors}\n%* *\n%*********************************************************\n\n\\begin{code}\nwarnUnusedForAlls :: SDoc -> LHsTyVarBndrs RdrName -> [RdrName] -> TcM ()\nwarnUnusedForAlls in_doc bound mentioned_rdrs\n = whenWOptM Opt_WarnUnusedMatches $\n mapM_ add_warn bound_but_not_used\n where\n bound_names = hsLTyVarLocNames bound\n bound_but_not_used = filterOut ((`elem` mentioned_rdrs) . 
unLoc) bound_names\n\n add_warn (L loc tv)\n = addWarnAt loc $\n vcat [ ptext (sLit \"Unused quantified type variable\") <+> quotes (ppr tv)\n , in_doc ]\n\nopTyErr :: RdrName -> HsType RdrName -> SDoc\nopTyErr op ty@(HsOpTy ty1 _ _)\n = hang (ptext (sLit \"Illegal operator\") <+> quotes (ppr op) <+> ptext (sLit \"in type\") <+> quotes (ppr ty))\n 2 extra\n where\n extra | op == dot_tv_RDR && forall_head ty1\n = perhapsForallMsg\n | otherwise\n = ptext (sLit \"Use TypeOperators to allow operators in types\")\n\n forall_head (L _ (HsTyVar tv)) = tv == forall_tv_RDR\n forall_head (L _ (HsAppTy ty _)) = forall_head ty\n forall_head _other = False\nopTyErr _ ty = pprPanic \"opTyErr: Not an op\" (ppr ty)\n\\end{code}\n\n%************************************************************************\n%*                                                                      *\n Finding the free type variables of a (HsType RdrName)\n%*                                                                      *\n%************************************************************************\n\n\nNote [Kind and type-variable binders]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIn a type signature we may implicitly bind type variables and, more\nrecently, kind variables. 
For example:\n * f :: a -> a\n f = ...\n Here we need to find the free type variables of (a -> a),\n so that we know what to quantify\n\n * class C (a :: k) where ...\n This binds 'k' in ..., as well as 'a'\n\n * f (x :: a -> [a]) = ....\n Here we bind 'a' in ....\n\n * f (x :: T a -> T (b :: k)) = ...\n Here we bind both 'a' and the kind variable 'k'\n\n * type instance F (T (a :: Maybe k)) = ...a...k...\n Here we want to constrain the kind of 'a', and bind 'k'.\n\nIn general we want to walk over a type, and find\n * Its free type variables\n * The free kind variables of any kind signatures in the type\n\nHence we return a pair (kind-vars, type vars)\nSee also Note [HsBSig binder lists] in HsTypes\n\n\\begin{code}\ntype FreeKiTyVars = ([RdrName], [RdrName])\n\nfilterInScope :: LocalRdrEnv -> FreeKiTyVars -> FreeKiTyVars\nfilterInScope rdr_env (kvs, tvs)\n = (filterOut in_scope kvs, filterOut in_scope tvs)\n where\n in_scope tv = tv `elemLocalRdrEnv` rdr_env\n\nextractHsTyRdrTyVars :: LHsType RdrName -> FreeKiTyVars\n-- extractHsTyRdrNames finds the free (kind, type) variables of a HsType\n-- or the free (sort, kind) variables of a HsKind\n-- It's used when making the for-alls explicit.\n-- See Note [Kind and type-variable binders]\nextractHsTyRdrTyVars ty\n = case extract_lty ty ([],[]) of\n (kvs, tvs) -> (nub kvs, nub tvs)\n\nextractHsTysRdrTyVars :: [LHsType RdrName] -> FreeKiTyVars\n-- See Note [Kind and type-variable binders]\nextractHsTysRdrTyVars ty\n = case extract_ltys ty ([],[]) of\n (kvs, tvs) -> (nub kvs, nub tvs)\n\nextractRdrKindSigVars :: Maybe (LHsKind RdrName) -> [RdrName]\nextractRdrKindSigVars Nothing = []\nextractRdrKindSigVars (Just k) = nub (fst (extract_lkind k ([],[])))\n\nextractDataDefnKindVars :: HsDataDefn RdrName -> [RdrName]\n-- Get the scoped kind variables mentioned free in the constructor decls\n-- Eg data T a = T1 (S (a :: k) | forall (b::k). 
T2 (S b)\n-- Here k should scope over the whole definition\nextractDataDefnKindVars (HsDataDefn { dd_ctxt = ctxt, dd_kindSig = ksig\n , dd_cons = cons, dd_derivs = derivs })\n = fst $ extract_lctxt ctxt $\n extract_mb extract_lkind ksig $\n extract_mb extract_ltys derivs $\n foldr (extract_con . unLoc) ([],[]) cons\n where\n extract_con (ConDecl { con_res = ResTyGADT {} }) acc = acc\n extract_con (ConDecl { con_res = ResTyH98, con_qvars = qvs\n , con_cxt = ctxt, con_details = details }) acc\n = extract_hs_tv_bndrs qvs acc $\n extract_lctxt ctxt $\n extract_ltys (hsConDeclArgTys details) ([],[])\n\n\nextract_lctxt :: LHsContext RdrName -> FreeKiTyVars -> FreeKiTyVars\nextract_lctxt ctxt = extract_ltys (unLoc ctxt)\n\nextract_ltys :: [LHsType RdrName] -> FreeKiTyVars -> FreeKiTyVars\nextract_ltys tys acc = foldr extract_lty acc tys\n\nextract_mb :: (a -> FreeKiTyVars -> FreeKiTyVars) -> Maybe a -> FreeKiTyVars -> FreeKiTyVars\nextract_mb _ Nothing acc = acc\nextract_mb f (Just x) acc = f x acc\n\nextract_lkind :: LHsType RdrName -> FreeKiTyVars -> FreeKiTyVars\nextract_lkind kind (acc_kvs, acc_tvs) = case extract_lty kind ([], acc_kvs) of\n (_, res_kvs) -> (res_kvs, acc_tvs)\n -- Kinds shouldn't have sort signatures!\n\nextract_lty :: LHsType RdrName -> FreeKiTyVars -> FreeKiTyVars\nextract_lty (L _ ty) acc\n = case ty of\n HsTyVar tv -> extract_tv tv acc\n HsBangTy _ ty -> extract_lty ty acc\n HsRecTy flds -> foldr (extract_lty . 
cd_fld_type) acc flds\n HsAppTy ty1 ty2 -> extract_lty ty1 (extract_lty ty2 acc)\n HsListTy ty -> extract_lty ty acc\n HsPArrTy ty -> extract_lty ty acc\n HsTupleTy _ tys -> extract_ltys tys acc\n HsFunTy ty1 ty2 -> extract_lty ty1 (extract_lty ty2 acc)\n HsIParamTy _ ty -> extract_lty ty acc\n HsEqTy ty1 ty2 -> extract_lty ty1 (extract_lty ty2 acc)\n HsOpTy ty1 (_, (L _ tv)) ty2 -> extract_tv tv (extract_lty ty1 (extract_lty ty2 acc))\n HsParTy ty -> extract_lty ty acc\n HsCoreTy {} -> acc -- The type is closed\n HsQuasiQuoteTy {} -> acc -- Quasi quotes mention no type variables\n HsSpliceTy {} -> acc -- Type splices mention no type variables\n HsDocTy ty _ -> extract_lty ty acc\n HsExplicitListTy _ tys -> extract_ltys tys acc\n HsExplicitTupleTy _ tys -> extract_ltys tys acc\n HsTyLit _ -> acc\n HsWrapTy _ _ -> panic \"extract_lty\"\n HsKindSig ty ki -> extract_lty ty (extract_lkind ki acc)\n HsForAllTy _ tvs cx ty -> extract_hs_tv_bndrs tvs acc $\n extract_lctxt cx $\n extract_lty ty ([],[])\n\nextract_hs_tv_bndrs :: LHsTyVarBndrs RdrName -> FreeKiTyVars\n -> FreeKiTyVars -> FreeKiTyVars\nextract_hs_tv_bndrs (HsQTvs { hsq_tvs = tvs })\n (acc_kvs, acc_tvs) -- Note accumulator comes first\n (body_kvs, body_tvs)\n | null tvs\n = (body_kvs ++ acc_kvs, body_tvs ++ acc_tvs)\n | otherwise\n = (acc_kvs ++ filterOut (`elem` local_kvs) body_kvs,\n acc_tvs ++ filterOut (`elem` local_tvs) body_tvs)\n where\n local_tvs = map hsLTyVarName tvs\n (_, local_kvs) = foldr extract_lty ([], []) [k | L _ (KindedTyVar _ k) <- tvs]\n -- These kind variables are bound here if not bound further out\n\nextract_tv :: RdrName -> FreeKiTyVars -> FreeKiTyVars\nextract_tv tv acc\n | isRdrTyVar tv = case acc of (kvs,tvs) -> (kvs, tv : tvs)\n | otherwise = acc\n\\end{code}\n","avg_line_length":40.2161060143,"max_line_length":109,"alphanum_fraction":0.5963195782} {"size":4024,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"% Filename: Main.lhs\n% Version : 1.4\n% Date : 
3\/2\/92\n\n\\section{The Main Program.}\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%M O D U L E%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\\begin{code}\n{-# LANGUAGE CPP #-}\nmodule Main(main) where\n\\end{code}\n%%%%%%%%%%%%%%%%%% I M P O R T S \/ T Y P E D E F S %%%%%%%%%%%%%%\n\\begin{code}\nimport ChessSetList (Tile) -- partain:for hbc\nimport KnightHeuristic\nimport Queue\nimport Control.Monad\nimport System.Environment\nimport Data.Char\n\n#define fail ioError\n\n\\end{code}\n\n%%%%%%%%%%%%%%%%%%%%% B O D Y O F M O D U L E %%%%%%%%%%%%%%%%%%%%%\nThe value of a Haskell program is the value of the identifier @main@ in the\nmodule @Main@, and @main@ must have type @[Response] -> [Request]@. Any\nfunction having this type is an I\/O request that\ncommunicates with the outside world via {\\em streams} of messages. Since\nHaskell is a lazy language, we have the strange anomaly that the result of an\nI\/O operation is a list of @Requests@'s that is generated by a list of\n@Response@'s. This characteristic is due to laziness because forcing\nevaluation on the $n^{th}$ @Response@, will in turn force evaluation on the\n$n^{th}$ request - enabling I\/O to occur.\n\nThe @main@ function below uses the continuation style of I\/O. Its purpose is\nto read two numbers off the command line, and print out $x$ solutions to\nthe knights tour with a board of size $y$; where $x$ and $y$ represent\nthe first and second command line option respectively.\n\n\\begin{code}\nmain:: IO ()\nmain=replicateM_ 100 $ getArgs >>= \\ss ->\n if (argsOk ss) then\n print (length (printTour ss))\n else\n fail (userError usageString)\n where\n usageString= \"\\nUsage: knights \\n\"\n\targsOk ss = (length ss == 2) && (foldr ((&&) . all_digits) True ss)\n\tall_digits s = foldr ((&&) . 
isDigit) True s\n\nprintTour::[[Char]] -> [Char]\nprintTour ss\n = pp (take number (depthSearch (root size) grow isFinished))\n where\n [size,number] = map (strToInt 0) ss\n\tstrToInt y [] = y\n\tstrToInt y (x:xs) = strToInt (10*y+(fromEnum x - fromEnum '0')) xs\n\tpp []\t\t = []\n\tpp ((x,y):xs) = \"\\nKnights tour with \" ++ (show x) ++\n\t \t \" backtracking moves\\n\" ++ (show y) ++\n\t\t\t (pp xs)\n\ngrow::(Int,ChessSet) -> [(Int,ChessSet)]\ngrow (x,y) = zip [(x+1),(x+1)..] (descendents y)\n\nisFinished::(Int,ChessSet) -> Bool\nisFinished (x,y) = tourFinished y\n\nroot::Int -> Queue (Int,ChessSet)\nroot sze= addAllFront\n (zip [-(sze*sze)+1,-(sze*sze)+1..]\n\t (zipWith\n\t\t startTour\n\t\t [(x,y) | x<-[1..sze], y<-[1..sze]]\n\t\t (take (sze*sze) [sze,sze..])))\n createQueue\n\\end{code}\n\n\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\nThe knights tour proceeds by applying a depth first search on the combinatorial\nsearch space. The higher order function @depthSearch@ applies this search\nstrategy, and returns a stream of valid search nodes. 
The arguments\nto this function are :\n\\begin{itemize}\n\\item A {\\tt Queue} of all the possible starting positions of the search.\n\\item A function that given a node in the search returns a list of valid\n nodes that are reachable from the parent node (the descendents).\n\\item\tA function that determines if a given node in the search space is\n a valid solution to the problem (i.e. is it a knights tour).\n\\end{itemize}\n\n\\begin{code}\ndepthSearch :: (Eq a) => Queue a -> (a -> [a]) -> (a -> Bool) -> Queue a\ndepthSearch q growFn finFn\n | emptyQueue q = []\n | finFn (inquireFront q) = (inquireFront q):\n\t\t\t (depthSearch (removeFront q) growFn finFn)\n | otherwise \t = (depthSearch\n\t (addAllFront (growFn (inquireFront q))\n\t\t\t\t\t (removeFront q))\n\t \t growFn\n\t finFn)\n\\end{code}\n{\\bf Note :} the above function should be abstracted out into a\nseparate search module, but as depth first search is the only\nrealistic search strategy for the knights tour....\n\n","avg_line_length":36.5818181818,"max_line_length":79,"alphanum_fraction":0.6100894632} {"size":39840,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"%\n% (c) The AQUA Project, Glasgow University, 1993-1998\n%\n\\section[CoreMonad]{The core pipeline monad}\n\n\\begin{code}\n{-# LANGUAGE UndecidableInstances #-}\n\nmodule CoreMonad (\n -- * Configuration of the core-to-core passes\n CoreToDo(..),\n SimplifierMode(..),\n FloatOutSwitches(..),\n getCoreToDo, dumpSimplPhase,\n\n -- * Counting\n SimplCount, doSimplTick, doFreeSimplTick, simplCountN,\n pprSimplCount, plusSimplCount, zeroSimplCount, isZeroSimplCount, Tick(..),\n\n -- * The monad\n CoreM, runCoreM,\n \n -- ** Reading from the monad\n getHscEnv, getRuleBase, getModule,\n getDynFlags, getOrigNameCache,\n \n -- ** Writing to the monad\n addSimplCount,\n \n -- ** Lifting into the monad\n liftIO, liftIOWithCount,\n liftIO1, liftIO2, liftIO3, liftIO4,\n \n -- ** Dealing with annotations\n getAnnotations, 
getFirstAnnotations,\n \n -- ** Debug output\n showPass, endPass, endIteration, dumpIfSet,\n\n -- ** Screen output\n putMsg, putMsgS, errorMsg, errorMsgS, \n fatalErrorMsg, fatalErrorMsgS, \n debugTraceMsg, debugTraceMsgS,\n dumpIfSet_dyn, \n\n#ifdef GHCI\n -- * Getting 'Name's\n thNameToGhcName\n#endif\n ) where\n\n#ifdef GHCI\nimport Name( Name )\n#endif\nimport CoreSyn\nimport PprCore\nimport CoreUtils\nimport CoreLint\t\t( lintCoreBindings )\nimport PrelNames ( iNTERACTIVE )\nimport HscTypes\nimport Module ( Module )\nimport DynFlags\nimport StaticFlags\t\nimport Rules ( RuleBase )\nimport BasicTypes ( CompilerPhase(..) )\nimport Annotations\nimport Id\t\t( Id )\n\nimport IOEnv hiding ( liftIO, failM, failWithM )\nimport qualified IOEnv ( liftIO )\nimport TcEnv ( tcLookupGlobal )\nimport TcRnMonad ( TcM, initTc )\n\nimport Outputable\nimport FastString\nimport qualified ErrUtils as Err\nimport Bag\nimport Maybes\nimport UniqSupply\nimport UniqFM ( UniqFM, mapUFM, filterUFM )\nimport MonadUtils\n\nimport Util\t\t( split )\nimport Data.List\t( intersperse )\nimport Data.Dynamic\nimport Data.IORef\nimport Data.Map (Map)\nimport qualified Data.Map as Map\nimport Data.Word\nimport Control.Monad\n\nimport Prelude hiding ( read )\n\n#ifdef GHCI\nimport {-# SOURCE #-} TcSplice ( lookupThName_maybe )\nimport qualified Language.Haskell.TH as TH\n#endif\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Debug output\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\nThese functions are not CoreM monad stuff, but they probably ought to\nbe, and it makes a convenient place for them. 
They print out\nstuff before and after core passes, and do Core Lint when necessary.\n\n\\begin{code}\nshowPass :: DynFlags -> CoreToDo -> IO ()\nshowPass dflags pass = Err.showPass dflags (showSDoc (ppr pass))\n\nendPass :: DynFlags -> CoreToDo -> [CoreBind] -> [CoreRule] -> IO ()\nendPass dflags pass = dumpAndLint dflags True pass empty (coreDumpFlag pass)\n\n-- Same as endPass but doesn't dump Core even with -dverbose-core2core\nendIteration :: DynFlags -> CoreToDo -> Int -> [CoreBind] -> [CoreRule] -> IO ()\nendIteration dflags pass n\n = dumpAndLint dflags False pass (ptext (sLit \"iteration=\") <> int n)\n (Just Opt_D_dump_simpl_iterations)\n\ndumpIfSet :: Bool -> CoreToDo -> SDoc -> SDoc -> IO ()\ndumpIfSet dump_me pass extra_info doc\n = Err.dumpIfSet dump_me (showSDoc (ppr pass <+> extra_info)) doc\n\ndumpAndLint :: DynFlags -> Bool -> CoreToDo -> SDoc -> Maybe DynFlag\n -> [CoreBind] -> [CoreRule] -> IO ()\n-- The \"show_all\" parameter says to print dump if -dverbose-core2core is on\ndumpAndLint dflags show_all pass extra_info mb_dump_flag binds rules\n = do { -- Report result size if required\n\t -- This has the side effect of forcing the intermediate to be evaluated\n ; Err.debugTraceMsg dflags 2 $\n\t\t(text \" Result size =\" <+> int (coreBindsSize binds))\n\n\t-- Report verbosely, if required\n ; let pass_name = showSDoc (ppr pass <+> extra_info)\n dump_doc = pprCoreBindings binds \n $$ ppUnless (null rules) pp_rules\n\n ; case mb_dump_flag of\n Nothing -> return ()\n Just dump_flag -> Err.dumpIfSet_dyn_or dflags dump_flags pass_name dump_doc\n where\n dump_flags | show_all = [dump_flag, Opt_D_verbose_core2core]\n\t\t \t | otherwise = [dump_flag] \n\n\t-- Type check\n ; when (dopt Opt_DoCoreLinting dflags) $\n do { let (warns, errs) = lintCoreBindings binds\n ; Err.showPass dflags (\"Core Linted result of \" ++ pass_name)\n ; displayLintResults dflags pass warns errs binds } }\n where\n pp_rules = vcat [ blankLine\n , ptext (sLit \"------ Local 
rules for imported ids --------\")\n , pprRules rules ]\n\ndisplayLintResults :: DynFlags -> CoreToDo\n -> Bag Err.Message -> Bag Err.Message -> [CoreBind]\n -> IO ()\ndisplayLintResults dflags pass warns errs binds\n | not (isEmptyBag errs)\n = do { printDump (vcat [ banner \"errors\", Err.pprMessageBag errs\n\t\t\t , ptext (sLit \"*** Offending Program ***\")\n\t\t\t , pprCoreBindings binds\n\t\t\t , ptext (sLit \"*** End of Offense ***\") ])\n ; Err.ghcExit dflags 1 }\n\n | not (isEmptyBag warns)\n , not (case pass of { CoreDesugar -> True; _ -> False })\n \t-- Suppress warnings after desugaring pass because some\n\t-- are legitimate. Notably, the desugarer generates instance\n\t-- methods with INLINE pragmas that form a mutually recursive\n\t-- group. Only after a round of simplification are they unravelled.\n , not opt_NoDebugOutput\n , showLintWarnings pass\n = printDump (banner \"warnings\" $$ Err.pprMessageBag warns)\n\n | otherwise = return ()\n where\n banner string = ptext (sLit \"*** Core Lint\") <+> text string \n <+> ptext (sLit \": in result of\") <+> ppr pass\n <+> ptext (sLit \"***\")\n\nshowLintWarnings :: CoreToDo -> Bool\n-- Disable Lint warnings on the first simplifier pass, because\n-- there may be some INLINE knots still tied, which is tiresomely noisy\nshowLintWarnings (CoreDoSimplify _ (SimplMode { sm_phase = InitialPhase })) = False\nshowLintWarnings _ = True\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n The CoreToDo type and related types\n\t Abstraction of core-to-core passes to run.\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\ndata CoreToDo -- These are diff core-to-core passes,\n -- which may be invoked in any order,\n -- as many times as you like.\n\n = CoreDoSimplify -- The core-to-core simplifier.\n Int -- Max iterations\n SimplifierMode\n\n | CoreDoFloatInwards\n | CoreDoFloatOutwards 
FloatOutSwitches\n | CoreLiberateCase\n | CoreDoPrintCore\n | CoreDoStaticArgs\n | CoreDoStrictness\n | CoreDoWorkerWrapper\n | CoreDoSpecialising\n | CoreDoSpecConstr\n | CoreDoGlomBinds\n | CoreCSE\n | CoreDoRuleCheck CompilerPhase String -- Check for non-application of rules\n -- matching this string\n | CoreDoVectorisation\n | CoreDoNothing -- Useful when building up\n | CoreDoPasses [CoreToDo] -- lists of these things\n\n | CoreDesugar\t -- Not strictly a core-to-core pass, but produces\n -- Core output, and hence useful to pass to endPass\n\n | CoreTidy\n | CorePrep\n\ncoreDumpFlag :: CoreToDo -> Maybe DynFlag\ncoreDumpFlag (CoreDoSimplify {}) = Just Opt_D_dump_simpl_phases\ncoreDumpFlag CoreDoFloatInwards = Just Opt_D_verbose_core2core\ncoreDumpFlag (CoreDoFloatOutwards {}) = Just Opt_D_verbose_core2core\ncoreDumpFlag CoreLiberateCase = Just Opt_D_verbose_core2core\ncoreDumpFlag CoreDoStaticArgs \t = Just Opt_D_verbose_core2core\ncoreDumpFlag CoreDoStrictness \t = Just Opt_D_dump_stranal\ncoreDumpFlag CoreDoWorkerWrapper = Just Opt_D_dump_worker_wrapper\ncoreDumpFlag CoreDoSpecialising = Just Opt_D_dump_spec\ncoreDumpFlag CoreDoSpecConstr = Just Opt_D_dump_spec\ncoreDumpFlag CoreCSE = Just Opt_D_dump_cse \ncoreDumpFlag CoreDoVectorisation = Just Opt_D_dump_vect\ncoreDumpFlag CoreDesugar = Just Opt_D_dump_ds \ncoreDumpFlag CoreTidy = Just Opt_D_dump_simpl\ncoreDumpFlag CorePrep = Just Opt_D_dump_prep\n\ncoreDumpFlag CoreDoPrintCore = Nothing\ncoreDumpFlag (CoreDoRuleCheck {}) = Nothing\ncoreDumpFlag CoreDoNothing = Nothing\ncoreDumpFlag CoreDoGlomBinds = Nothing\ncoreDumpFlag (CoreDoPasses {}) = Nothing\n\ninstance Outputable CoreToDo where\n ppr (CoreDoSimplify n md) = ptext (sLit \"Simplifier\")\n <+> ppr md\n <+> ptext (sLit \"max-iterations=\") <> int n\n ppr CoreDoFloatInwards = ptext (sLit \"Float inwards\")\n ppr (CoreDoFloatOutwards f) = ptext (sLit \"Float out\") <> parens (ppr f)\n ppr CoreLiberateCase = ptext (sLit \"Liberate case\")\n ppr 
CoreDoStaticArgs \t = ptext (sLit \"Static argument\")\n ppr CoreDoStrictness \t = ptext (sLit \"Demand analysis\")\n ppr CoreDoWorkerWrapper = ptext (sLit \"Worker Wrapper binds\")\n ppr CoreDoSpecialising = ptext (sLit \"Specialise\")\n ppr CoreDoSpecConstr = ptext (sLit \"SpecConstr\")\n ppr CoreCSE = ptext (sLit \"Common sub-expression\")\n ppr CoreDoVectorisation = ptext (sLit \"Vectorisation\")\n ppr CoreDesugar = ptext (sLit \"Desugar\")\n ppr CoreTidy = ptext (sLit \"Tidy Core\")\n ppr CorePrep \t\t = ptext (sLit \"CorePrep\")\n ppr CoreDoPrintCore = ptext (sLit \"Print core\")\n ppr (CoreDoRuleCheck {}) = ptext (sLit \"Rule check\")\n ppr CoreDoGlomBinds = ptext (sLit \"Glom binds\")\n ppr CoreDoNothing = ptext (sLit \"CoreDoNothing\")\n ppr (CoreDoPasses {}) = ptext (sLit \"CoreDoPasses\")\n\\end{code}\n\n\\begin{code}\ndata SimplifierMode -- See comments in SimplMonad\n = SimplMode\n { sm_names :: [String] -- Name(s) of the phase\n , sm_phase :: CompilerPhase\n , sm_rules :: Bool -- Whether RULES are enabled\n , sm_inline :: Bool -- Whether inlining is enabled\n , sm_case_case :: Bool -- Whether case-of-case is enabled\n , sm_eta_expand :: Bool -- Whether eta-expansion is enabled\n }\n\ninstance Outputable SimplifierMode where\n ppr (SimplMode { sm_phase = p, sm_names = ss\n , sm_rules = r, sm_inline = i\n , sm_eta_expand = eta, sm_case_case = cc })\n = ptext (sLit \"SimplMode\") <+> braces (\n sep [ ptext (sLit \"Phase =\") <+> ppr p <+>\n brackets (text (concat $ intersperse \",\" ss)) <> comma\n , pp_flag i (sLit \"inline\") <> comma\n , pp_flag r (sLit \"rules\") <> comma\n , pp_flag eta (sLit \"eta-expand\") <> comma\n , pp_flag cc (sLit \"case-of-case\") ])\n\t where\n pp_flag f s = ppUnless f (ptext (sLit \"no\")) <+> ptext s\n\\end{code}\n\n\n\\begin{code}\ndata FloatOutSwitches = FloatOutSwitches {\n floatOutLambdas :: Maybe Int, -- ^ Just n <=> float lambdas to top level, if\n -- doing so will abstract over n or fewer \n -- value 
variables\n\t\t\t\t -- Nothing <=> float all lambdas to top level,\n -- regardless of how many free variables\n -- Just 0 is the vanilla case: float a lambda\n -- iff it has no free vars\n\n floatOutConstants :: Bool, -- ^ True <=> float constants to top level,\n -- even if they do not escape a lambda\n floatOutPartialApplications :: Bool -- ^ True <=> float out partial applications\n -- based on arity information.\n }\ninstance Outputable FloatOutSwitches where\n ppr = pprFloatOutSwitches\n\npprFloatOutSwitches :: FloatOutSwitches -> SDoc\npprFloatOutSwitches sw \n = ptext (sLit \"FOS\") <+> (braces $\n sep $ punctuate comma $ \n [ ptext (sLit \"Lam =\") <+> ppr (floatOutLambdas sw)\n , ptext (sLit \"Consts =\") <+> ppr (floatOutConstants sw)\n , ptext (sLit \"PAPs =\") <+> ppr (floatOutPartialApplications sw) ])\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Generating the main optimisation pipeline\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\ngetCoreToDo :: DynFlags -> [CoreToDo]\ngetCoreToDo dflags\n = core_todo\n where\n opt_level = optLevel dflags\n phases = simplPhases dflags\n max_iter = maxSimplIterations dflags\n rule_check = ruleCheck dflags\n strictness = dopt Opt_Strictness \t\t dflags\n full_laziness = dopt Opt_FullLaziness \t\t dflags\n do_specialise = dopt Opt_Specialise \t\t dflags\n do_float_in = dopt Opt_FloatIn \t\t dflags \n cse = dopt Opt_CSE dflags\n spec_constr = dopt Opt_SpecConstr dflags\n liberate_case = dopt Opt_LiberateCase dflags\n static_args = dopt Opt_StaticArgumentTransformation dflags\n rules_on = dopt Opt_EnableRewriteRules dflags\n eta_expand_on = dopt Opt_DoLambdaEtaExpansion dflags\n\n maybe_rule_check phase = runMaybe rule_check (CoreDoRuleCheck phase)\n\n maybe_strictness_before phase\n = runWhen (phase `elem` strictnessBefore dflags) CoreDoStrictness\n\n base_mode = SimplMode { 
sm_phase = panic \"base_mode\"\n , sm_names = []\n , sm_rules = rules_on\n , sm_eta_expand = eta_expand_on\n , sm_inline = True\n , sm_case_case = True }\n\n simpl_phase phase names iter\n = CoreDoPasses\n [ maybe_strictness_before phase\n , CoreDoSimplify iter\n (base_mode { sm_phase = Phase phase\n , sm_names = names })\n\n , maybe_rule_check (Phase phase)\n ]\n\n vectorisation\n = runWhen (dopt Opt_Vectorise dflags) $\n CoreDoPasses [ simpl_gently, CoreDoVectorisation ]\n\n -- By default, we have 2 phases before phase 0.\n\n -- Want to run with inline phase 2 after the specialiser to give\n -- maximum chance for fusion to work before we inline build\/augment\n -- in phase 1. This made a difference in 'ansi' where an\n -- overloaded function wasn't inlined till too late.\n\n -- Need phase 1 so that build\/augment get\n -- inlined. I found that spectral\/hartel\/genfft lost some useful\n -- strictness in the function sumcode' if augment is not inlined\n -- before strictness analysis runs\n simpl_phases = CoreDoPasses [ simpl_phase phase [\"main\"] max_iter\n | phase <- [phases, phases-1 .. 1] ]\n\n\n -- initial simplify: mk specialiser happy: minimum effort please\n simpl_gently = CoreDoSimplify max_iter\n (base_mode { sm_phase = InitialPhase\n , sm_names = [\"Gentle\"]\n , sm_rules = rules_on -- Note [RULEs enabled in SimplGently]\n , sm_inline = False\n , sm_case_case = False })\n -- Don't do case-of-case transformations.\n -- This makes full laziness work better\n\n core_todo =\n if opt_level == 0 then\n [vectorisation,\n simpl_phase 0 [\"final\"] max_iter]\n else {- opt_level >= 1 -} [\n\n -- We want to do the static argument transform before full laziness as it\n -- may expose extra opportunities to float things outwards. 
However, to fix\n -- up the output of the transformation we need to do at least one simplify\n -- after this before anything else\n runWhen static_args (CoreDoPasses [ simpl_gently, CoreDoStaticArgs ]),\n\n -- We run vectorisation here for now, but we might also try to run\n -- it later\n vectorisation,\n\n -- initial simplify: mk specialiser happy: minimum effort please\n simpl_gently,\n\n -- Specialisation is best done before full laziness\n -- so that overloaded functions have all their dictionary lambdas manifest\n runWhen do_specialise CoreDoSpecialising,\n\n runWhen full_laziness $\n CoreDoFloatOutwards FloatOutSwitches {\n floatOutLambdas = Just 0,\n floatOutConstants = True,\n floatOutPartialApplications = False },\n \t\t-- Was: gentleFloatOutSwitches\t\n --\n\t\t-- I have no idea why, but not floating constants to\n\t\t-- top level is very bad in some cases.\n --\n\t\t-- Notably: p_ident in spectral\/rewrite\n\t\t-- \t Changing from \"gentle\" to \"constantsOnly\"\n\t\t-- \t improved rewrite's allocation by 19%, and\n\t\t-- \t made 0.0% difference to any other nofib\n\t\t-- \t benchmark\n --\n -- Not doing floatOutPartialApplications yet, we'll do\n -- that later on when we've had a chance to get more\n -- accurate arity information. In fact it makes no\n -- difference at all to performance if we do it here,\n -- but maybe we save some unnecessary to-and-fro in\n -- the simplifier.\n\n runWhen do_float_in CoreDoFloatInwards,\n\n simpl_phases,\n\n -- Phase 0: allow all Ids to be inlined now\n -- This gets foldr inlined before strictness analysis\n\n -- At least 3 iterations because otherwise we land up with\n -- huge dead expressions because of an infelicity in the\n -- simplifier.\n -- let k = BIG in foldr k z xs\n -- ==> let k = BIG in letrec go = \\xs -> ...(k x).... in go xs\n -- ==> let k = BIG in letrec go = \\xs -> ...(BIG x).... 
in go xs\n -- Don't stop now!\n simpl_phase 0 [\"main\"] (max max_iter 3),\n\n runWhen strictness (CoreDoPasses [\n CoreDoStrictness,\n CoreDoWorkerWrapper,\n CoreDoGlomBinds,\n simpl_phase 0 [\"post-worker-wrapper\"] max_iter\n ]),\n\n runWhen full_laziness $\n CoreDoFloatOutwards FloatOutSwitches {\n floatOutLambdas = floatLamArgs dflags,\n floatOutConstants = True,\n floatOutPartialApplications = True },\n -- nofib\/spectral\/hartel\/wang doubles in speed if you\n -- do full laziness late in the day. It only happens\n -- after fusion and other stuff, so the early pass doesn't\n -- catch it. For the record, the redex is\n -- f_el22 (f_el21 r_midblock)\n\n\n runWhen cse CoreCSE,\n -- We want CSE to follow the final full-laziness pass, because it may\n -- succeed in commoning up things floated out by full laziness.\n -- CSE used to rely on the no-shadowing invariant, but it doesn't any more\n\n runWhen do_float_in CoreDoFloatInwards,\n\n maybe_rule_check (Phase 0),\n\n -- Case-liberation for -O2. 
This should be after\n -- strictness analysis and the simplification which follows it.\n runWhen liberate_case (CoreDoPasses [\n CoreLiberateCase,\n simpl_phase 0 [\"post-liberate-case\"] max_iter\n ]), -- Run the simplifier after LiberateCase to vastly\n -- reduce the possibility of shadowing\n -- Reason: see Note [Shadowing] in SpecConstr.lhs\n\n runWhen spec_constr CoreDoSpecConstr,\n\n maybe_rule_check (Phase 0),\n\n -- Final clean-up simplification:\n simpl_phase 0 [\"final\"] max_iter\n ]\n\n-- The core-to-core pass ordering is derived from the DynFlags:\nrunWhen :: Bool -> CoreToDo -> CoreToDo\nrunWhen True do_this = do_this\nrunWhen False _ = CoreDoNothing\n\nrunMaybe :: Maybe a -> (a -> CoreToDo) -> CoreToDo\nrunMaybe (Just x) f = f x\nrunMaybe Nothing _ = CoreDoNothing\n\ndumpSimplPhase :: DynFlags -> SimplifierMode -> Bool\ndumpSimplPhase dflags mode\n | Just spec_string <- shouldDumpSimplPhase dflags\n = match_spec spec_string\n | otherwise\n = dopt Opt_D_verbose_core2core dflags\n\n where\n match_spec :: String -> Bool\n match_spec spec_string \n = or $ map (and . map match . split ':') \n $ split ',' spec_string\n\n match :: String -> Bool\n match \"\" = True\n match s = case reads s of\n [(n,\"\")] -> phase_num n\n _ -> phase_name s\n\n phase_num :: Int -> Bool\n phase_num n = case sm_phase mode of\n Phase k -> n == k\n _ -> False\n\n phase_name :: String -> Bool\n phase_name s = s `elem` sm_names mode\n\\end{code}\n\n\nNote [RULEs enabled in SimplGently]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nRULES are enabled when doing \"gentle\" simplification. Two reasons:\n\n * We really want the class-op cancellation to happen:\n op (df d1 d2) --> $cop3 d1 d2\n because this breaks the mutual recursion between 'op' and 'df'\n\n * I wanted the RULE\n lift String ===> ...\n to work in Template Haskell when simplifying\n splices, so we get simpler code for literal strings\n\nBut watch out: list fusion can prevent floating. 
So use phase control\nto switch off those rules until after floating.\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Counting and logging\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\nverboseSimplStats :: Bool\nverboseSimplStats = opt_PprStyle_Debug\t\t-- For now, anyway\n\nzeroSimplCount\t :: DynFlags -> SimplCount\nisZeroSimplCount :: SimplCount -> Bool\npprSimplCount\t :: SimplCount -> SDoc\ndoSimplTick, doFreeSimplTick :: Tick -> SimplCount -> SimplCount\nplusSimplCount :: SimplCount -> SimplCount -> SimplCount\n\\end{code}\n\n\\begin{code}\ndata SimplCount \n = VerySimplCount !Int\t-- Used when don't want detailed stats\n\n | SimplCount\t{\n\tticks :: !Int,\t-- Total ticks\n\tdetails :: !TickCounts,\t-- How many of each type\n\n\tn_log\t:: !Int,\t-- N\n\tlog1\t:: [Tick],\t-- Last N events; <= opt_HistorySize, \n\t\t \t\t-- most recent first\n\tlog2\t:: [Tick]\t-- Last opt_HistorySize events before that\n\t\t \t\t-- Having log1, log2 lets us accumulate the\n\t\t\t\t-- recent history reasonably efficiently\n }\n\ntype TickCounts = Map Tick Int\n\nsimplCountN :: SimplCount -> Int\nsimplCountN (VerySimplCount n) = n\nsimplCountN (SimplCount { ticks = n }) = n\n\nzeroSimplCount dflags\n\t\t-- This is where we decide whether to do\n\t\t-- the VerySimpl version or the full-stats version\n | dopt Opt_D_dump_simpl_stats dflags\n = SimplCount {ticks = 0, details = Map.empty,\n n_log = 0, log1 = [], log2 = []}\n | otherwise\n = VerySimplCount 0\n\nisZeroSimplCount (VerySimplCount n) \t = n==0\nisZeroSimplCount (SimplCount { ticks = n }) = n==0\n\ndoFreeSimplTick tick sc@SimplCount { details = dts } \n = sc { details = dts `addTick` tick }\ndoFreeSimplTick _ sc = sc \n\ndoSimplTick tick sc@SimplCount { ticks = tks, details = dts, n_log = nl, log1 = l1 }\n | nl >= opt_HistorySize = sc1 { n_log = 1, log1 = [tick], log2 = l1 }\n | otherwise\t\t = sc1 
{ n_log = nl+1, log1 = tick : l1 }\n where\n sc1 = sc { ticks = tks+1, details = dts `addTick` tick }\n\ndoSimplTick _ (VerySimplCount n) = VerySimplCount (n+1)\n\n\n-- Don't use Map.unionWith because that's lazy, and we want to \n-- be pretty strict here!\naddTick :: TickCounts -> Tick -> TickCounts\naddTick fm tick = case Map.lookup tick fm of\n\t\t\tNothing -> Map.insert tick 1 fm\n\t\t\tJust n -> n1 `seq` Map.insert tick n1 fm\n\t\t\t\twhere\n\t\t\t\t n1 = n+1\n\n\nplusSimplCount sc1@(SimplCount { ticks = tks1, details = dts1 })\n\t sc2@(SimplCount { ticks = tks2, details = dts2 })\n = log_base { ticks = tks1 + tks2, details = Map.unionWith (+) dts1 dts2 }\n where\n\t-- A hackish way of getting recent log info\n log_base | null (log1 sc2) = sc1\t-- Nothing at all in sc2\n\t | null (log2 sc2) = sc2 { log2 = log1 sc1 }\n\t | otherwise = sc2\n\nplusSimplCount (VerySimplCount n) (VerySimplCount m) = VerySimplCount (n+m)\nplusSimplCount _ _ = panic \"plusSimplCount\"\n -- We use one or the other consistently\n\npprSimplCount (VerySimplCount n) = ptext (sLit \"Total ticks:\") <+> int n\npprSimplCount (SimplCount { ticks = tks, details = dts, log1 = l1, log2 = l2 })\n = vcat [ptext (sLit \"Total ticks: \") <+> int tks,\n\t blankLine,\n\t pprTickCounts (Map.toList dts),\n\t if verboseSimplStats then\n\t\tvcat [blankLine,\n\t\t ptext (sLit \"Log (most recent first)\"),\n\t\t nest 4 (vcat (map ppr l1) $$ vcat (map ppr l2))]\n\t else empty\n ]\n\npprTickCounts :: [(Tick,Int)] -> SDoc\npprTickCounts [] = empty\npprTickCounts ((tick1,n1):ticks)\n = vcat [int tot_n <+> text (tickString tick1),\n\t pprTCDetails real_these,\n\t pprTickCounts others\n ]\n where\n tick1_tag\t\t= tickToTag tick1\n (these, others)\t= span same_tick ticks\n real_these\t\t= (tick1,n1):these\n same_tick (tick2,_) = tickToTag tick2 == tick1_tag\n tot_n\t\t= sum [n | (_,n) <- real_these]\n\npprTCDetails :: [(Tick, Int)] -> SDoc\npprTCDetails ticks\n = nest 4 (vcat [int n <+> pprTickCts tick | (tick,n) 
<- ticks])\n\\end{code}\n\n\n\\begin{code}\ndata Tick\n = PreInlineUnconditionally\tId\n | PostInlineUnconditionally\tId\n\n | UnfoldingDone \t\tId\n | RuleFired\t\t\tFastString\t-- Rule name\n\n | LetFloatFromLet\n | EtaExpansion\t\tId\t-- LHS binder\n | EtaReduction\t\tId\t-- Binder on outer lambda\n | BetaReduction\t\tId\t-- Lambda binder\n\n\n | CaseOfCase\t\t\tId\t-- Bndr on *inner* case\n | KnownBranch\t\t\tId\t-- Case binder\n | CaseMerge\t\t\tId\t-- Binder on outer case\n | AltMerge\t\t\tId\t-- Case binder\n | CaseElim\t\t\tId\t-- Case binder\n | CaseIdentity\t\tId\t-- Case binder\n | FillInCaseDefault\t\tId\t-- Case binder\n\n | BottomFound\t\t\n | SimplifierDone\t\t-- Ticked at each iteration of the simplifier\n\ninstance Outputable Tick where\n ppr tick = text (tickString tick) <+> pprTickCts tick\n\ninstance Eq Tick where\n a == b = case a `cmpTick` b of\n EQ -> True\n _ -> False\n\ninstance Ord Tick where\n compare = cmpTick\n\ntickToTag :: Tick -> Int\ntickToTag (PreInlineUnconditionally _)\t= 0\ntickToTag (PostInlineUnconditionally _)\t= 1\ntickToTag (UnfoldingDone _)\t\t= 2\ntickToTag (RuleFired _)\t\t\t= 3\ntickToTag LetFloatFromLet\t\t= 4\ntickToTag (EtaExpansion _)\t\t= 5\ntickToTag (EtaReduction _)\t\t= 6\ntickToTag (BetaReduction _)\t\t= 7\ntickToTag (CaseOfCase _)\t\t= 8\ntickToTag (KnownBranch _)\t\t= 9\ntickToTag (CaseMerge _)\t\t\t= 10\ntickToTag (CaseElim _)\t\t\t= 11\ntickToTag (CaseIdentity _)\t\t= 12\ntickToTag (FillInCaseDefault _)\t\t= 13\ntickToTag BottomFound\t\t\t= 14\ntickToTag SimplifierDone\t\t= 16\ntickToTag (AltMerge _)\t\t\t= 17\n\ntickString :: Tick -> String\ntickString (PreInlineUnconditionally _)\t= \"PreInlineUnconditionally\"\ntickString (PostInlineUnconditionally _)= \"PostInlineUnconditionally\"\ntickString (UnfoldingDone _)\t\t= \"UnfoldingDone\"\ntickString (RuleFired _)\t\t= \"RuleFired\"\ntickString LetFloatFromLet\t\t= \"LetFloatFromLet\"\ntickString (EtaExpansion _)\t\t= \"EtaExpansion\"\ntickString 
(EtaReduction _)\t\t= \"EtaReduction\"\ntickString (BetaReduction _)\t\t= \"BetaReduction\"\ntickString (CaseOfCase _)\t\t= \"CaseOfCase\"\ntickString (KnownBranch _)\t\t= \"KnownBranch\"\ntickString (CaseMerge _)\t\t= \"CaseMerge\"\ntickString (AltMerge _)\t\t\t= \"AltMerge\"\ntickString (CaseElim _)\t\t\t= \"CaseElim\"\ntickString (CaseIdentity _)\t\t= \"CaseIdentity\"\ntickString (FillInCaseDefault _)\t= \"FillInCaseDefault\"\ntickString BottomFound\t\t\t= \"BottomFound\"\ntickString SimplifierDone\t\t= \"SimplifierDone\"\n\npprTickCts :: Tick -> SDoc\npprTickCts (PreInlineUnconditionally v)\t= ppr v\npprTickCts (PostInlineUnconditionally v)= ppr v\npprTickCts (UnfoldingDone v)\t\t= ppr v\npprTickCts (RuleFired v)\t\t= ppr v\npprTickCts LetFloatFromLet\t\t= empty\npprTickCts (EtaExpansion v)\t\t= ppr v\npprTickCts (EtaReduction v)\t\t= ppr v\npprTickCts (BetaReduction v)\t\t= ppr v\npprTickCts (CaseOfCase v)\t\t= ppr v\npprTickCts (KnownBranch v)\t\t= ppr v\npprTickCts (CaseMerge v)\t\t= ppr v\npprTickCts (AltMerge v)\t\t\t= ppr v\npprTickCts (CaseElim v)\t\t\t= ppr v\npprTickCts (CaseIdentity v)\t\t= ppr v\npprTickCts (FillInCaseDefault v)\t= ppr v\npprTickCts _ \t\t\t= empty\n\ncmpTick :: Tick -> Tick -> Ordering\ncmpTick a b = case (tickToTag a `compare` tickToTag b) of\n\t\tGT -> GT\n\t\tEQ -> cmpEqTick a b\n\t\tLT -> LT\n\ncmpEqTick :: Tick -> Tick -> Ordering\ncmpEqTick (PreInlineUnconditionally a)\t(PreInlineUnconditionally b)\t= a `compare` b\ncmpEqTick (PostInlineUnconditionally a)\t(PostInlineUnconditionally b)\t= a `compare` b\ncmpEqTick (UnfoldingDone a)\t\t(UnfoldingDone b)\t\t= a `compare` b\ncmpEqTick (RuleFired a)\t\t\t(RuleFired b)\t\t\t= a `compare` b\ncmpEqTick (EtaExpansion a)\t\t(EtaExpansion b)\t\t= a `compare` b\ncmpEqTick (EtaReduction a)\t\t(EtaReduction b)\t\t= a `compare` b\ncmpEqTick (BetaReduction a)\t\t(BetaReduction b)\t\t= a `compare` b\ncmpEqTick (CaseOfCase a)\t\t(CaseOfCase b)\t\t\t= a `compare` b\ncmpEqTick (KnownBranch 
a)\t\t(KnownBranch b)\t\t\t= a `compare` b\ncmpEqTick (CaseMerge a)\t\t\t(CaseMerge b)\t\t\t= a `compare` b\ncmpEqTick (AltMerge a)\t\t\t(AltMerge b)\t\t\t= a `compare` b\ncmpEqTick (CaseElim a)\t\t\t(CaseElim b)\t\t\t= a `compare` b\ncmpEqTick (CaseIdentity a)\t\t(CaseIdentity b)\t\t= a `compare` b\ncmpEqTick (FillInCaseDefault a)\t\t(FillInCaseDefault b)\t\t= a `compare` b\ncmpEqTick _ \t\t\t_ \t\t\t\t= EQ\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Monad and carried data structure definitions\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\nnewtype CoreState = CoreState {\n cs_uniq_supply :: UniqSupply\n}\n\ndata CoreReader = CoreReader {\n cr_hsc_env :: HscEnv,\n cr_rule_base :: RuleBase,\n cr_module :: Module\n}\n\ndata CoreWriter = CoreWriter {\n cw_simpl_count :: SimplCount\n}\n\nemptyWriter :: DynFlags -> CoreWriter\nemptyWriter dflags = CoreWriter {\n cw_simpl_count = zeroSimplCount dflags\n }\n\nplusWriter :: CoreWriter -> CoreWriter -> CoreWriter\nplusWriter w1 w2 = CoreWriter {\n cw_simpl_count = (cw_simpl_count w1) `plusSimplCount` (cw_simpl_count w2)\n }\n\ntype CoreIOEnv = IOEnv CoreReader\n\n-- | The monad used by Core-to-Core passes to access common state, register simplification\n-- statistics and so on\nnewtype CoreM a = CoreM { unCoreM :: CoreState -> CoreIOEnv (a, CoreState, CoreWriter) }\n\ninstance Functor CoreM where\n fmap f ma = do\n a <- ma\n return (f a)\n\ninstance Monad CoreM where\n return x = CoreM (\\s -> nop s x)\n mx >>= f = CoreM $ \\s -> do\n (x, s', w1) <- unCoreM mx s\n (y, s'', w2) <- unCoreM (f x) s'\n return (y, s'', w1 `plusWriter` w2)\n\ninstance Applicative CoreM where\n pure = return\n (<*>) = ap\n\n-- For use if the user has imported Control.Monad.Error from MTL\n-- Requires UndecidableInstances\ninstance MonadPlus IO => MonadPlus CoreM where\n mzero = CoreM (const mzero)\n m 
`mplus` n = CoreM (\\rs -> unCoreM m rs `mplus` unCoreM n rs)\n\ninstance MonadUnique CoreM where\n getUniqueSupplyM = do\n us <- getS cs_uniq_supply\n let (us1, us2) = splitUniqSupply us\n modifyS (\\s -> s { cs_uniq_supply = us2 })\n return us1\n\nrunCoreM :: HscEnv\n -> RuleBase\n -> UniqSupply\n -> Module\n -> CoreM a\n -> IO (a, SimplCount)\nrunCoreM hsc_env rule_base us mod m =\n liftM extract $ runIOEnv reader $ unCoreM m state\n where\n reader = CoreReader {\n cr_hsc_env = hsc_env,\n cr_rule_base = rule_base,\n cr_module = mod\n }\n state = CoreState { \n cs_uniq_supply = us\n }\n\n extract :: (a, CoreState, CoreWriter) -> (a, SimplCount)\n extract (value, _, writer) = (value, cw_simpl_count writer)\n\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Core combinators, not exported\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\n\nnop :: CoreState -> a -> CoreIOEnv (a, CoreState, CoreWriter)\nnop s x = do\n r <- getEnv\n return (x, s, emptyWriter $ (hsc_dflags . cr_hsc_env) r)\n\nread :: (CoreReader -> a) -> CoreM a\nread f = CoreM (\\s -> getEnv >>= (\\r -> nop s (f r)))\n\ngetS :: (CoreState -> a) -> CoreM a\ngetS f = CoreM (\\s -> nop s (f s))\n\nmodifyS :: (CoreState -> CoreState) -> CoreM ()\nmodifyS f = CoreM (\\s -> nop (f s) ())\n\nwrite :: CoreWriter -> CoreM ()\nwrite w = CoreM (\\s -> return ((), s, w))\n\n\\end{code}\n\n\\subsection{Lifting IO into the monad}\n\n\\begin{code}\n\n-- | Lift an 'IOEnv' operation into 'CoreM'\nliftIOEnv :: CoreIOEnv a -> CoreM a\nliftIOEnv mx = CoreM (\\s -> mx >>= (\\x -> nop s x))\n\ninstance MonadIO CoreM where\n liftIO = liftIOEnv . 
IOEnv.liftIO\n\n-- | Lift an 'IO' operation into 'CoreM' while consuming its 'SimplCount'\nliftIOWithCount :: IO (SimplCount, a) -> CoreM a\nliftIOWithCount what = liftIO what >>= (\\(count, x) -> addSimplCount count >> return x)\n\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Reader, writer and state accessors\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\n\ngetHscEnv :: CoreM HscEnv\ngetHscEnv = read cr_hsc_env\n\ngetRuleBase :: CoreM RuleBase\ngetRuleBase = read cr_rule_base\n\ngetModule :: CoreM Module\ngetModule = read cr_module\n\naddSimplCount :: SimplCount -> CoreM ()\naddSimplCount count = write (CoreWriter { cw_simpl_count = count })\n\n-- Convenience accessors for useful fields of HscEnv\n\ngetDynFlags :: CoreM DynFlags\ngetDynFlags = fmap hsc_dflags getHscEnv\n\n-- | The original name cache is the current mapping from 'Module' and\n-- 'OccName' to a compiler-wide unique 'Name'\ngetOrigNameCache :: CoreM OrigNameCache\ngetOrigNameCache = do\n nameCacheRef <- fmap hsc_NC getHscEnv\n liftIO $ fmap nsNames $ readIORef nameCacheRef\n\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Dealing with annotations\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\n-- | Get all annotations of a given type. 
This happens lazily, that is,\n-- no deserialization will take place until the [a] is actually demanded and\n-- the [a] can also be empty (the UniqFM is not filtered).\n--\n-- This should be done once at the start of a Core-to-Core pass that uses\n-- annotations.\n--\n-- See Note [Annotations]\ngetAnnotations :: Typeable a => ([Word8] -> a) -> ModGuts -> CoreM (UniqFM [a])\ngetAnnotations deserialize guts = do\n hsc_env <- getHscEnv\n ann_env <- liftIO $ prepareAnnotations hsc_env (Just guts)\n return (deserializeAnns deserialize ann_env)\n\n-- | Get at most one annotation of a given type per Unique.\ngetFirstAnnotations :: Typeable a => ([Word8] -> a) -> ModGuts -> CoreM (UniqFM a)\ngetFirstAnnotations deserialize guts\n = liftM (mapUFM head . filterUFM (not . null))\n $ getAnnotations deserialize guts\n \n\\end{code}\n\nNote [Annotations]\n~~~~~~~~~~~~~~~~~~\nA Core-to-Core pass that wants to make use of annotations calls\ngetAnnotations or getFirstAnnotations at the beginning to obtain a UniqFM with\nannotations of a specific type. This produces all annotations from interface\nfiles read so far. However, annotations from interface files read during the\npass will not be visible until getAnnotations is called again. This is similar\nto how rules work and probably isn't too bad.\n\nThe current implementation could be optimised a bit: when looking up\nannotations for a thing from the HomePackageTable, we could search directly in\nthe module where the thing is defined rather than building one UniqFM which\ncontains all annotations we know of. This would work because annotations can\nonly be given to things defined in the same module. However, since we would\nonly want to deserialise every annotation once, we would have to build a cache\nfor every module in the HPT. 
In the end, it's probably not worth it as long as\nwe aren't using annotations heavily.\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Direct screen output\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\n\nmsg :: (DynFlags -> SDoc -> IO ()) -> SDoc -> CoreM ()\nmsg how doc = do\n dflags <- getDynFlags\n liftIO $ how dflags doc\n\n-- | Output a String message to the screen\nputMsgS :: String -> CoreM ()\nputMsgS = putMsg . text\n\n-- | Output a message to the screen\nputMsg :: SDoc -> CoreM ()\nputMsg = msg Err.putMsg\n\n-- | Output a string error to the screen\nerrorMsgS :: String -> CoreM ()\nerrorMsgS = errorMsg . text\n\n-- | Output an error to the screen\nerrorMsg :: SDoc -> CoreM ()\nerrorMsg = msg Err.errorMsg\n\n-- | Output a fatal string error to the screen. Note this does not by itself cause the compiler to die\nfatalErrorMsgS :: String -> CoreM ()\nfatalErrorMsgS = fatalErrorMsg . text\n\n-- | Output a fatal error to the screen. Note this does not by itself cause the compiler to die\nfatalErrorMsg :: SDoc -> CoreM ()\nfatalErrorMsg = msg Err.fatalErrorMsg\n\n-- | Output a string debugging message at verbosity level of @-v@ or higher\ndebugTraceMsgS :: String -> CoreM ()\ndebugTraceMsgS = debugTraceMsg . text\n\n-- | Outputs a debugging message at verbosity level of @-v@ or higher\ndebugTraceMsg :: SDoc -> CoreM ()\ndebugTraceMsg = msg (flip Err.debugTraceMsg 3)\n\n-- | Show some labelled 'SDoc' if a particular flag is set or at a verbosity level of @-v -ddump-most@ or higher\ndumpIfSet_dyn :: DynFlag -> String -> SDoc -> CoreM ()\ndumpIfSet_dyn flag str = msg (\\dflags -> Err.dumpIfSet_dyn dflags flag str)\n\\end{code}\n\n\\begin{code}\n\ninitTcForLookup :: HscEnv -> TcM a -> IO a\ninitTcForLookup hsc_env = liftM (expectJust \"initTcInteractive\" . snd) . 
initTc hsc_env HsSrcFile False iNTERACTIVE\n\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Finding TyThings\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\ninstance MonadThings CoreM where\n lookupThing name = do\n hsc_env <- getHscEnv\n liftIO $ initTcForLookup hsc_env (tcLookupGlobal name)\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Template Haskell interoperability\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\n#ifdef GHCI\n-- | Attempt to convert a Template Haskell name to one that GHC can\n-- understand. Original TH names such as those you get when you use\n-- the @'foo@ syntax will be translated to their equivalent GHC name\n-- exactly. Qualified or unqualified TH names will be dynamically bound\n-- to names in the module being compiled, if possible. Exact TH names\n-- will be bound to the name they represent, exactly.\nthNameToGhcName :: TH.Name -> CoreM (Maybe Name)\nthNameToGhcName th_name = do\n hsc_env <- getHscEnv\n liftIO $ initTcForLookup hsc_env (lookupThName_maybe th_name)\n#endif\n\\end{code}\n","avg_line_length":35.6989247312,"max_line_length":115,"alphanum_fraction":0.609939759} {"size":1645,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"> {-# LANGUAGE GADTs, DeriveFunctor, RankNTypes #-}\n> module Algebraic where\n> import Data.Functor.Foldable\n\n\n=== Solution with recursion schemes\n\nFirst, the types. 
We define a base expression functor and its corresponding catamorphism\n\n> data ExprF a\n> = Const a\n> | Add a a\n> | Sub a a\n> | Mul a a\n> | Div a a\n> deriving (Functor)\n\nAn expression tree is the fixpoint of the functor\n\n> type Expr = Fix ExprF\n\n-- Lift a catamorphism of a functor to a catamorphism of its fixpoint\n\n-- > cata :: Functor f => (f a -> a) -> (Fix f -> a)\n-- > cata f = f . fmap (cata f) . out\n\nThe evaluator simply takes the deep embedding to the shallow embedding\n\n> eval :: Expr -> Int\n> eval = cata evalF\n> where\n> evalF :: ExprF Int -> Int\n> evalF (Const i) = i\n> evalF (Add x y) = x + y\n> evalF (Sub x y) = x - y\n> evalF (Mul x y) = x * y\n> evalF (Div x y) = x `div` y\n\n'values' extracts all the constants of an expression\n\n> values :: Expr -> [Int]\n> values = cata valuesF\n> where\n> valuesF :: ExprF [Int] -> [Int]\n> valuesF (Const i) = i\n> valuesF (Add x y) = x ++ y\n> valuesF (Sub x y) = x ++ y\n> valuesF (Mul x y) = x ++ y\n> valuesF (Div x y) = x ++ y\n\nThe predicate 'valid' takes an expression (of type `Expr`) and checks\nwhether it evaluates to an integer greater than zero.\n\n> valid :: Expr -> Bool\n> valid = cata validF\n\n> validF :: ExprF Int -> Bool\n> validF (Add _ _) = True\n> validF (Sub x y) = x > y\n> validF (Mul _ _) = True\n> validF (Div x y) = x `mod` y == 0\n\nFinally we fuse the validity check and the evaluation step, such that only valid\nexpressions are evaluated:\n","avg_line_length":25.3076923077,"max_line_length":88,"alphanum_fraction":0.6164133739} {"size":21458,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"%\n% (c) The University of Glasgow 2006\n% (c) The GRASP\/AQUA Project, Glasgow University, 1992-1998\n%\n\n\\begin{code}\n{-# LANGUAGE TypeFamilies #-}\nmodule TrieMap(\n CoreMap, emptyCoreMap, extendCoreMap, lookupCoreMap, foldCoreMap,\n TypeMap, \n CoercionMap, \n MaybeMap, \n ListMap,\n TrieMap(..)\n ) where\n\nimport CoreSyn\nimport Coercion\nimport 
Literal\nimport Name\nimport Type\nimport TypeRep\nimport Var\nimport CostCentre\nimport UniqFM\nimport Unique( Unique )\n\nimport qualified Data.Map as Map\nimport qualified Data.IntMap as IntMap\nimport VarEnv\nimport NameEnv\nimport Outputable\nimport Control.Monad( (>=>) )\n\\end{code}\n\nThis module implements TrieMaps, which are finite mappings\nwhose key is a structured value like a CoreExpr or Type.\n\nThe code is very regular and boilerplate-like, but there is\nsome neat handling of *binders*. In effect they are deBruijn \nnumbered on the fly.\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n The TrieMap class\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\ntype XT a = Maybe a -> Maybe a\t-- How to alter a non-existent elt (Nothing)\n \t \t \t\t-- or an existing elt (Just)\n\nclass TrieMap m where\n type Key m :: *\n emptyTM :: m a\n lookupTM :: forall b. Key m -> m b -> Maybe b\n alterTM :: forall b. Key m -> XT b -> m b -> m b\n\n foldTM :: (a -> b -> b) -> m a -> b -> b\n -- The unusual argument order here makes \n -- it easy to compose calls to foldTM; \n -- see for example fdE below\n\n----------------------\n-- Recall that \n-- Control.Monad.(>=>) :: (a -> Maybe b) -> (b -> Maybe c) -> a -> Maybe c\n\n(>.>) :: (a -> b) -> (b -> c) -> a -> c\n-- Reverse function composition (do f first, then g)\ninfixr 1 >.>\n(f >.> g) x = g (f x)\ninfixr 1 |>, |>>\n\n(|>) :: a -> (a->b) -> b -- Reverse application\nx |> f = f x\n\n----------------------\n(|>>) :: TrieMap m2 \n => (XT (m2 a) -> m1 (m2 a) -> m1 (m2 a))\n -> (m2 a -> m2 a)\n -> m1 (m2 a) -> m1 (m2 a)\n(|>>) f g = f (Just . g . 
deMaybe)\n\ndeMaybe :: TrieMap m => Maybe (m a) -> m a\ndeMaybe Nothing = emptyTM\ndeMaybe (Just m) = m\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n IntMaps\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\ninstance TrieMap IntMap.IntMap where\n type Key IntMap.IntMap = Int\n emptyTM = IntMap.empty\n lookupTM k m = IntMap.lookup k m\n alterTM = xtInt\n foldTM k m z = IntMap.fold k z m\n\nxtInt :: Int -> XT a -> IntMap.IntMap a -> IntMap.IntMap a\nxtInt k f m = IntMap.alter f k m\n\ninstance Ord k => TrieMap (Map.Map k) where\n type Key (Map.Map k) = k\n emptyTM = Map.empty\n lookupTM = Map.lookup\n alterTM k f m = Map.alter f k m\n foldTM k m z = Map.fold k z m\n\ninstance TrieMap UniqFM where\n type Key UniqFM = Unique\n emptyTM = emptyUFM\n lookupTM k m = lookupUFM m k\n alterTM k f m = alterUFM f m k\n foldTM k m z = foldUFM k z m\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Lists\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\nIf m is a map from k -> val\nthen (MaybeMap m) is a map from (Maybe k) -> val\n\n\\begin{code}\ndata MaybeMap m a = MM { mm_nothing :: Maybe a, mm_just :: m a }\n\ninstance TrieMap m => TrieMap (MaybeMap m) where\n type Key (MaybeMap m) = Maybe (Key m)\n emptyTM = MM { mm_nothing = Nothing, mm_just = emptyTM }\n lookupTM = lkMaybe lookupTM\n alterTM = xtMaybe alterTM\n foldTM = fdMaybe \n\nlkMaybe :: TrieMap m => (forall b. k -> m b -> Maybe b)\n -> Maybe k -> MaybeMap m a -> Maybe a\nlkMaybe _ Nothing = mm_nothing\nlkMaybe lk (Just x) = mm_just >.> lk x\n\nxtMaybe :: TrieMap m => (forall b. 
k -> XT b -> m b -> m b)\n -> Maybe k -> XT a -> MaybeMap m a -> MaybeMap m a\nxtMaybe _ Nothing f m = m { mm_nothing = f (mm_nothing m) }\nxtMaybe tr (Just x) f m = m { mm_just = mm_just m |> tr x f }\n\nfdMaybe :: TrieMap m => (a -> b -> b) -> MaybeMap m a -> b -> b\nfdMaybe k m = foldMaybe k (mm_nothing m)\n . foldTM k (mm_just m)\n\n--------------------\ndata ListMap m a\n = LM { lm_nil :: Maybe a\n , lm_cons :: m (ListMap m a) }\n\ninstance TrieMap m => TrieMap (ListMap m) where\n type Key (ListMap m) = [Key m]\n emptyTM = LM { lm_nil = Nothing, lm_cons = emptyTM }\n lookupTM = lkList lookupTM\n alterTM = xtList alterTM\n foldTM = fdList \n\nlkList :: TrieMap m => (forall b. k -> m b -> Maybe b)\n -> [k] -> ListMap m a -> Maybe a\nlkList _ [] = lm_nil\nlkList lk (x:xs) = lm_cons >.> lk x >=> lkList lk xs\n\nxtList :: TrieMap m => (forall b. k -> XT b -> m b -> m b)\n -> [k] -> XT a -> ListMap m a -> ListMap m a\nxtList _ [] f m = m { lm_nil = f (lm_nil m) }\nxtList tr (x:xs) f m = m { lm_cons = lm_cons m |> tr x |>> xtList tr xs f }\n\nfdList :: forall m a b. TrieMap m \n => (a -> b -> b) -> ListMap m a -> b -> b\nfdList k m = foldMaybe k (lm_nil m)\n . 
foldTM (fdList k) (lm_cons m)\n\nfoldMaybe :: (a -> b -> b) -> Maybe a -> b -> b\nfoldMaybe _ Nothing b = b\nfoldMaybe k (Just a) b = k a b\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Basic maps\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\nlkNamed :: NamedThing n => n -> NameEnv a -> Maybe a\nlkNamed n env = lookupNameEnv env (getName n)\n\nxtNamed :: NamedThing n => n -> XT a -> NameEnv a -> NameEnv a\nxtNamed tc f m = alterNameEnv f m (getName tc)\n\n------------------------\ntype LiteralMap a = Map.Map Literal a\n\nemptyLiteralMap :: LiteralMap a\nemptyLiteralMap = emptyTM\n\nlkLit :: Literal -> LiteralMap a -> Maybe a\nlkLit = lookupTM\n\nxtLit :: Literal -> XT a -> LiteralMap a -> LiteralMap a\nxtLit = alterTM\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n CoreMap\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\nNote [Binders]\n~~~~~~~~~~~~~~\n * In general we check binders as late as possible because types are\n less likely to differ than expression structure. 
That's why\n cm_lam :: CoreMap (TypeMap a)\n rather than\n cm_lam :: TypeMap (CoreMap a)\n\n * We don't need to look at the type of some binders, notably\n - the case binder in (Case _ b _ _)\n - the binders in an alternative\n because they are totally fixed by the context\n\n\n\\begin{code}\ndata CoreMap a\n = EmptyCM\n | CM { cm_var :: VarMap a\n , cm_lit :: LiteralMap a\n , cm_co :: CoercionMap a\n , cm_type :: TypeMap a\n , cm_cast :: CoreMap (CoercionMap a)\n , cm_scc :: CoreMap (CostCentreMap a)\n , cm_app :: CoreMap (CoreMap a)\n , cm_lam :: CoreMap (TypeMap a)\n , cm_letn :: CoreMap (CoreMap (BndrMap a))\n , cm_letr :: ListMap CoreMap (CoreMap (ListMap BndrMap a))\n , cm_case :: CoreMap (ListMap AltMap a)\n \t -- Note [Binders]\n }\n\n\nwrapEmptyCM :: CoreMap a\nwrapEmptyCM = CM { cm_var = emptyTM, cm_lit = emptyLiteralMap\n \t\t , cm_co = emptyTM, cm_type = emptyTM\n \t\t , cm_cast = emptyTM, cm_app = emptyTM \n \t\t , cm_lam = emptyTM, cm_letn = emptyTM \n \t\t , cm_letr = emptyTM, cm_case = emptyTM\n , cm_scc = emptyTM } \n\ninstance TrieMap CoreMap where\n type Key CoreMap = CoreExpr\n emptyTM = EmptyCM\n lookupTM = lkE emptyCME\n alterTM = xtE emptyCME\n foldTM = fdE\n\n--------------------------\nlookupCoreMap :: CoreMap a -> CoreExpr -> Maybe a\nlookupCoreMap cm e = lkE emptyCME e cm\n\nextendCoreMap :: CoreMap a -> CoreExpr -> a -> CoreMap a\nextendCoreMap m e v = xtE emptyCME e (\\_ -> Just v) m\n\nfoldCoreMap :: (a -> b -> b) -> b -> CoreMap a -> b\nfoldCoreMap k z m = fdE k m z\n\nemptyCoreMap :: CoreMap a\nemptyCoreMap = EmptyCM\n\ninstance Outputable a => Outputable (CoreMap a) where\n ppr m = text \"CoreMap elts\" <+> ppr (foldCoreMap (:) [] m)\n\n-------------------------\nfdE :: (a -> b -> b) -> CoreMap a -> b -> b\nfdE _ EmptyCM = \\z -> z\nfdE k m \n = foldTM k (cm_var m) \n . foldTM k (cm_lit m)\n . foldTM k (cm_co m)\n . foldTM k (cm_type m)\n . foldTM (foldTM k) (cm_cast m)\n . foldTM (foldTM k) (cm_scc m)\n . 
foldTM (foldTM k) (cm_app m)\n . foldTM (foldTM k) (cm_lam m)\n . foldTM (foldTM (foldTM k)) (cm_letn m)\n . foldTM (foldTM (foldTM k)) (cm_letr m)\n . foldTM (foldTM k) (cm_case m)\n\nlkE :: CmEnv -> CoreExpr -> CoreMap a -> Maybe a\n-- lkE: lookup in trie for expressions\nlkE env expr cm\n | EmptyCM <- cm = Nothing\n | otherwise = go expr cm\n where \n go (Var v) \t = cm_var >.> lkVar env v\n go (Lit l) = cm_lit >.> lkLit l\n go (Type t) \t = cm_type >.> lkT env t\n go (Coercion c) = cm_co >.> lkC env c\n go (Cast e c) = cm_cast >.> lkE env e >=> lkC env c\n go (Note (SCC cc) e) = cm_scc >.> lkE env e >=> lkCC cc\n go (Note _ e) = lkE env e\n go (App e1 e2) = cm_app >.> lkE env e2 >=> lkE env e1\n go (Lam v e) = cm_lam >.> lkE (extendCME env v) e >=> lkBndr env v\n go (Let (NonRec b r) e) = cm_letn >.> lkE env r \n >=> lkE (extendCME env b) e >=> lkBndr env b\n go (Let (Rec prs) e) = let (bndrs,rhss) = unzip prs\n env1 = extendCMEs env bndrs\n in cm_letr\n >.> lkList (lkE env1) rhss >=> lkE env1 e\n >=> lkList (lkBndr env1) bndrs\n go (Case e b _ as) = cm_case >.> lkE env e \n >=> lkList (lkA (extendCME env b)) as\n\nxtE :: CmEnv -> CoreExpr -> XT a -> CoreMap a -> CoreMap a\nxtE env e f EmptyCM = xtE env e f wrapEmptyCM\nxtE env (Var v) f m = m { cm_var = cm_var m |> xtVar env v f }\nxtE env (Type t) \t f m = m { cm_type = cm_type m |> xtT env t f }\nxtE env (Coercion c) f m = m { cm_co = cm_co m |> xtC env c f }\nxtE _ (Lit l) f m = m { cm_lit = cm_lit m |> xtLit l f }\nxtE env (Cast e c) f m = m { cm_cast = cm_cast m |> xtE env e |>>\n xtC env c f }\nxtE env (Note (SCC cc) e) f m = m { cm_scc = cm_scc m |> xtE env e |>> xtCC cc f }\nxtE env (Note _ e) f m = xtE env e f m\nxtE env (App e1 e2) f m = m { cm_app = cm_app m |> xtE env e2 |>> xtE env e1 f }\nxtE env (Lam v e) f m = m { cm_lam = cm_lam m |> xtE (extendCME env v) e\n |>> xtBndr env v f }\nxtE env (Let (NonRec b r) e) f m = m { cm_letn = cm_letn m \n |> xtE (extendCME env b) e \n |>> xtE env r |>> 
xtBndr env b f }\nxtE env (Let (Rec prs) e) f m = m { cm_letr = let (bndrs,rhss) = unzip prs\n env1 = extendCMEs env bndrs\n in cm_letr m \n |> xtList (xtE env1) rhss \n |>> xtE env1 e \n |>> xtList (xtBndr env1) bndrs f }\nxtE env (Case e b _ as) f m = m { cm_case = cm_case m |> xtE env e \n |>> let env1 = extendCME env b\n in xtList (xtA env1) as f }\n\ntype CostCentreMap a = Map.Map CostCentre a\nlkCC :: CostCentre -> CostCentreMap a -> Maybe a\nlkCC = lookupTM\n\nxtCC :: CostCentre -> XT a -> CostCentreMap a -> CostCentreMap a\nxtCC = alterTM\n\n------------------------\ndata AltMap a\t-- A single alternative\n = AM { am_deflt :: CoreMap a\n , am_data :: NameEnv (CoreMap a)\n , am_lit :: LiteralMap (CoreMap a) }\n\ninstance TrieMap AltMap where\n type Key AltMap = CoreAlt\n emptyTM = AM { am_deflt = emptyTM\n , am_data = emptyNameEnv\n , am_lit = emptyLiteralMap }\n lookupTM = lkA emptyCME\n alterTM = xtA emptyCME\n foldTM = fdA\n\nlkA :: CmEnv -> CoreAlt -> AltMap a -> Maybe a\nlkA env (DEFAULT, _, rhs) = am_deflt >.> lkE env rhs\nlkA env (LitAlt lit, _, rhs) = am_lit >.> lkLit lit >=> lkE env rhs\nlkA env (DataAlt dc, bs, rhs) = am_data >.> lkNamed dc >=> lkE (extendCMEs env bs) rhs\n\nxtA :: CmEnv -> CoreAlt -> XT a -> AltMap a -> AltMap a\nxtA env (DEFAULT, _, rhs) f m = m { am_deflt = am_deflt m |> xtE env rhs f }\nxtA env (LitAlt l, _, rhs) f m = m { am_lit = am_lit m |> xtLit l |>> xtE env rhs f }\nxtA env (DataAlt d, bs, rhs) f m = m { am_data = am_data m |> xtNamed d \n |>> xtE (extendCMEs env bs) rhs f }\n\nfdA :: (a -> b -> b) -> AltMap a -> b -> b\nfdA k m = foldTM k (am_deflt m)\n . foldTM (foldTM k) (am_data m)\n . 
foldTM (foldTM k) (am_lit m)\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Coercions\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\ndata CoercionMap a \n = EmptyKM\n | KM { km_refl :: TypeMap a\n , km_tc_app :: NameEnv (ListMap CoercionMap a)\n , km_app :: CoercionMap (CoercionMap a)\n , km_forall :: CoercionMap (TypeMap a)\n , km_var :: VarMap a\n , km_axiom :: NameEnv (ListMap CoercionMap a)\n , km_unsafe :: TypeMap (TypeMap a)\n , km_sym :: CoercionMap a\n , km_trans :: CoercionMap (CoercionMap a)\n , km_nth :: IntMap.IntMap (CoercionMap a)\n , km_inst :: CoercionMap (TypeMap a) }\n\nwrapEmptyKM :: CoercionMap a\nwrapEmptyKM = KM { km_refl = emptyTM, km_tc_app = emptyNameEnv\n , km_app = emptyTM, km_forall = emptyTM\n , km_var = emptyTM, km_axiom = emptyNameEnv\n , km_unsafe = emptyTM, km_sym = emptyTM, km_trans = emptyTM\n , km_nth = emptyTM, km_inst = emptyTM }\n\ninstance TrieMap CoercionMap where\n type Key CoercionMap = Coercion\n emptyTM = EmptyKM\n lookupTM = lkC emptyCME\n alterTM = xtC emptyCME\n foldTM = fdC\n\nlkC :: CmEnv -> Coercion -> CoercionMap a -> Maybe a\nlkC env co m \n | EmptyKM <- m = Nothing\n | otherwise = go co m\n where\n go (Refl ty) = km_refl >.> lkT env ty\n go (TyConAppCo tc cs) = km_tc_app >.> lkNamed tc >=> lkList (lkC env) cs\n go (AxiomInstCo ax cs) = km_axiom >.> lkNamed ax >=> lkList (lkC env) cs\n go (AppCo c1 c2) = km_app >.> lkC env c1 >=> lkC env c2\n go (TransCo c1 c2) = km_trans >.> lkC env c1 >=> lkC env c2\n go (UnsafeCo t1 t2) = km_unsafe >.> lkT env t1 >=> lkT env t2\n go (InstCo c t) = km_inst >.> lkC env c >=> lkT env t\n go (ForAllCo v c) = km_forall >.> lkC (extendCME env v) c >=> lkBndr env v\n go (CoVarCo v) = km_var >.> lkVar env v\n go (SymCo c) = km_sym >.> lkC env c\n go (NthCo n c) = km_nth >.> lookupTM n >=> lkC env c\n\nxtC :: CmEnv -> Coercion -> XT a -> CoercionMap a 
-> CoercionMap a\nxtC env co f EmptyKM = xtC env co f wrapEmptyKM\nxtC env (Refl ty) f m = m { km_refl = km_refl m |> xtT env ty f }\nxtC env (TyConAppCo tc cs) f m = m { km_tc_app = km_tc_app m |> xtNamed tc |>> xtList (xtC env) cs f }\nxtC env (AxiomInstCo ax cs) f m = m { km_axiom = km_axiom m |> xtNamed ax |>> xtList (xtC env) cs f }\nxtC env (AppCo c1 c2) f m = m { km_app = km_app m |> xtC env c1 |>> xtC env c2 f }\nxtC env (TransCo c1 c2) f m = m { km_trans = km_trans m |> xtC env c1 |>> xtC env c2 f }\nxtC env (UnsafeCo t1 t2) f m = m { km_unsafe = km_unsafe m |> xtT env t1 |>> xtT env t2 f }\nxtC env (InstCo c t) f m = m { km_inst = km_inst m |> xtC env c |>> xtT env t f }\nxtC env (ForAllCo v c) f m = m { km_forall = km_forall m |> xtC (extendCME env v) c \n |>> xtBndr env v f }\nxtC env (CoVarCo v) f m = m { km_var \t= km_var m |> xtVar env v f }\nxtC env (SymCo c) f m = m { km_sym \t= km_sym m |> xtC env c f }\nxtC env (NthCo n c) f m = m { km_nth \t= km_nth m |> xtInt n |>> xtC env c f } \n\nfdC :: (a -> b -> b) -> CoercionMap a -> b -> b\nfdC _ EmptyKM = \\z -> z\nfdC k m = foldTM k (km_refl m)\n . foldTM (foldTM k) (km_tc_app m)\n . foldTM (foldTM k) (km_app m)\n . foldTM (foldTM k) (km_forall m)\n . foldTM k (km_var m)\n . foldTM (foldTM k) (km_axiom m)\n . foldTM (foldTM k) (km_unsafe m)\n . foldTM k (km_sym m)\n . foldTM (foldTM k) (km_trans m)\n . foldTM (foldTM k) (km_nth m)\n . 
foldTM (foldTM k) (km_inst m)\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Types\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\ndata TypeMap a\n = EmptyTM\n | TM { tm_var :: VarMap a\n , tm_app :: TypeMap (TypeMap a)\n , tm_fun :: TypeMap (TypeMap a)\n , tm_tc_app :: NameEnv (ListMap TypeMap a)\n , tm_forall :: TypeMap (BndrMap a) }\n\nwrapEmptyTypeMap :: TypeMap a\nwrapEmptyTypeMap = TM { tm_var = emptyTM\n , tm_app = EmptyTM\n , tm_fun = EmptyTM\n , tm_tc_app = emptyNameEnv\n , tm_forall = EmptyTM }\n\ninstance TrieMap TypeMap where\n type Key TypeMap = Type\n emptyTM = EmptyTM\n lookupTM = lkT emptyCME\n alterTM = xtT emptyCME\n foldTM = fdT\n\n-----------------\nlkT :: CmEnv -> Type -> TypeMap a -> Maybe a\nlkT env ty m\n | EmptyTM <- m = Nothing\n | otherwise = go ty m\n where\n go ty | Just ty' <- coreView ty = go ty'\n go (TyVarTy v) = tm_var >.> lkVar env v\n go (AppTy t1 t2) = tm_app >.> lkT env t1 >=> lkT env t2\n go (FunTy t1 t2) = tm_fun >.> lkT env t1 >=> lkT env t2\n go (TyConApp tc tys) = tm_tc_app >.> lkNamed tc >=> lkList (lkT env) tys\n go (ForAllTy tv ty) = tm_forall >.> lkT (extendCME env tv) ty >=> lkBndr env tv\n\n-----------------\nxtT :: CmEnv -> Type -> XT a -> TypeMap a -> TypeMap a\nxtT env ty f m\n | EmptyTM <- m = xtT env ty f wrapEmptyTypeMap \n | Just ty' <- coreView ty = xtT env ty' f m \n\nxtT env (TyVarTy v) f m = m { tm_var = tm_var m |> xtVar env v f }\nxtT env (AppTy t1 t2) f m = m { tm_app = tm_app m |> xtT env t1 |>> xtT env t2 f }\nxtT env (FunTy t1 t2) f m = m { tm_fun = tm_fun m |> xtT env t1 |>> xtT env t2 f }\nxtT env (ForAllTy tv ty) f m = m { tm_forall = tm_forall m |> xtT (extendCME env tv) ty \n |>> xtBndr env tv f }\nxtT env (TyConApp tc tys) f m = m { tm_tc_app = tm_tc_app m |> xtNamed tc \n |>> xtList (xtT env) tys f }\n\nfdT :: (a -> b -> b) -> TypeMap a -> b -> b\nfdT _ 
EmptyTM = \\z -> z\nfdT k m = foldTM k (tm_var m)\n . foldTM (foldTM k) (tm_app m)\n . foldTM (foldTM k) (tm_fun m)\n . foldTM (foldTM k) (tm_tc_app m)\n . foldTM (foldTM k) (tm_forall m)\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Variables\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\ntype BoundVar = Int -- Bound variables are deBruijn numbered\ntype BoundVarMap a = IntMap.IntMap a\n\ndata CmEnv = CME { cme_next :: BoundVar\n , cme_env :: VarEnv BoundVar } \n\nemptyCME :: CmEnv\nemptyCME = CME { cme_next = 0, cme_env = emptyVarEnv }\n\nextendCME :: CmEnv -> Var -> CmEnv\nextendCME (CME { cme_next = bv, cme_env = env }) v\n = CME { cme_next = bv+1, cme_env = extendVarEnv env v bv }\n\nextendCMEs :: CmEnv -> [Var] -> CmEnv\nextendCMEs env vs = foldl extendCME env vs\n\nlookupCME :: CmEnv -> Var -> Maybe BoundVar\nlookupCME (CME { cme_env = env }) v = lookupVarEnv env v\n\n--------- Variable binders -------------\ntype BndrMap = TypeMap \n\nlkBndr :: CmEnv -> Var -> BndrMap a -> Maybe a\nlkBndr env v m = lkT env (varType v) m\n\nxtBndr :: CmEnv -> Var -> XT a -> BndrMap a -> BndrMap a\nxtBndr env v f = xtT env (varType v) f\n\n--------- Variable occurrence -------------\ndata VarMap a = VM { vm_bvar :: BoundVarMap a -- Bound variable\n , vm_fvar :: VarEnv a } \t -- Free variable\n\ninstance TrieMap VarMap where\n type Key VarMap = Var\n emptyTM = VM { vm_bvar = IntMap.empty, vm_fvar = emptyVarEnv }\n lookupTM = lkVar emptyCME\n alterTM = xtVar emptyCME\n foldTM = fdVar\n\nlkVar :: CmEnv -> Var -> VarMap a -> Maybe a\nlkVar env v \n | Just bv <- lookupCME env v = vm_bvar >.> lookupTM bv\n | otherwise = vm_fvar >.> lkFreeVar v\n\nxtVar :: CmEnv -> Var -> XT a -> VarMap a -> VarMap a\nxtVar env v f m\n | Just bv <- lookupCME env v = m { vm_bvar = vm_bvar m |> xtInt bv f }\n | otherwise = m { vm_fvar = vm_fvar m |> xtFreeVar v f 
}\n\nfdVar :: (a -> b -> b) -> VarMap a -> b -> b\nfdVar k m = foldTM k (vm_bvar m)\n . foldTM k (vm_fvar m)\n\nlkFreeVar :: Var -> VarEnv a -> Maybe a\nlkFreeVar var env = lookupVarEnv env var\n\nxtFreeVar :: Var -> XT a -> VarEnv a -> VarEnv a\nxtFreeVar v f m = alterVarEnv f m v\n\\end{code}","avg_line_length":35.5854063018,"max_line_length":103,"alphanum_fraction":0.5118370771} {"size":1731,"ext":"lhs","lang":"Literate Haskell","max_stars_count":2.0,"content":"\nLists with length, unchunked.\n\n\\begin{code}\n{-# OPTIONS -fno-monomorphism-restriction #-}\nmodule MathObj.Vector.Indexed where\n\n-- import MathObj.Vector.Base\nimport qualified Data.List as DL -- (length, transpose)\n-- import Prelude (Int, Num, zipWith)\n-- import qualified Prelude as P\n\ntype Indexed a = (Int, [a])\n\n-- instance Num x => Vectors (x, v) x v where\n\n-- we need a type definition here!\n-- l2 :: Num a => (a -> a -> a) -> (Int, [a]) -> (Int, [a]) -> (Int, [a])\nl2 (f) (n, xs) (m, ys) | n\/=m = error \"List lengths mismatch\"\n | otherwise = (n, zipWith f xs ys) \nl1 (f) (n, xs) = (n, map (f) xs)\nlength (n, xs) = n\ntoList (n, xs) = xs\nfromList xs = (DL.length xs, xs)\ntranspose xss = map(fromList) $ DL.transpose $ map (snd) xss\nconcat xss = (sum $ map (fst) xss, DL.concat $ map (snd) xss)\n(!!) 
(n, xs) k | 0<=k && k Indexed where\n-- div = l (P.div)\nadd = l2 (+)\nsubstr = l2 (-)\nmult = l2 (*)\ndivide = l2 (\/)\nabsolute = l1 (abs)\ndivideSkalar (n, xs) k = divide (n, xs) (fromList $ take n $ repeat k)\nmultSkalar (n, xs) k = mult (n, xs) (fromList $ take n $ repeat k)\n\\end{code}\n\nAnd the shorthand operators:\n-- these are only defined in Vektor.lhs!\n\\begin{nocode}\n(@+) = add\n(@-) = substr\n(@*) = mult\n(@\/) = divide\n($\/) = divideSkalar\n\\end{nocode}","avg_line_length":28.85,"max_line_length":73,"alphanum_fraction":0.5719237435} {"size":13659,"ext":"lhs","lang":"Literate Haskell","max_stars_count":380.0,"content":"\n[[combinator-review]]\n= Combinator review\n\nIn this tutorial we will go through all the functions in\nText.Parsec.Combinator, and some useful ones in Control.Applicative\nand Control.Monad as well.\n\n> import Text.Parsec (ParseError)\n> import Text.Parsec.String (Parser)\n> import Text.Parsec.String.Parsec (try)\n> import Text.Parsec.String.Char (oneOf, char, digit\n> ,string, letter, satisfy)\n> import Text.Parsec.String.Combinator (many1, choice, chainl1, between\n> ,count, option, optionMaybe, optional)\n> import Control.Applicative ((<$>), (<*>), (<$), (<*), (*>), (<|>), many)\n> import Control.Monad (void, ap, mzero)\n> import Data.Char (isLetter, isDigit)\n> import FunctionsAndTypesForParsing\n\n== Text.Parsec.Combinator\n\nYou should look at the source for these functions and try to\nunderstand how they are implemented.\n\n\n\nThe style of the source code in the Parsec library sources is a little\ndifferent to what we used at the end of the last tutorial. You can try\nreimplementing each of the Text.Parsec.Combinator module functions\nusing the Applicative style. 
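As a taster, here is what an Applicative-style reimplementation of `count` can look like. This is a sketch with made-up names (`countA`, `countA'`), not the library code; because it only uses the `Applicative` interface, it can be checked against simpler instances such as `Maybe` and lists instead of `Parser`:

```haskell
-- countA n p runs p exactly n times and collects the results,
-- mirroring the shape of Text.Parsec.Combinator.count,
-- but written for any Applicative.
countA :: Applicative f => Int -> f a -> f [a]
countA n p = sequenceA (replicate n p)

-- The same idea written out recursively, Applicative-style.
countA' :: Applicative f => Int -> f a -> f [a]
countA' n p
  | n <= 0    = pure []
  | otherwise = (:) <$> p <*> countA' (n - 1) p
```

Specialised to `Parser`, both behave like `count`; running them at a few simpler instances first is one cheap way to gain confidence in a rewrite.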
See if you can find a way to reassure\nyourself that the rewritten versions you make are correct, perhaps via\nwriting automated tests, or perhaps some other method.\n\nYou should be able to easily understand the implementation of all the\nfunctions in Text.Parsec.Combinator except possibly `anyToken` and\n`notFollowedBy`.\n\n=== choice\n\n```haskell\nchoice :: [Parser a] -> Parser a\n```\n\n`choice ps` tries to apply the parsers in the list `ps` in order,\nuntil one of them succeeds. It returns the value of the succeeding\nparser.\n\n> a :: Parser Char\n> a = char 'a'\n>\n> b :: Parser Char\n> b = char 'b'\n>\n> aOrB :: Parser Char\n> aOrB = choice [a,b]\n\n```\n*Main> regularParse aOrB \"a\"\nRight 'a'\n\n*Main> regularParse aOrB \"b\"\nRight 'b'\n\n*Main> regularParse aOrB \"c\"\nLeft (line 1, column 1):\nunexpected \"c\"\nexpecting \"a\" or \"b\"\n```\n\n==== using with try\n\nIf a parser fails with `(<|>)` or `choice`, then it will only try the\nnext parser if the last parser consumed no input.\n\nTODO: make the parsers return the keyword and update the examples\n\n> byKeyword :: Parser ()\n> byKeyword = void $ string \"by\"\n>\n> betweenKeyword :: Parser ()\n> betweenKeyword = void $ string \"between\"\n\nSince both of these have the same prefix - b - if we combine them\nusing `choice` then it doesn't work correctly:\n\n```\n*Main> regularParse byKeyword \"by\"\nRight ()\n\n*Main> regularParse byKeyword \"between\"\nLeft (line 1, column 1):\nunexpected \"e\"\nexpecting \"by\"\n\n*Main> regularParse betweenKeyword \"between\"\nRight ()\n\n*Main> regularParse betweenKeyword \"by\"\nLeft (line 1, column 1):\nunexpected \"y\"\nexpecting \"between\"\n\n*Main> regularParse (choice [betweenKeyword,byKeyword]) \"between\"\nRight ()\n\n*Main> regularParse (choice [betweenKeyword,byKeyword]) \"by\"\nLeft (line 1, column 1):\nunexpected \"y\"\nexpecting \"between\"\n\n*Main> regularParse (choice [byKeyword,betweenKeyword]) \"between\"\nLeft (line 1, column 
1):\nunexpected \"e\"\nexpecting \"by\"\n\n*Main> regularParse (choice [byKeyword,betweenKeyword]) \"by\"\nRight ()\n```\n\nIf we use `try` on the first option, then it all works fine.\n\n```\n*Main> regularParse (choice [try byKeyword,betweenKeyword]) \"by\"\nRight ()\n\n*Main> regularParse (choice [try byKeyword,betweenKeyword]) \"between\"\nRight ()\n```\n\n=== count\n\n```haskell\ncount :: Int -> Parser a -> Parser [a]\n```\n\n`count n p` parses `n` occurrences of `p`. If `n` is smaller or equal\nto zero, the parser is equivalent to `return []`. It returns a list of\nthe n values returned by `p`.\n\n```\n*Main> regularParse (count 5 a) \"aaaaa\"\nRight \"aaaaa\"\n\n*Main> regularParse (count 5 a) \"aaaa\"\nLeft (line 1, column 5):\nunexpected end of input\nexpecting \"a\"\n\n*Main> regularParse (count 5 a) \"aaaab\"\nLeft (line 1, column 5):\nunexpected \"b\"\nexpecting \"a\"\n\n*Main> regularParse (count 5 aOrB) \"aabaa\"\nRight \"aabaa\"\n```\n\n=== between\n\n```haskell\nbetween :: Parser open -> Parser close -> Parser a -> Parser a\n```\n\n`between open close p` parses `open`, followed by `p` and\n`close`. It returns the value returned by `p`.\n\nWe can replace the betweenParens from the previous tutorial using\nthis:\n\n> betweenParens :: Parser a -> Parser a\n> betweenParens p = between (symbol '(') (symbol ')') p\n\nIt hardly seems worth it to make this change, but it might be slightly\nquicker to read and understand if you aren't already familiar with\nsome code or haven't viewed it for a while. 
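To make the plumbing of `between` concrete, here is a toy, parsec-free model of the combinator. Everything here (`P`, `charP`, `digitsP`, `betweenP`) is an invented stand-in for illustration, not the real library:

```haskell
-- A toy parser type: consume a prefix of the input, maybe produce a value.
type P a = String -> Maybe (a, String)

charP :: Char -> P Char
charP c (x:xs) | x == c = Just (c, xs)
charP _ _               = Nothing

digitsP :: P String
digitsP s = case span (`elem` ['0'..'9']) s of
              ([], _)    -> Nothing
              (ds, rest) -> Just (ds, rest)

-- betweenP open close p: run open, then p, then close, keeping only
-- p's result -- exactly the shape of Parsec's between.
betweenP :: P open -> P close -> P a -> P a
betweenP open close p s0 = do
  (_, s1) <- open s0
  (x, s2) <- p s1
  (_, s3) <- close s2
  Just (x, s3)
```

For example, `betweenP (charP '(') (charP ')') digitsP "(123)"` yields `Just ("123", "")`, and fails if the closing parenthesis is missing; the real `between` is the same composition written over `Parser`.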
This is good for 'code\nmaintenance', where we need to fix bugs or add new features quickly to\ncode we haven't looked at for two years or something.\n\nHere are the support functions for this parser.\n\n> symbol :: Char -> Parser Char\n> symbol c = lexeme $ char c\n\n> lexeme :: Parser a -> Parser a\n> lexeme p = p <* whitespace\n\n> whitespace :: Parser ()\n> whitespace = void $ oneOf \" \\n\\t\"\n\n=== option\n\n```haskell\noption :: a -> Parser a -> Parser a\n```\n\n`option x p` tries to apply parser `p`. If `p` fails without\nconsuming input, it returns the value `x`, otherwise the value returned\nby `p`.\n\n```\n*Main> regularParse (option \"\" (count 5 aOrB)) \"aaaaa\"\nRight \"aaaaa\"\n\n*Main> regularParse (option \"\" (count 5 aOrB)) \"caaaa\"\nRight \"\"\n\n*Main> regularParse (option \"\" (count 5 aOrB)) \"aaaa\"\nLeft (line 1, column 5):\nunexpected end of input\nexpecting \"a\" or \"b\"\n\n*Main> regularParse (option \"\" (count 5 aOrB)) \"aaaac\"\nLeft (line 1, column 5):\nunexpected \"c\"\nexpecting \"a\" or \"b\"\n\n*Main> regularParse (option \"\" (try (count 5 aOrB))) \"aaaa\"\nRight \"\"\n```\n\n=== optionMaybe\n\n```haskell\noptionMaybe :: Parser a -> Parser (Maybe a)\n```\n\n`optionMaybe p` tries to apply parser `p`. 
If `p` fails without consuming\ninput, it returns `Nothing`, otherwise it returns `Just` the value returned\nby `p`.\n\n```\n*Main> regularParse (optionMaybe (count 5 aOrB)) \"aaaaa\"\nRight (Just \"aaaaa\")\n\n*Main> regularParse (optionMaybe (count 5 aOrB)) \"caaaa\"\nRight Nothing\n\n*Main> regularParse (optionMaybe (count 5 aOrB)) \"caaa\"\nRight Nothing\n\n*Main> regularParse (optionMaybe (count 5 aOrB)) \"aaaa\"\nLeft (line 1, column 5):\nunexpected end of input\nexpecting \"a\" or \"b\"\n\n*Main> regularParse (optionMaybe (count 5 aOrB)) \"aaaac\"\nLeft (line 1, column 5):\nunexpected \"c\"\nexpecting \"a\" or \"b\"\n\n*Main> regularParse (optionMaybe (try $ count 5 aOrB)) \"aaaac\"\nRight Nothing\n```\n\n=== optional\n\n```haskell\noptional :: Parser a -> Parser ()\n```\n\n`optional p` tries to apply parser `p`. It will parse `p` or\nnothing. It only fails if `p` fails after consuming input. It discards\nthe result of `p`.\n\n```\n*Main> parseWithLeftOver (optional (count 5 aOrB)) \"aaaaa\"\nRight ((),\"\")\n\n*Main> parseWithLeftOver (optional (count 5 aOrB)) \"caaaa\"\nRight ((),\"caaaa\")\n\n*Main> parseWithLeftOver (optional (count 5 aOrB)) \"caaa\"\nRight ((),\"caaa\")\n\n*Main> parseWithLeftOver (optional (count 5 aOrB)) \"aaaa\"\nLeft (line 1, column 5):\nunexpected end of input\nexpecting \"a\" or \"b\"\n\n*Main> parseWithLeftOver (optional (count 5 aOrB)) \"aaaac\"\nLeft (line 1, column 5):\nunexpected \"c\"\nexpecting \"a\" or \"b\"\n\n*Main> parseWithLeftOver (optional (try $ count 5 aOrB)) \"aaaac\"\nRight ((),\"aaaac\")\n```\n\n=== skipMany1\n\n```haskell\nskipMany1 :: Parser a -> Parser ()\n```\n\n`skipMany1 p` applies the parser `p` one or more times, skipping its result.\n\n=== many1\n\n```haskell\nmany1 :: Parser a -> Parser [a]\n```\n\nmany1 p applies the parser p one or more times. 
Returns a list of the\nreturned values of p.\n\n```haskell\n word = many1 letter\n```\n\n=== sepBy\n\n```haskell\nsepBy :: Parser a -> Parser sep -> Parser [a]\n```\n\nsepBy p sep parses zero or more occurrences of p, separated by\nsep. Returns a list of values returned by p.\n\n```haskell\n commaSep p = p `sepBy` (symbol \",\")\n```\n\n=== sepBy1\n\n```haskell\nsepBy1 :: Parser a -> Parser sep -> Parser [a]\n```\n\nsepBy1 p sep parses one or more occurrences of p, separated by\nsep. Returns a list of values returned by p.\n\n=== endBy\n\n```haskell\nendBy :: Parser a -> Parser sep -> Parser [a]\n```\n\nendBy p sep parses zero or more occurrences of p, separated and ended\nby sep. Returns a list of values returned by p.\n\n```haskell\n cStatements = cStatement `endBy` semi\n```\n\n=== endBy1\n\n```haskell\nendBy1 :: Parser a -> Parser sep -> Parser [a]\n```\n\nendBy1 p sep parses one or more occurrences of p, separated and ended\nby sep. Returns a list of values returned by p.\n\n=== sepEndBy\n\n```haskell\nsepEndBy :: Parser a -> Parser sep -> Parser [a]\n```\n\nsepEndBy p sep parses zero or more occurrences of p, separated and\noptionally ended by sep, i.e. Haskell-style statements. Returns a list\nof values returned by p.\n\n```haskell\n haskellStatements = haskellStatement `sepEndBy` semi\n```\n\n=== sepEndBy1\n\n```haskell\nsepEndBy1 :: Parser a -> Parser sep -> Parser [a]\n```\n\nsepEndBy1 p sep parses one or more occurrences of p, separated and\noptionally ended by sep. Returns a list of values returned by p.\n\n=== chainl\n\n```haskell\nchainl :: Parser a -> Parser (a -> a -> a) -> a -> Parser a\n```\n\nchainl p op x parses zero or more occurrences of p, separated by\nop. Returns a value obtained by a left associative application of all\nfunctions returned by op to the values returned by p. If there are\n
If there are\nzero occurrences of p, the value x is returned.\n\n=== chainl1\n\n```haskell\nchainl1 :: Parser a -> Parser (a -> a -> a) -> Parser a\n```\n\nchainl1 p op x parser one or more occurrences of p, separated by op\nReturns a value obtained by a left associative application of all\nfunctions returned by op to the values returned by p. . This parser\ncan for example be used to eliminate left recursion which typically\noccurs in expression grammars.\n\n=== chainr\n\n```haskell\nchainr :: Parser a -> Parser (a -> a -> a) -> a -> Parser a\n```\n\nchainr p op x parser zero or more occurrences of p, separated by op\nReturns a value obtained by a right associative application of all\nfunctions returned by op to the values returned by p. If there are no\noccurrences of p, the value x is returned.\n\n=== chainr1\n\n```haskell\nchainr1 :: Parser a -> Parser (a -> a -> a) -> Parser a\n```\n\nchainr1 p op x parser one or more occurrences of `p`, separated by op\nReturns a value obtained by a right associative application of all\nfunctions returned by op to the values returned by p.\n\n=== eof\n\n```haskell\neof :: Parser ()\n```\n\nThis parser only succeeds at the end of the input. This is not a\nprimitive parser but it is defined using notFollowedBy.\n\n```haskell\n eof = notFollowedBy anyToken \"end of input\"\n```\n\nThe () operator is used for error messages. We will come back to\nerror messages after writing the basic SQL parser.\n\n=== notFollowedBy\n\n```haskell\nnotFollowedBy :: Show a => Parser a -> Parser ()\n```\n\nnotFollowedBy p only succeeds when parser p fails. This parser does\nnot consume any input. This parser can be used to implement the\n'longest match' rule. For example, when recognizing keywords (for\nexample let), we want to make sure that a keyword is not followed by a\nlegal identifier character, in which case the keyword is actually an\nidentifier (for example lets). 
We can program this behaviour as\nfollows:\n\n```haskell\n keywordLet = try (do{ string \"let\"\n ; notFollowedBy alphaNum\n })\n```\n\n=== manyTill\n\n```haskell\nmanyTill :: Parser a -> Parser end -> Parser [a]\n```\n\nmanyTill p end applies parser p zero or more times until parser end\nsucceeds. Returns the list of values returned by p. This parser can be\nused to scan comments:\n\n```haskell\n simpleComment = do{ string \"<!--\"\n ; manyTill anyChar (try (string \"-->\"))\n }\n```\n// asciidoc: the string should be \"-->\"\n\nNote the overlapping parsers anyChar and string \"-\->\", and therefore\nthe use of the try combinator.\n\n=== lookAhead\n\n```haskell\nlookAhead :: Parser a -> Parser a\n```\n\nlookAhead p parses p without consuming any input.\n\nIf p fails and consumes some input, so does lookAhead. Combine with\ntry if this is undesirable.\n\n=== anyToken\n\n```haskell\nanyToken :: Parser Char\n```\n\nThe parser anyToken accepts any kind of token. It is for example used\nto implement eof. Returns the accepted token.\n\n== Control.Applicative\n\nHere are the functions from Applicative that are used:\n\n`(<$>)`, `(<*>)`, `(<$)`, `(<*)`, `(*>)`, `(<|>)`, `many`\n\nTODO: examples for all of these\n\nWe've already seen all of these, except `(<$)`. 
This is often used to\nparse a keyword and return a no argument constructor:\n\n> data Something = Type1 | Type2 | Type3\n\n> something :: Parser Something\n> something = choice [Type1 <$ string \"type1\"\n> ,Type2 <$ string \"type2\"\n> ,Type3 <$ string \"type3\"]\n\n\/\/ the first symbol following should be (<**>)\n\/\/ the backslashes are for asciidoc\n\/\/ todo: do this in the render.lhs?\n\/\/ I wish I understood why you need backslashes in some places\n\/\/ and not others\n\nThere is also `(<\\**>)` which is `(<*>)` with the arguments flipped.\n\nTODO: double check using these from Parsec instead of\nControl.Applicative: possible performance implictions?\n\n== Control.Monad\n\n=== return\n\nOne use of return is to always succeed, and return a value:\n\n> alwaysX :: Parser Char\n> alwaysX = return 'x'\n\n```\n*Main> parseWithLeftOver (a <|> alwaysX) \"a\"\nRight ('a',\"\")\n\n*Main> parseWithLeftOver (a <|> alwaysX) \"b\"\nRight ('x',\"b\")\n```\n\n=== mzero\n\nThis function is used in the implementation of `choice`:\n\n> choice' :: [Parser a] -> Parser a\n> choice' ps = foldr (<|>) mzero ps\n\n\n\nTODO: go through a bunch of functions + do notation examples\n\n >>=\n =<<\n >>\n void\n mapM, mapM_\n sequence,sequence_\n guard\n return\nmzero\nmplus\nwhen, unless\nliftMN\nap\nquick note about fail, will return to this in the error messages stage\n\nTODO: using trace\n","avg_line_length":23.9211908932,"max_line_length":86,"alphanum_fraction":0.6915586793} {"size":257,"ext":"lhs","lang":"Literate Haskell","max_stars_count":1.0,"content":"%\n% (c) The GRASP\/AQUA Project, Glasgow University, 1992-1998\n%\n\\section[Constants]{Info about this compilation}\n\n\\begin{code}\nmodule Constants (module Constants) where\n\n#include \"ghc_boot_platform.h\"\n\n#include \"..\/includes\/HaskellConstants.hs\"\n\n\\end{code}\n","avg_line_length":18.3571428571,"max_line_length":59,"alphanum_fraction":0.7548638132} {"size":1355,"ext":"lhs","lang":"Literate 
Haskell","max_stars_count":248.0,"content":"This is an example of an equality proof from week 3 \"embedded in Haskell\".\n\n\\begin{code}\nmodule DSLsofMath.PoorMansProofs where\nimport DSLsofMath.EvalD\nimport DSLsofMath.FunNumInst\n-- import DSLsofMath.FunExp\n\ntype EqChain a = [a]\n\neqChain :: FunExp -> EqChain (FD Double)\neqChain e =\n [\n evalD (Exp e)\n\n , {- specification of |evalD| -}\n\n (eval (Exp e), eval' (Exp e))\n\n , {- def. |eval'|, function composition -}\n\n (eval (Exp e), eval (derive (Exp e)))\n\n , {- def. |derive| for |Exp| -}\n\n (eval (Exp e), eval (Exp e :*: derive e))\n\n , {- def. |eval| for |:*:| -}\n\n (eval (Exp e), eval (Exp e) * eval (derive e))\n\n , {- def. |eval| for |Exp| -}\n\n (exp (eval e), exp (eval e) * eval (derive e))\n\n , {- def. |eval'| -}\n\n (exp (eval e), exp (eval e) * eval' e)\n\n , {- introduce names for subexpressions -}\n\n let f = eval e\n f' = eval' e\n in (exp f, exp f * f')\n\n , {- def. |evalD| -}\n\n let (f, f') = evalD e\n in (exp f, exp f * f')\n ]\n\napplyFD :: a -> FD a -> (a, a)\napplyFD c (f, f') = (f c, f' c)\n\ntest :: Double -> FunExp -> [(Double, Double)]\ntest c e = map (applyFD c) (eqChain e)\n\ncheck c e = allEq (test c e)\n\nallEq :: Eq a => EqChain a -> Bool\nallEq [] = True\nallEq (e:es) = all (e==) es\n\nmain = check 0 (Const 1) && check 1 (X :+: Exp X)\n\\end{code}\n","avg_line_length":20.5303030303,"max_line_length":74,"alphanum_fraction":0.5298892989} {"size":1449,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"\nhttp:\/\/www.euterpea.com\/download-and-installation\/\n\nCompiling to Executable\n\nOn newer Macs, MUIs will not work from the GHCi interpreter. Failure cases range from unresponsive windows to threading errors. However, if this happens to you, you will still be able to run MUIs successfully by compiling your code to executable. 
Windows users can also benefit from compiling to executable with some MUIs because it can help speed up execution between frames, which is useful for interactive programs.\n\nThe easiest way to try compiling a MUI to executable the first time is to use one of the MUI examples that come included with HSoM. \n\n\n> module Main where\n> import HSoM.Examples.MUIExamples1\n\nBased on your test, use the desired switch on the command line:\n\n> import System.Environment\n> import Data.List \n\n> main :: IO ()\n> main = do \n> args <- getArgs\n> case args of \n> x:_\n> | x == \"mui0\" -> mui0\n> | x == \"mui1\" -> mui1\n> | x == \"mui2\" -> mui2\n> | x == \"mui3\" -> mui3\n> | x == \"mui4\" -> mui4\n> | x == \"mui'4\" -> mui'4\n> | x == \"mui5\" -> mui5\n> | x == \"colorSwatch\" -> colorSwatch\n> _ -> do\n> name <- getProgName\n> putStrLn $ \"usage: \" ++ name ++ \" \" ++ \" where\" ++ \" equals mui0 or colorSwatch\"\n\nThen, compile this program to an executable from a terminal with:\n\nghc MUIExamples1.lhs\n\nand then run it with:\n\n.\/MUIExamples1 mui0\n\n","avg_line_length":32.9318181818,"max_line_length":418,"alphanum_fraction":0.6528640442} {"size":70270,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"%\n% (c) The University of Glasgow 2006\n% (c) The GRASP\/AQUA Project, Glasgow University, 1992-1998\n%\n\nType checking of type signatures in interface files\n\n\begin{code}\nmodule TcIface (\n tcLookupImported_maybe,\n importDecl, checkWiredInTyCon, tcHiBootIface, typecheckIface,\n tcIfaceDecl, tcIfaceInst, tcIfaceFamInst, tcIfaceRules,\n tcIfaceVectInfo, tcIfaceAnnotations,\n tcIfaceExpr, -- Desired by HERMIT (Trac #7683)\n tcIfaceGlobal,\n tcExtCoreBindings\n ) where\n\n#include \"HsVersions.h\"\n\nimport TcTypeNats(typeNatCoAxiomRules)\nimport IfaceSyn\nimport LoadIface\nimport IfaceEnv\nimport BuildTyCl\nimport TcRnMonad\nimport TcType\nimport Type\nimport Coercion\nimport TypeRep\nimport HscTypes\nimport Annotations\nimport 
InstEnv\nimport FamInstEnv\nimport CoreSyn\nimport CoreUtils\nimport CoreUnfold\nimport CoreLint\nimport MkCore ( castBottomExpr )\nimport Id\nimport MkId\nimport IdInfo\nimport Class\nimport TyCon\nimport CoAxiom\nimport ConLike\nimport DataCon\nimport PrelNames\nimport TysWiredIn\nimport TysPrim ( superKindTyConName )\nimport BasicTypes ( strongLoopBreaker )\nimport Literal\nimport qualified Var\nimport VarEnv\nimport VarSet\nimport Name\nimport NameEnv\nimport NameSet\nimport OccurAnal ( occurAnalyseExpr )\nimport Demand\nimport Module\nimport UniqFM\nimport UniqSupply\nimport Outputable\nimport ErrUtils\nimport Maybes\nimport SrcLoc\nimport DynFlags\nimport Util\nimport FastString\n\nimport Control.Monad\nimport qualified Data.Map as Map\nimport Data.Traversable ( traverse )\n\\end{code}\n\nThis module takes\n\n IfaceDecl -> TyThing\n IfaceType -> Type\n etc\n\nAn IfaceDecl is populated with RdrNames, and these are not renamed to\nNames before typechecking, because there should be no scope errors etc.\n\n -- For (b) consider: f = \\$(...h....)\n -- where h is imported, and calls f via an hi-boot file.\n -- This is bad! But it is not seen as a staging error, because h\n -- is indeed imported. We don't want the type-checker to black-hole\n -- when simplifying and compiling the splice!\n --\n -- Simple solution: discard any unfolding that mentions a variable\n -- bound in this module (and hence not yet processed).\n -- The discarding happens when forkM finds a type error.\n\n%************************************************************************\n%* *\n%* tcImportDecl is the key function for \"faulting in\" *\n%* imported things\n%* *\n%************************************************************************\n\nThe main idea is this. We are chugging along type-checking source code, and\nfind a reference to GHC.Base.map. We call tcLookupGlobal, which doesn't find\nit in the EPS type envt. 
So it\n 1 loads GHC.Base.hi\n 2 gets the decl for GHC.Base.map\n 3 typechecks it via tcIfaceDecl\n 4 and adds it to the type env in the EPS\n\nNote that DURING STEP 4, we may find that map's type mentions a type\nconstructor that also\n\nNotice that for imported things we read the current version from the EPS\nmutable variable. This is important in situations like\n ...$(e1)...$(e2)...\nwhere the code that e1 expands to might import some defns that\nalso turn out to be needed by the code that e2 expands to.\n\n\\begin{code}\ntcLookupImported_maybe :: Name -> TcM (MaybeErr MsgDoc TyThing)\n-- Returns (Failed err) if we can't find the interface file for the thing\ntcLookupImported_maybe name\n = do { hsc_env <- getTopEnv\n ; mb_thing <- liftIO (lookupTypeHscEnv hsc_env name)\n ; case mb_thing of\n Just thing -> return (Succeeded thing)\n Nothing -> tcImportDecl_maybe name }\n\ntcImportDecl_maybe :: Name -> TcM (MaybeErr MsgDoc TyThing)\n-- Entry point for *source-code* uses of importDecl\ntcImportDecl_maybe name\n | Just thing <- wiredInNameTyThing_maybe name\n = do { when (needWiredInHomeIface thing)\n (initIfaceTcRn (loadWiredInHomeIface name))\n -- See Note [Loading instances for wired-in things]\n ; return (Succeeded thing) }\n | otherwise\n = initIfaceTcRn (importDecl name)\n\nimportDecl :: Name -> IfM lcl (MaybeErr MsgDoc TyThing)\n-- Get the TyThing for this Name from an interface file\n-- It's not a wired-in thing -- the caller caught that\nimportDecl name\n = ASSERT( not (isWiredInName name) )\n do { traceIf nd_doc\n\n -- Load the interface, which should populate the PTE\n ; mb_iface <- ASSERT2( isExternalName name, ppr name )\n loadInterface nd_doc (nameModule name) ImportBySystem\n ; case mb_iface of {\n Failed err_msg -> return (Failed err_msg) ;\n Succeeded _ -> do\n\n -- Now look it up again; this time we should find it\n { eps <- getEps\n ; case lookupTypeEnv (eps_PTE eps) name of\n Just thing -> return (Succeeded thing)\n Nothing -> return (Failed 
not_found_msg)\n }}}\n where\n nd_doc = ptext (sLit \"Need decl for\") <+> ppr name\n not_found_msg = hang (ptext (sLit \"Can't find interface-file declaration for\") <+>\n pprNameSpace (occNameSpace (nameOccName name)) <+> ppr name)\n 2 (vcat [ptext (sLit \"Probable cause: bug in .hi-boot file, or inconsistent .hi file\"),\n ptext (sLit \"Use -ddump-if-trace to get an idea of which file caused the error\")])\n\\end{code}\n\n%************************************************************************\n%* *\n Checks for wired-in things\n%* *\n%************************************************************************\n\nNote [Loading instances for wired-in things]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWe need to make sure that we have at least *read* the interface files\nfor any module with an instance decl or RULE that we might want.\n\n* If the instance decl is an orphan, we have a whole separate mechanism\n (loadOprhanModules)\n\n* If the instance decl not an orphan, then the act of looking at the\n TyCon or Class will force in the defining module for the\n TyCon\/Class, and hence the instance decl\n\n* BUT, if the TyCon is a wired-in TyCon, we don't really need its interface;\n but we must make sure we read its interface in case it has instances or\n rules. That is what LoadIface.loadWiredInHomeInterface does. It's called\n from TcIface.{tcImportDecl, checkWiredInTyCon, ifCheckWiredInThing}\n\n* HOWEVER, only do this for TyCons. There are no wired-in Classes. There\n are some wired-in Ids, but we don't want to load their interfaces. For\n example, Control.Exception.Base.recSelError is wired in, but that module\n is compiled late in the base library, and we don't want to force it to\n load before it's been compiled!\n\nAll of this is done by the type checker. The renamer plays no role.\n(It used to, but no longer.)\n\n\n\\begin{code}\ncheckWiredInTyCon :: TyCon -> TcM ()\n-- Ensure that the home module of the TyCon (and hence its instances)\n-- are loaded. 
See Note [Loading instances for wired-in things]\n-- It might not be a wired-in tycon (see the calls in TcUnify),\n-- in which case this is a no-op.\ncheckWiredInTyCon tc\n | not (isWiredInName tc_name)\n = return ()\n | otherwise\n = do { mod <- getModule\n ; ASSERT( isExternalName tc_name )\n when (mod \/= nameModule tc_name)\n (initIfaceTcRn (loadWiredInHomeIface tc_name))\n -- Don't look for (non-existent) Float.hi when\n -- compiling Float.lhs, which mentions Float of course\n -- A bit yukky to call initIfaceTcRn here\n }\n where\n tc_name = tyConName tc\n\nifCheckWiredInThing :: TyThing -> IfL ()\n-- Even though we are in an interface file, we want to make\n-- sure the instances of a wired-in thing are loaded (imagine f :: Double -> Double)\n-- Ditto want to ensure that RULES are loaded too\n-- See Note [Loading instances for wired-in things]\nifCheckWiredInThing thing\n = do { mod <- getIfModule\n -- Check whether we are typechecking the interface for this\n -- very module. E.g when compiling the base library in --make mode\n -- we may typecheck GHC.Base.hi. At that point, GHC.Base is not in\n -- the HPT, so without the test we'll demand-load it into the PIT!\n -- C.f. the same test in checkWiredInTyCon above\n ; let name = getName thing\n ; ASSERT2( isExternalName name, ppr name )\n when (needWiredInHomeIface thing && mod \/= nameModule name)\n (loadWiredInHomeIface name) }\n\nneedWiredInHomeIface :: TyThing -> Bool\n-- Only for TyCons; see Note [Loading instances for wired-in things]\nneedWiredInHomeIface (ATyCon {}) = True\nneedWiredInHomeIface _ = False\n\\end{code}\n\n%************************************************************************\n%* *\n Type-checking a complete interface\n%* *\n%************************************************************************\n\nSuppose we discover we don't need to recompile. Then we must type\ncheck the old interface file. This is a bit different to the\nincremental type checking we do as we suck in interface files. 
Instead\nwe do things similarly to when we are typechecking source decls: we\nbring into scope the type envt for the interface all at once, using a\nknot. Remember, the decls aren't necessarily in dependency order --\nand even if they were, the type decls might be mutually recursive.\n\n\\begin{code}\ntypecheckIface :: ModIface -- Get the decls from here\n -> TcRnIf gbl lcl ModDetails\ntypecheckIface iface\n = initIfaceTc iface $ \\ tc_env_var -> do\n -- The tc_env_var is freshly allocated, private to\n -- type-checking this particular interface\n { -- Get the right set of decls and rules. If we are compiling without -O\n -- we discard pragmas before typechecking, so that we don't \"see\"\n -- information that we shouldn't. From a versioning point of view,\n -- it's not actually *wrong* to do so, but in fact GHCi is unable\n -- to handle unboxed tuples, so it must not see unfoldings.\n ignore_prags <- goptM Opt_IgnoreInterfacePragmas\n\n -- Typecheck the decls. This is done lazily, so that the knot-tying\n -- within this single module works out right. 
In the If monad there is\n -- no global envt for the current interface; instead, the knot is tied\n -- through the if_rec_types field of IfGblEnv\n ; names_w_things <- loadDecls ignore_prags (mi_decls iface)\n ; let type_env = mkNameEnv names_w_things\n ; writeMutVar tc_env_var type_env\n\n -- Now do those rules, instances and annotations\n ; insts <- mapM tcIfaceInst (mi_insts iface)\n ; fam_insts <- mapM tcIfaceFamInst (mi_fam_insts iface)\n ; rules <- tcIfaceRules ignore_prags (mi_rules iface)\n ; anns <- tcIfaceAnnotations (mi_anns iface)\n\n -- Vectorisation information\n ; vect_info <- tcIfaceVectInfo (mi_module iface) type_env (mi_vect_info iface)\n\n -- Exports\n ; exports <- ifaceExportNames (mi_exports iface)\n\n -- Finished\n ; traceIf (vcat [text \"Finished typechecking interface for\" <+> ppr (mi_module iface),\n text \"Type envt:\" <+> ppr type_env])\n ; return $ ModDetails { md_types = type_env\n , md_insts = insts\n , md_fam_insts = fam_insts\n , md_rules = rules\n , md_anns = anns\n , md_vect_info = vect_info\n , md_exports = exports\n }\n }\n\\end{code}\n\n\n%************************************************************************\n%* *\n Type and class declarations\n%* *\n%************************************************************************\n\n\\begin{code}\ntcHiBootIface :: HscSource -> Module -> TcRn ModDetails\n-- Load the hi-boot iface for the module being compiled,\n-- if it indeed exists in the transitive closure of imports\n-- Return the ModDetails, empty if no hi-boot iface\ntcHiBootIface hsc_src mod\n | isHsBoot hsc_src -- Already compiling a hs-boot file\n = return emptyModDetails\n | otherwise\n = do { traceIf (text \"loadHiBootInterface\" <+> ppr mod)\n\n ; mode <- getGhcMode\n ; if not (isOneShot mode)\n -- In --make and interactive mode, if this module has an hs-boot file\n -- we'll have compiled it already, and it'll be in the HPT\n --\n -- We check wheher the interface is a *boot* interface.\n -- It can happen (when using GHC 
from Visual Studio) that we\n -- compile a module in TypecheckOnly mode, with a stable,\n -- fully-populated HPT. In that case the boot interface isn't there\n -- (it's been replaced by the mother module) so we can't check it.\n -- And that's fine, because if M's ModInfo is in the HPT, then\n -- it's been compiled once, and we don't need to check the boot iface\n then do { hpt <- getHpt\n ; case lookupUFM hpt (moduleName mod) of\n Just info | mi_boot (hm_iface info)\n -> return (hm_details info)\n _ -> return emptyModDetails }\n else do\n\n -- OK, so we're in one-shot mode.\n -- In that case, we're read all the direct imports by now,\n -- so eps_is_boot will record if any of our imports mention us by\n -- way of hi-boot file\n { eps <- getEps\n ; case lookupUFM (eps_is_boot eps) (moduleName mod) of {\n Nothing -> return emptyModDetails ; -- The typical case\n\n Just (_, False) -> failWithTc moduleLoop ;\n -- Someone below us imported us!\n -- This is a loop with no hi-boot in the way\n\n Just (_mod, True) -> -- There's a hi-boot interface below us\n\n do { read_result <- findAndReadIface\n need mod\n True -- Hi-boot file\n\n ; case read_result of\n Failed err -> failWithTc (elaborate err)\n Succeeded (iface, _path) -> typecheckIface iface\n }}}}\n where\n need = ptext (sLit \"Need the hi-boot interface for\") <+> ppr mod\n <+> ptext (sLit \"to compare against the Real Thing\")\n\n moduleLoop = ptext (sLit \"Circular imports: module\") <+> quotes (ppr mod)\n <+> ptext (sLit \"depends on itself\")\n\n elaborate err = hang (ptext (sLit \"Could not find hi-boot interface for\") <+>\n quotes (ppr mod) <> colon) 4 err\n\\end{code}\n\n\n%************************************************************************\n%* *\n Type and class declarations\n%* *\n%************************************************************************\n\nWhen typechecking a data type decl, we *lazily* (via forkM) typecheck\nthe constructor argument types. 
This is in the hope that we may never\npoke on those argument types, and hence may never need to load the\ninterface files for types mentioned in the arg types.\n\nE.g.\n data Foo.S = MkS Baz.T\nMaybe we can get away without even loading the interface for Baz!\n\nThis is not just a performance thing. Suppose we have\n data Foo.S = MkS Baz.T\n data Baz.T = MkT Foo.S\n(in different interface files, of course).\nNow, first we load and typecheck Foo.S, and add it to the type envt.\nIf we do explore MkS's argument, we'll load and typecheck Baz.T.\nIf we explore MkT's argument we'll find Foo.S already in the envt.\n\nIf we typechecked constructor args eagerly, when loading Foo.S we'd try to\ntypecheck the type Baz.T. So we'd fault in Baz.T... and then need Foo.S...\nwhich isn't done yet.\n\nAll very cunning. However, there is a rather subtle gotcha which bit\nme when developing this stuff. When we typecheck the decl for S, we\nextend the type envt with S, MkS, and all its implicit Ids. Suppose\n(a bug, but it happened) that the list of implicit Ids depended in\nturn on the constructor arg types. Then the following sequence of\nevents takes place:\n * we build a thunk <t_tys> for the constructor arg tys\n * we build a thunk for the extended type environment (depends on <t_tys>)\n * we write the extended type envt into the global EPS mutvar\n\nNow we look something up in the type envt\n * that pulls on <t_tys>\n * which reads the global type envt out of the global EPS mutvar\n * but that depends in turn on <t_tys>\n\nIt's subtle, because it'd work fine if we typechecked the constructor args\neagerly -- they don't need the extended type envt. 
They just get the extended\ntype envt by accident, because they look at it later.\n\nWhat this means is that the implicitTyThings MUST NOT DEPEND on any of\nthe forkM stuff.\n\n\n\\begin{code}\ntcIfaceDecl :: Bool -- True <=> discard IdInfo on IfaceId bindings\n -> IfaceDecl\n -> IfL TyThing\ntcIfaceDecl = tc_iface_decl NoParentTyCon\n\ntc_iface_decl :: TyConParent -- For nested declarations\n -> Bool -- True <=> discard IdInfo on IfaceId bindings\n -> IfaceDecl\n -> IfL TyThing\ntc_iface_decl _ ignore_prags (IfaceId {ifName = occ_name, ifType = iface_type,\n ifIdDetails = details, ifIdInfo = info})\n = do { name <- lookupIfaceTop occ_name\n ; ty <- tcIfaceType iface_type\n ; details <- tcIdDetails ty details\n ; info <- tcIdInfo ignore_prags name ty info\n ; return (AnId (mkGlobalId details name ty info)) }\n\ntc_iface_decl parent _ (IfaceData {ifName = occ_name,\n ifCType = cType,\n ifTyVars = tv_bndrs,\n ifRoles = roles,\n ifCtxt = ctxt, ifGadtSyntax = gadt_syn,\n ifCons = rdr_cons,\n ifRec = is_rec, ifPromotable = is_prom,\n ifAxiom = mb_axiom_name })\n = bindIfaceTyVars_AT tv_bndrs $ \\ tyvars -> do\n { tc_name <- lookupIfaceTop occ_name\n ; tycon <- fixM $ \\ tycon -> do\n { stupid_theta <- tcIfaceCtxt ctxt\n ; parent' <- tc_parent tyvars mb_axiom_name\n ; cons <- tcIfaceDataCons tc_name tycon tyvars rdr_cons\n ; return (buildAlgTyCon tc_name tyvars roles cType stupid_theta\n cons is_rec is_prom gadt_syn parent') }\n ; traceIf (text \"tcIfaceDecl4\" <+> ppr tycon)\n ; return (ATyCon tycon) }\n where\n tc_parent :: [TyVar] -> Maybe Name -> IfL TyConParent\n tc_parent _ Nothing = return parent\n tc_parent tyvars (Just ax_name)\n = ASSERT( isNoParent parent )\n do { ax <- tcIfaceCoAxiom ax_name\n ; let fam_tc = coAxiomTyCon ax\n ax_unbr = toUnbranchedAxiom ax\n -- data families don't have branches:\n branch = coAxiomSingleBranch ax_unbr\n ax_tvs = coAxBranchTyVars branch\n ax_lhs = coAxBranchLHS branch\n tycon_tys = mkTyVarTys tyvars\n subst = mkTopTvSubst 
(ax_tvs `zip` tycon_tys)\n -- The subst matches the tyvars of the TyCon\n -- with those from the CoAxiom. They aren't\n -- necessarily the same, since the two may be\n -- gotten from separate interface-file declarations\n -- NB: ax_tvs may be shorter because of eta-reduction\n -- See Note [Eta reduction for data family axioms] in TcInstDcls\n lhs_tys = substTys subst ax_lhs `chkAppend`\n dropList ax_tvs tycon_tys\n -- The 'lhs_tys' should be 1-1 with the 'tyvars'\n -- but ax_tvs may be shorter because of eta-reduction\n ; return (FamInstTyCon ax_unbr fam_tc lhs_tys) }\n\ntc_iface_decl parent _ (IfaceSyn {ifName = occ_name, ifTyVars = tv_bndrs,\n ifRoles = roles,\n ifSynRhs = mb_rhs_ty,\n ifSynKind = kind })\n = bindIfaceTyVars_AT tv_bndrs $ \\ tyvars -> do\n { tc_name <- lookupIfaceTop occ_name\n ; rhs_kind <- tcIfaceKind kind -- Note [Synonym kind loop]\n ; rhs <- forkM (mk_doc tc_name) $\n tc_syn_rhs mb_rhs_ty\n ; tycon <- buildSynTyCon tc_name tyvars roles rhs rhs_kind parent\n ; return (ATyCon tycon) }\n where\n mk_doc n = ptext (sLit \"Type synonym\") <+> ppr n\n tc_syn_rhs IfaceOpenSynFamilyTyCon = return OpenSynFamilyTyCon\n tc_syn_rhs (IfaceClosedSynFamilyTyCon ax_name)\n = do { ax <- tcIfaceCoAxiom ax_name\n ; return (ClosedSynFamilyTyCon ax) }\n tc_syn_rhs IfaceAbstractClosedSynFamilyTyCon = return AbstractClosedSynFamilyTyCon\n tc_syn_rhs (IfaceSynonymTyCon ty) = do { rhs_ty <- tcIfaceType ty\n ; return (SynonymTyCon rhs_ty) }\n\ntc_iface_decl _parent ignore_prags\n (IfaceClass {ifCtxt = rdr_ctxt, ifName = tc_occ,\n ifTyVars = tv_bndrs, ifRoles = roles, ifFDs = rdr_fds,\n ifATs = rdr_ats, ifSigs = rdr_sigs,\n ifMinDef = mindef_occ, ifRec = tc_isrec })\n-- ToDo: in hs-boot files we should really treat abstract classes specially,\n-- as we do abstract tycons\n = bindIfaceTyVars tv_bndrs $ \\ tyvars -> do\n { tc_name <- lookupIfaceTop tc_occ\n ; traceIf (text \"tc-iface-class1\" <+> ppr tc_occ)\n ; ctxt <- mapM tc_sc rdr_ctxt\n ; traceIf (text 
\"tc-iface-class2\" <+> ppr tc_occ)\n ; sigs <- mapM tc_sig rdr_sigs\n ; fds <- mapM tc_fd rdr_fds\n ; traceIf (text \"tc-iface-class3\" <+> ppr tc_occ)\n ; mindef <- traverse lookupIfaceTop mindef_occ\n ; cls <- fixM $ \\ cls -> do\n { ats <- mapM (tc_at cls) rdr_ats\n ; traceIf (text \"tc-iface-class4\" <+> ppr tc_occ)\n ; buildClass ignore_prags tc_name tyvars roles ctxt fds ats sigs mindef tc_isrec }\n ; return (ATyCon (classTyCon cls)) }\n where\n tc_sc pred = forkM (mk_sc_doc pred) (tcIfaceType pred)\n -- The *length* of the superclasses is used by buildClass, and hence must\n -- not be inside the thunk. But the *content* maybe recursive and hence\n -- must be lazy (via forkM). Example:\n -- class C (T a) => D a where\n -- data T a\n -- Here the associated type T is knot-tied with the class, and\n -- so we must not pull on T too eagerly. See Trac #5970\n\n tc_sig (IfaceClassOp occ dm rdr_ty)\n = do { op_name <- lookupIfaceTop occ\n ; op_ty <- forkM (mk_op_doc op_name rdr_ty) (tcIfaceType rdr_ty)\n -- Must be done lazily for just the same reason as the\n -- type of a data con; to avoid sucking in types that\n -- it mentions unless it's necessary to do so\n ; return (op_name, dm, op_ty) }\n\n tc_at cls (IfaceAT tc_decl defs_decls)\n = do ATyCon tc <- tc_iface_decl (AssocFamilyTyCon cls) ignore_prags tc_decl\n defs <- forkM (mk_at_doc tc) (tc_ax_branches tc defs_decls)\n -- Must be done lazily in case the RHS of the defaults mention\n -- the type constructor being defined here\n -- e.g. 
type AT a; type AT b = AT [b] Trac #8002\n return (tc, defs)\n\n mk_sc_doc pred = ptext (sLit \"Superclass\") <+> ppr pred\n mk_at_doc tc = ptext (sLit \"Associated type\") <+> ppr tc\n mk_op_doc op_name op_ty = ptext (sLit \"Class op\") <+> sep [ppr op_name, ppr op_ty]\n\n tc_fd (tvs1, tvs2) = do { tvs1' <- mapM tcIfaceTyVar tvs1\n ; tvs2' <- mapM tcIfaceTyVar tvs2\n ; return (tvs1', tvs2') }\n\ntc_iface_decl _ _ (IfaceForeign {ifName = rdr_name, ifExtName = ext_name})\n = do { name <- lookupIfaceTop rdr_name\n ; return (ATyCon (mkForeignTyCon name ext_name\n liftedTypeKind)) }\n\ntc_iface_decl _ _ (IfaceAxiom { ifName = ax_occ, ifTyCon = tc\n , ifAxBranches = branches, ifRole = role })\n = do { tc_name <- lookupIfaceTop ax_occ\n ; tc_tycon <- tcIfaceTyCon tc\n ; tc_branches <- tc_ax_branches tc_tycon branches\n ; let axiom = computeAxiomIncomps $\n CoAxiom { co_ax_unique = nameUnique tc_name\n , co_ax_name = tc_name\n , co_ax_tc = tc_tycon\n , co_ax_role = role\n , co_ax_branches = toBranchList tc_branches\n , co_ax_implicit = False }\n ; return (ACoAxiom axiom) }\n\ntc_iface_decl _ _ (IfacePatSyn{ ifName = occ_name\n , ifPatMatcher = matcher_name\n , ifPatWrapper = wrapper_name\n , ifPatIsInfix = is_infix\n , ifPatUnivTvs = univ_tvs\n , ifPatExTvs = ex_tvs\n , ifPatProvCtxt = prov_ctxt\n , ifPatReqCtxt = req_ctxt\n , ifPatArgs = args\n , ifPatTy = pat_ty })\n = do { name <- lookupIfaceTop occ_name\n ; traceIf (ptext (sLit \"tc_iface_decl\") <+> ppr name)\n ; matcher <- tcExt \"Matcher\" matcher_name\n ; wrapper <- case wrapper_name of\n Nothing -> return Nothing\n Just wn -> do { wid <- tcExt \"Wrapper\" wn\n ; return (Just wid) }\n ; bindIfaceTyVars univ_tvs $ \\univ_tvs -> do\n { bindIfaceTyVars ex_tvs $ \\ex_tvs -> do\n { patsyn <- forkM (mk_doc name) $\n do { prov_theta <- tcIfaceCtxt prov_ctxt\n ; req_theta <- tcIfaceCtxt req_ctxt\n ; pat_ty <- tcIfaceType pat_ty\n ; arg_tys <- mapM tcIfaceType args\n ; return $ buildPatSyn name is_infix matcher wrapper\n 
arg_tys univ_tvs ex_tvs prov_theta req_theta pat_ty }\n ; return $ AConLike . PatSynCon $ patsyn }}}\n where\n mk_doc n = ptext (sLit \"Pattern synonym\") <+> ppr n\n tcExt s name = forkM (ptext (sLit s) <+> ppr name) $ tcIfaceExtId name\n\ntc_ax_branches :: TyCon -> [IfaceAxBranch] -> IfL [CoAxBranch]\ntc_ax_branches tc if_branches = foldlM (tc_ax_branch (tyConKind tc)) [] if_branches\n\ntc_ax_branch :: Kind -> [CoAxBranch] -> IfaceAxBranch -> IfL [CoAxBranch]\ntc_ax_branch tc_kind prev_branches\n (IfaceAxBranch { ifaxbTyVars = tv_bndrs, ifaxbLHS = lhs, ifaxbRHS = rhs\n , ifaxbRoles = roles, ifaxbIncomps = incomps })\n = bindIfaceTyVars_AT tv_bndrs $ \\ tvs -> do\n -- The _AT variant is needed here; see Note [CoAxBranch type variables] in CoAxiom\n { tc_lhs <- tcIfaceTcArgs tc_kind lhs -- See Note [Checking IfaceTypes vs IfaceKinds]\n ; tc_rhs <- tcIfaceType rhs\n ; let br = CoAxBranch { cab_loc = noSrcSpan\n , cab_tvs = tvs\n , cab_lhs = tc_lhs\n , cab_roles = roles\n , cab_rhs = tc_rhs\n , cab_incomps = map (prev_branches !!) 
incomps }\n ; return (prev_branches ++ [br]) }\n\ntcIfaceDataCons :: Name -> TyCon -> [TyVar] -> IfaceConDecls -> IfL AlgTyConRhs\ntcIfaceDataCons tycon_name tycon _ if_cons\n = case if_cons of\n IfAbstractTyCon dis -> return (AbstractTyCon dis)\n IfDataFamTyCon -> return DataFamilyTyCon\n IfDataTyCon cons -> do { data_cons <- mapM tc_con_decl cons\n ; return (mkDataTyConRhs data_cons) }\n IfNewTyCon con -> do { data_con <- tc_con_decl con\n ; mkNewTyConRhs tycon_name tycon data_con }\n where\n tc_con_decl (IfCon { ifConInfix = is_infix,\n ifConUnivTvs = univ_tvs, ifConExTvs = ex_tvs,\n ifConOcc = occ, ifConCtxt = ctxt, ifConEqSpec = spec,\n ifConArgTys = args, ifConFields = field_lbls,\n ifConStricts = if_stricts})\n = bindIfaceTyVars univ_tvs $ \\ univ_tyvars -> do\n bindIfaceTyVars ex_tvs $ \\ ex_tyvars -> do\n { traceIf (text \"Start interface-file tc_con_decl\" <+> ppr occ)\n ; name <- lookupIfaceTop occ\n\n -- Read the context and argument types, but lazily for two reasons\n -- (a) to avoid looking tugging on a recursive use of\n -- the type itself, which is knot-tied\n -- (b) to avoid faulting in the component types unless\n -- they are really needed\n ; ~(eq_spec, theta, arg_tys, stricts) <- forkM (mk_doc name) $\n do { eq_spec <- tcIfaceEqSpec spec\n ; theta <- tcIfaceCtxt ctxt\n ; arg_tys <- mapM tcIfaceType args\n ; stricts <- mapM tc_strict if_stricts\n -- The IfBang field can mention\n -- the type itself; hence inside forkM\n ; return (eq_spec, theta, arg_tys, stricts) }\n ; lbl_names <- mapM lookupIfaceTop field_lbls\n\n -- Remember, tycon is the representation tycon\n ; let orig_res_ty = mkFamilyTyConApp tycon\n (substTyVars (mkTopTvSubst eq_spec) univ_tyvars)\n\n ; con <- buildDataCon (pprPanic \"tcIfaceDataCons: FamInstEnvs\" (ppr name))\n name is_infix\n stricts lbl_names\n univ_tyvars ex_tyvars\n eq_spec theta\n arg_tys orig_res_ty tycon\n ; traceIf (text \"Done interface-file tc_con_decl\" <+> ppr name)\n ; return con }\n mk_doc con_name = ptext 
(sLit \"Constructor\") <+> ppr con_name\n\n tc_strict IfNoBang = return HsNoBang\n tc_strict IfStrict = return HsStrict\n tc_strict IfUnpack = return (HsUnpack Nothing)\n tc_strict (IfUnpackCo if_co) = do { co <- tcIfaceCo if_co\n ; return (HsUnpack (Just co)) }\n\ntcIfaceEqSpec :: [(OccName, IfaceType)] -> IfL [(TyVar, Type)]\ntcIfaceEqSpec spec\n = mapM do_item spec\n where\n do_item (occ, if_ty) = do { tv <- tcIfaceTyVar (occNameFS occ)\n ; ty <- tcIfaceType if_ty\n ; return (tv,ty) }\n\\end{code}\n\nNote [Synonym kind loop]\n~~~~~~~~~~~~~~~~~~~~~~~~\nNotice that we eagerly grab the *kind* from the interface file, but\nbuild a forkM thunk for the *rhs* (and family stuff). To see why,\nconsider this (Trac #2412)\n\nM.hs: module M where { import X; data T = MkT S }\nX.hs: module X where { import {-# SOURCE #-} M; type S = T }\nM.hs-boot: module M where { data T }\n\nWhen kind-checking M.hs we need S's kind. But we do not want to\nfind S's kind from (typeKind S-rhs), because we don't want to look at\nS-rhs yet! 
Since S is imported from X.hi, S gets just one chance to\nbe defined, and we must not do that until we've finished with M.T.\n\nSolution: record S's kind in the interface file; now we can safely\nlook at it.\n\n%************************************************************************\n%* *\n Instances\n%* *\n%************************************************************************\n\n\\begin{code}\ntcIfaceInst :: IfaceClsInst -> IfL ClsInst\ntcIfaceInst (IfaceClsInst { ifDFun = dfun_occ, ifOFlag = oflag\n , ifInstCls = cls, ifInstTys = mb_tcs })\n = do { dfun <- forkM (ptext (sLit \"Dict fun\") <+> ppr dfun_occ) $\n tcIfaceExtId dfun_occ\n ; let mb_tcs' = map (fmap ifaceTyConName) mb_tcs\n ; return (mkImportedInstance cls mb_tcs' dfun oflag) }\n\ntcIfaceFamInst :: IfaceFamInst -> IfL FamInst\ntcIfaceFamInst (IfaceFamInst { ifFamInstFam = fam, ifFamInstTys = mb_tcs\n , ifFamInstAxiom = axiom_name } )\n = do { axiom' <- forkM (ptext (sLit \"Axiom\") <+> ppr axiom_name) $\n tcIfaceCoAxiom axiom_name\n -- will panic if branched, but that's OK\n ; let axiom'' = toUnbranchedAxiom axiom'\n mb_tcs' = map (fmap ifaceTyConName) mb_tcs\n ; return (mkImportedFamInst fam mb_tcs' axiom'') }\n\\end{code}\n\n\n%************************************************************************\n%* *\n Rules\n%* *\n%************************************************************************\n\nWe move a IfaceRule from eps_rules to eps_rule_base when all its LHS free vars\nare in the type environment. 
However, remember that typechecking a Rule may\n(as a side effect) augment the type envt, and so we may need to iterate the process.\n\n\\begin{code}\ntcIfaceRules :: Bool -- True <=> ignore rules\n -> [IfaceRule]\n -> IfL [CoreRule]\ntcIfaceRules ignore_prags if_rules\n | ignore_prags = return []\n | otherwise = mapM tcIfaceRule if_rules\n\ntcIfaceRule :: IfaceRule -> IfL CoreRule\ntcIfaceRule (IfaceRule {ifRuleName = name, ifActivation = act, ifRuleBndrs = bndrs,\n ifRuleHead = fn, ifRuleArgs = args, ifRuleRhs = rhs,\n ifRuleAuto = auto })\n = do { ~(bndrs', args', rhs') <-\n -- Typecheck the payload lazily, in the hope it'll never be looked at\n forkM (ptext (sLit \"Rule\") <+> ftext name) $\n bindIfaceBndrs bndrs $ \\ bndrs' ->\n do { args' <- mapM tcIfaceExpr args\n ; rhs' <- tcIfaceExpr rhs\n ; return (bndrs', args', rhs') }\n ; let mb_tcs = map ifTopFreeName args\n ; return (Rule { ru_name = name, ru_fn = fn, ru_act = act,\n ru_bndrs = bndrs', ru_args = args',\n ru_rhs = occurAnalyseExpr rhs',\n ru_rough = mb_tcs,\n ru_auto = auto,\n ru_local = False }) } -- An imported RULE is never for a local Id\n -- or, even if it is (module loop, perhaps)\n -- we'll just leave it in the non-local set\n where\n -- This function *must* mirror exactly what Rules.topFreeName does\n -- We could have stored the ru_rough field in the iface file\n -- but that would be redundant, I think.\n -- The only wrinkle is that we must not be deceived by\n -- type syononyms at the top of a type arg. 
Since\n -- we can't tell at this point, we are careful not\n -- to write them out in coreRuleToIfaceRule\n ifTopFreeName :: IfaceExpr -> Maybe Name\n ifTopFreeName (IfaceType (IfaceTyConApp tc _ )) = Just (ifaceTyConName tc)\n ifTopFreeName (IfaceApp f _) = ifTopFreeName f\n ifTopFreeName (IfaceExt n) = Just n\n ifTopFreeName _ = Nothing\n\\end{code}\n\n\n%************************************************************************\n%* *\n Annotations\n%* *\n%************************************************************************\n\n\\begin{code}\ntcIfaceAnnotations :: [IfaceAnnotation] -> IfL [Annotation]\ntcIfaceAnnotations = mapM tcIfaceAnnotation\n\ntcIfaceAnnotation :: IfaceAnnotation -> IfL Annotation\ntcIfaceAnnotation (IfaceAnnotation target serialized) = do\n target' <- tcIfaceAnnTarget target\n return $ Annotation {\n ann_target = target',\n ann_value = serialized\n }\n\ntcIfaceAnnTarget :: IfaceAnnTarget -> IfL (AnnTarget Name)\ntcIfaceAnnTarget (NamedTarget occ) = do\n name <- lookupIfaceTop occ\n return $ NamedTarget name\ntcIfaceAnnTarget (ModuleTarget mod) = do\n return $ ModuleTarget mod\n\n\\end{code}\n\n\n%************************************************************************\n%* *\n Vectorisation information\n%* *\n%************************************************************************\n\n\\begin{code}\n-- We need access to the type environment as we need to look up information about type constructors\n-- (i.e., their data constructors and whether they are class type constructors). 
If a vectorised\n-- type constructor or class is defined in the same module as where it is vectorised, we cannot\n-- look that information up from the type constructor that we obtained via a 'forkM'ed\n-- 'tcIfaceTyCon' without recursively loading the interface that we are already type checking again\n-- and again and again...\n--\ntcIfaceVectInfo :: Module -> TypeEnv -> IfaceVectInfo -> IfL VectInfo\ntcIfaceVectInfo mod typeEnv (IfaceVectInfo\n { ifaceVectInfoVar = vars\n , ifaceVectInfoTyCon = tycons\n , ifaceVectInfoTyConReuse = tyconsReuse\n , ifaceVectInfoParallelVars = parallelVars\n , ifaceVectInfoParallelTyCons = parallelTyCons\n })\n = do { let parallelTyConsSet = mkNameSet parallelTyCons\n ; vVars <- mapM vectVarMapping vars\n ; let varsSet = mkVarSet (map fst vVars)\n ; tyConRes1 <- mapM (vectTyConVectMapping varsSet) tycons\n ; tyConRes2 <- mapM (vectTyConReuseMapping varsSet) tyconsReuse\n ; vParallelVars <- mapM vectVar parallelVars\n ; let (vTyCons, vDataCons, vScSels) = unzip3 (tyConRes1 ++ tyConRes2)\n ; return $ VectInfo\n { vectInfoVar = mkVarEnv vVars `extendVarEnvList` concat vScSels\n , vectInfoTyCon = mkNameEnv vTyCons\n , vectInfoDataCon = mkNameEnv (concat vDataCons)\n , vectInfoParallelVars = mkVarSet vParallelVars\n , vectInfoParallelTyCons = parallelTyConsSet\n }\n }\n where\n vectVarMapping name\n = do { vName <- lookupOrig mod (mkLocalisedOccName mod mkVectOcc name)\n ; var <- forkM (ptext (sLit \"vect var\") <+> ppr name) $\n tcIfaceExtId name\n ; vVar <- forkM (ptext (sLit \"vect vVar [mod =\") <+>\n ppr mod <> ptext (sLit \"; nameModule =\") <+>\n ppr (nameModule name) <> ptext (sLit \"]\") <+> ppr vName) $\n tcIfaceExtId vName\n ; return (var, (var, vVar))\n }\n -- where\n -- lookupLocalOrExternalId name\n -- = do { let mb_id = lookupTypeEnv typeEnv name\n -- ; case mb_id of\n -- -- id is local\n -- Just (AnId id) -> return id\n -- -- name is not an Id => internal inconsistency\n -- Just _ -> notAnIdErr\n -- -- Id is external\n -- 
Nothing -> tcIfaceExtId name\n -- }\n --\n -- notAnIdErr = pprPanic \"TcIface.tcIfaceVectInfo: not an id\" (ppr name)\n\n vectVar name\n = forkM (ptext (sLit \"vect scalar var\") <+> ppr name) $\n tcIfaceExtId name\n\n vectTyConVectMapping vars name\n = do { vName <- lookupOrig mod (mkLocalisedOccName mod mkVectTyConOcc name)\n ; vectTyConMapping vars name vName\n }\n\n vectTyConReuseMapping vars name\n = vectTyConMapping vars name name\n\n vectTyConMapping vars name vName\n = do { tycon <- lookupLocalOrExternalTyCon name\n ; vTycon <- forkM (ptext (sLit \"vTycon of\") <+> ppr vName) $\n lookupLocalOrExternalTyCon vName\n\n -- Map the data constructors of the original type constructor to those of the\n -- vectorised type constructor \/unless\/ the type constructor was vectorised\n -- abstractly; if it was vectorised abstractly, the workers of its data constructors\n -- do not appear in the set of vectorised variables.\n --\n -- NB: This is lazy! We don't pull at the type constructors before we actually use\n -- the data constructor mapping.\n ; let isAbstract | isClassTyCon tycon = False\n | datacon:_ <- tyConDataCons tycon\n = not $ dataConWrapId datacon `elemVarSet` vars\n | otherwise = True\n vDataCons | isAbstract = []\n | otherwise = [ (dataConName datacon, (datacon, vDatacon))\n | (datacon, vDatacon) <- zip (tyConDataCons tycon)\n (tyConDataCons vTycon)\n ]\n\n -- Map the (implicit) superclass and methods selectors as they don't occur in\n -- the var map.\n vScSels | Just cls <- tyConClass_maybe tycon\n , Just vCls <- tyConClass_maybe vTycon\n = [ (sel, (sel, vSel))\n | (sel, vSel) <- zip (classAllSelIds cls) (classAllSelIds vCls)\n ]\n | otherwise\n = []\n\n ; return ( (name, (tycon, vTycon)) -- (T, T_v)\n , vDataCons -- list of (Ci, Ci_v)\n , vScSels -- list of (seli, seli_v)\n )\n }\n where\n -- we need a fully defined version of the type constructor to be able to extract\n -- its data constructors etc.\n lookupLocalOrExternalTyCon name\n = do { let 
mb_tycon = lookupTypeEnv typeEnv name\n ; case mb_tycon of\n -- tycon is local\n Just (ATyCon tycon) -> return tycon\n -- name is not a tycon => internal inconsistency\n Just _ -> notATyConErr\n -- tycon is external\n Nothing -> tcIfaceTyCon (IfaceTc name)\n }\n\n notATyConErr = pprPanic \"TcIface.tcIfaceVectInfo: not a tycon\" (ppr name)\n\\end{code}\n\n%************************************************************************\n%* *\n Types\n%* *\n%************************************************************************\n\n\\begin{code}\ntcIfaceType :: IfaceType -> IfL Type\ntcIfaceType (IfaceTyVar n) = do { tv <- tcIfaceTyVar n; return (TyVarTy tv) }\ntcIfaceType (IfaceAppTy t1 t2) = do { t1' <- tcIfaceType t1; t2' <- tcIfaceType t2; return (AppTy t1' t2') }\ntcIfaceType (IfaceLitTy l) = do { l1 <- tcIfaceTyLit l; return (LitTy l1) }\ntcIfaceType (IfaceFunTy t1 t2) = do { t1' <- tcIfaceType t1; t2' <- tcIfaceType t2; return (FunTy t1' t2') }\ntcIfaceType (IfaceTyConApp tc tks) = do { tc' <- tcIfaceTyCon tc\n ; tks' <- tcIfaceTcArgs (tyConKind tc') tks\n ; return (mkTyConApp tc' tks') }\ntcIfaceType (IfaceForAllTy tv t) = bindIfaceTyVar tv $ \\ tv' -> do { t' <- tcIfaceType t; return (ForAllTy tv' t') }\n\ntcIfaceTypes :: [IfaceType] -> IfL [Type]\ntcIfaceTypes tys = mapM tcIfaceType tys\n\ntcIfaceTcArgs :: Kind -> [IfaceType] -> IfL [Type]\ntcIfaceTcArgs _ []\n = return []\ntcIfaceTcArgs kind (tk:tks)\n = case splitForAllTy_maybe kind of\n Nothing -> tcIfaceTypes (tk:tks)\n Just (_, kind') -> do { k' <- tcIfaceKind tk\n ; tks' <- tcIfaceTcArgs kind' tks\n ; return (k':tks') }\n\n-----------------------------------------\ntcIfaceCtxt :: IfaceContext -> IfL ThetaType\ntcIfaceCtxt sts = mapM tcIfaceType sts\n\n-----------------------------------------\ntcIfaceTyLit :: IfaceTyLit -> IfL TyLit\ntcIfaceTyLit (IfaceNumTyLit n) = return (NumTyLit n)\ntcIfaceTyLit (IfaceStrTyLit n) = return (StrTyLit n)\n\n-----------------------------------------\ntcIfaceKind :: IfaceKind 
-> IfL Kind -- See Note [Checking IfaceTypes vs IfaceKinds]\ntcIfaceKind (IfaceTyVar n) = do { tv <- tcIfaceTyVar n; return (TyVarTy tv) }\ntcIfaceKind (IfaceAppTy t1 t2) = do { t1' <- tcIfaceKind t1; t2' <- tcIfaceKind t2; return (AppTy t1' t2') }\ntcIfaceKind (IfaceFunTy t1 t2) = do { t1' <- tcIfaceKind t1; t2' <- tcIfaceKind t2; return (FunTy t1' t2') }\ntcIfaceKind (IfaceTyConApp tc ts) = do { tc' <- tcIfaceKindCon tc; ts' <- tcIfaceKinds ts; return (mkTyConApp tc' ts') }\ntcIfaceKind (IfaceForAllTy tv t) = bindIfaceTyVar tv $ \\ tv' -> do { t' <- tcIfaceKind t; return (ForAllTy tv' t') }\ntcIfaceKind t = pprPanic \"tcIfaceKind\" (ppr t) -- IfaceCoApp, IfaceLitTy\n\ntcIfaceKinds :: [IfaceKind] -> IfL [Kind]\ntcIfaceKinds tys = mapM tcIfaceKind tys\n\\end{code}\n\nNote [Checking IfaceTypes vs IfaceKinds]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWe need to know whether we are checking a *type* or a *kind*.\nConsider module M where\n Proxy :: forall k. k -> *\n data T = T\nand consider the two IfaceTypes\n M.Proxy * M.T{tc}\n M.Proxy 'M.T{tc} 'M.T(d}\nThe first is conventional, but in the latter we use the promoted\ntype constructor (as a kind) and data constructor (as a type). 
However,\nthe Name of the promoted type constructor is just M.T; it's the *same name*\nas the ordinary type constructor.\n\nWe could add a \"promoted\" flag to an IfaceTyCon, but that's a bit heavy.\nInstead we use context to distinguish, as in the source language.\n - When checking a kind, we look up M.T{tc} and promote it\n - When checking a type, we look up M.T{tc} and don't promote it\n and M.T{d} and promote it\n See tcIfaceKindCon and tcIfaceKTyCon respectively\n\nThis context business is why we need tcIfaceTcArgs, and tcIfaceApps\n\n\n%************************************************************************\n%* *\n Coercions\n%* *\n%************************************************************************\n\n\\begin{code}\ntcIfaceCo :: IfaceCoercion -> IfL Coercion\ntcIfaceCo (IfaceReflCo r t) = mkReflCo r <$> tcIfaceType t\ntcIfaceCo (IfaceFunCo r c1 c2) = mkFunCo r <$> tcIfaceCo c1 <*> tcIfaceCo c2\ntcIfaceCo (IfaceTyConAppCo r tc cs) = mkTyConAppCo r <$> tcIfaceTyCon tc\n <*> mapM tcIfaceCo cs\ntcIfaceCo (IfaceAppCo c1 c2) = mkAppCo <$> tcIfaceCo c1\n <*> tcIfaceCo c2\ntcIfaceCo (IfaceForAllCo tv c) = bindIfaceTyVar tv $ \\ tv' ->\n mkForAllCo tv' <$> tcIfaceCo c\ntcIfaceCo (IfaceCoVarCo n) = mkCoVarCo <$> tcIfaceCoVar n\ntcIfaceCo (IfaceAxiomInstCo n i cs) = AxiomInstCo <$> tcIfaceCoAxiom n\n <*> pure i\n <*> mapM tcIfaceCo cs\ntcIfaceCo (IfaceUnivCo r t1 t2) = UnivCo r <$> tcIfaceType t1\n <*> tcIfaceType t2\ntcIfaceCo (IfaceSymCo c) = SymCo <$> tcIfaceCo c\ntcIfaceCo (IfaceTransCo c1 c2) = TransCo <$> tcIfaceCo c1\n <*> tcIfaceCo c2\ntcIfaceCo (IfaceInstCo c1 t2) = InstCo <$> tcIfaceCo c1\n <*> tcIfaceType t2\ntcIfaceCo (IfaceNthCo d c) = NthCo d <$> tcIfaceCo c\ntcIfaceCo (IfaceLRCo lr c) = LRCo lr <$> tcIfaceCo c\ntcIfaceCo (IfaceSubCo c) = SubCo <$> tcIfaceCo c\ntcIfaceCo (IfaceAxiomRuleCo ax tys cos) = AxiomRuleCo\n <$> tcIfaceCoAxiomRule ax\n <*> mapM tcIfaceType tys\n <*> mapM tcIfaceCo cos\n\ntcIfaceCoVar :: FastString -> IfL CoVar\ntcIfaceCoVar 
= tcIfaceLclId\n\ntcIfaceCoAxiomRule :: FastString -> IfL CoAxiomRule\ntcIfaceCoAxiomRule n =\n case Map.lookup n typeNatCoAxiomRules of\n Just ax -> return ax\n _ -> pprPanic \"tcIfaceCoAxiomRule\" (ppr n)\n\\end{code}\n\n\n%************************************************************************\n%* *\n Core\n%* *\n%************************************************************************\n\n\\begin{code}\ntcIfaceExpr :: IfaceExpr -> IfL CoreExpr\ntcIfaceExpr (IfaceType ty)\n = Type <$> tcIfaceType ty\n\ntcIfaceExpr (IfaceCo co)\n = Coercion <$> tcIfaceCo co\n\ntcIfaceExpr (IfaceCast expr co)\n = Cast <$> tcIfaceExpr expr <*> tcIfaceCo co\n\ntcIfaceExpr (IfaceLcl name)\n = Var <$> tcIfaceLclId name\n\ntcIfaceExpr (IfaceExt gbl)\n = Var <$> tcIfaceExtId gbl\n\ntcIfaceExpr (IfaceLit lit)\n = do lit' <- tcIfaceLit lit\n return (Lit lit')\n\ntcIfaceExpr (IfaceFCall cc ty) = do\n ty' <- tcIfaceType ty\n u <- newUnique\n dflags <- getDynFlags\n return (Var (mkFCallId dflags u cc ty'))\n\ntcIfaceExpr (IfaceTuple boxity args) = do\n args' <- mapM tcIfaceExpr args\n -- Put the missing type arguments back in\n let con_args = map (Type . 
exprType) args' ++ args'\n return (mkApps (Var con_id) con_args)\n where\n arity = length args\n con_id = dataConWorkId (tupleCon boxity arity)\n\n\ntcIfaceExpr (IfaceLam bndr body)\n = bindIfaceBndr bndr $ \\bndr' ->\n Lam bndr' <$> tcIfaceExpr body\n\ntcIfaceExpr (IfaceApp fun arg)\n = tcIfaceApps fun arg\n\ntcIfaceExpr (IfaceECase scrut ty)\n = do { scrut' <- tcIfaceExpr scrut\n ; ty' <- tcIfaceType ty\n ; return (castBottomExpr scrut' ty') }\n\ntcIfaceExpr (IfaceCase scrut case_bndr alts) = do\n scrut' <- tcIfaceExpr scrut\n case_bndr_name <- newIfaceName (mkVarOccFS case_bndr)\n let\n scrut_ty = exprType scrut'\n case_bndr' = mkLocalId case_bndr_name scrut_ty\n tc_app = splitTyConApp scrut_ty\n -- NB: Won't always succeed (polymorphic case)\n -- but won't be demanded in those cases\n -- NB: not tcSplitTyConApp; we are looking at Core here\n -- look through non-rec newtypes to find the tycon that\n -- corresponds to the datacon in this case alternative\n\n extendIfaceIdEnv [case_bndr'] $ do\n alts' <- mapM (tcIfaceAlt scrut' tc_app) alts\n return (Case scrut' case_bndr' (coreAltsType alts') alts')\n\ntcIfaceExpr (IfaceLet (IfaceNonRec (IfLetBndr fs ty info) rhs) body)\n = do { name <- newIfaceName (mkVarOccFS fs)\n ; ty' <- tcIfaceType ty\n ; id_info <- tcIdInfo False {- Don't ignore prags; we are inside one! 
-}\n name ty' info\n ; let id = mkLocalIdWithInfo name ty' id_info\n ; rhs' <- tcIfaceExpr rhs\n ; body' <- extendIfaceIdEnv [id] (tcIfaceExpr body)\n ; return (Let (NonRec id rhs') body') }\n\ntcIfaceExpr (IfaceLet (IfaceRec pairs) body)\n = do { ids <- mapM tc_rec_bndr (map fst pairs)\n ; extendIfaceIdEnv ids $ do\n { pairs' <- zipWithM tc_pair pairs ids\n ; body' <- tcIfaceExpr body\n ; return (Let (Rec pairs') body') } }\n where\n tc_rec_bndr (IfLetBndr fs ty _)\n = do { name <- newIfaceName (mkVarOccFS fs)\n ; ty' <- tcIfaceType ty\n ; return (mkLocalId name ty') }\n tc_pair (IfLetBndr _ _ info, rhs) id\n = do { rhs' <- tcIfaceExpr rhs\n ; id_info <- tcIdInfo False {- Don't ignore prags; we are inside one! -}\n (idName id) (idType id) info\n ; return (setIdInfo id id_info, rhs') }\n\ntcIfaceExpr (IfaceTick tickish expr) = do\n expr' <- tcIfaceExpr expr\n tickish' <- tcIfaceTickish tickish\n return (Tick tickish' expr')\n\n-------------------------\ntcIfaceApps :: IfaceExpr -> IfaceExpr -> IfL CoreExpr\n-- See Note [Checking IfaceTypes vs IfaceKinds]\ntcIfaceApps fun arg\n = go_down fun [arg]\n where\n go_down (IfaceApp fun arg) args = go_down fun (arg:args)\n go_down fun args = do { fun' <- tcIfaceExpr fun\n ; go_up fun' (exprType fun') args }\n\n go_up :: CoreExpr -> Type -> [IfaceExpr] -> IfL CoreExpr\n go_up fun _ [] = return fun\n go_up fun fun_ty (IfaceType t : args)\n | Just (tv,body_ty) <- splitForAllTy_maybe fun_ty\n = do { t' <- if isKindVar tv -- See Note [Checking IfaceTypes vs IfaceKinds]\n then tcIfaceKind t\n else tcIfaceType t\n ; let fun_ty' = substTyWith [tv] [t'] body_ty\n ; go_up (App fun (Type t')) fun_ty' args }\n go_up fun fun_ty (arg : args)\n | Just (_, fun_ty') <- splitFunTy_maybe fun_ty\n = do { arg' <- tcIfaceExpr arg\n ; go_up (App fun arg') fun_ty' args }\n go_up fun fun_ty args = pprPanic \"tcIfaceApps\" (ppr fun $$ ppr fun_ty $$ ppr args)\n\n-------------------------\ntcIfaceTickish :: IfaceTickish -> IfM lcl (Tickish 
Id)\ntcIfaceTickish (IfaceHpcTick modl ix) = return (HpcTick modl ix)\ntcIfaceTickish (IfaceSCC cc tick push) = return (ProfNote cc tick push)\n\n-------------------------\ntcIfaceLit :: Literal -> IfL Literal\n-- Integer literals deserialise to (LitInteger i )\n-- so tcIfaceLit just fills in the type.\n-- See Note [Integer literals] in Literal\ntcIfaceLit (LitInteger i _)\n = do t <- tcIfaceTyCon (IfaceTc integerTyConName)\n return (mkLitInteger i (mkTyConTy t))\ntcIfaceLit lit = return lit\n\n-------------------------\ntcIfaceAlt :: CoreExpr -> (TyCon, [Type])\n -> (IfaceConAlt, [FastString], IfaceExpr)\n -> IfL (AltCon, [TyVar], CoreExpr)\ntcIfaceAlt _ _ (IfaceDefault, names, rhs)\n = ASSERT( null names ) do\n rhs' <- tcIfaceExpr rhs\n return (DEFAULT, [], rhs')\n\ntcIfaceAlt _ _ (IfaceLitAlt lit, names, rhs)\n = ASSERT( null names ) do\n lit' <- tcIfaceLit lit\n rhs' <- tcIfaceExpr rhs\n return (LitAlt lit', [], rhs')\n\n-- A case alternative is made quite a bit more complicated\n-- by the fact that we omit type annotations because we can\n-- work them out. 
True enough, but its not that easy!\ntcIfaceAlt scrut (tycon, inst_tys) (IfaceDataAlt data_occ, arg_strs, rhs)\n = do { con <- tcIfaceDataCon data_occ\n ; when (debugIsOn && not (con `elem` tyConDataCons tycon))\n (failIfM (ppr scrut $$ ppr con $$ ppr tycon $$ ppr (tyConDataCons tycon)))\n ; tcIfaceDataAlt con inst_tys arg_strs rhs }\n\ntcIfaceDataAlt :: DataCon -> [Type] -> [FastString] -> IfaceExpr\n -> IfL (AltCon, [TyVar], CoreExpr)\ntcIfaceDataAlt con inst_tys arg_strs rhs\n = do { us <- newUniqueSupply\n ; let uniqs = uniqsFromSupply us\n ; let (ex_tvs, arg_ids)\n = dataConRepFSInstPat arg_strs uniqs con inst_tys\n\n ; rhs' <- extendIfaceTyVarEnv ex_tvs $\n extendIfaceIdEnv arg_ids $\n tcIfaceExpr rhs\n ; return (DataAlt con, ex_tvs ++ arg_ids, rhs') }\n\\end{code}\n\n\n\\begin{code}\ntcExtCoreBindings :: [IfaceBinding] -> IfL CoreProgram -- Used for external core\ntcExtCoreBindings [] = return []\ntcExtCoreBindings (b:bs) = do_one b (tcExtCoreBindings bs)\n\ndo_one :: IfaceBinding -> IfL [CoreBind] -> IfL [CoreBind]\ndo_one (IfaceNonRec bndr rhs) thing_inside\n = do { rhs' <- tcIfaceExpr rhs\n ; bndr' <- newExtCoreBndr bndr\n ; extendIfaceIdEnv [bndr'] $ do\n { core_binds <- thing_inside\n ; return (NonRec bndr' rhs' : core_binds) }}\n\ndo_one (IfaceRec pairs) thing_inside\n = do { bndrs' <- mapM newExtCoreBndr bndrs\n ; extendIfaceIdEnv bndrs' $ do\n { rhss' <- mapM tcIfaceExpr rhss\n ; core_binds <- thing_inside\n ; return (Rec (bndrs' `zip` rhss') : core_binds) }}\n where\n (bndrs,rhss) = unzip pairs\n\\end{code}\n\n\n%************************************************************************\n%* *\n IdInfo\n%* *\n%************************************************************************\n\n\\begin{code}\ntcIdDetails :: Type -> IfaceIdDetails -> IfL IdDetails\ntcIdDetails _ IfVanillaId = return VanillaId\ntcIdDetails ty (IfDFunId ns)\n = return (DFunId ns (isNewTyCon (classTyCon cls)))\n where\n (_, _, cls, _) = tcSplitDFunTy ty\n\ntcIdDetails _ (IfRecSelId 
tc naughty)\n = do { tc' <- tcIfaceTyCon tc\n ; return (RecSelId { sel_tycon = tc', sel_naughty = naughty }) }\n\ntcIdInfo :: Bool -> Name -> Type -> IfaceIdInfo -> IfL IdInfo\ntcIdInfo ignore_prags name ty info\n | ignore_prags = return vanillaIdInfo\n | otherwise = case info of\n NoInfo -> return vanillaIdInfo\n HasInfo info -> foldlM tcPrag init_info info\n where\n -- Set the CgInfo to something sensible but uninformative before\n -- we start; default assumption is that it has CAFs\n init_info = vanillaIdInfo\n\n tcPrag :: IdInfo -> IfaceInfoItem -> IfL IdInfo\n tcPrag info HsNoCafRefs = return (info `setCafInfo` NoCafRefs)\n tcPrag info (HsArity arity) = return (info `setArityInfo` arity)\n tcPrag info (HsStrictness str) = return (info `setStrictnessInfo` str)\n tcPrag info (HsInline prag) = return (info `setInlinePragInfo` prag)\n\n -- The next two are lazy, so they don't transitively suck stuff in\n tcPrag info (HsUnfold lb if_unf)\n = do { unf <- tcUnfolding name ty info if_unf\n ; let info1 | lb = info `setOccInfo` strongLoopBreaker\n | otherwise = info\n ; return (info1 `setUnfoldingInfoLazily` unf) }\n\\end{code}\n\n\\begin{code}\ntcUnfolding :: Name -> Type -> IdInfo -> IfaceUnfolding -> IfL Unfolding\ntcUnfolding name _ info (IfCoreUnfold stable if_expr)\n = do { dflags <- getDynFlags\n ; mb_expr <- tcPragExpr name if_expr\n ; let unf_src | stable = InlineStable\n | otherwise = InlineRhs\n ; return $ case mb_expr of\n Nothing -> NoUnfolding\n Just expr -> mkUnfolding dflags unf_src\n True {- Top level -}\n (isBottomingSig strict_sig)\n expr\n }\n where\n -- Strictness should occur before unfolding!\n strict_sig = strictnessInfo info\ntcUnfolding name _ _ (IfCompulsory if_expr)\n = do { mb_expr <- tcPragExpr name if_expr\n ; return (case mb_expr of\n Nothing -> NoUnfolding\n Just expr -> mkCompulsoryUnfolding expr) }\n\ntcUnfolding name _ _ (IfInlineRule arity unsat_ok boring_ok if_expr)\n = do { mb_expr <- tcPragExpr name if_expr\n ; return (case 
mb_expr of\n Nothing -> NoUnfolding\n Just expr -> mkCoreUnfolding InlineStable True expr arity\n (UnfWhen unsat_ok boring_ok))\n }\n\ntcUnfolding name dfun_ty _ (IfDFunUnfold bs ops)\n = bindIfaceBndrs bs $ \\ bs' ->\n do { mb_ops1 <- forkM_maybe doc $ mapM tcIfaceExpr ops\n ; return (case mb_ops1 of\n Nothing -> noUnfolding\n Just ops1 -> mkDFunUnfolding bs' (classDataCon cls) ops1) }\n where\n doc = text \"Class ops for dfun\" <+> ppr name\n (_, _, cls, _) = tcSplitDFunTy dfun_ty\n\\end{code}\n\nFor unfoldings we try to do the job lazily, so that we never type check\nan unfolding that isn't going to be looked at.\n\n\\begin{code}\ntcPragExpr :: Name -> IfaceExpr -> IfL (Maybe CoreExpr)\ntcPragExpr name expr\n = forkM_maybe doc $ do\n core_expr' <- tcIfaceExpr expr\n\n -- Check for type consistency in the unfolding\n whenGOptM Opt_DoCoreLinting $ do\n in_scope <- get_in_scope\n case lintUnfolding noSrcLoc in_scope core_expr' of\n Nothing -> return ()\n Just fail_msg -> do { mod <- getIfModule\n ; pprPanic \"Iface Lint failure\"\n (vcat [ ptext (sLit \"In interface for\") <+> ppr mod\n , hang doc 2 fail_msg\n , ppr name <+> equals <+> ppr core_expr'\n , ptext (sLit \"Iface expr =\") <+> ppr expr ]) }\n return core_expr'\n where\n doc = text \"Unfolding of\" <+> ppr name\n\n get_in_scope :: IfL [Var] -- Totally disgusting; but just for linting\n get_in_scope\n = do { (gbl_env, lcl_env) <- getEnvs\n ; rec_ids <- case if_rec_types gbl_env of\n Nothing -> return []\n Just (_, get_env) -> do\n { type_env <- setLclEnv () get_env\n ; return (typeEnvIds type_env) }\n ; return (varEnvElts (if_tv_env lcl_env) ++\n varEnvElts (if_id_env lcl_env) ++\n rec_ids) }\n\\end{code}\n\n\n\n%************************************************************************\n%* *\n Getting from Names to TyThings\n%* *\n%************************************************************************\n\n\\begin{code}\ntcIfaceGlobal :: Name -> IfL TyThing\ntcIfaceGlobal name\n | Just thing <- 
wiredInNameTyThing_maybe name\n -- Wired-in things include TyCons, DataCons, and Ids\n -- Even though we are in an interface file, we want to make\n -- sure the instances and RULES of this thing (particularly TyCon) are loaded\n -- Imagine: f :: Double -> Double\n = do { ifCheckWiredInThing thing; return thing }\n | otherwise\n = do { env <- getGblEnv\n ; case if_rec_types env of { -- Note [Tying the knot]\n Just (mod, get_type_env)\n | nameIsLocalOrFrom mod name\n -> do -- It's defined in the module being compiled\n { type_env <- setLclEnv () get_type_env -- yuk\n ; case lookupNameEnv type_env name of\n Just thing -> return thing\n Nothing -> pprPanic \"tcIfaceGlobal (local): not found:\"\n (ppr name $$ ppr type_env) }\n\n ; _ -> do\n\n { hsc_env <- getTopEnv\n ; mb_thing <- liftIO (lookupTypeHscEnv hsc_env name)\n ; case mb_thing of {\n Just thing -> return thing ;\n Nothing -> do\n\n { mb_thing <- importDecl name -- It's imported; go get it\n ; case mb_thing of\n Failed err -> failIfM err\n Succeeded thing -> return thing\n }}}}}\n\n-- Note [Tying the knot]\n-- ~~~~~~~~~~~~~~~~~~~~~\n-- The if_rec_types field is used in two situations:\n--\n-- a) Compiling M.hs, which indiretly imports Foo.hi, which mentions M.T\n-- Then we look up M.T in M's type environment, which is splatted into if_rec_types\n-- after we've built M's type envt.\n--\n-- b) In ghc --make, during the upsweep, we encounter M.hs, whose interface M.hi\n-- is up to date. So we call typecheckIface on M.hi. This splats M.T into\n-- if_rec_types so that the (lazily typechecked) decls see all the other decls\n--\n-- In case (b) it's important to do the if_rec_types check *before* looking in the HPT\n-- Because if M.hs also has M.hs-boot, M.T will *already be* in the HPT, but in its\n-- emasculated form (e.g. 
lacking data constructors).\n\ntcIfaceTyCon :: IfaceTyCon -> IfL TyCon\ntcIfaceTyCon (IfaceTc name)\n = do { thing <- tcIfaceGlobal name\n ; case thing of -- A \"type constructor\" can be a promoted data constructor\n -- c.f. Trac #5881\n ATyCon tc -> return tc\n AConLike (RealDataCon dc) -> return (promoteDataCon dc)\n _ -> pprPanic \"tcIfaceTyCon\" (ppr name $$ ppr thing) }\n\ntcIfaceKindCon :: IfaceTyCon -> IfL TyCon\ntcIfaceKindCon (IfaceTc name)\n = do { thing <- tcIfaceGlobal name\n ; case thing of -- A \"type constructor\" here is a promoted type constructor\n -- c.f. Trac #5881\n ATyCon tc\n | isSuperKind (tyConKind tc)\n -> return tc -- Mainly just '*' or 'AnyK'\n | Just prom_tc <- promotableTyCon_maybe tc\n -> return prom_tc\n\n _ -> pprPanic \"tcIfaceKindCon\" (ppr name $$ ppr thing) }\n\ntcIfaceCoAxiom :: Name -> IfL (CoAxiom Branched)\ntcIfaceCoAxiom name = do { thing <- tcIfaceGlobal name\n ; return (tyThingCoAxiom thing) }\n\ntcIfaceDataCon :: Name -> IfL DataCon\ntcIfaceDataCon name = do { thing <- tcIfaceGlobal name\n ; case thing of\n AConLike (RealDataCon dc) -> return dc\n _ -> pprPanic \"tcIfaceExtDC\" (ppr name$$ ppr thing) }\n\ntcIfaceExtId :: Name -> IfL Id\ntcIfaceExtId name = do { thing <- tcIfaceGlobal name\n ; case thing of\n AnId id -> return id\n _ -> pprPanic \"tcIfaceExtId\" (ppr name$$ ppr thing) }\n\\end{code}\n\n%************************************************************************\n%* *\n Bindings\n%* *\n%************************************************************************\n\n\\begin{code}\nbindIfaceBndr :: IfaceBndr -> (CoreBndr -> IfL a) -> IfL a\nbindIfaceBndr (IfaceIdBndr (fs, ty)) thing_inside\n = do { name <- newIfaceName (mkVarOccFS fs)\n ; ty' <- tcIfaceType ty\n ; let id = mkLocalId name ty'\n ; extendIfaceIdEnv [id] (thing_inside id) }\nbindIfaceBndr (IfaceTvBndr bndr) thing_inside\n = bindIfaceTyVar bndr thing_inside\n\nbindIfaceBndrs :: [IfaceBndr] -> ([CoreBndr] -> IfL a) -> IfL a\nbindIfaceBndrs [] 
thing_inside = thing_inside []\nbindIfaceBndrs (b:bs) thing_inside\n = bindIfaceBndr b $ \\ b' ->\n bindIfaceBndrs bs $ \\ bs' ->\n thing_inside (b':bs')\n\n-----------------------\nnewExtCoreBndr :: IfaceLetBndr -> IfL Id\nnewExtCoreBndr (IfLetBndr var ty _) -- Ignoring IdInfo for now\n = do { mod <- getIfModule\n ; name <- newGlobalBinder mod (mkVarOccFS var) noSrcSpan\n ; ty' <- tcIfaceType ty\n ; return (mkLocalId name ty') }\n\n-----------------------\nbindIfaceTyVar :: IfaceTvBndr -> (TyVar -> IfL a) -> IfL a\nbindIfaceTyVar (occ,kind) thing_inside\n = do { name <- newIfaceName (mkTyVarOccFS occ)\n ; tyvar <- mk_iface_tyvar name kind\n ; extendIfaceTyVarEnv [tyvar] (thing_inside tyvar) }\n\nbindIfaceTyVars :: [IfaceTvBndr] -> ([TyVar] -> IfL a) -> IfL a\nbindIfaceTyVars bndrs thing_inside\n = do { names <- newIfaceNames (map mkTyVarOccFS occs)\n ; let (kis_kind, tys_kind) = span isSuperIfaceKind kinds\n (kis_name, tys_name) = splitAt (length kis_kind) names\n -- We need to bring the kind variables in scope since type\n -- variables may mention them.\n ; kvs <- zipWithM mk_iface_tyvar kis_name kis_kind\n ; extendIfaceTyVarEnv kvs $ do\n { tvs <- zipWithM mk_iface_tyvar tys_name tys_kind\n ; extendIfaceTyVarEnv tvs (thing_inside (kvs ++ tvs)) } }\n where\n (occs,kinds) = unzip bndrs\n\nisSuperIfaceKind :: IfaceKind -> Bool\nisSuperIfaceKind (IfaceTyConApp (IfaceTc n) []) = n == superKindTyConName\nisSuperIfaceKind _ = False\n\nmk_iface_tyvar :: Name -> IfaceKind -> IfL TyVar\nmk_iface_tyvar name ifKind\n = do { kind <- tcIfaceKind ifKind\n ; return (Var.mkTyVar name kind) }\n\nbindIfaceTyVars_AT :: [IfaceTvBndr] -> ([TyVar] -> IfL a) -> IfL a\n-- Used for type variable in nested associated data\/type declarations\n-- where some of the type variables are already in scope\n-- class C a where { data T a b }\n-- Here 'a' is in scope when we look at the 'data T'\nbindIfaceTyVars_AT [] thing_inside\n = thing_inside []\nbindIfaceTyVars_AT (b@(tv_occ,_) : bs) 
thing_inside\n = do { mb_tv <- lookupIfaceTyVar tv_occ\n ; let bind_b :: (TyVar -> IfL a) -> IfL a\n bind_b = case mb_tv of\n Just b' -> \\k -> k b'\n Nothing -> bindIfaceTyVar b\n ; bind_b $ \\b' ->\n bindIfaceTyVars_AT bs $ \\bs' ->\n thing_inside (b':bs') }\n\\end{code}\n","avg_line_length":44.4184576485,"max_line_length":120,"alphanum_fraction":0.554646364} {"size":9790,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"Strand Directed Acyclic Graphs\n\nRun this with\n\n $ ghci SDAG.lhs\n\n> module SDAG where\n\n> import qualified Data.List as L\n> import qualified Data.Set as S\n\nThe strands in a skeleton are represented by the natural numbers less\nthan the number of strands in the skeleton. The strands in a\nskeleton are describe by a list of integers, where each element of\nthe list gives the height of its strand.\n\n> type Strands = [Int] -- [Strand height]\n\nA node is a pair of natural numbers. The first number is the node's\nstrand, and the second is the position of the node within the strand.\n\n> type Node = (Int, Int) -- (Strand, Position)\n\nGiven a strand height list, the set of nodes are\n\n> nodes :: Strands -> [Node]\n> nodes heights =\n> [(s, p) | (s, n) <- zip [0..] hts, p <- nats n]\n> where hts = filter (0 <) heights -- remove non-positive heights\n\nwhere nats n is the list of natural numbers less than n.\n\n> nats :: Int -> [Int]\n> nats n = take n [0..]\n\nThus for a skeleton with three strands, of height 2, 3, and 2, the\nnodes are\n\n *SDAG> nodes [2,3,2]\n [(0,0),(0,1),(1,0),(1,1),(1,2),(2,0),(2,1)]\n\nThe edges in an SDAG represent the precedes relation.\n\n> type Edge = (Node, Node) -- Precedes relation\n\nWhen (n0, n1) :: Edge, the message event at n0 precedes the one at n1.\n\nThe strand succession edges are\n\n> successors :: Strands -> [Edge]\n> successors heights =\n> [((s, p), (s, p + 1)) | (s, n) <- zip [0..] 
hts, p <- nats (n - 1)]\n> where hts = filter (0 <) heights -- remove non-positive heights\n\nFor a skeleton with three strands, of height 2, 3, and 2, the strand\nsuccession edges are\n\n *SDAG> successors [2,3,2]\n [((0,0),(0,1)),((1,0),(1,1)),((1,1),(1,2)),((2,0),(2,1))]\n\nA Strand Directed Acyclic Graph (SDAG) is a strand height list and a\nset of edges. It represents an acyclic graph.\n\n> type SDAG = (Strands, [Edge])\n\nThe normalized form of a SDAG contains no strand succession edges or\nelements in its transitive closure.\n\n> normalize :: SDAG -> SDAG\n> normalize (strands, precedes) =\n> if isAcyclic (adj sdag) ns then\n> sdag -- SDAG must be acyclic\n> else\n> error \"SDAG has a cycle\"\n> where\n> ns = nodes strands -- Sort SDAG and remove duplicates\n> sdag = (strands, L.sort (L.nub prec))\n> prec = [(n0, n1) |\n> (n0, n1) <- precedes,\n> elem n0 ns, -- Ensure n0 and n1 are in nodes\n> elem n1 ns, -- Remove strand succession edges\n> not (sameStrands (n0, n1))]\n\n> sameStrands :: Edge -> Bool\n> sameStrands ((s0, _), (s1, _)) = s0 == s1\n\nThe adjacency list representation of an SDAG is used to map a node to\na list of its predecessors. The adjacency list representation is\n[[[Node]]], and lookup involves list indexing. The representation of\nan SDAG includes the strand succession edges.\n\n> adj :: SDAG -> Node -> [Node]\n> adj (strands, precedes) (s, p) =\n> [ strand s h | (s, h) <- zip [0..] strands ] !! s !! p\n> where\n> strand s h = [ entry (s, p) | p <- nats h ]\n> entry n = enrich n [ n0 | (n0, n1) <- precedes, n1 == n ]\n> -- add strand succession edges\n> enrich (s, p) ns\n> | p > 0 = (s, p - 1) : ns\n> | otherwise = ns\n\nIs graph acyclic?\n\n> isAcyclic :: Ord a => (a -> [a]) -> [a] -> Bool\n> isAcyclic adj nodes =\n> all (not . 
backEdge numbering) (S.toList edges)\n> where\n> numbering = dfs adj (S.toList start)\n> -- Remove nodes that have non-zero indegree\n> start = S.difference (S.fromList nodes) (S.map fst edges)\n> edges = foldl f S.empty nodes\n> f edges src = foldl (g src) edges (adj src)\n> g src edges dst = S.insert (dst, src) edges\n\nCompute a depth first search numbering of nodes using postorder.\nWith postorder, only back edges go from a lower number to a higher\none. Assumes nodes, the set of nodes with indegree zero, is not empty.\n\n> dfs :: Ord a => (a -> [a]) -> [a] -> [(a, Int)]\n> dfs adj nodes =\n> alist\n> where\n> (_, alist, _) = foldl po (0, [], S.empty) nodes\n> po a@(num, alist, seen) node\n> | S.member node seen = a\n> | otherwise =\n> (num' + 1, (node, num') : alist', seen'')\n> where -- Search is postorder because nodes at the end of\n> (num', alist', seen'') = -- edges are explored before\n> foldl po (num, alist, seen') nodes' -- the node\n> seen' = S.insert node seen -- Insert node as soon as\n> nodes' = adj node -- it's seen\n\nIs edge a back edge, meaning a cycle has been found? If an edge\ncontains a node that is not in the alist, it means it was not\nvisited during the depth first seach. This can happen when there\nis a strong component that has no edges from other strong\ncomponents to it. 
We report this edge to be a back edge so as to\nget the correct overall result.\n\n> backEdge :: Eq a => [(a, Int)] -> (a, a) -> Bool\n> backEdge alist (node, node') =\n> case (lookup node alist, lookup node' alist) of\n> (Just n, Just n') -> n >= n'\n> _ -> True\n\nCompute the transitive reduction\n\n> reduce :: SDAG -> SDAG\n> reduce g@(strands, precedes) =\n> (strands, filter essential precedes)\n> where\n> essential (dst, src) =\n> loop dst (L.delete dst (adj g src)) [src]\n> loop _ [] _ = True -- No other path found\n> loop dst (n : ns) seen\n> | n == dst = False -- There is another path\n> | elem n seen = loop dst ns seen\n> | otherwise = loop dst (adj g n ++ ns) (n : seen)\n\nCompute the transitive closure\n\n> close :: SDAG -> SDAG\n> close g@(strands, precedes) =\n> normalize (strands, loop prec False prec)\n> where\n> prec = successors strands ++ precedes\n> loop prec False [] = prec\n> loop prec True [] =\n> loop prec False prec -- restart loop\n> loop prec repeat ((n0, n1) : pairs) =\n> inner prec repeat pairs [(n, n1) | n <- adj g n0]\n> inner prec repeat pairs [] =\n> loop prec repeat pairs\n> inner prec repeat pairs (p : rest)\n> | elem p prec = inner prec repeat pairs rest\n> | otherwise = inner (p : prec) True pairs rest\n\nShorthands that check their arguments.\n\n> r :: SDAG -> SDAG\n> r = reduce . normalize\n\n> c :: SDAG -> SDAG\n> c = close . 
normalize\n\nIs x a proper sublist of y?\n\n> sublist :: Eq a => [a] -> [a] -> Bool\n> sublist x y =\n> all (flip elem y) x && -- All x in y\n> any (flip notElem x) y -- Some y not in x\n\nThe list of all sublists\n\n> sublists :: [a] -> [[a]]\n> sublists [] = [[]]\n> sublists (x:xs) = sublists xs ++ map (x:) (sublists xs)\n\nCompute all the SDAGs that are weaker than the given SDAG.\n\n> w :: SDAG -> [SDAG]\n> w sdag =\n> let (s, es) = c sdag in\n> L.nub [r |\n> es0 <- sublists es,\n> let r = reduce (s, es0),\n> let (_, es1) = close (s, snd r),\n> sublist es1 es]\n\nExamples\n\n *SDAG> w ([2,2], [((0,0),(1, 1))])\n [([2,2],[])]\n\n *SDAG> w ([2,2], [((0,1),(1, 0))])\n [([2,2],[]),\n ([2,2],[((0,1),(1,1))]),\n ([2,2],[((0,0),(1,1))]),\n ([2,2],[((0,0),(1,0))]),\n ([2,2],[((0,0),(1,0)),((0,1),(1,1))])]\n\nCompute the SDAGs in w x that are not weaker than a SDAG in w x.\n\n> m :: SDAG -> [SDAG]\n> m sdag =\n> map reduce (filter maximal sdags)\n> where\n> sdags = map close (w sdag) -- All weaker SDAGs\n> maximal sdag = -- Is SDAG not weaker than some other\n> not (any (weaker sdag) sdags)\n> weaker (s0, es0) (s1, es1) =\n> s0 == s1 && sublist es0 es1\n\nExamples\n\n *SDAG> m ([2,2], [((0,0),(1, 1))])\n [([2,2],[])]\n\n *SDAG> m ([2,2], [((0,1),(1, 0))])\n [([2,2],[((0,0),(1,0)),((0,1),(1,1))])]\n\nCompute the SDAGs in w x that are not weaker than a SDAG in w x using\nthe implemented algorithm.\n\n> m' :: SDAG -> [SDAG]\n> m' sdag =\n> let (s, es) = c sdag in\n> L.nub [reduce (s, L.delete e es) | e <- snd (reduce sdag)]\n\nRead and show for CPSA orderings\n\n> data O = O [Edge]\n\n> instance Show O where\n> showsPrec _ (O es) =\n> showString \"(precedes\" . showl es\n> where\n> showl [] = showChar ')'\n> showl (e : es) = showChar ' ' . shows (E e) . 
showl es\n\n> instance Read O where\n> readsPrec _ s0 =\n> [(O ord, s3) |\n> (\"(\", s1) <- lex s0,\n> (\"precedes\", s2) <- lex s1,\n> (ord, s3) <- readl s2]\n> where\n> readl s0 = [([], s1) |\n> (\")\", s1) <- lex s0] ++\n> [(e : es, s2) |\n> (E e, s1) <- reads s0,\n> (es, s2) <- readl s1]\n\nA guess at the strand height list associated with some edges\n\n> strands :: [Edge] -> Strands\n> strands es =\n> [height s | s <- nats n]\n> where\n> n = 1 + foldl max 0 (map fst nodes)\n> nodes = L.nub (foldl (\\ns (n0, n1) -> n0 : n1 : ns) [] es)\n> height s = 1 + foldl max 0 [p | (s', p) <- nodes, s' == s]\n\nRead and show for edges\n\n> data E = E Edge\n\n> instance Show E where\n> showsPrec _ (E (n0, n1)) =\n> showChar '(' . shows (N n0) . showChar ' ' .\n> shows (N n1) . showChar ')'\n\n> instance Read E where\n> readsPrec _ s0 =\n> [(E (n0, n1), s4) |\n> (\"(\", s1) <- lex s0,\n> (N n0, s2) <- reads s1,\n> (N n1, s3) <- reads s2,\n> (\")\", s4) <- lex s3]\n\nRead and show for nodes\n\n> data N = N Node\n\n> instance Show N where\n> showsPrec _ (N (s, p)) =\n> showChar '(' . shows s . showChar ' ' . shows p . 
showChar ')'\n\n> instance Read N where\n> readsPrec _ s0 =\n> [(N (s, p), s4) |\n> (\"(\", s1) <- lex s0,\n> (s, s2) <- reads s1,\n> (p, s3) <- reads s2,\n> (\")\", s4) <- lex s3]\n","avg_line_length":30.786163522,"max_line_length":73,"alphanum_fraction":0.5435137896} {"size":16563,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"> -- |\n> -- Module : RecPoly\n> -- Description : Yorgey's CIS 194, 03 \n> -- Copyright : erlnow 2020 - 2030\n> -- License : BSD3\n> --\n> -- Maintainer : erlestau@gmail.com\n> -- Stability : experimental\n> -- Portability : unknown\n> --\n> -- Recursion Patterns, polymorphism, and the \"Prelude\"\n> --\n> -- Downloaded lesson file 'doc\/03-rec-poly.lhs' as a module.\n> -- Added comments and, probably test, and signature of functions\n> -- and data definitions.\n\n> module RecPoly where \n\nRecursion pattern, polymorphism, and the Prelude\n================================================\n\nCIS 194 Week 3 \n28 January 2013\n\nWhile completing HW 2, you probably spent a lot of time writing\nexplicitly recursive functions. At this point, you might think that's\nwhat Haskell programmers spend most of their time doing. In fact,\nexperienced Haskell programmers *hardly ever* write recursive\nfunctions!\n\nHow is this possible? The key is to notice that although recursive\nfunctions can theoretically do pretty much anything, in practice there\nare certain common patterns that come up over and over again. By\nabstracting out these patterns into library functions, programmers can\nleave the low-level details of actually doing recursion to these\nfunctions, and think about problems at a higher level---that's the\ngoal of *wholemeal programming*.\n\nRecursion patterns\n------------------\n\nRecall our simple definition of lists of `Int` values:\n\n> -- *Recursion patterns\n>\n> -- $recPatt\n>\n> -- |IntList, an list of 'Int'. 
'IntList' can be 'Empty' or a\n> -- 'Cons' of an 'Int' and an 'IntList'.\n> data IntList = Empty | Cons Int IntList\n> deriving Show\n\nWhat sorts of things might we want to do with an `IntList`? Here are\na few common possibilities:\n\n * Perform some operation on every element of the list\n\n * Keep only some elements of the list, and throw others away, based\n on a test\n\n * \"Summarize\" the elements of the list somehow (find their sum,\n product, maximum...).\n\n * You can probably think of others!\n\n**Map**\n\nLet's think about the first one (\"perform some operation on every\nelement of the list\"). For example, we could add one to every element\nin a list:\n\n \n\n \n\nOr we could ensure that every element in a list is nonnegative by\ntaking the absolute value:\n\n> -- |Given an 'IntList', returns another 'IntList' with the absolute\n> -- value of all its elements ('abs')\n> absAll :: IntList -> IntList\n> absAll Empty = Empty\n> absAll (Cons x xs) = Cons (abs x) (absAll xs)\n\nOr we could square every element:\n\n> -- |Given an 'IntList', returns another 'IntList' with all its values\n> -- squared\n> squareAll :: IntList -> IntList\n> squareAll Empty = Empty\n> squareAll (Cons x xs) = Cons (x*x) (squareAll xs)\n\nAt this point, big flashing red lights and warning bells should be\ngoing off in your head. These three functions look way too similar.\nThere ought to be some way to abstract out the commonality so we don't\nhave to repeat ourselves!\n\nThere is indeed a way---can you figure it out? Which parts are the\nsame in all three examples and which parts change?\n\nThe thing that changes, of course, is the operation we want to perform\non each element of the list. We can specify this operation as a\n*function* of type `Int -> Int`. 
Here is where we begin to see how\nincredibly useful it is to be able to pass functions as inputs to\nother functions!\n\n \n\n \n\nWe can now use `mapIntList` to implement `addOneToAll`, `absAll`, and\n`squareAll`:\n\n> -- |An example of a 'IntList'\n> exampleList :: IntList\n> exampleList = Cons (-1) (Cons 2 (Cons (-6) Empty))\n>\n> -- |Helper function: add one to its argument\n> addOne :: Int -> Int\n> addOne x = x + 1\n> -- |Helper function square its argument\n> square :: Int -> Int\n> square x = x * x\n\n mapIntList addOne exampleList\n mapIntList abs exampleList\n mapIntList square exampleList\n\n**Filter**\n\nAnother common pattern is when we want to keep only some elements of a\nlist, and throw others away, based on a test. For example, we might\nwant to keep only the positive numbers:\n\n \n\n \n\n> -- |Return another 'IntList' with only even values\n> keepOnlyEven :: IntList -> IntList\n> keepOnlyEven Empty = Empty\n> keepOnlyEven (Cons x xs)\n> | even x = Cons x (keepOnlyEven xs)\n> | otherwise = keepOnlyEven xs\n\nHow can we generalize this pattern? What stays the same, and what do\nwe need to abstract out?\n\n \n\n \n\n**Fold**\n\nThe final pattern we mentioned was to \"summarize\" the elements of the\nlist; this is also variously known as a \"fold\" or \"reduce\" operation.\nWe'll come back to this next week. In the meantime, you might want to\nthink about how to abstract out this pattern!\n\n> -- **fold\n> --\n> -- $fold\n> --\n> -- next week\n\nPolymorphism\n------------\n\nWe've now written some nice, general functions for mapping and\nfiltering over lists of `Int`s. But we're not done generalizing!\nWhat if we wanted to filter lists of `Integer`s? or `Bool`s? Or\nlists of lists of trees of stacks of `String`s? We'd have to make a\nnew data type and a new function for each of these cases. Even worse,\nthe *code would be exactly the same*; the only thing that would be\ndifferent is the *type signatures*. 
Can't Haskell help us out here?\n\nOf course it can! Haskell supports *polymorphism* for both data types\nand functions. The word \"polymorphic\" comes from Greek (\u03c0\u03bf\u03bb\u1f7b\u03bc\u03bf\u03c1\u03c6\u03bf\u03c2)\nand means \"having many forms\": something which is polymorphic works\nfor multiple types.\n\n**Polymorphic data types**\n\nFirst, let's see how to declare a polymorphic data type.\n\n> -- *Polymorphism\n> --\n> -- **Polymorphic data type\n> --\n> -- $polymorphism\n>\n> -- |A polymorphic 'List'. Has a constructor for an empty list, 'E',\n> -- and constructor for build recursively a list, 'C'. A polymorphic\n> -- type expects one or more types as arguments.\n> data List t = E | C t (List t) deriving Show\n\n(We can't reuse `Empty` and `Cons` since we already used those for the\nconstructors of `IntList`, so we'll use `E` and `C` instead.) Whereas\nbefore we had `data IntList = ...`, we now have `data List t = ...`\nThe `t` is a *type variable* which can stand for any type. (Type\nvariables must start with a lowercase letter, whereas types must start\nwith uppercase.) `data List t = ...` means that the `List` type is\n*parameterized* by a type, in much the same way that a function can be\nparameterized by some input.\n\nGiven a type `t`, a `(List t)` consists of either the constructor `E`,\nor the constructor `C` along with a value of type `t` and another\n`(List t)`. Here are some examples:\n\n> -- |Example a 'List' of 'Int'\n> lst1 :: List Int\n> lst1 = C 3 (C 5 (C 2 E))\n>\n> -- |Example a 'List' of 'Char'\n> lst2 :: List Char\n> lst2 = C 'x' (C 'y' (C 'z' E))\n>\n> -- |Example a 'List' of 'Bool'\n> lst3 :: List Bool\n> lst3 = C True (C False E)\n\n**Polymorphic functions**\n\nNow, let's generalize `filterIntList` to work over our new polymorphic\n`List`s. 
We can just take the code of `filterIntList` and replace `Empty`\nby `E` and `Cons` by `C`.\n\n> -- *Polymorphic functions\n> -- \n> -- $polyfunc\n> --\n>\n> -- |Return a 'List' with the elements that make a\n> -- test 'True'\n> filterList :: (t -> Bool) -> List t -> List t\n> filterList _ E = E\n> filterList p (C x xs)\n> | p x = C x (filterList p xs)\n> | otherwise = filterList p xs\n\nNow, what is the type of `filterList`? Let's see what type `ghci`\ninfers for it:\n\n *Main> :t filterList\n filterList :: (t -> Bool) -> List t -> List t \n\nWe can read this as: \"for any type `t`,\n`filterList` takes a function from `t` to `Bool`, and a list of\n`t`'s, and returns a list of `t`'s.\"\n\nWhat about generalizing `mapIntList`? What type should we give to a\nfunction `mapList` that applies a function to every element in a\n`List t`?\n\nOur first idea might be to give it the type\n\n~~~~ {.haskell}\nmapList :: (t -> t) -> List t -> List t\n~~~~\n\nThis works, but it means that when applying `mapList`, we always get a\nlist with the same type of elements as the list we started with. This\nis overly restrictive: we'd like to be able to do things like `mapList\nshow` in order to convert, say, a list of `Int`s into a list of\n`String`s. Here, then, is the most general possible type for\n`mapList`, along with an implementation:\n\n> -- |apply a function to all elements in a 'List'\n> mapList :: (a -> b) -> List a -> List b\n> mapList _ E = E\n> mapList f (C x xs) = C (f x) (mapList f xs)\n\nOne important thing to remember about polymorphic functions is that\n**the caller gets to pick the types**. When you write a polymorphic\nfunction, it must work for every possible input type. 
This---together\nwith the fact that Haskell has no way to directly make decisions\nbased on what type something is---has some interesting implications\nwhich we'll explore later.\n\nThe Prelude\n-----------\n\nThe `Prelude` is a module with a bunch of standard definitions that\ngets implicitly imported into every Haskell program. It's worth\nspending some time [skimming through its\ndocumentation](https:\/\/hackage.haskell.org\/package\/base\/docs\/Prelude.html)\nto familiarize oneself with the tools that are available.\n\nOf course, polymorphic lists are defined in the `Prelude`, along with\n[many useful polymorphic functions for working with\nthem](https:\/\/hackage.haskell.org\/package\/base-4.14.0.0\/docs\/Prelude.html#g:13).\nFor example, `filter` and `map` are the counterparts to our\n`filterList` and `mapList`. In fact, the [`Data.List` module contains\nmany more list functions\nstill](https:\/\/hackage.haskell.org\/package\/base\/docs\/Data-List.html). \n\nAnother useful polymorphic type to know is `Maybe`, defined as\n\n~~~~ {.haskell}\ndata Maybe a = Nothing | Just a\n~~~~\n\nA value of type `Maybe a` either contains a value of type `a` (wrapped\nin the `Just` constructor), or it is `Nothing` (representing some sort\nof failure or error). The [`Data.Maybe` module has functions for\nworking with `Maybe`\nvalues](http:\/\/hackage.haskell.org\/package\/base\/docs\/Data-Maybe.html).\n\nTotal and partial functions\n---------------------------\n\nConsider this polymorphic type:\n\n~~~~ {.haskell}\n[a] -> a\n~~~~\n\nWhat functions could have such a type? The type says that given a\nlist of things of type `a`, the function must produce some value of\ntype `a`. For example, the Prelude function `head` has this type. \n\n...But what happens if `head` is given an empty list as input? Let's\nlook at the [source\ncode](http:\/\/www.haskell.org\/ghc\/docs\/latest\/html\/libraries\/base\/src\/GHC-List.html#head)\nfor `head`... \n\nIt crashes! 
There's nothing else it possibly could do, since it must\nwork for *all* types. There's no way to make up an element of an\narbitrary type out of thin air.\n\n`head` is what is known as a *partial function*: there are certain\ninputs for which `head` will crash. Functions which have certain\ninputs that will make them recurse infinitely are also called partial.\nFunctions which are well-defined on all possible inputs are known as\n*total functions*.\n\nIt is good Haskell practice to avoid partial functions as much as\npossible. Actually, avoiding partial functions is good practice in\n*any* programming language---but in most of them it's ridiculously\nannoying. Haskell tends to make it quite easy and sensible.\n\n**`head` is a mistake!** It should not be in the `Prelude`. Other\npartial `Prelude` functions you should almost never use include\n`tail`, `init`, `last`, and `(!!)`. From this point on, using one of\nthese functions on a homework assignment will lose style points!\n\nWhat to do instead?\n\n**Replacing partial functions**\n\nOften partial functions like `head`, `tail`, and so on can be replaced\nby pattern-matching. Consider the following two definitions:\n\n\n> -- | doStuff1: sums the first and second elements of an 'Int' list.\n> -- Returns 0 if the list does not have at least two elements.\n> -- First version: uses 'head' and 'tail'\n> doStuff1 :: [Int] -> Int\n> doStuff1 [] = 0\n> doStuff1 [_] = 0\n> doStuff1 xs = head xs + (head (tail xs)) \n\n> -- | doStuff2: sums the first and second elements of an 'Int' list.\n> -- Returns 0 if the list does not have at least two elements.\n> -- Second version: uses pattern matching.\n> doStuff2 :: [Int] -> Int\n> doStuff2 [] = 0\n> doStuff2 [_] = 0\n> doStuff2 (x1:x2:_) = x1 + x2\n\nThese functions compute exactly the same result, and they are both\ntotal. But only the second one is *obviously* total, and it is much\neasier to read anyway.\n\n**Writing partial functions**\n\nWhat if you find yourself *writing* a partial function? 
There are two\napproaches to take. The first is to change the output type of the\nfunction to indicate the possible failure. Recall the definition of `Maybe`:\n\n~~~~ {.haskell}\ndata Maybe a = Nothing | Just a\n~~~~\n\nNow, suppose we were writing `head`. We could rewrite it safely like\nthis:\n\n> -- | A safe head version. Returns 'Nothing' if the list is empty. \n> safeHead :: [a] -> Maybe a\n> safeHead [] = Nothing\n> safeHead (x:_) = Just x\n\nIndeed, there is exactly such a function defined in the [`safe`\npackage](http:\/\/hackage.haskell.org\/package\/safe).\n\nWhy is this a good idea?\n\n1. `safeHead` will never crash. \n2. The type of `safeHead` makes it obvious that it may fail for some\n inputs.\n3. The type system ensures that users of `safeHead` must appropriately\n check the return value of `safeHead` to see whether they got a value\n or `Nothing`.\n\nIn some sense, `safeHead` is still \"partial\"; but we have reflected\nthe partiality in the type system, so it is now safe. The goal is to\nhave the types tell us as much as possible about the behavior of\nfunctions.\n\nOK, but what if we know that we will only use `head` in situations\nwhere we are *guaranteed* to have a non-empty list? In such a\nsituation, it is really annoying to get back a `Maybe a`, since we\nhave to expend effort dealing with a case which we \"know\" cannot\nactually happen. \n\nThe answer is that if some condition is really *guaranteed*, then the\ntypes ought to reflect the guarantee! Then the compiler can enforce\nyour guarantees for you. For example:\n\n> -- |A safe version of a list that has at least one element.\n> data NonEmptyList a = NEL a [a] deriving Show\n>\n> -- |'NonEmptyList' to List\n> nelToList :: NonEmptyList a -> [a]\n> nelToList (NEL x xs) = x:xs\n>\n> -- |List to 'NonEmptyList'. 
If the list is empty, returns 'Nothing'; otherwise\n> -- returns a 'NonEmptyList' wrapped in 'Just'\n> listToNel :: [a] -> Maybe (NonEmptyList a)\n> listToNel [] = Nothing\n> listToNel (x:xs) = Just $ NEL x xs\n>\n> -- |'head' for 'NonEmptyList'.\n> headNEL :: NonEmptyList a -> a\n> headNEL (NEL a _) = a\n>\n> -- |'tail' for 'NonEmptyList'\n> tailNEL :: NonEmptyList a -> [a]\n> tailNEL (NEL _ as) = as\n\nYou might think doing such things is only for chumps who are not\ncoding super-geniuses like you. Of course, *you* would never make a\nmistake like passing an empty list to a function which expects only\nnon-empty ones. Right? Well, there's definitely a chump involved,\nbut it's not who you think.\n","avg_line_length":32.2865497076,"max_line_length":88,"alphanum_fraction":0.6987864517} {"size":217,"ext":"lhs","lang":"Literate Haskell","max_stars_count":4.0,"content":"#!\/usr\/bin\/env runhaskell\n\nIn principle, we could do with a lot less than autoconfUserHooks, but simpleUserHooks\nis not running 'configure'.\n\n>import Distribution.Simple\n>main = defaultMainWithHooks autoconfUserHooks\n","avg_line_length":27.125,"max_line_length":85,"alphanum_fraction":0.8110599078} {"size":9307,"ext":"lhs","lang":"Literate Haskell","max_stars_count":14.0,"content":"> {-# OPTIONS_HADDOCK show-extensions #-}\n> {-# Language CPP #-}\n\n#if !defined(MIN_VERSION_base)\n# define MIN_VERSION_base(a,b,c) 0\n#endif\n\n> {-|\n> Module : Traversals\n> Copyright : (c) 2017-2019 Jim Rogers and Dakotah Lambert\n> License : MIT\n>\n> Find paths through an automaton.\n> -}\n> module LTK.Traversals\n> ( Path(..)\n> , word\n> , initialsPaths\n> , initialsNDPath\n> , rejectingPaths\n> , acyclicPaths\n> , extensions\n> , boundedCycleExtensions\n> , nondeterministicAcyclicExtensions\n> ) where\n\n> import Data.Monoid (Monoid, mappend, mconcat, mempty)\n#if MIN_VERSION_base(4,9,0)\n> import Data.Semigroup (Semigroup, (<>))\n#endif\n> import Data.Set (Set)\n\n> import LTK.FSA\n\nA Path is\n* a 
sequence of labels in inverse order of edges in the path\n* the terminal state of the path\n --- This is a Maybe (State n) allowing for an adjoined identity (empty path)\n making Path a monoid wrt concatenation (append).\n* the multiset of states along the path\n --- this allows us to determine how many times a state has been entered on\n the path, which is exactly the number of times a cycle starting and\n ending at that state has been traversed.\n* the length of the path (depth of the terminal state)\n\n> -- |A path through an 'FSA'.\n> data Path n e\n> = Path\n> { -- |Edge labels are gathered in reverse order,\n> -- so 'labels' is a reversed string.\n> labels :: [Symbol e]\n> -- |Current 'State', if any.\n> , endstate :: Maybe (State n)\n> -- |States seen so far, with multiplicity.\n> , stateMultiset :: Multiset (State n)\n> -- |The number of edges taken so far.\n> , depth :: Integer\n> }\n> deriving (Eq, Show)\n\n> -- |The reversal of the 'labels' of the 'Path'.\n> word :: Path n e -> [Symbol e]\n> word = Prelude.reverse . labels\n\nIn order to have a Multiset of Path, Path must be Ord:\n\n> instance (Ord e, Ord n) => Ord (Path n e)\n> where compare p1 p2\n> = mconcat [compare d1 d2, compare l1 l2, compare e1 e2]\n> where (e1, l1, d1) = (endstate p1, labels p1, depth p1)\n> (e2, l2, d2) = (endstate p2, labels p2, depth p2)\n\n#if MIN_VERSION_base(4,9,0)\nSemigroup instance to satisfy base-4.9\n\n> instance (Ord n) => Semigroup (Path n e) where\n> (<>) = mappend\n\n#endif\n\n> instance (Ord n) => Monoid (Path n e) where\n> mempty = Path [] Nothing empty 0\n> mappend (Path xs1 q1 qs1 d1) (Path xs2 q2 qs2 d2)\n> = Path { labels = xs1 ++ xs2\n> , endstate = maybe id (const . 
Just) q1 q2\n> , stateMultiset = qs1 `union` qs2\n> , depth = d1 + d2\n> }\n\nThe extensions of a path p are paths extending p by a single edge\n\n> extend :: (Ord e, Ord n) =>\n> Path n e -> Set (Transition n e) -> Set (Path n e)\n> extend p = tmap (\\t ->\n> Path { labels = (edgeLabel t : labels p)\n> , endstate = (Just (destination t))\n> , stateMultiset\n> = (insert (destination t) (stateMultiset p))\n> , depth = (depth p + 1)\n> }\n> )\n\nThe nondeterministic extensions of a path p are paths extending p\nby a single edge nondeterminstically. That is, the states of the\npath are sets.\n\n> nondeterministicExtend :: (Ord e, Ord n) =>\n> Path (Set n) e -> Set (Transition n e) ->\n> Set (Path (Set n) e)\n> nondeterministicExtend p ts\n> | isEmpty ts = singleton p\n> | otherwise\n> = tmap\n> (\\xs ->\n> let newState = State $ collapse f empty xs\n> in p { labels = chooseOne (tmap edgeLabel xs) : labels p\n> , endstate = Just newState\n> , stateMultiset\n> = insert newState (stateMultiset p)\n> , depth = depth p + 1\n> }\n> ) tgroups\n> where tgroups = partitionBy edgeLabel ts\n> connectable\n> = maybe (const False) (isIn . nodeLabel) (endstate p)\n> f x = if connectable . nodeLabel $ source x\n> then insert (nodeLabel $ destination x)\n> else id\n\n> -- |Paths extending a given path by a single edge.\n> extensions :: (Ord e, Ord n) =>\n> FSA n e -> Path n e -> Set (Path n e)\n> extensions fsa p\n> = extend p . keep ((== endstate p) . Just . source) $ transitions fsa\n\nAcyclic extensions of a path are extensions other than back-edges\n\n> acyclicExtensions :: (Ord e, Ord n) => FSA n e -> Path n e -> Set (Path n e)\n> acyclicExtensions fsa p\n> = extend p .\n> keep (both\n> (isNotIn (stateMultiset p) . destination)\n> ((== endstate p) . Just . 
source)) $\n transitions fsa\n\n> -- |The extensions of a non-deterministic path other than back-edges\n> nondeterministicAcyclicExtensions :: (Ord e, Ord n) =>\n> FSA n e -> Path (Set n) e ->\n> Set (Path (Set n) e)\n> nondeterministicAcyclicExtensions fsa\n> = keep\n> (\\a ->\n> maybe True ((<= 1) . multiplicity (stateMultiset a)) (endstate a)\n> ) . flip nondeterministicExtend (transitions fsa)\n\nboundedCycExtensions are extensions other than back-edges to a state that\nhas been visited more than bound times. This gives traversals that will\nfollow cycles at most bound times. Note that the qualification is that\nthe multiplicity of the state is $\\leq$ bound; states that have not been\nvisited have multiplicity 0.\n\n> -- |Extensions other than back-edges to a state that has been visited\n> -- more than a given number of times.\n> boundedCycleExtensions :: (Ord e, Ord n) =>\n> FSA n e -> Integer -> Path n e -> Set (Path n e)\n> boundedCycleExtensions fsa b p\n> = extend p .\n> keep (both\n> ((== endstate p) . Just . source)\n> ((<= b) . multiplicity (stateMultiset p) . destination)) $\n> transitions fsa\n\n> -- |Initial open list for traversal from initial states.\n> initialsPaths :: (Ord e, Ord n) => FSA n e -> Set (Path n e)\n> initialsPaths = tmap iPath . 
initials\n> where iPath s = Path [] (Just s) (singleton s) 0\n\n> -- |Initial open list for non-deterministic traversal from initial states.\n> initialsNDPath :: (Ord e, Ord n) => FSA n e -> Path (Set n) e\n> initialsNDPath fsa = Path { labels = empty\n> , endstate = Just istate\n> , stateMultiset = singleton istate\n> , depth = 0\n> }\n> where istate = State $ tmap nodeLabel (initials fsa)\n\n> truth :: a -> b -> Bool\n> truth = const (const True)\n\ntraversalQDFS:\n* First argument is a predicate that is used to filter paths before\n adding them to the closed list\n* Remaining args are the FSA, a depth bound, the open list, and\n the closed list\n* Paths are not added to the open list unless their depth is <= bound\n\n> traversalQDFS :: (Ord e, Ord n) =>\n> (FSA n e -> Path n e -> Bool) ->\n> FSA n e ->\n> Integer ->\n> Set (Path n e) ->\n> Set (Path n e) ->\n> Set (Path n e)\n> traversalQDFS qf fsa bound open closed\n> | isEmpty open = closed\n> | depth p > bound = traversalQDFS qf fsa bound ps addIf\n> | otherwise = traversalQDFS qf fsa bound\n> (union ps $ extensions fsa p)\n> addIf\n> where (p, ps) = choose open\n> addIf\n> | qf fsa p = insert p closed\n> | otherwise = closed\n\nacyclicPathsQ\nall paths from the initial open list that are acyclic \/ and are restricted to\nnodes that satisfy the given predicate\n\n> acyclicPathsQ :: (Ord e, Ord n) =>\n> (FSA n e -> Path n e -> Bool) -> -- predicate\n> FSA n e -> -- graph\n> Set (Path n e) -> -- open\n> Set (Path n e) -> -- closed\n> Set (Path n e)\n> acyclicPathsQ qf fsa open closed\n> | open == empty = closed\n> | otherwise = acyclicPathsQ qf fsa\n> (union ps $ acyclicExtensions fsa p)\n> addIf\n> where (p, ps) = choose open\n> addIf\n> | qf fsa p = insert p closed\n> | otherwise = closed\n\n> -- |All paths from 'initialsPaths' that do not contain cycles.\n> acyclicPaths :: (Ord e, Ord n) => FSA n e -> Set (Path n e)\n> acyclicPaths fsa = acyclicPathsQ truth fsa (initialsPaths fsa) empty\n\nrejectingPaths fsa 
bound\n= all rejecting Paths of length <= bound\n\n> -- |All paths of length less than or equal to a given bound\n> -- that do not end in an accepting state.\n> rejectingPaths :: (Ord e, Ord n) => FSA n e -> Integer -> Set (Path n e)\n> rejectingPaths fsa bound = traversalQDFS rejecting\n> fsa bound (initialsPaths fsa) empty\n> where rejecting f p = doesNotContain (endstate p) .\n> tmap Just $ finals f\n","avg_line_length":37.3775100402,"max_line_length":78,"alphanum_fraction":0.5533469432} {"size":27192,"ext":"lhs","lang":"Literate Haskell","max_stars_count":5.0,"content":"%\n% (c) The University of Glasgow 2006\n%\n\n\\begin{code}\n{-# LANGUAGE CPP #-}\n\nmodule OptCoercion ( optCoercion, checkAxInstCo ) where\n\n#include \"HsVersions.h\"\n\nimport Coercion\nimport Type hiding( substTyVarBndr, substTy, extendTvSubst )\nimport TcType ( exactTyVarsOfType )\nimport TyCon\nimport CoAxiom\nimport Var\nimport VarSet\nimport FamInstEnv ( flattenTys )\nimport VarEnv\nimport StaticFlags ( opt_NoOptCoercion )\nimport Outputable\nimport Pair\nimport FastString\nimport Util\nimport Unify\nimport ListSetOps\nimport InstEnv\nimport Control.Monad ( zipWithM )\n\\end{code}\n\n%************************************************************************\n%* *\n Optimising coercions\n%* *\n%************************************************************************\n\nNote [Subtle shadowing in coercions]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nSuppose we are optimising a coercion\n optCoercion (forall (co_X5:t1~t2). ...co_B1...)\nThe co_X5 is a wild-card; the bound variable of a coercion for-all\nshould never appear in the body of the forall. Indeed we often\nwrite it like this\n optCoercion ( (t1~t2) => ...co_B1... )\n\nJust because it's a wild-card doesn't mean we are free to choose\nwhatever variable we like. For example it'd be wrong for optCoercion\nto return\n forall (co_B1:t1~t2). 
...co_B1...\nbecause now the co_B1 (which is really free) has been captured, and\nsubsequent substitutions will go wrong. That's why we can't use\nmkCoPredTy in the ForAll case, where this note appears.\n\nNote [Optimising coercion optimisation]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nLooking up a coercion's role or kind is linear in the size of the\ncoercion. Thus, doing this repeatedly during the recursive descent\nof coercion optimisation is disastrous. We must be careful to avoid\ndoing this if at all possible.\n\nBecause it is generally easy to know a coercion's components' roles\nfrom the role of the outer coercion, we pass down the known role of\nthe input in the algorithm below. We also keep functions opt_co2\nand opt_co3 separate from opt_co4, so that the former two do Phantom\nchecks that opt_co4 can avoid. This is a big win because Phantom coercions\nrarely appear within non-phantom coercions -- only in some TyConAppCos\nand some AxiomInstCos. We handle these cases specially by calling\nopt_co2.\n\n\\begin{code}\noptCoercion :: CvSubst -> Coercion -> NormalCo\n-- ^ optCoercion applies a substitution to a coercion,\n-- *and* optimises it to reduce its size\noptCoercion env co\n | opt_NoOptCoercion = substCo env co\n | otherwise = opt_co1 env False co\n\ntype NormalCo = Coercion\n -- Invariants:\n -- * The substitution has been fully applied\n -- * For trans coercions (co1 `trans` co2)\n -- co1 is not a trans, and neither co1 nor co2 is identity\n -- * If the coercion is the identity, it has no CoVars of CoTyCons in it (just types)\n\ntype NormalNonIdCo = NormalCo -- Extra invariant: not the identity\n\n-- | Do we apply a @sym@ to the result?\ntype SymFlag = Bool\n\n-- | Do we force the result to be representational?\ntype ReprFlag = Bool\n\n-- | Optimize a coercion, making no assumptions.\nopt_co1 :: CvSubst\n -> SymFlag\n -> Coercion -> NormalCo\nopt_co1 env sym co = opt_co2 env sym (coercionRole co) co\n{-\nopt_co env sym co\n = pprTrace \"opt_co {\" 
(ppr sym <+> ppr co $$ ppr env) $\n co1 `seq`\n pprTrace \"opt_co done }\" (ppr co1) $\n (WARN( not same_co_kind, ppr co <+> dcolon <+> ppr (coercionType co)\n $$ ppr co1 <+> dcolon <+> ppr (coercionType co1) )\n WARN( not (coreEqCoercion co1 simple_result),\n (text \"env=\" <+> ppr env) $$\n (text \"input=\" <+> ppr co) $$\n (text \"simple=\" <+> ppr simple_result) $$\n (text \"opt=\" <+> ppr co1) )\n co1)\n where\n co1 = opt_co' env sym co\n same_co_kind = s1 `eqType` s2 && t1 `eqType` t2\n Pair s t = coercionKind (substCo env co)\n (s1,t1) | sym = (t,s)\n | otherwise = (s,t)\n Pair s2 t2 = coercionKind co1\n\n simple_result | sym = mkSymCo (substCo env co)\n | otherwise = substCo env co\n-}\n\n-- See Note [Optimising coercion optimisation]\n-- | Optimize a coercion, knowing the coercion's role. No other assumptions.\nopt_co2 :: CvSubst\n -> SymFlag\n -> Role -- ^ The role of the input coercion\n -> Coercion -> NormalCo\nopt_co2 env sym Phantom co = opt_phantom env sym co\nopt_co2 env sym r co = opt_co3 env sym Nothing r co\n\n-- See Note [Optimising coercion optimisation]\n-- | Optimize a coercion, knowing the coercion's non-Phantom role.\nopt_co3 :: CvSubst -> SymFlag -> Maybe Role -> Role -> Coercion -> NormalCo\nopt_co3 env sym (Just Phantom) _ co = opt_phantom env sym co\nopt_co3 env sym (Just Representational) r co = opt_co4 env sym True r co\n -- if mrole is Just Nominal, that can't be a downgrade, so we can ignore\nopt_co3 env sym _ r co = opt_co4 env sym False r co\n\n\n-- See Note [Optimising coercion optimisation]\n-- | Optimize a non-phantom coercion.\nopt_co4 :: CvSubst -> SymFlag -> ReprFlag -> Role -> Coercion -> NormalCo\n\nopt_co4 env _ rep r (Refl _r ty)\n = ASSERT( r == _r )\n Refl (chooseRole rep r) (substTy env ty)\n\nopt_co4 env sym rep r (SymCo co) = opt_co4 env (not sym) rep r co\n\nopt_co4 env sym rep r g@(TyConAppCo _r tc cos)\n = ASSERT( r == _r )\n case (rep, r) of\n (True, Nominal) ->\n mkTyConAppCo Representational tc\n (zipWith3 
(opt_co3 env sym)\n (map Just (tyConRolesX Representational tc))\n (repeat Nominal)\n cos)\n (False, Nominal) ->\n mkTyConAppCo Nominal tc (map (opt_co4 env sym False Nominal) cos)\n (_, Representational) ->\n -- must use opt_co2 here, because some roles may be P\n -- See Note [Optimising coercion optimisation]\n mkTyConAppCo r tc (zipWith (opt_co2 env sym)\n (tyConRolesX r tc) -- the current roles\n cos)\n (_, Phantom) -> pprPanic \"opt_co4 sees a phantom!\" (ppr g)\n\nopt_co4 env sym rep r (AppCo co1 co2) = mkAppCo (opt_co4 env sym rep r co1)\n (opt_co4 env sym False Nominal co2)\nopt_co4 env sym rep r (ForAllCo tv co)\n = case substTyVarBndr env tv of\n (env', tv') -> mkForAllCo tv' (opt_co4 env' sym rep r co)\n -- Use the \"mk\" functions to check for nested Refls\n\nopt_co4 env sym rep r (CoVarCo cv)\n | Just co <- lookupCoVar env cv\n = opt_co4 (zapCvSubstEnv env) sym rep r co\n\n | Just cv1 <- lookupInScope (getCvInScope env) cv\n = ASSERT( isCoVar cv1 ) wrapRole rep r $ wrapSym sym (CoVarCo cv1)\n -- cv1 might have a substituted kind!\n\n | otherwise = WARN( True, ptext (sLit \"opt_co: not in scope:\") <+> ppr cv $$ ppr env)\n ASSERT( isCoVar cv )\n wrapRole rep r $ wrapSym sym (CoVarCo cv)\n\nopt_co4 env sym rep r (AxiomInstCo con ind cos)\n -- Do *not* push sym inside top-level axioms\n -- e.g. 
if g is a top-level axiom\n -- g a : f a ~ a\n -- then (sym (g ty)) \/= g (sym ty) !!\n = ASSERT( r == coAxiomRole con )\n wrapRole rep (coAxiomRole con) $\n wrapSym sym $\n -- some sub-cos might be P: use opt_co2\n -- See Note [Optimising coercion optimisation]\n AxiomInstCo con ind (zipWith (opt_co2 env False)\n (coAxBranchRoles (coAxiomNthBranch con ind))\n cos)\n -- Note that the_co does *not* have sym pushed into it\n\nopt_co4 env sym rep r (UnivCo _r oty1 oty2)\n = ASSERT( r == _r )\n opt_univ env (chooseRole rep r) a b\n where\n (a,b) = if sym then (oty2,oty1) else (oty1,oty2)\n\nopt_co4 env sym rep r (TransCo co1 co2)\n -- sym (g `o` h) = sym h `o` sym g\n | sym = opt_trans in_scope co2' co1'\n | otherwise = opt_trans in_scope co1' co2'\n where\n co1' = opt_co4 env sym rep r co1\n co2' = opt_co4 env sym rep r co2\n in_scope = getCvInScope env\n\nopt_co4 env sym rep r co@(NthCo {}) = opt_nth_co env sym rep r co\n\nopt_co4 env sym rep r (LRCo lr co)\n | Just pr_co <- splitAppCo_maybe co\n = ASSERT( r == Nominal )\n opt_co4 env sym rep Nominal (pickLR lr pr_co)\n | Just pr_co <- splitAppCo_maybe co'\n = ASSERT( r == Nominal )\n if rep\n then opt_co4 (zapCvSubstEnv env) False True Nominal (pickLR lr pr_co)\n else pickLR lr pr_co\n | otherwise\n = wrapRole rep Nominal $ LRCo lr co'\n where\n co' = opt_co4 env sym False Nominal co\n\nopt_co4 env sym rep r (InstCo co ty)\n -- See if the first arg is already a forall\n -- ...then we can just extend the current substitution\n | Just (tv, co_body) <- splitForAllCo_maybe co\n = opt_co4 (extendTvSubst env tv ty') sym rep r co_body\n\n -- See if it is a forall after optimization\n -- If so, do an inefficient one-variable substitution\n | Just (tv, co'_body) <- splitForAllCo_maybe co'\n = substCoWithTy (getCvInScope env) tv ty' co'_body\n\n | otherwise = InstCo co' ty'\n where\n co' = opt_co4 env sym rep r co\n ty' = substTy env ty\n\nopt_co4 env sym _ r (SubCo co)\n = ASSERT( r == Representational )\n opt_co4 env sym 
True Nominal co\n\n-- XXX: We could add another field to CoAxiomRule that\n-- would allow us to do custom simplifications.\nopt_co4 env sym rep r (AxiomRuleCo co ts cs)\n = ASSERT( r == coaxrRole co )\n wrapRole rep r $\n wrapSym sym $\n AxiomRuleCo co (map (substTy env) ts)\n (zipWith (opt_co2 env False) (coaxrAsmpRoles co) cs)\n\n\n-------------\n-- | Optimize a phantom coercion. The input coercion may not necessarily\n-- be a phantom, but the output sure will be.\nopt_phantom :: CvSubst -> SymFlag -> Coercion -> NormalCo\nopt_phantom env sym co\n = if sym\n then opt_univ env Phantom ty2 ty1\n else opt_univ env Phantom ty1 ty2\n where\n Pair ty1 ty2 = coercionKind co\n\nopt_univ :: CvSubst -> Role -> Type -> Type -> Coercion\nopt_univ env role oty1 oty2\n | Just (tc1, tys1) <- splitTyConApp_maybe oty1\n , Just (tc2, tys2) <- splitTyConApp_maybe oty2\n , tc1 == tc2\n = mkTyConAppCo role tc1 (zipWith3 (opt_univ env) (tyConRolesX role tc1) tys1 tys2)\n\n | Just (l1, r1) <- splitAppTy_maybe oty1\n , Just (l2, r2) <- splitAppTy_maybe oty2\n , typeKind l1 `eqType` typeKind l2 -- kind(r1) == kind(r2) by consequence\n = let role' = if role == Phantom then Phantom else Nominal in\n -- role' is to conform to mkAppCo's precondition\n mkAppCo (opt_univ env role l1 l2) (opt_univ env role' r1 r2)\n\n | Just (tv1, ty1) <- splitForAllTy_maybe oty1\n , Just (tv2, ty2) <- splitForAllTy_maybe oty2\n , tyVarKind tv1 `eqType` tyVarKind tv2 -- rule out a weird unsafeCo\n = case substTyVarBndr2 env tv1 tv2 of { (env1, env2, tv') ->\n let ty1' = substTy env1 ty1\n ty2' = substTy env2 ty2 in\n mkForAllCo tv' (opt_univ (zapCvSubstEnv2 env1 env2) role ty1' ty2') }\n\n | otherwise\n = mkUnivCo role (substTy env oty1) (substTy env oty2)\n\n-------------\n-- NthCo must be handled separately, because it's the one case where we can't\n-- tell quickly what the component coercion's role is from the containing\n-- coercion. 
To avoid repeated coercionRole calls as opt_co1 calls opt_co2,\n-- we just look for nested NthCo's, which can happen in practice.\nopt_nth_co :: CvSubst -> SymFlag -> ReprFlag -> Role -> Coercion -> NormalCo\nopt_nth_co env sym rep r = go []\n where\n go ns (NthCo n co) = go (n:ns) co\n -- previous versions checked if the tycon is decomposable. This\n -- is redundant, because a non-decomposable tycon under an NthCo\n -- is entirely bogus. See docs\/core-spec\/core-spec.pdf.\n go ns co\n = opt_nths ns co\n\n -- input coercion is *not* yet sym'd or opt'd\n opt_nths [] co = opt_co4 env sym rep r co\n opt_nths (n:ns) (TyConAppCo _ _ cos) = opt_nths ns (cos `getNth` n)\n\n -- here, the co isn't a TyConAppCo, so we opt it, hoping to get\n -- a TyConAppCo as output. We don't know the role, so we use\n -- opt_co1. This is slightly annoying, because opt_co1 will call\n -- coercionRole, but as long as we don't have a long chain of\n -- NthCo's interspersed with some other coercion former, we should\n -- be OK.\n opt_nths ns co = opt_nths' ns (opt_co1 env sym co)\n\n -- input coercion *is* sym'd and opt'd\n opt_nths' [] co\n = if rep && (r == Nominal)\n -- propagate the SubCo:\n then opt_co4 (zapCvSubstEnv env) False True r co\n else co\n opt_nths' (n:ns) (TyConAppCo _ _ cos) = opt_nths' ns (cos `getNth` n)\n opt_nths' ns co = wrapRole rep r (mk_nths ns co)\n\n mk_nths [] co = co\n mk_nths (n:ns) co = mk_nths ns (mkNthCo n co)\n\n-------------\nopt_transList :: InScopeSet -> [NormalCo] -> [NormalCo] -> [NormalCo]\nopt_transList is = zipWith (opt_trans is)\n\nopt_trans :: InScopeSet -> NormalCo -> NormalCo -> NormalCo\nopt_trans is co1 co2\n | isReflCo co1 = co2\n | otherwise = opt_trans1 is co1 co2\n\nopt_trans1 :: InScopeSet -> NormalNonIdCo -> NormalCo -> NormalCo\n-- First arg is not the identity\nopt_trans1 is co1 co2\n | isReflCo co2 = co1\n | otherwise = opt_trans2 is co1 co2\n\nopt_trans2 :: InScopeSet -> NormalNonIdCo -> NormalNonIdCo -> NormalCo\n-- Neither arg is 
the identity\nopt_trans2 is (TransCo co1a co1b) co2\n -- Don't know whether the sub-coercions are the identity\n = opt_trans is co1a (opt_trans is co1b co2)\n\nopt_trans2 is co1 co2\n | Just co <- opt_trans_rule is co1 co2\n = co\n\nopt_trans2 is co1 (TransCo co2a co2b)\n | Just co1_2a <- opt_trans_rule is co1 co2a\n = if isReflCo co1_2a\n then co2b\n else opt_trans1 is co1_2a co2b\n\nopt_trans2 _ co1 co2\n = mkTransCo co1 co2\n\n------\n-- Optimize coercions with a top-level use of transitivity.\nopt_trans_rule :: InScopeSet -> NormalNonIdCo -> NormalNonIdCo -> Maybe NormalCo\n\n-- Push transitivity through matching destructors\nopt_trans_rule is in_co1@(NthCo d1 co1) in_co2@(NthCo d2 co2)\n | d1 == d2\n , co1 `compatible_co` co2\n = fireTransRule \"PushNth\" in_co1 in_co2 $\n mkNthCo d1 (opt_trans is co1 co2)\n\nopt_trans_rule is in_co1@(LRCo d1 co1) in_co2@(LRCo d2 co2)\n | d1 == d2\n , co1 `compatible_co` co2\n = fireTransRule \"PushLR\" in_co1 in_co2 $\n mkLRCo d1 (opt_trans is co1 co2)\n\n-- Push transitivity inside instantiation\nopt_trans_rule is in_co1@(InstCo co1 ty1) in_co2@(InstCo co2 ty2)\n | ty1 `eqType` ty2\n , co1 `compatible_co` co2\n = fireTransRule \"TrPushInst\" in_co1 in_co2 $\n mkInstCo (opt_trans is co1 co2) ty1\n\n-- Push transitivity down through matching top-level constructors.\nopt_trans_rule is in_co1@(TyConAppCo r1 tc1 cos1) in_co2@(TyConAppCo r2 tc2 cos2)\n | tc1 == tc2\n = ASSERT( r1 == r2 )\n fireTransRule \"PushTyConApp\" in_co1 in_co2 $\n TyConAppCo r1 tc1 (opt_transList is cos1 cos2)\n\nopt_trans_rule is in_co1@(AppCo co1a co1b) in_co2@(AppCo co2a co2b)\n = fireTransRule \"TrPushApp\" in_co1 in_co2 $\n mkAppCo (opt_trans is co1a co2a) (opt_trans is co1b co2b)\n\n-- Eta rules\nopt_trans_rule is co1@(TyConAppCo r tc cos1) co2\n | Just cos2 <- etaTyConAppCo_maybe tc co2\n = ASSERT( length cos1 == length cos2 )\n fireTransRule \"EtaCompL\" co1 co2 $\n TyConAppCo r tc (opt_transList is cos1 cos2)\n\nopt_trans_rule is co1 
co2@(TyConAppCo r tc cos2)\n | Just cos1 <- etaTyConAppCo_maybe tc co1\n = ASSERT( length cos1 == length cos2 )\n fireTransRule \"EtaCompR\" co1 co2 $\n TyConAppCo r tc (opt_transList is cos1 cos2)\n\nopt_trans_rule is co1@(AppCo co1a co1b) co2\n | Just (co2a,co2b) <- etaAppCo_maybe co2\n = fireTransRule \"EtaAppL\" co1 co2 $\n mkAppCo (opt_trans is co1a co2a) (opt_trans is co1b co2b)\n\nopt_trans_rule is co1 co2@(AppCo co2a co2b)\n | Just (co1a,co1b) <- etaAppCo_maybe co1\n = fireTransRule \"EtaAppR\" co1 co2 $\n mkAppCo (opt_trans is co1a co2a) (opt_trans is co1b co2b)\n\n-- Push transitivity inside forall\nopt_trans_rule is co1 co2\n | Just (tv1,r1) <- splitForAllCo_maybe co1\n , Just (tv2,r2) <- etaForAllCo_maybe co2\n , let r2' = substCoWithTy is' tv2 (mkTyVarTy tv1) r2\n is' = is `extendInScopeSet` tv1\n = fireTransRule \"EtaAllL\" co1 co2 $\n mkForAllCo tv1 (opt_trans2 is' r1 r2')\n\n | Just (tv2,r2) <- splitForAllCo_maybe co2\n , Just (tv1,r1) <- etaForAllCo_maybe co1\n , let r1' = substCoWithTy is' tv1 (mkTyVarTy tv2) r1\n is' = is `extendInScopeSet` tv2\n = fireTransRule \"EtaAllR\" co1 co2 $\n mkForAllCo tv1 (opt_trans2 is' r1' r2)\n\n-- Push transitivity inside axioms\nopt_trans_rule is co1 co2\n\n -- See Note [Why call checkAxInstCo during optimisation]\n -- TrPushSymAxR\n | Just (sym, con, ind, cos1) <- co1_is_axiom_maybe\n , Just cos2 <- matchAxiom sym con ind co2\n , True <- sym\n , let newAxInst = AxiomInstCo con ind (opt_transList is (map mkSymCo cos2) cos1)\n , Nothing <- checkAxInstCo newAxInst\n = fireTransRule \"TrPushSymAxR\" co1 co2 $ SymCo newAxInst\n\n -- TrPushAxR\n | Just (sym, con, ind, cos1) <- co1_is_axiom_maybe\n , Just cos2 <- matchAxiom sym con ind co2\n , False <- sym\n , let newAxInst = AxiomInstCo con ind (opt_transList is cos1 cos2)\n , Nothing <- checkAxInstCo newAxInst\n = fireTransRule \"TrPushAxR\" co1 co2 newAxInst\n\n -- TrPushSymAxL\n | Just (sym, con, ind, cos2) <- co2_is_axiom_maybe\n , Just cos1 <- matchAxiom (not 
sym) con ind co1\n , True <- sym\n , let newAxInst = AxiomInstCo con ind (opt_transList is cos2 (map mkSymCo cos1))\n , Nothing <- checkAxInstCo newAxInst\n = fireTransRule \"TrPushSymAxL\" co1 co2 $ SymCo newAxInst\n\n -- TrPushAxL\n | Just (sym, con, ind, cos2) <- co2_is_axiom_maybe\n , Just cos1 <- matchAxiom (not sym) con ind co1\n , False <- sym\n , let newAxInst = AxiomInstCo con ind (opt_transList is cos1 cos2)\n , Nothing <- checkAxInstCo newAxInst\n = fireTransRule \"TrPushAxL\" co1 co2 newAxInst\n\n -- TrPushAxSym\/TrPushSymAx\n | Just (sym1, con1, ind1, cos1) <- co1_is_axiom_maybe\n , Just (sym2, con2, ind2, cos2) <- co2_is_axiom_maybe\n , con1 == con2\n , ind1 == ind2\n , sym1 == not sym2\n , let branch = coAxiomNthBranch con1 ind1\n qtvs = coAxBranchTyVars branch\n lhs = coAxNthLHS con1 ind1\n rhs = coAxBranchRHS branch\n pivot_tvs = exactTyVarsOfType (if sym2 then rhs else lhs)\n , all (`elemVarSet` pivot_tvs) qtvs\n = fireTransRule \"TrPushAxSym\" co1 co2 $\n if sym2\n then liftCoSubstWith role qtvs (opt_transList is cos1 (map mkSymCo cos2)) lhs -- TrPushAxSym\n else liftCoSubstWith role qtvs (opt_transList is (map mkSymCo cos1) cos2) rhs -- TrPushSymAx\n where\n co1_is_axiom_maybe = isAxiom_maybe co1\n co2_is_axiom_maybe = isAxiom_maybe co2\n role = coercionRole co1 -- should be the same as coercionRole co2!\n\nopt_trans_rule _ co1 co2 -- Identity rule\n | (Pair ty1 _, r) <- coercionKindRole co1\n , Pair _ ty2 <- coercionKind co2\n , ty1 `eqType` ty2\n = fireTransRule \"RedTypeDirRefl\" co1 co2 $\n Refl r ty2\n\nopt_trans_rule _ _ _ = Nothing\n\nfireTransRule :: String -> Coercion -> Coercion -> Coercion -> Maybe Coercion\nfireTransRule _rule _co1 _co2 res\n = -- pprTrace (\"Trans rule fired: \" ++ _rule) (vcat [ppr _co1, ppr _co2, ppr res]) $\n Just res\n\n\\end{code}\n\nNote [Conflict checking with AxiomInstCo]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nConsider the following type family and axiom:\n\ntype family Equal (a :: k) (b :: k) :: 
Bool\ntype instance where\n Equal a a = True\n Equal a b = False\n--\nEqual :: forall k::BOX. k -> k -> Bool\naxEqual :: { forall k::BOX. forall a::k. Equal k a a ~ True\n ; forall k::BOX. forall a::k. forall b::k. Equal k a b ~ False }\n\nWe wish to disallow (axEqual[1] <*> <Int> <Int>). (Recall that the index is\n0-based, so this is the second branch of the axiom.) The problem is that, on\nthe surface, it seems that (axEqual[1] <*> <Int> <Int>) :: (Equal * Int Int ~\nFalse) and that all is OK. But, all is not OK: we want to use the first branch\nof the axiom in this case, not the second. The problem is that the parameters\nof the first branch can unify with the supplied coercions, thus meaning that\nthe first branch should be taken. See also Note [Branched instance checking]\nin types\/FamInstEnv.lhs.\n\nNote [Why call checkAxInstCo during optimisation]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIt is possible that otherwise-good-looking optimisations meet with disaster\nin the presence of axioms with multiple equations. Consider\n\ntype family Equal (a :: *) (b :: *) :: Bool where\n Equal a a = True\n Equal a b = False\ntype family Id (a :: *) :: * where\n Id a = a\n\naxEq :: { [a::*]. Equal a a ~ True\n ; [a::*, b::*]. Equal a b ~ False }\naxId :: [a::*]. Id a ~ a\n\nco1 = Equal (axId[0] Int) (axId[0] Bool)\n :: Equal (Id Int) (Id Bool) ~ Equal Int Bool\nco2 = axEq[1] <Int> <Bool>\n :: Equal Int Bool ~ False\n\nWe wish to optimise (co1 ; co2). We end up in rule TrPushAxL, noting that\nco2 is an axiom and that matchAxiom succeeds when looking at co1. But, what\nhappens when we push the coercions inside? We get\n\nco3 = axEq[1] (axId[0] Int) (axId[0] Bool)\n :: Equal (Id Int) (Id Bool) ~ False\n\nwhich is bogus! This is because the type system isn't smart enough to know\nthat (Id Int) and (Id Bool) are Surely Apart, as they're headed by type\nfamilies. 
At the time of writing, I (Richard Eisenberg) couldn't think of\na way of detecting this any more efficient than just building the optimised\ncoercion and checking.\n\n\\begin{code}\n-- | Check to make sure that an AxInstCo is internally consistent.\n-- Returns the conflicting branch, if it exists\n-- See Note [Conflict checking with AxiomInstCo]\ncheckAxInstCo :: Coercion -> Maybe CoAxBranch\n-- defined here to avoid dependencies in Coercion\n-- If you edit this function, you may need to update the GHC formalism\n-- See Note [GHC Formalism] in CoreLint\ncheckAxInstCo (AxiomInstCo ax ind cos)\n = let branch = coAxiomNthBranch ax ind\n tvs = coAxBranchTyVars branch\n incomps = coAxBranchIncomps branch\n tys = map (pFst . coercionKind) cos\n subst = zipOpenTvSubst tvs tys\n target = Type.substTys subst (coAxBranchLHS branch)\n in_scope = mkInScopeSet $\n unionVarSets (map (tyVarsOfTypes . coAxBranchLHS) incomps)\n flattened_target = flattenTys in_scope target in\n check_no_conflict flattened_target incomps\n where\n check_no_conflict :: [Type] -> [CoAxBranch] -> Maybe CoAxBranch\n check_no_conflict _ [] = Nothing\n check_no_conflict flat (b@CoAxBranch { cab_lhs = lhs_incomp } : rest)\n -- See Note [Apartness] in FamInstEnv\n | SurelyApart <- tcUnifyTysFG instanceBindFun flat lhs_incomp\n = check_no_conflict flat rest\n | otherwise\n = Just b\ncheckAxInstCo _ = Nothing\n\n-----------\nwrapSym :: SymFlag -> Coercion -> Coercion\nwrapSym sym co | sym = SymCo co\n | otherwise = co\n\n-- | Conditionally set a role to be representational\nwrapRole :: ReprFlag\n -> Role -- ^ current role\n -> Coercion -> Coercion\nwrapRole False _ = id\nwrapRole True current = downgradeRole Representational current\n\n-- | If we require a representational role, return that. 
Otherwise,\n-- return the \"default\" role provided.\nchooseRole :: ReprFlag\n -> Role -- ^ \"default\" role\n -> Role\nchooseRole True _ = Representational\nchooseRole _ r = r\n-----------\n-- takes two tyvars and builds env'ts to map them to the same tyvar\nsubstTyVarBndr2 :: CvSubst -> TyVar -> TyVar\n -> (CvSubst, CvSubst, TyVar)\nsubstTyVarBndr2 env tv1 tv2\n = case substTyVarBndr env tv1 of\n (env1, tv1') -> (env1, extendTvSubstAndInScope env tv2 (mkTyVarTy tv1'), tv1')\n\nzapCvSubstEnv2 :: CvSubst -> CvSubst -> CvSubst\nzapCvSubstEnv2 env1 env2 = mkCvSubst (is1 `unionInScope` is2) []\n where is1 = getCvInScope env1\n is2 = getCvInScope env2\n-----------\nisAxiom_maybe :: Coercion -> Maybe (Bool, CoAxiom Branched, Int, [Coercion])\nisAxiom_maybe (SymCo co)\n | Just (sym, con, ind, cos) <- isAxiom_maybe co\n = Just (not sym, con, ind, cos)\nisAxiom_maybe (AxiomInstCo con ind cos)\n = Just (False, con, ind, cos)\nisAxiom_maybe _ = Nothing\n\nmatchAxiom :: Bool -- True = match LHS, False = match RHS\n -> CoAxiom br -> Int -> Coercion -> Maybe [Coercion]\n-- If we succeed in matching, then *all the quantified type variables are bound*\n-- E.g. if tvs = [a,b], lhs\/rhs = [b], we'll fail\nmatchAxiom sym ax@(CoAxiom { co_ax_tc = tc }) ind co\n = let (CoAxBranch { cab_tvs = qtvs\n , cab_roles = roles\n , cab_lhs = lhs\n , cab_rhs = rhs }) = coAxiomNthBranch ax ind in\n case liftCoMatch (mkVarSet qtvs) (if sym then (mkTyConApp tc lhs) else rhs) co of\n Nothing -> Nothing\n Just subst -> zipWithM (liftCoSubstTyVar subst) roles qtvs\n\n-------------\ncompatible_co :: Coercion -> Coercion -> Bool\n-- Check whether (co1 . co2) will be well-kinded\ncompatible_co co1 co2\n = x1 `eqType` x2\n where\n Pair _ x1 = coercionKind co1\n Pair x2 _ = coercionKind co2\n\n-------------\netaForAllCo_maybe :: Coercion -> Maybe (TyVar, Coercion)\n-- Try to make the coercion be of form (forall tv. 
co)\netaForAllCo_maybe co\n | Just (tv, r) <- splitForAllCo_maybe co\n = Just (tv, r)\n\n | Pair ty1 ty2 <- coercionKind co\n , Just (tv1, _) <- splitForAllTy_maybe ty1\n , Just (tv2, _) <- splitForAllTy_maybe ty2\n , tyVarKind tv1 `eqKind` tyVarKind tv2\n = Just (tv1, mkInstCo co (mkTyVarTy tv1))\n\n | otherwise\n = Nothing\n\netaAppCo_maybe :: Coercion -> Maybe (Coercion,Coercion)\n-- If possible, split a coercion\n-- g :: t1a t1b ~ t2a t2b\n-- into a pair of coercions (left g, right g)\netaAppCo_maybe co\n | Just (co1,co2) <- splitAppCo_maybe co\n = Just (co1,co2)\n | (Pair ty1 ty2, Nominal) <- coercionKindRole co\n , Just (_,t1) <- splitAppTy_maybe ty1\n , Just (_,t2) <- splitAppTy_maybe ty2\n , typeKind t1 `eqType` typeKind t2 -- Note [Eta for AppCo]\n = Just (LRCo CLeft co, LRCo CRight co)\n | otherwise\n = Nothing\n\netaTyConAppCo_maybe :: TyCon -> Coercion -> Maybe [Coercion]\n-- If possible, split a coercion\n-- g :: T s1 .. sn ~ T t1 .. tn\n-- into [ Nth 0 g :: s1~t1, ..., Nth (n-1) g :: sn~tn ]\netaTyConAppCo_maybe tc (TyConAppCo _ tc2 cos2)\n = ASSERT( tc == tc2 ) Just cos2\n\netaTyConAppCo_maybe tc co\n | isDecomposableTyCon tc\n , Pair ty1 ty2 <- coercionKind co\n , Just (tc1, tys1) <- splitTyConApp_maybe ty1\n , Just (tc2, tys2) <- splitTyConApp_maybe ty2\n , tc1 == tc2\n , let n = length tys1\n = ASSERT( tc == tc1 )\n ASSERT( n == length tys2 )\n Just (decomposeCo n co)\n -- NB: n might be <> tyConArity tc\n -- e.g. 
data family T a :: * -> *\n -- g :: T a b ~ T c d\n\n | otherwise\n = Nothing\n\\end{code}\n\nNote [Eta for AppCo]\n~~~~~~~~~~~~~~~~~~~~\nSuppose we have\n g :: s1 t1 ~ s2 t2\n\nThen we can't necessarily make\n left g :: s1 ~ s2\n right g :: t1 ~ t2\nbecause it's possible that\n s1 :: * -> * t1 :: *\n s2 :: (*->*) -> * t2 :: * -> *\nand in that case (left g) does not have the same\nkind on either side.\n\nIt's enough to check that\n kind t1 = kind t2\nbecause if g is well-kinded then\n kind (s1 t2) = kind (s2 t2)\nand these two imply\n kind s1 = kind s2\n\n","avg_line_length":36.4504021448,"max_line_length":97,"alphanum_fraction":0.6527287437} {"size":139,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"> import Prelude\n\n> myButLast :: [a] -> a\n> myButLast [] = error \"empty list\"\n> myButLast (x:(y:[])) = x\n> myButLast (x:xs) = myButLast xs\n","avg_line_length":19.8571428571,"max_line_length":35,"alphanum_fraction":0.5899280576} {"size":49167,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"%\n% (c) The University of Glasgow 2006\n%\n\nFunctions for working with the typechecker environment (setters, getters...).\n\n\\begin{code}\n{-# LANGUAGE CPP, ExplicitForAll, FlexibleInstances #-}\n{-# OPTIONS_GHC -fno-warn-orphans #-}\n\nmodule TcRnMonad(\n module TcRnMonad,\n module TcRnTypes,\n module IOEnv\n ) where\n\n#include \"HsVersions.h\"\n\nimport TcRnTypes -- Re-export all\nimport IOEnv -- Re-export all\nimport TcEvidence\n\nimport HsSyn hiding (LIE)\nimport HscTypes\nimport Module\nimport RdrName\nimport Name\nimport Type\n\nimport TcType\nimport InstEnv\nimport FamInstEnv\nimport PrelNames\n\nimport Var\nimport Id\nimport VarSet\nimport VarEnv\nimport ErrUtils\nimport SrcLoc\nimport NameEnv\nimport NameSet\nimport Bag\nimport Outputable\nimport UniqSupply\nimport UniqFM\nimport DynFlags\nimport StaticFlags\nimport FastString\nimport Panic\nimport Util\nimport Annotations\nimport BasicTypes( TopLevelFlag 
)\n\nimport Control.Exception\nimport Data.IORef\nimport qualified Data.Set as Set\nimport Control.Monad\n\n#ifdef GHCI\nimport qualified Data.Map as Map\n#endif\n\\end{code}\n\n\n\n%************************************************************************\n%* *\n initTc\n%* *\n%************************************************************************\n\n\\begin{code}\n\n-- | Setup the initial typechecking environment\ninitTc :: HscEnv\n -> HscSource\n -> Bool -- True <=> retain renamed syntax trees\n -> Module\n -> TcM r\n -> IO (Messages, Maybe r)\n -- Nothing => error thrown by the thing inside\n -- (error messages should have been printed already)\n\ninitTc hsc_env hsc_src keep_rn_syntax mod do_this\n = do { errs_var <- newIORef (emptyBag, emptyBag) ;\n tvs_var <- newIORef emptyVarSet ;\n keep_var <- newIORef emptyNameSet ;\n used_rdr_var <- newIORef Set.empty ;\n th_var <- newIORef False ;\n th_splice_var<- newIORef False ;\n infer_var <- newIORef True ;\n lie_var <- newIORef emptyWC ;\n dfun_n_var <- newIORef emptyOccSet ;\n type_env_var <- case hsc_type_env_var hsc_env of {\n Just (_mod, te_var) -> return te_var ;\n Nothing -> newIORef emptyNameEnv } ;\n\n dependent_files_var <- newIORef [] ;\n#ifdef GHCI\n th_topdecls_var <- newIORef [] ;\n th_topnames_var <- newIORef emptyNameSet ;\n th_modfinalizers_var <- newIORef [] ;\n th_state_var <- newIORef Map.empty ;\n#endif \/* GHCI *\/\n let {\n maybe_rn_syntax :: forall a. 
a -> Maybe a ;\n maybe_rn_syntax empty_val\n | keep_rn_syntax = Just empty_val\n | otherwise = Nothing ;\n\n gbl_env = TcGblEnv {\n#ifdef GHCI\n tcg_th_topdecls = th_topdecls_var,\n tcg_th_topnames = th_topnames_var,\n tcg_th_modfinalizers = th_modfinalizers_var,\n tcg_th_state = th_state_var,\n#endif \/* GHCI *\/\n\n tcg_mod = mod,\n tcg_src = hsc_src,\n tcg_rdr_env = emptyGlobalRdrEnv,\n tcg_fix_env = emptyNameEnv,\n tcg_field_env = RecFields emptyNameEnv emptyNameSet,\n tcg_default = Nothing,\n tcg_type_env = emptyNameEnv,\n tcg_type_env_var = type_env_var,\n tcg_inst_env = emptyInstEnv,\n tcg_dinst_env = emptyInstEnv,\n tcg_fam_inst_env = emptyFamInstEnv,\n tcg_ann_env = emptyAnnEnv,\n tcg_th_used = th_var,\n tcg_th_splice_used = th_splice_var,\n tcg_exports = [],\n tcg_imports = emptyImportAvails,\n tcg_used_rdrnames = used_rdr_var,\n tcg_dus = emptyDUs,\n\n tcg_rn_imports = [],\n tcg_rn_exports = maybe_rn_syntax [],\n tcg_rn_decls = maybe_rn_syntax emptyRnGroup,\n\n tcg_binds = emptyLHsBinds,\n tcg_imp_specs = [],\n tcg_sigs = emptyNameSet,\n tcg_ev_binds = emptyBag,\n tcg_warns = NoWarnings,\n tcg_anns = [],\n tcg_tcs = [],\n tcg_insts = [],\n tcg_dinsts = [],\n tcg_fam_insts = [],\n tcg_rules = [],\n tcg_fords = [],\n tcg_vects = [],\n tcg_patsyns = [],\n tcg_dfun_n = dfun_n_var,\n tcg_keep = keep_var,\n tcg_doc_hdr = Nothing,\n tcg_hpc = False,\n tcg_main = Nothing,\n tcg_safeInfer = infer_var,\n tcg_dependent_files = dependent_files_var\n } ;\n lcl_env = TcLclEnv {\n tcl_errs = errs_var,\n tcl_loc = mkGeneralSrcSpan (fsLit \"Top level\"),\n tcl_ctxt = [],\n tcl_rdr = emptyLocalRdrEnv,\n tcl_th_ctxt = topStage,\n tcl_th_bndrs = emptyNameEnv,\n tcl_arrow_ctxt = NoArrowCtxt,\n tcl_env = emptyNameEnv,\n tcl_bndrs = [],\n tcl_tidy = emptyTidyEnv,\n tcl_tyvars = tvs_var,\n tcl_lie = lie_var,\n tcl_untch = noUntouchables\n } ;\n } ;\n\n -- OK, here's the business end!\n maybe_res <- initTcRnIf 'a' hsc_env gbl_env lcl_env $\n do { r <- tryM do_this\n ; case r 
of\n Right res -> return (Just res)\n Left _ -> return Nothing } ;\n\n -- Check for unsolved constraints\n lie <- readIORef lie_var ;\n if isEmptyWC lie\n then return ()\n else pprPanic \"initTc: unsolved constraints\"\n (pprWantedsWithLocs lie) ;\n\n -- Collect any error messages\n msgs <- readIORef errs_var ;\n\n let { dflags = hsc_dflags hsc_env\n ; final_res | errorsFound dflags msgs = Nothing\n | otherwise = maybe_res } ;\n\n return (msgs, final_res)\n }\n\n\ninitTcInteractive :: HscEnv -> TcM a -> IO (Messages, Maybe a)\n-- Initialise the type checker monad for use in GHCi\ninitTcInteractive hsc_env thing_inside\n = initTc hsc_env HsSrcFile False\n (icInteractiveModule (hsc_IC hsc_env))\n thing_inside\n\ninitTcForLookup :: HscEnv -> TcM a -> IO a\n-- The thing_inside is just going to look up something\n-- in the environment, so we don't need much setup\ninitTcForLookup hsc_env thing_inside\n = do (msgs, m) <- initTc hsc_env HsSrcFile False\n (icInteractiveModule (hsc_IC hsc_env)) -- Irrelevant really\n thing_inside\n case m of\n Nothing -> throwIO $ mkSrcErr $ snd msgs\n Just x -> return x\n\\end{code}\n\n%************************************************************************\n%* *\n Initialisation\n%* *\n%************************************************************************\n\n\n\\begin{code}\ninitTcRnIf :: Char -- Tag for unique supply\n -> HscEnv\n -> gbl -> lcl\n -> TcRnIf gbl lcl a\n -> IO a\ninitTcRnIf uniq_tag hsc_env gbl_env lcl_env thing_inside\n = do { us <- mkSplitUniqSupply uniq_tag ;\n ; us_var <- newIORef us ;\n\n ; let { env = Env { env_top = hsc_env,\n env_us = us_var,\n env_gbl = gbl_env,\n env_lcl = lcl_env} }\n\n ; runIOEnv env thing_inside\n }\n\\end{code}\n\n%************************************************************************\n%* *\n Simple accessors\n%* *\n%************************************************************************\n\n\\begin{code}\ndiscardResult :: TcM a -> TcM ()\ndiscardResult a = a >> return ()\n\ngetTopEnv :: 
TcRnIf gbl lcl HscEnv\ngetTopEnv = do { env <- getEnv; return (env_top env) }\n\ngetGblEnv :: TcRnIf gbl lcl gbl\ngetGblEnv = do { env <- getEnv; return (env_gbl env) }\n\nupdGblEnv :: (gbl -> gbl) -> TcRnIf gbl lcl a -> TcRnIf gbl lcl a\nupdGblEnv upd = updEnv (\\ env@(Env { env_gbl = gbl }) ->\n env { env_gbl = upd gbl })\n\nsetGblEnv :: gbl -> TcRnIf gbl lcl a -> TcRnIf gbl lcl a\nsetGblEnv gbl_env = updEnv (\\ env -> env { env_gbl = gbl_env })\n\ngetLclEnv :: TcRnIf gbl lcl lcl\ngetLclEnv = do { env <- getEnv; return (env_lcl env) }\n\nupdLclEnv :: (lcl -> lcl) -> TcRnIf gbl lcl a -> TcRnIf gbl lcl a\nupdLclEnv upd = updEnv (\\ env@(Env { env_lcl = lcl }) ->\n env { env_lcl = upd lcl })\n\nsetLclEnv :: lcl' -> TcRnIf gbl lcl' a -> TcRnIf gbl lcl a\nsetLclEnv lcl_env = updEnv (\\ env -> env { env_lcl = lcl_env })\n\ngetEnvs :: TcRnIf gbl lcl (gbl, lcl)\ngetEnvs = do { env <- getEnv; return (env_gbl env, env_lcl env) }\n\nsetEnvs :: (gbl', lcl') -> TcRnIf gbl' lcl' a -> TcRnIf gbl lcl a\nsetEnvs (gbl_env, lcl_env) = updEnv (\\ env -> env { env_gbl = gbl_env, env_lcl = lcl_env })\n\\end{code}\n\n\nCommand-line flags\n\n\\begin{code}\nxoptM :: ExtensionFlag -> TcRnIf gbl lcl Bool\nxoptM flag = do { dflags <- getDynFlags; return (xopt flag dflags) }\n\ndoptM :: DumpFlag -> TcRnIf gbl lcl Bool\ndoptM flag = do { dflags <- getDynFlags; return (dopt flag dflags) }\n\ngoptM :: GeneralFlag -> TcRnIf gbl lcl Bool\ngoptM flag = do { dflags <- getDynFlags; return (gopt flag dflags) }\n\nwoptM :: WarningFlag -> TcRnIf gbl lcl Bool\nwoptM flag = do { dflags <- getDynFlags; return (wopt flag dflags) }\n\nsetXOptM :: ExtensionFlag -> TcRnIf gbl lcl a -> TcRnIf gbl lcl a\nsetXOptM flag = updEnv (\\ env@(Env { env_top = top }) ->\n env { env_top = top { hsc_dflags = xopt_set (hsc_dflags top) flag}} )\n\nunsetGOptM :: GeneralFlag -> TcRnIf gbl lcl a -> TcRnIf gbl lcl a\nunsetGOptM flag = updEnv (\\ env@(Env { env_top = top }) ->\n env { env_top = top { hsc_dflags = gopt_unset 
(hsc_dflags top) flag}} )\n\nunsetWOptM :: WarningFlag -> TcRnIf gbl lcl a -> TcRnIf gbl lcl a\nunsetWOptM flag = updEnv (\\ env@(Env { env_top = top }) ->\n env { env_top = top { hsc_dflags = wopt_unset (hsc_dflags top) flag}} )\n\n-- | Do the action if the flag is true\nwhenDOptM :: DumpFlag -> TcRnIf gbl lcl () -> TcRnIf gbl lcl ()\nwhenDOptM flag thing_inside = do b <- doptM flag\n when b thing_inside\n\nwhenGOptM :: GeneralFlag -> TcRnIf gbl lcl () -> TcRnIf gbl lcl ()\nwhenGOptM flag thing_inside = do b <- goptM flag\n when b thing_inside\n\nwhenWOptM :: WarningFlag -> TcRnIf gbl lcl () -> TcRnIf gbl lcl ()\nwhenWOptM flag thing_inside = do b <- woptM flag\n when b thing_inside\n\nwhenXOptM :: ExtensionFlag -> TcRnIf gbl lcl () -> TcRnIf gbl lcl ()\nwhenXOptM flag thing_inside = do b <- xoptM flag\n when b thing_inside\n\ngetGhcMode :: TcRnIf gbl lcl GhcMode\ngetGhcMode = do { env <- getTopEnv; return (ghcMode (hsc_dflags env)) }\n\\end{code}\n\n\\begin{code}\nwithDoDynamicToo :: TcRnIf gbl lcl a -> TcRnIf gbl lcl a\nwithDoDynamicToo m = do env <- getEnv\n let dflags = extractDynFlags env\n dflags' = dynamicTooMkDynamicDynFlags dflags\n env' = replaceDynFlags env dflags'\n setEnv env' m\n\\end{code}\n\n\\begin{code}\ngetEpsVar :: TcRnIf gbl lcl (TcRef ExternalPackageState)\ngetEpsVar = do { env <- getTopEnv; return (hsc_EPS env) }\n\ngetEps :: TcRnIf gbl lcl ExternalPackageState\ngetEps = do { env <- getTopEnv; readMutVar (hsc_EPS env) }\n\n-- | Update the external package state. 
Returns the second result of the\n-- modifier function.\n--\n-- This is an atomic operation and forces evaluation of the modified EPS in\n-- order to avoid space leaks.\nupdateEps :: (ExternalPackageState -> (ExternalPackageState, a))\n -> TcRnIf gbl lcl a\nupdateEps upd_fn = do\n traceIf (text \"updating EPS\")\n eps_var <- getEpsVar\n atomicUpdMutVar' eps_var upd_fn\n\n-- | Update the external package state.\n--\n-- This is an atomic operation and forces evaluation of the modified EPS in\n-- order to avoid space leaks.\nupdateEps_ :: (ExternalPackageState -> ExternalPackageState)\n -> TcRnIf gbl lcl ()\nupdateEps_ upd_fn = do\n traceIf (text \"updating EPS_\")\n eps_var <- getEpsVar\n atomicUpdMutVar' eps_var (\\eps -> (upd_fn eps, ()))\n\ngetHpt :: TcRnIf gbl lcl HomePackageTable\ngetHpt = do { env <- getTopEnv; return (hsc_HPT env) }\n\ngetEpsAndHpt :: TcRnIf gbl lcl (ExternalPackageState, HomePackageTable)\ngetEpsAndHpt = do { env <- getTopEnv; eps <- readMutVar (hsc_EPS env)\n ; return (eps, hsc_HPT env) }\n\\end{code}\n\n%************************************************************************\n%* *\n Unique supply\n%* *\n%************************************************************************\n\n\\begin{code}\nnewUnique :: TcRnIf gbl lcl Unique\nnewUnique\n = do { env <- getEnv ;\n let { u_var = env_us env } ;\n us <- readMutVar u_var ;\n case takeUniqFromSupply us of { (uniq, us') -> do {\n writeMutVar u_var us' ;\n return $! uniq }}}\n -- NOTE 1: we strictly split the supply, to avoid the possibility of leaving\n -- a chain of unevaluated supplies behind.\n -- NOTE 2: we use the uniq in the supply from the MutVar directly, and\n -- throw away one half of the new split supply. This is safe because this\n -- is the only place we use that unique. 
Using the other half of the split\n -- supply is safer, but slower.\n\nnewUniqueSupply :: TcRnIf gbl lcl UniqSupply\nnewUniqueSupply\n = do { env <- getEnv ;\n let { u_var = env_us env } ;\n us <- readMutVar u_var ;\n case splitUniqSupply us of { (us1,us2) -> do {\n writeMutVar u_var us1 ;\n return us2 }}}\n\nnewLocalName :: Name -> TcM Name\nnewLocalName name = newName (nameOccName name)\n\nnewName :: OccName -> TcM Name\nnewName occ\n = do { uniq <- newUnique\n ; loc <- getSrcSpanM\n ; return (mkInternalName uniq occ loc) }\n\nnewSysName :: OccName -> TcM Name\nnewSysName occ\n = do { uniq <- newUnique\n ; return (mkSystemName uniq occ) }\n\nnewSysLocalIds :: FastString -> [TcType] -> TcRnIf gbl lcl [TcId]\nnewSysLocalIds fs tys\n = do { us <- newUniqueSupply\n ; return (zipWith (mkSysLocal fs) (uniqsFromSupply us) tys) }\n\ninstance MonadUnique (IOEnv (Env gbl lcl)) where\n getUniqueM = newUnique\n getUniqueSupplyM = newUniqueSupply\n\\end{code}\n\n\n%************************************************************************\n%* *\n Mutable variables\n%* *\n%************************************************************************\n\n\\begin{code}\nnewTcRef :: a -> TcRnIf gbl lcl (TcRef a)\nnewTcRef = newMutVar\n\nreadTcRef :: TcRef a -> TcRnIf gbl lcl a\nreadTcRef = readMutVar\n\nwriteTcRef :: TcRef a -> a -> TcRnIf gbl lcl ()\nwriteTcRef = writeMutVar\n\nupdTcRef :: TcRef a -> (a -> a) -> TcRnIf gbl lcl ()\nupdTcRef = updMutVar\n\\end{code}\n\n%************************************************************************\n%* *\n Debugging\n%* *\n%************************************************************************\n\n\\begin{code}\ntraceTc :: String -> SDoc -> TcRn ()\ntraceTc = traceTcN 1\n\ntraceTcN :: Int -> String -> SDoc -> TcRn ()\ntraceTcN level herald doc\n = do dflags <- getDynFlags\n when (level <= traceLevel dflags) $\n traceOptTcRn Opt_D_dump_tc_trace $ hang (text herald) 2 doc\n\ntraceRn, traceSplice :: SDoc -> TcRn ()\ntraceRn = traceOptTcRn 
Opt_D_dump_rn_trace\ntraceSplice = traceOptTcRn Opt_D_dump_splices\n\ntraceIf, traceHiDiffs :: SDoc -> TcRnIf m n ()\ntraceIf = traceOptIf Opt_D_dump_if_trace\ntraceHiDiffs = traceOptIf Opt_D_dump_hi_diffs\n\n\ntraceOptIf :: DumpFlag -> SDoc -> TcRnIf m n () -- No RdrEnv available, so qualify everything\ntraceOptIf flag doc = whenDOptM flag $\n do dflags <- getDynFlags\n liftIO (printInfoForUser dflags alwaysQualify doc)\n\ntraceOptTcRn :: DumpFlag -> SDoc -> TcRn ()\n-- Output the message, with current location if opt_PprStyle_Debug\ntraceOptTcRn flag doc = whenDOptM flag $ do\n { loc <- getSrcSpanM\n ; let real_doc\n | opt_PprStyle_Debug = mkLocMessage SevInfo loc doc\n | otherwise = doc -- The full location is\n -- usually way too much\n ; dumpTcRn real_doc }\n\ndumpTcRn :: SDoc -> TcRn ()\ndumpTcRn doc = do { rdr_env <- getGlobalRdrEnv\n ; dflags <- getDynFlags\n ; liftIO (printInfoForUser dflags (mkPrintUnqualified dflags rdr_env) doc) }\n\ndebugDumpTcRn :: SDoc -> TcRn ()\ndebugDumpTcRn doc | opt_NoDebugOutput = return ()\n | otherwise = dumpTcRn doc\n\ndumpOptTcRn :: DumpFlag -> SDoc -> TcRn ()\ndumpOptTcRn flag doc = whenDOptM flag (dumpTcRn doc)\n\\end{code}\n\n\n%************************************************************************\n%* *\n Typechecker global environment\n%* *\n%************************************************************************\n\n\\begin{code}\nsetModule :: Module -> TcRn a -> TcRn a\nsetModule mod thing_inside = updGblEnv (\\env -> env { tcg_mod = mod }) thing_inside\n\ngetIsGHCi :: TcRn Bool\ngetIsGHCi = do { mod <- getModule\n ; return (isInteractiveModule mod) }\n\ngetGHCiMonad :: TcRn Name\ngetGHCiMonad = do { hsc <- getTopEnv; return (ic_monad $ hsc_IC hsc) }\n\ngetInteractivePrintName :: TcRn Name\ngetInteractivePrintName = do { hsc <- getTopEnv; return (ic_int_print $ hsc_IC hsc) }\n\ntcIsHsBoot :: TcRn Bool\ntcIsHsBoot = do { env <- getGblEnv; return (isHsBoot (tcg_src env)) }\n\ngetGlobalRdrEnv :: TcRn 
GlobalRdrEnv\ngetGlobalRdrEnv = do { env <- getGblEnv; return (tcg_rdr_env env) }\n\ngetRdrEnvs :: TcRn (GlobalRdrEnv, LocalRdrEnv)\ngetRdrEnvs = do { (gbl,lcl) <- getEnvs; return (tcg_rdr_env gbl, tcl_rdr lcl) }\n\ngetImports :: TcRn ImportAvails\ngetImports = do { env <- getGblEnv; return (tcg_imports env) }\n\ngetFixityEnv :: TcRn FixityEnv\ngetFixityEnv = do { env <- getGblEnv; return (tcg_fix_env env) }\n\nextendFixityEnv :: [(Name,FixItem)] -> RnM a -> RnM a\nextendFixityEnv new_bit\n = updGblEnv (\\env@(TcGblEnv { tcg_fix_env = old_fix_env }) ->\n env {tcg_fix_env = extendNameEnvList old_fix_env new_bit})\n\ngetRecFieldEnv :: TcRn RecFieldEnv\ngetRecFieldEnv = do { env <- getGblEnv; return (tcg_field_env env) }\n\ngetDeclaredDefaultTys :: TcRn (Maybe [Type])\ngetDeclaredDefaultTys = do { env <- getGblEnv; return (tcg_default env) }\n\naddDependentFiles :: [FilePath] -> TcRn ()\naddDependentFiles fs = do\n ref <- fmap tcg_dependent_files getGblEnv\n dep_files <- readTcRef ref\n writeTcRef ref (fs ++ dep_files)\n\\end{code}\n\n%************************************************************************\n%* *\n Error management\n%* *\n%************************************************************************\n\n\\begin{code}\ngetSrcSpanM :: TcRn SrcSpan\n -- Avoid clash with Name.getSrcLoc\ngetSrcSpanM = do { env <- getLclEnv; return (tcl_loc env) }\n\nsetSrcSpan :: SrcSpan -> TcRn a -> TcRn a\nsetSrcSpan loc@(RealSrcSpan _) thing_inside\n = updLclEnv (\\env -> env { tcl_loc = loc }) thing_inside\n-- Don't overwrite useful info with useless:\nsetSrcSpan (UnhelpfulSpan _) thing_inside = thing_inside\n\naddLocM :: (a -> TcM b) -> Located a -> TcM b\naddLocM fn (L loc a) = setSrcSpan loc $ fn a\n\nwrapLocM :: (a -> TcM b) -> Located a -> TcM (Located b)\nwrapLocM fn (L loc a) = setSrcSpan loc $ do b <- fn a; return (L loc b)\n\nwrapLocFstM :: (a -> TcM (b,c)) -> Located a -> TcM (Located b, c)\nwrapLocFstM fn (L loc a) =\n setSrcSpan loc $ do\n (b,c) <- fn a\n return 
(L loc b, c)\n\nwrapLocSndM :: (a -> TcM (b,c)) -> Located a -> TcM (b, Located c)\nwrapLocSndM fn (L loc a) =\n setSrcSpan loc $ do\n (b,c) <- fn a\n return (b, L loc c)\n\\end{code}\n\nReporting errors\n\n\\begin{code}\ngetErrsVar :: TcRn (TcRef Messages)\ngetErrsVar = do { env <- getLclEnv; return (tcl_errs env) }\n\nsetErrsVar :: TcRef Messages -> TcRn a -> TcRn a\nsetErrsVar v = updLclEnv (\\ env -> env { tcl_errs = v })\n\naddErr :: MsgDoc -> TcRn () -- Ignores the context stack\naddErr msg = do { loc <- getSrcSpanM; addErrAt loc msg }\n\nfailWith :: MsgDoc -> TcRn a\nfailWith msg = addErr msg >> failM\n\naddErrAt :: SrcSpan -> MsgDoc -> TcRn ()\n-- addErrAt is mainly (exclusively?) used by the renamer, where\n-- tidying is not an issue, but it's all lazy so the extra\n-- work doesn't matter\naddErrAt loc msg = do { ctxt <- getErrCtxt\n ; tidy_env <- tcInitTidyEnv\n ; err_info <- mkErrInfo tidy_env ctxt\n ; addLongErrAt loc msg err_info }\n\naddErrs :: [(SrcSpan,MsgDoc)] -> TcRn ()\naddErrs msgs = mapM_ add msgs\n where\n add (loc,msg) = addErrAt loc msg\n\ncheckErr :: Bool -> MsgDoc -> TcRn ()\n-- Add the error if the bool is False\ncheckErr ok msg = unless ok (addErr msg)\n\nwarnIf :: Bool -> MsgDoc -> TcRn ()\nwarnIf True msg = addWarn msg\nwarnIf False _ = return ()\n\naddMessages :: Messages -> TcRn ()\naddMessages (m_warns, m_errs)\n = do { errs_var <- getErrsVar ;\n (warns, errs) <- readTcRef errs_var ;\n writeTcRef errs_var (warns `unionBags` m_warns,\n errs `unionBags` m_errs) }\n\ndiscardWarnings :: TcRn a -> TcRn a\n-- Ignore warnings inside the thing inside;\n-- used to ignore-unused-variable warnings inside derived code\ndiscardWarnings thing_inside\n = do { errs_var <- getErrsVar\n ; (old_warns, _) <- readTcRef errs_var ;\n\n ; result <- thing_inside\n\n -- Revert warnings to old_warns\n ; (_new_warns, new_errs) <- readTcRef errs_var\n ; writeTcRef errs_var (old_warns, new_errs) \n\n ; return result 
}\n\\end{code}\n\n\n%************************************************************************\n%* *\n Shared error message stuff: renamer and typechecker\n%* *\n%************************************************************************\n\n\\begin{code}\nmkLongErrAt :: SrcSpan -> MsgDoc -> MsgDoc -> TcRn ErrMsg\nmkLongErrAt loc msg extra\n = do { rdr_env <- getGlobalRdrEnv ;\n dflags <- getDynFlags ;\n return $ mkLongErrMsg dflags loc (mkPrintUnqualified dflags rdr_env) msg extra }\n\naddLongErrAt :: SrcSpan -> MsgDoc -> MsgDoc -> TcRn ()\naddLongErrAt loc msg extra = mkLongErrAt loc msg extra >>= reportError\n\nreportErrors :: [ErrMsg] -> TcM ()\nreportErrors = mapM_ reportError\n\nreportError :: ErrMsg -> TcRn ()\nreportError err\n = do { traceTc \"Adding error:\" (pprLocErrMsg err) ;\n errs_var <- getErrsVar ;\n (warns, errs) <- readTcRef errs_var ;\n writeTcRef errs_var (warns, errs `snocBag` err) }\n\nreportWarning :: ErrMsg -> TcRn ()\nreportWarning warn\n = do { traceTc \"Adding warning:\" (pprLocErrMsg warn) ;\n errs_var <- getErrsVar ;\n (warns, errs) <- readTcRef errs_var ;\n writeTcRef errs_var (warns `snocBag` warn, errs) }\n\ndumpDerivingInfo :: SDoc -> TcM ()\ndumpDerivingInfo doc\n = do { dflags <- getDynFlags\n ; when (dopt Opt_D_dump_deriv dflags) $ do\n { rdr_env <- getGlobalRdrEnv\n ; let unqual = mkPrintUnqualified dflags rdr_env\n ; liftIO (putMsgWith dflags unqual doc) } }\n\\end{code}\n\n\n\\begin{code}\ntry_m :: TcRn r -> TcRn (Either IOEnvFailure r)\n-- Does try_m, with a debug-trace on failure\ntry_m thing\n = do { mb_r <- tryM thing ;\n case mb_r of\n Left exn -> do { traceTc \"tryTc\/recoverM recovering from\" $\n text (showException exn)\n ; return mb_r }\n Right _ -> return mb_r }\n\n-----------------------\nrecoverM :: TcRn r -- Recovery action; do this if the main one fails\n -> TcRn r -- Main action: do this first\n -> TcRn r\n-- Errors in 'thing' are retained\nrecoverM recover thing\n = do { mb_res <- try_m thing ;\n case mb_res of\n 
Left _ -> recover
 Right res -> return res }


-----------------------
mapAndRecoverM :: (a -> TcRn b) -> [a] -> TcRn [b]
-- Drop elements of the input that fail, so the result
-- list can be shorter than the argument list
mapAndRecoverM _ [] = return []
mapAndRecoverM f (x:xs) = do { mb_r <- try_m (f x)
 ; rs <- mapAndRecoverM f xs
 ; return (case mb_r of
 Left _ -> rs
 Right r -> r:rs) }

-- | Succeeds if applying the argument to all members of the list succeeds,
-- but nevertheless runs it on all arguments, to collect all errors.
mapAndReportM :: (a -> TcRn b) -> [a] -> TcRn [b]
mapAndReportM f xs = checkNoErrs (mapAndRecoverM f xs)

-----------------------
tryTc :: TcRn a -> TcRn (Messages, Maybe a)
-- (tryTc m) executes m, and returns
-- Just r, if m succeeds (returning r)
-- Nothing, if m fails
-- It also returns all the errors and warnings accumulated by m
-- It always succeeds (never raises an exception)
tryTc m
 = do { errs_var <- newTcRef emptyMessages ;
 res <- try_m (setErrsVar errs_var m) ;
 msgs <- readTcRef errs_var ;
 return (msgs, case res of
 Left _ -> Nothing
 Right val -> Just val)
 -- The exception is always the IOEnv built-in
 -- exception; see IOEnv.failM
 }

-----------------------
tryTcErrs :: TcRn a -> TcRn (Messages, Maybe a)
-- Run the thing, returning
-- Just r, if m succeeds with no error messages
-- Nothing, if m fails, or if it succeeds but has error messages
-- Either way, the messages are returned; even in the Just case
-- there might be warnings
tryTcErrs thing
 = do { (msgs, res) <- tryTc thing
 ; dflags <- getDynFlags
 ; let errs_found = errorsFound dflags msgs
 ; return (msgs, case res of
 Nothing -> Nothing
 Just val | errs_found -> Nothing
 | otherwise -> Just val)
 }

-----------------------
tryTcLIE :: TcM a -> TcM (Messages, Maybe a)
-- Just like tryTcErrs, except that it ensures that the LIE
-- for the thing is propagated only if there are no errors
-- Hence
it's restricted to the type-check monad
tryTcLIE thing_inside
 = do { ((msgs, mb_res), lie) <- captureConstraints (tryTcErrs thing_inside) ;
 ; case mb_res of
 Nothing -> return (msgs, Nothing)
 Just val -> do { emitConstraints lie; return (msgs, Just val) }
 }

-----------------------
tryTcLIE_ :: TcM r -> TcM r -> TcM r
-- (tryTcLIE_ r m) tries m;
-- if m succeeds with no error messages, it's the answer
-- otherwise tryTcLIE_ drops everything from m and tries r instead.
tryTcLIE_ recover main
 = do { (msgs, mb_res) <- tryTcLIE main
 ; case mb_res of
 Just val -> do { addMessages msgs -- There might be warnings
 ; return val }
 Nothing -> recover -- Discard all msgs
 }

-----------------------
checkNoErrs :: TcM r -> TcM r
-- (checkNoErrs m) succeeds iff m succeeds and generates no errors
-- If m fails then (checkNoErrs m) fails.
-- If m succeeds, it checks whether m generated any error messages
-- (it might have recovered internally)
-- If so, it fails too.
-- Regardless, any errors generated by m are propagated to the enclosing context.
checkNoErrs main
 = do { (msgs, mb_res) <- tryTcLIE main
 ; addMessages msgs
 ; case mb_res of
 Nothing -> failM
 Just val -> return val
 }

ifErrsM :: TcRn r -> TcRn r -> TcRn r
-- ifErrsM bale_out normal
-- does 'bale_out' if there are errors in the errors collection
-- otherwise does 'normal'
ifErrsM bale_out normal
 = do { errs_var <- getErrsVar ;
 msgs <- readTcRef errs_var ;
 dflags <- getDynFlags ;
 if errorsFound dflags msgs then
 bale_out
 else
 normal }

failIfErrsM :: TcRn ()
-- Useful to avoid error cascades
failIfErrsM = ifErrsM failM (return ())

checkTH :: Outputable a => a -> String -> TcRn ()
#ifdef GHCI
checkTH _ _ = return () -- OK
#else
checkTH e what = failTH e what -- Raise an error in a stage-1 compiler
#endif

failTH :: Outputable a => a -> String -> TcRn x
failTH e what -- Raise an error in a stage-1 compiler
 = failWithTc (vcat [ hang (char
'A' <+> text what\n <+> ptext (sLit \"requires GHC with interpreter support:\"))\n 2 (ppr e)\n , ptext (sLit \"Perhaps you are using a stage-1 compiler?\") ])\n\\end{code}\n\n\n%************************************************************************\n%* *\n Context management for the type checker\n%* *\n%************************************************************************\n\n\\begin{code}\ngetErrCtxt :: TcM [ErrCtxt]\ngetErrCtxt = do { env <- getLclEnv; return (tcl_ctxt env) }\n\nsetErrCtxt :: [ErrCtxt] -> TcM a -> TcM a\nsetErrCtxt ctxt = updLclEnv (\\ env -> env { tcl_ctxt = ctxt })\n\naddErrCtxt :: MsgDoc -> TcM a -> TcM a\naddErrCtxt msg = addErrCtxtM (\\env -> return (env, msg))\n\naddErrCtxtM :: (TidyEnv -> TcM (TidyEnv, MsgDoc)) -> TcM a -> TcM a\naddErrCtxtM ctxt = updCtxt (\\ ctxts -> (False, ctxt) : ctxts)\n\naddLandmarkErrCtxt :: MsgDoc -> TcM a -> TcM a\naddLandmarkErrCtxt msg = updCtxt (\\ctxts -> (True, \\env -> return (env,msg)) : ctxts)\n\n-- Helper function for the above\nupdCtxt :: ([ErrCtxt] -> [ErrCtxt]) -> TcM a -> TcM a\nupdCtxt upd = updLclEnv (\\ env@(TcLclEnv { tcl_ctxt = ctxt }) ->\n env { tcl_ctxt = upd ctxt })\n\npopErrCtxt :: TcM a -> TcM a\npopErrCtxt = updCtxt (\\ msgs -> case msgs of { [] -> []; (_ : ms) -> ms })\n\ngetCtLoc :: CtOrigin -> TcM CtLoc\ngetCtLoc origin\n = do { env <- getLclEnv \n ; return (CtLoc { ctl_origin = origin\n , ctl_env = env\n , ctl_depth = initialSubGoalDepth }) }\n\nsetCtLoc :: CtLoc -> TcM a -> TcM a\n-- Set the SrcSpan and error context from the CtLoc\nsetCtLoc (CtLoc { ctl_env = lcl }) thing_inside\n = updLclEnv (\\env -> env { tcl_loc = tcl_loc lcl\n , tcl_bndrs = tcl_bndrs lcl\n , tcl_ctxt = tcl_ctxt lcl }) \n thing_inside\n\\end{code}\n\n%************************************************************************\n%* *\n Error message generation (type checker)\n%* *\n%************************************************************************\n\n The addErrTc functions add an error message, but do not 
cause failure.\n The 'M' variants pass a TidyEnv that has already been used to\n tidy up the message; we then use it to tidy the context messages\n\n\\begin{code}\naddErrTc :: MsgDoc -> TcM ()\naddErrTc err_msg = do { env0 <- tcInitTidyEnv\n ; addErrTcM (env0, err_msg) }\n\naddErrsTc :: [MsgDoc] -> TcM ()\naddErrsTc err_msgs = mapM_ addErrTc err_msgs\n\naddErrTcM :: (TidyEnv, MsgDoc) -> TcM ()\naddErrTcM (tidy_env, err_msg)\n = do { ctxt <- getErrCtxt ;\n loc <- getSrcSpanM ;\n add_err_tcm tidy_env err_msg loc ctxt }\n\n-- Return the error message, instead of reporting it straight away\nmkErrTcM :: (TidyEnv, MsgDoc) -> TcM ErrMsg\nmkErrTcM (tidy_env, err_msg)\n = do { ctxt <- getErrCtxt ;\n loc <- getSrcSpanM ;\n err_info <- mkErrInfo tidy_env ctxt ;\n mkLongErrAt loc err_msg err_info }\n\\end{code}\n\nThe failWith functions add an error message and cause failure\n\n\\begin{code}\nfailWithTc :: MsgDoc -> TcM a -- Add an error message and fail\nfailWithTc err_msg\n = addErrTc err_msg >> failM\n\nfailWithTcM :: (TidyEnv, MsgDoc) -> TcM a -- Add an error message and fail\nfailWithTcM local_and_msg\n = addErrTcM local_and_msg >> failM\n\ncheckTc :: Bool -> MsgDoc -> TcM () -- Check that the boolean is true\ncheckTc True _ = return ()\ncheckTc False err = failWithTc err\n\\end{code}\n\n Warnings have no 'M' variant, nor failure\n\n\\begin{code}\nwarnTc :: Bool -> MsgDoc -> TcM ()\nwarnTc warn_if_true warn_msg\n | warn_if_true = addWarnTc warn_msg\n | otherwise = return ()\n\naddWarnTc :: MsgDoc -> TcM ()\naddWarnTc msg = do { env0 <- tcInitTidyEnv\n ; addWarnTcM (env0, msg) }\n\naddWarnTcM :: (TidyEnv, MsgDoc) -> TcM ()\naddWarnTcM (env0, msg)\n = do { ctxt <- getErrCtxt ;\n err_info <- mkErrInfo env0 ctxt ;\n add_warn msg err_info }\n\naddWarn :: MsgDoc -> TcRn ()\naddWarn msg = add_warn msg empty\n\naddWarnAt :: SrcSpan -> MsgDoc -> TcRn ()\naddWarnAt loc msg = add_warn_at loc msg empty\n\nadd_warn :: MsgDoc -> MsgDoc -> TcRn ()\nadd_warn msg extra_info \n = do { loc 
<- getSrcSpanM\n ; add_warn_at loc msg extra_info }\n\nadd_warn_at :: SrcSpan -> MsgDoc -> MsgDoc -> TcRn ()\nadd_warn_at loc msg extra_info\n = do { rdr_env <- getGlobalRdrEnv ;\n dflags <- getDynFlags ;\n let { warn = mkLongWarnMsg dflags loc (mkPrintUnqualified dflags rdr_env)\n msg extra_info } ;\n reportWarning warn }\n\ntcInitTidyEnv :: TcM TidyEnv\ntcInitTidyEnv\n = do { lcl_env <- getLclEnv\n ; return (tcl_tidy lcl_env) }\n\\end{code}\n\n-----------------------------------\n Other helper functions\n\n\\begin{code}\nadd_err_tcm :: TidyEnv -> MsgDoc -> SrcSpan\n -> [ErrCtxt]\n -> TcM ()\nadd_err_tcm tidy_env err_msg loc ctxt\n = do { err_info <- mkErrInfo tidy_env ctxt ;\n addLongErrAt loc err_msg err_info }\n\nmkErrInfo :: TidyEnv -> [ErrCtxt] -> TcM SDoc\n-- Tidy the error info, trimming excessive contexts\nmkErrInfo env ctxts\n-- | opt_PprStyle_Debug -- In -dppr-debug style the output\n-- = return empty -- just becomes too voluminous\n | otherwise\n = go 0 env ctxts\n where\n go :: Int -> TidyEnv -> [ErrCtxt] -> TcM SDoc\n go _ _ [] = return empty\n go n env ((is_landmark, ctxt) : ctxts)\n | is_landmark || n < mAX_CONTEXTS -- Too verbose || opt_PprStyle_Debug\n = do { (env', msg) <- ctxt env\n ; let n' = if is_landmark then n else n+1\n ; rest <- go n' env' ctxts\n ; return (msg $$ rest) }\n | otherwise\n = go n env ctxts\n\nmAX_CONTEXTS :: Int -- No more than this number of non-landmark contexts\nmAX_CONTEXTS = 3\n\\end{code}\n\ndebugTc is useful for monadic debugging code\n\n\\begin{code}\ndebugTc :: TcM () -> TcM ()\ndebugTc thing\n | debugIsOn = thing\n | otherwise = return ()\n\\end{code}\n\n%************************************************************************\n%* *\n Type constraints\n%* *\n%************************************************************************\n\n\\begin{code}\nnewTcEvBinds :: TcM EvBindsVar\nnewTcEvBinds = do { ref <- newTcRef emptyEvBindMap\n ; uniq <- newUnique\n ; return (EvBindsVar ref uniq) }\n\naddTcEvBind :: EvBindsVar 
-> EvVar -> EvTerm -> TcM ()\n-- Add a binding to the TcEvBinds by side effect\naddTcEvBind (EvBindsVar ev_ref _) var t\n = do { bnds <- readTcRef ev_ref\n ; writeTcRef ev_ref (extendEvBinds bnds var t) }\n\ngetTcEvBinds :: EvBindsVar -> TcM (Bag EvBind)\ngetTcEvBinds (EvBindsVar ev_ref _) \n = do { bnds <- readTcRef ev_ref\n ; return (evBindMapBinds bnds) }\n\nchooseUniqueOccTc :: (OccSet -> OccName) -> TcM OccName\nchooseUniqueOccTc fn =\n do { env <- getGblEnv\n ; let dfun_n_var = tcg_dfun_n env\n ; set <- readTcRef dfun_n_var\n ; let occ = fn set\n ; writeTcRef dfun_n_var (extendOccSet set occ)\n ; return occ }\n\ngetConstraintVar :: TcM (TcRef WantedConstraints)\ngetConstraintVar = do { env <- getLclEnv; return (tcl_lie env) }\n\nsetConstraintVar :: TcRef WantedConstraints -> TcM a -> TcM a\nsetConstraintVar lie_var = updLclEnv (\\ env -> env { tcl_lie = lie_var })\n\nemitConstraints :: WantedConstraints -> TcM ()\nemitConstraints ct\n = do { lie_var <- getConstraintVar ;\n updTcRef lie_var (`andWC` ct) }\n\nemitFlat :: Ct -> TcM ()\nemitFlat ct\n = do { lie_var <- getConstraintVar ;\n updTcRef lie_var (`addFlats` unitBag ct) }\n\nemitFlats :: Cts -> TcM ()\nemitFlats cts\n = do { lie_var <- getConstraintVar ;\n updTcRef lie_var (`addFlats` cts) }\n \nemitImplication :: Implication -> TcM ()\nemitImplication ct\n = do { lie_var <- getConstraintVar ;\n updTcRef lie_var (`addImplics` unitBag ct) }\n\nemitImplications :: Bag Implication -> TcM ()\nemitImplications ct\n = do { lie_var <- getConstraintVar ;\n updTcRef lie_var (`addImplics` ct) }\n\nemitInsoluble :: Ct -> TcM ()\nemitInsoluble ct\n = do { lie_var <- getConstraintVar ;\n updTcRef lie_var (`addInsols` unitBag ct) ;\n v <- readTcRef lie_var ;\n traceTc \"emitInsoluble\" (ppr v) }\n\ncaptureConstraints :: TcM a -> TcM (a, WantedConstraints)\n-- (captureConstraints m) runs m, and returns the type constraints it generates\ncaptureConstraints thing_inside\n = do { lie_var <- newTcRef emptyWC ;\n res <- 
updLclEnv (\\ env -> env { tcl_lie = lie_var })\n thing_inside ;\n lie <- readTcRef lie_var ;\n return (res, lie) }\n\ncaptureUntouchables :: TcM a -> TcM (a, Untouchables)\ncaptureUntouchables thing_inside\n = do { env <- getLclEnv\n ; let untch' = pushUntouchables (tcl_untch env)\n ; res <- setLclEnv (env { tcl_untch = untch' })\n thing_inside\n ; return (res, untch') }\n\ngetUntouchables :: TcM Untouchables\ngetUntouchables = do { env <- getLclEnv\n ; return (tcl_untch env) }\n\nsetUntouchables :: Untouchables -> TcM a -> TcM a\nsetUntouchables untch thing_inside \n = updLclEnv (\\env -> env { tcl_untch = untch }) thing_inside\n\nisTouchableTcM :: TcTyVar -> TcM Bool\nisTouchableTcM tv\n = do { env <- getLclEnv\n ; return (isTouchableMetaTyVar (tcl_untch env) tv) }\n\ngetLclTypeEnv :: TcM TcTypeEnv\ngetLclTypeEnv = do { env <- getLclEnv; return (tcl_env env) }\n\nsetLclTypeEnv :: TcLclEnv -> TcM a -> TcM a\n-- Set the local type envt, but do *not* disturb other fields,\n-- notably the lie_var\nsetLclTypeEnv lcl_env thing_inside\n = updLclEnv upd thing_inside\n where\n upd env = env { tcl_env = tcl_env lcl_env,\n tcl_tyvars = tcl_tyvars lcl_env }\n\ntraceTcConstraints :: String -> TcM ()\ntraceTcConstraints msg\n = do { lie_var <- getConstraintVar\n ; lie <- readTcRef lie_var\n ; traceTc (msg ++ \": LIE:\") (ppr lie)\n }\n\\end{code}\n\n\n%************************************************************************\n%* *\n Template Haskell context\n%* *\n%************************************************************************\n\n\\begin{code}\nrecordThUse :: TcM ()\nrecordThUse = do { env <- getGblEnv; writeTcRef (tcg_th_used env) True }\n\nrecordThSpliceUse :: TcM ()\nrecordThSpliceUse = do { env <- getGblEnv; writeTcRef (tcg_th_splice_used env) True }\n\nkeepAlive :: Name -> TcRn () -- Record the name in the keep-alive set\nkeepAlive name\n = do { env <- getGblEnv\n ; traceRn (ptext (sLit \"keep alive\") <+> ppr name)\n ; updTcRef (tcg_keep env) (`addOneToNameSet` 
name) }\n\ngetStage :: TcM ThStage\ngetStage = do { env <- getLclEnv; return (tcl_th_ctxt env) }\n\ngetStageAndBindLevel :: Name -> TcRn (Maybe (TopLevelFlag, ThLevel, ThStage))\ngetStageAndBindLevel name\n = do { env <- getLclEnv;\n ; case lookupNameEnv (tcl_th_bndrs env) name of\n Nothing -> return Nothing\n Just (top_lvl, bind_lvl) -> return (Just (top_lvl, bind_lvl, tcl_th_ctxt env)) }\n\nsetStage :: ThStage -> TcM a -> TcRn a\nsetStage s = updLclEnv (\\ env -> env { tcl_th_ctxt = s })\n\\end{code}\n\n\n%************************************************************************\n%* *\n Safe Haskell context\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | Mark that safe inference has failed\nrecordUnsafeInfer :: TcM ()\nrecordUnsafeInfer = getGblEnv >>= \\env -> writeTcRef (tcg_safeInfer env) False\n\n-- | Figure out the final correct safe haskell mode\nfinalSafeMode :: DynFlags -> TcGblEnv -> IO SafeHaskellMode\nfinalSafeMode dflags tcg_env = do\n safeInf <- readIORef (tcg_safeInfer tcg_env)\n return $ if safeInferOn dflags && not safeInf\n then Sf_None\n else safeHaskell dflags\n\\end{code}\n\n\n%************************************************************************\n%* *\n Stuff for the renamer's local env\n%* *\n%************************************************************************\n\n\\begin{code}\ngetLocalRdrEnv :: RnM LocalRdrEnv\ngetLocalRdrEnv = do { env <- getLclEnv; return (tcl_rdr env) }\n\nsetLocalRdrEnv :: LocalRdrEnv -> RnM a -> RnM a\nsetLocalRdrEnv rdr_env thing_inside\n = updLclEnv (\\env -> env {tcl_rdr = rdr_env}) thing_inside\n\\end{code}\n\n\n%************************************************************************\n%* *\n Stuff for interface decls\n%* *\n%************************************************************************\n\n\\begin{code}\nmkIfLclEnv :: Module -> SDoc -> IfLclEnv\nmkIfLclEnv mod loc = IfLclEnv { if_mod = mod,\n if_loc = loc,\n if_tv_env = emptyUFM,\n if_id_env 
= emptyUFM }

initIfaceTcRn :: IfG a -> TcRn a
initIfaceTcRn thing_inside
 = do { tcg_env <- getGblEnv
 ; let { if_env = IfGblEnv { if_rec_types = Just (tcg_mod tcg_env, get_type_env) }
 ; get_type_env = readTcRef (tcg_type_env_var tcg_env) }
 ; setEnvs (if_env, ()) thing_inside }

initIfaceCheck :: HscEnv -> IfG a -> IO a
-- Used when checking the up-to-date-ness of the old Iface
-- Initialise the environment with no useful info at all
initIfaceCheck hsc_env do_this
 = do let rec_types = case hsc_type_env_var hsc_env of
 Just (mod,var) -> Just (mod, readTcRef var)
 Nothing -> Nothing
 gbl_env = IfGblEnv { if_rec_types = rec_types }
 initTcRnIf 'i' hsc_env gbl_env () do_this

initIfaceTc :: ModIface
 -> (TcRef TypeEnv -> IfL a) -> TcRnIf gbl lcl a
-- Used when type-checking an up-to-date interface file
-- No type environment from the current module, but we do know the module dependencies
initIfaceTc iface do_this
 = do { tc_env_var <- newTcRef emptyTypeEnv
 ; let { gbl_env = IfGblEnv { if_rec_types = Just (mod, readTcRef tc_env_var) } ;
 ; if_lenv = mkIfLclEnv mod doc
 }
 ; setEnvs (gbl_env, if_lenv) (do_this tc_env_var)
 }
 where
 mod = mi_module iface
 doc = ptext (sLit "The interface for") <+> quotes (ppr mod)

initIfaceLcl :: Module -> SDoc -> IfL a -> IfM lcl a
initIfaceLcl mod loc_doc thing_inside
 = setLclEnv (mkIfLclEnv mod loc_doc) thing_inside

getIfModule :: IfL Module
getIfModule = do { env <- getLclEnv; return (if_mod env) }

--------------------
failIfM :: MsgDoc -> IfL a
-- The Iface monad doesn't have a place to accumulate errors, so we
-- just fall over fast if one happens; it "shouldn't happen".
-- We use IfL here so that we can get context info out of the local env
failIfM msg
 = do { env <- getLclEnv
 ; let full_msg = (if_loc env <> colon) $$ nest 2 msg
 ; dflags <- getDynFlags
 ; liftIO (log_action dflags dflags SevFatal noSrcSpan (defaultErrStyle dflags) full_msg)
 ; failM
}\n\n--------------------\nforkM_maybe :: SDoc -> IfL a -> IfL (Maybe a)\n-- Run thing_inside in an interleaved thread.\n-- It shares everything with the parent thread, so this is DANGEROUS.\n--\n-- It returns Nothing if the computation fails\n--\n-- It's used for lazily type-checking interface\n-- signatures, which is pretty benign\n\nforkM_maybe doc thing_inside\n -- NB: Don't share the mutable env_us with the interleaved thread since env_us\n -- does not get updated atomically (e.g. in newUnique and newUniqueSupply).\n = do { child_us <- newUniqueSupply\n ; child_env_us <- newMutVar child_us\n -- see Note [Masking exceptions in forkM_maybe]\n ; unsafeInterleaveM $ uninterruptibleMaskM_ $ updEnv (\\env -> env { env_us = child_env_us }) $\n do { traceIf (text \"Starting fork {\" <+> doc)\n ; mb_res <- tryM $\n updLclEnv (\\env -> env { if_loc = if_loc env $$ doc }) $\n thing_inside\n ; case mb_res of\n Right r -> do { traceIf (text \"} ending fork\" <+> doc)\n ; return (Just r) }\n Left exn -> do {\n\n -- Bleat about errors in the forked thread, if -ddump-if-trace is on\n -- Otherwise we silently discard errors. Errors can legitimately\n -- happen when compiling interface signatures (see tcInterfaceSigs)\n whenDOptM Opt_D_dump_if_trace $ do\n dflags <- getDynFlags\n let msg = hang (text \"forkM failed:\" <+> doc)\n 2 (text (show exn))\n liftIO $ log_action dflags dflags SevFatal noSrcSpan (defaultErrStyle dflags) msg\n\n ; traceIf (text \"} ending fork (badly)\" <+> doc)\n ; return Nothing }\n }}\n\nforkM :: SDoc -> IfL a -> IfL a\nforkM doc thing_inside\n = do { mb_res <- forkM_maybe doc thing_inside\n ; return (case mb_res of\n Nothing -> pgmError \"Cannot continue after interface file error\"\n -- pprPanic \"forkM\" doc\n Just r -> r) }\n\\end{code}\n\nNote [Masking exceptions in forkM_maybe]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWhen using GHC-as-API it must be possible to interrupt snippets of code\nexecuted using runStmt (#1381). 
Since commit 02c4ab04 this is almost possible
by throwing an asynchronous interrupt to the GHC thread. However, there is a
subtle problem: runStmt first typechecks the code before running it, and the
exception might interrupt the type checker rather than the code. Moreover, the
typechecker might be inside an unsafeInterleaveIO (through forkM_maybe), and
more importantly might be inside an exception handler inside that
unsafeInterleaveIO. If that is the case, the exception handler will rethrow the
asynchronous exception as a synchronous exception, and the exception will end
up as the value of the unsafeInterleaveIO thunk (see #8006 for a detailed
discussion). We don't currently know a general solution to this problem, but
we can use uninterruptibleMask_ to avoid the situation.

%
% (c) The University of Glasgow 2006
% (c) The GRASP/AQUA Project, Glasgow University, 1992-1998
%
% Code generation for tail calls.

\begin{code}
{-# OPTIONS -fno-warn-tabs #-}
-- The above warning suppression flag is a temporary kludge.
-- While working on this module you are encouraged to remove it and
-- detab the module (please do the detabbing in a separate patch).
See\n-- http:\/\/hackage.haskell.org\/trac\/ghc\/wiki\/Commentary\/CodingStyle#TabsvsSpaces\n-- for details\n\nmodule CgTailCall (\n\tcgTailCall, performTailCall,\n\tperformReturn, performPrimReturn,\n\treturnUnboxedTuple, ccallReturnUnboxedTuple,\n\tpushUnboxedTuple,\n\ttailCallPrimOp,\n tailCallPrimCall,\n\n\tpushReturnAddress\n ) where\n\n#include \"HsVersions.h\"\n\nimport CgMonad\nimport CgBindery\nimport CgInfoTbls\nimport CgCallConv\nimport CgStackery\nimport CgHeapery\nimport CgUtils\nimport CgTicky\nimport ClosureInfo\nimport OldCmm\t\nimport OldCmmUtils\nimport CLabel\nimport Type\nimport Id\nimport StgSyn\nimport PrimOp\nimport Outputable\nimport StaticFlags\n\nimport Control.Monad\n\n-----------------------------------------------------------------------------\n-- Tail Calls\n\ncgTailCall :: Id -> [StgArg] -> Code\n\n-- Here's the code we generate for a tail call. (NB there may be no\n-- arguments, in which case this boils down to just entering a variable.)\n-- \n-- *\tPut args in the top locations of the stack.\n-- *\tAdjust the stack ptr\n-- *\tMake R1 point to the function closure if necessary.\n-- *\tPerform the call.\n--\n-- Things to be careful about:\n--\n-- *\tDon't overwrite stack locations before you have finished with\n-- \tthem (remember you need the function and the as-yet-unmoved\n-- \targuments).\n-- *\tPreferably, generate no code to replace x by x on the stack (a\n-- \tcommon situation in tail-recursion).\n-- *\tAdjust the stack high water mark appropriately.\n-- \n-- Treat unboxed locals exactly like literals (above) except use the addr\n-- mode for the local instead of (CLit lit) in the assignment.\n\ncgTailCall fun args\n = do\t{ fun_info <- getCgIdInfo fun\n\n\t; if isUnLiftedType (idType fun)\n\t then \t-- Primitive return\n\t\tASSERT( null args )\n\t do\t{ fun_amode <- idInfoToAmode fun_info\n\t\t; performPrimReturn (cgIdInfoArgRep fun_info) fun_amode } \n\n\t else -- Normal case, fun is boxed\n\t do { arg_amodes <- getArgAmodes 
args\n\t\t; performTailCall fun_info arg_amodes noStmts }\n\t}\n\t\t\n\n-- -----------------------------------------------------------------------------\n-- The guts of a tail-call\n\nperformTailCall \n\t:: CgIdInfo\t\t-- The function\n\t-> [(CgRep,CmmExpr)]\t-- Args\n\t-> CmmStmts\t\t-- Pending simultaneous assignments\n\t\t\t\t-- *** GUARANTEED to contain only stack assignments.\n\t-> Code\n\nperformTailCall fun_info arg_amodes pending_assts\n | Just join_sp <- maybeLetNoEscape fun_info\n = \t -- A let-no-escape is slightly different, because we\n\t -- arrange the stack arguments into pointers and non-pointers\n\t -- to make the heap check easier. The tail-call sequence\n\t -- is very similar to returning an unboxed tuple, so we\n\t -- share some code.\n do\t{ (final_sp, arg_assts) <- pushUnboxedTuple join_sp arg_amodes\n\t; emitSimultaneously (pending_assts `plusStmts` arg_assts)\n\t; let lbl = enterReturnPtLabel (idUnique (cgIdInfoId fun_info))\n\t; doFinalJump final_sp True {- Is LNE -} (jumpToLbl lbl) }\n\n | otherwise\n = do \t{ fun_amode <- idInfoToAmode fun_info\n\t; let assignSt = CmmAssign nodeReg fun_amode\n node_asst = oneStmt assignSt\n\t opt_node_asst | nodeMustPointToIt lf_info = node_asst\n\t\t\t | otherwise\t\t = noStmts\n\t; EndOfBlockInfo sp _ <- getEndOfBlockInfo\n\n\t; dflags <- getDynFlags\n\t; case (getCallMethod dflags fun_name fun_has_cafs lf_info (length arg_amodes)) of\n\n\t -- Node must always point to things we enter\n\t EnterIt -> do\n\t\t{ emitSimultaneously (node_asst `plusStmts` pending_assts) \n\t\t; let target = entryCode (closureInfoPtr (CmmReg nodeReg))\n enterClosure = stmtC (CmmJump target [])\n -- If this is a scrutinee\n -- let's check if the closure is a constructor\n -- so we can directly jump to the alternatives switch\n -- statement.\n jumpInstr = getEndOfBlockInfo >>=\n maybeSwitchOnCons enterClosure\n\t\t; doFinalJump sp False jumpInstr }\n \n\t -- A function, but we have zero arguments. 
It is already in WHNF,\n\t -- so we can just return it. \n\t -- As with any return, Node must point to it.\n\t ReturnIt -> do\n\t\t{ emitSimultaneously (node_asst `plusStmts` pending_assts)\n\t\t; doFinalJump sp False emitReturnInstr }\n \n\t -- A real constructor. Don't bother entering it, \n\t -- just do the right sort of return instead.\n\t -- As with any return, Node must point to it.\n\t ReturnCon _ -> do\n\t\t{ emitSimultaneously (node_asst `plusStmts` pending_assts)\n\t\t; doFinalJump sp False emitReturnInstr }\n\n\t JumpToIt lbl -> do\n\t\t{ emitSimultaneously (opt_node_asst `plusStmts` pending_assts)\n\t\t; doFinalJump sp False (jumpToLbl lbl) }\n \n\t -- A slow function call via the RTS apply routines\n\t -- Node must definitely point to the thing\n\t SlowCall -> do \n\t\t{ when (not (null arg_amodes)) $ do\n\t\t { if (isKnownFun lf_info) \n\t\t\tthen tickyKnownCallTooFewArgs\n\t\t\telse tickyUnknownCall\n\t\t ; tickySlowCallPat (map fst arg_amodes) \n\t\t }\n\n\t\t; let (apply_lbl, args, extra_args) \n\t\t\t= constructSlowCall arg_amodes\n\n\t\t; directCall sp apply_lbl args extra_args \n\t\t\t(node_asst `plusStmts` pending_assts)\n\n\t\t}\n \n\t -- A direct function call (possibly with some left-over arguments)\n\t DirectEntry lbl arity -> do\n\t\t{ if arity == length arg_amodes\n\t\t\tthen tickyKnownCallExact\n\t\t\telse do tickyKnownCallExtraArgs\n\t\t\t\ttickySlowCallPat (map fst (drop arity arg_amodes))\n\n \t\t; let\n\t\t -- The args beyond the arity go straight on the stack\n\t\t (arity_args, extra_args) = splitAt arity arg_amodes\n \n\t\t; directCall sp lbl arity_args extra_args\n\t\t\t(opt_node_asst `plusStmts` pending_assts)\n\t }\n\t}\n where\n fun_id = cgIdInfoId fun_info\n fun_name = idName fun_id\n lf_info = cgIdInfoLF fun_info\n fun_has_cafs = idCafInfo fun_id\n untag_node = CmmAssign nodeReg (cmmUntag (CmmReg nodeReg))\n -- Test if closure is a constructor\n maybeSwitchOnCons enterClosure eob\n | EndOfBlockInfo _ (CaseAlts lbl _ _) <- 
eob,\n not opt_SccProfilingOn\n -- we can't shortcut when profiling is on, because we have\n -- to enter a closure to mark it as \"used\" for LDV profiling\n = do { is_constr <- newLabelC\n -- Is the pointer tagged?\n -- Yes, jump to switch statement\n ; stmtC (CmmCondBranch (cmmIsTagged (CmmReg nodeReg)) \n is_constr)\n -- No, enter the closure.\n ; enterClosure\n ; labelC is_constr\n ; stmtC (CmmJump (entryCode $ CmmLit (CmmLabel lbl)) [])\n }\n{-\n -- This is a scrutinee for a case expression\n -- so let's see if we can directly inspect the closure\n | EndOfBlockInfo _ (CaseAlts lbl _ _ _) <- eob\n = do { no_cons <- newLabelC\n -- Both the NCG and gcc optimize away the temp\n ; z <- newTemp wordRep\n ; stmtC (CmmAssign z tag_expr)\n ; let tag = CmmReg z\n -- Is the closure a cons?\n ; stmtC (CmmCondBranch (cond1 tag) no_cons)\n ; stmtC (CmmCondBranch (cond2 tag) no_cons)\n -- Yes, jump to switch statement\n ; stmtC (CmmJump (CmmLit (CmmLabel lbl)) [])\n ; labelC no_cons\n -- No, enter the closure.\n ; enterClosure\n }\n-}\n -- No case expression involved, enter the closure.\n | otherwise\n = do { stmtC untag_node\n ; enterClosure\n }\n where\n --cond1 tag = cmmULtWord tag lowCons\n -- More efficient than the above?\n{-\n tag_expr = cmmGetClosureType (CmmReg nodeReg)\n cond1 tag = cmmEqWord tag (CmmLit (mkIntCLit 0))\n cond2 tag = cmmUGtWord tag highCons\n lowCons = CmmLit (mkIntCLit 1)\n -- CONSTR\n highCons = CmmLit (mkIntCLit 8)\n -- CONSTR_NOCAF_STATIC (from ClosureType.h)\n-}\n\ndirectCall :: VirtualSpOffset -> CLabel -> [(CgRep, CmmExpr)]\n -> [(CgRep, CmmExpr)] -> CmmStmts\n -> Code\ndirectCall sp lbl args extra_args assts = do\n let\n\t-- First chunk of args go in registers\n\t(reg_arg_amodes, stk_args) = assignCallRegs args\n \n\t-- Any \"extra\" arguments are placed in frames on the\n\t-- stack after the other arguments.\n\tslow_stk_args = slowArgs extra_args\n\n\treg_assts = assignToRegs reg_arg_amodes\n --\n (final_sp, stk_assts) <- mkStkAmodes sp 
(stk_args ++ slow_stk_args)\n\n emitSimultaneously (reg_assts `plusStmts`\n\t\t stk_assts `plusStmts`\n\t\t assts)\n\n doFinalJump final_sp False (jumpToLbl lbl)\n\n-- -----------------------------------------------------------------------------\n-- The final clean-up before we do a jump at the end of a basic block.\n-- This code is shared by tail-calls and returns.\n\ndoFinalJump :: VirtualSpOffset -> Bool -> Code -> Code \ndoFinalJump final_sp is_let_no_escape jump_code\n = do\t{ -- Adjust the high-water mark if necessary\n\t adjustStackHW final_sp\n\n\t-- Push a return address if necessary (after the assignments\n\t-- above, in case we clobber a live stack location)\n\t--\n\t-- DONT push the return address when we're about to jump to a\n\t-- let-no-escape: the final tail call in the let-no-escape\n\t-- will do this.\n\t; eob <- getEndOfBlockInfo\n\t; whenC (not is_let_no_escape) (pushReturnAddress eob)\n\n\t -- Final adjustment of Sp\/Hp\n\t; adjustSpAndHp final_sp\n\n\t -- and do the jump\n\t; jump_code }\n\n-- ----------------------------------------------------------------------------\n-- A general return (just a special case of doFinalJump, above)\n\nperformReturn :: Code\t-- The code to execute to actually do the return\n\t -> Code\n\nperformReturn finish_code\n = do { EndOfBlockInfo args_sp _sequel <- getEndOfBlockInfo\n\t; doFinalJump args_sp False{-not a LNE-} finish_code }\n\n-- ----------------------------------------------------------------------------\n-- Primitive Returns\n-- Just load the return value into the right register, and return.\n\nperformPrimReturn :: CgRep -> CmmExpr\t-- The thing to return\n\t\t -> Code\nperformPrimReturn rep amode\n = do { whenC (not (isVoidArg rep))\n\t\t(stmtC (CmmAssign ret_reg amode))\n\t; performReturn emitReturnInstr }\n where\n ret_reg = dataReturnConvPrim rep\n\n-- ---------------------------------------------------------------------------\n-- Unboxed tuple returns\n\n-- These are a bit like a normal tail call, 
except that:\n--\n-- - The tail-call target is an info table on the stack\n--\n-- - We separate stack arguments into pointers and non-pointers,\n-- to make it easier to leave things in a sane state for a heap check.\n-- This is OK because we can never partially-apply an unboxed tuple,\n-- unlike a function. The same technique is used when calling\n-- let-no-escape functions, because they also can't be partially\n-- applied.\n\nreturnUnboxedTuple :: [(CgRep, CmmExpr)] -> Code\nreturnUnboxedTuple amodes\n = do \t{ (EndOfBlockInfo args_sp _sequel) <- getEndOfBlockInfo\n\t; tickyUnboxedTupleReturn (length amodes)\n\t; (final_sp, assts) <- pushUnboxedTuple args_sp amodes\n\t; emitSimultaneously assts\n\t; doFinalJump final_sp False{-not a LNE-} emitReturnInstr }\n\npushUnboxedTuple :: VirtualSpOffset\t\t-- Sp at which to start pushing\n\t\t -> [(CgRep, CmmExpr)]\t\t-- amodes of the components\n\t\t -> FCode (VirtualSpOffset,\t-- final Sp\n\t\t\t CmmStmts)\t\t-- assignments (regs+stack)\n\npushUnboxedTuple sp [] \n = return (sp, noStmts)\npushUnboxedTuple sp amodes\n = do\t{ let\t(reg_arg_amodes, stk_arg_amodes) = assignReturnRegs amodes\n\t\n\t\t-- separate the rest of the args into pointers and non-pointers\n\t\t(ptr_args, nptr_args) = separateByPtrFollowness stk_arg_amodes\n\t\treg_arg_assts = assignToRegs reg_arg_amodes\n\t\t\n\t -- push ptrs, then nonptrs, on the stack\n\t; (ptr_sp, ptr_assts) <- mkStkAmodes sp ptr_args\n\t; (final_sp, nptr_assts) <- mkStkAmodes ptr_sp nptr_args\n\n\t; returnFC (final_sp,\n\t \t reg_arg_assts `plusStmts` \n\t\t ptr_assts `plusStmts` nptr_assts) }\n \n\t\t \n-- -----------------------------------------------------------------------------\n-- Returning unboxed tuples. 
This is mainly to support _ccall_GC_, where\n-- we want to do things in a slightly different order to normal:\n-- \n-- \t\t- push return address\n-- \t\t- adjust stack pointer\n-- \t\t- r = call(args...)\n-- \t\t- assign regs for unboxed tuple (usually just R1 = r)\n-- \t\t- return to continuation\n-- \n-- The return address (i.e. stack frame) must be on the stack before\n-- doing the call in case the call ends up in the garbage collector.\n-- \n-- Sadly, the information about the continuation is lost after we push it\n-- (in order to avoid pushing it again), so we end up doing a needless\n-- indirect jump (ToDo).\n\nccallReturnUnboxedTuple :: [(CgRep, CmmExpr)] -> Code -> Code\nccallReturnUnboxedTuple amodes before_jump\n = do \t{ eob@(EndOfBlockInfo args_sp _) <- getEndOfBlockInfo\n\n\t-- Push a return address if necessary\n\t; pushReturnAddress eob\n\t; setEndOfBlockInfo (EndOfBlockInfo args_sp OnStack)\n\t (do\t{ adjustSpAndHp args_sp\n\t\t; before_jump\n \t\t; returnUnboxedTuple amodes })\n }\n\n-- -----------------------------------------------------------------------------\n-- Calling an out-of-line primop\n\ntailCallPrimOp :: PrimOp -> [StgArg] -> Code\ntailCallPrimOp op\n = tailCallPrim (mkRtsPrimOpLabel op)\n\ntailCallPrimCall :: PrimCall -> [StgArg] -> Code\ntailCallPrimCall primcall\n = tailCallPrim (mkPrimCallLabel primcall)\n\ntailCallPrim :: CLabel -> [StgArg] -> Code\ntailCallPrim lbl args\n = do\t{\t-- We're going to perform a normal-looking tail call, \n\t\t-- except that *all* the arguments will be in registers.\n\t\t-- Hence the ASSERT( null leftovers )\n\t arg_amodes <- getArgAmodes args\n\t; let (arg_regs, leftovers) = assignPrimOpCallRegs arg_amodes\n\t jump_to_primop = jumpToLbl lbl\n\n\t; ASSERT(null leftovers) -- no stack-resident args\n \t emitSimultaneously (assignToRegs arg_regs)\n\n\t; EndOfBlockInfo args_sp _ <- getEndOfBlockInfo\n\t; doFinalJump args_sp False{-not a LNE-} jump_to_primop }\n\n-- 
-----------------------------------------------------------------------------\n-- Return Addresses\n\n-- We always push the return address just before performing a tail call\n-- or return. The reason we leave it until then is because the stack\n-- slot that the return address is to go into might contain something\n-- useful.\n-- \n-- If the end of block info is 'CaseAlts', then we're in the scrutinee of a\n-- case expression and the return address is still to be pushed.\n-- \n-- There are cases where it doesn't look necessary to push the return\n-- address: for example, just before doing a return to a known\n-- continuation. However, the continuation will expect to find the\n-- return address on the stack in case it needs to do a heap check.\n\npushReturnAddress :: EndOfBlockInfo -> Code\n\npushReturnAddress (EndOfBlockInfo args_sp (CaseAlts lbl _ _))\n = do\t{ sp_rel <- getSpRelOffset args_sp\n\t; stmtC (CmmStore sp_rel (mkLblExpr lbl)) }\n\npushReturnAddress _ = nopC\n\n-- -----------------------------------------------------------------------------\n-- Misc.\n\njumpToLbl :: CLabel -> Code\n-- Passes no argument to the destination procedure\njumpToLbl lbl = stmtC (CmmJump (CmmLit (CmmLabel lbl)) [{- No args -}])\n\nassignToRegs :: [(CmmExpr, GlobalReg)] -> CmmStmts\nassignToRegs reg_args \n = mkStmts [ CmmAssign (CmmGlobal reg_id) expr\n\t | (expr, reg_id) <- reg_args ] \n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\\subsection[CgStackery-adjust]{Adjusting the stack pointers}\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\nThis function adjusts the stack and heap pointers just before a tail\ncall or return. The stack pointer is adjusted to its final position\n(i.e. to point to the last argument for a tail call, or the activation\nrecord for a return). 
The heap pointer may be moved backwards, in\ncases where we overallocated at the beginning of the basic block (see\nCgCase.lhs for discussion).\n\nThese functions {\\em do not} deal with high-water-mark adjustment.\nThat's done by functions which allocate stack space.\n\n\\begin{code}\nadjustSpAndHp :: VirtualSpOffset \t-- New offset for Arg stack ptr\n\t -> Code\nadjustSpAndHp newRealSp \n = do\t{ -- Adjust stack, if necessary.\n\t -- NB: the conditional on the monad-carried realSp\n\t -- is out of line (via codeOnly), to avoid a black hole\n\t; new_sp <- getSpRelOffset newRealSp\n\t; checkedAbsC (CmmAssign spReg new_sp)\t-- Will generate no code in the case\n\t; setRealSp newRealSp\t\t\t-- where realSp==newRealSp\n\n\t -- Adjust heap. The virtual heap pointer may be less than the real Hp\n\t -- because the latter was advanced to deal with the worst-case branch\n\t -- of the code, and we may be in a better-case branch. In that case,\n \t -- move the real Hp *back* and retract some ticky allocation count.\n\t; hp_usg <- getHpUsage\n\t; let rHp = realHp hp_usg\n\t vHp = virtHp hp_usg\n\t; new_hp <- getHpRelOffset vHp\n\t; checkedAbsC (CmmAssign hpReg new_hp)\t-- Generates nothing when vHp==rHp\n\t; tickyAllocHeap (vHp - rHp)\t\t-- ...ditto\n\t; setRealHp vHp\n\t}\n\\end{code}\n\n","avg_line_length":35.8438133874,"max_line_length":83,"alphanum_fraction":0.6409937185} {"size":27937,"ext":"lhs","lang":"Literate Haskell","max_stars_count":2.0,"content":"Jacobi sum prime test.\n\nOriginal code:\n\\begin{nocode}\nmodule JacobiSum where \n-- Autor: Kent Kwee\n\\end{nocode}\n\nModified:\n\\begin{code}\n{-# OPTIONS -cpp #-}\n#ifdef INTERACTIVE\nmodule NewFarmJacobiSum where\n#else \nmodule Main where\n#endif\n-- Modifications by Oleg Lobachev\n\nimport FarmUntil\nimport Parallel.Skel.EdenSkel (map_farm)\nimport Data.Maybe\n\\end{code}\n\nand so it goes...\n\\begin{code}\nimport ModArithmetik\nimport Ringerweiterung \nimport System (getArgs)\nimport Array\n-- import List 
(elemIndex)\nimport Debug.Trace (trace)\n-- loud s x = trace (s ++ \" \" ++ show x) x\n\n-------------------------------------------\n-- Hilfsfunktionen fuer Maybe [Integer] --\n-------------------------------------------\n\n-- | Liefert zu einem Element der Form Just [Integer] die Integerliste\ngetlist :: Maybe [Integer] -- ^ Das Eingabeelement\n -> [Integer] -- ^ Die Integerliste\ngetlist Nothing = error \"Ergebnistyp ist Nothing\"\ngetlist (Just lpliste) = lpliste\n\n-- | Entfernt aus einer sortierten Liste ein Element \nremprime :: [Integer] -- ^ Eingabeliste\n -> Integer -- ^ zu entfernendes Element\n -> [Integer] -- ^ Ausgabeliste\nremprime [] p = []\nremprime (l:ls) p \n | (p < l) = (l:ls)\n | (p == l) = ls\n | otherwise = l:(remprime ls p)\n\n-- | Berechnung von e(t)\ne :: Integer -- ^ t\n -> Integer -- ^ das Ergebnis \ne t = foldr (*) 2 [ (d+1)^( (nu t (d+1) ) +1) | d <- (teiler t), isPrime (d+1) ]\n\n-- | Bestimme ein t, so dass e(t)^2 > B\nbestimme_t :: Integer -- ^ obere Schranke B fuer die zu pruefenden Zahlen\n -> Integer -- ^ Der Parameter t\nbestimme_t b = hilfsf 2 b\n where hilfsf = (\\x w -> if ((e x)*(e x) > w) then x else (hilfsf (x+1) w))\n{-\nbestimme_t n \n | ( n < 4292870400 ) = 12 \n\t\t | ( n < 2^101 ) = 180\n\t\t | ( n < 2^152 ) = 720\n\t\t | ( n < 2^204 ) = 1260\n\t\t | ( n < 2^268 ) = 2520\n\t\t | ( n < 2^344 ) = 5040\n\t\t | ( n < 2^525 ) = 27720\n\t\t | ( n < 2^774 ) = 98280\n\t\t | ( n < 2^1035 ) = 166320\n\t\t | ( n < 2^1566 ) = 720720\n\t\t | ( n < 2^2082 ) = 1663200\n\t\t | ( n < 2^3491 ) = 8648640 \n\t\t | otherwise = error \"The number is too large.\"\n-}\t\t \n\n-- | Berechnung von f(x) von Vorberechnungsschritt 2.1\nf_slow :: Integer -- ^ x: Der Parameter x \n -> Integer -- ^ q: Der Primzahlmodulus q fuer die primitive Wurzel g (mod q)\n -> Integer -- ^ Der diskrete Logarithmus f(x) mit 1 - g^x = g^(f(x)) \nf_slow x q = f2 1 (q + 1 - (powermod pr x q)) pr q \n where pr = primitiveRoot q\n\t \n-- Hilfsfunktion zur Bestimmung des diskreten Logarithmus \nf2 erg aim pr q\n | (aim == (powermod pr erg q)) = erg\n | otherwise = f2 (erg+1) aim pr 
q\n\nf=f_slow\n\nf_slowlist q = map (\\x -> (x,f_slow x q)) [1..(q-2)]\n\n\n-- unjust (Just a) = a\n-- unjust (Nothing) = error \"Not found\"\n\npos :: Integer -> [Integer] -> Integer\npos _ [] = error \"Not found\"\npos e (l:ls)\n | (e==l) = 1\n\t\t| otherwise = 1 + (pos e ls)\n\nintegertake :: Integer -> [Integer] -> [Integer]\nintegertake 0 _ = []\nintegertake n (l:ls) = l:(integertake (n-1) ls)\n{-\nprwlist :: Integer -> Integer -> [Integer]\nprwlist pr q = pr:(map (\\x -> x*pr `mod` q) $ prwlist pr q)\nprwurzellist pr q = integertake (q-2) $ prwlist pr q\n-}\n{-\nprwurzellist pr q = [powermod pr x q | x <- [1..(q-2)]]\naim pr q = map (\\x -> q+1 -x) $ prwurzellist pr q\ndisklog pr q x= (pos x (prwurzellist pr q) )\nf_fasterlist pr q = map (disklog pr q) (aim pr q)\n-}\n{-\nfpairlist q = zip [1..(q-2)] (f_fasterlist) \n where pr = primitiveRoot q\n prwurzellist = [ powermod pr x q | x <- [1..(q-2)] ] --ohne powermod moeglich\n aim = map (\\x -> q+1 - x) $ prwurzellist\n disklog x = (pos x (prwurzellist) )\n f_fasterlist= map (disklog) (aim)\n-}\n\nfpairlist q = zip [1..(q-2)] (f_fasterlist) \n where pr = primitiveRoot q\n prwurzellist = [ (powermod pr x q) | x <- [1..(q-2)] ]\n prwlist = array (2,(q-1)) [ (powermod pr x q , x) | x <- [1..(q-2)] ]\t\n\t\t\t\t -- (zip [1,q-2] prwurzellist) --\t\t\t \n aim = map (\\x -> q+1 - x) $ prwurzellist\n disklog x = (prwlist!x)\n -- (pos x (prwurzellist) )\n f_fasterlist= map (disklog) (aim)\n\t\t\t\t \n -- prwlist = (array (1,(q-2) ) [ powermod pr x q | x <- [1..(q-2)] ]\n\n-- Schritt 2.3 der Vorberechnung\n-- Berechnung von J(p,q)\n\n{-\n-- | Berechnung von J(p,q) fuer den Fall p>=3 oder (p=2,k=2)\nj :: Integer -- ^ Primzahl p\n-> Integer -- ^ Primzahl q\n-> Poly -- ^ Das Ergebnis J(p,q) aus Z[p^k-te EW], k Vielfachheit von p in q-1\nj p q = foldr (plus) [] [nrootpot (p^k) (x + (f x q) ) | x <- [1..(q-2)] ]\n where k=nu (q-1) p\n\n-- | Berechnung von J_3(q) fuer den Fall p=2, k>=3\nj3 :: Integer -- ^ Die Primzahl q \n -> 
Poly -- ^ Das Ergebnis J_3(q) aus Z[2^k-te EW], k Vielfachheit von 2 in q-1\nj3 q = easysimp (mal (j 2 q) (j3faktor q)) (2^k)\n where k=nu (q-1) 2\n\n-- Hilfsfunktion zur Berechnung von j3(q)\nj3faktor :: Integer -> Poly\nj3faktor q = foldr plus [] [nrootpot (2^k) (2*x + (f x q) ) | x <- [1..(q-2)] ]\n where k=nu (q-1) 2\n\n\n-- | Berechnung von J_2(q) fuer den Fall p=2, k>=3\nj2 :: Integer -- ^ Die Primzahl q \n -> Poly -- ^ Das Ergebnis J_2(q) aus Z[8-te EW], k Vielfachheit von 2 in q-1\nj2 q = easysimp (mal faktor faktor) (2^k) -- eigentlich 8\n where faktor = j2faktor q\n k = nu (q-1) 2\n\n-- Hilfsfunktion zur Berechnung von J_2(q) \nj2faktor :: Integer -> Poly\nj2faktor q = foldr (plus) [] [nrootpot (2^k) (unterschied*(3*x + (f x q) )) | x <- [1..(q-2)] ]\n where k=nu (q-1) 2\n unterschied = 2^(k-3)\n\n-}\n\n-- | Berechnung von J(p,q) fuer den Fall p>=3 oder (p=2,k=2)\nj :: Integer -- ^ Primzahl p\n -> Integer -- ^ Primzahl q\n -> [(Integer,Integer)] -- ^ vorberchnetes f\n -> Poly -- ^ Das Ergebnis J(p,q) aus Z[p^k-te EW], k Vielfachheit von p in q-1\nj p q fplist = foldr (plus) [] [nrootpot (p^k) ( (\\(x,y) -> x + y) f ) | f <- fplist ]\n where k=nu (q-1) p\n\n-- | Berechnung von J_3(q) fuer den Fall p=2, k>=3\nj3 :: Integer -- ^ Die Primzahl q \n -> [(Integer,Integer)] -- ^ vorberchnetes f\n -> Poly -- ^ Das Ergebnis J_3(q) aus Z[2^k-te EW], k Vielfachheit von 2 in q-1\nj3 q fplist = easysimp (mal (j 2 q fplist) (j3faktor q fplist)) (2^k)\n where k=nu (q-1) 2\n\n-- Hilfsfunktion zur Berechnung von j3(q)\nj3faktor :: Integer \n -> [(Integer,Integer)] -- ^ vorberchnetes f\n -> Poly\nj3faktor q fplist = foldr plus [] [nrootpot (2^k) ( (\\(x,y) -> 2*x + y) f ) | f <- fplist ]\n where k=nu (q-1) 2\n\n\n-- | Berechnung von J_2(q) fuer den Fall p=2, k>=3\nj2 :: Integer -- ^ Die Primzahl q \n -> [(Integer,Integer)] -- ^ vorberchnetes f\n -> Poly -- ^ Das Ergebnis J_2(q) aus Z[8-te EW], k Vielfachheit von 2 in q-1\nj2 q fplist = easysimp (mal faktor faktor) (2^k) -- 
eigentlich 8\n where faktor = j2faktor q fplist\n k = nu (q-1) 2\n\n-- Hilfsfunktion zur Berechnung von J_2(q) \nj2faktor :: Integer \n -> [(Integer,Integer)] -- ^ vorberchnetes f\n -> Poly\nj2faktor q fplist = foldr (plus) [] [nrootpot (2^k) (unterschied*( (\\(x,y) -> 3*x + y) f ) ) | f <- fplist ]\n where k=nu (q-1) 2\n unterschied = 2^(k-3)\n\t\t\t \n\n-- Hilfsfunktion zu lplist_init\nget_lplist_init n t = [p| p <- primteiler t, \n not (p >= 3 && (not ((powermod n (p-1) (p*p)) == 1))) ]\n\n\n----------------------- \n-- Hauptberechnungen --\n-----------------------\n\n\n-- Schritt 3 \n-- | Liefert eine Liste von Primzahlpaaren (p,q) mit p^k || (q-1) | t\ngetpairs :: Integer -- ^ Parameter t\n -> [(Integer,Integer)] -- ^ (p,q) - Liste\ngetpairs t = getpairs2 qliste\n where qliste = [ q+1 | q <- (teiler t), isPrime (q+1) ]\n\n-- Hilfsfunktion fuer getpairs\ngetpairs2 [] = []\ngetpairs2 (q:qs) = [ (p,q) | p <- (primteiler (q-1)) ] ++ (getpairs2 qs)\n\\end{code}\n\n\\begin{orig}\njacobisumteststep3 :: [(Integer,Integer)] -- ^ (p,q) - Liste mit p^k || (q-1) | t\n -> Integer -- ^ n: zu pruefende Zahl\n -> Integer -- ^ t: Parameter t aus der Vorberechnung\n -> [Integer] -- ^ Liste von Primzahlen mit l_p = 0\n -> Maybe [Integer] -- ^ Das Ergebnis des Tests:\n -- Nothing: keine Primzahl\n -- Plist (Liste von Primzahlen mit l_p = 0)\njacobisumteststep3 [] n t lpliste = -- loud \"i haz win step 3\" $ \n Just lpliste \njacobisumteststep3 ((p,q):rest) n t lpliste \n | (erg == Nothing) = Nothing\n | otherwise = jacobisumteststep3 rest n t lpnew\n where k = nu (q-1) p -- Vielfachheit von p in (q-1)\n erg = jacobisumteststep4 p k q n lpliste\n lpnew = getlist erg \n\\end{orig}\n\\begin{code}\njacobisumteststep3 :: [(Integer,Integer)] -- ^ (p,q) - Liste mit p^k || (q-1) | t\n -> Integer -- ^ n: zu pruefende Zahl\n -> Integer -- ^ t: Parameter t aus der Vorberechnung\n -> [Integer] -- ^ Liste von Primzahlen mit l_p = 0\n -> Maybe [Integer] -- ^ Das Ergebnis des Tests:\n -- 
Nothing: keine Primzahl\n -- Plist (Liste von Primzahlen mit l_p = 0)\njacobisumteststep3seq1 [] n t lpliste = -- loud \"i haz win step 3\" $ \n Just lpliste \njacobisumteststep3seq1 ((p,q):rest) n t lpliste \n | (erg == Nothing) = Nothing\n | otherwise = jacobisumteststep3 rest n t lpnew\n where k = nu (q-1) p -- Vielfachheit von p in (q-1)\n erg = jacobisumteststep4 p k q n lpliste\n lpnew = getlist erg \n\\end{code}\n\nA nice test call is:\njacobisumteststep3 [(2,3),(2,5),(2,7),(3,7),(2,13),(3,13)] 31 12 [2]\n\na script to test the whole thing:\nfor n in `seq 3 100`; do test `.\/olJacobiPar $n +RTS -qp8 2>\/dev\/null | egrep -i 'true|false'` = `.\/olJacobiSeq $n 2>\/dev\/null | egrep -i 'true|false'` || (echo -n \"FAIL: \"; echo -n $n; echo -n \", \") ; done; echo \"done\";\n\nan example, where the parallel version fails: 2083.\n\nTO INVERSTIGATE FURTHER!\n\n\n---- ok.\n\nWe did not update the list for other children. So!\n\nNew approach:\n\nif a child has found a non-trivial update to lplist, it should tell so the father and the farm will be killed. 
The list will be updated and a new farm started.\n\nanother possibility: Mischa's workpool.\n\n\\begin{code}\njacobisumteststep3 xs n t lps = trace (\"Step 3: testing \" ++ (show $ length xs) ++ \" pq pairs\") $\n jacobisumteststep3' xs n t lps ()\njacobisumteststep3' [] n t lps _ = Just lps\njacobisumteststep3' xs n t lps _\n = consumer $ map_farm worker' xs\n where\n worker' :: PQ -> (PQ, Result)\n worker' pq = (pq, worker pq n t lps)\n consumer :: [(PQ, Result)] -> Maybe [Integer]\n consumer ((_, (True, _, _)):_) = trace \"Terminating!\" $\n Nothing -- terminate the whole thing\n consumer ((headpq, x@(False, True, newlps)):xs) \n -- | newlps\/=lps && newlps\/=[] -- reset it!\n = trace (\"Resetting the step 4 with new lp list \" ++ show newlps) $\n -- recurse into jacobisumteststep3 with \n -- not-yet-tested pq pairs and the new lp list\n jacobisumteststep3' newpqs n t newlps ()\n -- | newlps==lps -- no reset\n -- = trace (\"In step 3: eaten (1) \" ++ show x ++ \" THIS SHOULD NOT HAPPEN!\") $\n -- consumer xs -- eat the list\n -- where newpqs = map (\\(_,_,_,pq) -> pq) (x:xs)\n -- where newpqs = drop c xs\n where newpqs = headpq:(map fst xs)\n \n consumer [(_, (False, _, newlps))] = trace (\"In step 3: passing over \" ++ show newlps) $ \n Just newlps -- pass the list on\n consumer ((x@(_, (False, False, _))):xs) = trace (\"In step 3: eaten (2) \" ++ show x) $ \n consumer xs -- eat the list\n consumer [] = trace (\"Step 3: arrived at an empty list!\") $ \n Just []\n consumer _ = trace (\"Step 3: Failing and returning \" ++ show lps) $\n Just lps -- never happens\n\\end{code}\n\n\\begin{code}\ntype PQ = (Integer, Integer) -- ^ pq pair\ntype Result = (Bool, Bool, [Integer]) -- ^ (terminate, reset, lp list)\nworker :: PQ -- ^ pq pair\n -> Integer -- ^ n\n -> Integer 
-- ^ t\n -> [Integer] -- ^ lp list \n -> Result -- ^ result\nworker (p, q) n t lps\n = let k = nu (q-1) p -- Vielfachheit von p in (q-1)\n newlp = jacobisumteststep4 p k q n lps\n terminate | newlp==Nothing = True\n | otherwise = False\n reset | newlp==Nothing = True -- will be terminated anyway\n | lps \/= getlist newlp = True -- we have an updated lp list\n | otherwise = False -- no new knowledge\n reslist | newlp==Nothing = []\n | otherwise = getlist newlp\n in trace (\"In worker: tested \" ++ show (p,q)) $ \n (terminate, reset, reslist)\n\\end{code}\n\n\\begin{code}\n-- | Schritt 4 vom Jacobisumtest\njacobisumteststep4 :: Integer -- ^ Primzahl p\n -> Integer -- ^ Exponent k\n -> Integer -- ^ Primzahl q\n -> Integer -- ^ zu pruefende Zahl\n -> [Integer] -- ^ Liste von Primzahlen mit l_p = 0\n -> Maybe [Integer] -- ^ Das Ergebnis des Tests:\n -- Nothing: keine Primzahl\n -- Plist (Liste von Primzahlen mit l_p = 0)\njacobisumteststep4 p k q spprime lpliste = \n trace (\"In step 4: \" ++ show lpliste)\n jacobisumteststep4' p k q spprime lpliste\n\njacobisumteststep4' p k q spprime lpliste\n | (p >= 3) = jacobisumteststep4a p k q spprime lpliste fplist\n | (k>=3) = jacobisumteststep4b p k q spprime lpliste fplist\n | (k==2) = jacobisumteststep4c p k q spprime lpliste fplist\n | (k==1) = jacobisumteststep4d p k q spprime lpliste fplist\n | otherwise = error \"fail in selecting step4\"\n\t\t\t\t where fplist = fpairlist q \n\n\n\n\n-- | Schritt 4a vom Jacobisumtest\njacobisumteststep4a :: Integer -- ^ Primzahl p\n -> Integer -- ^ Exponent k\n -> Integer -- ^ Primzahl q\n -> Integer -- ^ zu pruefende Zahl\n -> [Integer] -- ^ Liste von Primzahlen mit l_p = 0\n -> [(Integer,Integer)] -- ^ vorberchnetes f\n -> Maybe [Integer] -- ^ Das Ergebnis des Tests:\n -- Nothing: keine Primzahl\n -- Plist (Liste von Primzahlen mit l_p = 0)\njacobisumteststep4a p k q spprime lpliste fplist\n | (not $ isrootofunity spq p k spprime) = Nothing\n | (isprrootofunity spq p k spprime) = Just 
(remprime lpliste p)\n -- | (unitroottypeerg == Keine) = Nothing\n -- | (unitroottypeerg == Primitive) = Just (remprime lpliste p)\n | otherwise = Just lpliste\n where emenge = [l | l <- [1..n], (l `mod` p) \/= 0] \n theta = [(x, multinverse x n) | x <- emenge ] \n alpha = [((r*x) `div` n, multinverse x n) | x <- emenge ]\n r = spprime `mod` n\n n = p^k\n s1 = wgpot (j p q fplist) theta \n s2 = wpot s1 (spprime `div` n) \n spq = wmal s2 jpot\n jpot = wgpot (j p q fplist) alpha\n wpot b e = binpolypotsimmod b e p k spprime\n wmal w1 w2 = malsimmod w1 w2 p k spprime\n wgpot b e = gpotsimmod b e p k spprime\n unitroottypeerg = unitroottype spq p k spprime\n\n\n-- | Schritt 4b vom Jacobisumtest mit p=2, k>=3 \njacobisumteststep4b :: Integer -- ^ Primzahl p\n -> Integer -- ^ Exponent k\n -> Integer -- ^ Primzahl q\n -> Integer -- ^ zu pruefende Zahl\n -> [Integer] -- ^ Liste von Primzahlen mit l_p = 0\n\t\t\t\t -> [(Integer,Integer)] -- ^ vorberchnetes f\n -> Maybe [Integer] -- ^ Das Ergebnis des Tests:\n -- Nothing: keine Primzahl\n -- Plist (Liste von Primzahlen mit l_p = 0)\njacobisumteststep4b p k q spprime lpliste fplist\n | (not $ isrootofunity s2q p k spprime) = Nothing\n | ((isprrootofunity s2q p k spprime) && (powermod q ((spprime -1) `div` 2) spprime == (spprime - 1))) = Just (remprime lpliste p)\n -- | (unitroottypeerg == Keine) = Nothing\n -- | ((unitroottypeerg == Primitive) && (powermod q ((spprime -1) `div` 2) spprime == (spprime - 1))) = Just (remprime lpliste p)\t\t\t \n | otherwise = Just lpliste\n where emenge = [l | l <- [1..n], (l `mod` 8) == 1 || (l `mod` 8) == 3 ] \n theta = [(x ,multinverse x n) | x <- emenge ] \n alpha = [((r*x) `div` n ,multinverse x n) | x <- emenge ] \n r = spprime `mod` n\n n = 2^k -- = p^k \n s1 = wgpot (j3 q fplist) theta \n s2 = wpot s1 (spprime `div` n )\n s2qtemp = wmal s2 jpot\n s2q = if (delta==0) then s2qtemp else (wmal s2qtemp (j2 q fplist))\n jpot = wgpot (j3 q fplist) alpha \n -- delta = if (elem r emenge) then 0 
else 1\n delta = if ( (r `mod` 8) == 1 || (r `mod` 8) == 3 ) then 0 else 1\n wpot b e = binpolypotsimmod b e p k spprime\n wmal w1 w2 = malsimmod w1 w2 p k spprime\n wgpot b e = gpotsimmod b e p k spprime\n -- unitroottypeerg = unitroottype s2q p k spprime\n\n-- | Schritt 4c vom Jacobisumtest mit p=2, k=2 \njacobisumteststep4c :: Integer -- ^ Primzahl p\n -> Integer -- ^ Exponent k\n -> Integer -- ^ Primzahl q\n -> Integer -- ^ zu pruefende Zahl\n -> [Integer] -- ^ Liste von Primzahlen mit l_p = 0\n\t\t\t\t -> [(Integer,Integer)] -- ^ vorberchnetes f\n -> Maybe [Integer] -- ^ Das Ergebnis des Tests:\n -- Nothing: keine Primzahl\n -- Plist (Liste von Primzahlen mit l_p = 0)\njacobisumteststep4c p k q spprime lpliste fplist\n | (not $ isrootofunity s2q p k spprime) = Nothing\n | ((isprrootofunity s2q p k spprime) && (powermod q ((spprime -1) `div` 2) spprime == (spprime - 1))) = Just (remprime lpliste p)\n -- | (unitroottypeerg == Keine) = Nothing\n -- | ((unitroottypeerg == Primitive) && (powermod q ((spprime -1) `div` 2) spprime == (spprime - 1))) = Just (remprime lpliste p)\n | otherwise = Just lpliste\n where s1 = polymodulo (skalmul q (wpot (j 2 q fplist) 2) ) spprime\n s2 = wpot s1 (spprime `div` 4)\n s2q = if (spprime `mod` 4 == 1) then s2 else (wmal s2 (wpot (j 2 q fplist) 2))\n n = p^k\n wpot b e = binpolypotsimmod b e p k spprime\n wmal w1 w2 = malsimmod w1 w2 p k spprime\n -- unitroottypeerg = unitroottype s2q p k spprime\n -- es gibt nur diesen anderen Fall, da spprime ungerade\n\n-- | Schritt 4d vom Jacobisumtest mit p=2, k=2 \njacobisumteststep4d :: Integer -- ^ Primzahl p\n -> Integer -- ^ Exponent k\n -> Integer -- ^ kleine Primzahl q\n -> Integer -- ^ zu pruefende Zahl\n -> [Integer] -- ^ Liste von Primzahlen mit l_p = 0\n\t\t\t\t -> [(Integer,Integer)] -- ^ vorberchnetes f\n -> Maybe [Integer] -- ^ Das Ergebnis des Tests:\n -- Nothing: keine Primzahl\n -- Plist (Liste von Primzahlen mit l_p = 0)\njacobisumteststep4d p k q spprime lpliste fplist\n | 
( (not $ s2q == 1) && (not $ s2q == spprime -1) ) = Nothing\n | ((s2q == spprime -1) && (spprime `mod` 4 == 1)) = Just (remprime lpliste p)\n | otherwise = Just lpliste\n where s2q = powermod (spprime - q) ( (spprime-1) `div` 2) spprime\n -- Fehlerhaftes Ergebnis fuer spprime = q, q aber klein spprime gro\u00df\n\n\n-- | Schritt 5 des Jacobisumtests\njacobisumteststep5 :: [Integer] -- ^ Liste von Primzahlen mit l_p = 0\n -> Integer -- ^ zu pruefende Zahl\n -> Integer -- ^ der Parameter t aus der Vorberechnung\n -> Bool -- ^ Gibt True als Ergebnis, falls Schritt 5 erfolgreich und n noch eine Primzahl sein koennte\n-- jacobisumteststep5 [] _ _ = True\n-- jacobisumteststep5 (p:ps) n t = (jacobisumteststep5single p q n t 30) && (jacobisumteststep5 ps n t)\n-- where q = p+1\n\njacobisumteststep5 ps n t = trace (\"In step 5: \" ++ show ps) $ \n all (\\p -> jacobisumteststep5single p (p+1) n t 30) ps\n\n-- Hilfsfunktion fuer jacobisumtststep5\njacobisumteststep5single :: Integer -- ^ Primzahl p\n -> Integer -- ^ moegliche Primzahl q = 1 (mod p)\n -> Integer -- ^ zu pruefende Zahl n\n -> Integer -- ^ der Parameter t aus der Vorberechnung\n -> Integer -- ^ Ein Zaehler fuer die Versuche\n -> Bool -- ^ Gibt True als Ergebnis, falls Schritt 5 erfolgreich und n noch eine Primzahl sein koennte\njacobisumteststep5single _ _ _ _ 0 = error \"Test failed\" \njacobisumteststep5single p q n t counter \n | ( (isPrime q) && (et `mod` q \/= 0) && (ggt q n == 1) ) = (jsum4erg \/= Nothing) && ( (jsum4erg == Just []) || (jacobisumteststep5single p (q+p) n t (counter-1)))\n | otherwise = (jacobisumteststep5single p (q+p) n t (counter-1))\n where et = e t\n k = nu (q-1) p\n jsum4erg = trace \"Calling step 4 from step 5...\" $ \n jacobisumteststep4 p k q n [p]\n\n\n\n-- start mit rinit=1 , n = spprime , count = t\n-- | Schritt 6 vom Jacobisumtest\njacobisumteststep6 :: Integer -- ^ zu pruefende Zahl n\n -> Integer -- ^ e(t)\n -> Integer -- ^ Zaehlvariable fuer i\n -> Integer -- ^ n^i \n -> 
Bool -- ^ Ist n Primzahl? \njacobisumteststep6 n et 0 rinit = True\njacobisumteststep6 n et count rinit \n | ( (ri == 1) || (ri == n) || (n `mod` ri \/= 0) ) = jacobisumteststep6 n et (count-1) ri\n |otherwise = False\n where ri = (rinit * n) `mod` et \n\n-- | Der Jacobisumtest \njacobisumtest :: Integer -- ^ zu pruefende Zahl n\n -> Integer -- ^ der Parameter t aus der Vorberechnung\n -> Bool -- ^ Ist n Primzahl? \njacobisumtest n t\n | (n < 2) = False\n | (n == 2) = True\n\t\t| (n == 3) = True\n\t\t| (n == 5) = True\n\t\t| (n == 7) = True\t\t\n -- rabinmillertest\n -- Pruefe Voraussetzung e(t)^2 > n\n | (et*et <= n) = error \"e(t)^2 < n\"\n -- Schritt 1 Pruefe ggt\n | ( ( (ggt (t*et) n) > 1) && ( (ggt (t*et) n) \/= n ) ) = False \n -- Schritt 3 (Loop on characters)\n -- | otherwise = error (ergstep3)\n | (ergstep3 == Nothing) = False \n | ( not (jacobisumteststep5 restlpliste n t) ) = False \n\t\t-- | otherwise = error \"Test5 passed\"\n | otherwise = jacobisumteststep6 n et t 1 \n where et = e t\n paare = -- loud \"paare\" $ \n getpairs t\n -- lplist-init aus Schritt 2\n lplist_init = -- loud \"lplist_init\" $ \n get_lplist_init n t\n ergstep3 = -- loud \"ergstep3\" $ \n jacobisumteststep3 paare n t lplist_init\n restlpliste = -- loud \"restlpliste\" $ \n getlist ergstep3\n -- noch zu optimieren\n\n\n-------------------------------------------------------------\n-- Rechnen mit PairPoly, um zu sehen, was effizienter ist ---\n-------------------------------------------------------------\n\n-- | Berechnung von J(p,q) fuer den Fall p>=3 oder (p=2,k=2)\npairpoly_j :: Integer -- ^ Primzahl p\n -> Integer -- ^ Primzahl q\n -> PairPoly -- ^ Das Ergebnis J(p,q) aus Z[p^k-te EW], k Vielfachheit von p in q-1\npairpoly_j p q = normpairpoly [( (x + (f x q) ) `mod` n ,1) | x <- [1..(q-2)] ]\n where k=nu (q-1) p\n n=p^k\n\n-- | Berechnung von J_3(q) fuer den Fall p=2, k>=3\npairpoly_j3 :: Integer -- ^ Die Primzahl q \n -> PairPoly -- ^ Das Ergebnis J_3(q) aus Z[2^k-te EW], k 
multiplicity of 2 in q-1\npairpoly_j3 q = pairmult (pairpoly_j 2 q) (pairpoly_j3faktor q)\n\n\n-- Helper function for computing j3(q)\npairpoly_j3faktor :: Integer -> PairPoly\npairpoly_j3faktor q = normpairpoly [ ((2*x + (f x q) ) `mod` n,1) | x <- [1..(q-2)] ]\n where k=nu (q-1) 2\n n=2^k\n\n-- | Computation of J_2(q) for the case p=2, k>=3\npairpoly_j2 :: Integer -- ^ The prime q \n -> PairPoly -- ^ The result J_2(q) in Z[8th root of unity], k the multiplicity of 2 in q-1\npairpoly_j2 q = pairmult faktor faktor\n where faktor = pairpoly_j2faktor q\n\n-- Helper function for computing J_2(q) \npairpoly_j2faktor :: Integer -> PairPoly\npairpoly_j2faktor q = normpairpoly [(3*x + (f x q) `mod` 8,1 ) | x <- [1..(q-2)] ]\n where k=nu (q-1) 2\n n=8\n\n-- Tester\n\nmain = do\n args <- getArgs\n let k = read $ head args\n n | length args < 3 = k\n | read (args!!2) = 2^k-1\n | otherwise = k\n t | length args > 1 && read (args!!1) > 0 = read (args!!1)\n | otherwise = bestimme_t n\n putStrLn $ \"Jacobi Sum Test.\\nn = \" ++ (show n) ++ \", t = \" ++ (show t)\n print $ jacobisumtest n t\n putStrLn \"done\"\n{-\n -- gotcha!\n $ .\/jacobi 31 \n Jacobi Sum Test.\n n = 31, t = 2\n False\n done\n\n But 31 is prime!\n-}\n\n\ntest20 = all (\\n -> jacobisumtest n 2) (primelist 132)\n-- yields true\n\n-- e(4)=240 --> n<= 240^2=57600\ntest80 = all (\\n -> jacobisumtest n 4) (primelist 1000) \n-- yields true\ntesta = all (\\n -> jacobisumtest n 6) (primelist 200) \n\n-- jacobisumtest 79 2 = False\n\njacobi n = jacobisumtest n (bestimme_t n)\n\n\n-- Test functions\n\n-- n=2999 , t=6\n-- getpairs 6 = [(2,3),(2,7),(3,7)]\n-- lplist_init = [2]\n-- test = jacobisumtest 2999 6\n\n{-\npossible optimizations\n- divisors via prime factorization \n - already fast enough as it is\n- inefficient precomputations due to an unfavourable representation of the roots of unity\n --> computations with PairPoly\n --> avoiding plus [0,...,0,1,0,..0] [0,...,0,1,0,..0]\n- during the computation, simplify only with (n-th root of unity)^n=1 and (mod m), and reduce by the minimal polynomial only at the end\n- compute the type of the root of unity at the same time\n-}\n\n\n\\end{code}\n","avg_line_length":42.0105263158,"max_line_length":221,"alphanum_fraction":0.5145148012} {"size":5428,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"{-# OPTIONS_GHC -Wno-overlapping-patterns #-}\nAlgebraic data types\n====================\n\nCIS 194 Week 2 \n21 January 2013\n\nSuggested reading:\n\n * [Real World Haskell](http:\/\/book.realworldhaskell.org\/),\n chapters 2 and 3\n\n> -- |\n> -- Module : ADTs\n> -- Description: Yorgey's CIS 194, 02 Algebraic Data Types\n> -- Copyright: (c) Eduardo Ram\u00f3n Lestau, 2020\n> -- License: BSD3\n> -- Maintainer: erlestau@gmail.com\n> -- Portability: POSIX\n> -- \n> -- Definitions in `02-ADTs.lhs` as a module\n> --\n> module ADTs where\n\nEnumeration types\n-----------------\n\n> -- |'Thing' is a simple enumeration of things\n> data Thing = Shoe\n> \t | Ship\n> \t | SealingWax\n> \t | Cabbage\n> \t | King\n> deriving Show\n\nWe can define variables of type `Thing`:\n\n> -- |A shoe\n> shoe :: Thing\n> shoe = Shoe\n>\n> -- | My favourite 'Thing's.\n> listO'Things :: [Thing]\n> listO'Things = [Shoe, SealingWax, King, Cabbage, King]\n\nWe can write functions on `Thing`s by *pattern-matching*.\n\n> -- |Given a 'Thing' returns 'True' if it is a small 'Thing' or 'False'\n> -- otherwise.\n> -- \n> -- Example\n> --\n> -- >>> map isSmall listO'Things\n> -- [True,True,False,True,False]\n> isSmall :: Thing -> Bool\n> isSmall Ship = False\n> isSmall King = False\n> isSmall _ = True\n\nBeyond enumerations\n-------------------\n\nThe `FailableDouble` type has two data constructors. The first one,\n`Failure`, takes no arguments, so `Failure` by itself is a value\nof type `FailableDouble`. The second one, `OK`, takes an argument\nof type `Double`. 
So `OK` by itself is not a value of type\n`FailableDouble`; we need to give it a `Double`.\n\n> -- |A 'FailableDouble' can be a 'Failure' or an 'OK' 'Double'. \n> data FailableDouble = Failure\n> | OK Double\n> deriving Show\n>\n> exD1, exD2 :: FailableDouble\n> -- | Example of a 'FailableDouble' variable\n> exD1 = Failure\n> -- | Another example of a 'FailableDouble' variable\n> exD2 = OK 3.4\n\nThought exercise: What is the type of `OK`?\n\t\n `OK` is a function that takes a `Double` and\n returns a `FailableDouble` value. \n\n ~~~\n OK :: Double -> FailableDouble\n ~~~\n\n> -- |'safeDiv' returns the result of the division or 'Failure'\n> -- if the denominator is \\(0\\).\n> safeDiv :: Double -> Double -> FailableDouble\n> safeDiv _ 0 = Failure\n> safeDiv x y = OK (x\/y)\n\n> -- |Returns 0 if 'Failure', the 'Double' in any other case.\n> failureToZero :: FailableDouble -> Double\n> failureToZero Failure = 0\n> failureToZero (OK d) = d\n\nData constructors can have more than one argument.\n\n> -- | Store a person's name, age and favourite 'Thing'\n> data Person\n> = Person \n> String -- ^ name\n> Int -- ^ age\n> Thing -- ^ Favourite 'Thing'\n> deriving Show\n> \n> -- |A boy\n> brent :: Person\n> brent = Person \"Brent\" 31 SealingWax\n>\n> -- |An old person\n> stan :: Person\n> stan = Person \"Stan\" 94 Cabbage\n>\n> -- |Gets the age of a 'Person'\n> -- \n> -- >>> getAge brent + 50 < getAge stan\n> -- True\n> getAge :: Person -> Int\n> getAge (Person _ a _) = a\n\nAlgebraic data types in general\n-------------------------------\n\nIn general, an algebraic data type has one or more data constructors,\nand each data constructor can have zero or more arguments.\n\n data AlgDataType = Constr1 Type11 Type12\n | Constr2 Type21\n | Constr3 Type31 Type32 Type33\n | Constr4\n\nType and data constructor names must always start with a capital letter.\n\nPattern-matching\n----------------\n\n> -- |Example of @x\\@pat@ to match a pattern @pat@ and give the name\n> -- @x@ to the entire 
value being matched.\n> baz :: Person -> String\n> baz p@(Person n _ _) = \"The name field of (\" ++ show p ++ \") is \" ++ n\n\n> -- |Example of *nested* patterns\n> checkFav :: Person -> String\n> checkFav (Person n _ SealingWax) = n ++ \", you're my kind of person!\"\n> checkFav (Person n _ _) = n ++ \", your favorite thing is lame.\"\n\nCase expression\n---------------\n\n> -- | Example of @case@ expression\n> --\n> -- Note:\n> -- Warns about \"Pattern match is redundant\"\n> -- In a case alternative: [] ->\n> -- In a case alternative: _ ->\n> exCase :: Int\n> exCase = case \"Hello\" of\n> [] -> 3\n> ('H':s) -> length s\n> _ -> 7\n\nThe syntax for defining functions we have seen is really just convenient\nsyntactic sugar for defining a `case` expression. For example:\n\n> -- | Alternative definition of 'failureToZero' using a @case@ expression.\n> failureToZero' :: FailableDouble -> Double\n> failureToZero' x = case x of\n> Failure -> 0\n> OK d -> d\n\nRecursive data types\n--------------------\n\nData types can be *recursive*, that is, defined in terms of themselves.\nIn fact, we have already seen a recursive type---the type of lists.\nA list is either empty, or a single element followed by a remaining list.\nWe could define our own list type like so:\n\n> -- | A list of 'Int' as a recursive algebraic data type\n> data IntList = Empty | Cons Int IntList\n>\n> -- | Product of all elements in a 'IntList'\n> intListProd :: IntList -> Int\n> intListProd Empty = 1\n> intListProd (Cons x l) = x * intListProd l\n\n> -- | An example of a recursive algebraic data type. 
A binary tree holding\n> -- an 'Int' in its nodes and a 'Char' in its leaves.\n> data Tree = Leaf Char\n> | Node Tree Int Tree\n> deriving Show\n\n> -- | A tree\n> tree :: Tree\n> tree = Node (Leaf 'x') 1 (Node (Leaf 'y') 2 (Leaf 'z'))\n","avg_line_length":27.14,"max_line_length":74,"alphanum_fraction":0.6121960206} {"size":20403,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"\n\\subsection{Bcc.BM.Data.Tracer}\n\\label{code:Bcc.BM.Data.Tracer}\n\n%if style == newcode\n\\begin{code}\n{-# LANGUAGE DefaultSignatures #-}\n{-# LANGUAGE FlexibleContexts #-}\n{-# LANGUAGE FlexibleInstances #-}\n{-# LANGUAGE InstanceSigs #-}\n{-# LANGUAGE MultiParamTypeClasses #-}\n{-# LANGUAGE ScopedTypeVariables #-}\n\nmodule Bcc.BM.Data.Tracer\n ( Tracer (..)\n , TracingVerbosity (..)\n , Transformable (..)\n , ToLogObject (..)\n , ToObject (..)\n , HasTextFormatter (..)\n , HasSeverityAnnotation (..)\n , HasPrivacyAnnotation (..)\n , WithSeverity (..)\n , WithPrivacyAnnotation (..)\n , contramap\n , mkObject, emptyObject\n , traceWith\n -- * tracer transformers\n , natTracer\n , nullTracer\n , stdoutTracer\n , debugTracer\n , showTracing\n , trStructured\n , trStructuredText\n -- * conditional tracing\n , condTracing\n , condTracingM\n -- * severity transformers\n , annotateSeverity\n , filterSeverity\n , setSeverity\n , severityDebug\n , severityInfo\n , severityNotice\n , severityWarning\n , severityError\n , severityCritical\n , severityAlert\n , severityEmergency\n -- * privacy annotation transformers\n , annotateConfidential\n , annotatePublic\n , annotatePrivacyAnnotation\n , filterPrivacyAnnotation\n ) where\n\n\nimport Control.Monad (when)\nimport Control.Monad.IO.Class (MonadIO (..))\n\nimport Data.Aeson (Object, ToJSON (..), Value (..))\nimport Data.Aeson.Text (encodeToLazyText)\nimport qualified Data.HashMap.Strict as HM\nimport Data.Text (Text)\nimport qualified Data.Text as T\nimport qualified Data.Text.Lazy as TL\nimport Data.Word 
(Word64)\n\nimport Bcc.BM.Data.Aggregated\nimport Bcc.BM.Data.LogItem (LoggerName,\n LogObject (..), LOContent (..), LOMeta (..),\n PrivacyAnnotation (..), mkLOMeta)\nimport Bcc.BM.Data.Severity (Severity (..))\nimport Bcc.BM.Data.Trace\nimport Control.Tracer\n\n\\end{code}\n%endif\n\nThis module extends the basic |Tracer| with one that keeps a list of context names to\ncreate the basis for |Trace| which accepts messages from a Tracer and ends in the |Switchboard|\nfor further processing of the messages.\n\n\\begin{scriptsize}\n\\begin{verbatim}\n +-----------------------+\n | |\n | external code |\n | |\n +----------+------------+\n |\n |\n +-----v-----+\n | |\n | Tracer |\n | |\n +-----+-----+\n |\n |\n +-----------v------------+\n | |\n | Trace |\n | |\n +-----------+------------+\n |\n +-----------v------------+\n | Switchboard |\n +------------------------+\n\n +----------+ +-----------+\n |Monitoring| |Aggregation|\n +----------+ +-----------+\n\n +-------+\n |Logging|\n +-------+\n\n+-------------+ +------------+\n|Visualisation| |Benchmarking|\n+-------------+ +------------+\n\n\\end{verbatim}\n\\end{scriptsize}\n\n\\subsubsection{ToLogObject - transforms a logged item to LogObject}\n\\label{code:ToLogObject}\\index{ToLogObject}\n\\label{code:toLogObject}\\index{ToLogObject!toLogObject}\n\\label{code:toLogObject'}\\index{ToLogObject!toLogObject'}\n\\label{code:toLogObjectVerbose}\\index{ToLogObject!toLogObjectVerbose}\n\\label{code:toLogObjectMinimal}\\index{ToLogObject!toLogObjectMinimal}\n\nThe transformer |toLogObject| accepts any type for which a |ToObject| instance\nis available and returns a |LogObject| which can be forwarded into the\n|Switchboard|. 
It adds a verbosity hint of |NormalVerbosity|.\n\\\\\nA verbosity level |TracingVerbosity| can be passed\nto the transformer |toLogObject'|.\n\n\n\\begin{code}\nclass Monad m => ToLogObject m where\n toLogObject :: (ToObject a, Transformable a m b)\n => Trace m a -> Tracer m b\n toLogObject' :: (ToObject a, Transformable a m b)\n => TracingVerbosity -> Trace m a -> Tracer m b\n toLogObjectVerbose :: (ToObject a, Transformable a m b)\n => Trace m a -> Tracer m b\n default toLogObjectVerbose :: (ToObject a, Transformable a m b)\n => Trace m a -> Tracer m b\n toLogObjectVerbose = trTransformer MaximalVerbosity\n toLogObjectMinimal :: (ToObject a, Transformable a m b)\n => Trace m a -> Tracer m b\n default toLogObjectMinimal :: (ToObject a, Transformable a m b)\n => Trace m a -> Tracer m b\n toLogObjectMinimal = trTransformer MinimalVerbosity\n\ninstance ToLogObject IO where\n toLogObject :: (MonadIO m, ToObject a, Transformable a m b)\n => Trace m a -> Tracer m b\n toLogObject = trTransformer NormalVerbosity\n toLogObject' :: (MonadIO m, ToObject a, Transformable a m b)\n => TracingVerbosity -> Trace m a -> Tracer m b\n toLogObject' = trTransformer\n\n\\end{code}\n\n\\begin{spec}\nTo be placed in shardagnostic-network.\n\ninstance (MonadFork m, MonadTimer m) => ToLogObject m where\n toLogObject tr = Tracer $ \\a -> do\n lo <- LogObject <$> pure \"\"\n <*> (LOMeta <$> getMonotonicTime -- must be evaluated at the calling site\n <*> (pack . 
show <$> myThreadId)\n <*> pure Debug\n <*> pure Public)\n <*> pure (LogMessage a)\n traceWith tr lo\n\n\\end{spec}\n\n\\subsubsection{Verbosity levels}\n\\label{code:TracingVerbosity}\\index{TracingVerbosity}\n\\label{code:MinimalVerbosity}\\index{TracingVerbosity!MinimalVerbosity}\n\\label{code:NormalVerbosity}\\index{TracingVerbosity!NormalVerbosity}\n\\label{code:MaximalVerbosity}\\index{TracingVerbosity!MaximalVerbosity}\nThe tracing verbosity will be passed to instances of |ToObject| for rendering\nthe traced item accordingly.\n\\begin{code}\ndata TracingVerbosity = MinimalVerbosity | NormalVerbosity | MaximalVerbosity\n deriving (Eq, Read, Ord)\n\n\\end{code}\n\n\\subsubsection{ToObject - transforms a logged item to a JSON Object}\n\\label{code:ToObject}\\index{ToObject}\n\\label{code:toObject}\\index{ToObject!toObject}\nKatip requires JSON objects to be logged as context. This\ntypeclass provides a default instance which uses |ToJSON| and\nproduces an empty object if 'toJSON' results in any type other than\n|Object|. If you have a type you want to log that produces an Array\nor Number for example, you'll want to write an explicit instance of\n|ToObject|. 
You can trivially add a |ToObject| instance for something with\na |ToJSON| instance like:\n\\begin{spec}\ninstance ToObject Foo\n\\end{spec}\n\\\\\nThe |toObject| function accepts a |TracingVerbosity| level as argument\nand can render the traced item differently depending on the verbosity level.\n\n\\begin{code}\nclass ToObject a where\n toObject :: TracingVerbosity -> a -> Object\n default toObject :: ToJSON a => TracingVerbosity -> a -> Object\n toObject _ v = case toJSON v of\n Object o -> o\n s@(String _) -> HM.singleton \"string\" s\n _ -> mempty\n textTransformer :: a -> Object -> Text\n default textTransformer :: a -> Object -> Text\n textTransformer _ o = TL.toStrict $ encodeToLazyText o\n\n\\end{code}\n\nA helper function for creating an |Object| given a list of pairs, named items,\nor the empty |Object|.\n\\label{code:mkObject}\\index{mkObject}\n\\label{code:emptyObject}\\index{emptyObject}\n\\begin{code}\nmkObject :: ToObject a => [(Text, a)] -> HM.HashMap Text a\nmkObject = HM.fromList\n\nemptyObject :: ToObject a => HM.HashMap Text a\nemptyObject = HM.empty\n\n\\end{code}\n\ndefault instances:\n\\begin{code}\ninstance ToObject () where\n toObject _ _ = mempty\n\ninstance ToObject String\ninstance ToObject Text\ninstance ToObject Value\ninstance ToJSON a => ToObject (LogObject a)\ninstance ToJSON a => ToObject (LOContent a)\n\n\\end{code}\n\n\\subsubsection{A transformable Tracer}\n\\label{code:Transformable}\\index{Transformable}\n\\label{code:trTransformer}\\index{Transformable!trTransformer}\nParameterised over the source |Tracer| (\\emph{b}) and\nthe target |Tracer| (\\emph{a}).\\\\\nThe default definition of |trTransformer| is the |nullTracer|. 
This blocks output\nof all items which lack a corresponding instance of |Transformable|.\\\\\nDepending on the input type it can create objects of |LogValue| for numerical values,\n|LogMessage| for textual messages, and for all others a |LogStructured| of their\n|ToObject| representation.\n\n\\begin{code}\nclass (Monad m, HasPrivacyAnnotation b, HasSeverityAnnotation b) => Transformable a m b where\n trTransformer :: TracingVerbosity -> Trace m a -> Tracer m b\n default trTransformer :: TracingVerbosity -> Trace m a -> Tracer m b\n trTransformer _ _ = nullTracer\n\ntrFromIntegral :: (Integral b, MonadIO m, HasPrivacyAnnotation b, HasSeverityAnnotation b)\n => LoggerName -> Trace m a -> Tracer m b\ntrFromIntegral name tr = Tracer $ \\arg ->\n traceWith tr =<< do\n meta <- mkLOMeta (getSeverityAnnotation arg) (getPrivacyAnnotation arg)\n return ( mempty\n , LogObject mempty meta (LogValue name $ PureI $ fromIntegral arg)\n )\n\ntrFromReal :: (Real b, MonadIO m, HasPrivacyAnnotation b, HasSeverityAnnotation b)\n => LoggerName -> Trace m a -> Tracer m b\ntrFromReal name tr = Tracer $ \\arg ->\n traceWith tr =<< do\n meta <- mkLOMeta (getSeverityAnnotation arg) (getPrivacyAnnotation arg)\n return ( mempty\n , LogObject mempty meta (LogValue name $ PureD $ realToFrac arg)\n )\n\ninstance Transformable a IO Int where\n trTransformer MinimalVerbosity = trFromIntegral \"\"\n trTransformer _ = trFromIntegral \"int\"\ninstance Transformable a IO Integer where\n trTransformer MinimalVerbosity = trFromIntegral \"\"\n trTransformer _ = trFromIntegral \"integer\"\ninstance Transformable a IO Word64 where\n trTransformer MinimalVerbosity = trFromIntegral \"\"\n trTransformer _ = trFromIntegral \"word64\"\ninstance Transformable a IO Double where\n trTransformer MinimalVerbosity = trFromReal \"\"\n trTransformer _ = trFromReal \"double\"\ninstance Transformable a IO Float where\n trTransformer MinimalVerbosity = trFromReal \"\"\n trTransformer _ = trFromReal \"float\"\ninstance 
Transformable Text IO Text where\n trTransformer _ tr = Tracer $ \\arg ->\n traceWith tr =<< do\n meta <- mkLOMeta (getSeverityAnnotation arg) (getPrivacyAnnotation arg)\n return ( mempty\n , LogObject mempty meta (LogMessage arg)\n )\ninstance Transformable String IO String where\n trTransformer _ tr = Tracer $ \\arg ->\n traceWith tr =<< do\n meta <- mkLOMeta (getSeverityAnnotation arg) (getPrivacyAnnotation arg)\n return ( mempty\n , LogObject mempty meta (LogMessage arg)\n )\ninstance Transformable Text IO String where\n trTransformer _ tr = Tracer $ \\arg ->\n traceWith tr =<< do\n meta <- mkLOMeta (getSeverityAnnotation arg) (getPrivacyAnnotation arg)\n return ( mempty\n , LogObject mempty meta (LogMessage $ T.pack arg)\n )\ninstance Transformable String IO Text where\n trTransformer _ tr = Tracer $ \\arg ->\n traceWith tr =<< do\n meta <- mkLOMeta (getSeverityAnnotation arg) (getPrivacyAnnotation arg)\n return ( mempty\n , LogObject mempty meta (LogMessage $ T.unpack arg)\n )\n\n\\end{code}\n\nThe function |trStructured| is a tracer transformer which transforms traced items\nto their |ToObject| representation and further traces them as a |LogObject| of type\n|LogStructured|. 
If the |ToObject| representation is empty, then no tracing happens.\n\\label{code:trStructured}\\index{trStructured}\n\\begin{code}\ntrStructured :: (ToObject b, MonadIO m, HasPrivacyAnnotation b, HasSeverityAnnotation b)\n => TracingVerbosity -> Trace m a -> Tracer m b\ntrStructured verb tr = Tracer $ \\arg ->\n let\n obj = toObject verb arg\n in traceWith tr =<< do\n meta <- mkLOMeta (getSeverityAnnotation arg) (getPrivacyAnnotation arg)\n return ( mempty\n , LogObject mempty meta (LogStructuredText obj (T.pack $ show $ obj))\n )\n\n\\end{code}\n\n\n\\label{code:trStructuredText}\\index{trStructuredText}\n\\label{code:HasTextFormatter}\\index{HasTextFormatter}\n\\begin{code}\nclass HasTextFormatter a where\n formatText :: a -> Object -> Text\n default formatText :: a -> Object -> Text\n formatText _a = T.pack . show\n\ntrStructuredText :: ( ToObject b, MonadIO m, HasTextFormatter b\n , HasPrivacyAnnotation b, HasSeverityAnnotation b )\n => TracingVerbosity -> Trace m a -> Tracer m b\ntrStructuredText verb tr = Tracer $ \\arg ->\n let\n obj = toObject verb arg\n in traceWith tr =<< do\n meta <- mkLOMeta (getSeverityAnnotation arg) (getPrivacyAnnotation arg)\n return ( mempty\n , LogObject mempty meta (LogStructuredText obj (formatText arg obj))\n )\n\n\\end{code}\n\n\\subsubsection{Transformers for setting severity level}\n\\label{code:setSeverity}\n\\label{code:severityDebug}\n\\label{code:severityInfo}\n\\label{code:severityNotice}\n\\label{code:severityWarning}\n\\label{code:severityError}\n\\label{code:severityCritical}\n\\label{code:severityAlert}\n\\label{code:severityEmergency}\n\\index{setSeverity}\\index{severityDebug}\\index{severityInfo}\n\\index{severityNotice}\\index{severityWarning}\\index{severityError}\n\\index{severityCritical}\\index{severityAlert}\\index{severityEmergency}\nThe log |Severity| level of a |LogObject| can be altered.\n\\begin{code}\nsetSeverity :: Severity -> Trace m a -> Trace m a\nsetSeverity sev tr = Tracer $ 
\\(ctx,lo@(LogObject _nm meta@(LOMeta _ts _tid _hn _sev _pr) _lc)) ->\n traceWith tr $ (ctx, lo { loMeta = meta { severity = sev } })\n\nseverityDebug, severityInfo, severityNotice,\n severityWarning, severityError, severityCritical,\n severityAlert, severityEmergency :: Trace m a -> Trace m a\nseverityDebug = setSeverity Debug\nseverityInfo = setSeverity Info\nseverityNotice = setSeverity Notice\nseverityWarning = setSeverity Warning\nseverityError = setSeverity Error\nseverityCritical = setSeverity Critical\nseverityAlert = setSeverity Alert\nseverityEmergency = setSeverity Emergency\n\n\\end{code}\n\n\\label{code:annotateSeverity}\\index{annotateSeverity}\nThe |Severity| of any |Tracer| can be set by wrapping it in |WithSeverity|.\nThe traced types need to be of class |HasSeverityAnnotation|.\n\\begin{code}\nannotateSeverity :: HasSeverityAnnotation a => Tracer m (WithSeverity a) -> Tracer m a\nannotateSeverity tr = Tracer $ \\arg ->\n traceWith tr $ WithSeverity (getSeverityAnnotation arg) arg\n\n\\end{code}\n\n\\subsubsection{Transformers for setting privacy annotation}\n\\label{code:setPrivacy}\n\\label{code:annotateConfidential}\n\\label{code:annotatePublic}\n\\index{setPrivacy}\\index{annotateConfidential}\\index{annotatePublic}\nThe privacy annotation (|PrivacyAnnotation|) of the |LogObject| can\nbe altered with the following functions.\n\\begin{code}\nsetPrivacy :: PrivacyAnnotation -> Trace m a -> Trace m a\nsetPrivacy prannot tr = Tracer $ \\(ctx,lo@(LogObject _nm meta _lc)) ->\n traceWith tr $ (ctx, lo { loMeta = meta { privacy = prannot }})\n\nannotateConfidential, annotatePublic :: Trace m a -> Trace m a\nannotateConfidential = setPrivacy Confidential\nannotatePublic = setPrivacy Public\n\n\\end{code}\n\n\\label{code:annotatePrivacyAnnotation}\\index{annotatePrivacyAnnotation}\nThe |PrivacyAnnotation| of any |Tracer| can be set by wrapping it in |WithPrivacyAnnotation|.\nThe traced types need to be of class 
|HasPrivacyAnnotation|.\n\\begin{code}\nannotatePrivacyAnnotation :: HasPrivacyAnnotation a => Tracer m (WithPrivacyAnnotation a) -> Tracer m a\nannotatePrivacyAnnotation tr = Tracer $ \\arg ->\n traceWith tr $ WithPrivacyAnnotation (getPrivacyAnnotation arg) arg\n\n\\end{code}\n\n\\subsubsection{Transformer for filtering based on \\emph{Severity}}\n\\label{code:WithSeverity}\\index{WithSeverity}\nThis structure wraps a |Severity| around traced observables.\n\\begin{code}\ndata WithSeverity a = WithSeverity Severity a\n\n\\end{code}\n\n\\label{code:filterSeverity}\\index{filterSeverity}\nThe traced observables with annotated severity are filtered.\n\\begin{code}\nfilterSeverity :: forall m a. (Monad m, HasSeverityAnnotation a)\n => (a -> m Severity)\n -> Tracer m a\n -> Tracer m a\nfilterSeverity msevlimit tr = Tracer $ \\arg -> do\n sevlimit <- msevlimit arg\n when (getSeverityAnnotation arg >= sevlimit) $\n traceWith tr arg\n\n\\end{code}\n\nGeneral instances of |WithSeverity| wrapped observable types.\n\n\\begin{code}\ninstance forall m a t. (Monad m, Transformable t m a) => Transformable t m (WithSeverity a) where\n trTransformer verb tr = Tracer $ \\(WithSeverity sev arg) ->\n let transformer :: Tracer m a\n transformer = trTransformer verb $ setSeverity sev tr\n in traceWith transformer arg\n\n\\end{code}\n\n\\subsubsection{Transformer for filtering based on \\emph{PrivacyAnnotation}}\n\\label{code:WithPrivacyAnnotation}\\index{WithPrivacyAnnotation}\nThis structure wraps a |PrivacyAnnotation| around traced observables.\n\\begin{code}\ndata WithPrivacyAnnotation a = WithPrivacyAnnotation PrivacyAnnotation a\n\n\\end{code}\n\n\\label{code:filterPrivacyAnnotation}\\index{filterPrivacyAnnotation}\nThe traced observables with annotated privacy are filtered.\n\\begin{code}\nfilterPrivacyAnnotation :: forall m a. 
(Monad m, HasPrivacyAnnotation a)\n => (a -> m PrivacyAnnotation)\n -> Tracer m a\n -> Tracer m a\nfilterPrivacyAnnotation mpa tr = Tracer $ \\arg -> do\n pa <- mpa arg\n when (getPrivacyAnnotation arg == pa) $\n traceWith tr arg\n\n\\end{code}\n\nGeneral instances of |WithPrivacyAnnotation| wrapped observable types.\n\n\\begin{code}\ninstance forall m a t. (Monad m, Transformable t m a) => Transformable t m (WithPrivacyAnnotation a) where\n trTransformer verb tr = Tracer $ \\(WithPrivacyAnnotation pa arg) ->\n let transformer :: Tracer m a\n transformer = trTransformer verb $ setPrivacy pa tr\n in traceWith transformer arg\n\n\\end{code}\n\n\\subsubsection{The properties of being annotated with severity and privacy}\n\\label{code:HasSeverityAnnotation}\\index{HasSeverityAnnotation}\nFrom a type with the property of |HasSeverityAnnotation|, one will be able to\nextract its severity annotation.\n\n\\begin{code}\nclass HasSeverityAnnotation a where\n getSeverityAnnotation :: a -> Severity\n default getSeverityAnnotation :: a -> Severity\n getSeverityAnnotation _ = Debug\n\ninstance HasSeverityAnnotation (WithSeverity a) where\n getSeverityAnnotation (WithSeverity sev _) = sev\n\ninstance HasSeverityAnnotation a => HasSeverityAnnotation (WithPrivacyAnnotation a) where\n getSeverityAnnotation (WithPrivacyAnnotation _ a) = getSeverityAnnotation a\n\n-- default instances\ninstance HasSeverityAnnotation Double\ninstance HasSeverityAnnotation Float\ninstance HasSeverityAnnotation Int\ninstance HasSeverityAnnotation Integer\ninstance HasSeverityAnnotation String\ninstance HasSeverityAnnotation Text\ninstance HasSeverityAnnotation Word64\n\n\\end{code}\n\n\\label{code:HasPrivacyAnnotation}\\index{HasPrivacyAnnotation}\nAnd, privacy annotation can be extracted from types with the property |HasPrivacyAnnotation|.\n\n\\begin{code}\nclass HasPrivacyAnnotation a where\n getPrivacyAnnotation :: a -> PrivacyAnnotation\n default getPrivacyAnnotation :: a -> PrivacyAnnotation\n 
getPrivacyAnnotation _ = Public\n\ninstance HasPrivacyAnnotation (WithPrivacyAnnotation a) where\n getPrivacyAnnotation (WithPrivacyAnnotation pva _) = pva\n\ninstance HasPrivacyAnnotation a => HasPrivacyAnnotation (WithSeverity a) where\n getPrivacyAnnotation (WithSeverity _ a) = getPrivacyAnnotation a\n\n-- default instances\ninstance HasPrivacyAnnotation Double\ninstance HasPrivacyAnnotation Float\ninstance HasPrivacyAnnotation Int\ninstance HasPrivacyAnnotation Integer\ninstance HasPrivacyAnnotation String\ninstance HasPrivacyAnnotation Text\ninstance HasPrivacyAnnotation Word64\n\n\\end{code}\n","avg_line_length":36.1115044248,"max_line_length":106,"alphanum_fraction":0.6817624859} {"size":8106,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"%let lectureNotes = not book\n\n%if submit\n\\documentclass{book}\n%else\n\\documentclass[oneside]{book} % use one-side as long as we edit the book (easier to read in pdf reader).\n%endif %submit\n%if book\n\\usepackage{mathpazo}\n%endif\n%if book\n\\usepackage[bindingoffset=0cm,a4paper,centering,totalheight=200mm,textwidth=120mm,includefoot,includehead,\n% showframe=true, showcrop=true, verbose\n ]{geometry}\n%Publisher instructions:\n%\\usepackage[ paperheight =297mm,paperwidth =210mm, % or: \"paper=a4paper\"\n% layoutheight =200mm,layoutwidth =120mm,\n% layoutvoffset= 48.9mm,layouthoffset= 45mm,\n% centering,\n% margin=0pt, includehead, includefoot,\n% footskip=0mm,\n% showframe=true, showcrop=true, verbose\n% ]{geometry}\n% It might need a little tweaking.\n% This version, although not using A4 paper, also works\n%\\usepackage[%\n% papersize={156mm,234mm}, %7in,10in},\n% lmargin=18.9mm,\n% rmargin=18.9mm,\n% tmargin=22mm,\n% bmargin=22mm,\n% includefoot,\n% includehead,\n% showframe=true, showcrop=true, verbose\n% ]{geometry}\n%else\n\\usepackage[margin=2cm,a4paper]{geometry}\n% \\usepackage{a4wide} (the above is even more compact and easier to 
control)\n%endif\n\\usepackage{layout}\n\\usepackage{amsmath}\n\\usepackage{amsthm}\n\\usepackage{mathrsfs}\n%\\usepackage{textgreek}\n%include polycode.fmt\n%include ..\/L\/dslm.format\n%%% Somewhat updated version of polycode.fmt from the lhs2TeX dist.\n% %include dslmagda.fmt\n%%% Our own formatting directives\n%include dslm.format\n%\\usepackage{showidx}\n\\usepackage{imakeidx}\n\\usepackage{natbib}\n\\usepackage{wrapfig}\n\\usepackage{graphicx}\n\\usepackage[dvipsnames]{xcolor}\n\\usepackage{colortbl}\n\\RequirePackage[T1]{fontenc}\n\\RequirePackage[utf8]{inputenc}\n\\usepackage{newunicodechar}\n%include newunicodedefs.tex\n\\usepackage[type={CC},modifier={by-nc-sa},version={4.0}]{doclicense}\n\\RequirePackage{amsfonts}\n\\usepackage{tikz}\n%\\usepackage[labelfont=bf]{caption}\n\\usepackage{caption}\n\\usepackage{subcaption}\n\\usepackage{tikz-cd}\n\\usetikzlibrary{trees,graphs,quotes,fit,shapes,matrix}\n\\usepackage{lineno}\n\\usepackage{enumitem}\n\\usepackage{pdfpages}\n\\usepackage{tabu}\n%if book\n%\\usepackage[backref=page,colorlinks=true,allcolors=blue]{hyperref}\n\\usepackage[colorlinks=true,allcolors=blue]{hyperref}\n%else\n\\usepackage{hyperref}\n%endif\n\\usepackage[capitalize]{cleveref} % \\cref\n\\usepackage{doi}\n% \\usepackage{backref} % does not work with cleveref\n\\newcommand\\jp[1]{\\todo{JP: #1}}\n\\newcommand\\ci[1]{\\todo{CI: #1}}\n\\newcommand\\pj[1]{\\todo{PJ: #1}}\n%\\newcommand\\extraMaterial{\\(\\ast\\)}\n\\newcommand\\extraMaterial{*}\n\\newcommand\\pedantic[1]{\\footnote{Pedantic remark: #1}}\n\n\\providecommand\\mathbbm{\\mathbb}\n\n\n%if submit\n\\newcommand{\\todo}[2][?]{}\n%else\n\\newcommand{\\todo}[2][?]{\\marginpar{\\raggedright\\tiny{}TODO: #2}}\n%endif\n\n\\newcommand{\\addtoindex}[1]{#1\\index{#1}}\n\\setlength{\\parindent}{0pt}\n\\setlength{\\parskip}{6pt plus 2pt minus 
1pt}\n\n\\theoremstyle{definition}\n\\newtheorem{exercise}{Exercise}[chapter]\n\\newtheorem{theorem}{Theorem}[chapter]\n\\newtheorem{lemma}{Lemma}[chapter]\n\\newtheorem{definition}{Definition}[chapter]\n\\newtheorem{proposition}{Proposition}[chapter]\n% Old exercise style (until 2017-11-08): \\renewcommand*{\\theenumi}{\\textbf{E\\thesection.\\arabic{enumi}}}\n\\newenvironment{solution}% environment name\n{% begin code\n \\textbf{Solution:}%\n}%\n{}% end code\n\\newenvironment{example}% environment name\n{% begin code\n \\paragraph{Example}%\n}%\n{}% end code\n\\newcommand{\\reviseForBook}[1]{} % don't show anything for now\n%if book\n\\newcommand\\bookOnly[1]{#1} % show only in the book version (not the lecture notes)\n%else\n\\newcommand\\bookOnly[1]{} % show only in the book version (not the lecture notes)\n%endif\n%if lectureNotes\n\\newcommand\\lnOnly[1]{#1} % show only in the lecture notes (not the book version)\n%else\n\\newcommand\\lnOnly[1]{} % show only in the lecture notes (not the book version)\n%endif\n\\newcommand\\course{\\lnOnly{course}\\bookOnly{book}}\n\\newcommand{\\TODO}[1]{\\todo{#1}}\n\\newcommand\\refFig{\\cref}\n\\newcommand\\refSec{\\cref}\n\\newcommand{\\refTab}[1]{Tab.~\\ref{#1}}\n\\newcommand{\\tyconsym}[1]{\\mathrel{{:}{#1}{:}}}\n% the `doubleequals' macro is due to Jeremy Gibbons\n\\def\\doubleequals{\\mathrel{\\unitlength 0.01em\n \\begin{picture}(78,40)\n \\put(7,34){\\line(1,0){25}} \\put(45,34){\\line(1,0){25}}\n \\put(7,14){\\line(1,0){25}} \\put(45,14){\\line(1,0){25}}\n \\end{picture}}}\n%% If you remove the %format == command the lhs2TeX default yields \u2261, which can be a problem\n\\def\\tripleequals{\\mathrel{\\unitlength 0.01em\n \\begin{picture}(116,40)\n \\put(7,34){\\line(1,0){25}} \\put(45,34){\\line(1,0){25}} \\put(83,34){\\line(1,0){25}}\n \\put(7,14){\\line(1,0){25}} \\put(45,14){\\line(1,0){25}} \\put(83,14){\\line(1,0){25}}\n 
\\end{picture}}}\n\\providecommand{\\cpp}{C\\kern-0.05em\\texttt{+\\kern-0.03em+}}\n\\newcommand{\\colvec}[1]{\\colvecc{#1_0}{#1_n}}\n\\newcommand{\\colvecc}[2]{\\colveccc{#1 \\\\ \\vdots \\\\ #2}}\n\\newcommand{\\colveccc}[1]{\\begin{bmatrix} #1 \\end{bmatrix}}\n\\newcommand{\\rowvec}[1]{\\rowvecc{#1_0}{#1_n}}\n\\newcommand{\\rowvecc}[2]{\\rowveccc{#1 & \\cdots & #2}}\n\\newcommand{\\rowveccc}[1]{\\begin{bmatrix} #1 \\end{bmatrix}}\n\\newcommand{\\fromExam}[1]{\\lnOnly{\\emph{From exam #1}}}\n\\newcommand\\crefatpage[1]{\\cref{#1}, on page \\pageref{#1}}\n\n\\title{Domain-Specific Languages of Mathematics\n%if lectureNotes\n: Lecture Notes\n%endif lectureNotes\n%if not submit\n\\\\{\\small Draft of 2nd edition}\n%endif\n}\n\\author{Patrik Jansson \\and Cezar Ionescu \\and Jean-Philippe Bernardy}\n% {Chalmers University of Technology, Sweden}\n% {\\texttt{patrikj@@chalmers.se}}\n% {\\texttt{cezar@@chalmers.se}}\n\\date{Work in progress: draft of \\today\\\\[3ex]\\includegraphics{..\/admin\/DSL_logo\/DSL_logo.pdf}}\n\\makeindex\n\\begin{document}\n\\frontmatter\n%if not submit\n% Publisher instruction: The submitted file should not include titlepage etc.\n\\maketitle\n\\paragraph{Abstract}\n%\n The main idea behind this book is to encourage readers to\n approach mathematical domains from a functional programming\n perspective:\n%\n to identify the main functions and types involved and, when\n necessary, to introduce new abstractions;\n%\n to give calculational proofs;\n%\n to pay attention to the syntax of the mathematical expressions;\n%\n and, finally, to organize the resulting functions and types in\n domain-specific languages.\n\n The book is recommended for developers who are learning mathematics\n and would like to use Haskell to make sense of definitions and\n theorems.\n%\n It is also a book for the mathematically interested who wants to\n explore functional programming and domain-specific languages.\n%\n The book helps put into perspective the domains of 
Mathematics and\n Functional Programming and shows how Computer Science and\n Mathematics are usefully taught together.\n\n\\vfill\n\n\\paragraph{License} \\doclicenseThis\n\nThe printed book version of this document is published through College\nPublications \\citep{JanssonIonescuBernardyDSLsofMathBook2022}.\n\n% \\layout\n%endif % not submit\n%if submit\n% Publisher instruction: the ToC should start at page v = page 5\n\\setcounter{page}{5}\n%endif % submit\n\\addtocontents{toc}{\\protect\\setcounter{tocdepth}{1}}\n\\tableofcontents\n\\mainmatter\n\n%include 00\/Intro.lhs\n%include 01\/W01.lhs\n%include 02\/W02.lhs\n%include 03\/W03.lhs\n%include 04\/W04.lhs\n%include 05\/W05.lhs\n%include 06\/W06.lhs\n%include 07\/W07.lhs\n%include 08\/W08.lhs\n%include 09\/W09.lhs\n% %include End.lhs\n\n% ----------------------------------------------------------------\n\\appendix\n\n%if lectureNotes\n\\chapter{Exam Practice}\n\n\\newcommand{\\includeexam}[1]{\\includeexaminner{Exam #1}{app:Exam#1}{..\/Exam\/#1\/Exam-#1.pdf}}\n\n\\newcommand{\\includeexaminner}[3]{%\n\\section{#1}\n\\label{#2}\n\\includegraphics[page=1,trim={2cm 3.5cm 2cm 3.5cm},clip]{#3}\n\\includepdf[pages=2-3]{#3}\n}\n\n\\includeexam{2016-Practice}\n\\includeexam{2016-03}\n\\includeexam{2016-08}\n\\includeexam{2017-03}\n\\includeexam{2017-08}\n%endif\n\n%include Appendix\/DSLMCourse.lhs\n%include 01\/CSem.lhs\n\n\\bibliographystyle{abbrvnat}\n\\bibliography{ref,pj}\n\\printindex\n\\end{document}\n","avg_line_length":30.8212927757,"max_line_length":106,"alphanum_fraction":0.7179866765} {"size":4562,"ext":"lhs","lang":"Literate Haskell","max_stars_count":8.0,"content":"> module Pixley where\n\n> import Text.ParserCombinators.Parsec\n> import qualified Data.Map as Map\n\nDefinitions\n===========\n\nAn environment maps names (represented as strings) to expressions.\n\n> type Env = Map.Map String Expr\n\n> data Expr = Symbol String\n> | Cons Expr Expr\n> | Null\n> | Boolean Bool\n> | Lambda Env Expr Expr\n> | 
Macro Env Expr\n> deriving (Ord, Eq)\n\n> instance Show Expr where\n> show (Symbol s) = s\n> show e@(Cons _ _) = \"(\" ++ (showl e)\n> show Null = \"()\"\n> show (Boolean True) = \"#t\"\n> show (Boolean False) = \"#f\"\n> show (Lambda env args body) = \"(lambda \" ++ (show args) ++ \" \" ++ (show body) ++ \")\"\n> show (Macro env body) = \"(macro \" ++ (show body) ++ \")\"\n\n> showl Null = \")\"\n> showl (Cons a Null) = (show a) ++ \")\"\n> showl (Cons a b) = (show a) ++ \" \" ++ (showl b)\n> showl other = \". \" ++ (show other) ++ \")\"\n\nParser\n======\n\nThe overall grammar of the language is:\n\n Expr ::= symbol | \"(\" {Expr} \")\"\n\nA symbol is denoted by a string which may contain only alphanumeric\ncharacters, hyphens, underscores, and question marks.\n\n> symbol = do\n> c <- letter\n> cs <- many (alphaNum <|> char '-' <|> char '?' <|> char '_' <|> char '*')\n> return (Symbol (c:cs))\n\n> list = do\n> string \"(\"\n> e <- many expr\n> spaces\n> string \")\"\n> return (consFromList e)\n\nThe top-level parsing function implements the overall grammar given above.\nNote that we need to give the type of this parser here -- otherwise the\ntype inferencer freaks out for some reason.\n\n> expr :: Parser Expr\n> expr = do\n> spaces\n> r <- (symbol <|> list)\n> return r\n\nA convenience function for parsing Pixley programs.\n\n> pa program = parse expr \"\" program\n\nA helper function to make Cons cells from Haskell lists.\n\n> consFromList [] =\n> Null\n> consFromList (x:xs) =\n> Cons x (consFromList xs)\n\nEvaluator\n=========\n\n> car (Cons a b) = a\n> cdr (Cons a b) = b\n\nWe need to check for properly-formed lists, because that's what\nScheme and Pixley do.\n\n> listp Null = Boolean True\n> listp (Cons a b) = listp b\n> listp _ = Boolean False\n\n> eval env (Symbol s) =\n> (Map.!) 
env s\n> eval env (Cons (Symbol \"quote\") (Cons sexpr Null)) =\n> sexpr\n> eval env (Cons (Symbol \"car\") (Cons sexpr Null)) =\n> car (eval env sexpr)\n> eval env (Cons (Symbol \"cdr\") (Cons sexpr Null)) =\n> cdr (eval env sexpr)\n> eval env (Cons (Symbol \"cons\") (Cons sexpr1 (Cons sexpr2 Null))) =\n> Cons (eval env sexpr1) (eval env sexpr2)\n> eval env (Cons (Symbol \"list?\") (Cons sexpr Null)) =\n> listp (eval env sexpr)\n> eval env (Cons (Symbol \"equal?\") (Cons sexpr1 (Cons sexpr2 Null))) =\n> Boolean ((eval env sexpr1) == (eval env sexpr2))\n> eval env (Cons (Symbol \"let*\") (Cons bindings (Cons body Null))) =\n> eval (bindAll bindings env) body\n> eval env (Cons (Symbol \"cond\") rest) =\n> checkAll env rest\n> eval env (Cons (Symbol \"lambda\") (Cons args (Cons body Null))) =\n> Lambda env args body\n\n> eval env (Cons fun actuals) =\n> case eval env fun of\n> Lambda closedEnv formals body ->\n> eval (bindArgs env closedEnv formals actuals) body\n> Macro closedEnv body ->\n> let\n> env' = Map.insert \"arg\" actuals closedEnv\n> in\n> eval env' body\n> eval env weirdThing =\n> error (\"You can't evaluate a \" ++ show weirdThing)\n\n> checkAll env (Cons (Cons (Symbol \"else\") (Cons branch Null)) Null) =\n> eval env branch\n> checkAll env (Cons (Cons test (Cons branch Null)) rest) =\n> case eval env test of\n> Boolean True ->\n> eval env branch\n> Boolean False ->\n> checkAll env rest\n\n> bindAll Null env =\n> env\n> bindAll (Cons binding rest) env =\n> bindAll rest (bind binding env)\n\n> bind (Cons (Symbol sym) (Cons sexpr Null)) env =\n> Map.insert sym (eval env sexpr) env\n\n> bindArgs env closedEnv Null Null =\n> closedEnv\n> bindArgs env closedEnv (Cons (Symbol sym) formals) (Cons actual actuals) =\n> Map.insert sym (eval env actual) (bindArgs env closedEnv formals actuals)\n\n> consFromEnvList [] =\n> Null\n> consFromEnvList ((k,v):rest) =\n> Cons (Cons (Symbol k) (Cons v Null)) (consFromEnvList rest)\n\n> envFromCons Null =\n> Map.empty\n> 
envFromCons (Cons (Cons (Symbol k) (Cons v Null)) rest) =\n> Map.insert k v (envFromCons rest)\n\nTop-Level Driver\n================\n\n> runPixley program =\n> let\n> Right ast = parse expr \"\" program\n> in\n> eval Map.empty ast\n","avg_line_length":27.8170731707,"max_line_length":90,"alphanum_fraction":0.5951337133} {"size":17225,"ext":"lhs","lang":"Literate Haskell","max_stars_count":2.0,"content":"%\n% (c) The University of Glasgow 2006\n% (c) The GRASP\/AQUA Project, Glasgow University, 1993-1998\n%\n\\section[IdInfo]{@IdInfos@: Non-essential information about @Ids@}\n\n(And a pretty good illustration of quite a few things wrong with\nHaskell. [WDP 94\/11])\n\n\\begin{code}\n{-# OPTIONS -fno-warn-tabs #-}\n-- The above warning suppression flag is a temporary kludge.\n-- While working on this module you are encouraged to remove it and\n-- detab the module (please do the detabbing in a separate patch). See\n-- http:\/\/ghc.haskell.org\/trac\/ghc\/wiki\/Commentary\/CodingStyle#TabsvsSpaces\n-- for details\n\nmodule IdInfo (\n    -- * The IdDetails type\n\tIdDetails(..), pprIdDetails, coVarDetails,\n\n    -- * The IdInfo type\n\tIdInfo,\t\t-- Abstract\n\tvanillaIdInfo, noCafIdInfo,\n\tseqIdInfo, megaSeqIdInfo,\n\n    -- ** The OneShotInfo type\n    OneShotInfo(..),\n    oneShotInfo, noOneShotInfo, hasNoOneShotInfo,\n    setOneShotInfo, \n\n\t-- ** Zapping various forms of Info\n\tzapLamInfo, zapFragileInfo,\n        zapDemandInfo,\n\n\t-- ** The ArityInfo type\n\tArityInfo,\n\tunknownArity, \n\tarityInfo, setArityInfo, ppArityInfo, \n\n\t-- ** Demand and strictness Info\n \tstrictnessInfo, setStrictnessInfo, \n \tdemandInfo, setDemandInfo, pprStrictness,\n\n\t-- ** Unfolding Info\n\tunfoldingInfo, setUnfoldingInfo, setUnfoldingInfoLazily,\n\n\t-- ** The InlinePragInfo type\n\tInlinePragInfo,\n\tinlinePragInfo, setInlinePragInfo,\n\n\t-- ** The OccInfo type\n\tOccInfo(..),\n\tisDeadOcc, isStrongLoopBreaker, isWeakLoopBreaker,\n\toccInfo, setOccInfo,\n\n\tInsideLam, 
OneBranch,\n\tinsideLam, notInsideLam, oneBranch, notOneBranch,\n\n\t-- ** The SpecInfo type\n\tSpecInfo(..),\n\temptySpecInfo,\n\tisEmptySpecInfo, specInfoFreeVars,\n\tspecInfoRules, seqSpecInfo, setSpecInfoHead,\n specInfo, setSpecInfo,\n\n\t-- ** The CAFInfo type\n\tCafInfo(..),\n\tppCafInfo, mayHaveCafRefs,\n\tcafInfo, setCafInfo,\n\n -- ** Tick-box Info\n TickBoxOp(..), TickBoxId,\n ) where\n\nimport CoreSyn\n\nimport Class\nimport {-# SOURCE #-} PrimOp (PrimOp)\nimport Name\nimport VarSet\nimport BasicTypes\nimport DataCon\nimport TyCon\nimport ForeignCall\nimport Outputable\t\nimport Module\nimport FastString\nimport Demand\n\n-- infixl so you can say (id `set` a `set` b)\ninfixl \t1 `setSpecInfo`,\n\t `setArityInfo`,\n\t `setInlinePragInfo`,\n\t `setUnfoldingInfo`,\n\t `setOneShotInfo`,\n\t `setOccInfo`,\n\t `setCafInfo`,\n\t `setStrictnessInfo`,\n\t `setDemandInfo`\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n IdDetails\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\n-- | The 'IdDetails' of an 'Id' give stable, and necessary, \n-- information about the Id. \ndata IdDetails\n = VanillaId\t\n\n -- | The 'Id' for a record selector\n | RecSelId \n { sel_tycon :: TyCon\t-- ^ For a data type family, this is the \/instance\/ 'TyCon'\n\t\t\t\t-- not the family 'TyCon'\n , sel_naughty :: Bool -- True <=> a \"naughty\" selector which can't actually exist, for example @x@ in:\n -- data T = forall a. 
MkT { x :: a }\n }\t\t\t\t-- See Note [Naughty record selectors] in TcTyClsDecls\n\n | DataConWorkId DataCon\t-- ^ The 'Id' is for a data constructor \/worker\/\n | DataConWrapId DataCon\t-- ^ The 'Id' is for a data constructor \/wrapper\/\n\t\t\t\t\n\t\t\t\t-- [the only reasons we need to know is so that\n\t\t\t\t-- a) to support isImplicitId\n\t\t\t\t-- b) when desugaring a RecordCon we can get \n\t\t\t\t-- from the Id back to the data con]\n\n | ClassOpId Class \t\t-- ^ The 'Id' is a superclass selector or class operation of a class\n\n | PrimOpId PrimOp\t\t-- ^ The 'Id' is for a primitive operator\n | FCallId ForeignCall\t\t-- ^ The 'Id' is for a foreign call\n\n | TickBoxOpId TickBoxOp\t-- ^ The 'Id' is for a HPC tick box (both traditional and binary)\n\n | DFunId Int Bool -- ^ A dictionary function.\n -- Int = the number of \"silent\" arguments to the dfun\n -- e.g. class D a => C a where ...\n -- instance C a => C [a]\n -- has is_silent = 1, because the dfun\n -- has type dfun :: (D a, C a) => C [a]\n -- See Note [Silent superclass arguments] in TcInstDcls\n --\n -- Bool = True <=> the class has only one method, so may be\n -- implemented with a newtype, so it might be bad\n -- to be strict on this dictionary\n\ncoVarDetails :: IdDetails\ncoVarDetails = VanillaId\n\ninstance Outputable IdDetails where\n ppr = pprIdDetails\n\npprIdDetails :: IdDetails -> SDoc\npprIdDetails VanillaId = empty\npprIdDetails other = brackets (pp other)\n where\n pp VanillaId = panic \"pprIdDetails\"\n pp (DataConWorkId _) = ptext (sLit \"DataCon\")\n pp (DataConWrapId _) = ptext (sLit \"DataConWrapper\")\n pp (ClassOpId {}) = ptext (sLit \"ClassOp\")\n pp (PrimOpId _) = ptext (sLit \"PrimOp\")\n pp (FCallId _) = ptext (sLit \"ForeignCall\")\n pp (TickBoxOpId _) = ptext (sLit \"TickBoxOp\")\n pp (DFunId ns nt) = ptext (sLit \"DFunId\")\n <> ppWhen (ns \/= 0) (brackets (int ns))\n <> ppWhen nt (ptext (sLit \"(nt)\"))\n pp (RecSelId { sel_naughty = is_naughty })\n \t\t\t = brackets 
$ ptext (sLit \"RecSel\") \n \t\t\t <> ppWhen is_naughty (ptext (sLit \"(naughty)\"))\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\\subsection{The main IdInfo type}\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\n-- | An 'IdInfo' gives \/optional\/ information about an 'Id'. If\n-- present it never lies, but it may not be present, in which case there\n-- is always a conservative assumption which can be made.\n-- \n-- Two 'Id's may have different info even though they have the same\n-- 'Unique' (and are hence the same 'Id'); for example, one might lack\n-- the properties attached to the other.\n-- \n-- The 'IdInfo' gives information about the value, or definition, of the\n-- 'Id'. It does not contain information about the 'Id''s usage,\n-- except for 'demandInfo' and 'oneShotInfo'.\ndata IdInfo\n = IdInfo {\n\tarityInfo \t:: !ArityInfo,\t\t-- ^ 'Id' arity\n\tspecInfo \t:: SpecInfo,\t\t-- ^ Specialisations of the 'Id's function which exist\n\t\t\t \t\t\t-- See Note [Specialisations and RULES in IdInfo]\n\tunfoldingInfo\t:: Unfolding,\t\t-- ^ The 'Id's unfolding\n\tcafInfo\t\t:: CafInfo,\t\t-- ^ 'Id' CAF info\n        oneShotInfo\t:: OneShotInfo,\t\t-- ^ Info about a lambda-bound variable, if the 'Id' is one\n\tinlinePragInfo\t:: InlinePragma,\t-- ^ Any inline pragma attached to the 'Id'\n\toccInfo\t\t:: OccInfo,\t\t-- ^ How the 'Id' occurs in the program\n\n        strictnessInfo :: StrictSig, -- ^ A strictness signature\n\n        demandInfo :: Demand -- ^ ID demand information\n\n    }\n\n-- | Just evaluate the 'IdInfo' to WHNF\nseqIdInfo :: IdInfo -> ()\nseqIdInfo (IdInfo {}) = ()\n\n-- | Evaluate all the fields of the 'IdInfo' that are generally demanded by the\n-- compiler\nmegaSeqIdInfo :: IdInfo -> ()\nmegaSeqIdInfo info\n  = seqSpecInfo (specInfo info)\t\t\t`seq`\n\n-- Omitting this improves runtimes a little, presumably because\n-- some 
unfoldings are not calculated at all\n-- seqUnfolding (unfoldingInfo info)\t\t`seq`\n\n    seqDemandInfo (demandInfo info) `seq`\n    seqStrictnessInfo (strictnessInfo info) `seq`\n    seqCaf (cafInfo info)\t\t\t`seq`\n    seqOneShot (oneShotInfo info)\t\t`seq`\n    seqOccInfo (occInfo info)\n\nseqOneShot :: OneShotInfo -> ()\nseqOneShot l = l `seq` ()\n\nseqStrictnessInfo :: StrictSig -> ()\nseqStrictnessInfo ty = seqStrictSig ty\n\nseqDemandInfo :: Demand -> ()\nseqDemandInfo dmd = seqDemand dmd\n\\end{code}\n\nSetters\n\n\\begin{code}\nsetSpecInfo :: IdInfo -> SpecInfo -> IdInfo\nsetSpecInfo \t info sp = sp `seq` info { specInfo = sp }\nsetInlinePragInfo :: IdInfo -> InlinePragma -> IdInfo\nsetInlinePragInfo info pr = pr `seq` info { inlinePragInfo = pr }\nsetOccInfo :: IdInfo -> OccInfo -> IdInfo\nsetOccInfo\t info oc = oc `seq` info { occInfo = oc }\n\t-- Try to avoid space leaks by seq'ing\n\nsetUnfoldingInfoLazily :: IdInfo -> Unfolding -> IdInfo\nsetUnfoldingInfoLazily info uf \t-- Lazy variant to avoid looking at the\n  =\t\t\t\t-- unfolding of an imported Id unless necessary\n    info { unfoldingInfo = uf }\t-- (In this case the demand-zapping is redundant.)\n\nsetUnfoldingInfo :: IdInfo -> Unfolding -> IdInfo\nsetUnfoldingInfo info uf \n  = -- We don't seq the unfolding, as we generate intermediate\n    -- unfoldings which are just thrown away, so evaluating them is a\n    -- waste of time.\n    -- seqUnfolding uf `seq`\n    info { unfoldingInfo = uf }\n\nsetArityInfo :: IdInfo -> ArityInfo -> IdInfo\nsetArityInfo\t info ar = info { arityInfo = ar }\nsetCafInfo :: IdInfo -> CafInfo -> IdInfo\nsetCafInfo info caf = info { cafInfo = caf }\n\nsetOneShotInfo :: IdInfo -> OneShotInfo -> IdInfo\nsetOneShotInfo info lb = {-lb `seq`-} info { oneShotInfo = lb }\n\nsetDemandInfo :: IdInfo -> Demand -> IdInfo\nsetDemandInfo info dd = dd `seq` info { demandInfo = dd }\n\nsetStrictnessInfo :: IdInfo -> StrictSig -> IdInfo\nsetStrictnessInfo info dd = dd `seq` info { strictnessInfo = dd 
}\n\\end{code}\n\n\n\\begin{code}\n-- | Basic 'IdInfo' that carries no useful information whatsoever\nvanillaIdInfo :: IdInfo\nvanillaIdInfo \n = IdInfo {\n\t cafInfo\t\t= vanillaCafInfo,\n\t arityInfo\t\t= unknownArity,\n\t specInfo\t\t= emptySpecInfo,\n\t unfoldingInfo\t= noUnfolding,\n\t oneShotInfo\t\t= NoOneShotInfo,\n\t inlinePragInfo \t= defaultInlinePragma,\n\t occInfo\t\t= NoOccInfo,\n demandInfo\t = topDmd,\n\t strictnessInfo = nopSig\n\t }\n\n-- | More informative 'IdInfo' we can use when we know the 'Id' has no CAF references\nnoCafIdInfo :: IdInfo\nnoCafIdInfo = vanillaIdInfo `setCafInfo` NoCafRefs\n\t-- Used for built-in type Ids in MkId.\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\\subsection[arity-IdInfo]{Arity info about an @Id@}\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\nFor locally-defined Ids, the code generator maintains its own notion\nof their arities; so it should not be asking...\t (but other things\nbesides the code-generator need arity info!)\n\n\\begin{code}\n-- | An 'ArityInfo' of @n@ tells us that partial application of this \n-- 'Id' to up to @n-1@ value arguments does essentially no work.\n--\n-- That is not necessarily the same as saying that it has @n@ leading \n-- lambdas, because coerces may get in the way.\n--\n-- The arity might increase later in the compilation process, if\n-- an extra lambda floats up to the binding site.\ntype ArityInfo = Arity\n\n-- | It is always safe to assume that an 'Id' has an arity of 0\nunknownArity :: Arity\nunknownArity = 0 :: Arity\n\nppArityInfo :: Int -> SDoc\nppArityInfo 0 = empty\nppArityInfo n = hsep [ptext (sLit \"Arity\"), int n]\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\\subsection{Inline-pragma 
information}\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\n-- | Tells when the inlining is active.\n-- When it is active the thing may be inlined, depending on how\n-- big it is.\n--\n-- If there was an @INLINE@ pragma, then as a separate matter, the\n-- RHS will have been made to look small with a Core inline 'Note'\n--\n-- The default 'InlinePragInfo' is 'AlwaysActive', so the info serves\n-- entirely as a way to inhibit inlining until we want it\ntype InlinePragInfo = InlinePragma\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Strictness\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\npprStrictness :: StrictSig -> SDoc\npprStrictness sig = ppr sig\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\tSpecInfo\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\nNote [Specialisations and RULES in IdInfo]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nGenerally speaking, a GlobalId has an *empty* SpecInfo. All their\nRULES are contained in the globally-built rule-base. In principle,\none could attach to M.f the RULES for M.f that are defined in M.\nBut we don't do that for instance declarations and so we just treat\nthem all uniformly.\n\nThe EXCEPTION is PrimOpIds, which do have rules in their IdInfo. That is\njust for convenience really.\n\nHowever, LocalIds may have non-empty SpecInfo. 
We treat them \ndifferently because:\n  a) they might be nested, in which case a global table won't work\n  b) the RULE might mention free variables, which we use to keep things alive\n\nIn TidyPgm, when the LocalId becomes a GlobalId, its RULES are stripped off\nand put in the global list.\n\n\\begin{code}\n-- | Records the specializations of this 'Id' that we know about\n-- in the form of rewrite 'CoreRule's that target them\ndata SpecInfo \n  = SpecInfo \n\t[CoreRule] \n\tVarSet\t\t-- Locally-defined free vars of *both* LHS and RHS \n\t\t\t-- of rules. I don't think it needs to include the\n\t\t\t-- ru_fn though.\n\t\t\t-- Note [Rule dependency info] in OccurAnal\n\n-- | Assume that no specializations exist: always safe\nemptySpecInfo :: SpecInfo\nemptySpecInfo = SpecInfo [] emptyVarSet\n\nisEmptySpecInfo :: SpecInfo -> Bool\nisEmptySpecInfo (SpecInfo rs _) = null rs\n\n-- | Retrieve the locally-defined free variables of both the left and\n-- right hand sides of the specialization rules\nspecInfoFreeVars :: SpecInfo -> VarSet\nspecInfoFreeVars (SpecInfo _ fvs) = fvs\n\nspecInfoRules :: SpecInfo -> [CoreRule]\nspecInfoRules (SpecInfo rules _) = rules\n\n-- | Change the name of the function the rule is keyed on, on all of the 'CoreRule's\nsetSpecInfoHead :: Name -> SpecInfo -> SpecInfo\nsetSpecInfoHead fn (SpecInfo rules fvs)\n  = SpecInfo (map (setRuleIdName fn) rules) fvs\n\nseqSpecInfo :: SpecInfo -> ()\nseqSpecInfo (SpecInfo rules fvs) = seqRules rules `seq` seqVarSet fvs\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\\subsection[CG-IdInfo]{Code generator-related information}\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\n-- CafInfo is used to build Static Reference Tables (see simplStg\/SRT.lhs).\n\n-- | Records whether an 'Id' makes Constant Applicative Form references\ndata CafInfo \n\t= MayHaveCafRefs\t\t-- ^ Indicates that the 
'Id' is for either:\n\t\t\t\t\t--\n\t\t\t\t\t-- 1. A function or static constructor\n\t\t\t\t\t-- that refers to one or more CAFs, or\n\t\t\t\t\t--\n\t\t\t\t\t-- 2. A real live CAF\n\n\t| NoCafRefs\t\t\t-- ^ A function or static constructor\n\t\t\t\t -- that refers to no CAFs.\n deriving (Eq, Ord)\n\n-- | Assumes that the 'Id' has CAF references: definitely safe\nvanillaCafInfo :: CafInfo\nvanillaCafInfo = MayHaveCafRefs\n\nmayHaveCafRefs :: CafInfo -> Bool\nmayHaveCafRefs MayHaveCafRefs = True\nmayHaveCafRefs _\t = False\n\nseqCaf :: CafInfo -> ()\nseqCaf c = c `seq` ()\n\ninstance Outputable CafInfo where\n ppr = ppCafInfo\n\nppCafInfo :: CafInfo -> SDoc\nppCafInfo NoCafRefs = ptext (sLit \"NoCafRefs\")\nppCafInfo MayHaveCafRefs = empty\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\\subsection{Bulk operations on IdInfo}\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\n-- | This is used to remove information on lambda binders that we have\n-- setup as part of a lambda group, assuming they will be applied all at once,\n-- but turn out to be part of an unsaturated lambda as in e.g:\n--\n-- > (\\x1. \\x2. 
e) arg1\nzapLamInfo :: IdInfo -> Maybe IdInfo\nzapLamInfo info@(IdInfo {occInfo = occ, demandInfo = demand})\n | is_safe_occ occ && is_safe_dmd demand\n = Nothing\n | otherwise\n = Just (info {occInfo = safe_occ, demandInfo = topDmd})\n where\n\t-- The \"unsafe\" occ info is the ones that say I'm not in a lambda\n\t-- because that might not be true for an unsaturated lambda\n is_safe_occ (OneOcc in_lam _ _) = in_lam\n is_safe_occ _other\t\t = True\n\n safe_occ = case occ of\n\t\t OneOcc _ once int_cxt -> OneOcc insideLam once int_cxt\n\t\t _other\t \t -> occ\n\n is_safe_dmd dmd = not (isStrictDmd dmd)\n\\end{code}\n\n\\begin{code}\n-- | Remove demand info on the 'IdInfo' if it is present, otherwise return @Nothing@\nzapDemandInfo :: IdInfo -> Maybe IdInfo\nzapDemandInfo info = Just (info {demandInfo = topDmd})\n\\end{code}\n\n\\begin{code}\nzapFragileInfo :: IdInfo -> Maybe IdInfo\n-- ^ Zap info that depends on free variables\nzapFragileInfo info \n = Just (info `setSpecInfo` emptySpecInfo\n `setUnfoldingInfo` noUnfolding\n\t `setOccInfo` zapFragileOcc occ)\n where\n occ = occInfo info\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\\subsection{TickBoxOp}\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\ntype TickBoxId = Int\n\n-- | Tick box for Hpc-style coverage\ndata TickBoxOp \n = TickBox Module {-# UNPACK #-} !TickBoxId\n\ninstance Outputable TickBoxOp where\n ppr (TickBox mod n) = ptext (sLit \"tick\") <+> ppr (mod,n)\n\\end{code}\n","avg_line_length":32.4387947269,"max_line_length":112,"alphanum_fraction":0.6180551524} {"size":21121,"ext":"lhs","lang":"Literate Haskell","max_stars_count":2.0,"content":"%\n% (c) The GRASP\/AQUA Project, Glasgow University, 1992-1998\n%\n\\section[PrimOp]{Primitive operations (machine-level)}\n\n\\begin{code}\nmodule PrimOp (\n PrimOp(..), PrimOpVecCat(..), allThePrimOps,\n primOpType, 
primOpSig,\n primOpTag, maxPrimOpTag, primOpOcc,\n\n tagToEnumKey,\n\n primOpOutOfLine, primOpCodeSize,\n primOpOkForSpeculation, primOpOkForSideEffects,\n primOpIsCheap, primOpFixity,\n\n getPrimOpResultInfo, PrimOpResultInfo(..),\n\n PrimCall(..)\n ) where\n\n#include \"HsVersions.h\"\n\nimport TysPrim\nimport TysWiredIn\n\nimport CmmType\nimport Demand\nimport Var ( TyVar )\nimport OccName ( OccName, pprOccName, mkVarOccFS )\nimport TyCon ( TyCon, isPrimTyCon, tyConPrimRep, PrimRep(..) )\nimport Type ( Type, mkForAllTys, mkFunTy, mkFunTys, tyConAppTyCon,\n typePrimRep )\nimport BasicTypes ( Arity, Fixity(..), FixityDirection(..), TupleSort(..) )\nimport ForeignCall ( CLabelString )\nimport Unique ( Unique, mkPrimOpIdUnique )\nimport Outputable\nimport FastTypes\nimport FastString\nimport Module ( PackageId )\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection[PrimOp-datatype]{Datatype for @PrimOp@ (an enumeration)}\n%* *\n%************************************************************************\n\nThese are in \\tr{state-interface.verb} order.\n\n\\begin{code}\n\n-- supplies:\n-- data PrimOp = ...\n#include \"primop-data-decl.hs-incl\"\n\\end{code}\n\nUsed for the Ord instance\n\n\\begin{code}\nprimOpTag :: PrimOp -> Int\nprimOpTag op = iBox (tagOf_PrimOp op)\n\n-- supplies\n-- tagOf_PrimOp :: PrimOp -> FastInt\n#include \"primop-tag.hs-incl\"\ntagOf_PrimOp _ = error \"tagOf_PrimOp: unknown primop\"\n\n\ninstance Eq PrimOp where\n op1 == op2 = tagOf_PrimOp op1 ==# tagOf_PrimOp op2\n\ninstance Ord PrimOp where\n op1 < op2 = tagOf_PrimOp op1 <# tagOf_PrimOp op2\n op1 <= op2 = tagOf_PrimOp op1 <=# tagOf_PrimOp op2\n op1 >= op2 = tagOf_PrimOp op1 >=# tagOf_PrimOp op2\n op1 > op2 = tagOf_PrimOp op1 ># tagOf_PrimOp op2\n op1 `compare` op2 | op1 < op2 = LT\n | op1 == op2 = EQ\n | otherwise = GT\n\ninstance Outputable PrimOp where\n ppr op = pprPrimOp op\n\\end{code}\n\n\\begin{code}\ndata PrimOpVecCat = IntVec\n 
| WordVec\n | FloatVec\n\\end{code}\n\nAn @Enum@-derived list would be better; meanwhile... (ToDo)\n\n\\begin{code}\nallThePrimOps :: [PrimOp]\nallThePrimOps =\n#include \"primop-list.hs-incl\"\n\\end{code}\n\n\\begin{code}\ntagToEnumKey :: Unique\ntagToEnumKey = mkPrimOpIdUnique (primOpTag TagToEnumOp)\n\\end{code}\n\n\n\n%************************************************************************\n%* *\n\\subsection[PrimOp-info]{The essential info about each @PrimOp@}\n%* *\n%************************************************************************\n\nThe @String@ in the @PrimOpInfos@ is the ``base name'' by which the user may\nrefer to the primitive operation. The conventional \\tr{#}-for-\nunboxed ops is added on later.\n\nThe reason for the funny characters in the names is so we do not\ninterfere with the programmer's Haskell name spaces.\n\nWe use @PrimKinds@ for the ``type'' information, because they're\n(slightly) more convenient to use than @TyCons@.\n\\begin{code}\ndata PrimOpInfo\n = Dyadic OccName -- string :: T -> T -> T\n Type\n | Monadic OccName -- string :: T -> T\n Type\n | Compare OccName -- string :: T -> T -> Int#\n Type\n | GenPrimOp OccName -- string :: \\\/a1..an . T1 -> .. 
-> Tk -> T\n [TyVar]\n [Type]\n Type\n\nmkDyadic, mkMonadic, mkCompare :: FastString -> Type -> PrimOpInfo\nmkDyadic str ty = Dyadic (mkVarOccFS str) ty\nmkMonadic str ty = Monadic (mkVarOccFS str) ty\nmkCompare str ty = Compare (mkVarOccFS str) ty\n\nmkGenPrimOp :: FastString -> [TyVar] -> [Type] -> Type -> PrimOpInfo\nmkGenPrimOp str tvs tys ty = GenPrimOp (mkVarOccFS str) tvs tys ty\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsubsection{Strictness}\n%* *\n%************************************************************************\n\nNot all primops are strict!\n\n\\begin{code}\nprimOpStrictness :: PrimOp -> Arity -> StrictSig\n -- See Demand.StrictnessInfo for discussion of what the results\n -- The arity should be the arity of the primop; that's why\n -- this function isn't exported.\n#include \"primop-strictness.hs-incl\"\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsubsection{Fixity}\n%* *\n%************************************************************************\n\n\\begin{code}\nprimOpFixity :: PrimOp -> Maybe Fixity\n#include \"primop-fixity.hs-incl\"\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsubsection[PrimOp-comparison]{PrimOpInfo basic comparison ops}\n%* *\n%************************************************************************\n\n@primOpInfo@ gives all essential information (from which everything\nelse, notably a type, can be constructed) for each @PrimOp@.\n\n\\begin{code}\nprimOpInfo :: PrimOp -> PrimOpInfo\n#include \"primop-primop-info.hs-incl\"\nprimOpInfo _ = error \"primOpInfo: unknown primop\"\n\\end{code}\n\nHere are a load of comments from the old primOp info:\n\nA @Word#@ is an unsigned @Int#@.\n\n@decodeFloat#@ is given w\/ Integer-stuff (it's similar).\n\n@decodeDouble#@ is given w\/ Integer-stuff (it's similar).\n\nDecoding of floating-point numbers is sorta 
Integer-related. Encoding\nis done with plain ccalls now (see PrelNumExtra.lhs).\n\nA @Weak@ Pointer is created by the @mkWeak#@ primitive:\n\n mkWeak# :: k -> v -> f -> State# RealWorld\n -> (# State# RealWorld, Weak# v #)\n\nIn practice, you'll use the higher-level\n\n data Weak v = Weak# v\n mkWeak :: k -> v -> IO () -> IO (Weak v)\n\nThe following operation dereferences a weak pointer. The weak pointer\nmay have been finalized, so the operation returns a result code which\nmust be inspected before looking at the dereferenced value.\n\n deRefWeak# :: Weak# v -> State# RealWorld ->\n (# State# RealWorld, v, Int# #)\n\nOnly look at v if the Int# returned is \/= 0 !!\n\nThe higher-level op is\n\n deRefWeak :: Weak v -> IO (Maybe v)\n\nWeak pointers can be finalized early by using the finalize# operation:\n\n finalizeWeak# :: Weak# v -> State# RealWorld ->\n (# State# RealWorld, Int#, IO () #)\n\nThe Int# returned is either\n\n 0 if the weak pointer has already been finalized, or it has no\n finalizer (the third component is then invalid).\n\n 1 if the weak pointer is still alive, with the finalizer returned\n as the third component.\n\nA {\\em stable name\/pointer} is an index into a table of stable name\nentries. Since the garbage collector is told about stable pointers,\nit is safe to pass a stable pointer to external systems such as C\nroutines.\n\n\\begin{verbatim}\nmakeStablePtr# :: a -> State# RealWorld -> (# State# RealWorld, StablePtr# a #)\nfreeStablePtr :: StablePtr# a -> State# RealWorld -> State# RealWorld\ndeRefStablePtr# :: StablePtr# a -> State# RealWorld -> (# State# RealWorld, a #)\neqStablePtr# :: StablePtr# a -> StablePtr# a -> Int#\n\\end{verbatim}\n\nIt may seem a bit surprising that @makeStablePtr#@ is a @IO@\noperation since it doesn't (directly) involve IO operations. 
The\nreason is that if some optimisation pass decided to duplicate calls to\n@makeStablePtr#@ and we only pass one of the stable pointers over, a\nmassive space leak can result. Putting it into the IO monad\nprevents this. (Another reason for putting them in a monad is to\nensure correct sequencing wrt the side-effecting @freeStablePtr@\noperation.)\n\nAn important property of stable pointers is that if you call\nmakeStablePtr# twice on the same object you get the same stable\npointer back.\n\nNote that we can implement @freeStablePtr#@ using @_ccall_@ (and,\nbesides, it's not likely to be used from Haskell) so it's not a\nprimop.\n\nQuestion: Why @RealWorld@ - won't any instance of @_ST@ do the job? [ADR]\n\nStable Names\n~~~~~~~~~~~~\n\nA stable name is like a stable pointer, but with three important differences:\n\n (a) You can't deRef one to get back to the original object.\n (b) You can convert one to an Int.\n (c) You don't need to 'freeStableName'\n\nThe existence of a stable name doesn't guarantee to keep the object it\npoints to alive (unlike a stable pointer), hence (a).\n\nInvariants:\n\n (a) makeStableName always returns the same value for a given\n object (same as stable pointers).\n\n (b) if two stable names are equal, it implies that the objects\n from which they were created were the same.\n\n (c) stableNameToInt always returns the same Int for a given\n stable name.\n\n\n-- HWL: The first 4 Int# in all par... annotations denote:\n-- name, granularity info, size of result, degree of parallelism\n-- Same structure as _seq_ i.e. 
returns Int#\n-- KSW: v, the second arg in parAt# and parAtForNow#, is used only to determine\n-- `the processor containing the expression v'; it is not evaluated\n\nThese primops are pretty weird.\n\n dataToTag# :: a -> Int (arg must be an evaluated data type)\n tagToEnum# :: Int -> a (result type must be an enumerated type)\n\nThe constraints aren't currently checked by the front end, but the\ncode generator will fall over if they aren't satisfied.\n\n%************************************************************************\n%* *\n Which PrimOps are out-of-line\n%* *\n%************************************************************************\n\nSome PrimOps need to be called out-of-line because they either need to\nperform a heap check or they block.\n\n\n\\begin{code}\nprimOpOutOfLine :: PrimOp -> Bool\n#include \"primop-out-of-line.hs-incl\"\n\\end{code}\n\n\n%************************************************************************\n%* *\n Failure and side effects\n%* *\n%************************************************************************\n\nNote [PrimOp can_fail and has_side_effects]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nBoth can_fail and has_side_effects mean that the primop has\nsome effect that is not captured entirely by its result value.\n\n ---------- has_side_effects ---------------------\n Has some imperative side effect, perhaps on the world (I\/O),\n or perhaps on some mutable data structure (writeIORef).\n Generally speaking all such primops have a type like\n State -> input -> (State, output)\n so the state token guarantees ordering, and also ensures\n that the primop is executed even if 'output' is discarded.\n\n ---------- can_fail ----------------------------\n Can fail with a seg-fault or divide-by-zero error on some elements\n of its input domain. 
Main examples:\n division (fails on zero denominator)\n array indexing (fails if the index is out of bounds)\n However (ASSUMPTION), these can_fail primops are ALWAYS surrounded\n with a test that checks for the bad cases. \n\nConsequences:\n\n* You can discard a can_fail primop, or float it _inwards_.\n But you cannot float it _outwards_, lest you escape the\n dynamic scope of the test. Example:\n case d ># 0# of\n True -> case x \/# d of r -> r +# 1\n False -> 0\n Here we must not float the case outwards to give\n case x\/# d of r ->\n case d ># 0# of\n True -> r +# 1\n False -> 0\n\n* I believe that exactly the same rules apply to a has_side_effects\n primop; you can discard it (remember, the state token will keep\n it alive if necessary), or float it in, but not float it out.\n\n Example of the latter\n if blah then let! s1 = writeMutVar s0 v True in s1\n else s0\n Notice that s0 is mentioned in both branches of the 'if', but \n only one of these two will actually be consumed. But if we\n float out to\n let! s1 = writeMutVar s0 v True\n in if blah then s1 else s0\n the writeMutVar will be performed in both branches, which is\n utterly wrong.\n\n* You cannot duplicate a has_side_effect primop. You might wonder\n how this can occur given the state token threading, but just look\n at Control.Monad.ST.Lazy.Imp.strictToLazy! We get something like\n this\n p = case readMutVar# s v of\n (# s', r #) -> (S# s', r)\n s' = case p of (s', r) -> s'\n r = case p of (s', r) -> r\n\n (All these bindings are boxed.) If we inline p at its two call\n sites, we get a catastrophe: because the read is performed once when\n s' is demanded, and once when 'r' is demanded, which may be much \n later. Utterly wrong. Trac #3207 is a real example of this happening.\n\n However, it's fine to duplicate a can_fail primop. 
That is\n the difference between can_fail and has_side_effects.\n\n can_fail has_side_effects\nDiscard YES YES\nFloat in YES YES\nFloat out NO NO\nDuplicate YES NO\n\nHow do we achieve these effects?\n\nNote [primOpOkForSpeculation]\n * The \"no-float-out\" thing is achieved by ensuring that we never\n let-bind a can_fail or has_side_effects primop. The RHS of a\n let-binding (which can float in and out freely) satisfies\n exprOkForSpeculation. And exprOkForSpeculation is false of\n can_fail and has_side_effects primops.\n\n * So can_fail and has_side_effects primops will appear only as the\n scrutinees of cases, and that's why the FloatIn pass is capable\n of floating case bindings inwards.\n\n * The no-duplicate thing is done via primOpIsCheap, by making\n has_side_effects things (very very very) not-cheap!\n\n\n\\begin{code}\nprimOpHasSideEffects :: PrimOp -> Bool\n#include \"primop-has-side-effects.hs-incl\"\n\nprimOpCanFail :: PrimOp -> Bool\n#include \"primop-can-fail.hs-incl\"\n\nprimOpOkForSpeculation :: PrimOp -> Bool\n -- See Note [primOpOkForSpeculation and primOpOkForFloatOut]\n -- See comments with CoreUtils.exprOkForSpeculation\nprimOpOkForSpeculation op\n = not (primOpHasSideEffects op || primOpOutOfLine op || primOpCanFail op)\n\nprimOpOkForSideEffects :: PrimOp -> Bool\nprimOpOkForSideEffects op\n = not (primOpHasSideEffects op)\n\\end{code}\n\n\nNote [primOpIsCheap]\n~~~~~~~~~~~~~~~~~~~~\n@primOpIsCheap@, as used in \\tr{SimplUtils.lhs}. For now (HACK\nWARNING), we just borrow some other predicates for a\nwhat-should-be-good-enough test. \"Cheap\" means willing to call it more\nthan once, and\/or push it inside a lambda. The latter could change the\nbehaviour of 'seq' for primops that can fail, so we don't treat them as cheap.\n\n\\begin{code}\nprimOpIsCheap :: PrimOp -> Bool\nprimOpIsCheap op = primOpOkForSpeculation op\n-- In March 2001, we changed this to\n-- primOpIsCheap op = False\n-- thereby making *no* primops seem cheap. 
But this killed eta\n-- expansion on case (x ==# y) of True -> \\s -> ...\n-- which is bad. In particular a loop like\n-- doLoop n = loop 0\n-- where\n-- loop i | i == n = return ()\n-- | otherwise = bar i >> loop (i+1)\n-- allocated a closure every time round because it doesn't eta expand.\n--\n-- The problem that originally gave rise to the change was\n-- let x = a +# b *# c in x +# x\n-- where we don't want to inline x. But primopIsCheap doesn't control\n-- that (it's exprIsDupable that does) so the problem doesn't occur\n-- even if primOpIsCheap sometimes says 'True'.\n\\end{code}\n\n\n%************************************************************************\n%* *\n PrimOp code size\n%* *\n%************************************************************************\n\nprimOpCodeSize\n~~~~~~~~~~~~~~\nGives an indication of the code size of a primop, for the purposes of\ncalculating unfolding sizes; see CoreUnfold.sizeExpr.\n\n\\begin{code}\nprimOpCodeSize :: PrimOp -> Int\n#include \"primop-code-size.hs-incl\"\n\nprimOpCodeSizeDefault :: Int\nprimOpCodeSizeDefault = 1\n -- CoreUnfold.primOpSize already takes into account primOpOutOfLine\n -- and adds some further costs for the args in that case.\n\nprimOpCodeSizeForeignCall :: Int\nprimOpCodeSizeForeignCall = 4\n\\end{code}\n\n\n%************************************************************************\n%* *\n PrimOp types\n%* *\n%************************************************************************\n\n\\begin{code}\nprimOpType :: PrimOp -> Type -- you may want to use primOpSig instead\nprimOpType op\n = case primOpInfo op of\n Dyadic _occ ty -> dyadic_fun_ty ty\n Monadic _occ ty -> monadic_fun_ty ty\n Compare _occ ty -> compare_fun_ty ty\n\n GenPrimOp _occ tyvars arg_tys res_ty ->\n mkForAllTys tyvars (mkFunTys arg_tys res_ty)\n\nprimOpOcc :: PrimOp -> OccName\nprimOpOcc op = case primOpInfo op of\n Dyadic occ _ -> occ\n Monadic occ _ -> occ\n Compare occ _ -> occ\n GenPrimOp occ _ _ _ -> occ\n\n-- primOpSig is 
like primOpType but gives the result split apart:\n-- (type variables, argument types, result type)\n-- It also gives arity, strictness info\n\nprimOpSig :: PrimOp -> ([TyVar], [Type], Type, Arity, StrictSig)\nprimOpSig op\n = (tyvars, arg_tys, res_ty, arity, primOpStrictness op arity)\n where\n arity = length arg_tys\n (tyvars, arg_tys, res_ty)\n = case (primOpInfo op) of\n Monadic _occ ty -> ([], [ty], ty )\n Dyadic _occ ty -> ([], [ty,ty], ty )\n Compare _occ ty -> ([], [ty,ty], intPrimTy)\n GenPrimOp _occ tyvars arg_tys res_ty -> (tyvars, arg_tys, res_ty )\n\\end{code}\n\n\\begin{code}\ndata PrimOpResultInfo\n = ReturnsPrim PrimRep\n | ReturnsAlg TyCon\n\n-- Some PrimOps need not return a manifest primitive or algebraic value\n-- (i.e. they might return a polymorphic value). These PrimOps *must*\n-- be out of line, or the code generator won't work.\n\ngetPrimOpResultInfo :: PrimOp -> PrimOpResultInfo\ngetPrimOpResultInfo op\n = case (primOpInfo op) of\n Dyadic _ ty -> ReturnsPrim (typePrimRep ty)\n Monadic _ ty -> ReturnsPrim (typePrimRep ty)\n Compare _ _ -> ReturnsPrim (tyConPrimRep intPrimTyCon)\n GenPrimOp _ _ _ ty | isPrimTyCon tc -> ReturnsPrim (tyConPrimRep tc)\n | otherwise -> ReturnsAlg tc\n where\n tc = tyConAppTyCon ty\n -- All primops return a tycon-app result\n -- The tycon can be an unboxed tuple, though, which\n -- gives rise to a ReturnAlg\n\\end{code}\n\nWe do not currently make use of whether primops are commutable.\n\nWe used to try to move constants to the right hand side for strength\nreduction.\n\n\\begin{code}\n{-\ncommutableOp :: PrimOp -> Bool\n#include \"primop-commutable.hs-incl\"\n-}\n\\end{code}\n\nUtils:\n\\begin{code}\ndyadic_fun_ty, monadic_fun_ty, compare_fun_ty :: Type -> Type\ndyadic_fun_ty ty = mkFunTys [ty, ty] ty\nmonadic_fun_ty ty = mkFunTy ty ty\ncompare_fun_ty ty = mkFunTys [ty, ty] intPrimTy\n\\end{code}\n\nOutput stuff:\n\\begin{code}\npprPrimOp :: PrimOp -> SDoc\npprPrimOp other_op = pprOccName (primOpOcc 
other_op)\n\\end{code}\n\n\n%************************************************************************\n%* *\n\\subsubsection[PrimCall]{User-imported primitive calls}\n%* *\n%************************************************************************\n\n\\begin{code}\ndata PrimCall = PrimCall CLabelString PackageId\n\ninstance Outputable PrimCall where\n ppr (PrimCall lbl pkgId)\n = text \"__primcall\" <+> ppr pkgId <+> ppr lbl\n\n\\end{code}\n","avg_line_length":35.4974789916,"max_line_length":84,"alphanum_fraction":0.5733630036} {"size":1597,"ext":"lhs","lang":"Literate Haskell","max_stars_count":36.0,"content":"> {-# LANGUAGE DataKinds #-}\n> {-# LANGUAGE PolyKinds #-}\n> {-# LANGUAGE QuasiQuotes #-}\n> {-# LANGUAGE StandaloneDeriving #-}\n> {-# LANGUAGE TemplateHaskell #-}\n> {-# LANGUAGE TypeOperators #-}\n>\n> module S05_singletons_via_th where\n>\n> import Data.Singletons\n> import Data.Singletons.TH\n\nEisenberg's `singletons`\n- Dependently Typed Programming with Singletons http:\/\/www.cis.upenn.edu\/~eir\/papers\/2012\/singletons\/paper.pdf\n- (http:\/\/hackage.haskell.org\/package\/singletons) package\n- use to generate original, promoted, singleton versions of nats and operations\n\n> singletons [d|\n> data Nat = Z | S Nat\n> deriving (Show, Eq, Ord)\n>\n> (+) :: Nat -> Nat -> Nat\n> Z + n = n\n> S m + n = S (m + n)\n>\n> (*) :: Nat -> Nat -> Nat\n> Z * _ = Z\n> S n * m = n * m + m\n> |]\n>\n> deriving instance Show (SNat n)\n> deriving instance Eq (SNat n)\n\nTemplate Haskell generates singletons for the given definition.\n\nNaming conventions: http:\/\/www.cis.upenn.edu\/~eir\/packages\/singletons\/README.html\n\n- type `Name`\n - promoted to kind `Name`\n - has singleton `SName`\n- function `name`\n - promoted to type family `Name`\n - has singleton `sName`\n- binary operator `+`\n - promoted to type family `:+`\n - has singleton `%:+`\n\n`singletons` package has a `Sing` *data family* that treats singleton types in a uniform manner.\n- Uses `PolyKinds`\n\nabove 
generates:\n\ndata family Sing (a :: k) -- from Data.Singletons\ndata instance Sing (n :: Nat) where\n SZ :: Sing Z\n SS :: Sing n -> Sing (S n)\ntype SNat (n :: Nat) = Sing n\n\nalso generates singleton instances for `Eq`\n\n","avg_line_length":26.1803278689,"max_line_length":110,"alphanum_fraction":0.6455854728} {"size":70483,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"%\n% (c) The University of Glasgow 2006\n% (c) The GRASP\/AQUA Project, Glasgow University, 1992-1998\n%\n\nType checking of type signatures in interface files\n\n\\begin{code}\nmodule TcIface (\n tcLookupImported_maybe,\n importDecl, checkWiredInTyCon, tcHiBootIface, typecheckIface,\n tcIfaceDecl, tcIfaceInst, tcIfaceFamInst, tcIfaceRules,\n tcIfaceVectInfo, tcIfaceAnnotations,\n tcIfaceExpr, -- Desired by HERMIT (Trac #7683)\n tcIfaceGlobal,\n tcExtCoreBindings\n ) where\n\n#include \"HsVersions.h\"\n\nimport TcTypeNats(typeNatCoAxiomRules)\nimport IfaceSyn\nimport LoadIface\nimport IfaceEnv\nimport BuildTyCl\nimport TcRnMonad\nimport TcType\nimport Type\nimport Coercion\nimport TypeRep\nimport HscTypes\nimport Annotations\nimport InstEnv\nimport FamInstEnv\nimport CoreSyn\nimport CoreUtils\nimport CoreUnfold\nimport CoreLint\nimport MkCore ( castBottomExpr )\nimport Id\nimport MkId\nimport IdInfo\nimport Class\nimport TyCon\nimport CoAxiom\nimport ConLike\nimport DataCon\nimport PrelNames\nimport TysWiredIn\nimport TysPrim ( superKindTyConName )\nimport BasicTypes ( strongLoopBreaker )\nimport Literal\nimport qualified Var\nimport VarEnv\nimport VarSet\nimport Name\nimport NameEnv\nimport NameSet\nimport OccurAnal ( occurAnalyseExpr )\nimport Demand\nimport Module\nimport UniqFM\nimport UniqSupply\nimport Outputable\nimport ErrUtils\nimport Maybes\nimport SrcLoc\nimport DynFlags\nimport Util\nimport FastString\n\nimport Control.Monad\nimport qualified Data.Map as Map\nimport Data.Traversable ( traverse )\n\\end{code}\n\nThis module takes\n\n IfaceDecl -> TyThing\n 
IfaceType -> Type\n etc\n\nAn IfaceDecl is populated with RdrNames, and these are not renamed to\nNames before typechecking, because there should be no scope errors etc.\n\n -- For (b) consider: f = \\$(...h....)\n -- where h is imported, and calls f via an hi-boot file.\n -- This is bad! But it is not seen as a staging error, because h\n -- is indeed imported. We don't want the type-checker to black-hole\n -- when simplifying and compiling the splice!\n --\n -- Simple solution: discard any unfolding that mentions a variable\n -- bound in this module (and hence not yet processed).\n -- The discarding happens when forkM finds a type error.\n\n%************************************************************************\n%* *\n%* tcImportDecl is the key function for \"faulting in\" *\n%* imported things\n%* *\n%************************************************************************\n\nThe main idea is this. We are chugging along type-checking source code, and\nfind a reference to GHC.Base.map. We call tcLookupGlobal, which doesn't find\nit in the EPS type envt. So it\n 1 loads GHC.Base.hi\n 2 gets the decl for GHC.Base.map\n 3 typechecks it via tcIfaceDecl\n 4 and adds it to the type env in the EPS\n\nNote that DURING STEP 4, we may find that map's type mentions a type\nconstructor that also\n\nNotice that for imported things we read the current version from the EPS\nmutable variable. 
This is important in situations like\n ...$(e1)...$(e2)...\nwhere the code that e1 expands to might import some defns that\nalso turn out to be needed by the code that e2 expands to.\n\n\\begin{code}\ntcLookupImported_maybe :: Name -> TcM (MaybeErr MsgDoc TyThing)\n-- Returns (Failed err) if we can't find the interface file for the thing\ntcLookupImported_maybe name\n = do { hsc_env <- getTopEnv\n ; mb_thing <- liftIO (lookupTypeHscEnv hsc_env name)\n ; case mb_thing of\n Just thing -> return (Succeeded thing)\n Nothing -> tcImportDecl_maybe name }\n\ntcImportDecl_maybe :: Name -> TcM (MaybeErr MsgDoc TyThing)\n-- Entry point for *source-code* uses of importDecl\ntcImportDecl_maybe name\n | Just thing <- wiredInNameTyThing_maybe name\n = do { when (needWiredInHomeIface thing)\n (initIfaceTcRn (loadWiredInHomeIface name))\n -- See Note [Loading instances for wired-in things]\n ; return (Succeeded thing) }\n | otherwise\n = initIfaceTcRn (importDecl name)\n\nimportDecl :: Name -> IfM lcl (MaybeErr MsgDoc TyThing)\n-- Get the TyThing for this Name from an interface file\n-- It's not a wired-in thing -- the caller caught that\nimportDecl name\n = ASSERT( not (isWiredInName name) )\n do { traceIf nd_doc\n\n -- Load the interface, which should populate the PTE\n ; mb_iface <- ASSERT2( isExternalName name, ppr name )\n loadInterface nd_doc (nameModule name) ImportBySystem\n ; case mb_iface of {\n Failed err_msg -> return (Failed err_msg) ;\n Succeeded _ -> do\n\n -- Now look it up again; this time we should find it\n { eps <- getEps\n ; case lookupTypeEnv (eps_PTE eps) name of\n Just thing -> return (Succeeded thing)\n Nothing -> return (Failed not_found_msg)\n }}}\n where\n nd_doc = ptext (sLit \"Need decl for\") <+> ppr name\n not_found_msg = hang (ptext (sLit \"Can't find interface-file declaration for\") <+>\n pprNameSpace (occNameSpace (nameOccName name)) <+> ppr name)\n 2 (vcat [ptext (sLit \"Probable cause: bug in .hi-boot file, or inconsistent .hi file\"),\n ptext 
(sLit \"Use -ddump-if-trace to get an idea of which file caused the error\")])\n\\end{code}\n\n%************************************************************************\n%* *\n Checks for wired-in things\n%* *\n%************************************************************************\n\nNote [Loading instances for wired-in things]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWe need to make sure that we have at least *read* the interface files\nfor any module with an instance decl or RULE that we might want.\n\n* If the instance decl is an orphan, we have a whole separate mechanism\n (loadOrphanModules)\n\n* If the instance decl is not an orphan, then the act of looking at the\n TyCon or Class will force in the defining module for the\n TyCon\/Class, and hence the instance decl\n\n* BUT, if the TyCon is a wired-in TyCon, we don't really need its interface;\n but we must make sure we read its interface in case it has instances or\n rules. That is what LoadIface.loadWiredInHomeInterface does. It's called\n from TcIface.{tcImportDecl, checkWiredInTyCon, ifCheckWiredInThing}\n\n* HOWEVER, only do this for TyCons. There are no wired-in Classes. There\n are some wired-in Ids, but we don't want to load their interfaces. For\n example, Control.Exception.Base.recSelError is wired in, but that module\n is compiled late in the base library, and we don't want to force it to\n load before it's been compiled!\n\nAll of this is done by the type checker. The renamer plays no role.\n(It used to, but no longer.)\n\n\n\\begin{code}\ncheckWiredInTyCon :: TyCon -> TcM ()\n-- Ensure that the home module of the TyCon (and hence its instances)\n-- are loaded. 
See Note [Loading instances for wired-in things]\n-- It might not be a wired-in tycon (see the calls in TcUnify),\n-- in which case this is a no-op.\ncheckWiredInTyCon tc\n | not (isWiredInName tc_name)\n = return ()\n | otherwise\n = do { mod <- getModule\n ; ASSERT( isExternalName tc_name )\n when (mod \/= nameModule tc_name)\n (initIfaceTcRn (loadWiredInHomeIface tc_name))\n -- Don't look for (non-existent) Float.hi when\n -- compiling Float.lhs, which mentions Float of course\n -- A bit yukky to call initIfaceTcRn here\n }\n where\n tc_name = tyConName tc\n\nifCheckWiredInThing :: TyThing -> IfL ()\n-- Even though we are in an interface file, we want to make\n-- sure the instances of a wired-in thing are loaded (imagine f :: Double -> Double)\n-- Ditto want to ensure that RULES are loaded too\n-- See Note [Loading instances for wired-in things]\nifCheckWiredInThing thing\n = do { mod <- getIfModule\n -- Check whether we are typechecking the interface for this\n -- very module. E.g when compiling the base library in --make mode\n -- we may typecheck GHC.Base.hi. At that point, GHC.Base is not in\n -- the HPT, so without the test we'll demand-load it into the PIT!\n -- C.f. the same test in checkWiredInTyCon above\n ; let name = getName thing\n ; ASSERT2( isExternalName name, ppr name )\n when (needWiredInHomeIface thing && mod \/= nameModule name)\n (loadWiredInHomeIface name) }\n\nneedWiredInHomeIface :: TyThing -> Bool\n-- Only for TyCons; see Note [Loading instances for wired-in things]\nneedWiredInHomeIface (ATyCon {}) = True\nneedWiredInHomeIface _ = False\n\\end{code}\n\n%************************************************************************\n%* *\n Type-checking a complete interface\n%* *\n%************************************************************************\n\nSuppose we discover we don't need to recompile. Then we must type\ncheck the old interface file. This is a bit different to the\nincremental type checking we do as we suck in interface files. 
Instead\nwe do things similarly as when we are typechecking source decls: we\nbring into scope the type envt for the interface all at once, using a\nknot. Remember, the decls aren't necessarily in dependency order --\nand even if they were, the type decls might be mutually recursive.\n\n\\begin{code}\ntypecheckIface :: ModIface -- Get the decls from here\n -> TcRnIf gbl lcl ModDetails\ntypecheckIface iface\n = initIfaceTc iface $ \\ tc_env_var -> do\n -- The tc_env_var is freshly allocated, private to\n -- type-checking this particular interface\n { -- Get the right set of decls and rules. If we are compiling without -O\n -- we discard pragmas before typechecking, so that we don't \"see\"\n -- information that we shouldn't. From a versioning point of view\n -- It's not actually *wrong* to do so, but in fact GHCi is unable\n -- to handle unboxed tuples, so it must not see unfoldings.\n ignore_prags <- goptM Opt_IgnoreInterfacePragmas\n\n -- Typecheck the decls. This is done lazily, so that the knot-tying\n -- within this single module work out right. 
In the If monad there is\n -- no global envt for the current interface; instead, the knot is tied\n -- through the if_rec_types field of IfGblEnv\n ; names_w_things <- loadDecls ignore_prags (mi_decls iface)\n ; let type_env = mkNameEnv names_w_things\n ; writeMutVar tc_env_var type_env\n\n -- Now do those rules, instances and annotations\n ; insts <- mapM tcIfaceInst (mi_insts iface)\n ; fam_insts <- mapM tcIfaceFamInst (mi_fam_insts iface)\n ; rules <- tcIfaceRules ignore_prags (mi_rules iface)\n ; anns <- tcIfaceAnnotations (mi_anns iface)\n\n -- Vectorisation information\n ; vect_info <- tcIfaceVectInfo (mi_module iface) type_env (mi_vect_info iface)\n\n -- Exports\n ; exports <- ifaceExportNames (mi_exports iface)\n\n -- Finished\n ; traceIf (vcat [text \"Finished typechecking interface for\" <+> ppr (mi_module iface),\n text \"Type envt:\" <+> ppr type_env])\n ; return $ ModDetails { md_types = type_env\n , md_insts = insts\n , md_fam_insts = fam_insts\n , md_rules = rules\n , md_anns = anns\n , md_vect_info = vect_info\n , md_exports = exports\n }\n }\n\\end{code}\n\n\n%************************************************************************\n%* *\n Type and class declarations\n%* *\n%************************************************************************\n\n\\begin{code}\ntcHiBootIface :: HscSource -> Module -> TcRn ModDetails\n-- Load the hi-boot iface for the module being compiled,\n-- if it indeed exists in the transitive closure of imports\n-- Return the ModDetails, empty if no hi-boot iface\ntcHiBootIface hsc_src mod\n | isHsBoot hsc_src -- Already compiling a hs-boot file\n = return emptyModDetails\n | otherwise\n = do { traceIf (text \"loadHiBootInterface\" <+> ppr mod)\n\n ; mode <- getGhcMode\n ; if not (isOneShot mode)\n -- In --make and interactive mode, if this module has an hs-boot file\n -- we'll have compiled it already, and it'll be in the HPT\n --\n -- We check whether the interface is a *boot* interface.\n -- It can happen (when using GHC 
from Visual Studio) that we\n -- compile a module in TypecheckOnly mode, with a stable,\n -- fully-populated HPT. In that case the boot interface isn't there\n -- (it's been replaced by the mother module) so we can't check it.\n -- And that's fine, because if M's ModInfo is in the HPT, then\n -- it's been compiled once, and we don't need to check the boot iface\n then do { hpt <- getHpt\n ; case lookupUFM hpt (moduleName mod) of\n Just info | mi_boot (hm_iface info)\n -> return (hm_details info)\n _ -> return emptyModDetails }\n else do\n\n -- OK, so we're in one-shot mode.\n -- In that case, we've read all the direct imports by now,\n -- so eps_is_boot will record if any of our imports mention us by\n -- way of hi-boot file\n { eps <- getEps\n ; case lookupUFM (eps_is_boot eps) (moduleName mod) of {\n Nothing -> return emptyModDetails ; -- The typical case\n\n Just (_, False) -> failWithTc moduleLoop ;\n -- Someone below us imported us!\n -- This is a loop with no hi-boot in the way\n\n Just (_mod, True) -> -- There's a hi-boot interface below us\n\n do { read_result <- findAndReadIface\n need mod\n True -- Hi-boot file\n\n ; case read_result of\n Failed err -> failWithTc (elaborate err)\n Succeeded (iface, _path) -> typecheckIface iface\n }}}}\n where\n need = ptext (sLit \"Need the hi-boot interface for\") <+> ppr mod\n <+> ptext (sLit \"to compare against the Real Thing\")\n\n moduleLoop = ptext (sLit \"Circular imports: module\") <+> quotes (ppr mod)\n <+> ptext (sLit \"depends on itself\")\n\n elaborate err = hang (ptext (sLit \"Could not find hi-boot interface for\") <+>\n quotes (ppr mod) <> colon) 4 err\n\\end{code}\n\n\n%************************************************************************\n%* *\n Type and class declarations\n%* *\n%************************************************************************\n\nWhen typechecking a data type decl, we *lazily* (via forkM) typecheck\nthe constructor argument types. 
This is in the hope that we may never\npoke on those argument types, and hence may never need to load the\ninterface files for types mentioned in the arg types.\n\nE.g.\n data Foo.S = MkS Baz.T\nMaybe we can get away without even loading the interface for Baz!\n\nThis is not just a performance thing. Suppose we have\n data Foo.S = MkS Baz.T\n data Baz.T = MkT Foo.S\n(in different interface files, of course).\nNow, first we load and typecheck Foo.S, and add it to the type envt.\nIf we do explore MkS's argument, we'll load and typecheck Baz.T.\nIf we explore MkT's argument we'll find Foo.S already in the envt.\n\nIf we typechecked constructor args eagerly, when loading Foo.S we'd try to\ntypecheck the type Baz.T. So we'd fault in Baz.T... and then need Foo.S...\nwhich isn't done yet.\n\nAll very cunning. However, there is a rather subtle gotcha which bit\nme when developing this stuff. When we typecheck the decl for S, we\nextend the type envt with S, MkS, and all its implicit Ids. Suppose\n(a bug, but it happened) that the list of implicit Ids depended in\nturn on the constructor arg types. Then the following sequence of\nevents takes place:\n * we build a thunk t for the constructor arg tys\n * we build a thunk for the extended type environment (depends on t)\n * we write the extended type envt into the global EPS mutvar\n\nNow we look something up in the type envt\n * that pulls on t\n * which reads the global type envt out of the global EPS mutvar\n * but that depends in turn on t\n\nIt's subtle, because it'd work fine if we typechecked the constructor args\neagerly -- they don't need the extended type envt. 
They just get the extended\ntype envt by accident, because they look at it later.\n\nWhat this means is that the implicitTyThings MUST NOT DEPEND on any of\nthe forkM stuff.\n\n\n\\begin{code}\ntcIfaceDecl :: Bool -- True <=> discard IdInfo on IfaceId bindings\n -> IfaceDecl\n -> IfL TyThing\ntcIfaceDecl = tc_iface_decl NoParentTyCon\n\ntc_iface_decl :: TyConParent -- For nested declarations\n -> Bool -- True <=> discard IdInfo on IfaceId bindings\n -> IfaceDecl\n -> IfL TyThing\ntc_iface_decl _ ignore_prags (IfaceId {ifName = occ_name, ifType = iface_type,\n ifIdDetails = details, ifIdInfo = info})\n = do { name <- lookupIfaceTop occ_name\n ; ty <- tcIfaceType iface_type\n ; details <- tcIdDetails ty details\n ; info <- tcIdInfo ignore_prags name ty info\n ; return (AnId (mkGlobalId details name ty info)) }\n\ntc_iface_decl parent _ (IfaceData {ifName = occ_name,\n ifCType = cType,\n ifTyVars = tv_bndrs,\n ifRoles = roles,\n ifCtxt = ctxt, ifGadtSyntax = gadt_syn,\n ifCons = rdr_cons,\n ifRec = is_rec, ifPromotable = is_prom,\n ifAxiom = mb_axiom_name })\n = bindIfaceTyVars_AT tv_bndrs $ \\ tyvars -> do\n { tc_name <- lookupIfaceTop occ_name\n ; tycon <- fixM $ \\ tycon -> do\n { stupid_theta <- tcIfaceCtxt ctxt\n ; parent' <- tc_parent tyvars mb_axiom_name\n ; cons <- tcIfaceDataCons tc_name tycon tyvars rdr_cons\n ; return (buildAlgTyCon tc_name tyvars roles cType stupid_theta\n cons is_rec is_prom gadt_syn parent') }\n ; traceIf (text \"tcIfaceDecl4\" <+> ppr tycon)\n ; return (ATyCon tycon) }\n where\n tc_parent :: [TyVar] -> Maybe Name -> IfL TyConParent\n tc_parent _ Nothing = return parent\n tc_parent tyvars (Just ax_name)\n = ASSERT( isNoParent parent )\n do { ax <- tcIfaceCoAxiom ax_name\n ; let fam_tc = coAxiomTyCon ax\n ax_unbr = toUnbranchedAxiom ax\n -- data families don't have branches:\n branch = coAxiomSingleBranch ax_unbr\n ax_tvs = coAxBranchTyVars branch\n ax_lhs = coAxBranchLHS branch\n tycon_tys = mkTyVarTys tyvars\n subst = mkTopTvSubst 
(ax_tvs `zip` tycon_tys)\n -- The subst matches the tyvar of the TyCon\n -- with those from the CoAxiom. They aren't\n -- necessarily the same, since the two may be\n -- gotten from separate interface-file declarations\n -- NB: ax_tvs may be shorter because of eta-reduction\n -- See Note [Eta reduction for data family axioms] in TcInstDcls\n lhs_tys = substTys subst ax_lhs `chkAppend`\n dropList ax_tvs tycon_tys\n -- The 'lhs_tys' should be 1-1 with the 'tyvars'\n -- but ax_tvs may be shorter because of eta-reduction\n ; return (FamInstTyCon ax_unbr fam_tc lhs_tys) }\n\ntc_iface_decl parent _ (IfaceSyn {ifName = occ_name, ifTyVars = tv_bndrs,\n ifRoles = roles,\n ifSynRhs = mb_rhs_ty,\n ifSynKind = kind })\n = bindIfaceTyVars_AT tv_bndrs $ \\ tyvars -> do\n { tc_name <- lookupIfaceTop occ_name\n ; rhs_kind <- tcIfaceKind kind -- Note [Synonym kind loop]\n ; rhs <- forkM (mk_doc tc_name) $\n tc_syn_rhs mb_rhs_ty\n ; tycon <- buildSynTyCon tc_name tyvars roles rhs rhs_kind parent\n ; return (ATyCon tycon) }\n where\n mk_doc n = ptext (sLit \"Type synonym\") <+> ppr n\n tc_syn_rhs IfaceOpenSynFamilyTyCon = return OpenSynFamilyTyCon\n tc_syn_rhs (IfaceClosedSynFamilyTyCon ax_name)\n = do { ax <- tcIfaceCoAxiom ax_name\n ; return (ClosedSynFamilyTyCon ax) }\n tc_syn_rhs IfaceAbstractClosedSynFamilyTyCon = return AbstractClosedSynFamilyTyCon\n tc_syn_rhs (IfaceSynonymTyCon ty) = do { rhs_ty <- tcIfaceType ty\n ; return (SynonymTyCon rhs_ty) }\n\ntc_iface_decl _parent ignore_prags\n (IfaceClass {ifCtxt = rdr_ctxt, ifName = tc_occ,\n ifTyVars = tv_bndrs, ifRoles = roles, ifFDs = rdr_fds,\n ifATs = rdr_ats, ifSigs = rdr_sigs,\n ifMinDef = mindef_occ, ifRec = tc_isrec })\n-- ToDo: in hs-boot files we should really treat abstract classes specially,\n-- as we do abstract tycons\n = bindIfaceTyVars tv_bndrs $ \\ tyvars -> do\n { tc_name <- lookupIfaceTop tc_occ\n ; traceIf (text \"tc-iface-class1\" <+> ppr tc_occ)\n ; ctxt <- mapM tc_sc rdr_ctxt\n ; traceIf (text 
\"tc-iface-class2\" <+> ppr tc_occ)\n ; sigs <- mapM tc_sig rdr_sigs\n ; fds <- mapM tc_fd rdr_fds\n ; traceIf (text \"tc-iface-class3\" <+> ppr tc_occ)\n ; mindef <- traverse lookupIfaceTop mindef_occ\n ; cls <- fixM $ \\ cls -> do\n { ats <- mapM (tc_at cls) rdr_ats\n ; traceIf (text \"tc-iface-class4\" <+> ppr tc_occ)\n ; buildClass ignore_prags tc_name tyvars roles ctxt fds ats sigs mindef tc_isrec }\n ; return (ATyCon (classTyCon cls)) }\n where\n tc_sc pred = forkM (mk_sc_doc pred) (tcIfaceType pred)\n -- The *length* of the superclasses is used by buildClass, and hence must\n -- not be inside the thunk. But the *content* may be recursive and hence\n -- must be lazy (via forkM). Example:\n -- class C (T a) => D a where\n -- data T a\n -- Here the associated type T is knot-tied with the class, and\n -- so we must not pull on T too eagerly. See Trac #5970\n\n tc_sig (IfaceClassOp occ dm rdr_ty)\n = do { op_name <- lookupIfaceTop occ\n ; op_ty <- forkM (mk_op_doc op_name rdr_ty) (tcIfaceType rdr_ty)\n -- Must be done lazily for just the same reason as the\n -- type of a data con; to avoid sucking in types that\n -- it mentions unless it's necessary to do so\n ; return (op_name, dm, op_ty) }\n\n tc_at cls (IfaceAT tc_decl defs_decls)\n = do ATyCon tc <- tc_iface_decl (AssocFamilyTyCon cls) ignore_prags tc_decl\n defs <- forkM (mk_at_doc tc) (tc_ax_branches tc defs_decls)\n -- Must be done lazily in case the RHS of the defaults mention\n -- the type constructor being defined here\n -- e.g. 
type AT a; type AT b = AT [b] Trac #8002\n return (tc, defs)\n\n mk_sc_doc pred = ptext (sLit \"Superclass\") <+> ppr pred\n mk_at_doc tc = ptext (sLit \"Associated type\") <+> ppr tc\n mk_op_doc op_name op_ty = ptext (sLit \"Class op\") <+> sep [ppr op_name, ppr op_ty]\n\n tc_fd (tvs1, tvs2) = do { tvs1' <- mapM tcIfaceTyVar tvs1\n ; tvs2' <- mapM tcIfaceTyVar tvs2\n ; return (tvs1', tvs2') }\n\ntc_iface_decl _ _ (IfaceForeign {ifName = rdr_name, ifExtName = ext_name})\n = do { name <- lookupIfaceTop rdr_name\n ; return (ATyCon (mkForeignTyCon name ext_name\n liftedTypeKind)) }\n\ntc_iface_decl _ _ (IfaceAxiom { ifName = ax_occ, ifTyCon = tc\n , ifAxBranches = branches, ifRole = role })\n = do { tc_name <- lookupIfaceTop ax_occ\n ; tc_tycon <- tcIfaceTyCon tc\n ; tc_branches <- tc_ax_branches tc_tycon branches\n ; let axiom = computeAxiomIncomps $\n CoAxiom { co_ax_unique = nameUnique tc_name\n , co_ax_name = tc_name\n , co_ax_tc = tc_tycon\n , co_ax_role = role\n , co_ax_branches = toBranchList tc_branches\n , co_ax_implicit = False }\n ; return (ACoAxiom axiom) }\n\ntc_iface_decl _ _ (IfacePatSyn{ ifName = occ_name\n , ifPatHasWrapper = has_wrapper\n , ifPatIsInfix = is_infix\n , ifPatUnivTvs = univ_tvs\n , ifPatExTvs = ex_tvs\n , ifPatProvCtxt = prov_ctxt\n , ifPatReqCtxt = req_ctxt\n , ifPatArgs = args\n , ifPatTy = pat_ty })\n = do { name <- lookupIfaceTop occ_name\n ; traceIf (ptext (sLit \"tc_iface_decl\") <+> ppr name)\n ; bindIfaceTyVars univ_tvs $ \\univ_tvs -> do\n { bindIfaceTyVars ex_tvs $ \\ex_tvs -> do\n { bindIfaceIdVars args $ \\args -> do\n { ~(prov_theta, req_theta, pat_ty) <- forkM (mk_doc name) $\n do { prov_theta <- tcIfaceCtxt prov_ctxt\n ; req_theta <- tcIfaceCtxt req_ctxt\n ; pat_ty <- tcIfaceType pat_ty\n ; return (prov_theta, req_theta, pat_ty) }\n ; bindIfaceTyVar (fsLit \"r\", toIfaceKind liftedTypeKind) $ \\tv -> do\n { patsyn <- buildPatSyn name is_infix has_wrapper args univ_tvs ex_tvs prov_theta req_theta pat_ty tv\n ; return 
(AConLike (PatSynCon patsyn)) }}}}}\n where\n mk_doc n = ptext (sLit \"Pattern synonym\") <+> ppr n\n\n\ntc_ax_branches :: TyCon -> [IfaceAxBranch] -> IfL [CoAxBranch]\ntc_ax_branches tc if_branches = foldlM (tc_ax_branch (tyConKind tc)) [] if_branches\n\ntc_ax_branch :: Kind -> [CoAxBranch] -> IfaceAxBranch -> IfL [CoAxBranch]\ntc_ax_branch tc_kind prev_branches\n (IfaceAxBranch { ifaxbTyVars = tv_bndrs, ifaxbLHS = lhs, ifaxbRHS = rhs\n , ifaxbRoles = roles, ifaxbIncomps = incomps })\n = bindIfaceTyVars_AT tv_bndrs $ \\ tvs -> do\n -- The _AT variant is needed here; see Note [CoAxBranch type variables] in CoAxiom\n { tc_lhs <- tcIfaceTcArgs tc_kind lhs -- See Note [Checking IfaceTypes vs IfaceKinds]\n ; tc_rhs <- tcIfaceType rhs\n ; let br = CoAxBranch { cab_loc = noSrcSpan\n , cab_tvs = tvs\n , cab_lhs = tc_lhs\n , cab_roles = roles\n , cab_rhs = tc_rhs\n , cab_incomps = map (prev_branches !!) incomps }\n ; return (prev_branches ++ [br]) }\n\ntcIfaceDataCons :: Name -> TyCon -> [TyVar] -> IfaceConDecls -> IfL AlgTyConRhs\ntcIfaceDataCons tycon_name tycon _ if_cons\n = case if_cons of\n IfAbstractTyCon dis -> return (AbstractTyCon dis)\n IfDataFamTyCon -> return DataFamilyTyCon\n IfDataTyCon cons -> do { data_cons <- mapM tc_con_decl cons\n ; return (mkDataTyConRhs data_cons) }\n IfNewTyCon con -> do { data_con <- tc_con_decl con\n ; mkNewTyConRhs tycon_name tycon data_con }\n where\n tc_con_decl (IfCon { ifConInfix = is_infix,\n ifConUnivTvs = univ_tvs, ifConExTvs = ex_tvs,\n ifConOcc = occ, ifConCtxt = ctxt, ifConEqSpec = spec,\n ifConArgTys = args, ifConFields = field_lbls,\n ifConStricts = if_stricts})\n = bindIfaceTyVars univ_tvs $ \\ univ_tyvars -> do\n bindIfaceTyVars ex_tvs $ \\ ex_tyvars -> do\n { traceIf (text \"Start interface-file tc_con_decl\" <+> ppr occ)\n ; name <- lookupIfaceTop occ\n\n -- Read the context and argument types, but lazily for two reasons\n -- (a) to avoid tugging on a recursive use of\n -- the type itself, which is 
knot-tied\n -- (b) to avoid faulting in the component types unless\n -- they are really needed\n ; ~(eq_spec, theta, arg_tys, stricts) <- forkM (mk_doc name) $\n do { eq_spec <- tcIfaceEqSpec spec\n ; theta <- tcIfaceCtxt ctxt\n ; arg_tys <- mapM tcIfaceType args\n ; stricts <- mapM tc_strict if_stricts\n -- The IfBang field can mention\n -- the type itself; hence inside forkM\n ; return (eq_spec, theta, arg_tys, stricts) }\n ; lbl_names <- mapM lookupIfaceTop field_lbls\n\n -- Remember, tycon is the representation tycon\n ; let orig_res_ty = mkFamilyTyConApp tycon\n (substTyVars (mkTopTvSubst eq_spec) univ_tyvars)\n\n ; con <- buildDataCon (pprPanic \"tcIfaceDataCons: FamInstEnvs\" (ppr name))\n name is_infix\n stricts lbl_names\n univ_tyvars ex_tyvars\n eq_spec theta\n arg_tys orig_res_ty tycon\n ; traceIf (text \"Done interface-file tc_con_decl\" <+> ppr name)\n ; return con }\n mk_doc con_name = ptext (sLit \"Constructor\") <+> ppr con_name\n\n tc_strict IfNoBang = return HsNoBang\n tc_strict IfStrict = return HsStrict\n tc_strict IfUnpack = return (HsUnpack Nothing)\n tc_strict (IfUnpackCo if_co) = do { co <- tcIfaceCo if_co\n ; return (HsUnpack (Just co)) }\n\ntcIfaceEqSpec :: [(OccName, IfaceType)] -> IfL [(TyVar, Type)]\ntcIfaceEqSpec spec\n = mapM do_item spec\n where\n do_item (occ, if_ty) = do { tv <- tcIfaceTyVar (occNameFS occ)\n ; ty <- tcIfaceType if_ty\n ; return (tv,ty) }\n\\end{code}\n\nNote [Synonym kind loop]\n~~~~~~~~~~~~~~~~~~~~~~~~\nNotice that we eagerly grab the *kind* from the interface file, but\nbuild a forkM thunk for the *rhs* (and family stuff). To see why,\nconsider this (Trac #2412)\n\nM.hs: module M where { import X; data T = MkT S }\nX.hs: module X where { import {-# SOURCE #-} M; type S = T }\nM.hs-boot: module M where { data T }\n\nWhen kind-checking M.hs we need S's kind. But we do not want to\nfind S's kind from (typeKind S-rhs), because we don't want to look at\nS-rhs yet! 
Since S is imported from X.hi, S gets just one chance to\nbe defined, and we must not do that until we've finished with M.T.\n\nSolution: record S's kind in the interface file; now we can safely\nlook at it.\n\n%************************************************************************\n%* *\n Instances\n%* *\n%************************************************************************\n\n\\begin{code}\ntcIfaceInst :: IfaceClsInst -> IfL ClsInst\ntcIfaceInst (IfaceClsInst { ifDFun = dfun_occ, ifOFlag = oflag\n , ifInstCls = cls, ifInstTys = mb_tcs })\n = do { dfun <- forkM (ptext (sLit \"Dict fun\") <+> ppr dfun_occ) $\n tcIfaceExtId dfun_occ\n ; let mb_tcs' = map (fmap ifaceTyConName) mb_tcs\n ; return (mkImportedInstance cls mb_tcs' dfun oflag) }\n\ntcIfaceFamInst :: IfaceFamInst -> IfL FamInst\ntcIfaceFamInst (IfaceFamInst { ifFamInstFam = fam, ifFamInstTys = mb_tcs\n , ifFamInstAxiom = axiom_name } )\n = do { axiom' <- forkM (ptext (sLit \"Axiom\") <+> ppr axiom_name) $\n tcIfaceCoAxiom axiom_name\n -- will panic if branched, but that's OK\n ; let axiom'' = toUnbranchedAxiom axiom'\n mb_tcs' = map (fmap ifaceTyConName) mb_tcs\n ; return (mkImportedFamInst fam mb_tcs' axiom'') }\n\\end{code}\n\n\n%************************************************************************\n%* *\n Rules\n%* *\n%************************************************************************\n\nWe move an IfaceRule from eps_rules to eps_rule_base when all its LHS free vars\nare in the type environment. 
However, remember that typechecking a Rule may\n(as a side effect) augment the type envt, and so we may need to iterate the process.\n\n\\begin{code}\ntcIfaceRules :: Bool -- True <=> ignore rules\n -> [IfaceRule]\n -> IfL [CoreRule]\ntcIfaceRules ignore_prags if_rules\n | ignore_prags = return []\n | otherwise = mapM tcIfaceRule if_rules\n\ntcIfaceRule :: IfaceRule -> IfL CoreRule\ntcIfaceRule (IfaceRule {ifRuleName = name, ifActivation = act, ifRuleBndrs = bndrs,\n ifRuleHead = fn, ifRuleArgs = args, ifRuleRhs = rhs,\n ifRuleAuto = auto })\n = do { ~(bndrs', args', rhs') <-\n -- Typecheck the payload lazily, in the hope it'll never be looked at\n forkM (ptext (sLit \"Rule\") <+> ftext name) $\n bindIfaceBndrs bndrs $ \\ bndrs' ->\n do { args' <- mapM tcIfaceExpr args\n ; rhs' <- tcIfaceExpr rhs\n ; return (bndrs', args', rhs') }\n ; let mb_tcs = map ifTopFreeName args\n ; return (Rule { ru_name = name, ru_fn = fn, ru_act = act,\n ru_bndrs = bndrs', ru_args = args',\n ru_rhs = occurAnalyseExpr rhs',\n ru_rough = mb_tcs,\n ru_auto = auto,\n ru_local = False }) } -- An imported RULE is never for a local Id\n -- or, even if it is (module loop, perhaps)\n -- we'll just leave it in the non-local set\n where\n -- This function *must* mirror exactly what Rules.topFreeName does\n -- We could have stored the ru_rough field in the iface file\n -- but that would be redundant, I think.\n -- The only wrinkle is that we must not be deceived by\n -- type synonyms at the top of a type arg. 
Since\n -- we can't tell at this point, we are careful not\n -- to write them out in coreRuleToIfaceRule\n ifTopFreeName :: IfaceExpr -> Maybe Name\n ifTopFreeName (IfaceType (IfaceTyConApp tc _ )) = Just (ifaceTyConName tc)\n ifTopFreeName (IfaceApp f _) = ifTopFreeName f\n ifTopFreeName (IfaceExt n) = Just n\n ifTopFreeName _ = Nothing\n\\end{code}\n\n\n%************************************************************************\n%* *\n Annotations\n%* *\n%************************************************************************\n\n\\begin{code}\ntcIfaceAnnotations :: [IfaceAnnotation] -> IfL [Annotation]\ntcIfaceAnnotations = mapM tcIfaceAnnotation\n\ntcIfaceAnnotation :: IfaceAnnotation -> IfL Annotation\ntcIfaceAnnotation (IfaceAnnotation target serialized) = do\n target' <- tcIfaceAnnTarget target\n return $ Annotation {\n ann_target = target',\n ann_value = serialized\n }\n\ntcIfaceAnnTarget :: IfaceAnnTarget -> IfL (AnnTarget Name)\ntcIfaceAnnTarget (NamedTarget occ) = do\n name <- lookupIfaceTop occ\n return $ NamedTarget name\ntcIfaceAnnTarget (ModuleTarget mod) = do\n return $ ModuleTarget mod\n\n\\end{code}\n\n\n%************************************************************************\n%* *\n Vectorisation information\n%* *\n%************************************************************************\n\n\\begin{code}\n-- We need access to the type environment as we need to look up information about type constructors\n-- (i.e., their data constructors and whether they are class type constructors). 
If a vectorised\n-- type constructor or class is defined in the same module as where it is vectorised, we cannot\n-- look that information up from the type constructor that we obtained via a 'forkM'ed\n-- 'tcIfaceTyCon' without recursively loading the interface that we are already type checking again\n-- and again and again...\n--\ntcIfaceVectInfo :: Module -> TypeEnv -> IfaceVectInfo -> IfL VectInfo\ntcIfaceVectInfo mod typeEnv (IfaceVectInfo\n { ifaceVectInfoVar = vars\n , ifaceVectInfoTyCon = tycons\n , ifaceVectInfoTyConReuse = tyconsReuse\n , ifaceVectInfoParallelVars = parallelVars\n , ifaceVectInfoParallelTyCons = parallelTyCons\n })\n = do { let parallelTyConsSet = mkNameSet parallelTyCons\n ; vVars <- mapM vectVarMapping vars\n ; let varsSet = mkVarSet (map fst vVars)\n ; tyConRes1 <- mapM (vectTyConVectMapping varsSet) tycons\n ; tyConRes2 <- mapM (vectTyConReuseMapping varsSet) tyconsReuse\n ; vParallelVars <- mapM vectVar parallelVars\n ; let (vTyCons, vDataCons, vScSels) = unzip3 (tyConRes1 ++ tyConRes2)\n ; return $ VectInfo\n { vectInfoVar = mkVarEnv vVars `extendVarEnvList` concat vScSels\n , vectInfoTyCon = mkNameEnv vTyCons\n , vectInfoDataCon = mkNameEnv (concat vDataCons)\n , vectInfoParallelVars = mkVarSet vParallelVars\n , vectInfoParallelTyCons = parallelTyConsSet\n }\n }\n where\n vectVarMapping name\n = do { vName <- lookupOrig mod (mkLocalisedOccName mod mkVectOcc name)\n ; var <- forkM (ptext (sLit \"vect var\") <+> ppr name) $\n tcIfaceExtId name\n ; vVar <- forkM (ptext (sLit \"vect vVar [mod =\") <+>\n ppr mod <> ptext (sLit \"; nameModule =\") <+>\n ppr (nameModule name) <> ptext (sLit \"]\") <+> ppr vName) $\n tcIfaceExtId vName\n ; return (var, (var, vVar))\n }\n -- where\n -- lookupLocalOrExternalId name\n -- = do { let mb_id = lookupTypeEnv typeEnv name\n -- ; case mb_id of\n -- -- id is local\n -- Just (AnId id) -> return id\n -- -- name is not an Id => internal inconsistency\n -- Just _ -> notAnIdErr\n -- -- Id is external\n -- 
Nothing -> tcIfaceExtId name\n -- }\n --\n -- notAnIdErr = pprPanic \"TcIface.tcIfaceVectInfo: not an id\" (ppr name)\n\n vectVar name\n = forkM (ptext (sLit \"vect scalar var\") <+> ppr name) $\n tcIfaceExtId name\n\n vectTyConVectMapping vars name\n = do { vName <- lookupOrig mod (mkLocalisedOccName mod mkVectTyConOcc name)\n ; vectTyConMapping vars name vName\n }\n\n vectTyConReuseMapping vars name\n = vectTyConMapping vars name name\n\n vectTyConMapping vars name vName\n = do { tycon <- lookupLocalOrExternalTyCon name\n ; vTycon <- forkM (ptext (sLit \"vTycon of\") <+> ppr vName) $\n lookupLocalOrExternalTyCon vName\n\n -- Map the data constructors of the original type constructor to those of the\n -- vectorised type constructor \/unless\/ the type constructor was vectorised\n -- abstractly; if it was vectorised abstractly, the workers of its data constructors\n -- do not appear in the set of vectorised variables.\n --\n -- NB: This is lazy! We don't pull at the type constructors before we actually use\n -- the data constructor mapping.\n ; let isAbstract | isClassTyCon tycon = False\n | datacon:_ <- tyConDataCons tycon\n = not $ dataConWrapId datacon `elemVarSet` vars\n | otherwise = True\n vDataCons | isAbstract = []\n | otherwise = [ (dataConName datacon, (datacon, vDatacon))\n | (datacon, vDatacon) <- zip (tyConDataCons tycon)\n (tyConDataCons vTycon)\n ]\n\n -- Map the (implicit) superclass and methods selectors as they don't occur in\n -- the var map.\n vScSels | Just cls <- tyConClass_maybe tycon\n , Just vCls <- tyConClass_maybe vTycon\n = [ (sel, (sel, vSel))\n | (sel, vSel) <- zip (classAllSelIds cls) (classAllSelIds vCls)\n ]\n | otherwise\n = []\n\n ; return ( (name, (tycon, vTycon)) -- (T, T_v)\n , vDataCons -- list of (Ci, Ci_v)\n , vScSels -- list of (seli, seli_v)\n )\n }\n where\n -- we need a fully defined version of the type constructor to be able to extract\n -- its data constructors etc.\n lookupLocalOrExternalTyCon name\n = do { let 
mb_tycon = lookupTypeEnv typeEnv name\n ; case mb_tycon of\n -- tycon is local\n Just (ATyCon tycon) -> return tycon\n -- name is not a tycon => internal inconsistency\n Just _ -> notATyConErr\n -- tycon is external\n Nothing -> tcIfaceTyCon (IfaceTc name)\n }\n\n notATyConErr = pprPanic \"TcIface.tcIfaceVectInfo: not a tycon\" (ppr name)\n\\end{code}\n\n%************************************************************************\n%* *\n Types\n%* *\n%************************************************************************\n\n\\begin{code}\ntcIfaceType :: IfaceType -> IfL Type\ntcIfaceType (IfaceTyVar n) = do { tv <- tcIfaceTyVar n; return (TyVarTy tv) }\ntcIfaceType (IfaceAppTy t1 t2) = do { t1' <- tcIfaceType t1; t2' <- tcIfaceType t2; return (AppTy t1' t2') }\ntcIfaceType (IfaceLitTy l) = do { l1 <- tcIfaceTyLit l; return (LitTy l1) }\ntcIfaceType (IfaceFunTy t1 t2) = do { t1' <- tcIfaceType t1; t2' <- tcIfaceType t2; return (FunTy t1' t2') }\ntcIfaceType (IfaceTyConApp tc tks) = do { tc' <- tcIfaceTyCon tc\n ; tks' <- tcIfaceTcArgs (tyConKind tc') tks\n ; return (mkTyConApp tc' tks') }\ntcIfaceType (IfaceForAllTy tv t) = bindIfaceTyVar tv $ \\ tv' -> do { t' <- tcIfaceType t; return (ForAllTy tv' t') }\n\ntcIfaceTypes :: [IfaceType] -> IfL [Type]\ntcIfaceTypes tys = mapM tcIfaceType tys\n\ntcIfaceTcArgs :: Kind -> [IfaceType] -> IfL [Type]\ntcIfaceTcArgs _ []\n = return []\ntcIfaceTcArgs kind (tk:tks)\n = case splitForAllTy_maybe kind of\n Nothing -> tcIfaceTypes (tk:tks)\n Just (_, kind') -> do { k' <- tcIfaceKind tk\n ; tks' <- tcIfaceTcArgs kind' tks\n ; return (k':tks') }\n\n-----------------------------------------\ntcIfaceCtxt :: IfaceContext -> IfL ThetaType\ntcIfaceCtxt sts = mapM tcIfaceType sts\n\n-----------------------------------------\ntcIfaceTyLit :: IfaceTyLit -> IfL TyLit\ntcIfaceTyLit (IfaceNumTyLit n) = return (NumTyLit n)\ntcIfaceTyLit (IfaceStrTyLit n) = return (StrTyLit n)\n\n-----------------------------------------\ntcIfaceKind :: IfaceKind 
-> IfL Kind -- See Note [Checking IfaceTypes vs IfaceKinds]\ntcIfaceKind (IfaceTyVar n) = do { tv <- tcIfaceTyVar n; return (TyVarTy tv) }\ntcIfaceKind (IfaceAppTy t1 t2) = do { t1' <- tcIfaceKind t1; t2' <- tcIfaceKind t2; return (AppTy t1' t2') }\ntcIfaceKind (IfaceFunTy t1 t2) = do { t1' <- tcIfaceKind t1; t2' <- tcIfaceKind t2; return (FunTy t1' t2') }\ntcIfaceKind (IfaceTyConApp tc ts) = do { tc' <- tcIfaceKindCon tc; ts' <- tcIfaceKinds ts; return (mkTyConApp tc' ts') }\ntcIfaceKind (IfaceForAllTy tv t) = bindIfaceTyVar tv $ \\ tv' -> do { t' <- tcIfaceKind t; return (ForAllTy tv' t') }\ntcIfaceKind t = pprPanic \"tcIfaceKind\" (ppr t) -- IfaceCoApp, IfaceLitTy\n\ntcIfaceKinds :: [IfaceKind] -> IfL [Kind]\ntcIfaceKinds tys = mapM tcIfaceKind tys\n\\end{code}\n\nNote [Checking IfaceTypes vs IfaceKinds]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWe need to know whether we are checking a *type* or a *kind*.\nConsider module M where\n Proxy :: forall k. k -> *\n data T = T\nand consider the two IfaceTypes\n M.Proxy * M.T{tc}\n M.Proxy 'M.T{tc} 'M.T(d}\nThe first is conventional, but in the latter we use the promoted\ntype constructor (as a kind) and data constructor (as a type). 
However,\nthe Name of the promoted type constructor is just M.T; it's the *same name*\nas the ordinary type constructor.\n\nWe could add a \"promoted\" flag to an IfaceTyCon, but that's a bit heavy.\nInstead we use context to distinguish, as in the source language.\n - When checking a kind, we look up M.T{tc} and promote it\n - When checking a type, we look up M.T{tc} and don't promote it\n and M.T{d} and promote it\n See tcIfaceKindCon and tcIfaceKTyCon respectively\n\nThis context business is why we need tcIfaceTcArgs, and tcIfaceApps\n\n\n%************************************************************************\n%* *\n Coercions\n%* *\n%************************************************************************\n\n\\begin{code}\ntcIfaceCo :: IfaceCoercion -> IfL Coercion\ntcIfaceCo (IfaceReflCo r t) = mkReflCo r <$> tcIfaceType t\ntcIfaceCo (IfaceFunCo r c1 c2) = mkFunCo r <$> tcIfaceCo c1 <*> tcIfaceCo c2\ntcIfaceCo (IfaceTyConAppCo r tc cs) = mkTyConAppCo r <$> tcIfaceTyCon tc\n <*> mapM tcIfaceCo cs\ntcIfaceCo (IfaceAppCo c1 c2) = mkAppCo <$> tcIfaceCo c1\n <*> tcIfaceCo c2\ntcIfaceCo (IfaceForAllCo tv c) = bindIfaceTyVar tv $ \\ tv' ->\n mkForAllCo tv' <$> tcIfaceCo c\ntcIfaceCo (IfaceCoVarCo n) = mkCoVarCo <$> tcIfaceCoVar n\ntcIfaceCo (IfaceAxiomInstCo n i cs) = AxiomInstCo <$> tcIfaceCoAxiom n\n <*> pure i\n <*> mapM tcIfaceCo cs\ntcIfaceCo (IfaceUnivCo r t1 t2) = UnivCo r <$> tcIfaceType t1\n <*> tcIfaceType t2\ntcIfaceCo (IfaceSymCo c) = SymCo <$> tcIfaceCo c\ntcIfaceCo (IfaceTransCo c1 c2) = TransCo <$> tcIfaceCo c1\n <*> tcIfaceCo c2\ntcIfaceCo (IfaceInstCo c1 t2) = InstCo <$> tcIfaceCo c1\n <*> tcIfaceType t2\ntcIfaceCo (IfaceNthCo d c) = NthCo d <$> tcIfaceCo c\ntcIfaceCo (IfaceLRCo lr c) = LRCo lr <$> tcIfaceCo c\ntcIfaceCo (IfaceSubCo c) = SubCo <$> tcIfaceCo c\ntcIfaceCo (IfaceAxiomRuleCo ax tys cos) = AxiomRuleCo\n <$> tcIfaceCoAxiomRule ax\n <*> mapM tcIfaceType tys\n <*> mapM tcIfaceCo cos\n\ntcIfaceCoVar :: FastString -> IfL CoVar\ntcIfaceCoVar 
= tcIfaceLclId\n\ntcIfaceCoAxiomRule :: FastString -> IfL CoAxiomRule\ntcIfaceCoAxiomRule n =\n case Map.lookup n typeNatCoAxiomRules of\n Just ax -> return ax\n _ -> pprPanic \"tcIfaceCoAxiomRule\" (ppr n)\n\\end{code}\n\n\n%************************************************************************\n%* *\n Core\n%* *\n%************************************************************************\n\n\\begin{code}\ntcIfaceExpr :: IfaceExpr -> IfL CoreExpr\ntcIfaceExpr (IfaceType ty)\n = Type <$> tcIfaceType ty\n\ntcIfaceExpr (IfaceCo co)\n = Coercion <$> tcIfaceCo co\n\ntcIfaceExpr (IfaceCast expr co)\n = Cast <$> tcIfaceExpr expr <*> tcIfaceCo co\n\ntcIfaceExpr (IfaceLcl name)\n = Var <$> tcIfaceLclId name\n\ntcIfaceExpr (IfaceExt gbl)\n = Var <$> tcIfaceExtId gbl\n\ntcIfaceExpr (IfaceLit lit)\n = do lit' <- tcIfaceLit lit\n return (Lit lit')\n\ntcIfaceExpr (IfaceFCall cc ty) = do\n ty' <- tcIfaceType ty\n u <- newUnique\n dflags <- getDynFlags\n return (Var (mkFCallId dflags u cc ty'))\n\ntcIfaceExpr (IfaceTuple boxity args) = do\n args' <- mapM tcIfaceExpr args\n -- Put the missing type arguments back in\n let con_args = map (Type . 
exprType) args' ++ args'\n return (mkApps (Var con_id) con_args)\n where\n arity = length args\n con_id = dataConWorkId (tupleCon boxity arity)\n\n\ntcIfaceExpr (IfaceLam bndr body)\n = bindIfaceBndr bndr $ \\bndr' ->\n Lam bndr' <$> tcIfaceExpr body\n\ntcIfaceExpr (IfaceApp fun arg)\n = tcIfaceApps fun arg\n\ntcIfaceExpr (IfaceECase scrut ty)\n = do { scrut' <- tcIfaceExpr scrut\n ; ty' <- tcIfaceType ty\n ; return (castBottomExpr scrut' ty') }\n\ntcIfaceExpr (IfaceCase scrut case_bndr alts) = do\n scrut' <- tcIfaceExpr scrut\n case_bndr_name <- newIfaceName (mkVarOccFS case_bndr)\n let\n scrut_ty = exprType scrut'\n case_bndr' = mkLocalId case_bndr_name scrut_ty\n tc_app = splitTyConApp scrut_ty\n -- NB: Won't always succeed (polymorphic case)\n -- but won't be demanded in those cases\n -- NB: not tcSplitTyConApp; we are looking at Core here\n -- look through non-rec newtypes to find the tycon that\n -- corresponds to the datacon in this case alternative\n\n extendIfaceIdEnv [case_bndr'] $ do\n alts' <- mapM (tcIfaceAlt scrut' tc_app) alts\n return (Case scrut' case_bndr' (coreAltsType alts') alts')\n\ntcIfaceExpr (IfaceLet (IfaceNonRec (IfLetBndr fs ty info) rhs) body)\n = do { name <- newIfaceName (mkVarOccFS fs)\n ; ty' <- tcIfaceType ty\n ; id_info <- tcIdInfo False {- Don't ignore prags; we are inside one! 
-}\n name ty' info\n ; let id = mkLocalIdWithInfo name ty' id_info\n ; rhs' <- tcIfaceExpr rhs\n ; body' <- extendIfaceIdEnv [id] (tcIfaceExpr body)\n ; return (Let (NonRec id rhs') body') }\n\ntcIfaceExpr (IfaceLet (IfaceRec pairs) body)\n = do { ids <- mapM tc_rec_bndr (map fst pairs)\n ; extendIfaceIdEnv ids $ do\n { pairs' <- zipWithM tc_pair pairs ids\n ; body' <- tcIfaceExpr body\n ; return (Let (Rec pairs') body') } }\n where\n tc_rec_bndr (IfLetBndr fs ty _)\n = do { name <- newIfaceName (mkVarOccFS fs)\n ; ty' <- tcIfaceType ty\n ; return (mkLocalId name ty') }\n tc_pair (IfLetBndr _ _ info, rhs) id\n = do { rhs' <- tcIfaceExpr rhs\n ; id_info <- tcIdInfo False {- Don't ignore prags; we are inside one! -}\n (idName id) (idType id) info\n ; return (setIdInfo id id_info, rhs') }\n\ntcIfaceExpr (IfaceTick tickish expr) = do\n expr' <- tcIfaceExpr expr\n tickish' <- tcIfaceTickish tickish\n return (Tick tickish' expr')\n\n-------------------------\ntcIfaceApps :: IfaceExpr -> IfaceExpr -> IfL CoreExpr\n-- See Note [Checking IfaceTypes vs IfaceKinds]\ntcIfaceApps fun arg\n = go_down fun [arg]\n where\n go_down (IfaceApp fun arg) args = go_down fun (arg:args)\n go_down fun args = do { fun' <- tcIfaceExpr fun\n ; go_up fun' (exprType fun') args }\n\n go_up :: CoreExpr -> Type -> [IfaceExpr] -> IfL CoreExpr\n go_up fun _ [] = return fun\n go_up fun fun_ty (IfaceType t : args)\n | Just (tv,body_ty) <- splitForAllTy_maybe fun_ty\n = do { t' <- if isKindVar tv -- See Note [Checking IfaceTypes vs IfaceKinds]\n then tcIfaceKind t\n else tcIfaceType t\n ; let fun_ty' = substTyWith [tv] [t'] body_ty\n ; go_up (App fun (Type t')) fun_ty' args }\n go_up fun fun_ty (arg : args)\n | Just (_, fun_ty') <- splitFunTy_maybe fun_ty\n = do { arg' <- tcIfaceExpr arg\n ; go_up (App fun arg') fun_ty' args }\n go_up fun fun_ty args = pprPanic \"tcIfaceApps\" (ppr fun $$ ppr fun_ty $$ ppr args)\n\n-------------------------\ntcIfaceTickish :: IfaceTickish -> IfM lcl (Tickish 
Id)\ntcIfaceTickish (IfaceHpcTick modl ix) = return (HpcTick modl ix)\ntcIfaceTickish (IfaceSCC cc tick push) = return (ProfNote cc tick push)\n\n-------------------------\ntcIfaceLit :: Literal -> IfL Literal\n-- Integer literals deserialise to (LitInteger i )\n-- so tcIfaceLit just fills in the type.\n-- See Note [Integer literals] in Literal\ntcIfaceLit (LitInteger i _)\n = do t <- tcIfaceTyCon (IfaceTc integerTyConName)\n return (mkLitInteger i (mkTyConTy t))\ntcIfaceLit lit = return lit\n\n-------------------------\ntcIfaceAlt :: CoreExpr -> (TyCon, [Type])\n -> (IfaceConAlt, [FastString], IfaceExpr)\n -> IfL (AltCon, [TyVar], CoreExpr)\ntcIfaceAlt _ _ (IfaceDefault, names, rhs)\n = ASSERT( null names ) do\n rhs' <- tcIfaceExpr rhs\n return (DEFAULT, [], rhs')\n\ntcIfaceAlt _ _ (IfaceLitAlt lit, names, rhs)\n = ASSERT( null names ) do\n lit' <- tcIfaceLit lit\n rhs' <- tcIfaceExpr rhs\n return (LitAlt lit', [], rhs')\n\n-- A case alternative is made quite a bit more complicated\n-- by the fact that we omit type annotations because we can\n-- work them out. 
True enough, but it's not that easy!\ntcIfaceAlt scrut (tycon, inst_tys) (IfaceDataAlt data_occ, arg_strs, rhs)\n = do { con <- tcIfaceDataCon data_occ\n ; when (debugIsOn && not (con `elem` tyConDataCons tycon))\n (failIfM (ppr scrut $$ ppr con $$ ppr tycon $$ ppr (tyConDataCons tycon)))\n ; tcIfaceDataAlt con inst_tys arg_strs rhs }\n\ntcIfaceDataAlt :: DataCon -> [Type] -> [FastString] -> IfaceExpr\n -> IfL (AltCon, [TyVar], CoreExpr)\ntcIfaceDataAlt con inst_tys arg_strs rhs\n = do { us <- newUniqueSupply\n ; let uniqs = uniqsFromSupply us\n ; let (ex_tvs, arg_ids)\n = dataConRepFSInstPat arg_strs uniqs con inst_tys\n\n ; rhs' <- extendIfaceTyVarEnv ex_tvs $\n extendIfaceIdEnv arg_ids $\n tcIfaceExpr rhs\n ; return (DataAlt con, ex_tvs ++ arg_ids, rhs') }\n\\end{code}\n\n\n\\begin{code}\ntcExtCoreBindings :: [IfaceBinding] -> IfL CoreProgram -- Used for external core\ntcExtCoreBindings [] = return []\ntcExtCoreBindings (b:bs) = do_one b (tcExtCoreBindings bs)\n\ndo_one :: IfaceBinding -> IfL [CoreBind] -> IfL [CoreBind]\ndo_one (IfaceNonRec bndr rhs) thing_inside\n = do { rhs' <- tcIfaceExpr rhs\n ; bndr' <- newExtCoreBndr bndr\n ; extendIfaceIdEnv [bndr'] $ do\n { core_binds <- thing_inside\n ; return (NonRec bndr' rhs' : core_binds) }}\n\ndo_one (IfaceRec pairs) thing_inside\n = do { bndrs' <- mapM newExtCoreBndr bndrs\n ; extendIfaceIdEnv bndrs' $ do\n { rhss' <- mapM tcIfaceExpr rhss\n ; core_binds <- thing_inside\n ; return (Rec (bndrs' `zip` rhss') : core_binds) }}\n where\n (bndrs,rhss) = unzip pairs\n\\end{code}\n\n\n%************************************************************************\n%* *\n IdInfo\n%* *\n%************************************************************************\n\n\\begin{code}\ntcIdDetails :: Type -> IfaceIdDetails -> IfL IdDetails\ntcIdDetails _ IfVanillaId = return VanillaId\ntcIdDetails ty (IfDFunId ns)\n = return (DFunId ns (isNewTyCon (classTyCon cls)))\n where\n (_, _, cls, _) = tcSplitDFunTy ty\n\ntcIdDetails _ (IfRecSelId 
tc naughty)\n = do { tc' <- tcIfaceTyCon tc\n ; return (RecSelId { sel_tycon = tc', sel_naughty = naughty }) }\n\ntcIdInfo :: Bool -> Name -> Type -> IfaceIdInfo -> IfL IdInfo\ntcIdInfo ignore_prags name ty info\n | ignore_prags = return vanillaIdInfo\n | otherwise = case info of\n NoInfo -> return vanillaIdInfo\n HasInfo info -> foldlM tcPrag init_info info\n where\n -- Set the CgInfo to something sensible but uninformative before\n -- we start; default assumption is that it has CAFs\n init_info = vanillaIdInfo\n\n tcPrag :: IdInfo -> IfaceInfoItem -> IfL IdInfo\n tcPrag info HsNoCafRefs = return (info `setCafInfo` NoCafRefs)\n tcPrag info (HsArity arity) = return (info `setArityInfo` arity)\n tcPrag info (HsStrictness str) = return (info `setStrictnessInfo` str)\n tcPrag info (HsInline prag) = return (info `setInlinePragInfo` prag)\n\n -- The next two are lazy, so they don't transitively suck stuff in\n tcPrag info (HsUnfold lb if_unf)\n = do { unf <- tcUnfolding name ty info if_unf\n ; let info1 | lb = info `setOccInfo` strongLoopBreaker\n | otherwise = info\n ; return (info1 `setUnfoldingInfoLazily` unf) }\n\\end{code}\n\n\\begin{code}\ntcUnfolding :: Name -> Type -> IdInfo -> IfaceUnfolding -> IfL Unfolding\ntcUnfolding name _ info (IfCoreUnfold stable if_expr)\n = do { dflags <- getDynFlags\n ; mb_expr <- tcPragExpr name if_expr\n ; let unf_src | stable = InlineStable\n | otherwise = InlineRhs\n ; return $ case mb_expr of\n Nothing -> NoUnfolding\n Just expr -> mkUnfolding dflags unf_src\n True {- Top level -}\n (isBottomingSig strict_sig)\n expr\n }\n where\n -- Strictness should occur before unfolding!\n strict_sig = strictnessInfo info\ntcUnfolding name _ _ (IfCompulsory if_expr)\n = do { mb_expr <- tcPragExpr name if_expr\n ; return (case mb_expr of\n Nothing -> NoUnfolding\n Just expr -> mkCompulsoryUnfolding expr) }\n\ntcUnfolding name _ _ (IfInlineRule arity unsat_ok boring_ok if_expr)\n = do { mb_expr <- tcPragExpr name if_expr\n ; return (case 
mb_expr of\n Nothing -> NoUnfolding\n Just expr -> mkCoreUnfolding InlineStable True expr arity\n (UnfWhen unsat_ok boring_ok))\n }\n\ntcUnfolding name dfun_ty _ (IfDFunUnfold bs ops)\n = bindIfaceBndrs bs $ \\ bs' ->\n do { mb_ops1 <- forkM_maybe doc $ mapM tcIfaceExpr ops\n ; return (case mb_ops1 of\n Nothing -> noUnfolding\n Just ops1 -> mkDFunUnfolding bs' (classDataCon cls) ops1) }\n where\n doc = text \"Class ops for dfun\" <+> ppr name\n (_, _, cls, _) = tcSplitDFunTy dfun_ty\n\\end{code}\n\nFor unfoldings we try to do the job lazily, so that we never type check\nan unfolding that isn't going to be looked at.\n\n\\begin{code}\ntcPragExpr :: Name -> IfaceExpr -> IfL (Maybe CoreExpr)\ntcPragExpr name expr\n = forkM_maybe doc $ do\n core_expr' <- tcIfaceExpr expr\n\n -- Check for type consistency in the unfolding\n whenGOptM Opt_DoCoreLinting $ do\n in_scope <- get_in_scope\n case lintUnfolding noSrcLoc in_scope core_expr' of\n Nothing -> return ()\n Just fail_msg -> do { mod <- getIfModule\n ; pprPanic \"Iface Lint failure\"\n (vcat [ ptext (sLit \"In interface for\") <+> ppr mod\n , hang doc 2 fail_msg\n , ppr name <+> equals <+> ppr core_expr'\n , ptext (sLit \"Iface expr =\") <+> ppr expr ]) }\n return core_expr'\n where\n doc = text \"Unfolding of\" <+> ppr name\n\n get_in_scope :: IfL [Var] -- Totally disgusting; but just for linting\n get_in_scope\n = do { (gbl_env, lcl_env) <- getEnvs\n ; rec_ids <- case if_rec_types gbl_env of\n Nothing -> return []\n Just (_, get_env) -> do\n { type_env <- setLclEnv () get_env\n ; return (typeEnvIds type_env) }\n ; return (varEnvElts (if_tv_env lcl_env) ++\n varEnvElts (if_id_env lcl_env) ++\n rec_ids) }\n\\end{code}\n\n\n\n%************************************************************************\n%* *\n Getting from Names to TyThings\n%* *\n%************************************************************************\n\n\\begin{code}\ntcIfaceGlobal :: Name -> IfL TyThing\ntcIfaceGlobal name\n | Just thing <- 
wiredInNameTyThing_maybe name\n -- Wired-in things include TyCons, DataCons, and Ids\n -- Even though we are in an interface file, we want to make\n -- sure the instances and RULES of this thing (particularly TyCon) are loaded\n -- Imagine: f :: Double -> Double\n = do { ifCheckWiredInThing thing; return thing }\n | otherwise\n = do { env <- getGblEnv\n ; case if_rec_types env of { -- Note [Tying the knot]\n Just (mod, get_type_env)\n | nameIsLocalOrFrom mod name\n -> do -- It's defined in the module being compiled\n { type_env <- setLclEnv () get_type_env -- yuk\n ; case lookupNameEnv type_env name of\n Just thing -> return thing\n Nothing -> pprPanic \"tcIfaceGlobal (local): not found:\"\n (ppr name $$ ppr type_env) }\n\n ; _ -> do\n\n { hsc_env <- getTopEnv\n ; mb_thing <- liftIO (lookupTypeHscEnv hsc_env name)\n ; case mb_thing of {\n Just thing -> return thing ;\n Nothing -> do\n\n { mb_thing <- importDecl name -- It's imported; go get it\n ; case mb_thing of\n Failed err -> failIfM err\n Succeeded thing -> return thing\n }}}}}\n\n-- Note [Tying the knot]\n-- ~~~~~~~~~~~~~~~~~~~~~\n-- The if_rec_types field is used in two situations:\n--\n-- a) Compiling M.hs, which indirectly imports Foo.hi, which mentions M.T\n-- Then we look up M.T in M's type environment, which is splatted into if_rec_types\n-- after we've built M's type envt.\n--\n-- b) In ghc --make, during the upsweep, we encounter M.hs, whose interface M.hi\n-- is up to date. So we call typecheckIface on M.hi. This splats M.T into\n-- if_rec_types so that the (lazily typechecked) decls see all the other decls\n--\n-- In case (b) it's important to do the if_rec_types check *before* looking in the HPT\n-- Because if M.hs also has M.hs-boot, M.T will *already be* in the HPT, but in its\n-- emasculated form (e.g. 
lacking data constructors).\n\ntcIfaceTyCon :: IfaceTyCon -> IfL TyCon\ntcIfaceTyCon (IfaceTc name)\n = do { thing <- tcIfaceGlobal name\n ; case thing of -- A \"type constructor\" can be a promoted data constructor\n -- c.f. Trac #5881\n ATyCon tc -> return tc\n AConLike (RealDataCon dc) -> return (promoteDataCon dc)\n _ -> pprPanic \"tcIfaceTyCon\" (ppr name $$ ppr thing) }\n\ntcIfaceKindCon :: IfaceTyCon -> IfL TyCon\ntcIfaceKindCon (IfaceTc name)\n = do { thing <- tcIfaceGlobal name\n ; case thing of -- A \"type constructor\" here is a promoted type constructor\n -- c.f. Trac #5881\n ATyCon tc\n | isSuperKind (tyConKind tc)\n -> return tc -- Mainly just '*' or 'AnyK'\n | Just prom_tc <- promotableTyCon_maybe tc\n -> return prom_tc\n\n _ -> pprPanic \"tcIfaceKindCon\" (ppr name $$ ppr thing) }\n\ntcIfaceCoAxiom :: Name -> IfL (CoAxiom Branched)\ntcIfaceCoAxiom name = do { thing <- tcIfaceGlobal name\n ; return (tyThingCoAxiom thing) }\n\ntcIfaceDataCon :: Name -> IfL DataCon\ntcIfaceDataCon name = do { thing <- tcIfaceGlobal name\n ; case thing of\n AConLike (RealDataCon dc) -> return dc\n _ -> pprPanic \"tcIfaceExtDC\" (ppr name$$ ppr thing) }\n\ntcIfaceExtId :: Name -> IfL Id\ntcIfaceExtId name = do { thing <- tcIfaceGlobal name\n ; case thing of\n AnId id -> return id\n _ -> pprPanic \"tcIfaceExtId\" (ppr name$$ ppr thing) }\n\\end{code}\n\n%************************************************************************\n%* *\n Bindings\n%* *\n%************************************************************************\n\n\\begin{code}\nbindIfaceBndr :: IfaceBndr -> (CoreBndr -> IfL a) -> IfL a\nbindIfaceBndr (IfaceIdBndr (fs, ty)) thing_inside\n = do { name <- newIfaceName (mkVarOccFS fs)\n ; ty' <- tcIfaceType ty\n ; let id = mkLocalId name ty'\n ; extendIfaceIdEnv [id] (thing_inside id) }\nbindIfaceBndr (IfaceTvBndr bndr) thing_inside\n = bindIfaceTyVar bndr thing_inside\n\nbindIfaceBndrs :: [IfaceBndr] -> ([CoreBndr] -> IfL a) -> IfL a\nbindIfaceBndrs [] 
thing_inside = thing_inside []\nbindIfaceBndrs (b:bs) thing_inside\n = bindIfaceBndr b $ \\ b' ->\n bindIfaceBndrs bs $ \\ bs' ->\n thing_inside (b':bs')\n\n-----------------------\nnewExtCoreBndr :: IfaceLetBndr -> IfL Id\nnewExtCoreBndr (IfLetBndr var ty _) -- Ignoring IdInfo for now\n = do { mod <- getIfModule\n ; name <- newGlobalBinder mod (mkVarOccFS var) noSrcSpan\n ; ty' <- tcIfaceType ty\n ; return (mkLocalId name ty') }\n\n-----------------------\nbindIfaceTyVar :: IfaceTvBndr -> (TyVar -> IfL a) -> IfL a\nbindIfaceTyVar (occ,kind) thing_inside\n = do { name <- newIfaceName (mkTyVarOccFS occ)\n ; tyvar <- mk_iface_tyvar name kind\n ; extendIfaceTyVarEnv [tyvar] (thing_inside tyvar) }\n\nbindIfaceTyVars :: [IfaceTvBndr] -> ([TyVar] -> IfL a) -> IfL a\nbindIfaceTyVars bndrs thing_inside\n = do { names <- newIfaceNames (map mkTyVarOccFS occs)\n ; let (kis_kind, tys_kind) = span isSuperIfaceKind kinds\n (kis_name, tys_name) = splitAt (length kis_kind) names\n -- We need to bring the kind variables in scope since type\n -- variables may mention them.\n ; kvs <- zipWithM mk_iface_tyvar kis_name kis_kind\n ; extendIfaceTyVarEnv kvs $ do\n { tvs <- zipWithM mk_iface_tyvar tys_name tys_kind\n ; extendIfaceTyVarEnv tvs (thing_inside (kvs ++ tvs)) } }\n where\n (occs,kinds) = unzip bndrs\n\nbindIfaceIdVar :: IfaceIdBndr -> (Id -> IfL a) -> IfL a\nbindIfaceIdVar (occ, ty) thing_inside\n = do { name <- newIfaceName (mkVarOccFS occ)\n ; ty' <- tcIfaceType ty\n ; let id = mkLocalId name ty'\n ; extendIfaceIdEnv [id] (thing_inside id) }\n\nbindIfaceIdVars :: [IfaceIdBndr] -> ([Id] -> IfL a) -> IfL a\nbindIfaceIdVars [] thing_inside = thing_inside []\nbindIfaceIdVars (v:vs) thing_inside\n = bindIfaceIdVar v $ \\ v' ->\n bindIfaceIdVars vs $ \\ vs' ->\n thing_inside (v':vs')\n\nisSuperIfaceKind :: IfaceKind -> Bool\nisSuperIfaceKind (IfaceTyConApp (IfaceTc n) []) = n == superKindTyConName\nisSuperIfaceKind _ = False\n\nmk_iface_tyvar :: Name -> IfaceKind -> IfL 
TyVar\nmk_iface_tyvar name ifKind\n = do { kind <- tcIfaceKind ifKind\n ; return (Var.mkTyVar name kind) }\n\nbindIfaceTyVars_AT :: [IfaceTvBndr] -> ([TyVar] -> IfL a) -> IfL a\n-- Used for type variable in nested associated data\/type declarations\n-- where some of the type variables are already in scope\n-- class C a where { data T a b }\n-- Here 'a' is in scope when we look at the 'data T'\nbindIfaceTyVars_AT [] thing_inside\n = thing_inside []\nbindIfaceTyVars_AT (b@(tv_occ,_) : bs) thing_inside\n = do { mb_tv <- lookupIfaceTyVar tv_occ\n ; let bind_b :: (TyVar -> IfL a) -> IfL a\n bind_b = case mb_tv of\n Just b' -> \\k -> k b'\n Nothing -> bindIfaceTyVar b\n ; bind_b $ \\b' ->\n bindIfaceTyVars_AT bs $ \\bs' ->\n thing_inside (b':bs') }\n\\end{code}\n","avg_line_length":44.3010685104,"max_line_length":120,"alphanum_fraction":0.5562192302} {"size":13978,"ext":"lhs","lang":"Literate Haskell","max_stars_count":41.0,"content":"\\begin{code}\n{-# OPTIONS_GHC -fno-implicit-prelude #-}\n-----------------------------------------------------------------------------\n-- |\n-- Module : GHC.Real\n-- Copyright : (c) The FFI Task Force, 1994-2002\n-- License : see libraries\/base\/LICENSE\n-- \n-- Maintainer : cvs-ghc@haskell.org\n-- Stability : internal\n-- Portability : non-portable (GHC Extensions)\n--\n-- The types 'Ratio' and 'Rational', and the classes 'Real', 'Fractional',\n-- 'Integral', and 'RealFrac'.\n--\n-----------------------------------------------------------------------------\n\n-- #hide\nmodule GHC.Real where\n\nimport {-# SOURCE #-} GHC.Err\nimport GHC.Base\nimport GHC.Num\nimport GHC.List\nimport GHC.Enum\nimport GHC.Show\n\ninfixr 8 ^, ^^\ninfixl 7 \/, `quot`, `rem`, `div`, `mod`\ninfixl 7 %\n\ndefault ()\t\t-- Double isn't available yet, \n\t\t\t-- and we shouldn't be using defaults anyway\n\\end{code}\n\n\n%*********************************************************\n%*\t\t\t\t\t\t\t*\n\\subsection{The @Ratio@ and @Rational@ 
types}\n%*\t\t\t\t\t\t\t*\n%*********************************************************\n\n\\begin{code}\n-- | Rational numbers, with numerator and denominator of some 'Integral' type.\ndata (Integral a)\t=> Ratio a = !a :% !a deriving (Eq)\n\n-- | Arbitrary-precision rational numbers, represented as a ratio of\n-- two 'Integer' values. A rational number may be constructed using\n-- the '%' operator.\ntype Rational\t\t= Ratio Integer\n\nratioPrec, ratioPrec1 :: Int\nratioPrec = 7 \t-- Precedence of ':%' constructor\nratioPrec1 = ratioPrec + 1\n\ninfinity, notANumber :: Rational\ninfinity = 1 :% 0\nnotANumber = 0 :% 0\n\n-- Use :%, not % for Inf\/NaN; the latter would \n-- immediately lead to a runtime error, because it normalises. \n\\end{code}\n\n\n\\begin{code}\n-- | Forms the ratio of two integral numbers.\n{-# SPECIALISE (%) :: Integer -> Integer -> Rational #-}\n(%)\t\t\t:: (Integral a) => a -> a -> Ratio a\n\n-- | Extract the numerator of the ratio in reduced form:\n-- the numerator and denominator have no common factor and the denominator\n-- is positive.\nnumerator\t:: (Integral a) => Ratio a -> a\n\n-- | Extract the denominator of the ratio in reduced form:\n-- the numerator and denominator have no common factor and the denominator\n-- is positive.\ndenominator\t:: (Integral a) => Ratio a -> a\n\\end{code}\n\n\\tr{reduce} is a subsidiary function used only in this module .\nIt normalises a ratio by dividing both numerator and denominator by\ntheir greatest common divisor.\n\n\\begin{code}\nreduce :: (Integral a) => a -> a -> Ratio a\n{-# SPECIALISE reduce :: Integer -> Integer -> Rational #-}\nreduce _ 0\t\t= error \"Ratio.%: zero denominator\"\nreduce x y\t\t= (x `quot` d) :% (y `quot` d)\n\t\t\t where d = gcd x y\n\\end{code}\n\n\\begin{code}\nx % y\t\t\t= reduce (x * signum y) (abs y)\n\nnumerator (x :% _)\t= x\ndenominator (_ :% y)\t= y\n\\end{code}\n\n\n%*********************************************************\n%*\t\t\t\t\t\t\t*\n\\subsection{Standard 
numeric classes}\n%*\t\t\t\t\t\t\t*\n%*********************************************************\n\n\\begin{code}\nclass (Num a, Ord a) => Real a where\n -- | the rational equivalent of its real argument with full precision\n toRational\t\t:: a -> Rational\n\n-- | Integral numbers, supporting integer division.\n--\n-- Minimal complete definition: 'quotRem' and 'toInteger'\nclass (Real a, Enum a) => Integral a where\n -- | integer division truncated toward zero\n quot\t\t:: a -> a -> a\n -- | integer remainder, satisfying\n --\n -- > (x `quot` y)*y + (x `rem` y) == x\n rem\t\t\t:: a -> a -> a\n -- | integer division truncated toward negative infinity\n div\t\t\t:: a -> a -> a\n -- | integer modulus, satisfying\n --\n -- > (x `div` y)*y + (x `mod` y) == x\n mod\t\t\t:: a -> a -> a\n -- | simultaneous 'quot' and 'rem'\n quotRem\t\t:: a -> a -> (a,a)\n -- | simultaneous 'div' and 'mod'\n divMod\t\t:: a -> a -> (a,a)\n -- | conversion to 'Integer'\n toInteger\t\t:: a -> Integer\n\n n `quot` d\t\t= q where (q,_) = quotRem n d\n n `rem` d\t\t= r where (_,r) = quotRem n d\n n `div` d\t\t= q where (q,_) = divMod n d\n n `mod` d\t\t= r where (_,r) = divMod n d\n divMod n d \t\t= if signum r == negate (signum d) then (q-1, r+d) else qr\n\t\t\t where qr@(q,r) = quotRem n d\n\n-- | Fractional numbers, supporting real division.\n--\n-- Minimal complete definition: 'fromRational' and ('recip' or @('\/')@)\nclass (Num a) => Fractional a where\n -- | fractional division\n (\/)\t\t\t:: a -> a -> a\n -- | reciprocal fraction\n recip\t\t:: a -> a\n -- | Conversion from a 'Rational' (that is @'Ratio' 'Integer'@).\n -- A floating literal stands for an application of 'fromRational'\n -- to a value of type 'Rational', so such literals have type\n -- @('Fractional' a) => a@.\n fromRational\t:: Rational -> a\n\n recip x\t\t= 1 \/ x\n x \/ y\t\t= x * recip y\n\n-- | Extracting components of fractions.\n--\n-- Minimal complete definition: 'properFraction'\nclass (Real a, Fractional a) => 
RealFrac a where\n -- | The function 'properFraction' takes a real fractional number @x@\n -- and returns a pair @(n,f)@ such that @x = n+f@, and:\n --\n -- * @n@ is an integral number with the same sign as @x@; and\n --\n -- * @f@ is a fraction with the same type and sign as @x@,\n -- and with absolute value less than @1@.\n --\n -- The default definitions of the 'ceiling', 'floor', 'truncate'\n -- and 'round' functions are in terms of 'properFraction'.\n properFraction\t:: (Integral b) => a -> (b,a)\n -- | @'truncate' x@ returns the integer nearest @x@ between zero and @x@\n truncate\t\t:: (Integral b) => a -> b\n -- | @'round' x@ returns the nearest integer to @x@\n round\t\t:: (Integral b) => a -> b\n -- | @'ceiling' x@ returns the least integer not less than @x@\n ceiling\t\t:: (Integral b) => a -> b\n -- | @'floor' x@ returns the greatest integer not greater than @x@\n floor\t\t:: (Integral b) => a -> b\n\n truncate x\t\t= m where (m,_) = properFraction x\n \n round x\t\t= let (n,r) = properFraction x\n \t\t\t m = if r < 0 then n - 1 else n + 1\n \t\t\t in case signum (abs r - 0.5) of\n \t\t\t\t-1 -> n\n \t\t\t \t0 -> if even n then n else m\n \t\t\t\t1 -> m\n \n ceiling x\t\t= if r > 0 then n + 1 else n\n \t\t\t where (n,r) = properFraction x\n \n floor x\t\t= if r < 0 then n - 1 else n\n \t\t\t where (n,r) = properFraction x\n\\end{code}\n\n\nThese 'numeric' enumerations come straight from the Report\n\n\\begin{code}\nnumericEnumFrom\t\t:: (Fractional a) => a -> [a]\nnumericEnumFrom\t\t= iterate (+1)\n\nnumericEnumFromThen\t:: (Fractional a) => a -> a -> [a]\nnumericEnumFromThen n m\t= iterate (+(m-n)) n\n\nnumericEnumFromTo :: (Ord a, Fractional a) => a -> a -> [a]\nnumericEnumFromTo n m = takeWhile (<= m + 1\/2) (numericEnumFrom n)\n\nnumericEnumFromThenTo :: (Ord a, Fractional a) => a -> a -> a -> [a]\nnumericEnumFromThenTo e1 e2 e3 = takeWhile pred (numericEnumFromThen e1 e2)\n\t\t\t\twhere\n\t\t\t\t mid = (e2 - e1) \/ 2\n\t\t\t\t pred | e2 >= e1 = (<= 
e3 + mid)\n\t\t\t\t | otherwise = (>= e3 + mid)\n\\end{code}\n\n\n%*********************************************************\n%*\t\t\t\t\t\t\t*\n\\subsection{Instances for @Int@}\n%*\t\t\t\t\t\t\t*\n%*********************************************************\n\n\\begin{code}\ninstance Real Int where\n toRational x\t= toInteger x % 1\n\ninstance Integral Int\twhere\n toInteger i = int2Integer i -- give back a full-blown Integer\n\n a `quot` 0 = divZeroError\n a `quot` b\t= a `quotInt` b\n\n a `rem` 0 = divZeroError\n a `rem` b\t= a `remInt` b\n\n a `div` 0 = divZeroError\n a `div` b = a `divInt` b\n\n a `mod` 0 = divZeroError\n a `mod` b = a `modInt` b\n\n a `quotRem` 0 = divZeroError\n a `quotRem` b = a `quotRemInt` b\n\n a `divMod` 0 = divZeroError\n a `divMod` b = a `divModInt` b\n\\end{code}\n\n\n%*********************************************************\n%*\t\t\t\t\t\t\t*\n\\subsection{Instances for @Integer@}\n%*\t\t\t\t\t\t\t*\n%*********************************************************\n\n\\begin{code}\ninstance Real Integer where\n toRational x\t= x % 1\n\ninstance Integral Integer where\n toInteger n\t = n\n\n a `quot` 0 = divZeroError\n n `quot` d = n `quotInteger` d\n\n a `rem` 0 = divZeroError\n n `rem` d = n `remInteger` d\n\n a `divMod` 0 = divZeroError\n a `divMod` b = a `divModInteger` b\n\n a `quotRem` 0 = divZeroError\n a `quotRem` b = a `quotRemInteger` b\n\n -- use the defaults for div & mod\n\\end{code}\n\n\n%*********************************************************\n%*\t\t\t\t\t\t\t*\n\\subsection{Instances for @Ratio@}\n%*\t\t\t\t\t\t\t*\n%*********************************************************\n\n\\begin{code}\ninstance (Integral a)\t=> Ord (Ratio a) where\n {-# SPECIALIZE instance Ord Rational #-}\n (x:%y) <= (x':%y')\t= x * y' <= x' * y\n (x:%y) < (x':%y')\t= x * y' < x' * y\n\ninstance (Integral a)\t=> Num (Ratio a) where\n {-# SPECIALIZE instance Num Rational #-}\n (x:%y) + (x':%y')\t= reduce (x*y' + x'*y) (y*y')\n (x:%y) - (x':%y')\t= 
reduce (x*y' - x'*y) (y*y')\n (x:%y) * (x':%y')\t= reduce (x * x') (y * y')\n negate (x:%y)\t= (-x) :% y\n abs (x:%y)\t\t= abs x :% y\n signum (x:%_)\t= signum x :% 1\n fromInteger x\t= fromInteger x :% 1\n\ninstance (Integral a)\t=> Fractional (Ratio a) where\n {-# SPECIALIZE instance Fractional Rational #-}\n (x:%y) \/ (x':%y')\t= (x*y') % (y*x')\n recip (x:%y)\t= y % x\n fromRational (x:%y) = fromInteger x :% fromInteger y\n\ninstance (Integral a)\t=> Real (Ratio a) where\n {-# SPECIALIZE instance Real Rational #-}\n toRational (x:%y)\t= toInteger x :% toInteger y\n\ninstance (Integral a)\t=> RealFrac (Ratio a) where\n {-# SPECIALIZE instance RealFrac Rational #-}\n properFraction (x:%y) = (fromInteger (toInteger q), r:%y)\n\t\t\t where (q,r) = quotRem x y\n\ninstance (Integral a) => Show (Ratio a) where\n {-# SPECIALIZE instance Show Rational #-}\n showsPrec p (x:%y)\t= showParen (p > ratioPrec) $\n\t\t\t showsPrec ratioPrec1 x . \n\t\t\t showString \"%\" . \t-- H98 report has spaces round the %\n\t\t\t\t\t\t-- but we removed them [May 04]\n\t\t\t showsPrec ratioPrec1 y\n\ninstance (Integral a)\t=> Enum (Ratio a) where\n {-# SPECIALIZE instance Enum Rational #-}\n succ x\t = x + 1\n pred x\t = x - 1\n\n toEnum n = fromInteger (int2Integer n) :% 1\n fromEnum = fromInteger . truncate\n\n enumFrom\t\t= numericEnumFrom\n enumFromThen \t= numericEnumFromThen\n enumFromTo\t\t= numericEnumFromTo\n enumFromThenTo\t= numericEnumFromThenTo\n\\end{code}\n\n\n%*********************************************************\n%*\t\t\t\t\t\t\t*\n\\subsection{Coercions}\n%*\t\t\t\t\t\t\t*\n%*********************************************************\n\n\\begin{code}\n-- | general coercion from integral types\nfromIntegral :: (Integral a, Num b) => a -> b\nfromIntegral = fromInteger . 
toInteger\n\n{-# RULES\n\"fromIntegral\/Int->Int\" fromIntegral = id :: Int -> Int\n #-}\n\n-- | general coercion to fractional types\nrealToFrac :: (Real a, Fractional b) => a -> b\nrealToFrac = fromRational . toRational\n\n{-# RULES\n\"realToFrac\/Int->Int\" realToFrac = id :: Int -> Int\n #-}\n\\end{code}\n\n%*********************************************************\n%*\t\t\t\t\t\t\t*\n\\subsection{Overloaded numeric functions}\n%*\t\t\t\t\t\t\t*\n%*********************************************************\n\n\\begin{code}\n-- | Converts a possibly-negative 'Real' value to a string.\nshowSigned :: (Real a)\n => (a -> ShowS)\t-- ^ a function that can show unsigned values\n -> Int\t\t-- ^ the precedence of the enclosing context\n -> a\t\t\t-- ^ the value to show\n -> ShowS\nshowSigned showPos p x \n | x < 0 = showParen (p > 6) (showChar '-' . showPos (-x))\n | otherwise = showPos x\n\neven, odd\t:: (Integral a) => a -> Bool\neven n\t\t= n `rem` 2 == 0\nodd\t\t= not . even\n\n-------------------------------------------------------\n-- | raise a number to a non-negative integral power\n{-# SPECIALISE (^) ::\n\tInteger -> Integer -> Integer,\n\tInteger -> Int -> Integer,\n\tInt -> Int -> Int #-}\n(^)\t\t:: (Num a, Integral b) => a -> b -> a\n_ ^ 0\t\t= 1\nx ^ n | n > 0\t= f x (n-1) x\n\t\t where f _ 0 y = y\n\t\t f a d y = g a d where\n\t\t\t g b i | even i = g (b*b) (i `quot` 2)\n\t\t\t\t | otherwise = f b (i-1) (b*y)\n_ ^ _\t\t= error \"Prelude.^: negative exponent\"\n\n-- | raise a number to an integral power\n{-# SPECIALISE (^^) ::\n\tRational -> Int -> Rational #-}\n(^^)\t\t:: (Fractional a, Integral b) => a -> b -> a\nx ^^ n\t\t= if n >= 0 then x^n else recip (x^(negate n))\n\n\n-------------------------------------------------------\n-- | @'gcd' x y@ is the greatest (positive) integer that divides both @x@\n-- and @y@; for example @'gcd' (-3) 6@ = @3@, @'gcd' (-3) (-6)@ = @3@,\n-- @'gcd' 0 4@ = @4@. 
@'gcd' 0 0@ raises a runtime error.\ngcd\t\t:: (Integral a) => a -> a -> a\ngcd 0 0\t\t= error \"Prelude.gcd: gcd 0 0 is undefined\"\ngcd x y\t\t= gcd' (abs x) (abs y)\n\t\t where gcd' a 0 = a\n\t\t\t gcd' a b = gcd' b (a `rem` b)\n\n-- | @'lcm' x y@ is the smallest positive integer that both @x@ and @y@ divide.\nlcm\t\t:: (Integral a) => a -> a -> a\n{-# SPECIALISE lcm :: Int -> Int -> Int #-}\nlcm _ 0\t\t= 0\nlcm 0 _\t\t= 0\nlcm x y\t\t= abs ((x `quot` (gcd x y)) * y)\n\n\n{-# RULES\n\"gcd\/Int->Int->Int\" gcd = gcdInt\n\"gcd\/Integer->Integer->Integer\" gcd = gcdInteger\n\"lcm\/Integer->Integer->Integer\" lcm = lcmInteger\n #-}\n\nintegralEnumFrom :: (Integral a, Bounded a) => a -> [a]\nintegralEnumFrom n = map fromInteger [toInteger n .. toInteger (maxBound `asTypeOf` n)]\n\nintegralEnumFromThen :: (Integral a, Bounded a) => a -> a -> [a]\nintegralEnumFromThen n1 n2\n | i_n2 >= i_n1 = map fromInteger [i_n1, i_n2 .. toInteger (maxBound `asTypeOf` n1)]\n | otherwise = map fromInteger [i_n1, i_n2 .. toInteger (minBound `asTypeOf` n1)]\n where\n i_n1 = toInteger n1\n i_n2 = toInteger n2\n\nintegralEnumFromTo :: Integral a => a -> a -> [a]\nintegralEnumFromTo n m = map fromInteger [toInteger n .. toInteger m]\n\nintegralEnumFromThenTo :: Integral a => a -> a -> a -> [a]\nintegralEnumFromThenTo n1 n2 m\n = map fromInteger [toInteger n1, toInteger n2 .. 
toInteger m]\n\\end{code}\n","avg_line_length":30.9247787611,"max_line_length":87,"alphanum_fraction":0.5399914151} {"size":4865,"ext":"lhs","lang":"Literate Haskell","max_stars_count":null,"content":"> {-# LANGUAGE DataKinds #-}\r\n> {-# LANGUAGE KindSignatures #-}\r\n> {-# LANGUAGE GADTs #-}\r\n> \r\n> module SampleGen where\r\n> \r\n> import System.Random\r\n> import Data.Array\r\n>\r\n> import IPFTools\r\n>\r\n> -- generates a random vector in the given bounds, and also returns the random generator\r\n>\r\n> randomEntry :: (Vect n Int, Vect n Int) -> StdGen -> (Vect n Int, StdGen)\r\n> randomEntry (Nil , Nil) baseGen = (Nil, baseGen)\r\n> randomEntry ((x:#v),(y:#w)) baseGen = ((newNum:#oldNums), newGen) where\r\n> \t\t\t\t\t\t\t\t\t\t\t(oldNums, oldGen) = randomEntry (v, w) baseGen\r\n>\t\t\t\t\t\t\t\t\t\t \t(newNum, newGen) = randomR (x, y) oldGen \t\t\r\n>\r\n> -- generates a list of random vectors in the given bounds, the length of a list will be the given natural number,\r\n> -- also returns the random generator\r\n>\r\n> randomEntries :: (Vect n Int, Vect n Int) -> StdGen -> Nat -> ([Vect n Int], StdGen)\r\n> randomEntries bounds baseGen Z = ([], baseGen)\r\n> randomEntries bounds baseGen (S k) = (newEntrty : oldEntries, newGen) where\r\n> (oldEntries, oldGen) = randomEntries bounds baseGen k\r\n> \t (newEntrty, newGen) = randomEntry bounds oldGen\r\n>\r\n> -- adds an integer to a (n-dimentional) matrix\/tensor of integers at a given entry, if that entry is within the bounds of the matrix\r\n>\r\n> put :: Int -> Array (Vect n Int) Int -> Vect n Int -> Array (Vect n Int) Int\r\n> put x mx v = array (bounds mx) newAssocs where\r\n> newAssocs = map maybePut (assocs mx)\r\n> maybePut (w,z) | w == v = (w, z+x)\r\n> | otherwise = (w, z)\t\t\t\r\n>\r\n> -- randomly adds the given number to different places in the matrix.\r\n>\r\n> sprinkle :: Int -> ((Array (Vect n Int) Int), StdGen) -> ((Array (Vect n Int) Int), StdGen)\r\n> sprinkle x (mx, g) = (foldl (put x) 
mx entries, newGen) where\r\n>\t\t\t\t(entries, newGen) = randomEntries (bounds mx) g (intToNat (9*(rangeSize (bounds mx))))\r\n>\r\n>\r\n> -- sprinkles differently sized values on a matrix whose entries are all 1 at the start. \r\n> -- in the end the average value of a cell is 1000\r\n>\r\n> randomMatrix1 :: (Vect n Int, Vect n Int) -> StdGen -> ((Array (Vect n Int) Int), StdGen)\r\n> randomMatrix1 bounds gen = sprinkle 1 $ sprinkle 10 $ sprinkle 100 ((listArray bounds [1,1..]), gen)\r\n>\r\n>\r\n> -- creates a list of random integers between 1 and 2000, its length is given by the first argument.\r\n>\r\n> randomValues :: Int -> StdGen -> ([Int], StdGen)\r\n> randomValues n baseGen | n<=0 = ([], baseGen)\r\n> | otherwise = (newVal:oldVals, newGen) where\r\n> (oldVals, oldGen) = randomValues (n-1) baseGen \r\n> (newVal, newGen) = randomR (1, 2000) oldGen\r\n>\r\n>\r\n> -- creates a matrix with completely random values between 1 and 2000, in this function there will not be\r\n> -- a guaranteed average of 1000, but it will on average be roughly 1000 \r\n>\r\n> randomMatrix2 :: (Vect n Int, Vect n Int) -> StdGen -> ((Array (Vect n Int) Int), StdGen)\r\n> randomMatrix2 bounds gen = (listArray bounds values, newGen) where\r\n> (values, newGen) = randomValues (rangeSize bounds) gen \r\n>\r\n> -- creates a random marginal \r\n>\r\n> randomMarginal :: (Vect (S n) Int, Vect (S n) Int) -> StdGen -> Fin (S n) -> (Array (Vect (S Z) Int) Int, StdGen)\r\n> randomMarginal (lb, ub) gen d = sprinkle (1*otherdims) $ sprinkle (10*otherdims) $ \r\n> sprinkle (100*otherdims) $ sprinkle (1000*otherdims) $ \r\n> ((listArray ((0:#Nil), ((thisDim - 1):#Nil)) [otherdims, otherdims..]), gen) where\r\n> thisDim = rangeSize (vGet d lb, vGet d ub)\r\n> otherdims = div (rangeSize (lb, ub)) thisDim \r\n>\r\n> -- creates all marginals, randomly.\r\n>\r\n> randomMarginals :: (Vect n Int, Vect n Int) -> StdGen -> TypeNat n -> Vect k (Fin n) -> ((Vect k [Int]), StdGen)\r\n> randomMarginals bounds 
baseGen t Nil = (Nil, baseGen)\r\n> randomMarginals bounds baseGen (TS t) (i:#is) = (((elems newMarg) :# oldMargs), newGen) where\r\n> (oldMargs, oldGen) = randomMarginals bounds baseGen (TS t) is\r\n> (newMarg, newGen) = randomMarginal bounds oldGen i \r\n>\r\n> -- generates a random set of IPFData (i. e. a matrix with marginals)\r\n> \r\n> randomData :: TypeNat n -> (Vect n Int, Vect n Int) -> StdGen -> (IPFData n Int, StdGen)\r\n> randomData t bounds gen0 = ((matrix, marginals), gen2) where\r\n> (matrix, gen1) = randomMatrix2 bounds gen0\r\n> (marginals, gen2) = randomMarginals bounds gen1 t (allFin t)\r\n>\r\n","avg_line_length":53.4615384615,"max_line_length":151,"alphanum_fraction":0.5632065776} {"size":968,"ext":"lhs","lang":"Literate Haskell","max_stars_count":6.0,"content":"> module Typeclasses.Monoid.Class where\n\n> import Typeclasses.Semigroup.Class\n\n\nThe class of monoids (types with an associative binary operation that\nhas an identity). Instances should satisfy the following laws:\n\n x <> mempty = x\n mempty <> x = x\n x <> (y <> z) = (x <> y) <> z (Semigroup law)\n mconcat = foldr '(<>)' mempty\n\nThe method names refer to the monoid of lists under concatenation, but\nthere are many other instances.\n\nSome types can be viewed as a monoid in more than one way, e.g. both\naddition and multiplication on numbers. In such cases we often define\nnewtypes and make those instances of Monoid, e.g. 
Sum and Product.\n\n> class Semigroup a => Monoid a where\n> mempty :: a\n> mappend :: a -> a -> a\n> mappend = (<>)\n\n TODO: Solve Foldable\/Monoid import cycle\n\n> mconcat :: [a] -> a\n> mconcat = foldr mappend mempty\n> where foldr f z =\n> let go [] = z\n> go (y:ys) = y `f` go ys \n> in go\n","avg_line_length":28.4705882353,"max_line_length":70,"alphanum_fraction":0.6425619835} {"size":11429,"ext":"lhs","lang":"Literate Haskell","max_stars_count":1.0,"content":"Random Matrices as Pure Association Lists\n=========================================\n\nThis module provides a simple generation of different types of random matrices (e.g. triangle or\ndiagonal matrices). The randomness is provided by the `System.Random` implementation and\nis thus repeatable, which makes it very useful for testing.\n\n> module RandomMatrix (\n> randomSquareMatLike,\n> randomDiagonalLike,\n> randomTriangleLike,\n> randomStrictTriangleLike,\n> \n> MatLike,\n> chopUniform,\n> \n> -- * Re-export from \"System.Random\" to simplify testing\n> mkStdGen,\n> StdGen,\n> Random\n> ) where\n> \n> import Control.Arrow ( first )\n> \n> import Data.Maybe ( mapMaybe )\n> \n> import System.Random ( Random (..), RandomGen, StdGen, split, mkStdGen )\n> import System.Random.Shuffle ( shuffle' )\n> \n> import KleeneAlgebra ( Tropical ( .. ), Regular ( .. ), Balance )\n\nGeneration of random matrices\n-----------------------------\n\nFor simplicity only the \"pure association lists\" of matrices are generated, which are represented\nby the stacked association lists. Values of the type `MatLike a` are required to satisfy\nthe following two conditions:\n\n* Its indices are exactly *0* to *n - 1* for some natural number *n* in precisely this order.\n* The value at every index is an association list that is sorted with respect to its indices.\n\n> type MatLike a = [(Int, [(Int, a)])]\n\nThis function generates the necessary values. 
First, it creates a table of the right size and fills\nit with the correct number of \"non-zero\" values at the first positions and with \"zeroes\" at the\nremaining positions. Then it uses the supplied random generator to shuffle the table. Finally, the\ntable is reduced to an association list.\nAny density larger than 1 behaves as 1 and every density smaller than 0 behaves as 0.\n\n> randomMatLikeWith :: (RandomGen g, Random a) =>\n> g -- ^ random generator\n> -> Int -- ^ number of rows\n> -> Int -- ^ number of columns\n> -> (Int -> Int -> Int) -- ^ function that computes the number of entries\n> -- in the matrix, i.e. @(*)@\n> -> ([Maybe a] -> MatLike a) -- ^ function that splits the overall list into\n> -- sublists which then become rows\n> -> Double -- ^ density, i.e. percentage of edges, which is\n> -- a 'Double' value between \/0\/ and \/1\/\n> -> (a, a) -- ^ Lower\\\/upper bounds for the random values\n> -> MatLike a\n> randomMatLikeWith g rs cs size resize d lu = resize shuffled where\n> shuffled = shuffle' toGo entries g2\n> entries = size rs cs\n> fill = floor (fromIntegral entries * d)\n> toGo = map Just (take fill (randomRs lu g1)) -- \\\"interesting\\\" values\n> ++ replicate (entries - fill) Nothing -- \\\"zeroes\\\"\n> (g1, g2) = split g\n\nThis function creates a random matrix by computing the necessary number of entries,\nthen shuffling them and finally splitting them into uniform chunks which are then\nused as rows.\n\n> randomMatLike ::\n> (RandomGen g, Random a) => g -- ^ random generator\n> -> Int -- ^ number of rows\n> -> Int -- ^ number of columns\n> -> Double -- ^ density (\/0 <= d <= 1\/)\n> -> (a, a) -- ^ lower\\\/upper bounds\n> -> MatLike a\n> randomMatLike gen rows cols = randomMatLikeWith gen rows cols (*) (resizeWith (chopUniform cols))\n\nA random graph is a random matrix with the same number of rows and columns.\n\n> randomSquareMatLike ::\n> (RandomGen g, Random a) => g -- ^ random generator\n> -> Int -- ^ number of rows and 
columns\n> -> Double -- ^ density (\/0 <= d <= 1\/)\n> -> (a, a) -- ^ lower\\\/upper bounds\n> -> MatLike a\n> randomSquareMatLike gen size = randomMatLike gen size size\n\nCreates a random diagonal square matrix. Please note that the density is\ncomputed w.r.t. the diagonal and *not* the number of entries altogether. That\nis: `randomDiagonalLike (mkStdGen 1234) 10 0.3 (0, 1)` will create a square matrix\nwith exactly three (not thirty) entries.\n\n> randomDiagonalLike ::\n> (RandomGen g, Random a) => g -- ^ random generator\n> -> Int -- ^ number of rows and columns\n> -> Double -- ^ density (\/0 <= d <= 1\/)\n> -> (a, a) -- ^ lower\\\/upper bounds\n> -> MatLike a\n> randomDiagonalLike gen size = randomMatLikeWith gen size size const resize where\n> \n> resize = zipWith (\\i mv -> (i, maybe [] (\\v -> [(i, v)]) mv)) [0..]\n\nCreates a random triangle square matrix. As with `randomDiagonalLike` the density\nrefers to the density of the triangle. That is the number of entries in the matrix will\nbe `floor (density * size * (size + 1) \/ 2)`.\n\n> randomTriangleLike ::\n> (RandomGen g, Random a) => g -- ^ random generator\n> -> Int -- ^ number of rows and columns\n> -> Double -- ^ density (\/0 <= d <= 1\/)\n> -> (a, a) -- ^ lower\\\/upper bounds\n> -> MatLike a\n> randomTriangleLike gen size = randomMatLikeWith gen size size f (resizeWith chopTriangle) where\n> \n> f n _ = n * (n + 1) `div` 2\n\nCreates a random strict triangle matrix (no entries at the diagonal). 
The density refers to the\ndensity of the strict triangle, that is the number of entries is\n`floor (density * size * (size - 1) \/ 2)`.\n\n> randomStrictTriangleLike ::\n> (RandomGen g, Random a) => g -- ^ random generator\n> -> Int -- ^ number of rows and columns\n> -> Double -- ^ density (\/0 <= d <= 1\/)\n> -> (a, a) -- ^ lower\\\/upper bounds\n> -> MatLike a\n> randomStrictTriangleLike gen size = randomMatLikeWith gen size size f (resizeWith chopStrictTriangle)\n> where f n _ = n * (n - 1) `div` 2\n\nRandom instances\n----------------\n\n`Random` instance for pairs of Random instances that simply generates two values in\nsequence.\n\n> instance (Random a, Random b) => Random (a, b) where\n> \n> randomR ((la, lb), (ua, ub)) g = ((x, y), g'') where\n> (x, g') = randomR (la, ua) g\n> (y, g'') = randomR (lb, ub) g'\n>\n> random g = ((x, y), g'') where\n> (x, g') = random g\n> (y, g'') = random g'\n\nWe define a simple instance of `Random` for `Tropical`.\nWhen both bounds are actual weights, we use the inner values to generate a random value and\nthen wrap it in a `Weight` constructor.\nWhen the first bound is `MinWeight`, we interpret it as `toEnum 0` and use the latter as a lower\nbound, i.e. we identify `MinWeight == toEnum 0`. If none of the above applies, we ignore the bounds.\nThis way you cannot generate a `MaxWeight` value.\n\n> instance (Random a, Ord a, Enum a) => Random (Tropical a) where\n> randomR (Weight l, Weight u) = wrapWeight . randomR (l, u)\n> randomR (MinWeight, Weight u) = wrapWeight . randomR (toEnum 0, u)\n> randomR (Weight _, MaxWeight) = wrapWeight . random\n> randomR (u, l) | u > l = randomR (l, u)\n> | otherwise = wrapWeight . 
random\n> random = randomBounded\n\n> wrapWeight :: (Enum a, Eq a) => (a, x) -> (Tropical a, x)\n> wrapWeight = first toWeight\n\n> toWeight :: (Enum a, Eq a) => a -> Tropical a\n> toWeight x | x == toEnum 0 = MinWeight\n> | otherwise = Weight x\n\nOur random instance for `Regular` creates only `Letter` values.\nWhile this is a greatly simplified approach, it is a convenient restriction, \nbecause in the case of applications to graphs,\nthe edges are usually labelled with exactly such values and not composite ones\n(the value `NoWord` typically represents \"no edge\" rather than an edge with no label,\nsimilarly we omit the case of empty labels, which are represented by `EmptyWord`).\nAdditionally, we obtain a simple uniform distribution when letter bounds are used.\n\n> instance (Random a, Enum a) => Random (Regular a) where\n> randomR (Letter l, Letter u) g = (Letter pos, g') where\n> (pos, g') = randomR (l, u) g\n> randomR _ g = (Letter pos, g') where\n> (pos, g') = random g\n> \n> random = randomR (NoWord, EmptyWord)\n\n> instance Random Balance where\n> randomR = randomREnum\n> random = randomBounded\n\nAuxiliary functions\n-------------------\n\nGiven a function, an integer and a list, this function breaks the list in sublists. The given\nfunction is used to determine the length of the next chunk. For instance:\n\n* `breakWith id 3 \"Explanation\" == [\"Exp\",\"lan\",\"ati\",\"on\"]`\n* `breakWith (+ 1) 1 \"Explanation\" == [\"E\",\"xp\",\"lan\",\"atio\",\"n\"]`\n\n> breakWith :: (Int -> Int) -> Int -> [a] -> [[a]]\n> breakWith f = go where\n> \n> go _ [] = []\n> go n xs = take n xs : go (f n) (drop n xs)\n\nGiven an integer *n* and a list this function breaks the list into chunks of length *n*. The\nlast chunk is shorter, iff the length of the given list is not a multiple of *n*.\n\n> chopUniform :: Int -> [a] -> [[a]]\n> chopUniform = breakWith id\n\nThis function chops a given list into sublists of increasing length beginning with 1. 
The last\nelement is shorter than the second to last, if the length of the list is not _n*(n+1)\/2_ for\nsome natural number *n*.\n\nThe name of the function hints at its use, since one can use the resulting chunks to fill a\nlower triangle matrix.\n\n> chopTriangle :: [a] -> [[a]]\n> chopTriangle = breakWith (+ 1) 1\n\nThis function behaves very similarly to `chopTriangle`, but its first list is empty.\nThe last element of this list is shorter than the second to last iff the list length is not\n_n * (n+1)\/2_ for some integer *n*.\n\nAgain, the name hints at the function's application, namely the construction of a strict lower\ntriangle matrix.\n\n> chopStrictTriangle :: [a] -> [[a]]\n> chopStrictTriangle = breakWith (+1) 0\n\nOne particular recurring scheme is to split a list of `Maybe a` values into lists of such values\nand then transform these lists into rows. This scheme is captured by the following function.\n\n> resizeWith :: ([Maybe a] -> [[Maybe a]]) -> [Maybe a] -> MatLike a\n> resizeWith f = zip [0 .. ] . map toRow . f\n\nThis function transforms a list of `Maybe` values into an association list by first\nindexing the list and then removing the `Nothing` values. For example,\n\n* `toRow [Just 'h', Nothing, Just 'i'] == [(0, 'h'), (2, 'i')]`\n\n> toRow :: [Maybe a] -> [(Int, a)]\n> toRow = mapMaybe (uncurry (fmap . (,))) . zip [0 .. ]\n\nThis is a generic way to define the `random` function for bounded values. 
This implementation is the\nsame as the one in `System.Random`.\n\n> randomBounded :: (RandomGen g, Random a, Bounded a) => g -> (a, g)\n> randomBounded = randomR (minBound, maxBound)\n\nSimilarly, the following function is a generic way to define the `randomR` function for types that\nare instances of `Enum` by falling back to the `Random` instance of `Int`.\n\n> randomREnum :: (RandomGen g, Random a, Enum a) => (a, a) -> g -> (a, g)\n> randomREnum (l, u) g = (toEnum x, g') where\n> (x, g') = randomR (fromEnum l, fromEnum u) g","avg_line_length":44.1274131274,"max_line_length":103,"alphanum_fraction":0.6105521043} {"size":26897,"ext":"lhs","lang":"Literate Haskell","max_stars_count":2.0,"content":"%\n% (c) The University of Glasgow 2006\n% (c) The GRASP\/AQUA Project, Glasgow University, 1992-1998\n%\n\\section[Id]{@Ids@: Value and constructor identifiers}\n\n\\begin{code}\n-- |\n-- #name_types#\n-- GHC uses several kinds of name internally:\n--\n-- * 'OccName.OccName': see \"OccName#name_types\"\n--\n-- * 'RdrName.RdrName': see \"RdrName#name_types\"\n--\n-- * 'Name.Name': see \"Name#name_types\"\n--\n-- * 'Id.Id' represents names that not only have a 'Name.Name' but also a 'TypeRep.Type' and some additional\n-- details (a 'IdInfo.IdInfo' and one of 'Var.LocalIdDetails' or 'IdInfo.GlobalIdDetails') that\n-- are added, modified and inspected by various compiler passes. 
These 'Var.Var' names may either\n-- be global or local, see \"Var#globalvslocal\"\n--\n-- * 'Var.Var': see \"Var#name_types\"\n\nmodule Id (\n -- * The main types\n Var, Id, isId,\n\n -- ** Simple construction\n mkGlobalId, mkVanillaGlobal, mkVanillaGlobalWithInfo,\n mkLocalId, mkLocalIdWithInfo, mkExportedLocalId,\n mkSysLocal, mkSysLocalM, mkUserLocal, mkUserLocalM,\n mkTemplateLocals, mkTemplateLocalsNum, mkTemplateLocal,\n mkWorkerId, mkWiredInIdName,\n\n -- ** Taking an Id apart\n idName, idType, idUnique, idInfo, idDetails, idRepArity,\n recordSelectorFieldLabel,\n\n -- ** Modifying an Id\n setIdName, setIdUnique, Id.setIdType, \n setIdExported, setIdNotExported, \n globaliseId, localiseId, \n setIdInfo, lazySetIdInfo, modifyIdInfo, maybeModifyIdInfo,\n zapLamIdInfo, zapDemandIdInfo, zapFragileIdInfo, transferPolyIdInfo,\n zapIdStrictness,\n\n -- ** Predicates on Ids\n isImplicitId, isDeadBinder, \n isStrictId,\n isExportedId, isLocalId, isGlobalId,\n isRecordSelector, isNaughtyRecordSelector,\n isClassOpId_maybe, isDFunId,\n isPrimOpId, isPrimOpId_maybe,\n isFCallId, isFCallId_maybe,\n isDataConWorkId, isDataConWorkId_maybe, isDataConId_maybe, idDataCon,\n isConLikeId, isBottomingId, idIsFrom,\n hasNoBinding,\n\n -- ** Evidence variables\n DictId, isDictId, dfunNSilent, isEvVar,\n\n -- ** Inline pragma stuff\n idInlinePragma, setInlinePragma, modifyInlinePragma,\n idInlineActivation, setInlineActivation, idRuleMatchInfo,\n\n -- ** One-shot lambdas\n isOneShotBndr, isOneShotLambda, isProbablyOneShotLambda,\n setOneShotLambda, clearOneShotLambda, \n updOneShotInfo, setIdOneShotInfo,\n isStateHackType, stateHackOneShot, typeOneShot,\n\n -- ** Reading 'IdInfo' fields\n idArity, \n idUnfolding, realIdUnfolding,\n idSpecialisation, idCoreRules, idHasRules,\n idCafInfo,\n idOneShotInfo,\n idOccInfo,\n\n -- ** Writing 'IdInfo' fields\n setIdUnfoldingLazily,\n setIdUnfolding,\n setIdArity,\n\n setIdSpecialisation,\n setIdCafInfo,\n setIdOccInfo, zapIdOccInfo,\n\n 
setIdDemandInfo, \n setIdStrictness, \n\n idDemandInfo, \n idStrictness,\n\n ) where\n\n#include \"HsVersions.h\"\n\nimport CoreSyn ( CoreRule, Unfolding( NoUnfolding ) )\n\nimport IdInfo\nimport BasicTypes\n\n-- Imported and re-exported\nimport Var( Id, DictId,\n idInfo, idDetails, globaliseId, varType,\n isId, isLocalId, isGlobalId, isExportedId )\nimport qualified Var\n\nimport TyCon\nimport Type\nimport TysPrim\nimport DataCon\nimport Demand\nimport Name\nimport Module\nimport Class\nimport {-# SOURCE #-} PrimOp (PrimOp)\nimport ForeignCall\nimport Maybes\nimport SrcLoc\nimport Outputable\nimport Unique\nimport UniqSupply\nimport FastString\nimport Util\nimport StaticFlags\n\n-- infixl so you can say (id `set` a `set` b)\ninfixl 1 `setIdUnfoldingLazily`,\n `setIdUnfolding`,\n `setIdArity`,\n `setIdOccInfo`,\n `setIdOneShotInfo`,\n\n `setIdSpecialisation`,\n `setInlinePragma`,\n `setInlineActivation`,\n `idCafInfo`,\n\n `setIdDemandInfo`,\n `setIdStrictness`\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Basic Id manipulation}\n%* *\n%************************************************************************\n\n\\begin{code}\nidName :: Id -> Name\nidName = Var.varName\n\nidUnique :: Id -> Unique\nidUnique = Var.varUnique\n\nidType :: Id -> Kind\nidType = Var.varType\n\nsetIdName :: Id -> Name -> Id\nsetIdName = Var.setVarName\n\nsetIdUnique :: Id -> Unique -> Id\nsetIdUnique = Var.setVarUnique\n\n-- | Not only does this set the 'Id' 'Type', it also evaluates the type to try and\n-- reduce space usage\nsetIdType :: Id -> Type -> Id\nsetIdType id ty = seqType ty `seq` Var.setVarType id ty\n\nsetIdExported :: Id -> Id\nsetIdExported = Var.setIdExported\n\nsetIdNotExported :: Id -> Id\nsetIdNotExported = Var.setIdNotExported\n\nlocaliseId :: Id -> Id\n-- Make an Id with the same unique and type as the\n-- incoming Id, but with an *Internal* Name and *LocalId* flavour\nlocaliseId id\n | ASSERT( isId id ) 
isLocalId id && isInternalName name\n = id\n | otherwise\n = mkLocalIdWithInfo (localiseName name) (idType id) (idInfo id)\n where\n name = idName id\n\nlazySetIdInfo :: Id -> IdInfo -> Id\nlazySetIdInfo = Var.lazySetIdInfo\n\nsetIdInfo :: Id -> IdInfo -> Id\nsetIdInfo id info = seqIdInfo info `seq` (lazySetIdInfo id info)\n -- Try to avoid space leaks by seq'ing\n\nmodifyIdInfo :: (IdInfo -> IdInfo) -> Id -> Id\nmodifyIdInfo fn id = setIdInfo id (fn (idInfo id))\n\n-- maybeModifyIdInfo tries to avoid unnecessary thrashing\nmaybeModifyIdInfo :: Maybe IdInfo -> Id -> Id\nmaybeModifyIdInfo (Just new_info) id = lazySetIdInfo id new_info\nmaybeModifyIdInfo Nothing id = id\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Simple Id construction}\n%* *\n%************************************************************************\n\nAbsolutely all Ids are made by mkId. It is just like Var.mkId,\nbut in addition it pins free-tyvar-info onto the Id's type,\nwhere it can easily be found.\n\nNote [Free type variables]\n~~~~~~~~~~~~~~~~~~~~~~~~~~\nAt one time we cached the free type variables of the type of an Id\nat the root of the type in a TyNote. The idea was to avoid repeating\nthe free-type-variable calculation. But it turned out to slow down\nthe compiler overall. I don't quite know why; perhaps finding free\ntype variables of an Id isn't all that common whereas applying a\nsubstitution (which changes the free type variables) is more common.\nAnyway, we removed it in March 2008.\n\n\\begin{code}\n-- | For an explanation of global vs. 
local 'Id's, see \"Var#globalvslocal\"\nmkGlobalId :: IdDetails -> Name -> Type -> IdInfo -> Id\nmkGlobalId = Var.mkGlobalVar\n\n-- | Make a global 'Id' without any extra information at all\nmkVanillaGlobal :: Name -> Type -> Id\nmkVanillaGlobal name ty = mkVanillaGlobalWithInfo name ty vanillaIdInfo\n\n-- | Make a global 'Id' with no global information but some generic 'IdInfo'\nmkVanillaGlobalWithInfo :: Name -> Type -> IdInfo -> Id\nmkVanillaGlobalWithInfo = mkGlobalId VanillaId\n\n\n-- | For an explanation of global vs. local 'Id's, see \"Var#globalvslocal\"\nmkLocalId :: Name -> Type -> Id\nmkLocalId name ty = mkLocalIdWithInfo name ty\n (vanillaIdInfo `setOneShotInfo` typeOneShot ty)\n\nmkLocalIdWithInfo :: Name -> Type -> IdInfo -> Id\nmkLocalIdWithInfo name ty info = Var.mkLocalVar VanillaId name ty info\n -- Note [Free type variables]\n\n-- | Create a local 'Id' that is marked as exported.\n-- This prevents things attached to it from being removed as dead code.\nmkExportedLocalId :: Name -> Type -> Id\nmkExportedLocalId name ty = Var.mkExportedLocalVar VanillaId name ty vanillaIdInfo\n -- Note [Free type variables]\n\n\n-- | Create a system local 'Id'. These are local 'Id's (see \"Var#globalvslocal\")\n-- that are created by the compiler out of thin air\nmkSysLocal :: FastString -> Unique -> Type -> Id\nmkSysLocal fs uniq ty = mkLocalId (mkSystemVarName uniq fs) ty\n\nmkSysLocalM :: MonadUnique m => FastString -> Type -> m Id\nmkSysLocalM fs ty = getUniqueM >>= (\\uniq -> return (mkSysLocal fs uniq ty))\n\n\n-- | Create a user local 'Id'. 
These are local 'Id's (see \"Var#globalvslocal\") with a name and location that the user might recognize\nmkUserLocal :: OccName -> Unique -> Type -> SrcSpan -> Id\nmkUserLocal occ uniq ty loc = mkLocalId (mkInternalName uniq occ loc) ty\n\nmkUserLocalM :: MonadUnique m => OccName -> Type -> SrcSpan -> m Id\nmkUserLocalM occ ty loc = getUniqueM >>= (\\uniq -> return (mkUserLocal occ uniq ty loc))\n\nmkWiredInIdName :: Module -> FastString -> Unique -> Id -> Name\nmkWiredInIdName mod fs uniq id\n = mkWiredInName mod (mkOccNameFS varName fs) uniq (AnId id) UserSyntax\n\\end{code}\n\nMake some local @Ids@ for a template @CoreExpr@. These have bogus\n@Uniques@, but that's OK because the templates are supposed to be\ninstantiated before use.\n\n\\begin{code}\n-- | Workers get local names. \"CoreTidy\" will externalise these if necessary\nmkWorkerId :: Unique -> Id -> Type -> Id\nmkWorkerId uniq unwrkr ty\n = mkLocalId (mkDerivedInternalName mkWorkerOcc uniq (getName unwrkr)) ty\n\n-- | Create a \/template local\/: a family of system local 'Id's in bijection with @Int@s, typically used in unfoldings\nmkTemplateLocal :: Int -> Type -> Id\nmkTemplateLocal i ty = mkSysLocal (fsLit \"tpl\") (mkBuiltinUnique i) ty\n\n-- | Create a template local for a series of types\nmkTemplateLocals :: [Type] -> [Id]\nmkTemplateLocals = mkTemplateLocalsNum 1\n\n-- | Create a template local for a series of types, but start from a specified template local\nmkTemplateLocalsNum :: Int -> [Type] -> [Id]\nmkTemplateLocalsNum n tys = zipWith mkTemplateLocal [n..] tys\n\\end{code}\n\n\n%************************************************************************\n%* *\n\\subsection{Special Ids}\n%* *\n%************************************************************************\n\n\\begin{code}\n-- | If the 'Id' is that for a record selector, extract the 'sel_tycon' and label. 
Panic otherwise\nrecordSelectorFieldLabel :: Id -> (TyCon, FieldLabel)\nrecordSelectorFieldLabel id\n = case Var.idDetails id of\n RecSelId { sel_tycon = tycon } -> (tycon, idName id)\n _ -> panic \"recordSelectorFieldLabel\"\n\nisRecordSelector :: Id -> Bool\nisNaughtyRecordSelector :: Id -> Bool\nisPrimOpId :: Id -> Bool\nisFCallId :: Id -> Bool\nisDataConWorkId :: Id -> Bool\nisDFunId :: Id -> Bool\n\nisClassOpId_maybe :: Id -> Maybe Class\nisPrimOpId_maybe :: Id -> Maybe PrimOp\nisFCallId_maybe :: Id -> Maybe ForeignCall\nisDataConWorkId_maybe :: Id -> Maybe DataCon\n\nisRecordSelector id = case Var.idDetails id of\n RecSelId {} -> True\n _ -> False\n\nisNaughtyRecordSelector id = case Var.idDetails id of\n RecSelId { sel_naughty = n } -> n\n _ -> False\n\nisClassOpId_maybe id = case Var.idDetails id of\n ClassOpId cls -> Just cls\n _other -> Nothing\n\nisPrimOpId id = case Var.idDetails id of\n PrimOpId _ -> True\n _ -> False\n\nisDFunId id = case Var.idDetails id of\n DFunId {} -> True\n _ -> False\n\ndfunNSilent :: Id -> Int\ndfunNSilent id = case Var.idDetails id of\n DFunId ns _ -> ns\n _ -> pprPanic \"dfunSilent: not a dfun:\" (ppr id)\n\nisPrimOpId_maybe id = case Var.idDetails id of\n PrimOpId op -> Just op\n _ -> Nothing\n\nisFCallId id = case Var.idDetails id of\n FCallId _ -> True\n _ -> False\n\nisFCallId_maybe id = case Var.idDetails id of\n FCallId call -> Just call\n _ -> Nothing\n\nisDataConWorkId id = case Var.idDetails id of\n DataConWorkId _ -> True\n _ -> False\n\nisDataConWorkId_maybe id = case Var.idDetails id of\n DataConWorkId con -> Just con\n _ -> Nothing\n\nisDataConId_maybe :: Id -> Maybe DataCon\nisDataConId_maybe id = case Var.idDetails id of\n DataConWorkId con -> Just con\n DataConWrapId con -> Just con\n _ -> Nothing\n\nidDataCon :: Id -> DataCon\n-- ^ Get from either the worker or the wrapper 'Id' to the 'DataCon'. 
Currently used only in the desugarer.\n--\n-- INVARIANT: @idDataCon (dataConWrapId d) = d@: remember, 'dataConWrapId' can return either the wrapper or the worker\nidDataCon id = isDataConId_maybe id `orElse` pprPanic \"idDataCon\" (ppr id)\n\nhasNoBinding :: Id -> Bool\n-- ^ Returns @True@ of an 'Id' which may not have a\n-- binding, even though it is defined in this module.\n\n-- Data constructor workers used to be things of this kind, but\n-- they aren't any more. Instead, we inject a binding for\n-- them at the CorePrep stage.\n-- EXCEPT: unboxed tuples, which definitely have no binding\nhasNoBinding id = case Var.idDetails id of\n PrimOpId _ -> True -- See Note [Primop wrappers]\n FCallId _ -> True\n DataConWorkId dc -> isUnboxedTupleCon dc\n _ -> False\n\nisImplicitId :: Id -> Bool\n-- ^ 'isImplicitId' tells whether an 'Id's info is implied by other\n-- declarations, so we don't need to put its signature in an interface\n-- file, even if it's mentioned in some other interface unfolding.\nisImplicitId id\n = case Var.idDetails id of\n FCallId {} -> True\n ClassOpId {} -> True\n PrimOpId {} -> True\n DataConWorkId {} -> True\n DataConWrapId {} -> True\n -- These are implied by their type or class decl;\n -- remember that all type and class decls appear in the interface file.\n -- The dfun id is not an implicit Id; it must *not* be omitted, because\n -- it carries version info for the instance decl\n _ -> False\n\nidIsFrom :: Module -> Id -> Bool\nidIsFrom mod id = nameIsLocalOrFrom mod (idName id)\n\\end{code}\n\nNote [Primop wrappers]\n~~~~~~~~~~~~~~~~~~~~~~\nCurrently hasNoBinding claims that PrimOpIds don't have a curried\nfunction definition. But actually they do, in GHC.PrimopWrappers,\nwhich is auto-generated from prelude\/primops.txt.pp. 
So actually, hasNoBinding\ncould return 'False' for PrimOpIds.\n\nBut we'd need to add something in CoreToStg to swizzle any unsaturated\napplications of GHC.Prim.plusInt# to GHC.PrimopWrappers.plusInt#.\n\nNota Bene: GHC.PrimopWrappers is needed *regardless*, because it's\nused by GHCi, which does not implement primops directly at all.\n\n\n\n\\begin{code}\nisDeadBinder :: Id -> Bool\nisDeadBinder bndr | isId bndr = isDeadOcc (idOccInfo bndr)\n | otherwise = False -- TyVars count as not dead\n\\end{code}\n\n%************************************************************************\n%* *\n Evidence variables\n%* *\n%************************************************************************\n\n\\begin{code}\nisEvVar :: Var -> Bool\nisEvVar var = isPredTy (varType var)\n\nisDictId :: Id -> Bool\nisDictId id = isDictTy (idType id)\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{IdInfo stuff}\n%* *\n%************************************************************************\n\n\\begin{code}\n ---------------------------------\n -- ARITY\nidArity :: Id -> Arity\nidArity id = arityInfo (idInfo id)\n\nsetIdArity :: Id -> Arity -> Id\nsetIdArity id arity = modifyIdInfo (`setArityInfo` arity) id\n\nidRepArity :: Id -> RepArity\nidRepArity x = typeRepArity (idArity x) (idType x)\n\n-- | Returns true if an application to n args would diverge\nisBottomingId :: Id -> Bool\nisBottomingId id = isBottomingSig (idStrictness id)\n\nidStrictness :: Id -> StrictSig\nidStrictness id = strictnessInfo (idInfo id)\n\nsetIdStrictness :: Id -> StrictSig -> Id\nsetIdStrictness id sig = modifyIdInfo (`setStrictnessInfo` sig) id\n\nzapIdStrictness :: Id -> Id\nzapIdStrictness id = modifyIdInfo (`setStrictnessInfo` nopSig) id\n\n-- | This predicate says whether the 'Id' has a strict demand placed on it or\n-- has a type such that it can always be evaluated strictly (i.e. an\n-- unlifted type, as of GHC 7.6). 
We need to\n-- check separately whether the 'Id' has a so-called \\\"strict type\\\" because if\n-- the demand for the given @id@ hasn't been computed yet but @id@ has a strict\n-- type, we still want @isStrictId id@ to be @True@.\nisStrictId :: Id -> Bool\nisStrictId id\n = ASSERT2( isId id, text \"isStrictId: not an id: \" <+> ppr id )\n (isStrictType (idType id)) ||\n -- Take the best of both strictnesses - old and new \n (isStrictDmd (idDemandInfo id))\n\n ---------------------------------\n -- UNFOLDING\nidUnfolding :: Id -> Unfolding\n-- Do not expose the unfolding of a loop breaker!\nidUnfolding id\n | isStrongLoopBreaker (occInfo info) = NoUnfolding\n | otherwise = unfoldingInfo info\n where\n info = idInfo id\n\nrealIdUnfolding :: Id -> Unfolding\n-- Expose the unfolding if there is one, including for loop breakers\nrealIdUnfolding id = unfoldingInfo (idInfo id)\n\nsetIdUnfoldingLazily :: Id -> Unfolding -> Id\nsetIdUnfoldingLazily id unfolding = modifyIdInfo (`setUnfoldingInfoLazily` unfolding) id\n\nsetIdUnfolding :: Id -> Unfolding -> Id\nsetIdUnfolding id unfolding = modifyIdInfo (`setUnfoldingInfo` unfolding) id\n\nidDemandInfo :: Id -> Demand\nidDemandInfo id = demandInfo (idInfo id)\n\nsetIdDemandInfo :: Id -> Demand -> Id\nsetIdDemandInfo id dmd = modifyIdInfo (`setDemandInfo` dmd) id\n\n ---------------------------------\n -- SPECIALISATION\n\n-- See Note [Specialisations and RULES in IdInfo] in IdInfo.lhs\n\nidSpecialisation :: Id -> SpecInfo\nidSpecialisation id = specInfo (idInfo id)\n\nidCoreRules :: Id -> [CoreRule]\nidCoreRules id = specInfoRules (idSpecialisation id)\n\nidHasRules :: Id -> Bool\nidHasRules id = not (isEmptySpecInfo (idSpecialisation id))\n\nsetIdSpecialisation :: Id -> SpecInfo -> Id\nsetIdSpecialisation id spec_info = modifyIdInfo (`setSpecInfo` spec_info) id\n\n ---------------------------------\n -- CAF INFO\nidCafInfo :: Id -> CafInfo\nidCafInfo id = cafInfo (idInfo id)\n\nsetIdCafInfo :: Id -> CafInfo -> 
Id\nsetIdCafInfo id caf_info = modifyIdInfo (`setCafInfo` caf_info) id\n\n ---------------------------------\n -- Occurrence INFO\nidOccInfo :: Id -> OccInfo\nidOccInfo id = occInfo (idInfo id)\n\nsetIdOccInfo :: Id -> OccInfo -> Id\nsetIdOccInfo id occ_info = modifyIdInfo (`setOccInfo` occ_info) id\n\nzapIdOccInfo :: Id -> Id\nzapIdOccInfo b = b `setIdOccInfo` NoOccInfo\n\\end{code}\n\n\n ---------------------------------\n -- INLINING\nThe inline pragma tells us to be very keen to inline this Id, but it's still\nOK not to if optimisation is switched off.\n\n\\begin{code}\nidInlinePragma :: Id -> InlinePragma\nidInlinePragma id = inlinePragInfo (idInfo id)\n\nsetInlinePragma :: Id -> InlinePragma -> Id\nsetInlinePragma id prag = modifyIdInfo (`setInlinePragInfo` prag) id\n\nmodifyInlinePragma :: Id -> (InlinePragma -> InlinePragma) -> Id\nmodifyInlinePragma id fn = modifyIdInfo (\\info -> info `setInlinePragInfo` (fn (inlinePragInfo info))) id\n\nidInlineActivation :: Id -> Activation\nidInlineActivation id = inlinePragmaActivation (idInlinePragma id)\n\nsetInlineActivation :: Id -> Activation -> Id\nsetInlineActivation id act = modifyInlinePragma id (\\prag -> setInlinePragmaActivation prag act)\n\nidRuleMatchInfo :: Id -> RuleMatchInfo\nidRuleMatchInfo id = inlinePragmaRuleMatchInfo (idInlinePragma id)\n\nisConLikeId :: Id -> Bool\nisConLikeId id = isDataConWorkId id || isConLike (idRuleMatchInfo id)\n\\end{code}\n\n\n ---------------------------------\n -- ONE-SHOT LAMBDAS\n\\begin{code}\nidOneShotInfo :: Id -> OneShotInfo\nidOneShotInfo id = oneShotInfo (idInfo id)\n\n-- | Returns whether the lambda associated with the 'Id' is certainly applied at most once\n-- This one is the \"business end\", called externally.\n-- It works on type variables as well as Ids, returning True\n-- Its main purpose is to encapsulate the Horrible State Hack\nisOneShotBndr :: Var -> Bool\nisOneShotBndr var\n | isTyVar var = True\n | otherwise = isOneShotLambda var\n\n-- | Should we 
apply the state hack to values of this 'Type'?\nstateHackOneShot :: OneShotInfo\nstateHackOneShot = OneShotLam -- Or maybe ProbOneShot?\n\ntypeOneShot :: Type -> OneShotInfo\ntypeOneShot ty\n | isStateHackType ty = stateHackOneShot\n | otherwise = NoOneShotInfo\n\nisStateHackType :: Type -> Bool\nisStateHackType ty\n | opt_NoStateHack\n = False\n | otherwise\n = case tyConAppTyCon_maybe ty of\n Just tycon -> tycon == statePrimTyCon\n _ -> False\n -- This is a gross hack. It claims that\n -- every function over realWorldStatePrimTy is a one-shot\n -- function. This is pretty true in practice, and makes a big\n -- difference. For example, consider\n -- a `thenST` \\ r -> ...E...\n -- The early full laziness pass, if it doesn't know that r is one-shot\n -- will pull out E (let's say it doesn't mention r) to give\n -- let lvl = E in a `thenST` \\ r -> ...lvl...\n -- When `thenST` gets inlined, we end up with\n -- let lvl = E in \\s -> case a s of (r, s') -> ...lvl...\n -- and we don't re-inline E.\n --\n -- It would be better to spot that r was one-shot to start with, but\n -- I don't want to rely on that.\n --\n -- Another good example is in fill_in in PrelPack.lhs. 
We should be able to\n -- spot that fill_in has arity 2 (and when Keith is done, we will) but we can't yet.\n\n\n-- | Returns whether the lambda associated with the 'Id' is certainly applied at most once.\n-- You probably want to use 'isOneShotBndr' instead\nisOneShotLambda :: Id -> Bool\nisOneShotLambda id = case idOneShotInfo id of\n OneShotLam -> True\n _ -> False\n\nisProbablyOneShotLambda :: Id -> Bool\nisProbablyOneShotLambda id = case idOneShotInfo id of\n OneShotLam -> True\n ProbOneShot -> True\n NoOneShotInfo -> False\n\nsetOneShotLambda :: Id -> Id\nsetOneShotLambda id = modifyIdInfo (`setOneShotInfo` OneShotLam) id\n\nclearOneShotLambda :: Id -> Id\nclearOneShotLambda id = modifyIdInfo (`setOneShotInfo` NoOneShotInfo) id\n\nsetIdOneShotInfo :: Id -> OneShotInfo -> Id\nsetIdOneShotInfo id one_shot = modifyIdInfo (`setOneShotInfo` one_shot) id\n\nupdOneShotInfo :: Id -> OneShotInfo -> Id\n-- Combine the info in the Id with new info\nupdOneShotInfo id one_shot\n | do_upd = setIdOneShotInfo id one_shot\n | otherwise = id\n where\n do_upd = case (idOneShotInfo id, one_shot) of\n (NoOneShotInfo, _) -> True\n (OneShotLam, _) -> False\n (_, NoOneShotInfo) -> False\n _ -> True\n\n-- The OneShotLambda functions simply fiddle with the IdInfo flag\n-- But watch out: this may change the type of something else\n-- f = \\x -> e\n-- If we change the one-shot-ness of x, f's type changes\n\\end{code}\n\n\\begin{code}\nzapInfo :: (IdInfo -> Maybe IdInfo) -> Id -> Id\nzapInfo zapper id = maybeModifyIdInfo (zapper (idInfo id)) id\n\nzapLamIdInfo :: Id -> Id\nzapLamIdInfo = zapInfo zapLamInfo\n\nzapFragileIdInfo :: Id -> Id\nzapFragileIdInfo = zapInfo zapFragileInfo \n\nzapDemandIdInfo :: Id -> Id\nzapDemandIdInfo = zapInfo zapDemandInfo\n\\end{code}\n\nNote [transferPolyIdInfo]\n~~~~~~~~~~~~~~~~~~~~~~~~~\nThis transfer is used in two places:\n FloatOut (long-distance let-floating)\n SimplUtils.abstractFloats (short-distance let-floating)\n\nConsider the short-distance 
let-floating:\n\n f = \/\\a. let g = rhs in ...\n\nThen if we float thus\n\n g' = \/\\a. rhs\n f = \/\\a. ...[g' a\/g]....\n\nwe *do not* want to lose g's\n * strictness information\n * arity\n * inline pragma (though that is a bit more debatable)\n * occurrence info\n\nMostly this is just an optimisation, but it's *vital* to\ntransfer the occurrence info. Consider\n\n NonRec { f = \/\\a. let Rec { g* = ..g.. } in ... }\n\nwhere the '*' means 'LoopBreaker'. Then if we float we must get\n\n Rec { g'* = \/\\a. ...(g' a)... }\n NonRec { f = \/\\a. ...[g' a\/g]....}\n\nwhere g' is also marked as LoopBreaker. If not, terrible things\ncan happen if we re-simplify the binding (and the Simplifier does\nsometimes simplify a term twice); see Trac #4345.\n\nIt's not so simple to retain\n * worker info\n * rules\nso we simply discard those. Sooner or later this may bite us.\n\nIf we abstract wrt one or more *value* binders, we must modify the\narity and strictness info before transferring it. E.g.\n f = \\x. e\n-->\n g' = \\y. \\x. 
e\n + substitute (g' y) for g\nNotice that g' has an arity one more than the original g\n\n\\begin{code}\ntransferPolyIdInfo :: Id -- Original Id\n -> [Var] -- Abstract wrt these variables\n -> Id -- New Id\n -> Id\ntransferPolyIdInfo old_id abstract_wrt new_id\n = modifyIdInfo transfer new_id\n where\n arity_increase = count isId abstract_wrt -- Arity increases by the\n -- number of value binders\n\n old_info = idInfo old_id\n old_arity = arityInfo old_info\n old_inline_prag = inlinePragInfo old_info\n old_occ_info = occInfo old_info\n new_arity = old_arity + arity_increase\n\n old_strictness = strictnessInfo old_info\n new_strictness = increaseStrictSigArity arity_increase old_strictness\n\n transfer new_info = new_info `setArityInfo` new_arity\n `setInlinePragInfo` old_inline_prag\n `setOccInfo` old_occ_info\n `setStrictnessInfo` new_strictness\n\\end{code}\n","avg_line_length":34.9765929779,"max_line_length":133,"alphanum_fraction":0.6153846154} {"size":14581,"ext":"lhs","lang":"Literate Haskell","max_stars_count":41.0,"content":"% -----------------------------------------------------------------------------\n% $Id: TimeExts.lhs,v 1.3 2004\/10\/06 11:16:40 ross Exp $\n{-\n TimeExts.lhs implements more useful time differences for Glasgow\n Haskell times. Time differences can be in picoseconds, seconds,\n minutes, hours, days, months or years. So for example you can\n add\/subtract N months to a date, or find someone's age in days\n given the current date and their date of birth.\n\n Note on Timezones\n -----------------\n\n We use UTC where necessary. This will \n occasionally matter. For example if it is 1am April 1st\n local time and 11pm March 31st UTC, then adding 1 month will give \n \"11pm April 31st UTC\", which will get rolled over to\n \"11pm May 1st UTC\", or \"1am May 2nd local time\", assuming a \n constant time difference between local time and UTC. \n Adding 1 month in local time would instead give \"1am May 1st local\n time\". 
It would not be too hard to use local time, but (a)\n I doubt if anyone will really notice the difference; (b) \n this sort of thing really ought not to be done unless Haskell\n has a proper notion of time-zones; (c) I'm not quite sure what\n to do about daylight saving time.\n \n -}\n\n\\begin{code}\nmodule TimeExts\n\t{-# DEPRECATED \"This module is unmaintained, and will disappear soon\" #-}\n (\n DiffPico(..),\n DiffSecond(..),\n DiffMinute(..),\n DiffHour(..),\n DiffDay(..),\n DiffMonth(..),\n DiffYear(..),\n\n TimeAddable,\n\n addClock,\n diffClock,\n \n addClockPico,\n diffClockPico,\n addClockSecond,\n diffClockSecond,\n addClockMinute,\n diffClockMinute,\n addClockHour,\n diffClockHour,\n addClockDay,\n diffClockDay,\n addClockMonth,\n diffClockMonth,\n addClockYear,\n diffClockYear\n )\n where\n\nimport System.Time\n\n-- Time difference types\ndata DiffPico = DiffPico Integer\ndata DiffSecond = DiffSecond Integer\ndata DiffMinute = DiffMinute Integer\ndata DiffHour = DiffHour Int\n-- 2^31 hours is more than 200000 years so Int is probably enough.\ndata DiffDay = DiffDay Int \ndata DiffMonth = DiffMonth Int\ndata DiffYear = DiffYear Int\n\n-- this class is implemented for each of the above types.\nclass TimeAddable diffType where\n addClock :: ClockTime -> diffType -> ClockTime\n -- add given time difference. Where necessary, lower fields\n -- are rolled over to higher ones. 
For example\n -- adding 1 month to March 31st gives May 1st.\n diffClock :: ClockTime -> ClockTime -> diffType\n -- diffClock timeTo timeFrom\n -- returns the time difference from timeFrom to timeTo.\n -- for example, if diffType is DiffDay,\n -- diffClock (\"23:00:00 on January 2nd\") (\"00:00:00 on January 1st\") \n -- will be \"DiffDay 1\", since 1 whole day (plus a bit extra) has\n -- elapsed from the second date to the first.\n\n-- For those who don't like the overloading in the above, we also\n-- provide monomorphic versions of each of these functions for each type.\naddClockPico :: ClockTime -> DiffPico -> ClockTime\ndiffClockPico :: ClockTime -> ClockTime -> DiffPico\n\naddClockSecond :: ClockTime -> DiffSecond -> ClockTime\ndiffClockSecond :: ClockTime -> ClockTime -> DiffSecond\n\naddClockMinute :: ClockTime -> DiffMinute -> ClockTime\ndiffClockMinute :: ClockTime -> ClockTime -> DiffMinute\n\naddClockHour :: ClockTime -> DiffHour -> ClockTime\ndiffClockHour :: ClockTime -> ClockTime -> DiffHour\n\naddClockDay :: ClockTime -> DiffDay -> ClockTime\ndiffClockDay :: ClockTime -> ClockTime -> DiffDay\n\naddClockMonth :: ClockTime -> DiffMonth -> ClockTime\ndiffClockMonth :: ClockTime -> ClockTime -> DiffMonth\n\naddClockYear :: ClockTime -> DiffYear -> ClockTime\ndiffClockYear :: ClockTime -> ClockTime -> DiffYear\n\n--- END OF SPECIFICATION\n\ninstance TimeAddable DiffPico where\n addClock = addClockPico\n diffClock = diffClockPico\n\n\ninstance TimeAddable DiffSecond where\n addClock = addClockSecond\n diffClock = diffClockSecond\n\n\ninstance TimeAddable DiffMinute where\n addClock = addClockMinute\n diffClock = diffClockMinute\n\n\ninstance TimeAddable DiffHour where\n addClock = addClockHour\n diffClock = diffClockHour\n\n\ninstance TimeAddable DiffDay where\n addClock = addClockDay\n diffClock = diffClockDay\n\n\ninstance TimeAddable DiffMonth where\n addClock = addClockMonth\n diffClock = diffClockMonth\n\ninstance TimeAddable DiffYear where\n addClock 
= addClockYear\n diffClock = diffClockYear\n\n-- Now we have to implement these functions. We have two strategies.\n-- (1) For DiffPico and DiffSecond this can be done trivially\n-- by extracting the fields of the ClockTime type, which gives\n-- seconds and picoseconds directly.\n-- (2) For other types we convert to CalendarTime and use\n-- Gregorian calendar calculations.\n\n{-\n DiffPico\n -}\n\nnPicos = 1000000000000\n\naddClockPico (TOD seconds picos) (DiffPico diffPicos) =\n let\n (diffSeconds,diffRestPicos) = divMod diffPicos nPicos\n seconds' = seconds + diffSeconds\n picos' = picos + diffRestPicos\n (seconds'',picos'') =\n if picos' >= nPicos\n then\n (seconds'+1,picos'-nPicos)\n else\n (seconds',picos')\n in\n TOD seconds'' picos''\n\ndiffClockPico (TOD secondsTo picosTo) (TOD secondsFrom picosFrom) =\n DiffPico((picosTo-picosFrom) + nPicos * (secondsTo - secondsFrom))\n\n{- \n DiffSecond\n -}\n\naddClockSecond (TOD seconds picos) (DiffSecond diffSeconds) =\n TOD (seconds + diffSeconds) picos\n\ndiffClockSecond (TOD secondsTo picosTo) (TOD secondsFrom picosFrom) =\n DiffSecond(if picosTo >= picosFrom\n then\n (secondsTo - secondsFrom)\n else -- borrow\n (secondsTo - secondsFrom - 1)\n )\n{- \n DiffMinute \n -}\n-- The remaining functions use the Gregorian Calendar code which\n-- is shoved down to the end of this file.\naddClockMinute clockTo (DiffMinute diffMinutes) =\n let\n calTo @ (CalendarTime \n {ctYear=ctYear,ctMonth=ctMonth,ctDay=ctDay,\n ctHour=ctHour,ctMin=ctMin}) = \n toUTCTime clockTo\n -- we will leave the other fields unchanged and hope that\n -- toClockTime will ignore them. 
(Which it does, for GHC.)\n (diffHours',diffRestMinutes) = divMod diffMinutes 60\n minute' = ctMin + fromInteger diffRestMinutes\n (diffHours,minute) = \n if minute'<60\n then\n (diffHours',minute')\n else\n (diffHours'+1,minute'-60)\n (diffDays',diffRestHours) = divMod diffHours 24\n hour' = ctHour + fromInteger diffRestHours\n (diffDays,hour) =\n if hour'<24\n then\n (diffDays',hour')\n else\n (diffDays'+1,hour'-24)\n (year,month,day) =\n addDateDays (ctYear,ctMonth,ctDay) (fromInteger diffDays)\n in\n toClockTime (calTo\n {ctYear=year,ctMonth=month,ctDay=day,\n ctHour=hour,ctMin=minute\n })\n\ndiffClockMinute clockTo clockFrom =\n let\n CalendarTime {ctYear=toYear,ctMonth=toMonth,ctDay=toDay,ctHour=toHour,\n ctMin=toMinute,ctSec=toSec,ctPicosec=toPicosec} = \n toUTCTime clockTo\n CalendarTime {ctYear=fromYear,ctMonth=fromMonth,ctDay=fromDay,\n ctHour=fromHour,ctMin=fromMinute,ctSec=fromSec,\n ctPicosec=fromPicosec} = toUTCTime clockFrom\n borrow = (toSec,toPicosec) < (fromSec,fromPicosec)\n diff' = (24*60) * (toInteger (diffDates (toYear,toMonth,toDay) \n (fromYear,fromMonth,fromDay))) + 60*(toInteger(toHour-fromHour)) + \n toInteger(toMinute-fromMinute)\n in\n DiffMinute(if borrow then diff'-1 else diff')\n\n{- \n DiffHour\n We're lazy and just call the minute functions for hours and days.\n -}\naddClockHour clockTo (DiffHour diffHours) =\n addClockMinute clockTo (DiffMinute (60*(toInteger diffHours)))\n\ndiffClockHour clockTo clockFrom =\n let\n DiffMinute diffMinutes = diffClockMinute clockTo clockFrom\n in\n DiffHour(fromInteger(diffMinutes `div` 60))\n\n{- \n DiffDay\n We're lazy and just call the minute functions for hours and days.\n For days at least this involves unnecessary multiplication and\n division, unless the compiler is very clever.\n -}\naddClockDay clockTo (DiffDay diffDays) =\n addClockMinute clockTo (DiffMinute ((24*60)*(toInteger diffDays)))\n\ndiffClockDay clockTo clockFrom =\n let\n DiffMinute diffMinutes = diffClockMinute clockTo 
clockFrom\n in\n DiffDay(fromInteger(diffMinutes `div` (24*60)))\n\n{- \n DiffMonth\n Here we assume that toClockTime will roll over illegal dates,\n as when you add 1 month to March 31st and get April 31st.\n This is avoidable by doing some Gregorian calendar calculations; \n the equivalent situation when you roll over a leap second is \n not.\n -}\naddClockMonth clockTo (DiffMonth diffMonths) =\n let\n calTo @ (CalendarTime \n {ctYear=ctYear,ctMonth=ctMonth}) = \n toUTCTime clockTo\n mn = (fromEnum ctMonth) + diffMonths\n (yearDiff,monthNo) = divMod mn 12\n in\n toClockTime(calTo {ctYear=ctYear+yearDiff,ctMonth=toEnum monthNo})\n\ndiffClockMonth clockTo clockFrom =\n let\n CalendarTime {ctYear=toYear,ctMonth=toMonth,ctDay=toDay,ctHour=toHour,\n ctMin=toMinute,ctSec=toSec,ctPicosec=toPicosec} = \n toUTCTime clockTo\n CalendarTime {ctYear=fromYear,ctMonth=fromMonth,ctDay=fromDay,\n ctHour=fromHour,ctMin=fromMinute,ctSec=fromSec,\n ctPicosec=fromPicosec} = toUTCTime clockFrom\n borrow = \n -- hack around GHC failure to order tuples with\n -- more than 5 elements.\n (toDay,toHour,toMinute,toDay,(toSec,toPicosec)) <\n (fromDay,fromHour,fromMinute,fromDay,(fromSec,fromPicosec))\n diff' = 12*(toYear-fromYear) + \n (fromEnum toMonth - fromEnum fromMonth)\n in\n DiffMonth(if borrow then diff' -1 else diff')\n\n{- \n DiffYear\n It's getting late so waste CPU time\/leave it to the\n compiler and use the month functions\n -}\naddClockYear clockTo (DiffYear diffYears) =\n addClockMonth clockTo (DiffMonth (12*diffYears))\n\ndiffClockYear clockTo clockFrom =\n let\n DiffMonth diffMonths = diffClockMonth clockTo clockFrom\n in\n DiffYear(diffMonths `div` 12)\n\n{- Magic code for implementing the Gregorian Calendar -}\n \n-- Here are two ways of representing a date\ntype Date = (Int,Month,Int) -- year, month, day\ntype NDays = Int \n-- Counts days starting at 1st March Year 0 \n-- (in the Gregorian Calendar). So the 1st March Year 0 is\n-- day 0, and so on. 
We start years at March as that means\n-- leap days always come at the end. \n\n-- The difficult bit of this module is converting from Date to\n-- NDays. We do this by going via a YDPair:\ntype YDPair = (Int,Int)\n-- a YDPair is the number of whole years since 0th March Year 0\n-- plus the number of days after that. So the YDPair for\n-- 29th Feb 2000 is (1999,360) and the YDPair for 1st Mar 2000 is\n-- (2000,0).\n\naddDateDays date n =\n nDaysToDate ( dateToNDays date + n)\ndiffDates dateTo dateFrom =\n (dateToNDays dateTo - dateToNDays dateFrom)\n\ndateToNDays = ydPairToNDays . dateToYDPair\nnDaysToDate = ydPairToDate . nDaysToYDPair\n\nydPairToNDays :: YDPair -> NDays\nydPairToNDays (years,days) =\n days + years * 365 + (years `div` 4) - (years `div` 100) + \n (years `div` 400)\n\nnDaysToYDPair :: NDays -> YDPair\nnDaysToYDPair ndays = -- there must be a neater way of writing this!\n (400*q + 100*r + 4*s + t,days) \n where\n -- the idea being that 0<=r<4, 0<=s<25, 0<=t<4,\n -- and so ndays = q*qd + r*rd + s*sd + t*td + days\n -- where days is as small as possible while still being non-negative.\n qd = 4*rd +1 -- days in 400 years\n rd = 25*sd - 1 -- days in 100 years\n sd = 4*td + 1 -- days in 4 years\n td = 365 -- days in 1 year.\n \n (q,qrest) = divMod ndays qd\n (r',rrest) = divMod qrest rd\n (s',srest) = divMod rrest sd\n (t',days') = divMod srest td\n \n -- r',s',t',days' are not quite the right values of r,s,t if there's\n -- a leap day, which gives rise to d=0 and r=4 or t=4.\n (r,s,t,days) = \n if days'\/=0 \n then\n (r',s',t',days')\n else -- March 1st or leap day\n if t'==4\n then\n -- leap day\n (r',s',3,365)\n else if r'==4\n then\n -- leap day of year divisible by 400\n (3,24,3,365)\n else -- March 1st\n (r',s',t',days')\n\n-- magic numbers to subtract from a day number in a year\n-- (remember March 1st is day 0) to get a date in a month.\nnMarch = -1\nnApril = 30\nnMay = 60\nnJune = 91\nnJuly = 121\nnAugust = 152\nnSeptember = 183\nnOctober = 
213\nnNovember = 244\nnDecember = 274\nnJanuary = 305\nnFebruary = 336\n\ndateToYDPair :: Date -> YDPair\ndateToYDPair (year,month,date) =\n case month of\n March -> (year,date+nMarch)\n April -> (year,date+nApril)\n May -> (year,date+nMay)\n June -> (year,date+nJune)\n July -> (year,date+nJuly)\n August -> (year,date+nAugust)\n September -> (year,date+nSeptember)\n October -> (year,date+nOctober)\n November -> (year,date+nNovember)\n December -> (year,date+nDecember)\n January -> (year-1,date+nJanuary)\n February -> (year-1,date+nFebruary)\n\nydPairToDate :: YDPair -> Date\nydPairToDate (years,days) =\n if days<=nSeptember\n then\n if days<=nJune\n then\n if days<=nMay\n then\n if days<=nApril\n then -- March\n (years,March,days-nMarch)\n else -- April\n (years,April,days-nApril)\n else -- May\n (years,May,days-nMay)\n else\n if days<=nAugust\n then\n if days<=nJuly\n then -- June\n (years,June,days-nJune)\n else -- July\n (years,July,days-nJuly)\n else -- August\n (years,August,days-nAugust)\n else \n if days<=nDecember\n then\n if days<=nNovember\n then\n if days<=nOctober\n then -- September\n (years,September,days-nSeptember)\n else -- October\n (years,October,days-nOctober)\n else -- November\n (years,November,days-nNovember)\n else\n if days<=nFebruary\n then\n if days<=nJanuary\n then -- December\n (years,December,days-nDecember)\n else -- January\n (years+1,January,days-nJanuary)\n else -- February\n (years+1,February,days-nFebruary)\n\\end{code}\n","avg_line_length":31.3569892473,"max_line_length":79,"alphanum_fraction":0.6470749606} {"size":10780,"ext":"lhs","lang":"Literate Haskell","max_stars_count":6.0,"content":"%\n% (c) The University of Glasgow 2000-2006\n%\nByteCodeLink: Bytecode assembler and linker\n\n\\begin{code}\n{-# LANGUAGE BangPatterns #-}\n{-# LANGUAGE CPP #-}\n{-# LANGUAGE FlexibleInstances #-}\n{-# LANGUAGE MagicHash #-}\n{-# LANGUAGE MultiParamTypeClasses #-}\n{-# LANGUAGE UnboxedTuples #-}\n{-# OPTIONS_GHC -optc-DNON_POSIX_SOURCE 
#-}\n\nmodule ByteCodeLink (\n ClosureEnv, emptyClosureEnv, extendClosureEnv,\n linkBCO, lookupStaticPtr, lookupName\n ,lookupIE\n ) where\n\n#include \"HsVersions.h\"\n\nimport ByteCodeItbls\nimport ByteCodeAsm\nimport ObjLink\n\nimport DynFlags\nimport BasicTypes\nimport Name\nimport NameEnv\nimport PrimOp\nimport Module\nimport FastString\nimport Panic\nimport Outputable\nimport Util\n\n-- Standard libraries\n\nimport Data.Array.Base\n\nimport Control.Monad\nimport Control.Monad.ST ( stToIO )\n\nimport GHC.Arr ( Array(..), STArray(..) )\nimport GHC.IO ( IO(..) )\nimport GHC.Exts\nimport GHC.Ptr ( castPtr )\n\\end{code}\n\n\n%************************************************************************\n%* *\n\\subsection{Linking interpretables into something we can run}\n%* *\n%************************************************************************\n\n\\begin{code}\ntype ClosureEnv = NameEnv (Name, HValue)\n\nemptyClosureEnv :: ClosureEnv\nemptyClosureEnv = emptyNameEnv\n\nextendClosureEnv :: ClosureEnv -> [(Name,HValue)] -> ClosureEnv\nextendClosureEnv cl_env pairs\n = extendNameEnvList cl_env [ (n, (n,v)) | (n,v) <- pairs]\n\\end{code}\n\n\n%************************************************************************\n%* *\n\\subsection{Linking interpretables into something we can run}\n%* *\n%************************************************************************\n\n\\begin{code}\n{-\ndata BCO# = BCO# ByteArray# -- instrs :: Array Word16#\n ByteArray# -- literals :: Array Word32#\n PtrArray# -- ptrs :: Array HValue\n ByteArray# -- itbls :: Array Addr#\n-}\n\nlinkBCO :: DynFlags -> ItblEnv -> ClosureEnv -> UnlinkedBCO -> IO HValue\nlinkBCO dflags ie ce ul_bco\n = do BCO bco# <- linkBCO' dflags ie ce ul_bco\n -- SDM: Why do we need mkApUpd0 here? I *think* it's because\n -- otherwise top-level interpreted CAFs don't get updated\n -- after evaluation. 
A top-level BCO will evaluate itself and\n -- return its value when entered, but it won't update itself.\n -- Wrapping the BCO in an AP_UPD thunk will take care of the\n -- update for us.\n --\n -- Update: the above is true, but now we also have extra invariants:\n -- (a) An AP thunk *must* point directly to a BCO\n -- (b) A zero-arity BCO *must* be wrapped in an AP thunk\n -- (c) An AP is always fully saturated, so we *can't* wrap\n -- non-zero arity BCOs in an AP thunk.\n --\n if (unlinkedBCOArity ul_bco > 0)\n then return (HValue (unsafeCoerce# bco#))\n else case mkApUpd0# bco# of { (# final_bco #) -> return (HValue final_bco) }\n\n\nlinkBCO' :: DynFlags -> ItblEnv -> ClosureEnv -> UnlinkedBCO -> IO BCO\nlinkBCO' dflags ie ce (UnlinkedBCO _ arity insns_barr bitmap literalsSS ptrsSS)\n -- Raises an IO exception on failure\n = do let literals = ssElts literalsSS\n ptrs = ssElts ptrsSS\n\n linked_literals <- mapM (lookupLiteral dflags ie) literals\n\n let n_literals = sizeSS literalsSS\n n_ptrs = sizeSS ptrsSS\n\n ptrs_arr <- mkPtrsArray dflags ie ce n_ptrs ptrs\n\n let\n !ptrs_parr = case ptrs_arr of Array _lo _hi _n parr -> parr\n\n litRange\n | n_literals > 0 = (0, fromIntegral n_literals - 1)\n | otherwise = (1, 0)\n literals_arr :: UArray Word Word\n literals_arr = listArray litRange linked_literals\n !literals_barr = case literals_arr of UArray _lo _hi _n barr -> barr\n\n !(I# arity#) = arity\n\n newBCO insns_barr literals_barr ptrs_parr arity# bitmap\n\n\n-- we recursively link any sub-BCOs while making the ptrs array\nmkPtrsArray :: DynFlags -> ItblEnv -> ClosureEnv -> Word -> [BCOPtr] -> IO (Array Word HValue)\nmkPtrsArray dflags ie ce n_ptrs ptrs = do\n let ptrRange = if n_ptrs > 0 then (0, n_ptrs-1) else (1, 0)\n marr <- newArray_ ptrRange\n let\n fill (BCOPtrName n) i = do\n ptr <- lookupName ce n\n unsafeWrite marr i ptr\n fill (BCOPtrPrimOp op) i = do\n ptr <- lookupPrimOp op\n unsafeWrite marr i ptr\n fill (BCOPtrBCO ul_bco) i = do\n BCO bco# <- 
linkBCO' dflags ie ce ul_bco\n writeArrayBCO marr i bco#\n fill (BCOPtrBreakInfo brkInfo) i =\n unsafeWrite marr i (HValue (unsafeCoerce# brkInfo))\n fill (BCOPtrArray brkArray) i =\n unsafeWrite marr i (HValue (unsafeCoerce# brkArray))\n zipWithM_ fill ptrs [0..]\n unsafeFreeze marr\n\nnewtype IOArray i e = IOArray (STArray RealWorld i e)\n\ninstance MArray IOArray e IO where\n getBounds (IOArray marr) = stToIO $ getBounds marr\n getNumElements (IOArray marr) = stToIO $ getNumElements marr\n newArray lu init = stToIO $ do\n marr <- newArray lu init; return (IOArray marr)\n newArray_ lu = stToIO $ do\n marr <- newArray_ lu; return (IOArray marr)\n unsafeRead (IOArray marr) i = stToIO (unsafeRead marr i)\n unsafeWrite (IOArray marr) i e = stToIO (unsafeWrite marr i e)\n\n-- XXX HACK: we should really have a new writeArray# primop that takes a BCO#.\nwriteArrayBCO :: IOArray Word a -> Int -> BCO# -> IO ()\nwriteArrayBCO (IOArray (STArray _ _ _ marr#)) (I# i#) bco# = IO $ \\s# ->\n case (unsafeCoerce# writeArray#) marr# i# bco# s# of { s# ->\n (# s#, () #) }\n\n{-\nwriteArrayMBA :: IOArray Int a -> Int -> MutableByteArray# a -> IO ()\nwriteArrayMBA (IOArray (STArray _ _ marr#)) (I# i#) mba# = IO $ \\s# ->\n case (unsafeCoerce# writeArray#) marr# i# bco# s# of { s# ->\n (# s#, () #) }\n-}\n\ndata BCO = BCO BCO#\n\nnewBCO :: ByteArray# -> ByteArray# -> Array# a -> Int# -> ByteArray# -> IO BCO\nnewBCO instrs lits ptrs arity bitmap\n = IO $ \\s -> case newBCO# instrs lits ptrs arity bitmap s of\n (# s1, bco #) -> (# s1, BCO bco #)\n\n\nlookupLiteral :: DynFlags -> ItblEnv -> BCONPtr -> IO Word\nlookupLiteral _ _ (BCONPtrWord lit) = return lit\nlookupLiteral _ _ (BCONPtrLbl sym) = do Ptr a# <- lookupStaticPtr sym\n return (W# (int2Word# (addr2Int# a#)))\nlookupLiteral dflags ie (BCONPtrItbl nm) = do Ptr a# <- lookupIE dflags ie nm\n return (W# (int2Word# (addr2Int# a#)))\n\nlookupStaticPtr :: FastString -> IO (Ptr ())\nlookupStaticPtr addr_of_label_string\n = do let 
label_to_find = unpackFS addr_of_label_string\n m <- lookupSymbol label_to_find\n case m of\n Just ptr -> return ptr\n Nothing -> linkFail \"ByteCodeLink: can't find label\"\n label_to_find\n\nlookupPrimOp :: PrimOp -> IO HValue\nlookupPrimOp primop\n = do let sym_to_find = primopToCLabel primop \"closure\"\n m <- lookupSymbol sym_to_find\n case m of\n Just (Ptr addr) -> case addrToAny# addr of\n (# a #) -> return (HValue a)\n Nothing -> linkFail \"ByteCodeLink.lookupCE(primop)\" sym_to_find\n\nlookupName :: ClosureEnv -> Name -> IO HValue\nlookupName ce nm\n = case lookupNameEnv ce nm of\n Just (_,aa) -> return aa\n Nothing\n -> ASSERT2(isExternalName nm, ppr nm)\n do let sym_to_find = nameToCLabel nm \"closure\"\n m <- lookupSymbol sym_to_find\n case m of\n Just (Ptr addr) -> case addrToAny# addr of\n (# a #) -> return (HValue a)\n Nothing -> linkFail \"ByteCodeLink.lookupCE\" sym_to_find\n\nlookupIE :: DynFlags -> ItblEnv -> Name -> IO (Ptr a)\nlookupIE dflags ie con_nm\n = case lookupNameEnv ie con_nm of\n Just (_, a) -> return (castPtr (itblCode dflags a))\n Nothing\n -> do -- try looking up in the object files.\n let sym_to_find1 = nameToCLabel con_nm \"con_info\"\n m <- lookupSymbol sym_to_find1\n case m of\n Just addr -> return addr\n Nothing\n -> do -- perhaps a nullary constructor?\n let sym_to_find2 = nameToCLabel con_nm \"static_info\"\n n <- lookupSymbol sym_to_find2\n case n of\n Just addr -> return addr\n Nothing -> linkFail \"ByteCodeLink.lookupIE\"\n (sym_to_find1 ++ \" or \" ++ sym_to_find2)\n\nlinkFail :: String -> String -> IO a\nlinkFail who what\n = throwGhcExceptionIO (ProgramError $\n unlines [ \"\",who\n , \"During interactive linking, GHCi couldn't find the following symbol:\"\n , ' ' : ' ' : what\n , \"This may be due to you not asking GHCi to load extra object files,\"\n , \"archives or DLLs needed by your current session. 
Restart GHCi, specifying\"\n , \"the missing library using the -L\/path\/to\/object\/dir and -lmissinglibname\"\n , \"flags, or simply by naming the relevant files on the GHCi command line.\"\n , \"Alternatively, this link failure might indicate a bug in GHCi.\"\n , \"If you suspect the latter, please send a bug report to:\"\n , \" glasgow-haskell-bugs@haskell.org\"\n ])\n\n-- HACKS!!! ToDo: cleaner\nnameToCLabel :: Name -> String{-suffix-} -> String\nnameToCLabel n suffix\n = if pkgid \/= mainPackageKey\n then package_part ++ '_': qual_name\n else qual_name\n where\n pkgid = modulePackageKey mod\n mod = ASSERT( isExternalName n ) nameModule n\n package_part = zString (zEncodeFS (packageKeyFS (modulePackageKey mod)))\n module_part = zString (zEncodeFS (moduleNameFS (moduleName mod)))\n occ_part = zString (zEncodeFS (occNameFS (nameOccName n)))\n qual_name = module_part ++ '_':occ_part ++ '_':suffix\n\n\nprimopToCLabel :: PrimOp -> String{-suffix-} -> String\nprimopToCLabel primop suffix\n = let str = \"ghczmprim_GHCziPrimopWrappers_\" ++ zString (zEncodeFS (occNameFS (primOpOcc primop))) ++ '_':suffix\n in --trace (\"primopToCLabel: \" ++ str)\n str\n\\end{code}\n\n","avg_line_length":38.2269503546,"max_line_length":115,"alphanum_fraction":0.570593692} {"size":37022,"ext":"lhs","lang":"Literate Haskell","max_stars_count":2.0,"content":"%\n% (c) The University of Glasgow 2011\n%\n\nThe deriving code for the Generic class\n(equivalent to the code in TcGenDeriv, for other classes)\n\n\\begin{code}\n{-# LANGUAGE ScopedTypeVariables #-}\n{-# OPTIONS -fno-warn-tabs #-}\n-- The above warning supression flag is a temporary kludge.\n-- While working on this module you are encouraged to remove it and\n-- detab the module (please do the detabbing in a separate patch). 
See\n-- http:\/\/ghc.haskell.org\/trac\/ghc\/wiki\/Commentary\/CodingStyle#TabsvsSpaces\n-- for details\n\n\nmodule TcGenGenerics (canDoGenerics, canDoGenerics1,\n GenericKind(..),\n MetaTyCons, genGenericMetaTyCons,\n gen_Generic_binds, get_gen1_constrained_tys) where\n\nimport DynFlags\nimport HsSyn\nimport Type\nimport Kind ( isKind )\nimport TcType\nimport TcGenDeriv\nimport DataCon\nimport TyCon\nimport FamInstEnv ( FamInst, FamFlavor(..), mkSingleCoAxiom )\nimport FamInst\nimport Module ( Module, moduleName, moduleNameString )\nimport IfaceEnv ( newGlobalBinder )\nimport Name hiding ( varName )\nimport RdrName\nimport BasicTypes\nimport TysWiredIn\nimport PrelNames\nimport InstEnv\nimport TcEnv\nimport MkId\nimport TcRnMonad\nimport HscTypes\nimport BuildTyCl\nimport SrcLoc\nimport Bag\nimport VarSet (elemVarSet)\nimport Outputable \nimport FastString\nimport Util\n\nimport Control.Monad (mplus,forM)\n\n#include \"HsVersions.h\"\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Bindings for the new generic deriving mechanism}\n%* *\n%************************************************************************\n\nFor the generic representation we need to generate:\n\\begin{itemize}\n\\item A Generic instance\n\\item A Rep type instance \n\\item Many auxiliary datatypes and instances for them (for the meta-information)\n\\end{itemize}\n\n\\begin{code}\ngen_Generic_binds :: GenericKind -> TyCon -> MetaTyCons -> Module\n -> TcM (LHsBinds RdrName, FamInst)\ngen_Generic_binds gk tc metaTyCons mod = do\n repTyInsts <- tc_mkRepFamInsts gk tc metaTyCons mod\n return (mkBindsRep gk tc, repTyInsts)\n\ngenGenericMetaTyCons :: TyCon -> Module -> TcM (MetaTyCons, BagDerivStuff)\ngenGenericMetaTyCons tc mod =\n do loc <- getSrcSpanM\n let\n tc_name = tyConName tc\n tc_cons = tyConDataCons tc\n tc_arits = map dataConSourceArity tc_cons\n\n tc_occ = nameOccName tc_name\n d_occ = mkGenD tc_occ\n c_occ m = mkGenC tc_occ 
m\n s_occ m n = mkGenS tc_occ m n\n\n mkTyCon name = ASSERT( isExternalName name )\n buildAlgTyCon name [] [] Nothing [] distinctAbstractTyConRhs\n NonRecursive \n False -- Not promotable\n False -- Not GADT syntax\n NoParentTyCon\n\n d_name <- newGlobalBinder mod d_occ loc\n c_names <- forM (zip [0..] tc_cons) $ \\(m,_) ->\n newGlobalBinder mod (c_occ m) loc\n s_names <- forM (zip [0..] tc_arits) $ \\(m,a) -> forM [0..a-1] $ \\n ->\n newGlobalBinder mod (s_occ m n) loc\n\n let metaDTyCon = mkTyCon d_name\n metaCTyCons = map mkTyCon c_names\n metaSTyCons = map (map mkTyCon) s_names\n\n metaDts = MetaTyCons metaDTyCon metaCTyCons metaSTyCons\n\n -- pprTrace \"rep0\" (ppr rep0_tycon) $\n (,) metaDts `fmap` metaTyConsToDerivStuff tc metaDts\n\n-- both the tycon declarations and related instances\nmetaTyConsToDerivStuff :: TyCon -> MetaTyCons -> TcM BagDerivStuff\nmetaTyConsToDerivStuff tc metaDts =\n do loc <- getSrcSpanM\n dflags <- getDynFlags\n dClas <- tcLookupClass datatypeClassName\n let new_dfun_name clas tycon = newDFunName clas [mkTyConApp tycon []] loc\n d_dfun_name <- new_dfun_name dClas tc\n cClas <- tcLookupClass constructorClassName\n c_dfun_names <- sequence [ new_dfun_name cClas tc | _ <- metaC metaDts ]\n sClas <- tcLookupClass selectorClassName\n s_dfun_names <- sequence (map sequence [ [ new_dfun_name sClas tc \n | _ <- x ] \n | x <- metaS metaDts ])\n fix_env <- getFixityEnv\n\n let\n safeOverlap = safeLanguageOn dflags\n (dBinds,cBinds,sBinds) = mkBindsMetaD fix_env tc\n mk_inst clas tc dfun_name \n = mkLocalInstance (mkDictFunId dfun_name [] [] clas tys)\n (NoOverlap safeOverlap)\n [] clas tys\n where\n tys = [mkTyConTy tc]\n \n -- Datatype\n d_metaTycon = metaD metaDts\n d_inst = mk_inst dClas d_metaTycon d_dfun_name\n d_binds = InstBindings { ib_binds = dBinds\n , ib_pragmas = []\n , ib_standalone_deriving = False }\n d_mkInst = DerivInst (InstInfo { iSpec = d_inst, iBinds = d_binds })\n \n -- Constructor\n c_metaTycons = metaC metaDts\n 
c_insts = [ mk_inst cClas c ds\n | (c, ds) <- myZip1 c_metaTycons c_dfun_names ]\n c_binds = [ InstBindings { ib_binds = c\n , ib_pragmas = []\n , ib_standalone_deriving = False }\n | c <- cBinds ]\n c_mkInst = [ DerivInst (InstInfo { iSpec = is, iBinds = bs })\n | (is,bs) <- myZip1 c_insts c_binds ]\n \n -- Selector\n s_metaTycons = metaS metaDts\n s_insts = map (map (\\(s,ds) -> mk_inst sClas s ds))\n (myZip2 s_metaTycons s_dfun_names)\n s_binds = [ [ InstBindings { ib_binds = s\n , ib_pragmas = []\n , ib_standalone_deriving = False }\n | s <- ss ] | ss <- sBinds ]\n s_mkInst = map (map (\\(is,bs) -> DerivInst (InstInfo { iSpec = is\n , iBinds = bs})))\n (myZip2 s_insts s_binds)\n \n myZip1 :: [a] -> [b] -> [(a,b)]\n myZip1 l1 l2 = ASSERT(length l1 == length l2) zip l1 l2\n \n myZip2 :: [[a]] -> [[b]] -> [[(a,b)]]\n myZip2 l1 l2 =\n ASSERT(and (zipWith (>=) (map length l1) (map length l2)))\n [ zip x1 x2 | (x1,x2) <- zip l1 l2 ]\n \n return $ mapBag DerivTyCon (metaTyCons2TyCons metaDts)\n `unionBags` listToBag (d_mkInst : c_mkInst ++ concat s_mkInst)\n\\end{code}\n\n%************************************************************************\n%* *\n\\subsection{Generating representation types}\n%* *\n%************************************************************************\n\n\\begin{code}\nget_gen1_constrained_tys :: TyVar -> [Type] -> [Type]\n-- called by TcDeriv.inferConstraints; generates a list of types, each of which\n-- must be a Functor in order for the Generic1 instance to work.\nget_gen1_constrained_tys argVar =\n concatMap $ argTyFold argVar $ ArgTyAlg {\n ata_rec0 = const [],\n ata_par1 = [], ata_rec1 = const [],\n ata_comp = (:)}\n\n{-\n\nNote [Requirements for deriving Generic and Rep]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIn the following, T, Tfun, and Targ are \"meta-variables\" ranging over type\nexpressions.\n\n(Generic T) and (Rep T) are derivable for some type expression T if the\nfollowing constraints are satisfied.\n\n (a) T = 
(D v1 ... vn) with free variables v1, v2, ..., vn where n >= 0 v1\n ... vn are distinct type variables. Cf #5939.\n\n (b) D is a type constructor *value*. In other words, D is either a type\n constructor or it is equivalent to the head of a data family instance (up to\n alpha-renaming).\n\n (c) D cannot have a \"stupid context\".\n\n (d) The right-hand side of D cannot include unboxed types, existential types,\n or universally quantified types.\n\n (e) T :: *.\n\n(Generic1 T) and (Rep1 T) are derivable for some type expression T if the\nfollowing constraints are satisfied.\n\n (a),(b),(c),(d) As above.\n\n (f) T must expect arguments, and its last parameter must have kind *.\n\n We use `a' to denote the parameter of D that corresponds to the last\n parameter of T.\n\n (g) For any type-level application (Tfun Targ) in the right-hand side of D\n where the head of Tfun is not a tuple constructor:\n\n (b1) `a' must not occur in Tfun.\n\n (b2) If `a' occurs in Targ, then Tfun :: * -> *.\n\n-}\n\ncanDoGenerics :: TyCon -> [Type] -> Maybe SDoc\n-- canDoGenerics rep_tc tc_args determines if Generic\/Rep can be derived for a\n-- type expression (rep_tc tc_arg0 tc_arg1 ... tc_argn).\n--\n-- Check (b) from Note [Requirements for deriving Generic and Rep] is taken\n-- care of because canDoGenerics is applied to rep tycons.\n--\n-- It returns Nothing if deriving is possible. 
It returns (Just reason) if not.\ncanDoGenerics tc tc_args\n = mergeErrors (\n -- Check (c) from Note [Requirements for deriving Generic and Rep].\n (if (not (null (tyConStupidTheta tc)))\n then (Just (tc_name <+> text \"must not have a datatype context\"))\n else Nothing) :\n -- Check (a) from Note [Requirements for deriving Generic and Rep].\n --\n -- Data family indices can be instantiated; the `tc_args` here are\n -- the representation tycon args\n (if (all isTyVarTy (filterOut isKind tc_args))\n then Nothing\n else Just (tc_name <+> text \"must not be instantiated;\" <+>\n text \"try deriving `\" <> tc_name <+> tc_tys <>\n text \"' instead\"))\n -- See comment below\n : (map bad_con (tyConDataCons tc)))\n where\n -- The tc can be a representation tycon. When we want to display it to the\n -- user (in an error message) we should print its parent\n (tc_name, tc_tys) = case tyConParent tc of\n FamInstTyCon _ ptc tys -> (ppr ptc, hsep (map ppr\n (tys ++ drop (length tys) tc_args)))\n _ -> (ppr tc, hsep (map ppr (tyConTyVars tc)))\n\n -- Check (d) from Note [Requirements for deriving Generic and Rep].\n --\n -- If any of the constructors has an unboxed type as argument,\n -- then we can't build the embedding-projection pair, because\n -- it relies on instantiating *polymorphic* sum and product types\n -- at the argument types of the constructors\n bad_con dc = if (any bad_arg_type (dataConOrigArgTys dc))\n then (Just (ppr dc <+> text \"must not have unlifted or polymorphic arguments\"))\n else (if (not (isVanillaDataCon dc))\n then (Just (ppr dc <+> text \"must be a vanilla data constructor\"))\n else Nothing)\n\n\t-- Nor can we do the job if it's an existential data constructor,\n\t-- Nor if the args are polymorphic types (I don't think)\n bad_arg_type ty = isUnLiftedType ty || not (isTauTy ty)\n\nmergeErrors :: [Maybe SDoc] -> Maybe SDoc\nmergeErrors [] = Nothing\nmergeErrors ((Just s):t) = case mergeErrors t of\n Nothing -> Just s\n Just s' -> Just (s <> text 
\", and\" $$ s')\nmergeErrors (Nothing :t) = mergeErrors t\n\n-- A datatype used only inside of canDoGenerics1. It's the result of analysing\n-- a type term.\ndata Check_for_CanDoGenerics1 = CCDG1\n { _ccdg1_hasParam :: Bool -- does the parameter of interest occurs in\n -- this type?\n , _ccdg1_errors :: Maybe SDoc -- errors generated by this type\n }\n\n{-\n\nNote [degenerate use of FFoldType]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWe use foldDataConArgs here only for its ability to treat tuples\nspecially. foldDataConArgs also tracks covariance (though it assumes all\nhigher-order type parameters are covariant) and has hooks for special handling\nof functions and polytypes, but we do *not* use those.\n\nThe key issue is that Generic1 deriving currently offers no sophisticated\nsupport for functions. For example, we cannot handle\n\n data F a = F ((a -> Int) -> Int)\n\neven though a is occurring covariantly.\n\nIn fact, our rule is harsh: a is simply not allowed to occur within the first\nargument of (->). We treat (->) the same as any other non-tuple tycon.\n\nUnfortunately, this means we have to track \"the parameter occurs in this type\"\nexplicitly, even though foldDataConArgs is also doing this internally.\n\n-}\n\n-- canDoGenerics1 rep_tc tc_args determines if a Generic1\/Rep1 can be derived\n-- for a type expression (rep_tc tc_arg0 tc_arg1 ... tc_argn).\n--\n-- Checks (a) through (d) from Note [Requirements for deriving Generic and Rep]\n-- are taken care of by the call to canDoGenerics.\n--\n-- It returns Nothing if deriving is possible. 
It returns (Just reason) if not.\ncanDoGenerics1 :: TyCon -> [Type] -> Maybe SDoc\ncanDoGenerics1 rep_tc tc_args =\n canDoGenerics rep_tc tc_args `mplus` additionalChecks\n where\n additionalChecks\n -- check (f) from Note [Requirements for deriving Generic and Rep]\n | null (tyConTyVars rep_tc) = Just $\n ptext (sLit \"Data type\") <+> quotes (ppr rep_tc)\n <+> ptext (sLit \"must have some type parameters\")\n\n | otherwise = mergeErrors $ concatMap check_con data_cons\n\n data_cons = tyConDataCons rep_tc\n check_con con = case check_vanilla con of\n j@(Just _) -> [j]\n Nothing -> _ccdg1_errors `map` foldDataConArgs (ft_check con) con\n\n bad :: DataCon -> SDoc -> SDoc\n bad con msg = ptext (sLit \"Constructor\") <+> quotes (ppr con) <+> msg\n\n check_vanilla :: DataCon -> Maybe SDoc\n check_vanilla con | isVanillaDataCon con = Nothing\n | otherwise = Just (bad con existential)\n\n bmzero = CCDG1 False Nothing\n bmbad con s = CCDG1 True $ Just $ bad con s\n bmplus (CCDG1 b1 m1) (CCDG1 b2 m2) = CCDG1 (b1 || b2) (mplus m1 m2)\n\n -- check (g) from Note [degenerate use of FFoldType]\n ft_check :: DataCon -> FFoldType Check_for_CanDoGenerics1\n ft_check con = FT\n { ft_triv = bmzero\n\n , ft_var = caseVar, ft_co_var = caseVar\n\n -- (component_0,component_1,...,component_n)\n , ft_tup = \\_ components -> if any _ccdg1_hasParam (init components)\n then bmbad con wrong_arg\n else foldr bmplus bmzero components\n\n -- (dom -> rng), where the head of ty is not a tuple tycon\n , ft_fun = \\dom rng -> -- cf #8516\n if _ccdg1_hasParam dom\n then bmbad con wrong_arg\n else bmplus dom rng\n\n -- (ty arg), where head of ty is neither (->) nor a tuple constructor and\n -- the parameter of interest does not occur in ty\n , ft_ty_app = \\_ arg -> arg\n\n , ft_bad_app = bmbad con wrong_arg\n , ft_forall = \\_ body -> body -- polytypes are handled elsewhere\n }\n where\n caseVar = CCDG1 True Nothing\n\n\n existential = text \"must not have existential arguments\"\n wrong_arg = text 
\"applies a type to an argument involving the last parameter\"\n $$ text \"but the applied type is not of kind * -> *\"\n\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\\subsection{Generating the RHS of a generic default method}\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\ntype US = Int\t-- Local unique supply, just a plain Int\ntype Alt = (LPat RdrName, LHsExpr RdrName)\n\n-- GenericKind serves to mark if a datatype derives Generic (Gen0) or\n-- Generic1 (Gen1).\ndata GenericKind = Gen0 | Gen1\n\n-- as above, but with a payload of the TyCon's name for \"the\" parameter\ndata GenericKind_ = Gen0_ | Gen1_ TyVar\n\n-- as above, but using a single datacon's name for \"the\" parameter\ndata GenericKind_DC = Gen0_DC | Gen1_DC TyVar\n\nforgetArgVar :: GenericKind_DC -> GenericKind\nforgetArgVar Gen0_DC = Gen0\nforgetArgVar Gen1_DC{} = Gen1\n\n-- When working only within a single datacon, \"the\" parameter's name should\n-- match that datacon's name for it.\ngk2gkDC :: GenericKind_ -> DataCon -> GenericKind_DC\ngk2gkDC Gen0_ _ = Gen0_DC\ngk2gkDC Gen1_{} d = Gen1_DC $ last $ dataConUnivTyVars d\n\n\n\n-- Bindings for the Generic instance\nmkBindsRep :: GenericKind -> TyCon -> LHsBinds RdrName\nmkBindsRep gk tycon = \n unitBag (L loc (mkFunBind (L loc from01_RDR) from_matches))\n `unionBags`\n unitBag (L loc (mkFunBind (L loc to01_RDR) to_matches))\n where\n from_matches = [mkSimpleHsAlt pat rhs | (pat,rhs) <- from_alts]\n to_matches = [mkSimpleHsAlt pat rhs | (pat,rhs) <- to_alts ]\n loc = srcLocSpan (getSrcLoc tycon)\n datacons = tyConDataCons tycon\n\n (from01_RDR, to01_RDR) = case gk of\n Gen0 -> (from_RDR, to_RDR)\n Gen1 -> (from1_RDR, to1_RDR)\n\n -- Recurse over the sum first\n from_alts, to_alts :: [Alt]\n (from_alts, to_alts) = mkSum gk_ (1 :: US) tycon datacons\n where gk_ = case gk of\n Gen0 -> Gen0_\n Gen1 -> ASSERT(length 
tyvars >= 1)\n Gen1_ (last tyvars)\n where tyvars = tyConTyVars tycon\n \n--------------------------------------------------------------------------------\n-- The type synonym instance and synonym\n-- type instance Rep (D a b) = Rep_D a b\n-- type Rep_D a b = ...representation type for D ...\n--------------------------------------------------------------------------------\n\ntc_mkRepFamInsts :: GenericKind -- Gen0 or Gen1\n -> TyCon -- The type to generate representation for\n -> MetaTyCons -- Metadata datatypes to refer to\n -> Module -- Used as the location of the new RepTy\n -> TcM (FamInst) -- Generated representation0 coercion\ntc_mkRepFamInsts gk tycon metaDts mod = \n -- Consider the example input tycon `D`, where data D a b = D_ a\n -- Also consider `R:DInt`, where { data family D x y :: * -> *\n -- ; data instance D Int a b = D_ a }\n do { -- `rep` = GHC.Generics.Rep or GHC.Generics.Rep1 (type family)\n fam_tc <- case gk of\n Gen0 -> tcLookupTyCon repTyConName\n Gen1 -> tcLookupTyCon rep1TyConName\n\n ; let -- `tyvars` = [a,b]\n (tyvars, gk_) = case gk of\n Gen0 -> (all_tyvars, Gen0_)\n Gen1 -> ASSERT(not $ null all_tyvars)\n (init all_tyvars, Gen1_ $ last all_tyvars)\n where all_tyvars = tyConTyVars tycon\n\n tyvar_args = mkTyVarTys tyvars\n\n appT :: [Type]\n appT = case tyConFamInst_maybe tycon of\n -- `appT` = D Int a b (data families case)\n Just (famtycon, apps) ->\n -- `fam` = D\n -- `apps` = [Int, a]\n let allApps = apps ++\n drop (length apps + length tyvars\n - tyConArity famtycon) tyvar_args\n in [mkTyConApp famtycon allApps]\n -- `appT` = D a b (normal case)\n Nothing -> [mkTyConApp tycon tyvar_args]\n\n -- `repTy` = D1 ... (C1 ... (S1 ... 
(Rec0 a))) :: * -> *\n ; repTy <- tc_mkRepTy gk_ tycon metaDts\n \n -- `rep_name` is a name we generate for the synonym\n ; rep_name <- let mkGen = case gk of Gen0 -> mkGenR; Gen1 -> mkGen1R\n in newGlobalBinder mod (mkGen (nameOccName (tyConName tycon)))\n (nameSrcSpan (tyConName tycon))\n\n ; let axiom = mkSingleCoAxiom rep_name tyvars fam_tc appT repTy\n ; newFamInst SynFamilyInst axiom }\n\n--------------------------------------------------------------------------------\n-- Type representation\n--------------------------------------------------------------------------------\n\n-- | See documentation of 'argTyFold'; that function uses the fields of this\n-- type to interpret the structure of a type when that type is considered as an\n-- argument to a constructor that is being represented with 'Rep1'.\ndata ArgTyAlg a = ArgTyAlg\n { ata_rec0 :: (Type -> a)\n , ata_par1 :: a, ata_rec1 :: (Type -> a)\n , ata_comp :: (Type -> a -> a)\n }\n\n-- | @argTyFold@ implements a generalised and safer variant of the @arg@\n-- function from Figure 3 in . @arg@\n-- is conceptually equivalent to:\n--\n-- > arg t = case t of\n-- > _ | isTyVar t -> if (t == argVar) then Par1 else Par0 t\n-- > App f [t'] |\n-- > representable1 f &&\n-- > t' == argVar -> Rec1 f\n-- > App f [t'] |\n-- > representable1 f &&\n-- > t' has tyvars -> f :.: (arg t')\n-- > _ -> Rec0 t\n--\n-- where @argVar@ is the last type variable in the data type declaration we are\n-- finding the representation for.\n--\n-- @argTyFold@ is more general than @arg@ because it uses 'ArgTyAlg' to\n-- abstract out the concrete invocations of @Par0@, @Rec0@, @Par1@, @Rec1@, and\n-- @:.:@.\n--\n-- @argTyFold@ is safer than @arg@ because @arg@ would lead to a GHC panic for\n-- some data types. The problematic case is when @t@ is an application of a\n-- non-representable type @f@ to @argVar@: @App f [argVar]@ is caught by the\n-- @_@ pattern, and ends up represented as @Rec0 t@. 
This type occurs \/free\/ in\n-- the RHS of the eventual @Rep1@ instance, which is therefore ill-formed. Some\n-- representable1 checks have been relaxed, and others were moved to\n-- @canDoGenerics1@.\nargTyFold :: forall a. TyVar -> ArgTyAlg a -> Type -> a\nargTyFold argVar (ArgTyAlg {ata_rec0 = mkRec0,\n ata_par1 = mkPar1, ata_rec1 = mkRec1,\n ata_comp = mkComp}) =\n -- mkRec0 is the default; use it if there is no interesting structure\n -- (e.g. occurrences of parameters or recursive occurrences)\n \\t -> maybe (mkRec0 t) id $ go t where\n go :: Type -> -- type to fold through\n Maybe a -- the result (e.g. representation type), unless it's trivial\n go t = isParam `mplus` isApp where\n\n isParam = do -- handles parameters\n t' <- getTyVar_maybe t\n Just $ if t' == argVar then mkPar1 -- moreover, it is \"the\" parameter\n else mkRec0 t -- NB mkRec0 instead of the conventional mkPar0\n\n isApp = do -- handles applications\n (phi, beta) <- tcSplitAppTy_maybe t\n\n let interesting = argVar `elemVarSet` exactTyVarsOfType beta\n\n -- Does it have no interesting structure to represent?\n if not interesting then Nothing\n else -- Is the argument the parameter? 
Special case for mkRec1.\n if Just argVar == getTyVar_maybe beta then Just $ mkRec1 phi\n else mkComp phi `fmap` go beta -- It must be a composition.\n\n\ntc_mkRepTy :: -- Gen0_ or Gen1_, for Rep or Rep1\n GenericKind_\n -- The type to generate representation for\n -> TyCon\n -- Metadata datatypes to refer to\n -> MetaTyCons \n -- Generated representation0 type\n -> TcM Type\ntc_mkRepTy gk_ tycon metaDts = \n do\n d1 <- tcLookupTyCon d1TyConName\n c1 <- tcLookupTyCon c1TyConName\n s1 <- tcLookupTyCon s1TyConName\n nS1 <- tcLookupTyCon noSelTyConName\n rec0 <- tcLookupTyCon rec0TyConName\n rec1 <- tcLookupTyCon rec1TyConName\n par1 <- tcLookupTyCon par1TyConName\n u1 <- tcLookupTyCon u1TyConName\n v1 <- tcLookupTyCon v1TyConName\n plus <- tcLookupTyCon sumTyConName\n times <- tcLookupTyCon prodTyConName\n comp <- tcLookupTyCon compTyConName\n \n let mkSum' a b = mkTyConApp plus [a,b]\n mkProd a b = mkTyConApp times [a,b]\n mkComp a b = mkTyConApp comp [a,b]\n mkRec0 a = mkTyConApp rec0 [a]\n mkRec1 a = mkTyConApp rec1 [a]\n mkPar1 = mkTyConTy par1\n mkD a = mkTyConApp d1 [metaDTyCon, sumP (tyConDataCons a)]\n mkC i d a = mkTyConApp c1 [d, prod i (dataConInstOrigArgTys a $ mkTyVarTys $ tyConTyVars tycon)\n (null (dataConFieldLabels a))]\n -- This field has no label\n mkS True _ a = mkTyConApp s1 [mkTyConTy nS1, a]\n -- This field has a label\n mkS False d a = mkTyConApp s1 [d, a]\n \n -- Sums and products are done in the same way for both Rep and Rep1\n sumP [] = mkTyConTy v1\n sumP l = ASSERT(length metaCTyCons == length l)\n foldBal mkSum' [ mkC i d a\n | (d,(a,i)) <- zip metaCTyCons (zip l [0..])]\n -- The Bool is True if this constructor has labelled fields\n prod :: Int -> [Type] -> Bool -> Type\n prod i [] _ = ASSERT(length metaSTyCons > i)\n ASSERT(length (metaSTyCons !! i) == 0)\n mkTyConTy u1\n prod i l b = ASSERT(length metaSTyCons > i)\n ASSERT(length l == length (metaSTyCons !! i))\n foldBal mkProd [ arg d t b\n | (d,t) <- zip (metaSTyCons !! 
i) l ]\n \n    arg :: Type -> Type -> Bool -> Type\n    arg d t b = mkS b d $ case gk_ of \n      -- Here we previously used Par0 if t was a type variable, but we\n      -- realized that we can't always guarantee that we are wrapping-up\n      -- all type variables in Par0. So we decided to stop using Par0\n      -- altogether, and use Rec0 all the time.\n      Gen0_        -> mkRec0 t\n      Gen1_ argVar -> argPar argVar t\n      where\n        -- Builds argument representation for Rep1 (more complicated due to\n        -- the presence of composition).\n        argPar argVar = argTyFold argVar $ ArgTyAlg\n          {ata_rec0 = mkRec0, ata_par1 = mkPar1,\n           ata_rec1 = mkRec1, ata_comp = mkComp}\n \n \n    metaDTyCon  = mkTyConTy (metaD metaDts)\n    metaCTyCons = map mkTyConTy (metaC metaDts)\n    metaSTyCons = map (map mkTyConTy) (metaS metaDts)\n        \n    return (mkD tycon)\n\n--------------------------------------------------------------------------------\n-- Meta-information\n--------------------------------------------------------------------------------\n\ndata MetaTyCons = MetaTyCons { -- One meta datatype per datatype\n                               metaD :: TyCon\n                               -- One meta datatype per constructor\n                             , metaC :: [TyCon]\n                               -- One meta datatype per selector per constructor\n                             , metaS :: [[TyCon]] }\n                             \ninstance Outputable MetaTyCons where\n  ppr (MetaTyCons d c s) = ppr d $$ vcat (map ppr c) $$ vcat (map ppr (concat s))\n                                   \nmetaTyCons2TyCons :: MetaTyCons -> Bag TyCon\nmetaTyCons2TyCons (MetaTyCons d c s) = listToBag (d : c ++ concat s)\n\n\n-- Bindings for Datatype, Constructor, and Selector instances\nmkBindsMetaD :: FixityEnv -> TyCon \n             -> ( LHsBinds RdrName      -- Datatype instance\n                , [LHsBinds RdrName]    -- Constructor instances\n                , [[LHsBinds RdrName]]) -- Selector instances\nmkBindsMetaD fix_env tycon = (dtBinds, allConBinds, allSelBinds)\n      where\n        mkBag l = foldr1 unionBags \n                    [ unitBag (L loc (mkFunBind (L loc name) matches)) \n                        | (name, matches) <- l ]\n        dtBinds       = mkBag ( [ (datatypeName_RDR, dtName_matches)\n                                , (moduleName_RDR, moduleName_matches)]\n                                ++ ifElseEmpty (isNewTyCon 
tycon)\n [ (isNewtypeName_RDR, isNewtype_matches) ] )\n\n allConBinds = map conBinds datacons\n conBinds c = mkBag ( [ (conName_RDR, conName_matches c)]\n ++ ifElseEmpty (dataConIsInfix c)\n [ (conFixity_RDR, conFixity_matches c) ]\n ++ ifElseEmpty (length (dataConFieldLabels c) > 0)\n [ (conIsRecord_RDR, conIsRecord_matches c) ]\n )\n\n ifElseEmpty p x = if p then x else []\n fixity c = case lookupFixity fix_env (dataConName c) of\n Fixity n InfixL -> buildFix n leftAssocDataCon_RDR\n Fixity n InfixR -> buildFix n rightAssocDataCon_RDR\n Fixity n InfixN -> buildFix n notAssocDataCon_RDR\n buildFix n assoc = nlHsApps infixDataCon_RDR [nlHsVar assoc\n , nlHsIntLit (toInteger n)]\n\n allSelBinds = map (map selBinds) datasels\n selBinds s = mkBag [(selName_RDR, selName_matches s)]\n\n loc = srcLocSpan (getSrcLoc tycon)\n mkStringLHS s = [mkSimpleHsAlt nlWildPat (nlHsLit (mkHsString s))]\n datacons = tyConDataCons tycon\n datasels = map dataConFieldLabels datacons\n\n tyConName_user = case tyConFamInst_maybe tycon of\n Just (ptycon, _) -> tyConName ptycon\n Nothing -> tyConName tycon\n\n dtName_matches = mkStringLHS . occNameString . nameOccName\n $ tyConName_user\n moduleName_matches = mkStringLHS . moduleNameString . moduleName \n . nameModule . tyConName $ tycon\n isNewtype_matches = [mkSimpleHsAlt nlWildPat (nlHsVar true_RDR)]\n\n conName_matches c = mkStringLHS . occNameString . nameOccName\n . 
dataConName $ c\n conFixity_matches c = [mkSimpleHsAlt nlWildPat (fixity c)]\n conIsRecord_matches _ = [mkSimpleHsAlt nlWildPat (nlHsVar true_RDR)]\n\n selName_matches s = mkStringLHS (occNameString (nameOccName s))\n\n\n--------------------------------------------------------------------------------\n-- Dealing with sums\n--------------------------------------------------------------------------------\n\nmkSum :: GenericKind_ -- Generic or Generic1?\n -> US -- Base for generating unique names\n -> TyCon -- The type constructor\n -> [DataCon] -- The data constructors\n -> ([Alt], -- Alternatives for the T->Trep \"from\" function\n [Alt]) -- Alternatives for the Trep->T \"to\" function\n\n-- Datatype without any constructors\nmkSum _ _ tycon [] = ([from_alt], [to_alt])\n where\n from_alt = (nlWildPat, mkM1_E (makeError errMsgFrom))\n to_alt = (mkM1_P nlWildPat, makeError errMsgTo)\n -- These M1s are meta-information for the datatype\n makeError s = nlHsApp (nlHsVar error_RDR) (nlHsLit (mkHsString s))\n tyConStr = occNameString (nameOccName (tyConName tycon))\n errMsgFrom = \"No generic representation for empty datatype \" ++ tyConStr\n errMsgTo = \"No values for empty datatype \" ++ tyConStr\n\n-- Datatype with at least one constructor\nmkSum gk_ us _ datacons =\n -- switch the payload of gk_ to be datacon-centric instead of tycon-centric\n unzip [ mk1Sum (gk2gkDC gk_ d) us i (length datacons) d\n | (d,i) <- zip datacons [1..] 
]\n\n-- Build the sum for a particular constructor\nmk1Sum :: GenericKind_DC -- Generic or Generic1?\n -> US -- Base for generating unique names\n -> Int -- The index of this constructor\n -> Int -- Total number of constructors\n -> DataCon -- The data constructor\n -> (Alt, -- Alternative for the T->Trep \"from\" function\n Alt) -- Alternative for the Trep->T \"to\" function\nmk1Sum gk_ us i n datacon = (from_alt, to_alt)\n where\n gk = forgetArgVar gk_\n\n -- Existentials already excluded\n argTys = dataConOrigArgTys datacon\n n_args = dataConSourceArity datacon\n\n datacon_varTys = zip (map mkGenericLocal [us .. us+n_args-1]) argTys\n datacon_vars = map fst datacon_varTys\n us' = us + n_args\n\n datacon_rdr = getRdrName datacon\n \n from_alt = (nlConVarPat datacon_rdr datacon_vars, from_alt_rhs)\n from_alt_rhs = mkM1_E (genLR_E i n (mkProd_E gk_ us' datacon_varTys))\n \n to_alt = (mkM1_P (genLR_P i n (mkProd_P gk us' datacon_vars)), to_alt_rhs)\n -- These M1s are meta-information for the datatype\n to_alt_rhs = case gk_ of\n Gen0_DC -> nlHsVarApps datacon_rdr datacon_vars\n Gen1_DC argVar -> nlHsApps datacon_rdr $ map argTo datacon_varTys\n where\n argTo (var, ty) = converter ty `nlHsApp` nlHsVar var where\n converter = argTyFold argVar $ ArgTyAlg\n {ata_rec0 = const $ nlHsVar unK1_RDR,\n ata_par1 = nlHsVar unPar1_RDR,\n ata_rec1 = const $ nlHsVar unRec1_RDR,\n ata_comp = \\_ cnv -> (nlHsVar fmap_RDR `nlHsApp` cnv)\n `nlHsCompose` nlHsVar unComp1_RDR}\n\n\n\n-- Generates the L1\/R1 sum pattern\ngenLR_P :: Int -> Int -> LPat RdrName -> LPat RdrName\ngenLR_P i n p\n | n == 0 = error \"impossible\"\n | n == 1 = p\n | i <= div n 2 = nlConPat l1DataCon_RDR [genLR_P i (div n 2) p]\n | otherwise = nlConPat r1DataCon_RDR [genLR_P (i-m) (n-m) p]\n where m = div n 2\n\n-- Generates the L1\/R1 sum expression\ngenLR_E :: Int -> Int -> LHsExpr RdrName -> LHsExpr RdrName\ngenLR_E i n e\n | n == 0 = error \"impossible\"\n | n == 1 = e\n | i <= div n 2 = nlHsVar l1DataCon_RDR 
`nlHsApp` genLR_E i (div n 2) e\n | otherwise = nlHsVar r1DataCon_RDR `nlHsApp` genLR_E (i-m) (n-m) e\n where m = div n 2\n\n--------------------------------------------------------------------------------\n-- Dealing with products\n--------------------------------------------------------------------------------\n\n-- Build a product expression\nmkProd_E :: GenericKind_DC -- Generic or Generic1?\n -> US\t -- Base for unique names\n -> [(RdrName, Type)] -- List of variables matched on the lhs and their types\n\t -> LHsExpr RdrName -- Resulting product expression\nmkProd_E _ _ [] = mkM1_E (nlHsVar u1DataCon_RDR)\nmkProd_E gk_ _ varTys = mkM1_E (foldBal prod appVars)\n -- These M1s are meta-information for the constructor\n where\n appVars = map (wrapArg_E gk_) varTys\n prod a b = prodDataCon_RDR `nlHsApps` [a,b]\n\nwrapArg_E :: GenericKind_DC -> (RdrName, Type) -> LHsExpr RdrName\nwrapArg_E Gen0_DC (var, _) = mkM1_E (k1DataCon_RDR `nlHsVarApps` [var])\n -- This M1 is meta-information for the selector\nwrapArg_E (Gen1_DC argVar) (var, ty) = mkM1_E $ converter ty `nlHsApp` nlHsVar var\n -- This M1 is meta-information for the selector\n where converter = argTyFold argVar $ ArgTyAlg\n {ata_rec0 = const $ nlHsVar k1DataCon_RDR,\n ata_par1 = nlHsVar par1DataCon_RDR,\n ata_rec1 = const $ nlHsVar rec1DataCon_RDR,\n ata_comp = \\_ cnv -> nlHsVar comp1DataCon_RDR `nlHsCompose`\n (nlHsVar fmap_RDR `nlHsApp` cnv)}\n\n\n\n-- Build a product pattern\nmkProd_P :: GenericKind -- Gen0 or Gen1\n -> US\t\t -- Base for unique names\n\t -> [RdrName] -- List of variables to match\n\t -> LPat RdrName -- Resulting product pattern\nmkProd_P _ _ [] = mkM1_P (nlNullaryConPat u1DataCon_RDR)\nmkProd_P gk _ vars = mkM1_P (foldBal prod appVars)\n -- These M1s are meta-information for the constructor\n where\n appVars = map (wrapArg_P gk) vars\n prod a b = prodDataCon_RDR `nlConPat` [a,b]\n\nwrapArg_P :: GenericKind -> RdrName -> LPat RdrName\nwrapArg_P Gen0 v = mkM1_P (k1DataCon_RDR `nlConVarPat` 
[v])\n                   -- This M1 is meta-information for the selector\nwrapArg_P Gen1 v = m1DataCon_RDR `nlConVarPat` [v]\n\nmkGenericLocal :: US -> RdrName\nmkGenericLocal u = mkVarUnqual (mkFastString ("g" ++ show u))\n\nmkM1_E :: LHsExpr RdrName -> LHsExpr RdrName\nmkM1_E e = nlHsVar m1DataCon_RDR `nlHsApp` e\n\nmkM1_P :: LPat RdrName -> LPat RdrName\nmkM1_P p = m1DataCon_RDR `nlConPat` [p]\n\nnlHsCompose :: LHsExpr RdrName -> LHsExpr RdrName -> LHsExpr RdrName\nnlHsCompose x y = compose_RDR `nlHsApps` [x, y]\n\n-- | Variant of foldr1 for producing balanced lists\nfoldBal :: (a -> a -> a) -> [a] -> a\nfoldBal op = foldBal' op (error "foldBal: empty list")\n\nfoldBal' :: (a -> a -> a) -> a -> [a] -> a\nfoldBal' _  x [] = x\nfoldBal' _  _ [y] = y\nfoldBal' op x l  = let (a,b) = splitAt (length l `div` 2) l\n                   in foldBal' op x a `op` foldBal' op x b\n\n\\end{code}\n\n%\n% (c) The University of Glasgow 2006\n%\n\n\\begin{code}\nmodule Unify ( \n\t-- Matching of types: \n\t--\tthe "tc" prefix indicates that matching always\n\t--\trespects newtypes (rather than looking through them)\n\ttcMatchTy, tcMatchTys, tcMatchTyX, \n\truleMatchTyX, tcMatchPreds, \n\n\tMatchEnv(..), matchList, \n\n\ttypesCantMatch,\n\n        -- Side-effect free unification\n        tcUnifyTys, BindFlag(..),\n        niFixTvSubst, niSubstTvSet\n\n   ) where\n\n#include "HsVersions.h"\n\nimport Var\nimport VarEnv\nimport VarSet\nimport Kind\nimport Type\nimport TyCon\nimport TypeRep\nimport Outputable\nimport ErrUtils\nimport Util\nimport Maybes\nimport FastString\n\nimport Control.Monad (guard)\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\t\tMatching\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\nMatching is much trickier than you 
might think.\n\n1. The substitution we generate binds the *template type variables*\n which are given to us explicitly.\n\n2. We want to match in the presence of foralls; \n\te.g \t(forall a. t1) ~ (forall b. t2)\n\n That is what the RnEnv2 is for; it does the alpha-renaming\n that makes it as if a and b were the same variable.\n Initialising the RnEnv2, so that it can generate a fresh\n binder when necessary, entails knowing the free variables of\n both types.\n\n3. We must be careful not to bind a template type variable to a\n locally bound variable. E.g.\n\t(forall a. x) ~ (forall b. b)\n where x is the template type variable. Then we do not want to\n bind x to a\/b! This is a kind of occurs check.\n The necessary locals accumulate in the RnEnv2.\n\n\n\\begin{code}\ndata MatchEnv\n = ME\t{ me_tmpls :: VarSet\t-- Template variables\n \t, me_env :: RnEnv2\t-- Renaming envt for nested foralls\n\t}\t\t\t-- In-scope set includes template variables\n -- Nota Bene: MatchEnv isn't specific to Types. 
It is used\n -- for matching terms and coercions as well as types\n\ntcMatchTy :: TyVarSet\t\t-- Template tyvars\n\t -> Type\t\t-- Template\n\t -> Type\t\t-- Target\n\t -> Maybe TvSubst\t-- One-shot; in principle the template\n\t\t\t\t-- variables could be free in the target\n\ntcMatchTy tmpls ty1 ty2\n = case match menv emptyTvSubstEnv ty1 ty2 of\n\tJust subst_env -> Just (TvSubst in_scope subst_env)\n\tNothing\t -> Nothing\n where\n menv = ME { me_tmpls = tmpls, me_env = mkRnEnv2 in_scope }\n in_scope = mkInScopeSet (tmpls `unionVarSet` tyVarsOfType ty2)\n\t-- We're assuming that all the interesting \n\t-- tyvars in tys1 are in tmpls\n\ntcMatchTys :: TyVarSet\t\t-- Template tyvars\n\t -> [Type]\t\t-- Template\n\t -> [Type]\t\t-- Target\n\t -> Maybe TvSubst\t-- One-shot; in principle the template\n\t\t\t\t-- variables could be free in the target\n\ntcMatchTys tmpls tys1 tys2\n = case match_tys menv emptyTvSubstEnv tys1 tys2 of\n\tJust subst_env -> Just (TvSubst in_scope subst_env)\n\tNothing\t -> Nothing\n where\n menv = ME { me_tmpls = tmpls, me_env = mkRnEnv2 in_scope }\n in_scope = mkInScopeSet (tmpls `unionVarSet` tyVarsOfTypes tys2)\n\t-- We're assuming that all the interesting \n\t-- tyvars in tys1 are in tmpls\n\n-- This is similar, but extends a substitution\ntcMatchTyX :: TyVarSet \t\t-- Template tyvars\n\t -> TvSubst\t\t-- Substitution to extend\n\t -> Type\t\t-- Template\n\t -> Type\t\t-- Target\n\t -> Maybe TvSubst\ntcMatchTyX tmpls (TvSubst in_scope subst_env) ty1 ty2\n = case match menv subst_env ty1 ty2 of\n\tJust subst_env -> Just (TvSubst in_scope subst_env)\n\tNothing\t -> Nothing\n where\n menv = ME {me_tmpls = tmpls, me_env = mkRnEnv2 in_scope}\n\ntcMatchPreds\n\t:: [TyVar]\t\t\t-- Bind these\n\t-> [PredType] -> [PredType]\n \t-> Maybe TvSubstEnv\ntcMatchPreds tmpls ps1 ps2\n = matchList (match menv) emptyTvSubstEnv ps1 ps2\n where\n menv = ME { me_tmpls = mkVarSet tmpls, me_env = mkRnEnv2 in_scope_tyvars }\n in_scope_tyvars = mkInScopeSet 
(tyVarsOfTypes ps1 `unionVarSet` tyVarsOfTypes ps2)\n\n-- This one is called from the expression matcher, which already has a MatchEnv in hand\nruleMatchTyX :: MatchEnv \n\t -> TvSubstEnv\t\t-- Substitution to extend\n\t -> Type\t\t-- Template\n\t -> Type\t\t-- Target\n\t -> Maybe TvSubstEnv\n\nruleMatchTyX menv subst ty1 ty2 = match menv subst ty1 ty2\t-- Rename for export\n\\end{code}\n\nNow the internals of matching\n\n\\begin{code}\nmatch :: MatchEnv\t-- For the most part this is pushed downwards\n      -> TvSubstEnv \t-- Substitution so far:\n\t\t\t-- Domain is subset of template tyvars\n\t\t\t-- Free vars of range is subset of \n\t\t\t--\tin-scope set of the RnEnv2\n      -> Type -> Type\t-- Template and target respectively\n      -> Maybe TvSubstEnv\n-- This matcher works on core types; that is, it ignores PredTypes\n-- Watch out if newtypes become transparent again!\n-- \tthis matcher must respect newtypes\n\nmatch menv subst ty1 ty2 | Just ty1' <- coreView ty1 = match menv subst ty1' ty2\n\t\t\t | Just ty2' <- coreView ty2 = match menv subst ty1 ty2'\n\nmatch menv subst (TyVarTy tv1) ty2\n  | Just ty1' <- lookupVarEnv subst tv1'\t-- tv1' is already bound\n  = if eqTypeX (nukeRnEnvL rn_env) ty1' ty2\n\t-- ty1 has no locally-bound variables, hence nukeRnEnvL\n    then Just subst\n    else Nothing\t-- ty2 doesn't match\n\n  | tv1' `elemVarSet` me_tmpls menv\n  = if any (inRnEnvR rn_env) (varSetElems (tyVarsOfType ty2))\n    then Nothing\t-- Occurs check\n    else do { subst1 <- match_kind menv subst tv1 ty2\n\t\t\t-- Note [Matching kinds]\n\t    ; return (extendVarEnv subst1 tv1' ty2) }\n\n   | otherwise\t-- tv1 is not a template tyvar\n   = case ty2 of\n\tTyVarTy tv2 | tv1' == rnOccR rn_env tv2 -> Just subst\n\t_ -> Nothing\n  where\n    rn_env = me_env menv\n    tv1' = rnOccL rn_env tv1\n\nmatch menv subst (ForAllTy tv1 ty1) (ForAllTy tv2 ty2) \n  = match menv' subst ty1 ty2\n  where\t\t-- Use the magic of rnBndr2 to go under the binders\n    menv' = menv { me_env = rnBndr2 (me_env menv) tv1 tv2 }\n\nmatch 
menv subst (TyConApp tc1 tys1) (TyConApp tc2 tys2) \n | tc1 == tc2 = match_tys menv subst tys1 tys2\nmatch menv subst (FunTy ty1a ty1b) (FunTy ty2a ty2b) \n = do { subst' <- match menv subst ty1a ty2a\n ; match menv subst' ty1b ty2b }\nmatch menv subst (AppTy ty1a ty1b) ty2\n | Just (ty2a, ty2b) <- repSplitAppTy_maybe ty2\n\t-- 'repSplit' used because the tcView stuff is done above\n = do { subst' <- match menv subst ty1a ty2a\n ; match menv subst' ty1b ty2b }\n\nmatch _ _ _ _\n = Nothing\n\n--------------\nmatch_kind :: MatchEnv -> TvSubstEnv -> TyVar -> Type -> Maybe TvSubstEnv\n-- Match the kind of the template tyvar with the kind of Type\n-- Note [Matching kinds]\nmatch_kind _ subst tv ty\n = guard (typeKind ty `isSubKind` tyVarKind tv) >> return subst\n\n-- Note [Matching kinds]\n-- ~~~~~~~~~~~~~~~~~~~~~\n-- For ordinary type variables, we don't want (m a) to match (n b) \n-- if say (a::*) and (b::*->*). This is just a yes\/no issue. \n--\n-- For coercion kinds matters are more complicated. If we have a \n-- coercion template variable co::a~[b], where a,b are presumably also\n-- template type variables, then we must match co's kind against the \n-- kind of the actual argument, so as to give bindings to a,b. \n--\n-- In fact I have no example in mind that *requires* this kind-matching\n-- to instantiate template type variables, but it seems like the right\n-- thing to do. C.f. 
Note [Matching variable types] in Rules.lhs\n\n--------------\nmatch_tys :: MatchEnv -> TvSubstEnv -> [Type] -> [Type] -> Maybe TvSubstEnv\nmatch_tys menv subst tys1 tys2 = matchList (match menv) subst tys1 tys2\n\n--------------\nmatchList :: (env -> a -> b -> Maybe env)\n\t -> env -> [a] -> [b] -> Maybe env\nmatchList _ subst [] [] = Just subst\nmatchList fn subst (a:as) (b:bs) = do { subst' <- fn subst a b\n\t\t\t\t ; matchList fn subst' as bs }\nmatchList _ _ _ _ = Nothing\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\t\tGADTs\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\nNote [Pruning dead case alternatives]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nConsider\tdata T a where\n\t\t T1 :: T Int\n\t\t T2 :: T a\n\n\t\tnewtype X = MkX Int\n\t\tnewtype Y = MkY Char\n\n\t\ttype family F a\n\t\ttype instance F Bool = Int\n\nNow consider\tcase x of { T1 -> e1; T2 -> e2 }\n\nThe question before the house is this: if I know something about the type\nof x, can I prune away the T1 alternative?\n\nSuppose x::T Char. It's impossible to construct a (T Char) using T1, \n\tAnswer = YES we can prune the T1 branch (clearly)\n\nSuppose x::T (F a), where 'a' is in scope. Then 'a' might be instantiated\nto 'Bool', in which case x::T Int, so\n\tANSWER = NO (clearly)\n\nSuppose x::T X. Then *in Haskell* it's impossible to construct a (non-bottom) \nvalue of type (T X) using T1. But *in FC* it's quite possible. The newtype\ngives a coercion\n\tCoX :: X ~ Int\nSo (T CoX) :: T X ~ T Int; hence (T1 `cast` sym (T CoX)) is a non-bottom value\nof type (T X) constructed with T1. Hence\n\tANSWER = NO we can't prune the T1 branch (surprisingly)\n\nFurthermore, this can even happen; see Trac #1251. GHC's newtype-deriving\nmechanism uses a cast, just as above, to move from one dictionary to another,\nin effect giving the programmer access to CoX.\n\nFinally, suppose x::T Y. 
Then *even in FC* we can't construct a\nnon-bottom value of type (T Y) using T1. That's because we can get\nfrom Y to Char, but not to Int.\n\n\nHere's a related question. \tdata Eq a b where EQ :: Eq a a\nConsider\n\tcase x of { EQ -> ... }\n\nSuppose x::Eq Int Char. Is the alternative dead? Clearly yes.\n\nWhat about x::Eq Int a, in a context where we have evidence that a~Char.\nThen again the alternative is dead. \n\n\n\t\t\tSummary\n\nWe are really doing a test for unsatisfiability of the type\nconstraints implied by the match. And that is clearly, in general, a\nhard thing to do. \n\nHowever, since we are simply dropping dead code, a conservative test\nsuffices. There is a continuum of tests, ranging from easy to hard, that\ndrop more and more dead code.\n\nFor now we implement a very simple test: type variables match\nanything, type functions (incl newtypes) match anything, and only\ndistinct data types fail to match. We can elaborate later.\n\n\\begin{code}\ntypesCantMatch :: [(Type,Type)] -> Bool\ntypesCantMatch prs = any (\\(s,t) -> cant_match s t) prs\n where\n cant_match :: Type -> Type -> Bool\n cant_match t1 t2\n\t| Just t1' <- coreView t1 = cant_match t1' t2\n\t| Just t2' <- coreView t2 = cant_match t1 t2'\n\n cant_match (FunTy a1 r1) (FunTy a2 r2)\n\t= cant_match a1 a2 || cant_match r1 r2\n\n cant_match (TyConApp tc1 tys1) (TyConApp tc2 tys2)\n\t| isDistinctTyCon tc1 && isDistinctTyCon tc2\n\t= tc1 \/= tc2 || typesCantMatch (zipEqual \"typesCantMatch\" tys1 tys2)\n\n cant_match (FunTy {}) (TyConApp tc _) = isDistinctTyCon tc\n cant_match (TyConApp tc _) (FunTy {}) = isDistinctTyCon tc\n\t-- tc can't be FunTyCon by invariant\n\n cant_match (AppTy f1 a1) ty2\n\t| Just (f2, a2) <- repSplitAppTy_maybe ty2\n\t= cant_match f1 f2 || cant_match a1 a2\n cant_match ty1 (AppTy f2 a2)\n\t| Just (f1, a1) <- repSplitAppTy_maybe ty1\n\t= cant_match f1 f2 || cant_match a1 a2\n\n cant_match _ _ = False -- Safe!\n\n-- Things we could add;\n--\tforalls\n--\tlook 
through newtypes\n--\ttake account of tyvar bindings (EQ example above)\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Unification\n%* *\n%************************************************************************\n\n\\begin{code}\ntcUnifyTys :: (TyVar -> BindFlag)\n\t -> [Type] -> [Type]\n\t -> Maybe TvSubst\t-- A regular one-shot (idempotent) substitution\n-- The two types may have common type variables, and indeed do so in the\n-- second call to tcUnifyTys in FunDeps.checkClsFD\n--\ntcUnifyTys bind_fn tys1 tys2\n = maybeErrToMaybe $ initUM bind_fn $\n do { subst <- unifyList emptyTvSubstEnv tys1 tys2\n\n\t-- Find the fixed point of the resulting non-idempotent substitution\n ; return (niFixTvSubst subst) }\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n Non-idempotent substitution\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\nNote [Non-idempotent substitution]\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nDuring unification we use a TvSubstEnv that is\n (a) non-idempotent\n (b) loop-free; ie repeatedly applying it yields a fixed point\n\n\\begin{code}\nniFixTvSubst :: TvSubstEnv -> TvSubst\n-- Find the idempotent fixed point of the non-idempotent substitution\n-- ToDo: use laziness instead of iteration?\nniFixTvSubst env = f env\n where\n f e | not_fixpoint = f (mapVarEnv (substTy subst) e)\n | otherwise = subst\n where\n range_tvs = foldVarEnv (unionVarSet . tyVarsOfType) emptyVarSet e\n subst = mkTvSubst (mkInScopeSet range_tvs) e \n not_fixpoint = foldVarSet ((||) . 
in_domain) False range_tvs\n in_domain tv = tv `elemVarEnv` e\n\nniSubstTvSet :: TvSubstEnv -> TyVarSet -> TyVarSet\n-- Apply the non-idempotent substitution to a set of type variables,\n-- remembering that the substitution isn't necessarily idempotent\n-- This is used in the occurs check, before extending the substitution\nniSubstTvSet subst tvs\n = foldVarSet (unionVarSet . get) emptyVarSet tvs\n where\n get tv = case lookupVarEnv subst tv of\n\t Nothing -> unitVarSet tv\n Just ty -> niSubstTvSet subst (tyVarsOfType ty)\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\t\tThe workhorse\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\nunify :: TvSubstEnv\t-- An existing substitution to extend\n -> Type -> Type \t-- Types to be unified, and witness of their equality\n -> UM TvSubstEnv\t\t-- Just the extended substitution, \n\t\t\t\t-- Nothing if unification failed\n-- We do not require the incoming substitution to be idempotent,\n-- nor guarantee that the outgoing one is. 
That's fixed up by\n-- the wrappers.\n\n-- Respects newtypes, PredTypes\n\n-- in unify, any NewTcApps\/Preds should be taken at face value\nunify subst (TyVarTy tv1) ty2 = uVar subst tv1 ty2\nunify subst ty1 (TyVarTy tv2) = uVar subst tv2 ty1\n\nunify subst ty1 ty2 | Just ty1' <- tcView ty1 = unify subst ty1' ty2\nunify subst ty1 ty2 | Just ty2' <- tcView ty2 = unify subst ty1 ty2'\n\nunify subst (TyConApp tyc1 tys1) (TyConApp tyc2 tys2) \n | tyc1 == tyc2 = unify_tys subst tys1 tys2\n\nunify subst (FunTy ty1a ty1b) (FunTy ty2a ty2b) \n = do\t{ subst' <- unify subst ty1a ty2a\n\t; unify subst' ty1b ty2b }\n\n\t-- Applications need a bit of care!\n\t-- They can match FunTy and TyConApp, so use splitAppTy_maybe\n\t-- NB: we've already dealt with type variables and Notes,\n\t-- so if one type is an App the other one jolly well better be too\nunify subst (AppTy ty1a ty1b) ty2\n | Just (ty2a, ty2b) <- repSplitAppTy_maybe ty2\n = do\t{ subst' <- unify subst ty1a ty2a\n ; unify subst' ty1b ty2b }\n\nunify subst ty1 (AppTy ty2a ty2b)\n | Just (ty1a, ty1b) <- repSplitAppTy_maybe ty1\n = do\t{ subst' <- unify subst ty1a ty2a\n ; unify subst' ty1b ty2b }\n\nunify _ ty1 ty2 = failWith (misMatch ty1 ty2)\n\t-- ForAlls??\n\n------------------------------\nunify_tys :: TvSubstEnv -> [Type] -> [Type] -> UM TvSubstEnv\nunify_tys subst xs ys = unifyList subst xs ys\n\nunifyList :: TvSubstEnv -> [Type] -> [Type] -> UM TvSubstEnv\nunifyList subst orig_xs orig_ys\n = go subst orig_xs orig_ys\n where\n go subst [] [] = return subst\n go subst (x:xs) (y:ys) = do { subst' <- unify subst x y\n\t\t\t\t; go subst' xs ys }\n go _ _ _ = failWith (lengthMisMatch orig_xs orig_ys)\n\n---------------------------------\nuVar :: TvSubstEnv\t-- An existing substitution to extend\n -> TyVar -- Type variable to be unified\n -> Type -- with this type\n -> UM TvSubstEnv\n\n-- PRE-CONDITION: in the call (uVar swap r tv1 ty), we know that\n--\tif swap=False\t(tv1~ty)\n--\tif swap=True\t(ty~tv1)\n\nuVar 
subst tv1 ty\n = -- Check to see whether tv1 is refined by the substitution\n case (lookupVarEnv subst tv1) of\n Just ty' -> unify subst ty' ty -- Yes, call back into unify\n Nothing -> uUnrefined subst -- No, continue\n\t\t\t tv1 ty ty\n\nuUnrefined :: TvSubstEnv -- An existing substitution to extend\n -> TyVar -- Type variable to be unified\n -> Type -- with this type\n -> Type -- (version w\/ expanded synonyms)\n -> UM TvSubstEnv\n\n-- We know that tv1 isn't refined\n\nuUnrefined subst tv1 ty2 ty2'\n | Just ty2'' <- tcView ty2'\n = uUnrefined subst tv1 ty2 ty2''\t-- Unwrap synonyms\n\t\t-- This is essential, in case we have\n\t\t--\ttype Foo a = a\n\t\t-- and then unify a ~ Foo a\n\nuUnrefined subst tv1 ty2 (TyVarTy tv2)\n | tv1 == tv2\t\t-- Same type variable\n = return subst\n\n -- Check to see whether tv2 is refined\n | Just ty' <- lookupVarEnv subst tv2\n = uUnrefined subst tv1 ty' ty'\n\n -- So both are unrefined; next, see if the kinds force the direction\n | eqKind k1 k2\t-- Can update either; so check the bind-flags\n = do\t{ b1 <- tvBindFlag tv1\n\t; b2 <- tvBindFlag tv2\n\t; case (b1,b2) of\n\t (BindMe, _) \t -> bind tv1 ty2\n\t (Skolem, Skolem)\t -> failWith (misMatch ty1 ty2)\n\t (Skolem, _)\t\t -> bind tv2 ty1\n\t}\n\n | k1 `isSubKind` k2 = bindTv subst tv2 ty1 -- Must update tv2\n | k2 `isSubKind` k1 = bindTv subst tv1 ty2 -- Must update tv1\n\n | otherwise = failWith (kindMisMatch tv1 ty2)\n where\n ty1 = TyVarTy tv1\n k1 = tyVarKind tv1\n k2 = tyVarKind tv2\n bind tv ty = return $ extendVarEnv subst tv ty\n\nuUnrefined subst tv1 ty2 ty2'\t-- ty2 is not a type variable\n | tv1 `elemVarSet` niSubstTvSet subst (tyVarsOfType ty2')\n = failWith (occursCheck tv1 ty2)\t-- Occurs check\n | not (k2 `isSubKind` k1)\n = failWith (kindMisMatch tv1 ty2)\t-- Kind check\n | otherwise\n = bindTv subst tv1 ty2\t\t-- Bind tyvar to the synonym if possible\n where\n k1 = tyVarKind tv1\n k2 = typeKind ty2'\n\nbindTv :: TvSubstEnv -> TyVar -> Type -> UM 
TvSubstEnv\nbindTv subst tv ty\t-- ty is not a type variable\n = do { b <- tvBindFlag tv\n\t; case b of\n\t Skolem -> failWith (misMatch (TyVarTy tv) ty)\n\t BindMe -> return $ extendVarEnv subst tv ty\n\t}\n\\end{code}\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\t\tBinding decisions\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\ndata BindFlag \n = BindMe\t-- A regular type variable\n\n | Skolem\t-- This type variable is a skolem constant\n\t\t-- Don't bind it; it only matches itself\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\t\tUnification monad\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\nnewtype UM a = UM { unUM :: (TyVar -> BindFlag)\n\t\t -> MaybeErr Message a }\n\ninstance Monad UM where\n return a = UM (\\_tvs -> Succeeded a)\n fail s = UM (\\_tvs -> Failed (text s))\n m >>= k = UM (\\tvs -> case unUM m tvs of\n\t\t\t Failed err -> Failed err\n\t\t\t Succeeded v -> unUM (k v) tvs)\n\ninitUM :: (TyVar -> BindFlag) -> UM a -> MaybeErr Message a\ninitUM badtvs um = unUM um badtvs\n\ntvBindFlag :: TyVar -> UM BindFlag\ntvBindFlag tv = UM (\\tv_fn -> Succeeded (tv_fn tv))\n\nfailWith :: Message -> UM a\nfailWith msg = UM (\\_tv_fn -> Failed msg)\n\nmaybeErrToMaybe :: MaybeErr fail succ -> Maybe succ\nmaybeErrToMaybe (Succeeded a) = Just a\nmaybeErrToMaybe (Failed _) = Nothing\n\\end{code}\n\n\n%************************************************************************\n%*\t\t\t\t\t\t\t\t\t*\n\t\tError reporting\n\tWe go to a lot more trouble to tidy the types\n\tin TcUnify. 
Maybe we'll end up having to do that\n\there too, but I'll leave it for now.\n%*\t\t\t\t\t\t\t\t\t*\n%************************************************************************\n\n\\begin{code}\nmisMatch :: Type -> Type -> SDoc\nmisMatch t1 t2\n = ptext (sLit \"Can't match types\") <+> quotes (ppr t1) <+> \n ptext (sLit \"and\") <+> quotes (ppr t2)\n\nlengthMisMatch :: [Type] -> [Type] -> SDoc\nlengthMisMatch tys1 tys2\n = sep [ptext (sLit \"Can't match unequal length lists\"), \n\t nest 2 (ppr tys1), nest 2 (ppr tys2) ]\n\nkindMisMatch :: TyVar -> Type -> SDoc\nkindMisMatch tv1 t2\n = vcat [ptext (sLit \"Can't match kinds\") <+> quotes (ppr (tyVarKind tv1)) <+> \n\t ptext (sLit \"and\") <+> quotes (ppr (typeKind t2)),\n\t ptext (sLit \"when matching\") <+> quotes (ppr tv1) <+> \n\t\tptext (sLit \"with\") <+> quotes (ppr t2)]\n\noccursCheck :: TyVar -> Type -> SDoc\noccursCheck tv ty\n = hang (ptext (sLit \"Can't construct the infinite type\"))\n 2 (ppr tv <+> equals <+> ppr ty)\n\\end{code}\n","avg_line_length":33.1570512821,"max_line_length":87,"alphanum_fraction":0.6223779604}